1 - Introduction
What is statistical inference?
In statistical inference, experimental or observational data are modelled as the observed values of random variables, to provide a framework from which inductive conclusions may be drawn about the mechanism giving rise to the data.
We wish to analyse observations x = (x1, …, xn) by:
1. Regarding x as the observed value of a random variable X = (X1, …, Xn) having an (unknown) probability distribution, conveniently specified by a probability density, or probability mass function, f(x).
2. Restricting the unknown density to a suitable family or set F. In parametric statistical inference, f(x) is of known analytic form, but involves a finite number of real unknown parameters θ = (θ1, …, θd). We specify the region Θ ⊆ ℝd of possible values of θ, the parameter space. To denote the dependency of f(x) on θ, we write f(x; θ) and refer to this as the model function. Alternatively, the data could be modelled non-parametrically, a non-parametric model simply being one which does not admit a parametric representation. We will be concerned almost entirely in this book with parametric statistical inference. A simple illustration of this set-up is sketched below.
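The following is a minimal sketch of the parametric set-up, not taken from the text: it assumes a hypothetical normal model in which θ = (μ, σ) and the model function f(x; θ) is the joint N(μ, σ²) density of independent observations.

```python
import numpy as np
from scipy.stats import norm

def model_function(x, theta):
    """f(x; theta): joint density of independent observations x = (x1, ..., xn)
    under an assumed N(mu, sigma^2) model, with theta = (mu, sigma), sigma > 0."""
    mu, sigma = theta
    return np.prod(norm.pdf(x, loc=mu, scale=sigma))

# Observed data x, regarded as a realisation of X = (X1, ..., Xn).
x = np.array([1.2, 0.7, 1.9, 1.4, 0.8])

# Evaluate f(x; theta) at two points of the parameter space
# Theta = {(mu, sigma) : mu real, sigma > 0}.
print(model_function(x, (1.0, 0.5)))
print(model_function(x, (0.0, 2.0)))
```

Here the normal family, the particular data values and the evaluation points are illustrative assumptions only; any parametric family with a known analytic form would serve equally well.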
The objective that we then assume is that of assessing, on the basis of the observed data x, some aspect of θ, which for the purpose of the discussion in this paragraph we take to be the value of a particular component θi, say. In that regard, we identify three main types of inference: point estimation, confidence set estimation and hypothesis testing.
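As a hedged illustration, not drawn from the text, the three types of inference can be sketched for a hypothetical normal-mean problem, taking the component of interest θi to be the mean μ: a point estimate proposes a single value for μ, a confidence set gives a range of plausible values, and a hypothesis test assesses a specific conjecture such as μ = 0.

```python
import numpy as np
from scipy import stats

# Hypothetical observed data (illustrative values only).
x = np.array([1.2, 0.7, 1.9, 1.4, 0.8])
n = x.size

# Point estimation: a single value proposed for the unknown mean mu.
mu_hat = x.mean()

# Confidence set estimation: a 95% t-interval for mu.
se = x.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mu_hat - t_crit * se, mu_hat + t_crit * se)

# Hypothesis testing: test H0: mu = 0 against a two-sided alternative.
t_stat, p_value = stats.ttest_1samp(x, popmean=0.0)

print("point estimate:", mu_hat)
print("95% confidence interval:", ci)
print("test statistic and p-value:", t_stat, p_value)
```

The normal model, the 95% level and the null value μ = 0 are assumptions made purely for this illustration.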