Statistical inference involves using data from a sample to draw conclusions about a wider population. Given a partly specified statistical model, in which at least one parameter is unknown, and some observations for which the model is valid, it is possible to draw inferences about the unknown parameters and hence about the population from which the sample is drawn. As such, inference
underpins all aspects of statistics. However, inference can take different forms. It may be adequate to provide a point estimate of a parameter, i.e. a single number. More usually, an interval is required, giving a measure of precision. It may also be necessary to test a pre-specified hypothesis about the parameter(s). These forms of inference can all be considered as special cases of the use of a decision function.
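As a concrete illustration of these three forms of inference, the sketch below (not from the original text; the data and parameter values are hypothetical) computes a point estimate, a 95% interval estimate, and a p-value for a pre-specified hypothesis about the mean of a normal population, using large-sample normal approximations.

```python
import math
import random

random.seed(0)
# Hypothetical sample: 100 draws from a normal population whose mean is unknown
data = [random.gauss(5.0, 2.0) for _ in range(100)]

n = len(data)
mean = sum(data) / n                                   # point estimate of the mean
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
se = s / math.sqrt(n)                                  # standard error of the mean

# Interval estimate: 95% confidence interval (normal approximation, z = 1.96)
ci = (mean - 1.96 * se, mean + 1.96 * se)

# Hypothesis test: two-sided test of H0: mu = 5 (large-sample z-test)
z = (mean - 5.0) / se
p_value = math.erfc(abs(z) / math.sqrt(2))             # two-sided normal p-value

print(f"point estimate: {mean:.3f}")
print(f"95% interval:   ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"p-value:        {p_value:.3f}")
```

Each of these outputs answers a different question about the same unknown parameter: a single best guess, a measure of precision around it, and an assessment of a pre-specified claim.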
There are a number of different philosophies about how these inferences should be drawn, ranging from that which says the sample contains all the information available about a parameter (likelihood), through that which says account should be taken of what would happen in repeated sampling (frequentist), to that which allows the sample to modify prior beliefs about a parameter’s value (Bayesian).
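The contrast between these philosophies can be made concrete with a simple binomial example (a hypothetical data set, not from the original text): the likelihood approach summarises the data through the maximum likelihood estimate, the frequentist approach attaches an interval justified by repeated sampling, and the Bayesian approach updates a prior distribution to a posterior.

```python
import math

# Hypothetical data: 7 successes observed in 20 Bernoulli trials
n, k = 20, 7

# Likelihood: the sample's information is summarised by the MLE of p
p_hat = k / n

# Frequentist: a 95% Wald interval, justified by repeated-sampling behaviour
se = math.sqrt(p_hat * (1 - p_hat) / n)
wald = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: a uniform Beta(1, 1) prior on p, updated by the data, gives a
# Beta(1 + k, 1 + n - k) posterior; its mean is one natural Bayes estimate
a, b = 1 + k, 1 + n - k
posterior_mean = a / (a + b)

print(f"MLE:            {p_hat:.3f}")
print(f"95% Wald CI:    ({wald[0]:.3f}, {wald[1]:.3f})")
print(f"posterior mean: {posterior_mean:.3f}")
```

Note how the Bayes estimate is pulled slightly towards the prior mean of 0.5 relative to the MLE, a direct consequence of allowing the sample to modify prior beliefs.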
This Module aims to explore these approaches to parametric statistical inference, particularly through application of the methods to numerous examples.