Statistical inference involves using data from a sample to draw conclusions about a wider population. Given a partly specified statistical model, in which at least one parameter is unknown, and some observations for which the model is valid, it is possible to draw inferences about the unknown parameters and hence about the population from which the sample is drawn. As such, inference
underpins all aspects of statistics. However, inference can take different forms. It may be adequate to provide a point estimate of a parameter, i.e. a single number. More usually, an interval is required, giving a measure of precision. It may also be necessary to test a pre-specified hypothesis about the parameter(s). These forms of inference can all be considered as special cases of the use of a decision function.
There are a number of different philosophies about how these inferences should be drawn, ranging from that which says the sample contains all the information available about a parameter (likelihood), through that which says account should be taken of what would happen in repeated sampling (frequentist), to that which allows the sample to modify prior beliefs about a parameter’s value (Bayesian).
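To make the forms of inference above concrete, the sketch below (illustrative data, Python standard library only) computes a point estimate of a population mean together with an approximate 95% confidence interval, the interval form of the estimate.

```python
import statistics

# Hypothetical sample of n = 10 observations (illustrative data only)
sample = [4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0, 5.2, 4.6]
n = len(sample)

# Point estimate of the population mean: the sample mean
xbar = statistics.mean(sample)

# Interval estimate: an approximate 95% confidence interval,
# using 1.96 as the normal quantile (large-sample approximation)
se = statistics.stdev(sample) / n ** 0.5
ci = (xbar - 1.96 * se, xbar + 1.96 * se)

print(round(xbar, 3), tuple(round(v, 3) for v in ci))
```

The point estimate alone gives no measure of precision; the width of the interval supplies it.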
This module aims to explore these approaches to parametric statistical inference, particularly through applying the methods to numerous examples.
Aims and Objectives
Having successfully completed this module you will be able to:
- Demonstrate knowledge and understanding of how to: test a hypothesis concerning the distribution of a random variable; evaluate different estimators using their theoretical properties; update a prior distribution to obtain the posterior distribution in a Bayesian analysis, and apply this knowledge to simple conjugate analyses
- Derive suitable point estimators of the parameters of the distribution of a random variable and give a measure of their precision
- Appreciate the differences between inference paradigms and how they can be embedded in decision theory
- Use computational methods to obtain and evaluate estimators
Syllabus
Sufficiency and the factorisation theorem
Maximum Likelihood Estimation (MLE)
Cramér-Rao lower bound
Minimum Variance Unbiased Estimators (MVUE)
Uniformly most powerful test
Likelihood ratio test
Numerical solutions to Maximum Likelihood Estimation – Newton-Raphson and Fisher Scoring
Re-sampling methods – Jackknife and Bootstrap
Prior and posterior distributions
Uniform and conjugate prior distributions
Loss functions and risk functions
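As a small illustration of the numerical-MLE topic, the sketch below applies Newton-Raphson to the score function of an exponential rate parameter. The data and starting value are made-up assumptions; for this model the MLE also has the closed form n / Σx, so the iteration can be checked against it.

```python
# Newton-Raphson iteration for the MLE of an exponential rate parameter.
# Illustrative data (not from the module); lambda = 1.0 is an arbitrary start.
data = [0.5, 1.2, 0.3, 2.1, 0.9, 1.5, 0.7, 1.1]
n, s = len(data), sum(data)

def score(lam):        # U(lambda) = n/lambda - sum(x_i)
    return n / lam - s

def score_deriv(lam):  # U'(lambda) = -n/lambda**2
    return -n / lam ** 2

lam = 1.0  # starting value
for _ in range(20):
    step = score(lam) / score_deriv(lam)
    lam -= step
    if abs(step) < 1e-10:
        break

# The closed-form MLE is n / sum(x_i); the iteration should agree with it
print(lam, n / s)
```

For this model the observed and expected information coincide, so Fisher scoring gives exactly the same updates as Newton-Raphson; the two methods differ only when they do not.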
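The bootstrap topic can be sketched in a few lines: resample the data with replacement many times, recompute the statistic each time, and use the spread of the replicates as a standard-error estimate. The sample, seed, and number of replicates below are illustrative choices.

```python
import random
import statistics

random.seed(1)  # fix the seed so the resampling is reproducible

# Hypothetical sample; the bootstrap estimates the standard error of a
# statistic (here the median) without any distributional assumptions.
sample = [3.1, 4.7, 2.8, 5.5, 3.9, 4.2, 6.1, 3.4, 4.8, 5.0]
B = 2000  # number of bootstrap replicates

boot_medians = []
for _ in range(B):
    resample = random.choices(sample, k=len(sample))  # with replacement
    boot_medians.append(statistics.median(resample))

boot_se = statistics.stdev(boot_medians)  # bootstrap standard error
print(round(boot_se, 3))
```

The jackknife proceeds similarly but recomputes the statistic on the n leave-one-out subsamples instead of random resamples.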
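The conjugate-prior topic admits a one-line update rule. In the standard Beta-Binomial conjugate analysis, a Beta(a, b) prior on a success probability p combined with x successes in n trials yields a Beta(a + x, b + n - x) posterior; the numbers below are made-up for illustration.

```python
# Conjugate Bayesian updating: Beta prior with a Binomial likelihood.
a, b = 2, 2   # illustrative prior (mildly favours p near 0.5)
x, n = 7, 10  # observed: 7 successes in 10 trials (made-up data)

# Posterior is Beta(a + x, b + n - x) by conjugacy
post_a, post_b = a + x, b + n - x
post_mean = post_a / (post_a + post_b)

print(post_a, post_b, round(post_mean, 3))
```

The posterior mean, (a + x) / (a + b + n), sits between the prior mean and the sample proportion, showing how the data modify prior beliefs; it is also the Bayes estimator under squared-error loss, which connects the topic to loss and risk functions.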
Learning and Teaching
Teaching and learning methods
Lectures, problem classes, coursework, exercises, private study
Total study time: 150 hours
Resources & Reading list
Lee PM (2004). Bayesian Statistics: An Introduction. Arnold.
Mukhopadhyay N (2006). Introductory Statistical Inference. Chapman & Hall/CRC.
Young GA & Smith RL (2005). Essentials of Statistical Inference. Cambridge University Press.
Garthwaite PH, Jolliffe IT & Jones B (2002). Statistical Inference. Oxford Science Publications.
Summative assessment: 50% written assessment, 50% coursework
Referral arrangements: Written assessment
Formative feedback (as you are learning; not a formal test or exam): assignments and problem sheets
Repeat type: Internal & External