When planning experiments, it is essential that the data collected are as relevant and informative as possible. The statistical principles for the design of experiments include the choice of optimal or good treatment sets and appropriate replication of them, randomization to ensure unbiasedness, and the use of blocking and other methods for variance reduction.
Pre-requisites: MATH6153 OR STAT6083 OR MATH3014 OR (MATH2011 and MATH2010)
Aims and Objectives
Having successfully completed this module you will be able to:
- appreciate the advantages and disadvantages of a design for a particular experiment
- understand the potential practical problems in implementing a design
- construct optimal or good designs for a range of practical experiments
- describe how the analysis of the data from the experiment should be carried out
Emphasis throughout will be on the statistical principles underlying the methods and how they can be applied to and adapted for practical experiments. The following methods will be discussed and practised.
1) Basic ideas: objectives leading to choice of treatments; randomization to ensure validity of analysis; blocking to separate sources of variation in order to ensure efficiency of analysis; ANOVA methodology.
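As an illustrative sketch of the randomization and blocking ideas above (not part of the module materials; treatment and block labels are assumptions), a randomized complete block design can be generated by independently shuffling the full treatment set within each block:

```python
import random

# Sketch: randomized complete block design (RCBD).
# Treatment labels and block names are illustrative assumptions.
treatments = ["A", "B", "C", "D"]
blocks = ["block1", "block2", "block3"]

def randomize_rcbd(treatments, blocks, seed=42):
    """Randomly order the full treatment set within each block."""
    rng = random.Random(seed)
    plan = {}
    for block in blocks:
        order = treatments[:]
        rng.shuffle(order)  # independent randomization per block
        plan[block] = order
    return plan

plan = randomize_rcbd(treatments, blocks)
for block, order in plan.items():
    print(block, order)
```

Each block receives every treatment exactly once, in a random order, so block-to-block variation is separated from treatment comparisons.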
2) Choice of treatments: replication for unstructured treatments; optimal design for quantitative treatments; the factorial treatment structure and its advantages; incomplete factorial structures,
including regular fractional factorials; screening experiments; response surface treatment designs for multiple quantitative factors; optimal design algorithms for choosing multifactor treatment sets.
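To make the factorial ideas in item 2 concrete, here is a hedged sketch (not module code) of a two-level full factorial in coded units and a regular half fraction; the defining relation used, with the last factor set to the product of the others, is an assumption for illustration:

```python
from itertools import product

# Sketch: 2^k full factorial in coded levels (-1, +1), and a regular
# 2^(k-1) half fraction with an assumed generator: last factor = product
# of the other factors.
def full_factorial(k):
    """All 2^k runs of a two-level factorial in coded units."""
    return [list(run) for run in product([-1, 1], repeat=k)]

def half_fraction(k):
    """Runs of the half fraction defined by the generator above."""
    runs = []
    for run in product([-1, 1], repeat=k - 1):
        last = 1
        for level in run:
            last *= level  # last factor aliased with the highest-order interaction
        runs.append(list(run) + [last])
    return runs

print(len(full_factorial(3)))  # 8 runs
print(len(half_fraction(3)))   # 4 runs
```

The fraction halves the run count at the cost of aliasing: effects are estimable only up to the alias structure implied by the defining relation.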
3) Randomization: practical constraints on randomization.
4) Blocking: incomplete block designs for unstructured treatments, including balanced incomplete block designs; confounding for factorial designs; optimal blocked factorial and response surface
designs; split-plot and other multi-stratum designs.
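As a small sketch of the balance property behind balanced incomplete block designs (an illustration, not module code; the example design is the standard (v, k, λ) = (7, 3, 1) design built from blocks of size three on seven treatments):

```python
from itertools import combinations

# Sketch: check whether a candidate incomplete block design is balanced,
# i.e. every pair of treatments occurs together in the same number of blocks.
def is_bibd(blocks, v):
    """True if all blocks share one size and all pairs meet equally often."""
    k = len(blocks[0])
    if any(len(b) != k for b in blocks):
        return False
    pair_counts = {pair: 0 for pair in combinations(range(v), 2)}
    for block in blocks:
        for pair in combinations(sorted(block), 2):
            pair_counts[pair] += 1
    return len(set(pair_counts.values())) == 1

# Seven blocks of size 3 on treatments 0..6; every pair meets exactly once.
design = [(0, 1, 3), (1, 2, 4), (2, 3, 5), (3, 4, 6),
          (4, 5, 0), (5, 6, 1), (6, 0, 2)]
print(is_bibd(design, 7))  # True
```

Balance ensures all pairwise treatment comparisons are estimated with equal precision even though no block contains every treatment.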
5) Special topics: optimal designs for nonlinear models.
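The optimality ideas running through items 2, 4 and 5 can be illustrated with the D-criterion, det(X'X), for a simple linear model; the following sketch (an illustration under assumed design points, not module code) compares two three-run designs for a quadratic in one factor on [-1, 1]:

```python
# Sketch: comparing candidate designs by the D-criterion det(X'X).
# Model and design points are illustrative assumptions.

def model_matrix(points):
    """Rows [1, x, x^2] for a second-order polynomial in one factor."""
    return [[1.0, x, x * x] for x in points]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def d_criterion(points):
    """det(X'X); larger values mean smaller generalized variance."""
    X = model_matrix(points)
    xtx = [[sum(row[a] * row[b] for row in X) for b in range(3)]
           for a in range(3)]
    return det3(xtx)

# The endpoints-plus-centre design beats an equally spaced interior design.
print(d_criterion([-1.0, 0.0, 1.0]) > d_criterion([-0.5, 0.0, 0.5]))  # True
```

Optimal design algorithms (e.g. exchange algorithms) search a candidate set for the points maximizing such a criterion; for nonlinear models the information matrix, and hence the optimal design, depends on the unknown parameter values.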
Learning and Teaching
Teaching and learning methods
Lectures, computer practical sessions and self-directed computer work
Total study time: 150 hours
Resources & Reading list
Website on Blackboard.
Box, G.E.P. and Draper, N.R. (2007). Response Surfaces, Mixtures and Ridge Analyses. New York: Wiley.
Dean, A.M. and Voss, D.T. (1999). Design and Analysis of Experiments. New York: Springer-Verlag.
Montgomery, D.C. (2009). Design and Analysis of Experiments. New York: Wiley.
Mead, R., Gilmour, S.G. and Mead, A. (2012). Statistical Principles for the Design of Experiments. Cambridge: Cambridge University Press.
Myers, R.H., Montgomery, D.C. and Anderson-Cook, C.M. (2009). Response Surface Methodology: Process and Product Optimization Using Designed Experiments. New York: Wiley.
John, J.A. and Williams, E.R. (1995). Cyclic and computer generated designs. London: Chapman and Hall.
Wu, C.F.J. and Hamada, M. (2009). Experiments: Planning, Analysis, and Parameter Design Optimization. New York: Wiley.
Box, G.E.P., Hunter, J.S. and Hunter, W.G. (2005). Statistics for Experimenters. New York: Wiley.
Atkinson, A.C., Donev, A.N. and Tobias, R.D. (2007). Optimum Experimental Designs, with SAS. Oxford: Oxford University Press.
This is how we’ll formally assess what you have learned in this module.
Weekly quizzes and puzzles: 10%
This is how we’ll assess you if you don’t meet the criteria to pass this module.
Repeat type: External