Module overview
Module Contents: This module discusses continuous optimization problems in which the objective function, the constraint functions, or both are nonlinear. It explains optimality conditions, that is, the conditions an optimal solution must satisfy. It introduces the most popular numerical methods for unconstrained optimization, such as line search methods, Newton's method, quasi-Newton methods and conjugate gradient methods, and for constrained optimization, such as the penalty function method and sequential quadratic programming methods. Further, it explores the theoretical background behind these powerful optimization methods by examining how each method was motivated, developed, implemented and applied.
Characteristics: This module considers nonlinear continuous optimization problems. This differentiates it from linear programming, where the objective and constraint functions are affine. Continuity means that the decision variables may take continuous values, rather than the discrete values used in integer programming. The module does not discuss mathematical modelling of practical problems; instead it focuses on numerical methods for solving standard optimization models of practical problems.
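As a taste of the methods covered, here is a minimal sketch of Newton's method for one-dimensional unconstrained minimization. This is an illustrative example, not the module's own material: the function f(x) = exp(x) - 2x and the helper name `newton_minimize` are assumptions chosen for demonstration.

```python
import math

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton's method for 1-D unconstrained minimization:
    iterate x <- x - f'(x)/f''(x) until the gradient is small.
    (Illustrative sketch; assumes f'' > 0 near the minimizer.)"""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        x -= g / hess(x)
    return x

# Example: minimize f(x) = exp(x) - 2x, so f'(x) = exp(x) - 2
# and f''(x) = exp(x). The unique minimizer is x* = ln 2.
x_star = newton_minimize(lambda x: math.exp(x) - 2,
                         lambda x: math.exp(x),
                         x0=1.0)
```

Because it uses second-derivative information, Newton's method converges quadratically near a minimizer with positive curvature, which is one reason it features prominently in the syllabus below.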
Linked modules
Pre-requisite: MATH2039
Aims and Objectives
Learning Outcomes
Having successfully completed this module you will be able to:
- Apply the resulting algorithms properly to solve practical optimization problems
- Be sufficiently familiar with a range of powerful numerical methods for solving nonlinear optimization problems
- Understand optimality conditions for both unconstrained and constrained optimization problems, and use them to identify optimal solutions of simple academic examples
- Understand the theoretical background behind each of the methods, including motivation, development, restrictions, advantages and disadvantages, and implementation
Syllabus
Basics in optimization, Convexity
Unconstrained optimization
Line search methods including Golden Section method and Fibonacci method
Nelder-Mead simplex method
Newton's method and quasi-Newton methods
Conjugate gradient methods
Optimality conditions for constrained minimization, duality
Lagrange duality
Penalty function method
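The Golden Section method named above can be sketched in a few lines. This is a hedged illustration, not the module's implementation: the function name `golden_section` and the test function (x - 2)^2 on [0, 5] are assumptions for demonstration.

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden-section search.
    Each iteration shrinks the bracket by the factor 1/phi ~ 0.618,
    reusing one interior evaluation point from the previous step."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi
    c = b - invphi * (b - a)         # left interior point
    d = a + invphi * (b - a)         # right interior point
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c              # minimizer lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d              # minimizer lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

# Example: minimize (x - 2)^2 on [0, 5]; the minimizer is x = 2.
x_star = golden_section(lambda x: (x - 2) ** 2, 0.0, 5.0)
```

The golden ratio is what allows one of the two interior points to be reused at every iteration, so the bracket shrinks geometrically at the cost of a single function evaluation per step.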
Learning and Teaching
Teaching and learning methods
Lectures, coursework, private study
Type | Hours
---|---
Teaching | 42
Independent Study | 108
Total study time | 150
Resources & Reading list
Textbooks
PERESSINI, A.L., SULLIVAN, F.E. and UHL Jr., J.J. The Mathematics of Nonlinear Programming.
FLETCHER, R. Practical Methods of Optimization. Wiley.
NOCEDAL, J. and WRIGHT, S.J. Numerical Optimization. Springer-Verlag.
DENNIS Jr., J.E. and SCHNABEL, R.B. Numerical Methods for Unconstrained Minimization and Nonlinear Equations. Prentice-Hall.
WALSH, G.R. Methods of Optimization. Wiley.
Assessment
Summative
This is how we’ll formally assess what you have learned in this module.
Method | Percentage contribution
---|---
Coursework | 30%
Written assessment | 40%
Coursework | 30%
Referral
This is how we’ll assess you if you don’t meet the criteria to pass this module.
Method | Percentage contribution
---|---
Written assessment | 100%
Repeat Information
Repeat type: Internal & External