Module Contents: This module discusses continuous optimization problems in which the objective function, the constraint functions, or both are nonlinear. It explains optimality conditions, that is, the conditions an optimal solution must satisfy. It introduces the most popular numerical methods, such as line search methods, Newton's method, quasi-Newton methods and conjugate gradient methods for solving unconstrained optimization problems, and the penalty function method and sequential quadratic programming methods for solving constrained optimization problems. Further, it explores the theoretical background behind these powerful optimization methods by examining how each method was motivated, developed, implemented and applied.

Characteristics: This module considers nonlinear continuous optimization problems. This differentiates it from linear programming, where the objective and constraint functions are affine. Continuity means that the decision variables may take continuous values, rather than the discrete values of integer programming. The module does not discuss the mathematical modelling of practical problems; instead it focuses on numerical methods for solving standard optimization models of practical problems.
Aims and Objectives
To introduce students to numerical methods for solving smooth nonlinear optimization problems and to explore the theoretical background behind these methods.
Having successfully completed this module you will be able to:
- Demonstrate familiarity with a range of powerful numerical methods for solving nonlinear optimization problems
- Properly apply the resulting algorithms to solve practical optimization problems
- Understand the theoretical background behind each of the methods including motivation, development, restrictions, advantages and disadvantages, and implementation
- Understand optimality conditions for both unconstrained and constrained optimization problems and use them to identify optimal solutions of simple academic examples
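As a hypothetical illustration of the last outcome (the function and point below are an assumed example, not taken from the module materials), the first- and second-order optimality conditions for an unconstrained problem can be checked in a few lines:

```python
# Hypothetical example: verify optimality conditions for
# f(x, y) = x**2 + 2*y**2 - 2*x - 8*y.

def grad(x, y):
    """Gradient of f: (df/dx, df/dy)."""
    return (2 * x - 2, 4 * y - 8)

# First-order (necessary) condition: the gradient vanishes at the candidate.
x_star, y_star = 1.0, 2.0
assert grad(x_star, y_star) == (0.0, 0.0)

# Second-order (sufficient) condition: the Hessian [[2, 0], [0, 4]] is
# positive definite (leading principal minors 2 > 0 and 2*4 - 0**2 = 8 > 0),
# so (1, 2) is a strict local minimizer; since f is convex, it is global.
print("minimizer:", (x_star, y_star))
```

For simple academic examples like this one, the same two checks (stationarity plus a definiteness test on the Hessian) are exactly what is carried out by hand in an exam setting.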
- Basics in optimization; convexity
- Unconstrained optimization
- Line search methods, including the Golden Section method and the Fibonacci method
- Nelder-Mead simplex method
- Newton's method and quasi-Newton methods
- Conjugate gradient methods
- Optimality conditions for constrained minimization; duality and Lagrange duality
- Penalty function method
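As a sketch of one of the topics above, a minimal golden-section line search (a generic implementation written for illustration, not the module's own code) minimizes a unimodal function on an interval by shrinking the bracket by the golden ratio at each step:

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, approximately 0.618
    c = b - inv_phi * (b - a)         # interior points with c < d
    d = a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                   # minimum lies in [a, d]
            b, d, fd = d, c, fc       # old c becomes the new d
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                         # minimum lies in [c, b]
            a, c, fc = c, d, fd       # old d becomes the new c
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

x = golden_section(lambda t: (t - 2.0) ** 2, 0.0, 5.0)
print(x)  # close to 2.0
```

The golden-ratio spacing lets each iteration reuse one of the two interior function values, so only one new evaluation is needed per step, which is what distinguishes it from naive trisection.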
Learning and Teaching
Teaching and learning methods
Lectures, coursework, private study
Total study time: 150 hours
Resources & Reading list
PERESSINI, A.L., SULLIVAN, F.E. and UHL Jr., J.J. The Mathematics of Nonlinear Programming.
DENNIS Jr., J.E. and SCHNABEL, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations.
FLETCHER, R. Practical Methods of Optimization.
NOCEDAL, J. and WRIGHT, S.J. Numerical Optimization.
WALSH, G.R. Methods of Optimization.
Exam (2 hours): 80%
To study this module, you will need to have studied the following module(s):
Costs associated with this module
Students are responsible for meeting the cost of essential textbooks, and of producing such essays, assignments, laboratory reports and dissertations as are required to fulfil the academic requirements for each programme of study.
In addition to this, students registered for this module typically also have to pay for:
Books and stationery equipment
Course texts are provided by the library and there are no additional compulsory costs associated with the module.
Please also ensure you read the section on additional costs in the University’s Fees, Charges and Expenses Regulations in the University Calendar available at www.calendar.soton.ac.uk.