The University of Southampton

MATH3016 Optimization

Module Overview

Module Contents: This module discusses continuous optimization problems in which the objective function, the constraint functions, or both are nonlinear. It explains optimality conditions, that is, the conditions an optimal solution must satisfy. It introduces the most popular numerical methods: line search methods, Newton's method, quasi-Newton methods and conjugate gradient methods for solving unconstrained optimization problems, and the penalty function method and sequential quadratic programming methods for solving constrained optimization problems. Further, it explores the theoretical background behind these powerful optimization methods by looking into how each method was motivated, developed, implemented and applied.

Characteristics: This module considers nonlinear continuous optimization problems. This differentiates it from linear programming, where the objective and constraint functions are affine. Continuity means that the decision variables may take continuous values, rather than the discrete values of integer programming. The module does not discuss mathematical modelling of practical problems; instead it focuses on numerical methods for solving standard optimization models of practical problems.
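As a flavour of the unconstrained methods named above, here is a minimal sketch of Newton's method in one dimension, iterating x ← x − f′(x)/f″(x) until the gradient is small. The test function f(x) = x² + eˣ and all names are illustrative, not taken from the module materials.

```python
import math

def newton_minimize(df, d2f, x0, tol=1e-10, max_iter=50):
    """Newton's method for 1-D unconstrained minimization:
    repeatedly step x <- x - f'(x)/f''(x) until |f'(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:
            break
        x -= g / d2f(x)
    return x

# Example: f(x) = x**2 + exp(x), so f'(x) = 2x + e^x, f''(x) = 2 + e^x.
# f is strictly convex, so the stationary point is the global minimizer.
x_star = newton_minimize(lambda x: 2 * x + math.exp(x),
                         lambda x: 2 + math.exp(x), x0=0.0)
```

Because the Hessian (here f″) is positive everywhere, each Newton step is a descent direction and the iteration converges quadratically near the solution.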

Aims and Objectives

Learning Outcomes

Having successfully completed this module you will be able to:

  • Demonstrate familiarity with a range of powerful numerical methods for solving nonlinear optimization problems
  • Apply the resulting algorithms correctly to practical optimization problems
  • Understand the theoretical background behind each method, including its motivation, development, restrictions, advantages and disadvantages, and implementation
  • Understand optimality conditions for both unconstrained and constrained optimization problems and use them to identify optimal solutions of simple academic examples
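The last outcome, using optimality conditions to identify optimal solutions, can be illustrated on a simple academic example (chosen here for illustration; it is not from the module materials). For f(x, y) = x² + xy + y² − x, the first-order condition ∇f = 0 gives the candidate point, and positive definiteness of the Hessian confirms it is a minimizer:

```python
def grad(x, y):
    # f(x, y) = x**2 + x*y + y**2 - x
    # grad f = (2x + y - 1, x + 2y)
    return (2 * x + y - 1, x + 2 * y)

# First-order condition: solve grad f = 0.
# x + 2y = 0 gives x = -2y; then -4y + y - 1 = 0... wait, substitute:
# 2(-2y) + y - 1 = 0  =>  -3y = 1  =>  y = -1/3, x = 2/3.
x_star, y_star = 2 / 3, -1 / 3
gx, gy = grad(x_star, y_star)

# Second-order condition: the Hessian is the constant matrix
# [[2, 1], [1, 2]]; its leading principal minors are 2 > 0 and
# 2*2 - 1*1 = 3 > 0, so it is positive definite and the stationary
# point is a strict (here global) minimizer.
hess_det = 2 * 2 - 1 * 1
```

For constrained problems the analogous check uses the Karush-Kuhn-Tucker conditions with Lagrange multipliers, which the module covers under duality.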


  • Basics in optimization; convexity
  • Unconstrained optimization
  • Line search methods, including the Golden Section method and the Fibonacci method
  • Nelder-Mead simplex method
  • Newton's methods and quasi-Newton methods
  • Conjugate gradient methods
  • Optimality conditions for constrained minimization; duality and Lagrange duality
  • Penalty function method
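Of the line search methods listed, golden-section search is the simplest to sketch: it shrinks a bracket around the minimizer of a unimodal function, reusing one interior point per iteration so each step costs only a single function evaluation. This is a generic textbook sketch, not code from the module.

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search for a unimodal f on [a, b].
    The bracket shrinks by the factor 1/phi each iteration."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:
            # Minimizer lies in [a, d]; old c becomes the new d.
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:
            # Minimizer lies in [c, b]; old d becomes the new c.
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

# Example: minimize f(x) = (x - 2)**2 on [0, 5].
x_star = golden_section(lambda x: (x - 2) ** 2, 0.0, 5.0)
```

The Fibonacci method on the syllabus is closely related; it replaces the fixed ratio 1/phi with ratios of consecutive Fibonacci numbers, which is optimal for a fixed budget of function evaluations.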

Learning and Teaching

Teaching and learning methods

Lectures, coursework, private study

Independent Study: 108 hours
Total study time: 150 hours

Resources & Reading list

Dennis, J.E. Jr. and Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations.

Fletcher, R. Practical Methods of Optimization.

Walsh, G.R. Methods of Optimization.

Nocedal, J. and Wright, S.J. Numerical Optimization.

Peressini, A.L., Sullivan, F.E. and Uhl, J.J. Jr. The Mathematics of Nonlinear Programming.



Method | Percentage contribution
Coursework | 30%
Coursework | 30%
Written assessment | 40%


Method | Percentage contribution
Written assessment | 100%

Repeat Information

Repeat type: Internal & External

Linked modules

Pre-requisite: MATH2039


Costs associated with this module

Students are responsible for meeting the cost of essential textbooks, and of producing such essays, assignments, laboratory reports and dissertations as are required to fulfil the academic requirements for each programme of study.

In addition to this, students registered for this module typically also have to pay for:

Books and Stationery equipment

Recommended texts for this module may be available in limited supply in the University Library and students may wish to purchase reading texts as appropriate.

Course texts are provided by the library and there are no additional compulsory costs associated with the module.

Please also ensure you read the section on additional costs in the University’s Fees, Charges and Expenses Regulations in the University Calendar available at
