Postgraduate research project

Human-centred AI to support decision-making in defence and security

Funding
Competition funded
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

This interdisciplinary project aims to develop human-centred AI decision support to help operators of complex systems in a way that is reliable, trustworthy and maintains ‘meaningful human control’.

Have you ever suffered from “analysis paralysis” when trying to make a big decision? Our connected world can overwhelm us with information from myriad sources. This might be frustrating when, say, choosing a new smartphone, but what if your decisions were safety-critical? Operators in complex systems such as defence, security, transport and healthcare face similar dilemmas, but the outcomes of their decisions can have much greater impact. 

There is an opportunity to use artificial intelligence (machine learning and large language models) to help process and filter some of this data. But while you might be happy to rely on ChatGPT to summarise the pros and cons of your new smartphone, operators of complex systems need a far higher level of confidence.

The requirement often cited in these contexts is ‘meaningful human control’ – although what exactly that means is not always well-defined. It brings into play concepts of explainability and transparency, and how they interact with operator workload, trust and situation awareness. The problem is not just one of interface design: it goes deeper into the system architecture, determining what information is filtered, what is presented to the operator, and how the system communicates its own confidence and reliability.

Fundamentally, all of this implies a human-centred approach to designing any such decision support system – ensuring that the human operators, who ultimately take responsibility for the decisions, are able to both understand and trust the recommendations the system offers.