Postgraduate research project

Resilient trust in AI-based human-machine teaming

Funding
Competition funded
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

At present, there is no systematic mechanism for sustaining a calibrated level of trust in human-machine teams. The aim of this research is to better understand trust dynamics and to build a first prototype of an AI-based human-machine system that can autonomously maintain resilient trust across a range of situations.

Trust is a critical factor in the effectiveness of human-machine teams. Insufficient trust can result in underutilisation of advanced technologies, while excessive trust may lead to dangerous overreliance on autonomy in safety-critical situations. Even when the appropriate level of trust is achieved, it is fragile and susceptible to disruption.

A fundamental requirement for achieving resilient trust is a quantitative understanding of how trust evolves across different operational contexts, levels of system transparency, and in response to failures. This project will develop machine learning models that predict the dynamics of human trust under varying conditions, informed by empirical user studies. The models will characterise trust trajectories during normal system operation as well as following incidents caused by system errors or external factors.
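
To make the modelling target concrete, a minimal Python sketch of one possible trust-dynamics model follows. The scalar trust state, the asymmetric learning rates (failures eroding trust faster than successes rebuild it, reflecting the fragility noted above), and all parameter values are illustrative assumptions only, not methods or results of this project.

    # Illustrative sketch: trust as a scalar state in [0, 1], updated after
    # each interaction outcome. All parameters are assumed values for the
    # purpose of the example.

    from dataclasses import dataclass

    @dataclass
    class TrustModel:
        trust: float = 0.5        # current trust estimate in [0, 1]
        gain_rate: float = 0.05   # how quickly successes build trust
        loss_rate: float = 0.30   # how quickly failures erode it (fragility)

        def update(self, success: bool) -> float:
            """Move trust toward 1.0 on success, toward 0.0 on failure."""
            if success:
                self.trust += self.gain_rate * (1.0 - self.trust)
            else:
                self.trust -= self.loss_rate * self.trust
            return self.trust

    # Example trajectory: steady operation, one incident, then recovery.
    model = TrustModel()
    outcomes = [True] * 10 + [False] + [True] * 10
    trajectory = [round(model.update(o), 2) for o in outcomes]
    print(trajectory)

Running the example shows the slow build-up of trust during normal operation, the sharp drop after a single failure, and the gradual recovery afterwards, which is exactly the kind of trajectory the project's learned models would need to capture from user-study data.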

The final outcome of the project will be a prototype human–machine system that leverages these predictive models to determine and display the appropriate level of information to the operator, enabling calibrated and resilient trust. This research will provide a scientific foundation for trust-aware system design and practical guidelines for sustaining effective human–machine collaboration in high-stakes environments.
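
As an illustration of how such predictive models might drive the operator display, the following Python sketch maps a predicted trust value to a level of information shown to the operator. The function name, thresholds, and display levels are hypothetical; the actual prototype design is an open research question for the project.

    # Hypothetical trust-calibration step: choose how much system detail to
    # show the operator given the model's current trust prediction. The
    # thresholds and level names are assumptions for illustration.

    def select_display_level(predicted_trust: float,
                             low: float = 0.4, high: float = 0.8) -> str:
        """Map predicted trust to an information level for the operator."""
        if predicted_trust < low:
            # Trust too low: show reasoning and evidence to rebuild confidence.
            return "detailed_explanation"
        if predicted_trust > high:
            # Trust too high: surface uncertainty to discourage overreliance.
            return "uncertainty_highlight"
        # Trust within the calibrated band: a concise summary suffices.
        return "summary"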