About the project
The aim of this project is to develop formal models of accountability and liability using logic, game theory, and agent-based simulation. You will explore responsibility under uncertainty, delegation, and trust, with applications in autonomous systems, digital governance, and ethical AI.
As intelligent systems become more deeply integrated into society, from mobility and healthcare to digital governance, the question of who is responsible for their decisions and outcomes becomes increasingly complex. This project tackles the urgent need for formal, ethical, and transparent reasoning about responsibility in multiagent systems (MAS).
You will investigate different forms of responsibility (including accountability, blameworthiness, and liability), and develop mathematical and computational models to capture how responsibility can be attributed in settings where multiple autonomous agents interact. This project will explore key challenges such as reasoning under uncertainty, delegation and trust, and institutional responsibilities.
Research methods will include:
- formal logics for MAS, such as alternating-time temporal logic (ATL)
- game theory
- mechanism design
- agent-based simulation
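To give a flavour of the kind of computational model these methods lead to, the sketch below is a purely illustrative toy example rather than a prescribed approach for the project: each agent either takes or omits a precaution, harm occurs when too few agents act, and responsibility is attributed with a simple pivotality measure loosely inspired by Chockler and Halpern's degree of responsibility. The threshold, agent names, and scoring rule are all illustrative assumptions.

```python
from itertools import combinations

# Toy setting (illustrative assumptions): each agent either takes a
# precaution (1) or omits it (0), and a harmful outcome occurs whenever
# fewer than THRESHOLD agents take the precaution.
THRESHOLD = 2

def harm(profile):
    """True if the joint action profile leads to the harmful outcome."""
    return sum(profile.values()) < THRESHOLD

def responsibility(profile, agent):
    """Pivotality-based score loosely inspired by Chockler & Halpern:
    1 / (1 + m), where m is the smallest number of other agents whose
    actions must be altered before flipping `agent`'s own action would
    have averted the harm. Returns 0 if the agent is never pivotal."""
    others = [a for a in profile if a != agent]
    for m in range(len(others) + 1):
        for changed in combinations(others, m):
            context = dict(profile)
            for other in changed:
                context[other] = 1 - context[other]  # counterfactual contingency
            actual = dict(context, **{agent: profile[agent]})
            flipped = dict(context, **{agent: 1 - profile[agent]})
            # The agent is pivotal if, under this contingency, the harm
            # still occurs with their actual action but not once it is flipped.
            if harm(actual) and not harm(flipped):
                return 1 / (1 + m)
    return 0.0

if __name__ == "__main__":
    # Hypothetical scenario: only one of three agents took the precaution.
    profile = {"alice": 1, "bob": 0, "carol": 0}
    print("harm occurred:", harm(profile))
    for agent in profile:
        print(f"{agent}: responsibility {responsibility(profile, agent):.2f}")
```

Running the sketch attributes no responsibility to the agent who took the precaution and full responsibility to each agent who, by acting alone, could have averted the harm; richer models of the kind developed in this project would extend such counterfactual tests to uncertainty, delegation, and institutional structure.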
Application areas span autonomous driving, responsible decision-making in organisations, AI ethics in legal contexts, and trust modelling in human-AI systems.
You will join the Agents, Interaction and Complexity (AIC) Group, one of the leading research groups in multiagent systems. You will be embedded in a vibrant ecosystem of responsible AI initiatives, including the Responsible AI UK (RAI UK) hub and the Centre for Doctoral Training in AI for Sustainability (SustAI CDT), and you will work closely with researchers on the Citizen-Centred AI Systems (CCAIS) project.