About the project
Effective human-machine teaming in joint fleet operations presents significant challenges, especially under uncertain and dynamic conditions. This project aims to design decentralised control and formation algorithms, predictive models for operation under communication limits, and transparent interfaces, with the goal of optimising swarm behaviour, enhancing autonomy, and strengthening human trust and mission effectiveness.
Future joint fleet structures face significant challenges in maintaining situational awareness and operational effectiveness in complex, dynamic, and partially observable environments. Given the operator’s limited capacity to process the full scope of information generated by the swarm, autonomous robots must possess sufficient autonomy to handle low-level decision-making while remaining aligned with high-level operational intent.
This project proposes the development of adaptive swarms capable of dynamically adjusting paths, formations, and task allocation in response to both mission requirements and operator directives. For instance, during exploration the autonomous vehicles may operate in dispersed formations for wide-area sensing and detection, then tighten their formations near critical zones to enhance data fusion, navigation accuracy, and communication with the lead vehicles.
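As a purely illustrative sketch of this kind of formation adaptation (the function name, spacings, and threshold below are hypothetical placeholders, not part of the proposed method), a simple rule might widen the formation during exploration and tighten it near critical zones:

```python
import numpy as np

def formation_spacing(robot_positions, critical_zones,
                      dispersed_spacing=50.0, tight_spacing=5.0,
                      proximity_radius=100.0):
    """Return a target inter-robot spacing (metres): dispersed for
    wide-area sensing, tight near critical zones to aid data fusion
    and communication. All names and thresholds are hypothetical."""
    centroid = np.mean(np.asarray(robot_positions), axis=0)
    if not critical_zones:
        return dispersed_spacing
    # Distance from the swarm centroid to the nearest critical zone.
    nearest = min(np.linalg.norm(centroid - np.asarray(zone))
                  for zone in critical_zones)
    # Tighten the formation when a critical zone is nearby.
    return tight_spacing if nearest < proximity_radius else dispersed_spacing
```

In practice, any such rule would be only one component of the decentralised controller, combined with operator directives and the decision-theoretic models described below.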
A core challenge is balancing autonomous swarm decision-making with human-in-the-loop control, especially under communication constraints. The system must determine when and how to seek guidance from, or relay information to, the pilot without compromising mission effectiveness or transparency.
The objectives are:
- to design decentralised control and adaptive formation algorithms for accompanying swarms that respond to environmental dynamics and operator intent;
- to develop predictive models for autonomous decision-making under limited or intermittent communication;
- to integrate an intuitive human-swarm interface that enables high-level transparency while minimising operator cognitive load;
- to evaluate system performance through simulation and real-world testing (with one operated and many autonomous robots), using metrics such as mission effectiveness and pilot workload.
This project will seek to develop new strategies for swarm control using partially observable Markov decision processes (POMDPs) and their decentralised counterparts. The swarm will be modelled as a Markov decision process whose state each robot can observe only partially. By explicitly modelling uncertainty over the state of the entire swarm, the policies of the individual robots can be optimised so that the swarm meets its objective with high probability.
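As a sketch of this formalism (following the standard decentralised POMDP setting, e.g. [5] below; the exact formulation will be developed within the project), the accompanying swarm could be written as a tuple

$$ \langle I, S, \{A_i\}_{i \in I}, T, R, \{\Omega_i\}_{i \in I}, O, \gamma \rangle, $$

where \(I\) is the set of robots, \(S\) the set of joint swarm states, \(A_i\) and \(\Omega_i\) the actions and observations of robot \(i\), \(T(s' \mid s, a)\) the transition model under the joint action \(a\), \(O(o \mid s', a)\) the observation model, \(R(s, a)\) the reward, and \(\gamma \in (0, 1]\) a discount factor. Each robot conditions its local policy \(\pi_i\) only on its own action-observation history, and the joint policy \(\pi = (\pi_1, \dots, \pi_n)\) is chosen to maximise the expected return

$$ \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{T-1} \gamma^{t} R(s_t, a_t) \right]; $$

with an indicator-style reward for mission success, this corresponds to maximising the probability that the swarm meets its objective.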
Adopting this formalism will also allow the behaviour of the swarm to be studied theoretically and connected to optimal nonlinear control problems, placing current policies, which perform strongly in practice but remain poorly understood in theory, on a solid theoretical footing.
This research will advance human-swarm collaboration in manned-unmanned operations, providing a scalable framework for future accompanying swarms that facilitate human trust, enhance mission flexibility, and maintain operational robustness in contested and uncertain environments.
Lab: The student will have access to various robotic platforms (e.g., wheeled robots, quadrupeds, quadcopters), sensors, communication modules, and both indoor environments for initial testing and outdoor environments for advanced deployment, to develop and validate their algorithms.
Supervisors' former projects:
[1] Enabling trustworthiness in human-swarm systems through a digital twin
[2] Designing a user-centered interaction interface for human–swarm teaming
[3] A user study evaluation of predictive formal modelling at runtime in human-swarm interaction
[4] Partially observed Markov decision processes
[5] A concise introduction to decentralized POMDPs
[6] Partially observable Markov decision processes in robotics: a survey