To support the management of large numbers of robots, Artificial Intelligence (AI) algorithms have been developed to automate the actions of such robot swarms and enable them to act in a cohesive and coordinated way.
AI thus allows swarms to allocate tasks among themselves and to react to losses in communication or resources (e.g., other members of the swarm), thereby reducing the workload of their human counterparts. However, it has been shown that, in some situations, operators are overwhelmed by the information coming from the robots, may not fully trust their decisions, and therefore override them; in doing so, they may cause the system to fail. In other situations, humans may rely completely on the automation and fail to notice obvious errors in the system. Moreover, previous studies suggest that, to manage such large fleets of robots safely, there should be shifts in autonomy levels that allow humans to take corrective action to recover from failures or to optimize task performance.
Understanding when such shifts should occur without losing the fault-tolerance benefits of a decentralized swarm, what levels of workload these shifts induce, and how the team of operators should enact them are the key questions this project will address.
Principal Investigator: Professor Sarvapali Ramchurn (Southampton)
Co-Investigator: Dr Danesh Tarapore (Southampton)