Postgraduate research project

Understanding the influence of LLMs in human negotiation and decision-making

Funding
Competition funded
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

AI systems are becoming powerful persuaders, capable of influencing opinions, behaviours, and even critical decisions. This PhD dives deep into the risks of AI-driven manipulation, exploring how people respond to machine-generated arguments and trust AI advice. Through experiments and user studies, you’ll help shape safer, more ethical human–AI communication.

Negotiation and persuasion are fundamental methods for resolving conflicts in social interactions, whether in person or online. The ability to persuade others or negotiate agreements plays a critical role across various domains, from everyday decisions to high-stakes scenarios such as financial or legal advice.

With the rapid advancement of Generative AI, particularly Large Language Models (LLMs), it has become easier than ever to generate highly persuasive – and potentially manipulative – textual content. These models can create convincing arguments, simulate expert opinions, and even mimic personal communication styles, making it difficult for users to discern whether they are engaging with human- or AI-generated content.

While LLMs present opportunities for enhancing communication, education, and accessibility, they also pose significant risks, particularly when used to sway public opinion, encourage risky decisions, or spread misinformation. 

This project will explore how LLMs influence human decision-making, focusing on their role in negotiation and persuasion. It will investigate the extent to which individuals are susceptible to AI-generated arguments, how trust in AI-mediated communication is formed, and what factors contribute to over-reliance on machine-generated advice.

The project will also examine strategies to improve AI literacy among the public, aiming to equip users with the skills to evaluate AI-generated content and guard against making decisions that are not in their best interest. Using mixed methods, including user studies and controlled experiments, this research will provide insights into the ethical implications of AI-driven persuasion and contribute to the development of guidelines for responsible AI use in digital communication.

Comprehensive training on research skills needed for a PhD will be provided.

The School of Electronics & Computer Science is committed to promoting equality, diversity and inclusivity, as demonstrated by our Athena SWAN award. We welcome all applicants regardless of their gender, ethnicity, disability, sexual orientation or age, and will give full consideration to applicants seeking flexible working patterns and those who have taken a career break. The University has a generous maternity policy, onsite childcare facilities, and offers a range of benefits to help ensure employees’ well-being and work-life balance. The University of Southampton is committed to sustainability and has been awarded the Platinum EcoAward.