
ChatGPT versus lawyers: Which would you choose?

Published: 2025-04-28 10:29:00
People are more willing to rely on legal advice from ChatGPT than on advice from a lawyer, according to new research

People are more willing to rely on legal advice from ChatGPT than advice from a lawyer, new research led by the University of Southampton has found.

Academics specialising in computer science, psychology and law joined forces to test hundreds of people’s willingness to rely on legal advice provided by the generative AI chatbot ChatGPT, compared with advice from qualified lawyers.

Some participants did not know the source of the legal advice they were reading, whilst others did.

The study found that participants who did not know the source of the legal advice were more willing to rely on the advice from ChatGPT – leading the academics involved to call for education in AI literacy for the general public.

Dr Eike Schneiders, Assistant Professor of Computer Science at the University of Southampton, led the project. He said: “There are two elements which might explain why people are more willing to rely on AI-generated legal advice: the length of the response provided, and the complexity of the advice.

“We found that the lawyer-generated advice was longer, but also less complex. This was a surprise – we expected lawyer-generated advice to be more complex, but this wasn’t the case.”

The research team comprised academics from Computer Science and Psychology at the University of Southampton, academics in law and computer science at the University of Nottingham, and academics in law at the University of Antwerp.

Using insights from genuine legal questions, they wrote 18 hypothetical legal cases relating to traffic law, planning law, and property law. They then tested advice for these cases on a total of 288 people in a series of experiments.

One experiment involved 50 people who knew the source of the advice and 50 who did not.

Those who did not know the source showed a significantly higher willingness to rely on the ChatGPT advice. Those who knew the source were equally willing to rely on both sources of advice.

Another experiment tested whether people could identify the source of the advice. Random guessing would have produced a score of 0.5, while perfect discrimination would have produced a score of 1.0.

The average participant score was 0.59, indicating discrimination performance that was above guessing level but still relatively weak. This suggests that while there is some awareness, mistakes are still common, leaving ample room for improvement.
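To illustrate how such a score behaves, here is a minimal sketch that treats the discrimination score as the proportion of advice snippets whose source a participant identifies correctly, so that 0.5 corresponds to random guessing and 1.0 to perfect discrimination. The function name and the example data are hypothetical and not taken from the study.

```python
def discrimination_score(true_sources, guesses):
    """Fraction of advice snippets whose source was identified correctly.

    With two equally likely sources, guessing at random gives ~0.5 on
    average, and identifying every snippet correctly gives 1.0.
    """
    correct = sum(t == g for t, g in zip(true_sources, guesses))
    return correct / len(true_sources)

# Hypothetical data: each snippet was written by "ai" or "lawyer",
# and the participant guessed a source for each one.
true_sources = ["ai", "lawyer", "ai", "lawyer", "ai",
                "lawyer", "ai", "lawyer", "ai", "lawyer"]
guesses = ["ai", "ai", "ai", "lawyer", "lawyer",
           "lawyer", "ai", "ai", "ai", "lawyer"]

print(discrimination_score(true_sources, guesses))  # 0.7 for this participant
```

On this scale, the study’s reported average of 0.59 sits only slightly above chance: participants got a little under six in ten identifications right.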

“It demonstrated that people struggle to tell the difference, but that there is something there that gives people a feeling about the source,” said Dr Schneiders.

He added: “These findings are surprising – and it was especially surprising that the participants who knew the source of the advice did not trust the lawyers more.”

Dr Schneiders concluded: “Deployment of technology such as ChatGPT is already happening – and advancing fast – so we need to embrace it. People need to be aware of the capabilities and limitations of models such as ChatGPT.

“Education to help people distinguish the source of what they are reading is becoming more and more vital. I believe increasing AI literacy is the way forward, thereby giving the public the knowhow to navigate it.”

The research paper is being presented at the world’s premier conference on human-computer interaction, CHI 2025 (the Association for Computing Machinery’s Conference on Human Factors in Computing Systems), in Yokohama, Japan, from Saturday 26 April to Thursday 1 May.
