
Can AI be used to undermine elections?

Published: 7 November 2023

A position paper by Ben Hawes, Dame Wendy Hall and Dr Matt Ryan examines the potential threats artificial intelligence poses to elections around the world and explores what can be done to protect the integrity of the most fundamental of democratic processes. Following are highlights from the full paper.

We should demand better accountability, oversight, and transparency around the potential effects of AI systems on elections. We should be reassured that elections are fully protected from threats that will continue to evolve.
Dame Wendy Hall, Regius Professor of Computer Science

Dame Wendy is director of the Web Science Institute and leads on research into digital trust. This strand of research focuses on the ethics and practice of online privacy and security, data trusts and data sharing, and emerging socio-technical issues related to trust and trustworthiness of digital systems.

How AI applications could threaten elections

Political disinformation, the profiling and targeting of voters, and other dubious online efforts to influence people’s votes have been a recurrent feature of recent elections, particularly in the US and UK. These threats are likely to be magnified by the rapid development of large language models and other forms of artificial intelligence.

Much has been said about the threats posed by “deepfake” videos and images, where anyone with the right software can make it appear as if a politician said or did something they didn’t. While this is a novel and formidable challenge for the very near future, most of the threats AI poses in relation to elections are not new in themselves. Rather, they’re known tactics that AI would greatly enhance by increasing their volume, frequency and effectiveness.

These include:

  • creating and spreading misinformation and disinformation through news articles, social media posts, photos, soundbites and videos
  • manipulating public opinion and behaviour in online forums and groups
  • using publicly available data records on voters to target them with personalised political content on social media
  • hacking into online systems for voter registration and for voting
  • disrupting civil society by enabling cyber attacks on critical infrastructure before or during an election

Generative AI technologies that create and distribute new information are a particular worry, as they make it easy to rapidly broadcast messages across a large number of channels. One Russian propaganda technique of this kind has been called “the firehose of falsehood”.

What can be done to counter AI interference

Government, law enforcement, media, civil society organisations and the tech industry all have a role to play in combating election interference by AI. They’ll have to work together to be effective.

Governments and regulators should be transparent about these threats in order to foster public trust. They could explore regulation requiring social media companies to take further steps to:

  • prevent the spread of disinformation
  • remove fake accounts and posts
  • label misleading content generated by AI
  • critically examine targeting techniques

Recent developments in AI make it easy to broadcast a much greater quantity and variety of disinformation. This could test the effectiveness of efforts by technology platforms like Google to control disinformation. Challenges will likely involve keeping up with the sheer volume of content preceding elections, as well as detecting, labelling and removing content and identifying its creators.

Governments need to do more to ensure these and related challenges can be overcome. They could do so by investing in:

  • election oversight and security (including cybersecurity)
  • the skills and capability of independent regulators for elections and for data protection and use
  • training election officials to recognise and address emerging threats
  • research on the capacity of algorithmic tools to influence voting intentions, and advice for decision-makers based on that research
  • better international collaboration between governments to share understanding of threats and best practice for dealing with them

Studies show that the public also needs to be educated so that they can better evaluate the accuracy and bias of online content and protect their online data.

Read more

For more on the individual threats AI could pose to democracy and on the challenges to addressing them, read the full position paper, Can artificial intelligence be used to undermine elections?
