Research project

DISCERN-AI: Developing an Intervention to Support Critical Evaluation of Real and Non-Authentic Images.

Project overview

Artificial intelligence (AI) systems can now produce images of faces that are so realistic that humans cannot distinguish them from real human faces. Miller et al. (2023), for example, recently presented participants with real and AI-generated images of faces and tested participants’ ability to distinguish them. Participants were not only unable to distinguish between the real and AI-generated faces but, remarkably, were more likely to judge an AI-generated face as human than a real human face. This effect, termed AI hyperrealism, is thought to arise because AI-generated images are closer to the average, prototypical face, which makes them less distinctive than real human faces. AI hyperrealism is deeply concerning because of the potential for AI-generated images to mislead the public and distort the truth. Indeed, AI-generated images are thought to contribute to the spread of political misinformation (Hatmaker, 2020), increase cybersecurity risks (Khosravy et al., 2021), and facilitate fraudulent attacks (Bray et al., 2023).

In Miller et al.'s (2023) work, participants were asked to indicate, using open-ended qualitative reports, the visual attributes that they had used to judge whether the images were real or AI-generated. These reports revealed several attributes that participants commonly used, such as face proportionality, image lighting, and emotion. The researchers then used machine learning (a random forest classifier, RFC) to determine whether these attributes could accurately classify the images as real or AI-generated. The RFC classified the images with 94% accuracy using those attributes, yet human participants could not do so reliably, even though they had identified the attributes themselves. This finding suggests that humans can perceive the visual attributes that reliably distinguish real from AI-generated images, but they do not know how to use them. It therefore calls for an intervention that helps people distinguish between real and AI-generated images.
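
As a rough illustration of this kind of analysis, the sketch below trains a random forest classifier on per-image attribute ratings to classify images as real or AI-generated. The feature names and data are hypothetical placeholders, not Miller et al.'s (2023) actual features or pipeline.

    # Illustrative sketch only: a random forest classifier trained on
    # human-reported attribute ratings (hypothetical data, not the
    # original study's features or analysis pipeline).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_images = 200

    # Hypothetical ratings for three attributes per image
    # (proportionality, lighting, emotion) and labels (1 = AI-generated, 0 = real).
    X = rng.normal(size=(n_images, 3))
    y = rng.integers(0, 2, size=n_images)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"Mean cross-validated accuracy: {scores.mean():.2f}")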

This project will use the pump-priming funding to develop an innovative intervention that helps people accurately discriminate between real and AI-generated images of faces. Participants will be recruited via Prolific to ensure a large and diverse sample. They will complete a pre-test in which they will be shown a series of real and AI-generated images of faces and asked to identify whether each is real or AI-generated, rate their confidence, and provide qualitative feedback on their decisions. No feedback will be given at this stage. Participants will then be randomly allocated to an intervention or control group (see below), before completing a post-test that is identical to the pre-test but uses new images. Allocation of the images to the pre- and post-test will be counterbalanced across participants.

Between the pre- and post-test, the intervention group will be trained using inductive learning, which derives from learning and memory research and which we are using to inform interventions that help participants discriminate between true and fake news headlines (e.g., Modirrousta-Galian, Higham, & Seabrooke, 2023). Participants will be shown real and AI-generated images in an interleaved schedule (which is known to enhance discrimination; Kornell & Bjork, 2008) and will be asked to guess the category (real or AI-generated) before receiving corrective feedback. Participants will complete several blocks of inductive learning training, with each block focusing on one feature that is, or is not, useful for distinguishing between real and AI-generated images. For example, Miller et al. (2023) found that participants thought that highly proportional faces were more likely to be human when they were actually more likely to be AI-generated, and that smooth skin indicated an AI-generated image when it was not predictive at all. In each training block, participants will be shown images that are strongly representative of a given feature (e.g., proportionality) and will receive clear instructions on how to discriminate images using that feature. The control group will simply play Tetris for an equivalent period, to equate the experimental timings between groups.
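
For concreteness, the following minimal sketch (our illustration, not the project's actual experiment code) shows how interleaved training trials with corrective feedback, and the counterbalancing of image sets across pre- and post-test, might be structured. All names and implementation details are assumptions.

    import random

    def counterbalance(set_a, set_b, participant_id):
        # Assign image set A to the pre-test for even-numbered participants
        # and to the post-test for odd-numbered participants.
        return (set_a, set_b) if participant_id % 2 == 0 else (set_b, set_a)

    def build_interleaved_block(real_images, ai_images):
        # Mix real and AI-generated items within the same block and shuffle,
        # so successive trials alternate unpredictably between the categories.
        trials = [(img, "real") for img in real_images]
        trials += [(img, "AI-generated") for img in ai_images]
        random.shuffle(trials)
        return trials

    def run_trial(image, correct_label, get_response):
        # get_response is a placeholder for presenting the image and
        # collecting the participant's guess ("real" or "AI-generated").
        response = get_response(image)
        if response == correct_label:
            feedback = "Correct."
        else:
            feedback = f"Incorrect - this image is {correct_label}."
        return response, feedback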

We predict that on the pre-test, the intervention group will perform similarly to the control group, with both groups showing AI hyperrealism (i.e., identifying more AI-generated faces as human than real faces). On the post-test, we predict that the intervention group will show better discrimination than the control group. Following Miller et al. (2023), discrimination will be measured using d′ (perceptual sensitivity) and meta-d′ (metacognitive sensitivity).
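
By way of illustration, the sketch below computes d′ from hit and false-alarm counts using the standard signal detection formula, with a log-linear correction for extreme rates. Here a "hit" is taken to be correctly labelling an AI-generated image as AI-generated and a "false alarm" is labelling a real image as AI-generated, though the coding could equally be reversed; estimating meta-d′ requires fitting a signal detection model to confidence ratings and is not shown.

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction keeps hit/false-alarm rates away from 0 and 1,
        # where the inverse normal (z) transform is undefined.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Example: 40 AI-generated and 40 real test images (hypothetical counts).
    print(d_prime(hits=30, misses=10, false_alarms=12, correct_rejections=28))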

Staff

Lead researcher

Dr Tina Seabrooke

Lecturer B

Research interests

  • Episodic memory
  • Associative learning
Other researchers

Professor Philip Higham

Professor of Experimental Psychology

Research interests

  • Enhancing student learning in educational settings
  • Protecting social media users from fake news
  • Understanding the interplay of controlled and automatic influences of retrieval practice

Dr Jennifer Williams

Lecturer

Research interests

  • Responsible and trustworthy audio processing applied to a variety of domains and use-cases
  • Audio AI safety in terms of usability, privacy, and security
  • Ethical issues of trust for audio AI (deepfake detection, voice-related rights, and speaker and content privacy)
