Research project

The Academic Turing Test

Project overview

WSI Pilot Project

The popular version of the Turing test (or the imitation game) asks whether a computer can pass itself off as human in a one-to-one text conversation (Turing, 2004). A less popular version asks whether we can distinguish between a machine and a person (Shah, 2011). In 2022 this question has become an existential threat to higher education. That is: can educators distinguish between a machine and a student? Put another way, can Artificial Intelligence (AI) get a degree? A degree qualification is now thought to offer a ‘basic minimum’ within a competitive graduate labour market (Brooks and Everett, 2009). AI tools can already pass some university-level courses with a ‘C’ grade average (best-universities.net, 2022), so the perceptions of prospective students and employers about the value of a degree are set to change.

GPT-3 was launched in June 2020 as a system that produces human-like text. It has been described as “one of the most interesting and important AI systems ever produced” (Wikipedia, 2022). AI writing is now available as a paid service using GPT-3 and other competing AI models, and there are several blog posts along the lines of ‘the 13 best essay writing tools’ (Derungs, 2022). A few free and many paid options are available, from $20 a month. These general-purpose large language models are now being followed by AI writers with specific training, such as Galactica from Meta, which was heralded as a new paradigm for science but criticised for being inaccurate (Ryan, 2022).

The kneejerk response by educators might be to find ways to prevent or police the use of AI writing programs. Practically, because the output of an AI writing tool is unique, it will not be flagged by a Turnitin (similarity) check. The cost of such tools is also well below that of essay mills, which have been described as a growing crisis (Naughton, 2020) and are very hard to detect (Medway et al., 2018). If AI writing can pass university assessment, this demands a much deeper response than improved detection.

“Students will employ AI to write assignments. Teachers will use AI to assess them. Nobody learns, nobody gains. If ever there were a time to rethink assessment, it’s now” (Sharples, 2022)

Of course, handing in an essay entirely written by an AI program is cheating. What about an essay edited by an AI program? What about AI-generated summaries in a literature review? Should we use AI-generated student feedback? Responding to this hugely important technological innovation could draw on philosophical, pedagogical, legal, moral, strategic, and economic perspectives. Every teaching committee should be demanding a report on what the emerging generation of AI writing technology means for education strategy and practice. Every educator should be asking how to respond in their teaching and assessment design. This project does the groundwork for exactly such discussions.



Staff

Lead researcher

Dr David Baxter

Associate Professor

Research interests

  • Innovation management
  • Agile
  • Knowledge and learning in innovation
Other researchers

Professor Leslie Carr

Professor of Web Science

Professor Christian Bokhove

Professor

Research interests

  • Mathematics education in the classroom
  • Technology use in schools
  • Research methods

