Research project

Enhancing International Language Testing at Scale

Project overview

High-stakes language assessment underpins access to education, employment, and migration for millions of learners worldwide. To meet these demands, tests must be valid, efficient, and internationally aligned. Research led by Zheng has prompted major changes to assessment practice globally. It informed the redesign of the Pearson Test of English Academic, cutting test time by one hour while maintaining psychometric quality and improving both candidate experience and operational efficiency. It also led to the creation of the CEFR-aligned Global Scale of Languages, which enables consistent multilingual benchmarking and is now embedded in widely used digital products for language learning and assessment. Zheng’s work has shaped the design, validation, and policy use of multiple large-scale tests, improving their quality, accessibility, and fairness for diverse test-taker populations.

Staff

Lead researchers

Professor Ying Zheng PhD

Professor
Research interests
  • Psychometrics and test validation; Human scoring vs. machine scoring; Statistics in applied linguistics; Comparative Judgement in language testing
  • Mandarin exams in the UK school system, including A-Level and GCSE exams; Mandarin Chinese teachers’ professional development
  • Learner motivation and language teaching pedagogy; ESL/EFL learner characteristics and test performance

Research outputs