Many large-scale multiple-choice tests administered as part of aptitude testing and university education are formula scored: correct answers are rewarded and errors are penalised, but both the penalty and the reward can be avoided by “passing” on a question, which yields no points. My research has focused on the cognitive, metacognitive and strategic components underlying performance on such tests, using signal-detection analysis. A key finding is that people are underconfident, which carries an opportunity cost: they set a criterion for reporting answers that is too conservative, producing a lower-than-optimal final score. These results are useful not only because they inform us about the basic mental processes and strategies that people use (rightly or wrongly) when taking formula-scored tests, but also because they have the potential to affect how such large-scale tests are scored.
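To make the opportunity-cost argument concrete, here is a minimal illustrative sketch (not the analysis from the papers below): it assumes a hypothetical 5-option test scored +1 for a correct answer, −0.25 for an error and 0 for a pass, and a made-up confidence profile. Under these assumptions a risk-neutral test-taker should answer whenever confidence exceeds 0.2, so an overly conservative criterion forgoes expected points.

```python
def expected_item_score(p_correct, penalty=0.25):
    """Expected points from answering one item with probability
    p_correct of being right, under formula scoring
    (+1 correct, -penalty error, 0 for passing)."""
    return p_correct - (1 - p_correct) * penalty

def expected_test_score(confidences, criterion, penalty=0.25):
    """Sum expected points over items whose confidence meets the
    report criterion; passed items contribute nothing."""
    return sum(expected_item_score(p, penalty)
               for p in confidences if p >= criterion)

# Hypothetical confidence profile for a 10-item test.
confidences = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.3, 0.2, 0.1]

# Risk-neutral optimum: answer whenever the expected score is
# positive, i.e. p > penalty / (1 + penalty) = 0.2 here.
optimal = expected_test_score(confidences, criterion=0.21)

# Conservative criterion: answer only when fairly confident.
conservative = expected_test_score(confidences, criterion=0.6)

assert conservative < optimal  # the opportunity cost of underconfidence
```

With this profile, the conservative criterion yields an expected score of 2.75 against 3.625 for the near-optimal one; the gap is the opportunity cost described above.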
A little bias goes a long way: the effects of feedback on the strategic regulation of accuracy on formula-scored tests
& B. Martin-Luengo, 2013, Journal of Experimental Psychology: Applied, 19, 383--402
2007, Journal of Experimental Psychology: General, 136(1), 1--22
How many questions should I answer? Using bias profiles to estimate optimal bias and maximum score on formula-scored tests. (In special issue on: Bridging cognitive science and education: learning, memory, and metacognition.)
& M.M. Arnold, 2007, European Journal of Cognitive Psychology, 19, 718--742