Do experience and text quality matter for raters’ decision-making behaviors? Seminar
- Time:
- 17:00 - 18:30
- Date:
- 27 April 2022
- Venue:
- Online
Event details
CLLEAR Seminar Series
Abstract
The score assigned to an essay is not solely the outcome of an interaction between the test-taker and the test, but the result of interactions among several factors, including the test-taker, the prompt or task, the written text itself, the rater(s), and the rating scale (Hamp-Lyons, 1990). Ratings given to individuals’ performances are therefore considered subjective, since they reflect not only the quality of the performance but also the quality of the rater’s judgment (McNamara, 2000). In this talk, I will focus on the impact of raters’ experience and of text quality on their decision-making strategies. Using a 10-point analytic rubric, each rater voice-recorded their thoughts through think-aloud protocols (TAPs) while scoring essays. The results revealed that text quality has a larger effect than rating experience on raters’ decision-making behaviors. In addition, raters prioritized aspects of style, grammar, and mechanics when rating low-quality essays, but emphasized rhetoric and their general impressions of the text for high-quality essays. Furthermore, low-experienced raters differed more in their behaviors while assessing scripts of distinct qualities than did the medium- and high-experienced groups. The findings suggest that raters’ scoring behaviors might evolve with practice, resulting in less variation in their decisions. As such, developing strategy-based rater training programs might help to increase consistency across raters of different experience levels.
References
Hamp-Lyons, L. (1990). Second language writing: Assessment issues. In B. Kroll (Ed.), Second language writing: Research insights for the classroom (pp. 69–87). Cambridge University Press.
McNamara, T. F. (2000). Language testing. Oxford University Press.
Speaker information
Özgür Şahan, MLL, University of Southampton