Abstract
Assessment of foreign/second language (L2) oral proficiency is known to be complex and influenced by the local context. In Sweden, extensive assessment guidelines for the National English Speaking Test (NEST) are offered to teachers, who act as raters of their own students’ performances on this high-stakes L2 English oral proficiency (OP) test. Despite these guidelines, teachers commonly construct their own NEST scoring rubrics. The present study aims to unveil teachers-as-raters’ conceptualizations, as these emerge from the self-made scoring rubrics, and possible transformations of policy. Data consist of 20 teacher-generated scoring rubrics used for assessing NEST (years 6 and 9). Rubrics were collected via personal networks and online teacher membership groups. The data were analysed qualitatively through content analysis to examine (i) which OP sub-skills were in focus for assessment, (ii) how these sub-skills were conceptualized, and (iii) how the scoring rubrics were designed. Results showed that the content and design of the rubrics were heavily influenced by the official assessment guidelines, which led to broad consensus about what to assess, but not about how to assess it. This lack of consensus was particularly salient for interactive skills. Analysis of policy transformations revealed that teachers’ self-made templates in fact lead to an analytic rather than a holistic assessment practice.
Subject
Linguistics and Language, Language and Linguistics
Cited by
3 articles.