Abstract
Pressure for accountability, transparency, and consistency in the assessment process is increasing. For assessing complex cognitive achievements, essays are probably the most familiar method, but essay scoring is notoriously unreliable. To address these issues of process, accountability, and consistency, this study explores essay-marking practice amongst examiners in a UK dental school using a qualitative approach. Think-aloud interviews were used to gain insight into how examiners make judgements whilst marking essays. The issues proved multifactorial: the interviews revealed differing interpretations of assessment and correspondingly individualised practices, which skewed the outcomes when essays were marked. Common to all examiners was a tendency to rank essays rather than adhere to criterion-referencing. Whether examiners marked holistically or analytically, essay-marking guides posed a problem for inexperienced examiners, who needed more guidance and seemed reluctant to make definitive judgements. When scripts were marked and re-marked, only 1 of the 9 examiners awarded the same grade category; all examiners awarded scores differing by at least one grade, and the magnitude of the difference was unrelated to examining experience. The study concludes that improving assessment requires a shared understanding of standards and of how criteria are to be used, for the benefit of both staff and students.