Abstract
Objectives
Sources of bias, such as examiners, domains and stations, can influence student marks in an objective structured clinical examination (OSCE). This study describes the extent to which the facets modelled in an OSCE contribute to scoring variance and how well they fit a Many-Facet Rasch Model (MFRM) of OSCE performance. A further objective is to examine the functioning of the rating scale used.

Design
A non-experimental cross-sectional design.

Participants and settings
An MFRM was used to identify sources of error (eg, examiner, domain and station) that may influence student outcomes. A 16-station OSCE was conducted for 329 final-year medical students. Domain-based marking was applied, each station using a sample from eight domains defined across the whole OSCE: communication skills, professionalism, information gathering, information giving, clinical interpretation, procedure, diagnosis and management. The domains in each station were weighted to ensure proper attention to the construct of the individual station. Four facets were assessed: students, examiners, domains and stations.

Results
The results suggest that the OSCE data fit the model, confirming that an MFRM approach was appropriate. The variable map allows comparison between the facets of students, examiners, domains and stations, and the 5-point score for each domain within each station, as all are calibrated to the same scale. Fit statistics showed that the domains map well on to the performance of the examiners. No statistically significant difference in examiner sensitivity (3.85 logits) was found. However, the results did suggest that examiners were lenient and that some behaved inconsistently. The results also suggest that the functioning of the response categories on the 5-point rating scale needs further examination and optimisation.

Conclusions
The results of the study have important implications for examiner monitoring and training activities, to aid assessment improvement.
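The four-facet design described above corresponds to the standard many-facet Rasch formulation. As a point of reference, a sketch of the model follows, with symbols chosen here for illustration rather than taken from the paper:

$$\ln\!\left(\frac{P_{nmijk}}{P_{nmij(k-1)}}\right) = B_n - C_m - D_i - E_j - F_k$$

where \(B_n\) is the ability of student \(n\), \(C_m\) the severity of examiner \(m\), \(D_i\) the difficulty of station \(i\), \(E_j\) the difficulty of domain \(j\), and \(F_k\) the Rasch-Andrich threshold at which categories \(k-1\) and \(k\) of the rating scale are equally probable. Because all parameters are expressed in logits on a common scale, the facets can be compared directly on the variable map.

To illustrate how the rating-scale categories behave under this model, a minimal Python sketch is given below. The function name and parameter values are hypothetical, chosen only to show the computation; the study itself would have used dedicated MFRM software rather than this code.

```python
import numpy as np

def mfrm_category_probs(ability, severity, station, domain, thresholds):
    """Probabilities of the K+1 rating categories under a many-facet
    Rasch (rating scale) model. `thresholds` holds the K Rasch-Andrich
    thresholds F_1..F_K, in logits."""
    eta = ability - severity - station - domain
    # Category 0 has log-numerator 0; category k >= 1 has the cumulative
    # sum of (eta - F_j) for j = 1..k.
    log_num = np.concatenate(([0.0], np.cumsum(eta - np.asarray(thresholds))))
    log_num -= log_num.max()              # guard against overflow
    probs = np.exp(log_num)
    return probs / probs.sum()

# Hypothetical parameters (logits): an average student, a slightly
# lenient examiner, an easy station, ordered category thresholds.
probs = mfrm_category_probs(ability=0.5, severity=-0.3, station=-0.2,
                            domain=0.1, thresholds=[-2.1, -0.6, 0.9, 2.4])
print(probs.round(3))   # P(score = 1..5) on the 5-point scale
```

Disordered or rarely used categories in output of this kind are one sign that a rating scale needs the sort of optimisation the Results point to.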