Abstract
Clinical reasoning theories agree that knowledge and the diagnostic process are associated with diagnostic success. However, the exact contributions of these components of clinical reasoning to diagnostic success remain unclear. This is particularly the case when the diagnostic process is operationalized through diagnostic activities (i.e., teachable practices that generate knowledge). We therefore conducted a study investigating to what extent knowledge and diagnostic activities uniquely explain variance in diagnostic success with virtual patients among medical students. The sample consisted of N = 106 medical students in their third to fifth year of university studies in Germany (six-year curriculum). Participants completed professional knowledge tests before diagnosing virtual patients. Diagnostic success with the virtual patients was assessed with diagnostic accuracy as well as a comprehensive diagnostic score, answering the call for more extensive measurement of clinical reasoning outcomes. Three diagnostic activities were tracked: hypothesis generation, evidence generation, and evidence evaluation. Professional knowledge predicted performance in terms of the comprehensive diagnostic score and displayed a small association with diagnostic accuracy. Diagnostic activities predicted both the comprehensive diagnostic score and diagnostic accuracy. Hierarchical regressions showed that the diagnostic activities made a unique contribution to diagnostic success, even when knowledge was taken into account. Our results support the argument that the diagnostic process is more than an embodiment of knowledge and explains variance in diagnostic success over and above knowledge. We discuss possible mechanisms explaining this finding.
Funder
Deutsche Forschungsgemeinschaft
Ludwig-Maximilians-Universität München
Publisher
Springer Science and Business Media LLC
Subject
Education, General Medicine
Cited by: 3 articles.