BACKGROUND
Virtual Patient Simulators (VPSs) log all of a user's actions, thereby enabling the creation of a multidimensional representation of the student's medical knowledge. This representation can be used to derive metrics that provide teachers with valuable information about student learning.
OBJECTIVE
To describe the metrics we developed to analyze the clinical diagnostic reasoning of medical students, to provide examples of their application, and to preliminarily validate these metrics on a class of undergraduate medical students. The metrics are computed from data logged by a novel VPS that embeds Natural Language Processing (NLP) techniques.
METHODS
Two clinical case simulations (tests) were created to evaluate our metrics. During each simulation, students' step-by-step actions were logged into the program database for offline analysis.
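For illustration only, the record below sketches the kind of per-action entry such a log might contain; all field names are hypothetical, as the abstract does not specify the database schema.

```python
# Illustrative shape of one logged action; field names are assumptions,
# chosen only to show the kind of record the offline analysis consumes.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LoggedAction:
    student_id: str
    case_id: str          # test 1 (dyspnea) or test 2 (abdominal pain)
    timestamp: datetime
    dimension: str        # e.g. "AN" (history taking), "MT" (test ordering)
    action: str           # free-text query or selected item
    correct: bool         # teacher-defined relevance/correctness flag
```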
Overall student performance was split into 7 dimensions: 1) identification of relevant information in the given clinical scenario (SC); 2) history taking (AN); 3) physical exam (PE); 4) medical test (MT) ordering; 5) diagnostic hypothesis (HY) setting; 6) binary analysis fulfillment (BA); 7) final diagnosis (RS) setting. Sensitivity (percentage of relevant information found) and precision (percentage of correct actions performed) were computed for each dimension and combined into their harmonic mean (F1), thereby obtaining a single score evaluating the student's performance in that dimension. The seven metrics were further grouped to reflect the student's capability to collect (SC, AN, PE, and MT) and to analyze (HY, BA, and RS) information, yielding an overall performance score. A methodological score was computed from the discordance between the diagnostic pathway followed by the student and a reference pathway previously defined by the teacher.
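As a minimal sketch of this scoring scheme, the functions below compute a per-dimension F1 from raw action counts and a group score over the collection and analysis dimensions. The count-based inputs, function names, and the aggregation rule (a plain mean) are illustrative assumptions; the simulator's actual implementation may differ.

```python
COLLECT = ("SC", "AN", "PE", "MT")  # information-collection dimensions
ANALYZE = ("HY", "BA", "RS")        # information-analysis dimensions

def f1(sensitivity: float, precision: float) -> float:
    """Harmonic mean of sensitivity and precision (F1)."""
    s = sensitivity + precision
    return 2.0 * sensitivity * precision / s if s else 0.0

def dimension_score(relevant_found: int, relevant_total: int,
                    correct_actions: int, actions_performed: int) -> float:
    """F1 for one dimension from raw counts."""
    sens = relevant_found / relevant_total if relevant_total else 0.0
    prec = correct_actions / actions_performed if actions_performed else 0.0
    return f1(sens, prec)

def group_score(f1_by_dim: dict, group: tuple) -> float:
    """Assumed grouping: mean F1 over the dimensions in a group."""
    return sum(f1_by_dim[d] for d in group) / len(group)

# Example (hypothetical counts for history taking, AN):
# dimension_score(relevant_found=8, relevant_total=10,
#                 correct_actions=8, actions_performed=12)  # -> ~0.73
```

The harmonic mean is a natural choice here because it penalizes students who score well on only one of the two components, e.g., those who find most relevant information but perform many unnecessary actions along the way.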
Twenty-five students attending the 5th year of the School of Medicine at Humanitas University underwent test 1, which simulated a patient suffering from dyspnea. Test 2 dealt with abdominal pain and was attended by 36 students on a different day.
For validation, we assessed Spearman's rank correlation between our performance scores and the score obtained by each student in the hematology curricular exam.
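The correlation step could be reproduced along the following lines; the score arrays below are placeholders, not the study data.

```python
# Spearman's rank correlation between simulator scores and exam grades.
from scipy.stats import spearmanr

vps_overall = [0.61, 0.55, 0.58, 0.49, 0.63]   # hypothetical VPS scores
exam_grades = [27, 24, 28, 22, 30]             # hypothetical exam grades

rho, p_value = spearmanr(vps_overall, exam_grades)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```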
RESULTS
Mean overall scores were consistent between test 1 (0.59±0.05) and test 2 (0.54±0.12).
Each student achieved his or her overall performance through a different balance of information collection and analysis. Methodological scores highlighted discordance between the reference diagnostic pathway previously set by the teacher and the one pursued by the student. No significant correlation was found between the VPS scores and the hematology exam scores.
CONCLUSIONS
Different components of the student's diagnostic process may be disentangled and quantified by appropriate metrics applied to the actions a student performs while addressing a virtual case. Such an approach may help teachers give students individualized feedback aimed at addressing competence gaps and methodological inconsistencies. No correlation emerged between the hematology curricular exam score and any of the proposed scores, presumably because they address different aspects of a student's medical knowledge.