BACKGROUND
Despite the usefulness of artificial intelligence (AI)-based diagnostic decision-support systems, physicians' over-reliance on AI-generated diagnoses may lead to diagnostic errors.
OBJECTIVE
We investigated whether trust calibration, that is, adjusting physicians' trust to match the AI's actual reliability, supports the safe use of AI-based diagnostic decision-support systems.
METHODS
A quasi-experimental study was conducted at Dokkyo Medical University, Japan, with physicians allocated (1:1) to an intervention group and a control group. Participants reviewed the medical histories of 20 clinical cases generated by an AI-driven automated medical history-taking system, together with an AI-generated list of 10 differential diagnoses for each case, and provided one to three possible diagnoses. In the intervention group, physicians were additionally asked to judge whether the final diagnosis was included in the AI-generated list of 10 differential diagnoses; this judgment served as trust calibration. We analyzed the diagnostic accuracy of physicians in both groups and the correctness of trust calibration in the intervention group.
RESULTS
Among the 20 physicians assigned to the intervention (n=10) and control (n=10) groups, diagnostic accuracy was 41.5% and 46.0%, respectively, with no significant difference between groups (odds ratio 1.20, 95% confidence interval [CI] 0.81–1.78, P=.42). The overall accuracy of trust calibration was only 61.5%, and even when trust was correctly calibrated, diagnostic accuracy was only 54.5%.
CONCLUSIONS
Trust calibration did not significantly improve physicians' diagnostic accuracy when they formulated differential diagnoses from the medical histories and differential-diagnosis lists produced by an AI-driven automated medical history-taking system. This study underscores the limitations of the current trust-calibration approach and highlights the need for supportive measures alongside trust calibration rather than relying on trust calibration alone.