Author:
Susanne Gaube, Harini Suresh, Martina Raue, Eva Lermer, Timo K. Koch, Matthias F. C. Hudecek, Alun D. Ackery, Samir C. Grover, Joseph F. Coughlin, Dieter Frey, Felipe C. Kitamura, Marzyeh Ghassemi, Errol Colak
Abstract
Artificial intelligence (AI)-generated clinical advice is becoming more prevalent in healthcare, yet its impact on physicians’ decision-making remains underexplored. In this study, physicians received X-rays accompanied by correct diagnostic advice and were asked to make a diagnosis, rate the quality of the advice, and judge their own confidence. We manipulated whether the advice came with or without a visual annotation on the X-rays, and whether it was labeled as coming from an AI or a human radiologist. Overall, receiving annotated advice from an AI resulted in the highest diagnostic accuracy. Physicians rated the quality of AI advice higher than that of human advice. Neither manipulation had a strong effect on participants’ confidence. The magnitude of the effects varied between task experts and non-task experts, with the latter benefiting considerably from correct explainable AI advice. These findings raise important considerations for the deployment of diagnostic advice in healthcare.
Funder
Volkswagen Foundation
Ludwig-Maximilians-Universität München
Publisher
Springer Science and Business Media LLC
Cited by
30 articles.