Abstract
Large language models offer novel opportunities to seek digital medical advice. While previous research primarily addressed the performance of such artificial intelligence (AI)-based tools, public perception of these advancements has received little attention. In two preregistered studies (n = 2,280), we presented participants with scenarios of patients obtaining medical advice. All participants received identical information, but we manipulated the putative source of this advice (‘AI’, ‘human physician’, ‘human + AI’). ‘AI’- and ‘human + AI’-labeled advice was evaluated as significantly less reliable and less empathetic than ‘human’-labeled advice. Moreover, participants indicated a lower willingness to follow the advice when AI was believed to be involved in its generation. Our findings point toward an anti-AI bias in the reception of digital medical advice, even when the AI is supposedly supervised by physicians. Given the tremendous potential of AI for medicine, elucidating ways to counteract this bias should be an important objective of future research.
Funders
Studienstiftung des Deutschen Volkes
Faculty of Humanities of the University of Wuerzburg
Pfizer Pharma GmbH
Publisher
Springer Science and Business Media LLC