Author:
Omiye Jesutofunmi A., Lester Jenna, Spichak Simon, Rotemberg Veronica, Daneshjou Roxana
Abstract
Importance: Large language models (LLMs) are being integrated into healthcare systems, but these models recapitulate harmful, race-based medicine.

Objective: To assess whether four commercially available LLMs propagate harmful, inaccurate, race-based content when responding to eight scenarios that historically involved race-based medicine or widespread misconceptions around race.

Evidence Review: Questions were derived from discussion among 4 physician experts and prior work on race-based medical misconceptions among medical trainees.

Findings: We assessed four LLMs with eight questions, each posed five times, for a total of forty responses per model. All models produced examples that perpetuated race-based medicine. Models were not always consistent in their responses when asked the same question repeatedly.

Conclusions and Relevance: LLMs are being proposed for use in healthcare settings, with some models already connecting to electronic health record systems. However, our findings show that these LLMs could cause harm by perpetuating debunked, racist concepts.
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.