Affiliation:
1. Department of Dermatology, University of Michigan, Ann Arbor, MI, USA
Abstract
Background
ChatGPT is a free artificial intelligence (AI)-based natural language processing tool that generates complex responses to user inputs.
Objectives
To determine whether ChatGPT can generate high-quality responses to patient-submitted questions in the patient portal.
Methods
Patient-submitted questions and the corresponding responses from their dermatology physician were extracted from the electronic medical record for analysis. The questions were input into ChatGPT (version 3.5) and the outputs extracted for analysis, with manual removal of verbiage pertaining to ChatGPT’s inability to provide medical advice. Ten blinded reviewers (seven physicians and three nonphysicians) rated the physician- and ChatGPT-generated responses and indicated their preference in terms of ‘overall quality’, ‘readability’, ‘accuracy’, ‘thoroughness’ and ‘level of empathy’.
Results
Thirty-one messages and responses were analysed. Physician-generated responses were vastly preferred over the ChatGPT-generated responses by both physician and nonphysician reviewers and received significantly higher ratings for ‘readability’ and ‘level of empathy’.
Conclusions
The results of this study suggest that physician-generated responses to patients’ portal messages are still preferred over those generated by ChatGPT, but generative AI tools may be helpful in generating first drafts of responses and providing information on educational resources for patients.
Publisher
Oxford University Press (OUP)
Cited by
4 articles.