Abstract
Background
Many myths regarding Alzheimer's disease (AD) circulate on the Internet, each exhibiting varying degrees of accuracy, inaccuracy, and misinformation. Large language models (LLMs) such as ChatGPT may be a useful tool for assessing the veracity of these myths; however, they can also introduce misinformation. The objective of this study was to assess ChatGPT's ability to identify and address AD myths with reliable information.

Methods
We conducted a cross-sectional study of clinicians' evaluations of ChatGPT (GPT-4.0) responses to 20 selected AD myths. We prompted ChatGPT to express its opinion on each myth and then asked it to rephrase its explanation in simplified language that could be more readily understood by individuals with a middle school education. We administered a survey in REDCap to determine the degree to which clinicians agreed with the accuracy of each ChatGPT explanation and the degree to which the simplified rewriting was readable and retained the message of the original. We also collected their explanations of any disagreement with ChatGPT's responses. We used a five-point Likert-type scale, with scores ranging from -2 to 2, to quantify clinicians' agreement on each aspect of the evaluation.

Results
The clinicians (n=11) were generally satisfied with ChatGPT's explanations, with a mean (SD) score of 1.0 (0.3) across the 20 myths. While ChatGPT correctly identified all 20 myths as inaccurate, some clinicians disagreed with its explanations for 7 of them. Overall, 9 of the 11 professionals either agreed or strongly agreed that ChatGPT has the potential to provide meaningful explanations of certain myths.

Conclusions
The majority of surveyed healthcare professionals acknowledged the potential value of ChatGPT in mitigating AD misinformation.
However, they also highlighted the need for more refined and detailed explanations of the disease's mechanisms and treatments.

Impact Statement
Many statements regarding Alzheimer's disease (AD) diagnosis, management, and treatment circulate on the Internet, each exhibiting varying degrees of accuracy, inaccuracy, and misinformation. Large language models are currently a popular topic, and many patients and caregivers may turn to LLMs such as ChatGPT to learn more about the disease. This study aims to assess ChatGPT's ability to identify and address AD myths with reliable information. We certify that this work is novel.

Key Points
- Geriatricians acknowledged the potential value of ChatGPT in mitigating misinformation about Alzheimer's disease.
- There remain nuanced cases where ChatGPT's explanations are not as refined or appropriate.
- Why does this matter? Large language models such as ChatGPT are widely popular, and patients and caregivers may use them to learn about their disease. This paper seeks to determine whether ChatGPT does an appropriate job of moderating understanding of Alzheimer's disease myths.
Publisher
Cold Spring Harbor Laboratory