Abstract
Artificial intelligence (AI), especially the most recent large language models (LLMs), holds great promise in healthcare and medicine, with applications spanning from biological scientific discovery and clinical patient care to public health policymaking. However, AI methods raise critical concerns about generating factually incorrect or unfaithful information, posing potential long-term risks, ethical issues, and other serious consequences. This review aims to provide a comprehensive overview of the faithfulness problem in existing research on AI in healthcare and medicine, with a focus on analyzing the causes of unfaithful results, evaluation metrics, and mitigation methods. We systematically reviewed recent progress in optimizing factuality across various generative medical AI methods, including knowledge-grounded LLMs, text-to-text generation, multimodality-to-text generation, and automatic medical fact-checking tasks. We further discuss the challenges and opportunities of ensuring the faithfulness of AI-generated information in these applications. We expect that this review will assist researchers and practitioners in understanding the faithfulness problem in AI-generated information in healthcare and medicine, as well as the recent progress and challenges in related research. Our review can also serve as a guide for researchers and practitioners interested in applying AI in medicine and healthcare.
Publisher
Cold Spring Harbor Laboratory
Cited by
6 articles.