Abstract
Objectives
Simplifying medical information to make it understandable for patients, particularly in the case of radiology reports, is challenging and requires time and effort from medical personnel. This systematic review focuses on the application of large language models (LLMs) in generating simplified radiology reports and answering patient inquiries about radiological procedures.

Materials and Methods
The authors searched for studies published up to January 2024. Search terms focused on LLM-generated simplified radiology reports and answers to patient inquiries regarding radiological procedures. MEDLINE was used as the search database.

Results
Overall, eight studies published between May 2023 and November 2023 were included. All studies showed that LLMs can produce simplified medical information for patients. Four studies (50%) used GPT-3.5, two studies (25%) conducted a comparative analysis between GPT-3.5 and GPT-4, one study (12.5%) examined Microsoft Bing, and one study (12.5%) used GPT-4. Four studies (50%) used LLMs to simplify radiology reports, and four studies (50%) used LLMs to answer patient questions about radiological procedures. Only two studies (25%) involved patients in evaluating the LLM output. One study (12.5%) compared its initial prompt with an optimized prompt. Five studies (62.5%) reported missing, inaccurate, or potentially harmful AI outputs.

Conclusion
LLMs can be used to simplify medical imaging reports and procedures to improve patient comprehension. However, their limitations cannot be ignored. Further study in this field is essential, and more conclusive evidence is needed.
Publisher
Cold Spring Harbor Laboratory