Author:
Tariq Amara, Fathizadeh Sam, Ramaswamy Gokul, Trivedi Shubham, Urooj Aisha, Tan Nelly, Stib Matthew T., Patel Bhavik N., Banerjee Imon
Abstract
Objective: Develop automated AI models for patient-sensitive summarization of radiology reports. A patient's level of medical education or socio-economic background may dictate how well they understand medical jargon, and an inability to understand the primary findings of a radiology report can cause unnecessary anxiety or lead to missed follow-up.
Materials and Methods: Chest computed tomography (CT) exams were selected as the use case for this study. Approximately 7,000 chest CT reports were collected from the Mayo Clinic Enterprise. The summarization model was built on the T5 large language model (LLM), whose text-to-text transfer architecture is intuitively suited to abstractive summarization, yielding a model of roughly 0.77B parameters. Noisy ground truth for model training was collected by prompting the LLaMA 13B model.
Results: Both experts (board-certified radiologists) and laymen were recruited to manually evaluate the summaries generated by the model. By the majority opinion of the radiologists, the model-generated summaries rarely missed information. Laymen reported a 63% improvement in their understanding after reading the layman summaries generated by the model. A comparative study with the zero-shot performance of LLaMA indicated that LLaMA hallucinated 3 times more often, and missed information 4 times more often, than the proposed model.
Discussion: The proposed patient-sensitive summarization model can generate summaries of radiology reports that are understandable by patients with vastly different levels of medical knowledge. In addition, task-specific training allows for more reliable performance than much larger off-the-shelf models.
Conclusions: By increasing patients' level of understanding of radiology reports, the proposed model could improve adherence to the follow-up treatment those reports suggest.
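The Materials and Methods section describes fine-tuning T5-large (~0.77B parameters) on noisy layman summaries obtained by prompting LLaMA 13B. The following is a minimal sketch of that training setup, assuming a standard Hugging Face transformers/datasets workflow; the task prefix, example report/summary pair, output path, and hyperparameters are illustrative assumptions, not the authors' actual pipeline, and the LLaMA 13B ground-truth generation step is omitted.

# Minimal sketch of the training setup described in the abstract, assuming a
# Hugging Face workflow. Prompt wording, data, and hyperparameters are
# illustrative, not the authors' actual configuration.
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)
from datasets import Dataset

# T5-large (~0.77B parameters), matching the model size cited in the abstract.
tokenizer = AutoTokenizer.from_pretrained("t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")

# Hypothetical training pair: a report plus a noisy layman summary of the kind
# that would be obtained by prompting LLaMA 13B (generation step not shown).
pairs = [
    {
        "report": "Findings: 5 mm nodule in the right upper lobe. ...",
        "summary": "A small spot was seen in the upper right lung. ...",
    },
]
dataset = Dataset.from_list(pairs)

def preprocess(example):
    # T5 is a text-to-text model, so the task is framed with a text prefix
    # (the prefix shown here is an assumption).
    model_inputs = tokenizer(
        "summarize for a patient: " + example["report"],
        max_length=512,
        truncation=True,
    )
    labels = tokenizer(example["summary"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=["report", "summary"])

# Pads inputs and labels to a common length within each batch.
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-patient-summary",   # illustrative output path
    per_device_train_batch_size=4,
    num_train_epochs=3,
    learning_rate=3e-5,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    data_collator=data_collator,
)
trainer.train()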
Publisher
Cold Spring Harbor Laboratory