Improving Radiology Report Generation Quality and Diversity through Reinforcement Learning and Text Augmentation
Published: 2024-04-03
Issue: 4
Volume: 11
Page: 351
ISSN: 2306-5354
Container-title: Bioengineering
Language: en
Short-container-title: Bioengineering
Author:
Daniel Parres 1, Alberto Albiol 1, Roberto Paredes 1,2
Affiliation:
1. Campus de Vera, Universitat Politècnica de València, Camí de Vera s/n, 46022 Valencia, Spain
2. Valencian Graduate School and Research Network of Artificial Intelligence, Camí de Vera s/n, 46022 Valencia, Spain
Abstract
Deep learning is transforming radiology report generation (RRG) through the adoption of vision encoder–decoder (VED) frameworks, which convert radiographs into detailed medical reports. Traditional methods, however, often generate reports of limited diversity and struggle to generalize. Our research introduces reinforcement learning and text augmentation to tackle these issues, significantly improving report quality and variability. By employing RadGraph as a reward metric and innovating in text augmentation, we surpass existing state-of-the-art results on metrics such as BLEU4, ROUGE-L, F1CheXbert, and RadGraph, setting new standards for report accuracy and diversity on the MIMIC-CXR and Open-i datasets. Our VED model achieves F1-scores of 66.2 for CheXbert and 37.8 for RadGraph on MIMIC-CXR, and 54.7 and 45.6, respectively, on Open-i. These outcomes represent a significant advance in the RRG field. The findings and implementation of the proposed approach, aimed at enhancing diagnostic precision and radiological interpretation in clinical settings, are publicly available on GitHub to encourage further advancements in the field.
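The abstract's central idea, using a report-overlap score such as RadGraph as a reinforcement-learning reward, can be illustrated with a minimal self-critical sketch. Everything below is illustrative rather than the authors' implementation: `toy_radgraph_f1` is a hypothetical token-set F1 stand-in for the real RadGraph entity/relation reward, and the advantage follows the common self-critical sequence training (SCST) form, reward(sampled report) minus reward(greedy baseline report).

```python
def toy_radgraph_f1(candidate: str, reference: str) -> float:
    """Hypothetical stand-in for the RadGraph reward: token-set F1
    between a generated report and the reference report."""
    cand, ref = set(candidate.split()), set(reference.split())
    if not cand or not ref:
        return 0.0
    tp = len(cand & ref)  # tokens shared by candidate and reference
    if tp == 0:
        return 0.0
    precision = tp / len(cand)
    recall = tp / len(ref)
    return 2 * precision * recall / (precision + recall)

def scst_advantage(sampled: str, greedy: str, reference: str,
                   reward_fn=toy_radgraph_f1) -> float:
    """Self-critical advantage: the sampled report is reinforced only
    insofar as it scores better than the model's own greedy decode."""
    return reward_fn(sampled, reference) - reward_fn(greedy, reference)

# A sampled report closer to the reference than the greedy baseline
# yields a positive advantage, so its tokens are up-weighted.
reference = "mild cardiomegaly no pleural effusion"
adv = scst_advantage("mild cardiomegaly no effusion",
                     "no acute findings", reference)
print(adv > 0)  # True: the sample beats the greedy baseline
```

In practice the advantage would scale the log-likelihood gradient of the sampled report during fine-tuning; the greedy-decode baseline keeps the estimator low-variance without a learned critic.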
Funder
Generalitat Valenciana