Author:
Aksoy Nurbanu, Sharoff Serge, Baser Selcuk, Ravikumar Nishant, Frangi Alejandro F.
Abstract
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images. Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists. In this paper, we present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes. We introduce a conditioned cross-multi-head attention module to fuse these heterogeneous data modalities, bridging the semantic gap between visual and textual data. Experiments demonstrate substantial improvements from using additional modalities compared to relying on images alone. Notably, our model achieves the highest reported performance on the ROUGE-L metric compared to relevant state-of-the-art models in the literature. Furthermore, we employed both human evaluation and clinical semantic similarity measurement alongside word-overlap metrics to improve the depth of quantitative analysis. A human evaluation, conducted by a board-certified radiologist, confirms the model's accuracy in identifying high-level findings; however, it also highlights that further improvement is needed to capture nuanced details and clinical context.
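The abstract's central component is a cross-multi-head attention module in which features from one modality (e.g. image patches) query features from another (e.g. clinical-note embeddings). The sketch below illustrates that general mechanism in plain NumPy; it is not the authors' implementation, and the random projection matrices stand in for the learned weights a trained model would have.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, num_heads=4, seed=0):
    """Multi-head cross-attention sketch: `queries` (e.g. image-patch
    features) attend over `context` (e.g. clinical-note token embeddings).
    Both inputs are (n_tokens, d) arrays sharing the model dimension d."""
    d = queries.shape[-1]
    assert d % num_heads == 0
    dh = d // num_heads
    rng = np.random.default_rng(seed)
    # Random projections stand in for learned Wq/Wk/Wv weights here.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

    def split_heads(x, W):  # (n, d) -> (heads, n, dh)
        return (x @ W).reshape(x.shape[0], num_heads, dh).transpose(1, 0, 2)

    Q = split_heads(queries, Wq)
    K = split_heads(context, Wk)
    V = split_heads(context, Wv)
    # Scaled dot-product attention per head, then merge heads back to d.
    scores = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(dh), axis=-1)
    return (scores @ V).transpose(1, 0, 2).reshape(queries.shape[0], d)

# Fuse 5 image-patch features with 7 context-token embeddings (d = 8).
img = np.random.default_rng(1).standard_normal((5, 8))
ctx = np.random.default_rng(2).standard_normal((7, 8))
fused = cross_attention(img, ctx)
print(fused.shape)  # (5, 8): one fused vector per image query
```

In the paper's setting the fused vectors would feed a report decoder; the "conditioned" aspect of the authors' module (how structured vitals and symptoms condition the attention) is not reproduced in this generic sketch.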
Cited by
1 article.
1. A Multi-Modal Feature Fusion-Based Approach for Chest X-Ray Report Generation;2024 11th International Conference on Wireless Networks and Mobile Communications (WINCOM);2024-07-23