Authors:
Selivanov Alexander, Rogov Oleg Y., Chesakov Daniil, Shelmanov Artem, Fedulova Irina, Dylov Dmitry V.
Abstract
The proposed model for automatic clinical image caption generation combines the analysis of radiological scans with structured patient information from textual records. It uses two language models, Show-Attend-Tell and GPT-3, to generate comprehensive, descriptive radiology reports. The generated textual summary contains essential information about the pathologies found and their locations, along with 2D heatmaps that localize each pathology on the scans. The model was tested on two medical datasets, Open-I and MIMIC-CXR, as well as on the general-purpose MS-COCO, and the results, measured with natural language assessment metrics, demonstrate its applicability to chest X-ray image captioning.
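The 2D heatmaps mentioned in the abstract come from the soft-attention mechanism of Show-Attend-Tell-style captioners: at each decoding step, the decoder state scores every spatial cell of the CNN feature grid, and the softmax over those scores is both the weighting for the context vector and the localization map. The sketch below illustrates that single attention step; all shapes, names, and the dot-product scoring function are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attention_heatmap(features, hidden):
    """Illustrative soft-attention step (not the paper's exact model).

    features: (H*W, D) grid of CNN features, one row per spatial cell.
    hidden:   (D,) current decoder hidden state.
    Returns (context, weights): the attended feature vector and the
    softmax attention weights, which the caller reshapes into an HxW
    heatmap that localizes the word being generated.
    """
    scores = features @ hidden                 # (H*W,) alignment scores
    scores = scores - scores.max()             # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    context = weights @ features               # (D,) weighted sum of cells
    return context, weights

# Toy example: a 7x7 feature grid with 16-dim features and random values.
rng = np.random.default_rng(0)
feats = rng.normal(size=(49, 16))
h = rng.normal(size=16)
ctx, w = attention_heatmap(feats, h)
heatmap = w.reshape(7, 7)                      # 2D localization map
print(heatmap.shape, round(float(w.sum()), 6))
```

Because the weights sum to one, the reshaped map can be overlaid directly on the input scan as a per-word localization heatmap.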
Publisher
Springer Science and Business Media LLC
References: 69 articles.
Cited by: 29 articles.