A Cross-Domain Weakly Supervised Diabetic Retinopathy Lesion Identification Method Based on Multiple Instance Learning and Domain Adaptation
Published: 2023-09-20
Volume: 10, Issue: 9, Page: 1100
ISSN: 2306-5354
Container-title: Bioengineering
Short-container-title: Bioengineering
Language: en
Author:
Li Renyu 1, Gu Yunchao 1,2,3, Wang Xinliang 1, Pan Junjun 1 (ORCID)
Affiliation:
1. State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
2. Hangzhou Innovation Institute, Beihang University, Hangzhou 310051, China
3. Research Unit of Virtual Body and Virtual Surgery Technologies, Chinese Academy of Medical Sciences, 2019RU004, Beijing 100191, China
Abstract
Accurate identification of lesions, and the ability to reuse that capability across different medical institutions, are the foundation of and key to the clinical application of automatic diabetic retinopathy (DR) detection. Existing detection and segmentation methods achieve acceptable results in DR lesion identification, but they rely heavily on large numbers of fine-grained annotations that are not easily accessible and suffer severe performance degradation in cross-domain applications. In this paper, we propose a cross-domain weakly supervised DR lesion identification method that uses only easily accessible coarse-grained lesion attribute labels. We first propose a novel lesion-patch multiple instance learning method (LpMIL), which leverages the lesion attribute label for patch-level supervision to perform weakly supervised lesion identification. We then design a semantic constraint adaptation method (LpSCA) that improves the lesion identification performance of our model in different domains through a semantic constraint loss. Finally, we perform secondary annotation on the open-source dataset EyePACS to obtain EyePACS-pixel, the largest fine-grained annotated dataset, and validate our model's performance on it. Extensive experimental results on the public dataset FGADR and our EyePACS-pixel demonstrate that, compared with existing detection and segmentation methods, the proposed method identifies lesions accurately and comprehensively and obtains competitive results using only coarse-grained annotations.
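The core idea behind LpMIL, as summarized in the abstract, is to supervise patch-level predictions using only image-level lesion attribute labels via multiple instance learning. The abstract does not give architectural details, so the sketch below is a generic illustration of that idea under standard MIL assumptions: max-pooling aggregation, the function names, and the loss are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lesion_patch_mil(patch_scores, threshold=0.5):
    """Aggregate per-patch lesion scores into image-level predictions.

    patch_scores: array of shape (num_patches, num_lesion_types) holding
    per-patch probabilities for each lesion attribute (e.g. microaneurysm,
    hemorrhage). Under the standard MIL assumption, an image is positive
    for an attribute if at least one of its patches is, so we max-pool
    over the patch axis.
    """
    image_scores = patch_scores.max(axis=0)  # shape: (num_lesion_types,)
    return image_scores, image_scores >= threshold

def attribute_bce_loss(image_scores, attribute_labels, eps=1e-7):
    """Binary cross-entropy between pooled scores and the coarse-grained
    image-level lesion attribute labels (the only supervision available)."""
    p = np.clip(image_scores, eps, 1.0 - eps)
    y = attribute_labels
    return float(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)).mean())

# Toy example: two patches scored for two lesion attributes.
patch_scores = np.array([[0.10, 0.90],
                         [0.80, 0.20]])
image_scores, image_preds = lesion_patch_mil(patch_scores)
loss = attribute_bce_loss(image_scores, np.array([1.0, 1.0]))
```

Because the gradient of a max-pooled score flows back only through the highest-scoring patch, training against image-level labels pushes individual patch predictions toward those labels, which is what allows patch-level lesion localization to emerge from coarse annotations.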
Funder
Technological Innovation 2030—“New Generation Artificial Intelligence” Major Project; CAMS Innovation Fund for Medical Sciences
References: 31 articles.