Collaborative Learning for Annotation‐Efficient Volumetric MR Image Segmentation

Authors:

Osman Yousuf Babiker M. (1,2; ORCID), Li Cheng (1), Huang Weijian (1,2,3), Wang Shanshan (1,3)

Affiliations:

1. Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China

2. University of Chinese Academy of Sciences, Beijing, China

3. Peng Cheng Laboratory, Shenzhen, China

Abstract

Background: Deep learning has shown great potential for accurate MR image segmentation when enough labeled data are provided for network optimization. However, manually annotating three-dimensional (3D) MR images is tedious and time-consuming, requiring experts with rich domain knowledge and experience.

Purpose: To build a deep learning method exploiting sparse annotations, namely only a single two-dimensional slice label for each 3D training MR image.

Study Type: Retrospective.

Population: Three-dimensional MR images of 150 subjects from two publicly available datasets were included. Among them, 50 (1377 image slices) are for prostate segmentation and the other 100 (8800 image slices) are for left atrium segmentation. Five-fold cross-validation experiments were carried out on the first dataset. For the second dataset, 80 subjects were used for training and 20 for testing.

Field Strength/Sequence: 1.5 T and 3.0 T; axial T2-weighted and late gadolinium-enhanced, 3D respiratory-navigated, inversion-recovery-prepared gradient echo pulse sequences.

Assessment: A collaborative learning method integrating the strengths of semi-supervised and self-supervised learning schemes was developed. The method was trained using labeled central slices and unlabeled noncentral slices. Segmentation performance on the testing set was reported quantitatively and qualitatively.

Statistical Tests: Quantitative evaluation metrics including boundary intersection-over-union (B-IoU), Dice similarity coefficient, average symmetric surface distance, and relative absolute volume difference were calculated. Paired t tests were performed, and P < 0.05 was considered statistically significant.

Results: Compared to fully supervised training with only the labeled central slice, mean teacher, uncertainty-aware mean teacher, deep co-training, interpolation consistency training (ICT), and ambiguity-consensus mean teacher, the proposed method achieved a substantial improvement in segmentation accuracy, increasing the mean B-IoU significantly by more than 10.0% for prostate segmentation (proposed method B-IoU: 70.3% ± 7.6% vs. ICT B-IoU: 60.3% ± 11.2%) and by more than 6.0% for left atrium segmentation (proposed method B-IoU: 66.1% ± 6.8% vs. ICT B-IoU: 60.1% ± 7.1%).

Data Conclusion: A collaborative learning method trained using sparse annotations can segment the prostate and left atrium with high accuracy.

Level of Evidence: 0

Technical Efficacy: Stage 1
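The abstract names boundary intersection-over-union (B-IoU) and the Dice similarity coefficient as the primary quantitative metrics. The sketch below is a minimal, illustrative implementation of these two metrics for binary 2D masks, assuming NumPy/SciPy; the function names and the boundary band width `d` are illustrative choices, not the authors' evaluation code.

```python
# Illustrative sketch (not the paper's code): Dice similarity coefficient and a
# simple boundary IoU (B-IoU) between predicted and ground-truth binary masks.
import numpy as np
from scipy.ndimage import binary_erosion


def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))


def boundary_iou(pred: np.ndarray, gt: np.ndarray, d: int = 2, eps: float = 1e-8) -> float:
    """IoU restricted to a band of width `d` pixels along each mask's contour
    (the mask minus its d-fold morphological erosion)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    pred_band = pred & ~binary_erosion(pred, iterations=d)
    gt_band = gt & ~binary_erosion(gt, iterations=d)
    inter = np.logical_and(pred_band, gt_band).sum()
    union = np.logical_or(pred_band, gt_band).sum()
    return float(inter / (union + eps))


if __name__ == "__main__":
    # Toy example: the "prediction" is a slightly shifted square.
    gt = np.zeros((64, 64), dtype=bool)
    gt[16:48, 16:48] = True
    pred = np.zeros_like(gt)
    pred[18:50, 16:48] = True
    print(f"Dice  = {dice_coefficient(pred, gt):.3f}")
    print(f"B-IoU = {boundary_iou(pred, gt):.3f}")
```

The other two reported metrics, average symmetric surface distance and relative absolute volume difference, can be computed analogously from the surface point sets and the voxel counts of the two masks.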

Funder

National Natural Science Foundation of China

Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province

Publisher

Wiley

Subject

Radiology, Nuclear Medicine and Imaging
