Affiliation:
1. Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
2. Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
Abstract
Background: Deep learning models require large-scale training data to perform reliably, but obtaining annotated datasets in medical imaging is challenging. Weak annotation has emerged as a way to save time and effort.
Purpose: To develop a deep learning model for 3D breast cancer segmentation in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) using weak annotation, with reliable performance.
Study Type: Retrospective.
Population: Seven hundred and thirty-six women with breast cancer from a single institution, divided into a development dataset (N = 544) and a test dataset (N = 192).
Field Strength/Sequence: 3.0 T; 3D fat-saturated gradient-echo axial T1-weighted FLASH 3D volumetric interpolated breath-hold examination (VIBE) sequences.
Assessment: Two radiologists performed weak annotation of the ground truth using bounding boxes. Based on this, the ground truth annotation was completed through automatic and manual correction. A deep learning model based on the 3D U-Net transformer (UNETR) was trained on this annotated dataset. Segmentation results on the test set were analyzed quantitatively and qualitatively, with the evaluated regions divided into the whole breast and the region of interest (ROI) within the bounding box.
Statistical Tests: The Dice similarity coefficient was used as the quantitative measure of segmentation performance. Volume agreement with the ground truth was evaluated with the Spearman correlation coefficient. Qualitatively, three readers independently assigned visual scores on a four-point scale. A P-value <0.05 was considered statistically significant.
Results: The proposed model achieved median Dice similarity coefficients of 0.75 and 0.89 for the whole breast and the ROI, respectively. The volume correlation coefficients with respect to the ground truth volume were 0.82 and 0.86 for the whole breast and the ROI, respectively. The mean visual score across the three readers was 3.4.
Data Conclusion: The proposed deep learning model trained with weak annotation may show good performance for 3D segmentation of breast cancer on DCE-MRI.
Level of Evidence: 3
Technical Efficacy: Stage 2
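The abstract names the 3D U-Net transformer (UNETR) as the segmentation backbone but does not report its configuration. As a minimal sketch only, a 3D UNETR of this kind can be instantiated with the open-source MONAI library; the patch size, channel counts, and other hyperparameters below are illustrative assumptions, not the authors' settings.

```python
import torch
from monai.networks.nets import UNETR

# Assumed configuration -- the abstract does not specify patch size or channels.
model = UNETR(
    in_channels=1,          # single DCE-MRI input channel (assumption)
    out_channels=2,         # background vs. breast-cancer foreground (assumption)
    img_size=(96, 96, 96),  # training patch size (assumption)
    feature_size=16,
    hidden_size=768,
    mlp_dim=3072,
    num_heads=12,
)

# Forward pass on a dummy batch to confirm the output shape.
x = torch.randn(1, 1, 96, 96, 96)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 2, 96, 96, 96])
```

In practice the logits would be argmax-ed into a binary tumor mask and evaluated against the ground truth, as outlined in the next sketch.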
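The evaluation metrics cited in the abstract (Dice similarity coefficient and Spearman volume correlation) are standard. The following sketch, not the authors' code, shows how they can be computed from binary NumPy masks and per-case lesion volumes; the example arrays are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

# Hypothetical example volumes (the paper's data are not public).
pred_mask = np.zeros((64, 64, 64), dtype=bool)
gt_mask = np.zeros((64, 64, 64), dtype=bool)
pred_mask[20:40, 20:40, 20:40] = True
gt_mask[22:42, 22:42, 22:42] = True
print("Dice:", dice_coefficient(pred_mask, gt_mask))

# Volume agreement across a set of cases (illustrative values only).
pred_volumes = [1.2, 3.4, 0.8, 2.1]  # e.g., predicted lesion volumes in cm^3
gt_volumes = [1.0, 3.6, 0.9, 2.0]    # corresponding ground-truth volumes
rho, p_value = spearmanr(pred_volumes, gt_volumes)
print(f"Spearman rho={rho:.2f}, p={p_value:.3f}")
```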
Subject
Radiology, Nuclear Medicine and imaging
Cited by
4 articles.