Reducing annotation burden in MR: A novel MR‐contrast guided contrastive learning approach for image segmentation

Authors:

Lavanya Umapathy (1,2,3; ORCID), Taylor Brown (2,4), Raza Mushtaq (2,4), Mark Greenhill (2,4), J'rick Lu (2,4), Diego Martin (5,6), Maria Altbach (2,6), Ali Bilgin (1,2,6,7; ORCID)

Affiliations:

1. Department of Electrical and Computer Engineering University of Arizona Tucson Arizona USA

2. Department of Medical Imaging University of Arizona Tucson Arizona USA

3. Department of Radiology Center for Advanced Imaging Innovation and Research (CAI2R) New York University Grossman School of Medicine New York New York USA

4. College of Medicine University of Arizona Tucson Arizona USA

5. Department of Radiology Houston Methodist Hospital Houston Texas USA

6. Department of Biomedical Engineering University of Arizona Tucson Arizona USA

7. Program in Applied Mathematics University of Arizona Tucson Arizona USA

Abstract

Background

Contrastive learning, a successful form of representational learning, has shown promising results in pretraining deep learning (DL) models for downstream tasks. When working with limited annotated data, as in medical image segmentation tasks, learning domain-specific local representations can further improve the performance of DL models.

Purpose

In this work, we extend the contrastive learning framework to utilize domain-specific contrast information from unlabeled magnetic resonance (MR) images to improve the performance of downstream MR image segmentation tasks in the presence of limited labeled data.

Methods

The contrast in MR images is controlled by underlying tissue properties (e.g., T1 or T2) and image acquisition parameters. We hypothesize that learning to discriminate local representations based on underlying tissue properties should improve subsequent segmentation tasks on MR images. We propose a novel constrained contrastive learning (CCL) strategy that uses tissue-specific information via a constraint map to define positive and negative local neighborhoods for contrastive learning, embedding this information in the representational space during pretraining. For a given MR contrast image, the proposed strategy uses local signal characteristics (the constraint map) across a set of related multi-contrast MR images as a surrogate for underlying tissue information. We demonstrate the utility of the approach for two downstream applications: (1) multi-organ segmentation in T2-weighted images, where a DL model learns T2 information with constraint maps derived from a set of 2D multi-echo T2-weighted images (n = 101), and (2) tumor segmentation in multi-parametric images from the public brain tumor segmentation (BraTS) dataset (n = 80), where DL models learn T1 and T2 information from multi-parametric BraTS images. Performance is evaluated on downstream multi-label segmentation tasks with limited data in (1) T2-weighted abdominal images from an in-house Radial-T2 dataset (Train/Test = 30/20), (2) a public Cartesian-T2 dataset (Train/Test = 6/12), and (3) multi-parametric MR images from the public BraTS dataset (Train/Test = 40/50). The performance of the proposed CCL strategy is compared to state-of-the-art self-supervised contrastive learning techniques. In each task, a model is also trained using all available labeled data to establish a supervised baseline.

Results

The proposed CCL strategy consistently yielded improved Dice scores, precision, and recall, and reduced HD95 values across all segmentation tasks. We also observed performance comparable to the baseline with reduced annotation effort. The t-SNE visualization of features for T2-weighted images demonstrates the ability of the approach to embed T2 information in the representational space. On the BraTS dataset, we also observed that using an appropriate multi-contrast space to learn T1+T2, T1, or T2 information during pretraining further improved the performance of tumor segmentation tasks.

Conclusions

Learning to embed tissue-specific information that controls MR image contrast with the proposed constrained contrastive learning improved the performance of DL models on subsequent segmentation tasks compared to conventional self-supervised contrastive learning techniques. The use of such domain-specific local representations could help understand and improve performance, and mitigate the scarcity of labeled data, in MR image segmentation tasks.
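
To make the Methods description concrete, the following is a minimal PyTorch sketch of a constrained local contrastive loss in which a constraint map gates which spatial locations form positive and negative pairs. This is not the authors' implementation; the function name ccl_loss and the parameters t2_threshold and temperature are illustrative assumptions.

# A minimal sketch of a constrained local contrastive (CCL-style) loss,
# NOT the authors' released code: a constraint map (e.g., a voxel-wise T2
# surrogate computed from multi-echo images, scaled to [0, 1]) decides which
# spatial locations act as positives and which as negatives.
import torch
import torch.nn.functional as F

def ccl_loss(features, constraint_map, t2_threshold=0.1, temperature=0.1):
    """features:       (N, C, H, W) local embeddings from the encoder
       constraint_map: (N, 1, H, W) tissue-property surrogate in [0, 1]"""
    n, c, h, w = features.shape
    # Flatten spatial locations and L2-normalize the local embeddings.
    z = F.normalize(features.permute(0, 2, 3, 1).reshape(n, h * w, c), dim=-1)
    t = constraint_map.reshape(n, h * w)

    # Pairwise embedding similarities and pairwise constraint differences.
    sim = torch.bmm(z, z.transpose(1, 2)) / temperature      # (N, HW, HW)
    d_t = (t.unsqueeze(2) - t.unsqueeze(1)).abs()             # (N, HW, HW)

    # Positives: location pairs whose constraint values are close (similar
    # underlying tissue). Self-pairs are excluded; other pairs are negatives.
    self_mask = torch.eye(h * w, device=features.device).unsqueeze(0)
    pos_mask = (d_t < t2_threshold).float() * (1.0 - self_mask)
    sim = sim.masked_fill(self_mask.bool(), -1e9)             # drop self-similarity

    # InfoNCE-style log-probabilities, averaged over each location's positives.
    log_prob = sim - torch.logsumexp(sim, dim=2, keepdim=True)
    loss = -(pos_mask * log_prob).sum(dim=2) / pos_mask.sum(dim=2).clamp(min=1.0)
    return loss.mean()

In practice the HW x HW pairwise matrices are only tractable for small feature maps or a subsampled set of locations; the sketch is meant to convey how the constraint map defines positive and negative neighborhoods, not to serve as a full-resolution implementation.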

Funder

National Institutes of Health

Arizona Biomedical Research Commission

Publisher

Wiley

Subject

General Medicine
