A modality-collaborative convolution and transformer hybrid network for unpaired multi-modal medical image segmentation with limited annotations

Authors:

Liu Hong1, Zhuang Yuzhou1, Song Enmin1, Xu Xiangyang1, Ma Guangzhi1, Cetinkaya Coskun2, Hung Chih-Cheng2

Affiliations:

1. School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China

2. Center for Machine Vision and Security Research, Kennesaw State University, Kennesaw, Georgia, USA

Abstract

Background: Multi-modal learning is widely adopted to learn the latent complementary information between different modalities in multi-modal medical image segmentation tasks. However, traditional multi-modal learning methods require spatially well-aligned, paired multi-modal images for supervised training and cannot leverage unpaired multi-modal images that exhibit spatial misalignment and modality discrepancy. Because unpaired multi-modal images are easily accessible and low-cost in clinical practice, unpaired multi-modal learning has recently received considerable attention as a way to train accurate multi-modal segmentation networks.

Purpose: Existing unpaired multi-modal learning methods usually focus on the intensity distribution gap but ignore the scale variation problem between different modalities. Moreover, these methods frequently employ shared convolutional kernels to capture common patterns in all modalities, yet such kernels are typically inefficient at learning global contextual information. They also rely heavily on large numbers of labeled unpaired multi-modal scans, ignoring the practical scenario in which labeled data is limited. To solve these problems, we propose a modality-collaborative convolution and transformer hybrid network (MCTHNet) that uses semi-supervised learning for unpaired multi-modal segmentation with limited annotations; it not only collaboratively learns modality-specific and modality-invariant representations but also automatically leverages extensive unlabeled scans to improve performance.

Methods: We make three main contributions. First, to alleviate the intensity distribution gap and scale variation problems across modalities, we develop a modality-specific scale-aware convolution (MSSC) module that adaptively adjusts its receptive field sizes and feature normalization parameters according to the input. Second, we propose a modality-invariant vision transformer (MIViT) module as the shared bottleneck layer for all modalities, which implicitly incorporates convolution-like local operations into the global processing of transformers to learn generalizable modality-invariant representations. Third, we design a multi-modal cross pseudo supervision (MCPS) method for semi-supervised learning, which enforces consistency between the pseudo segmentation maps generated by two perturbed networks to acquire abundant annotation information from unlabeled unpaired multi-modal scans.

Results: Extensive experiments are performed on two unpaired CT and MR segmentation datasets: a cardiac substructure dataset derived from the MMWHS-2017 dataset and an abdominal multi-organ dataset built from the BTCV and CHAOS datasets. Our method significantly outperforms existing state-of-the-art methods under various labeling ratios and, using only a small portion of labeled data, approaches the segmentation performance of single-modal methods trained with fully labeled data. Specifically, at a 25% labeling ratio, our method achieves overall mean DSC values of 78.56% and 76.18% on cardiac and abdominal segmentation, respectively, improving the average DSC of the two tasks by 12.84% compared with single-modal U-Net models.

Conclusions: Our proposed method is beneficial for reducing the annotation burden of unpaired multi-modal medical images in clinical applications.
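The abstract describes the MSSC module only at a high level (input-conditioned receptive fields plus adaptive normalization). Below is a minimal, hypothetical PyTorch sketch of one way such a module could be realized; the class name ScaleAwareConv, the dilated parallel branches, the pooling-based gate, and the per-modality InstanceNorm are all assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ScaleAwareConv(nn.Module):
    """Hypothetical sketch: parallel branches with different receptive
    fields are fused by input-dependent weights, and each modality keeps
    its own normalization parameters."""

    def __init__(self, channels: int, num_modalities: int = 2):
        super().__init__()
        # Branches with growing receptive fields via dilation (assumed).
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        # Input-conditioned branch weights from globally pooled features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(self.branches), 1),
            nn.Softmax(dim=1),
        )
        # Modality-specific normalization parameters (assumed InstanceNorm).
        self.norms = nn.ModuleList([
            nn.InstanceNorm2d(channels, affine=True)
            for _ in range(num_modalities)
        ])

    def forward(self, x: torch.Tensor, modality: int) -> torch.Tensor:
        w = self.gate(x)                                   # (N, B, 1, 1)
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (N, B, C, H, W)
        fused = (w.unsqueeze(2) * feats).sum(dim=1)        # (N, C, H, W)
        return self.norms[modality](fused)
```

The gate weights each branch per input, so the effective receptive field adapts to the scale of the structure being segmented, while separate affine normalization parameters per modality target the intensity distribution gap.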
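Likewise, the MIViT module is described only as a shared bottleneck that couples convolution-like local operations with transformer-style global processing. Here is a minimal sketch of such a hybrid block, assuming a depthwise convolution for the local path and standard multi-head self-attention over flattened spatial tokens; the paper's exact design may differ.

```python
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    """Hypothetical conv-transformer hybrid block: a local depthwise
    convolution followed by global self-attention over spatial tokens.
    `dim` must be divisible by `heads`."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.local = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # local mixing
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.local(x)                      # convolution-like local path
        n, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (N, H*W, C)
        t = self.norm(tokens)
        attn_out, _ = self.attn(t, t, t)           # global processing
        tokens = tokens + attn_out
        return tokens.transpose(1, 2).reshape(n, c, h, w)
```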
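Finally, the MCPS method enforces consistency between the pseudo segmentation maps of two perturbed networks. The sketch below follows the general cross pseudo supervision scheme consistent with that description; the use of hard (argmax) pseudo-labels with a cross-entropy loss and the function name cps_loss are assumptions.

```python
import torch
import torch.nn.functional as F

def cps_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Cross pseudo supervision on an unlabeled batch.

    logits_a, logits_b: (N, C, H, W) outputs of two differently
    initialized/perturbed networks on the same unlabeled scans.
    Each network's hard pseudo-label supervises the other network.
    """
    pseudo_a = logits_a.argmax(dim=1).detach()    # pseudo map from network A
    pseudo_b = logits_b.argmax(dim=1).detach()    # pseudo map from network B
    loss_a = F.cross_entropy(logits_a, pseudo_b)  # B supervises A
    loss_b = F.cross_entropy(logits_b, pseudo_a)  # A supervises B
    return loss_a + loss_b
```

In training, this unsupervised consistency term would typically be added, with a weighting factor, to the ordinary supervised loss computed on the labeled portion of the unpaired multi-modal scans.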

Publisher

Wiley

Subject

General Medicine

Cited by

1 article