SwinCross: Cross‐modal Swin transformer for head‐and‐neck tumor segmentation in PET/CT images

Author:

Gary Y. Li1, Junyu Chen2,3, Se‐In Jang1, Kuang Gong1, Quanzheng Li1

Affiliation:

1. Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, Massachusetts, USA

2. The Russell H. Morgan Department of Radiology and Radiological Science, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA

3. Department of Electrical and Computer Engineering, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA

Abstract

Background: Radiotherapy (RT) combined with cetuximab is the standard treatment for patients with inoperable head and neck cancers. Segmentation of head and neck (H&N) tumors is a prerequisite for radiotherapy planning but a time-consuming process. In recent years, deep convolutional neural networks (DCNNs) have become the de facto standard for automated image segmentation. However, because enlarging the field of view in DCNNs is computationally expensive, their ability to model long-range dependencies remains limited, which can result in sub-optimal segmentation of objects whose background context spans long distances. Transformer models, by contrast, have demonstrated excellent capability in capturing such long-range information in several semantic segmentation tasks on medical images.

Purpose: Despite the impressive representation capacity of vision transformer models, current transformer-based segmentation models still produce inconsistent and incorrect dense predictions when fed multi-modal input data. We suspect that the power of their self-attention mechanism is limited in extracting the complementary information present in multi-modal data. To this end, we propose a novel segmentation model, dubbed Cross-modal Swin Transformer (SwinCross), with a cross-modal attention (CMA) module that incorporates cross-modal feature extraction at multiple resolutions.

Methods: We propose a novel architecture for cross-modal 3D semantic segmentation with two main components: (1) a cross-modal 3D Swin Transformer that integrates information from multiple modalities (PET and CT), and (2) a cross-modal shifted-window attention block that learns complementary information from the two modalities. To evaluate the efficacy of our approach, we conducted experiments and ablation studies on the HECKTOR 2021 challenge dataset. We compared our method against nnU-Net (the backbone of the top-5 methods in HECKTOR 2021) and other state-of-the-art transformer-based models, including UNETR and Swin UNETR. The experiments used a five-fold cross-validation setup with PET and CT images.

Results: Empirical evidence demonstrates that the proposed method consistently outperforms the comparison techniques. This success is attributed to the CMA module's capacity to enhance inter-modality feature representations between PET and CT during head-and-neck tumor segmentation. Notably, SwinCross consistently surpasses Swin UNETR across all five folds, showcasing its proficiency in learning multi-modal feature representations at varying resolutions through the cross-modal attention modules.

Conclusions: We introduced a cross-modal Swin Transformer for automating the delineation of head and neck tumors in PET and CT images. The model incorporates a cross-modality attention module that enables the exchange of features between modalities at multiple resolutions. The experimental results establish the superiority of our method in capturing improved inter-modality correlations between PET and CT for head-and-neck tumor segmentation. The proposed methodology is also applicable to other semantic segmentation tasks involving different imaging modalities, such as SPECT/CT or PET/MRI.

Code: https://github.com/yli192/SwinCross_CrossModalSwinTransformer_for_Medical_Image_Segmentation
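The abstract describes a cross-modal attention (CMA) module in which features from one modality attend to features from the other. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch: queries come from one modality's token sequence while keys and values come from the other, with residual connections preserving the modality-specific features. All class and variable names here are assumptions for illustration only; the authors' actual implementation is in the linked repository.

```python
# Minimal sketch of bidirectional cross-modal attention between PET and CT
# feature tokens. This is NOT the authors' implementation; it only illustrates
# the query-from-one-modality, key/value-from-the-other pattern.
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Each modality's tokens attend to the other modality's tokens."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn_a_to_b = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_b_to_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_b = nn.LayerNorm(dim)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # feat_a, feat_b: (batch, tokens, dim) token sequences from the two branches
        a, b = self.norm_a(feat_a), self.norm_b(feat_b)
        # Modality A queries modality B, and vice versa; residual connections
        # keep each branch's original features alongside the fused ones.
        a_out, _ = self.attn_a_to_b(query=a, key=b, value=b)
        b_out, _ = self.attn_b_to_a(query=b, key=a, value=a)
        return feat_a + a_out, feat_b + b_out


if __name__ == "__main__":
    # Toy usage: 64 tokens per modality with 96-dim embeddings.
    pet_tokens = torch.randn(2, 64, 96)
    ct_tokens = torch.randn(2, 64, 96)
    cma = CrossModalAttention(dim=96, num_heads=4)
    pet_fused, ct_fused = cma(pet_tokens, ct_tokens)
    print(pet_fused.shape, ct_fused.shape)  # torch.Size([2, 64, 96]) twice
```

In the paper's design, such cross-modal attention is applied within shifted windows and at multiple encoder resolutions; this sketch uses plain global multi-head attention purely to keep the example short.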

Funder

National Institutes of Health

Publisher

Wiley

Subject

General Medicine

Cited by 2 articles.
