Information fusion for fully automated segmentation of head and neck tumors from PET and CT images

Author:

Isaac Shiri¹, Mehdi Amini¹, Fereshteh Yousefirizi², Alireza Vafaei Sadr³,⁴, Ghasem Hajianfar¹, Yazdan Salimi¹, Zahra Mansouri¹, Elnaz Jenabi⁵, Mehdi Maghsudi⁶, Ismini Mainta¹, Minerva Becker⁷, Arman Rahmim²,⁸, Habib Zaidi¹,⁹,¹⁰,¹¹

Affiliation:

1. Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland

2. Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada

3. Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany

4. Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, USA

5. Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran

6. Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran

7. Service of Radiology, Geneva University Hospital, Geneva, Switzerland

8. Department of Radiology and Physics, University of British Columbia, Vancouver, Canada

9. Geneva University Neurocenter, Geneva University, Geneva, Switzerland

10. Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands

11. Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark

Abstract

Background: PET/CT images combine anatomic and metabolic data, providing complementary information that can improve performance on clinical tasks. However, PET image segmentation algorithms that exploit this multi-modal information are still lacking.

Purpose: Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs), using conventional, deep learning (DL), and output-level voting-based fusions.

Methods: The study is based on a total of 328 histologically confirmed HNCs from six different centers. Images were automatically cropped to a 200 × 200 head and neck bounding box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture served as the DL fusion baseline, with three different information fusions at the input, layer, and decision levels. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge the different segmentation outputs (from PET and from image-level and network-level fusions), that is, output-level information fusion (voting-based fusion). Networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% from each center), was held out for final result reporting. Standard segmentation metrics and conventional PET metrics, such as SUV, were calculated.

Results: Among single modalities, PET performed reasonably, with a Dice score of 0.77 ± 0.09, whereas CT performed poorly, reaching a Dice score of only 0.38 ± 0.22. Conventional fusion algorithms obtained Dice scores in the range 0.76–0.81, with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen–Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian.

Conclusion: PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. Both conventional image-level and DL fusions achieve competitive results, while output-level voting-based fusion using majority voting across several algorithms yields statistically significant improvements in the segmentation of HNC.
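The output-level fusion described in the abstract combines the binary masks produced by several segmentation models via voxel-wise majority voting, with agreement measured by the Dice score. A minimal NumPy sketch of both operations (illustrative only; the function names `majority_vote` and `dice_score` are ours, not the paper's implementation):

```python
import numpy as np

def dice_score(pred, ref):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

def majority_vote(masks):
    """Fuse binary masks: a voxel is foreground if more than half the models agree."""
    stack = np.stack([m.astype(bool) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2)
```

STAPLE, the other output-level fusion mentioned, instead estimates per-rater sensitivity and specificity iteratively via expectation-maximization rather than counting votes equally.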

Funder

Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung

Publisher

Wiley

Subject

General Medicine

Cited by 4 articles.

1. Self-adaptive 2D 3D image fusion for automated pixel-level pavement crack detection;Automation in Construction;2024-12

2. Machine learning-based analysis of 68Ga-PSMA-11 PET/CT images for estimation of prostate tumor grade;Physical and Engineering Sciences in Medicine;2024-03-25

3. PET/MR Imaging in Head and Neck Cancer;Magnetic Resonance Imaging Clinics of North America;2023-11

4. Multi-institutional PET/CT image segmentation using federated deep transformer learning;Computer Methods and Programs in Biomedicine;2023-10
