Deep learning in computed tomography super resolution using multi‐modality data training

Authors:

Fok Wai Yan Ryan (1,2), Fieselmann Andreas (1), Herbst Magdalena (1), Ritschl Ludwig (1), Kappler Steffen (1), Saalfeld Sylvia (3,4)

Affiliation:

1. X‐ray Products, Siemens Healthcare GmbH, Forchheim, Germany

2. Faculty of Computer Science, Otto‐von‐Guericke University of Magdeburg, Magdeburg, Germany

3. Computational Medicine Group, Ilmenau University of Technology, Ilmenau, Germany

4. Research Campus STIMULATE, Otto‐von‐Guericke University of Magdeburg, Magdeburg, Germany

Abstract

Background: One of the limitations in leveraging the potential of artificial intelligence in X‐ray imaging is the limited availability of annotated training data. As X‐ray and CT share similar imaging physics, cross‐domain data sharing can be achieved by generating labeled synthetic X‐ray images from annotated CT volumes as digitally reconstructed radiographs (DRRs). To account for the lower resolution of CT and of CT‐generated DRRs compared to real X‐ray images, we propose the use of super‐resolution (SR) techniques to enhance the CT resolution before DRR generation.

Purpose: As spatial resolution in CT physics can be characterized by the modulation transfer function (MTF) of the reconstruction kernel, we propose to train an SR network using paired low‐resolution (LR) and high‐resolution (HR) images generated by varying the kernel's shape and cutoff frequency. This differs from previous deep learning‐based SR techniques for RGB and medical images, which focused on refining the sampling grid. Instead of generating LR images by bicubic interpolation, we aim to create realistic multi‐detector CT (MDCT)‐like LR images from HR cone‐beam CT (CBCT) scans.

Methods: We propose and evaluate an SR U‐Net for the mapping between LR and HR CBCT image slices. We reconstructed paired LR and HR training volumes from the same CT scans with a small in‐plane sampling grid size. Using the residual U‐Net architecture, we trained two models: an SRUN model trained with kernel‐based LR images, and a baseline SRUN model trained with bicubic downsampled data. Both models were trained on one CBCT dataset (n = 13 391). The performance of both models was then evaluated on unseen kernel‐based and interpolation‐based LR CBCT images (n = 10 950), as well as on MDCT images (n = 1392).

Results: Five‐fold cross‐validation and an ablation study were performed to find the optimal hyperparameters. Both SRUN models show significant improvements (p‐value < 0.05) in mean absolute error (MAE), peak signal‐to‐noise ratio (PSNR), and structural similarity index measure (SSIM) on unseen CBCT images. The improvement percentages in MAE, PSNR, and SSIM are larger for the kernel‐based model than for the bicubic baseline: for the kernel‐based model, MAE is reduced by 14%, while PSNR and SSIM increase by 6% and 8%, respectively. In summary, the kernel‐based model outperforms the baseline, generating sharper images when tested with kernel‐based LR CBCT images as well as with cross‐modality LR MDCT data.

Conclusions: Our proposed method showed better performance than the baseline interpolation approach on unseen LR CBCT images. We showed that the frequency behavior of the training data is important for learning SR features. Additionally, we showed cross‐modality resolution improvement on LR MDCT images. Our approach is, therefore, a first and essential step toward enabling realistic high‐spatial‐resolution CT‐generated DRRs for deep learning training.
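The kernel‐based LR simulation described in the abstract can be illustrated with a short sketch. This is an assumption about the general technique (damping high spatial frequencies with an MTF‐shaped filter to mimic a softer reconstruction kernel), not the authors' exact pipeline; the function name `apply_mtf_lowpass`, the Gaussian MTF model, and the cutoff parameterization are hypothetical.

```python
import numpy as np

def apply_mtf_lowpass(hr_slice, cutoff_frac=0.5):
    """Simulate an LR-like CT slice from an HR slice by damping high
    spatial frequencies with a Gaussian-shaped MTF model.

    cutoff_frac: cutoff frequency as a fraction of the Nyquist
    frequency; the MTF falls to ~10% at this frequency (a common,
    though here assumed, definition of kernel cutoff).
    """
    ny, nx = hr_slice.shape
    fy = np.fft.fftfreq(ny)[:, None]          # cycles/pixel, vertical
    fx = np.fft.fftfreq(nx)[None, :]          # cycles/pixel, horizontal
    f = np.sqrt(fx ** 2 + fy ** 2)            # radial spatial frequency
    # Gaussian MTF with MTF(f_c) = 0.1 at f_c = cutoff_frac * 0.5
    f_c = cutoff_frac * 0.5
    sigma = f_c / np.sqrt(2.0 * np.log(10.0))
    mtf = np.exp(-0.5 * (f / sigma) ** 2)     # MTF(0) = 1 preserves DC
    return np.fft.ifft2(np.fft.fft2(hr_slice) * mtf).real
```

Varying `cutoff_frac` (and the filter shape) yields a family of LR/HR training pairs with different frequency behavior, in contrast to bicubic downsampling, which only changes the sampling grid.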

Publisher

Wiley

Subject

General Medicine
