Affiliation:
1. Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, China
2. Sir Run Run Shaw Hospital, Zhejiang University School of Medicine; Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, China
3. Shenzhen Bay Laboratory, Shenzhen, China
4. School of Automation, Zhejiang Institute of Mechanical & Electrical Engineering, Hangzhou, China
Abstract
Background
Cone beam computed tomography (CBCT) plays an increasingly important role in image-guided radiation therapy. However, CBCT image quality is severely degraded by excessive scatter contamination, especially in the abdominal region, which hinders its wider application in radiation therapy.
Purpose
To restore low-quality CBCT images contaminated by scatter signals, a scatter correction algorithm combining the advantages of convolutional neural networks (CNNs) and the Swin Transformer is proposed.
Methods
In this paper, a scatter correction model for CBCT images, the Flip Swin Transformer U-shape network (FSTUNet), is proposed. The model exploits the strength of CNNs in capturing texture detail and of the Swin Transformer in modeling global correlation to accurately extract shallow and deep features, respectively. Instead of the original Swin Transformer tandem structure, we build a Flip Swin Transformer Block to achieve more powerful inter-window association extraction. The validity and clinical relevance of the method are demonstrated through extensive experiments on a Monte Carlo (MC) simulation dataset and on a frequency split dataset generated by a validated method.
Results
Experimental results on the MC simulation dataset show that the root mean square error of the corrected images is reduced from over 100 HU to about 7 HU. Both the structural similarity index measure (SSIM) and the universal quality index (UQI) are close to 1. Experimental results on the frequency split dataset demonstrate that the method not only corrects shading artifacts but also preserves a high degree of structural consistency. In addition, comparison experiments show that FSTUNet outperforms the UNet, Deep Residual Convolutional Neural Network (DRCNN), DSENet, Pix2pixGAN, and 3DUnet methods in both qualitative and quantitative metrics.
Conclusions
Accurately capturing features at different levels is greatly beneficial for reconstructing high-quality scatter-free images. The proposed FSTUNet method is an effective solution for CBCT scatter correction and has the potential to improve the accuracy of CBCT image-guided radiation therapy.
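The evaluation metrics quoted above follow standard definitions. As a minimal sketch (NumPy; the array names and test values are illustrative, not from the paper), root mean square error in Hounsfield units and the universal quality index of Wang and Bovik can be computed as:

```python
import numpy as np

def rmse_hu(pred, ref):
    """Root mean square error between two images in Hounsfield units."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def uqi(x, y):
    """Universal quality index: 1.0 indicates identical images."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2)))

# Illustrative usage with a synthetic reference image (mean ~50 HU)
rng = np.random.default_rng(0)
ref = rng.normal(50.0, 20.0, size=(64, 64))
pred = ref + 7.0  # a uniform 7 HU offset gives an RMSE of 7 HU
print(rmse_hu(pred, ref))
print(uqi(ref, ref))
```

SSIM additionally involves local window statistics and stabilizing constants, so in practice it is usually taken from a library such as scikit-image rather than reimplemented.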
Funder
Natural Science Foundation of Shandong Province
Cited by
1 article.