Abstract
Purpose
This study proposed a novel retrospective motion reduction method, the motion artifact unsupervised disentanglement generative adversarial network (MAUDGAN), that reduces motion artifacts in brain images with tumors and metastases. The MAUDGAN was trained on multimodal, multicenter 3D T1-Gd and T2-fluid-attenuated inversion recovery MRI images.

Approach
Motion artifacts with different severity levels were simulated in k-space for the 3D T1-Gd MRI images. The MAUDGAN consisted of two generators, two discriminators, and two feature-extractor networks constructed from residual blocks. The generators mapped images from content space to artifact space and vice versa, while the discriminators attempted to discriminate the content codes in order to learn the motion-free and motion-corrupted content spaces.

Results
We compared the MAUDGAN with CycleGAN and Pix2pix-GAN. Qualitatively, the MAUDGAN removed the motion artifacts with the highest level of soft-tissue contrast without adding spatial or frequency distortions. Quantitatively, we reported six metrics: normalized mean squared error (NMSE), structural similarity index (SSIM), multi-scale structural similarity index (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multi-scale gradient magnitude similarity deviation (MS-GMSD). The MAUDGAN achieved the lowest NMSE and MS-GMSD. On average, the proposed MAUDGAN reconstructed motion-free images with the highest SSIM, PSNR, and VIF values and comparable MS-SSIM values.

Conclusions
The MAUDGAN can disentangle motion artifacts from the 3D T1-Gd dataset under a multimodal framework. The motion reduction will improve automatic and manual post-processing algorithms, including auto-segmentation, registration, and contouring for guided therapies such as radiotherapy and surgery.
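As an illustration of the full-reference image-quality metrics reported in the abstract, the sketch below computes NMSE, PSNR, and SSIM for a synthetic reference/corrupted image pair using scikit-image. The array size, noise level, and `nmse` helper are illustrative assumptions, not the paper's implementation; VIF and MS-GMSD are omitted because scikit-image does not provide them.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def nmse(reference, test):
    # Normalized mean squared error: ||ref - test||^2 / ||ref||^2
    return np.sum((reference - test) ** 2) / np.sum(reference ** 2)

# Synthetic stand-ins for a motion-free slice and a motion-corrupted one
# (real evaluation would use paired motion-free / corrected MRI slices).
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
corrupted = reference + 0.05 * rng.standard_normal((64, 64))

print("NMSE:", nmse(reference, corrupted))
print("PSNR:", peak_signal_noise_ratio(reference, corrupted, data_range=1.0))
print("SSIM:", structural_similarity(reference, corrupted, data_range=1.0))
```

Lower NMSE (and MS-GMSD) values indicate better reconstructions, while higher PSNR, SSIM, MS-SSIM, and VIF values do, which is why the abstract reports "lowest" for the first group and "highest" for the second.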
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.