High‐fidelity direct contrast synthesis from magnetic resonance fingerprinting

Authors:

Ke Wang (1,2), Mariya Doneva (3), Jakob Meineke (3), Thomas Amthor (3), Ekin Karasan (1), Fei Tan (4), Jonathan I. Tamir (5), Stella X. Yu (1,2,6), Michael Lustig (1)

Affiliation:

1. Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA

2. International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA

3. Philips Research Europe, Hamburg, Germany

4. Bioengineering, UC Berkeley-UCSF, San Francisco, California, USA

5. Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA

6. Computer Science and Engineering, University of Michigan, Ann Arbor, Michigan, USA

Abstract

Purpose: This work proposes a supervised learning-based method that directly synthesizes contrast-weighted images from magnetic resonance fingerprinting (MRF) data without performing quantitative mapping or spin-dynamics simulations.

Methods: To implement our direct contrast synthesis (DCS) method, we deploy a conditional generative adversarial network (GAN) framework with a multi-branch U-Net as the generator and a multilayer CNN (PatchGAN) as the discriminator. We refer to the proposed approach as N-DCSNet. The input MRF data are used to directly synthesize T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) images through supervised training on paired MRF and target spin echo-based contrast-weighted scans. The performance of the proposed method is demonstrated on in vivo MRF scans from healthy volunteers. Quantitative metrics, including normalized root mean square error (nRMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), learned perceptual image patch similarity (LPIPS), and Fréchet inception distance (FID), were used to evaluate the proposed method and compare it with others.

Results: In vivo experiments demonstrated excellent image quality compared with simulation-based contrast synthesis and previous DCS methods, both visually and according to quantitative metrics. We also demonstrate cases in which the trained model mitigates the in-flow and spiral off-resonance artifacts typically seen in MRF reconstructions, and thus more faithfully represents conventional spin echo-based contrast-weighted images.

Conclusion: We present N-DCSNet to directly synthesize high-fidelity multicontrast MR images from a single MRF acquisition. This method can significantly decrease examination time. By directly training a network to generate contrast-weighted images, our method does not require any model-based simulation and therefore avoids reconstruction errors due to dictionary matching and contrast simulation (code available at: https://github.com/mikgroup/DCSNet).
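To make the Methods description concrete, the following is a minimal pix2pix-style sketch of the conditional-GAN setup the abstract describes: a generator mapping MRF time-series channels to three contrast-weighted images, and a PatchGAN discriminator judging (input, output) pairs. It is written against PyTorch; the channel counts, network depths, loss weights, and the stand-in for the multi-branch U-Net are illustrative assumptions, not the released DCSNet configuration.

```python
# Hedged sketch of an N-DCSNet-like conditional GAN (pix2pix-style objective).
# All sizes and hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn

MRF_CHANNELS = 500   # assumed number of MRF time points / compressed channels
N_CONTRASTS = 3      # T1-weighted, T2-weighted, FLAIR


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )


class SimpleGenerator(nn.Module):
    """Stand-in for the multi-branch U-Net: a shared encoder followed by one
    small decoder branch per target contrast."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(MRF_CHANNELS, 64), conv_block(64, 64))
        self.branches = nn.ModuleList([
            nn.Sequential(conv_block(64, 32), nn.Conv2d(32, 1, kernel_size=1))
            for _ in range(N_CONTRASTS)
        ])

    def forward(self, x):
        feat = self.encoder(x)
        return torch.cat([branch(feat) for branch in self.branches], dim=1)


class PatchDiscriminator(nn.Module):
    """PatchGAN: a small CNN whose output is a map of per-patch real/fake logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(MRF_CHANNELS + N_CONTRASTS, 64),
            conv_block(64, 64),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
        )

    def forward(self, mrf, contrasts):
        return self.net(torch.cat([mrf, contrasts], dim=1))


def training_step(G, D, opt_g, opt_d, mrf, target, lambda_l1=100.0):
    adv = nn.BCEWithLogitsLoss()
    # Discriminator update: real pairs vs. detached generator outputs.
    fake = G(mrf).detach()
    d_real, d_fake = D(mrf, target), D(mrf, fake)
    d_loss = 0.5 * (adv(d_real, torch.ones_like(d_real)) +
                    adv(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator update: adversarial term plus pixel-wise L1 to the paired target.
    fake = G(mrf)
    d_fake = D(mrf, fake)
    g_loss = adv(d_fake, torch.ones_like(d_fake)) + \
        lambda_l1 * nn.functional.l1_loss(fake, target)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()


if __name__ == "__main__":
    G, D = SimpleGenerator(), PatchDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    mrf = torch.randn(1, MRF_CHANNELS, 64, 64)    # toy MRF image series
    target = torch.randn(1, N_CONTRASTS, 64, 64)  # toy paired spin-echo contrasts
    print(training_step(G, D, opt_g, opt_d, mrf, target))
```

In this kind of objective, the L1 term enforces pixel-wise agreement with the paired spin echo-based targets, while the PatchGAN adversarial term penalizes locally unrealistic texture; the paper's exact loss composition and weighting may differ from this sketch.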
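For the reported evaluation, nRMSE and PSNR can be computed directly, as in the NumPy sketch below; the normalization conventions (error scaled by the reference norm, PSNR peak taken from the reference image) are assumptions and may differ from the paper's exact definitions. SSIM, LPIPS, and FID are typically computed with dedicated packages (e.g., scikit-image, lpips, pytorch-fid) and are not reimplemented here.

```python
# Hedged sketch of two of the reported metrics, nRMSE and PSNR, in NumPy.
import numpy as np


def nrmse(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Root-mean-square error normalized by the l2-norm of the reference."""
    return float(np.linalg.norm(estimate - reference) / np.linalg.norm(reference))


def psnr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB, using max(reference) as the peak value."""
    mse = np.mean((estimate - reference) ** 2)
    return float(10 * np.log10(reference.max() ** 2 / mse))


if __name__ == "__main__":
    ref = np.random.rand(256, 256)                    # toy reference contrast image
    est = ref + 0.01 * np.random.randn(256, 256)      # toy synthesized image
    print(f"nRMSE = {nrmse(ref, est):.4f}, PSNR = {psnr(ref, est):.2f} dB")
```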

Publisher

Wiley

Subject

Radiology, Nuclear Medicine and Imaging

Cited by 5 articles.