Deformable motion compensation in interventional cone‐beam CT with a context‐aware learned autofocus metric

Authors:

Heyuan Huang1, Yixuan Liu1, Jeffrey H. Siewerdsen1,2, Alexander Lu1, Yicheng Hu3, Wojciech Zbijewski1, Mathias Unberath3, Clifford R. Weiss4, Alejandro Sisniega1

Affiliations:

1. Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA

2. Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA

3. Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA

4. Department of Radiology, Johns Hopkins University, Baltimore, Maryland, USA

Abstract

Purpose: Interventional cone-beam CT (CBCT) offers 3D visualization of soft-tissue and vascular anatomy, enabling 3D guidance of abdominal interventions. However, its long acquisition time makes CBCT susceptible to patient motion. Image-based autofocus offers a suitable platform for compensation of deformable motion in CBCT, but it relies on handcrafted motion metrics based on first-order image properties that lack awareness of the underlying anatomy. This work proposes a data-driven approach to motion quantification via a learned, context-aware, deformable metric, DL-VIF, that quantifies the amount of motion degradation as well as the realism of the structural anatomical content in the image.

Methods: DL-VIF was modeled as a deep convolutional neural network (CNN) trained to recreate a reference-based structural similarity metric, visual information fidelity (VIF). The CNN acted on motion-corrupted images, providing an estimate of the spatial VIF map that would be obtained against a motion-free reference, capturing both motion distortion and anatomical plausibility. The network featured a multi-branch architecture with a high-resolution branch for estimation of voxel-wise VIF on a small volume of interest, and a second contextual, low-resolution branch that provided features associated with anatomical context for disentanglement of motion effects from anatomical appearance. The CNN was trained on paired motion-free and motion-corrupted data obtained with a high-fidelity forward projection model for a protocol at 120 kV and 9.90 mGy. The performance of DL-VIF was evaluated via metrics of correlation with ground-truth VIF and with the underlying deformable motion field in simulated data featuring deformable motion with amplitudes ranging from 5 to 20 mm and frequencies from 2.4 to 4 cycles/scan. Robustness to variation in tissue contrast and noise level was assessed in simulation studies with varying beam energy (90–120 kV) and dose (1.19–39.59 mGy). Further validation was obtained in experimental studies with a deformable phantom. Final validation was obtained by integrating DL-VIF into an autofocus compensation framework applied to motion compensation of experimental datasets, evaluated via metrics of spatial resolution on soft-tissue boundaries and sharpness of contrast-enhanced vascularity.
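As a rough illustration of the two-branch design described above, the following PyTorch sketch pairs a high-resolution branch acting on a small volume of interest (VOI) with a downsampled context branch whose features are fused before a voxel-wise VIF regression head. All layer counts, channel widths, input shapes, the concatenation-based fusion, and the sigmoid output are assumptions for illustration, not the authors' implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # Two 3D convolutions with ReLU; depth and kernel sizes are illustrative.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class ContextAwareVIFNet(nn.Module):
    """Hypothetical two-branch CNN regressing a voxel-wise VIF map
    from a motion-corrupted CBCT volume of interest."""

    def __init__(self, ch=16):
        super().__init__()
        self.voi_branch = conv_block(1, ch)       # high-resolution VOI features
        self.context_branch = conv_block(1, ch)   # low-resolution anatomical context
        self.head = nn.Conv3d(2 * ch, 1, kernel_size=1)  # voxel-wise VIF regression

    def forward(self, voi, context):
        f_voi = self.voi_branch(voi)
        # The context volume is coarser; bring its features to VOI resolution
        # before fusing, so motion effects and anatomy can be disentangled.
        f_ctx = self.context_branch(context)
        f_ctx = F.interpolate(f_ctx, size=f_voi.shape[2:], mode="trilinear",
                              align_corners=False)
        fused = torch.cat([f_voi, f_ctx], dim=1)
        return torch.sigmoid(self.head(fused))    # VIF-like map in [0, 1]

# Example shapes (assumed): a 64^3 VOI plus a downsampled whole-volume context.
net = ContextAwareVIFNet()
voi = torch.randn(1, 1, 64, 64, 64)
context = torch.randn(1, 1, 32, 32, 32)
vif_map = net(voi, context)   # -> (1, 1, 64, 64, 64)

In an autofocus loop, a scalar objective such as vif_map.mean() could then score candidate motion trajectories, with higher predicted VIF indicating less motion degradation.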
Results: The magnitude and spatial map of DL-VIF showed consistently high correlation with ground truth in both simulated and real data, yielding average normalized cross-correlation (NCC) values of 0.95 and 0.88, respectively. Similarly, DL-VIF achieved good correlation with the underlying motion field, with an average NCC of 0.90. In experimental phantom studies, DL-VIF properly reflected changes in motion amplitude and frequency: voxel-wise averaging of the local DL-VIF across the full reconstructed volume yielded an average value of 0.69 for the case with mild motion (2 mm, 12 cycles/scan) and 0.29 for the case with severe motion (12 mm, 6 cycles/scan). Autofocus motion compensation using DL-VIF noticeably mitigated motion artifacts and improved the spatial resolution of soft-tissue and high-contrast structures, reducing edge-spread-function width by 8.78% and 9.20%, respectively. Motion compensation also increased the conspicuity of contrast-enhanced vascularity, reflected in a 9.64% increase in vessel sharpness.

Conclusion: The proposed DL-VIF, featuring a novel context-aware architecture, demonstrated its capacity as a reference-free surrogate of structural similarity to quantify motion-induced degradation of image quality and the anatomical plausibility of image content. The validation studies showed robust performance across motion patterns, x-ray techniques, and anatomical instances. The proposed anatomy- and context-aware metric poses a powerful alternative to conventional motion estimation metrics and a step forward for the application of deep autofocus motion compensation for guidance in clinical interventional procedures.
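For reference, the NCC figures quoted in the Results can be computed between a predicted DL-VIF map and its reference VIF map with a few lines of NumPy. This is a generic zero-mean NCC implementation, not code from the paper:

import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two same-shaped volumes.

    Returns a value in [-1, 1]; values near 1 indicate strong linear
    agreement, as reported for DL-VIF against ground-truth VIF (NCC ~0.95).
    """
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# Toy check with two correlated volumes.
rng = np.random.default_rng(0)
ref = rng.random((64, 64, 64))
pred = 0.9 * ref + 0.1 * rng.random((64, 64, 64))
print(f"NCC = {ncc(pred, ref):.3f}")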

Funder

National Institute of Biomedical Imaging and Bioengineering

Publisher

Wiley
