Estimate and compensate head motion in non‐contrast head CT scans using partial angle reconstruction and deep learning

Authors:

Zhennong Chen1, Quanzheng Li1, Dufan Wu1

Affiliation:

1. Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA

Abstract

Background

Patient head motion is a common source of image artifacts in computed tomography (CT) of the head, leading to degraded image quality and potentially incorrect diagnoses. Partial angle reconstruction (PAR) divides the CT projection data into several consecutive angular segments and reconstructs each segment individually. Although PAR-based motion estimation and compensation has been developed and investigated for cardiac CT scans, its potential for reducing motion artifacts in head CT scans remains unexplored.

Purpose

To develop a deep learning (DL) model capable of directly estimating head motion from PAR images of head CT scans, and to integrate the estimated motion into an iterative reconstruction process to compensate for the motion.

Methods

Head motion is modeled as a rigid transformation described by six time-variant variables: three for translation and three for rotation. Each motion variable is modeled by a B-spline defined by five control points (CPs) along time. The full 360° projection data are split into 25 consecutive PARs, which are input to a convolutional neural network (CNN) that outputs the estimated CPs for each motion variable. The estimated CPs are used to calculate the object motion at each projection view, which is incorporated into the forward and backprojection of an iterative reconstruction algorithm to reconstruct the motion-compensated image. The performance of the DL model is evaluated through both simulation and phantom studies.

Results

The DL model achieved high accuracy in estimating head motion, as demonstrated in both the simulation study (mean absolute error (MAE) ranging from 0.28 to 0.45 mm or degree across the motion variables) and the phantom study (MAE ranging from 0.40 to 0.48 mm or degree). The resulting motion-corrected images exhibited a significant reduction in motion artifacts compared with traditional filtered back-projection reconstructions, as evidenced in both the simulation study (image MAE drops from 178 ± 33 HU to 37 ± 9 HU; structural similarity index (SSIM) increases from 0.60 ± 0.06 to 0.98 ± 0.01) and the phantom study (image MAE drops from 117 ± 17 HU to 42 ± 19 HU; SSIM increases from 0.83 ± 0.04 to 0.98 ± 0.02).

Conclusions

Using PAR together with the proposed deep learning model enables accurate estimation of patient head motion and effectively reduces motion artifacts in the resulting head CT images.
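As a rough illustration of the motion model described in the Methods (not the authors' implementation), the sketch below parameterizes each of the six rigid-motion variables with a clamped cubic B-spline defined by five control points over normalized scan time, evaluates the per-projection motion, and assembles a rigid transform for a single view. The spline degree, rotation order, and all names (evaluate_motion, rigid_matrix, n_views) are assumptions introduced here for illustration.

import numpy as np
from scipy.interpolate import BSpline

# Clamped cubic B-spline over normalized scan time t in [0, 1],
# with 5 control points per motion variable (cubic degree is an assumption).
N_CP, DEGREE = 5, 3
KNOTS = np.concatenate(([0.0] * (DEGREE + 1), [0.5], [1.0] * (DEGREE + 1)))  # len = N_CP + DEGREE + 1

def evaluate_motion(control_points, n_views):
    """control_points: (6, 5) array of CNN-estimated CPs for
    [tx, ty, tz, rx, ry, rz]; returns (n_views, 6) motion per projection."""
    t = np.linspace(0.0, 1.0, n_views)  # normalized time of each projection view
    return np.stack([BSpline(KNOTS, cp, DEGREE)(t) for cp in control_points], axis=1)

def rigid_matrix(params):
    """Build a 4x4 rigid transform from (tx, ty, tz) in mm and (rx, ry, rz) in
    degrees, using a ZYX rotation order (assumption)."""
    tx, ty, tz, rx, ry, rz = params
    rx, ry, rz = np.deg2rad([rx, ry, rz])
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

# Example: per-view motion for a 360-degree scan with 720 projections (assumed count).
cps = np.zeros((6, N_CP))                # placeholder for the CNN output
per_view = evaluate_motion(cps, n_views=720)
T_view0 = rigid_matrix(per_view[0])      # transform applied in the projector for view 0

In a motion-compensated iterative reconstruction, a transform like T_view0 would be applied to the image (or equivalently to the system geometry) inside the forward and backprojection of each view.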

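The 25-segment PAR split described in the Methods can be sketched as below, assuming the full-scan sinogram is stored view-major; the segment count and array shapes are placeholders, and each segment would then be reconstructed individually (e.g., with a partial-angle FBP) to form the PAR images fed to the CNN.

import numpy as np

def split_into_pars(sinogram, n_pars=25):
    """Split a 360-degree sinogram of shape (n_views, n_rows, n_cols)
    into n_pars consecutive angular segments along the view axis."""
    # np.array_split tolerates a view count not divisible by n_pars.
    return np.array_split(sinogram, n_pars, axis=0)

# Example with a synthetic sinogram: 1000 views, 16 detector rows,
# 896 detector channels (assumed geometry).
sino = np.zeros((1000, 16, 896), dtype=np.float32)
pars = split_into_pars(sino)
print(len(pars), pars[0].shape)  # 25 segments of roughly 40 views each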
Funder

National Institute of Biomedical Imaging and Bioengineering

Publisher

Wiley
