A robust and automatic CT‐3D ultrasound registration method based on segmentation, context, and edge hybrid metric

Author:

He Baochun (1,2); Zhao Sheng (3); Dai Yanmei (3); Wu Jiaqi (4); Luo Huoling (1); Guo Jianxi (5); Ni Zhipeng (6); Wu Tianchong (7); Kuang Fangyuan (8); Jiang Huijie (3); Zhang Yanfang (5); Jia Fucang (1,2,9)

Affiliation:

1. Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China

2. Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China

3. Department of Radiology, The Second Affiliated Hospital of Harbin Medical University, Harbin, China

4. Department of Inpatient Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China

5. Department of Interventional Radiology, Shenzhen People's Hospital, Shenzhen, China

6. Department of Ultrasound, Shenzhen People's Hospital, Shenzhen, China

7. Department of Hepatobiliary and Pancreatic Surgery, Shenzhen People's Hospital, Shenzhen, China

8. College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China

9. Pazhou Lab, Guangzhou, China

Abstract

Background: The fusion of computed tomography (CT) and ultrasound (US) images can enhance lesion detection and improve the success rate of liver interventional radiology. Image-based fusion methods face the challenge of registration initialization because of the random scanning pose and limited field of view of US. Existing automatic methods that use vessel geometric information and intensity-based metrics are sensitive to parameters and have a low success rate, while learning-based methods require a large number of registered datasets for training.

Purpose: The aim of this study is to provide a fully automatic and robust US-3D CT registration method that requires neither registered training data nor user-specified parameters, assisted by deep learning-based segmentation. The method can further be used to prepare training samples for the study of learning-based methods.

Methods: We propose a fully automatic CT-3D US registration method built on two improved registration metrics, using 3D U-Net-based multi-organ segmentation of US and CT to assist conventional registration. The rigid transform is searched in the space of all paired vessel bifurcation planes, and the best transform is selected by a segmentation overlap metric that is more closely related to segmentation precision than the Dice coefficient. In the nonrigid registration phase, we propose a hybrid context- and edge-based image similarity metric with a simple mask that removes most noisy US voxels to guide the B-spline transform registration. We evaluate our method on 42 paired CT-3D US datasets scanned with two different US devices from two hospitals, and compare it with existing methods using quantitative measures of target registration error (TRE) and the Jacobian determinant with paired t-tests, as well as qualitative registration imaging results.

Results: Our method achieves a fully automatic rigid registration TRE of 4.895 mm and a deformable registration TRE of 2.995 mm on average, outperforming state-of-the-art automatic linear methods and nonlinear registration metrics with paired t-test p values less than 0.05. The proposed overlap metric achieves better results than self-similarity description (SSD), edge matching (EM), and block matching (BM), with p values of 1.624E-10, 4.235E-9, and 0.002, respectively. The proposed hybrid edge- and context-based metric outperforms context-only, edge-only, and intensity statistics-only metrics with p values of 0.023, 3.81E-5, and 1.38E-15, respectively. The 3D US segmentation achieves mean Dice similarity coefficients (DSC) of 0.799, 0.724, and 0.788 and precisions of 0.871, 0.769, and 0.862 for the gallbladder, vessel, and branch vessel, respectively.

Conclusions: Deep learning-based US segmentation achieves satisfactory results for assisting robust conventional rigid registration. The Dice similarity coefficient-based overlap metric and the hybrid context and edge image similarity metric contribute to robust and accurate registration.
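For readers unfamiliar with the evaluation measures cited above, the following is a minimal illustrative sketch, not the authors' code, of how Dice similarity coefficient, precision, and target registration error (TRE) are conventionally computed from binary segmentation masks and paired anatomical landmarks. Array names, shapes, and the transform interface are assumptions made for illustration only.

```python
import numpy as np

def dice_and_precision(gt_mask: np.ndarray, pred_mask: np.ndarray):
    """Dice similarity coefficient and precision for binary segmentation masks."""
    gt = gt_mask.astype(bool)
    pred = pred_mask.astype(bool)
    intersection = np.logical_and(gt, pred).sum()
    dice = 2.0 * intersection / (gt.sum() + pred.sum())       # 2|A∩B| / (|A| + |B|)
    precision = intersection / pred.sum()                      # |A∩B| / |B|
    return dice, precision

def target_registration_error(fixed_pts: np.ndarray, moving_pts: np.ndarray, transform):
    """Mean Euclidean distance (mm) between fixed landmarks and transformed moving landmarks.

    fixed_pts, moving_pts: (N, 3) arrays of corresponding landmark coordinates.
    transform: callable mapping an (N, 3) array of points to registered coordinates.
    """
    registered = transform(moving_pts)
    return np.linalg.norm(registered - fixed_pts, axis=1).mean()
```

Lower TRE indicates better landmark alignment, while higher Dice and precision indicate better segmentation overlap; the paper's results report these quantities per structure (gallbladder, vessel, branch vessel) and per registration stage (rigid vs. deformable).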

Funder

National Key Research and Development Program of China

National Natural Science Foundation of China

Natural Science Foundation of Guangdong Province

Publisher

Wiley

Subject

General Medicine
