Affiliation:
1. Chair for Computer Aided Medical Procedures & Augmented Reality, Technical University of Munich, Munich, Bavaria, Germany
2. Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
3. ImFusion GmbH, Munich, Bavaria, Germany
4. Department of Computer Science, University of Verona, Verona, Italy
5. Department of Radiology, Department of Medicine, University of British Columbia, Vancouver, Canada
Abstract
Background
Ultrasound (US) has been demonstrated to be an effective guidance technique for lumbar spine injections, enabling precise needle placement without exposing the surgeon or the patient to ionizing radiation. However, noise and acoustic shadowing artifacts make US data interpretation challenging. To mitigate these problems, many authors have suggested using computed tomography (CT)-to-US registration to align the spine in pre-operative CT to intra-operative US data, thus providing localization of spinal landmarks.
Purpose
In this paper, we propose a deep learning (DL) pipeline for CT-to-US registration and address the need for annotated medical data for network training. First, we design a data generation method that produces paired CT-US data in which the spine is deformed in a physically consistent manner. Second, we train a point cloud (PC) registration network using anatomy-aware losses to enforce anatomically consistent predictions.
Methods
Our proposed pipeline relies on training the network on realistic, generated data. In our data generation method, we model the properties of the joints and disks between vertebrae based on biomechanical measurements reported in previous studies. We simulate the deformation between the supine and prone positions by applying forces to the spine models. We select spine models of 35 patients from the VerSe dataset and deform each spine 10 times, yielding noise-free data with ground-truth segmentations at hand. In our experiments, we use a leave-one-out cross-validation strategy to measure the performance and stability of the proposed method. For each experiment, we choose the generated PCs from three spines as the test set; from the remaining data, three spines serve as the validation set and the rest is used for training.
To train our network, we introduce anatomy-aware losses and constraints on the movement to match the physics of the spine, namely a rigidity loss and a biomechanical loss. The rigidity loss is based on the fact that each vertebra can only transform rigidly, while the disks and the surrounding tissue are deformable. The biomechanical loss prevents the network from inferring extreme movements by penalizing the force needed to reach a given pose; a sketch of both losses is given below.
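The following is a minimal sketch of how the two anatomy-aware losses could be implemented, assuming a PyTorch set-up in which the network predicts a per-point displacement field and each point carries a vertebra label. Function names, tensor shapes, and the spring-like force proxy used in biomechanical_loss are illustrative assumptions, not the authors' implementation.

import torch


def rigidity_loss(points, displacements, vertebra_ids):
    """Penalize non-rigid motion within each vertebra.

    points:        (N, 3) source point-cloud coordinates
    displacements: (N, 3) displacements predicted by the network
    vertebra_ids:  (N,)   integer label of the vertebra each point belongs to
    """
    labels = vertebra_ids.unique()
    loss = points.new_zeros(())
    for vid in labels:
        src = points[vertebra_ids == vid]
        dst = src + displacements[vertebra_ids == vid]
        # Best-fit rigid rotation via the Kabsch algorithm; the residual after
        # rigid alignment is the non-rigid part of the motion, which we penalize.
        src_c = src - src.mean(dim=0)
        dst_c = dst - dst.mean(dim=0)
        u, _, vh = torch.linalg.svd(src_c.T @ dst_c)
        s = torch.ones(3, device=src.device, dtype=src.dtype)
        s[2] = torch.sign(torch.det(vh.T @ u.T))  # guard against reflections
        r = vh.T @ torch.diag(s) @ u.T
        loss = loss + ((src_c @ r.T - dst_c) ** 2).sum(dim=-1).mean()
    return loss / labels.numel()


def biomechanical_loss(vertebra_displacements, joint_stiffness):
    """Spring-like proxy for the force needed to reach the predicted pose:
    relative motion between adjacent vertebrae is penalized proportionally to
    the (assumed) stiffness of the joint/disk connecting them.

    vertebra_displacements: (V, 3) mean displacement per vertebra, ordered
                            cranio-caudally
    joint_stiffness:        (V - 1,) stiffness of each inter-vertebral joint
    """
    relative = vertebra_displacements[1:] - vertebra_displacements[:-1]
    return (joint_stiffness * relative.pow(2).sum(dim=-1)).mean()

In a full training objective, such terms would be weighted and added to a data term such as the MSE loss mentioned below.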
Results
To validate the effectiveness of our fully automated data generation pipeline, we qualitatively assess the fidelity of the generated data. This assessment involves verifying the realism of the spinal deformation and subsequently confirming the plausibility of the simulated ultrasound images. Next, we demonstrate that introducing the anatomy-aware losses brings us closer to the state of the art (SOTA) and yields a reduction of 0.25 mm in target registration error (TRE, defined below) compared to using only a mean squared error (MSE) loss on the generated dataset. Furthermore, with the proposed losses the rigidity loss measured at inference decreases, which shows that the inferred deformation respects the rigidity of the vertebrae and only introduces deformations in the soft-tissue regions to compensate for the difference to the target PC. We also show that our results are close to the SOTA on the simulated US dataset, with a TRE of 3.89 mm for the proposed method versus 3.63 mm for the SOTA. In addition, we show that our method is more robust against errors in the initialization than the SOTA and achieves significantly better results in this experiment (TRE of 4.88 mm compared to 5.66 mm).
Conclusions
In conclusion, we present a pipeline for spine CT-to-US registration and explore the potential benefits of utilizing anatomy-aware losses to enhance registration results. Additionally, we propose a fully automatic method to synthesize paired CT-US data with physically consistent deformations, which offers the opportunity to generate extensive datasets for network training. The generated dataset and the source code for the data generation and registration pipeline can be accessed via https://github.com/mfazampour/medphys_ct_us_registration.
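For reference, the target registration error (TRE) reported above is assumed here to follow the standard definition: the mean Euclidean distance between the transformed source (CT) landmarks and their corresponding ground-truth target (US) positions. The exact landmark set used in the paper may differ.

\mathrm{TRE} = \frac{1}{K} \sum_{k=1}^{K} \bigl\lVert T(p_k) - q_k \bigr\rVert_2

Here, T is the estimated transformation, p_k the landmark positions in the source frame, and q_k the corresponding ground-truth positions in the target frame.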
Funder
Qatar National Research Fund
Bayerische Forschungsstiftung
Cited by
2 articles.