Authors:
Mareike Thies, Jan-Nico Zäch, Cong Gao, Russell Taylor, Nassir Navab, Andreas Maier, Mathias Unberath
Abstract
Purpose
During spinal fusion surgery, screws are placed close to critical nerves, necessitating highly accurate screw placement. Verifying screw placement on high-quality tomographic imaging is essential. C-arm cone-beam CT (CBCT) provides intraoperative 3D tomographic imaging, which would allow for immediate verification and, if needed, revision. However, the reconstruction quality attainable with commercial CBCT devices is insufficient, predominantly due to severe metal artifacts in the presence of pedicle screws. These artifacts arise from a mismatch between the true physics of image formation and the idealized model thereof assumed during reconstruction. Prospectively acquiring views onto anatomy that are least affected by this mismatch can, therefore, improve reconstruction quality.
Methods
We propose to adjust the C-arm CBCT source trajectory during the scan to optimize reconstruction quality with respect to a certain task, i.e., verification of screw placement. Adjustments are performed on-the-fly using a convolutional neural network that regresses a quality index over all possible next views given the current X-ray image. Adjusting the CBCT trajectory to acquire the recommended views results in non-circular source orbits that avoid poor images, and thus, data inconsistencies.
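The on-the-fly adjustment described above can be sketched as a greedy loop: after each acquisition, a learned model scores all candidate next views reachable from the current source position, and the highest-scoring one is acquired next. The sketch below is illustrative only; the quality predictor here is a toy heuristic standing in for the trained convolutional neural network, and all names, angles, and the penalized "metal axis" are assumptions, not the authors' implementation.

```python
import numpy as np

def predict_view_quality(current_image, candidate_angles):
    """Stand-in for the paper's CNN: regress a quality index for each
    candidate next view given the current X-ray image.

    Toy heuristic (hypothetical): penalize views whose ray direction
    approaches a simulated axis of worst metal overlap."""
    metal_axis = np.pi / 2  # hypothetical angle of strongest metal artifact
    return 1.0 - np.abs(np.sin(candidate_angles - metal_axis))

def greedy_trajectory(n_views=10, span=np.pi / 18):
    """Build a non-circular source orbit on-the-fly: at each step, choose
    the candidate within +/- span of the nominal next position that the
    quality model scores highest."""
    angles = [0.0]
    step = np.pi / n_views  # nominal increment of a circular short scan
    for _ in range(n_views - 1):
        nominal = angles[-1] + step
        candidates = nominal + np.linspace(-span, span, 9)
        current_image = None  # placeholder: the most recent projection
        scores = predict_view_quality(current_image, candidates)
        angles.append(float(candidates[np.argmax(scores)]))
    return np.array(angles)

traj = greedy_trajectory()
```

Because the adjustment range `span` is smaller than the nominal step, the resulting orbit still advances monotonically around the patient while deviating from the circle near poorly scored views.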
Results
We demonstrate that convolutional neural networks trained on realistically simulated data are capable of predicting quality metrics that enable scene-specific adjustments of the CBCT source trajectory. Using both realistically simulated data and real CBCT acquisitions of a semi-anthropomorphic phantom, we show that tomographic reconstructions of the resulting scene-specific CBCT acquisitions exhibit improved image quality, particularly in terms of metal artifacts.
Conclusion
The proposed method is a step toward online patient-specific C-arm CBCT source trajectories that enable high-quality tomographic imaging in the operating room. Since the optimization objective is implicitly encoded in a neural network trained on large amounts of well-annotated projection images, the proposed approach overcomes the need for 3D information at run-time.
Funder
National Institutes of Health
Publisher
Springer Science and Business Media LLC
Subject
Health Informatics; Radiology, Nuclear Medicine and Imaging; General Medicine; Surgery; Computer Graphics and Computer-Aided Design; Computer Science Applications; Computer Vision and Pattern Recognition; Biomedical Engineering
Cited by: 24 articles