Abstract
Purpose
Accurate estimation of the position and orientation (pose) of surgical instruments is crucial for delicate minimally invasive temporal bone surgery. Current techniques either lack accuracy, suffer from line-of-sight constraints (conventional tracking systems), or expose the patient to prohibitive ionizing radiation (intra-operative CT). A possible solution is to capture the instrument with a C-arm at irregular intervals and recover the pose from the image.
Methods
i3PosNet infers the position and orientation of instruments from images using a pose estimation network. The framework operates on localized image patches and outputs pseudo-landmarks; the pose is then reconstructed from these pseudo-landmarks by geometric considerations.
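To illustrate the geometric reconstruction step, the following is a minimal sketch of how a 2D pose could be recovered from predicted pseudo-landmarks. The landmark layout assumed here (an instrument tip followed by points along the shaft) and all names such as `pose_from_pseudo_landmarks` are hypothetical; the paper's actual landmark scheme may differ.

```python
import numpy as np

def pose_from_pseudo_landmarks(landmarks: np.ndarray) -> tuple[np.ndarray, float]:
    """Recover a 2D instrument pose from predicted pseudo-landmarks.

    Assumed (hypothetical) landmark scheme:
    landmarks[0]  -- instrument tip
    landmarks[1:] -- points placed along the instrument axis behind the tip
    Returns the tip position (pixels) and the in-plane orientation angle
    (radians, measured against the image x-axis).
    """
    tip = landmarks[0]
    # Fit the axis direction to the shaft points by least squares:
    # center them and take the dominant singular vector.
    centroid = landmarks[1:].mean(axis=0)
    _, _, vt = np.linalg.svd(landmarks[1:] - centroid)
    direction = vt[0]
    # Orient the axis so it points from the shaft toward the tip.
    if np.dot(tip - centroid, direction) < 0:
        direction = -direction
    angle = np.arctan2(direction[1], direction[0])
    return tip, angle

# Example: tip at (120, 80), shaft points trailing at roughly 30 degrees.
pts = np.array([[120.0, 80.0], [110.0, 74.2], [100.0, 68.5], [90.0, 62.7]])
position, theta = pose_from_pseudo_landmarks(pts)
print(position, np.degrees(theta))  # ~ [120. 80.] 30.0
```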
Results
We show that i3PosNet reaches errors $$<0.05$$ mm. It outperforms conventional image registration-based approaches, reducing average and maximum errors by at least two thirds. i3PosNet trained on synthetic images generalizes to real X-rays without any further adaptation.
Conclusion
The translation of deep-learning-based methods to surgical applications is difficult because large representative datasets for training and testing are not available. This work empirically demonstrates sub-millimeter pose estimation trained solely on synthetic data.
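Since training relies entirely on synthetic images, the sketch below shows one common way such images can be produced from a CT volume: a digitally reconstructed radiograph (DRR) under a simplified parallel-beam geometry. This is an illustrative assumption, not the paper's rendering pipeline; a clinical C-arm is cone-beam, and the function and parameter names here are invented for the example.

```python
import numpy as np
from scipy.ndimage import rotate

def synthetic_projection(volume: np.ndarray, angle_deg: float) -> np.ndarray:
    """Render a simplified digitally reconstructed radiograph (DRR).

    Parallel-beam approximation: rotate the CT volume about its first axis,
    then integrate attenuation along one in-plane axis and apply the
    Beer-Lambert law. Illustration only; a real C-arm is cone-beam.
    """
    rotated = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    line_integrals = rotated.sum(axis=2)  # integrate attenuation along rays
    return np.exp(-line_integrals)        # Beer-Lambert attenuation

# Example: a toy 64^3 "CT" with a dense rod standing in for an instrument.
vol = np.zeros((64, 64, 64), dtype=np.float32)
vol[20:44, 30:34, 30:34] = 0.1            # attenuation coefficients
drr = synthetic_projection(vol, angle_deg=30.0)
print(drr.shape, drr.min(), drr.max())
```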
Funder
Technische Universität Darmstadt
Publisher
Springer Science and Business Media LLC
Subject
Health Informatics; Radiology, Nuclear Medicine and Imaging; General Medicine; Surgery; Computer Graphics and Computer-Aided Design; Computer Science Applications; Computer Vision and Pattern Recognition; Biomedical Engineering
Cited by
21 articles.