Abstract
Purpose
Emerging holographic headsets can be used to register patient-specific virtual models, obtained from medical scans, with the patient’s body. Maximising the accuracy of the virtual models’ inclination angle and position (ideally ≤ 2° and ≤ 2 mm, respectively, as in currently approved navigation systems) is vital for this application to be useful. This study investigated how accurately a holographic headset registers virtual models with real-world features, depending on the position and size of the image markers used.
Methods
HoloLens® and the image-pattern-recognition tool Vuforia Engine™ were used to overlay a 5-cm-radius virtual hexagon on a monitor’s surface in a predefined position. Detection of an image marker (displayed on the monitor) by the headset’s camera triggered the rendering of the virtual hexagon on the headset’s lenses. Image markers of 4 × 4, 8 × 8 and 12 × 12 cm, displayed at nine different positions, were used. In total, the position and dimensions of 114 virtual hexagons were measured on photographs captured by the headset’s camera.
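(Illustration, not part of the paper: assuming the reference pose of the hexagon on the monitor is known, the position and inclination-angle errors of each measured hexagon could be quantified as in the following Python sketch; the function name and the example coordinates are invented for this illustration.)

import numpy as np

def hexagon_errors(ref_center_mm, ref_axis_deg, meas_center_mm, meas_axis_deg):
    # Position error: Euclidean distance between reference and measured centres (mm).
    position_error = float(np.linalg.norm(np.asarray(meas_center_mm) - np.asarray(ref_center_mm)))
    # Inclination error: absolute angular difference, wrapped into [-180, 180) degrees.
    angle_error = abs((meas_axis_deg - ref_axis_deg + 180.0) % 360.0 - 180.0)
    return position_error, angle_error

# Hypothetical measurement of one hexagon (values are made up for the example).
pos_err, ang_err = hexagon_errors((0.0, 0.0), 0.0, (1.4, -0.8), 1.1)
print(f"position error: {pos_err:.1f} mm, inclination error: {ang_err:.1f} deg")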
Results
Some image marker positions, and the smallest image marker (4 × 4 cm), led to larger errors in the perceived dimensions of the virtual models than the other image marker positions and the larger markers (8 × 8 and 12 × 12 cm). Errors of ≤ 2° and ≤ 2 mm were found in 70.7% and 76% of cases, respectively.
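(Illustration, not part of the paper: the reported percentages are the fractions of the 114 measurements that fall within the ≤ 2° and ≤ 2 mm tolerances; a minimal Python sketch of that tally, with made-up error values rather than the study’s data.)

# Hypothetical per-case errors; the study measured 114 hexagons in total.
angle_errors_deg = [0.5, 1.8, 2.4, 1.1]    # placeholder values, not study data
position_errors_mm = [0.9, 2.6, 1.7, 1.2]  # placeholder values, not study data

within_angle = sum(e <= 2.0 for e in angle_errors_deg) / len(angle_errors_deg)
within_position = sum(e <= 2.0 for e in position_errors_mm) / len(position_errors_mm)
print(f"within 2 deg: {within_angle:.1%}, within 2 mm: {within_position:.1%}")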
Conclusion
The errors obtained in a non-negligible percentage of cases are not acceptable for certain surgical tasks (e.g. identifying correct trajectories of surgical instruments). Achieving sufficient accuracy with image marker sizes that meet surgical needs, and regardless of image marker position, remains a challenge.
Funder
The Roland Sutton Academic Trust
University of Aberdeen
Publisher
Springer Science and Business Media LLC
Subject
Health Informatics; Radiology, Nuclear Medicine and Imaging; General Medicine; Surgery; Computer Graphics and Computer-Aided Design; Computer Science Applications; Computer Vision and Pattern Recognition; Biomedical Engineering
References
38 articles.
Cited by
21 articles.