Abstract
Purpose
Given the high level of expertise required to navigate and interpret ultrasound images, computational simulations can facilitate training such skills in virtual reality. Ray-tracing-based simulations can generate realistic ultrasound images, but the computational constraints of interactive use typically force compromises in image quality.
Methods
We propose herein to bypass any rendering and simulation process at interactive time by conducting such simulations during a non-time-critical offline stage and then learning the image translation from cross-sectional model slices to the simulated frames. We use a generative adversarial framework with a dedicated generator architecture and input-feeding scheme, which together substantially improve image quality without increasing the number of network parameters. Integral attenuation maps derived from cross-sectional model slices, texture-friendly strided convolutions, and the provision of stochastic noise and input maps to intermediate layers to preserve locality are all shown herein to greatly facilitate this translation task.
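As a concrete illustration of one of these inputs, the sketch below computes an integral attenuation map from a cross-sectional slice of per-pixel attenuation coefficients by accumulating attenuation along the beam direction (the exponent of the Beer-Lambert intensity decay). This is a minimal sketch of the general idea only; the function name and the assumption of a top-to-bottom beam axis are illustrative, not the paper's exact preprocessing.

```python
import numpy as np

def integral_attenuation_map(attenuation_slice: np.ndarray, axis: int = 0) -> np.ndarray:
    """Accumulate per-pixel attenuation along the beam direction.

    attenuation_slice: 2-D map of attenuation coefficients sampled from a
    cross-sectional model slice (hypothetical input layout). Each output
    pixel holds the attenuation integrated from the transducer down to
    that depth, i.e. the exponent of the Beer-Lambert decay exp(-sum(mu)).
    """
    # A cumulative sum along the propagation axis approximates the line
    # integral of the attenuation coefficient down to each depth.
    return np.cumsum(attenuation_slice, axis=axis)
```

Feeding such a map alongside the raw tissue slice gives the generator a per-pixel prior on depth-dependent intensity loss that it would otherwise have to learn from scratch.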
Results
Across several quality metrics, the proposed method, using only tissue maps as input, is shown to provide results comparable or superior to a state-of-the-art method that additionally requires images of low-quality ultrasound renderings. An extensive ablation study demonstrates the need for and the benefit of each individual contribution of this work, based on qualitative examples and quantitative ultrasound similarity metrics. To that end, an error metric based on local histogram statistics is proposed and demonstrated for visualizing local dissimilarities between ultrasound images.
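The abstract does not spell out the metric's exact form; the following is a minimal sketch of one plausible instantiation, comparing per-patch intensity histograms of two B-mode images with a chi-square distance to produce a spatial dissimilarity map. The patch size, stride, bin count, and choice of chi-square distance are assumptions made here for illustration.

```python
import numpy as np

def local_histogram_error(img_a: np.ndarray, img_b: np.ndarray,
                          patch: int = 32, stride: int = 16,
                          bins: int = 64) -> np.ndarray:
    """Map of local histogram dissimilarity between two B-mode images.

    Both images are assumed to be 2-D arrays of equal shape with
    intensities normalized to [0, 1]. For every patch, the intensity
    histograms of the two images are compared with a chi-square
    distance; high values mark regions where the local speckle and
    intensity statistics disagree.
    """
    rows = (img_a.shape[0] - patch) // stride + 1
    cols = (img_a.shape[1] - patch) // stride + 1
    err = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            ha, _ = np.histogram(img_a[y:y + patch, x:x + patch],
                                 bins=bins, range=(0, 1), density=True)
            hb, _ = np.histogram(img_b[y:y + patch, x:x + patch],
                                 bins=bins, range=(0, 1), density=True)
            # Chi-square distance between the two patch histograms.
            err[i, j] = 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + 1e-8))
    return err
```

Rendering this map as a heatmap over the image makes it easy to see where a translated frame deviates from the simulated reference, rather than reducing the comparison to a single global score.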
Conclusion
A deep-learning-based direct transformation from interactive tissue slices to the likeness of high-quality renderings obviates any complex rendering process at run time. This could enable extremely realistic ultrasound simulation on consumer hardware by moving the time-intensive processes to a one-time, offline preprocessing stage that can be performed on dedicated high-end hardware.
Funder
Innosuisse - Schweizerische Agentur für Innovationsförderung
Publisher
Springer Science and Business Media LLC
Subject
Health Informatics, Radiology, Nuclear Medicine and Imaging, General Medicine, Surgery, Computer Graphics and Computer-Aided Design, Computer Science Applications, Computer Vision and Pattern Recognition, Biomedical Engineering
Cited by 5 articles.