Abstract
Medical imaging applications are challenging for machine learning and computer vision methods for two main reasons: reliable ground truth is difficult to generate, and databases are usually too small to train state-of-the-art methods. Virtual images obtained from computer simulations could be used to train classifiers and to validate image processing methods if their appearance were comparable (in texture and color) to the actual appearance of intra-operative medical images. Recent works focus on style transfer to generate artistic images by combining the content of one image with the style of another. A main challenge is the generation of pairs with similar content that preserve anatomical features, especially across multi-modal data. This paper presents a deep-learning approach to content-preserving style transfer of intra-operative medical data for realistic virtual endoscopy. We propose a multi-objective optimization strategy for Generative Adversarial Networks (GANs) to obtain content-matching pairs, which are blended using a Siamese U-Net architecture (called Content-net) that uses a measure of the content of activations to modulate skip connections. Our approach has been applied to transfer the appearance of bronchoscopic intra-operative videos to virtual bronchoscopies. Experiments assess images in terms of both content and appearance, and show that our simulated data can substitute for intra-operative videos in the design and training of image processing methods.
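The abstract does not detail how the content measure modulates the skip connections. Below is a minimal sketch, not the authors' implementation, of one way such a content-gated skip connection could look: two siamese encoder branches produce matching feature maps, a channel-wise cosine similarity serves as an assumed content measure, and the resulting gate scales the fused skip features. The class name `ContentGatedSkip`, the gating formula, and the layer sizes are all illustrative assumptions.

```python
# Hedged sketch (not the paper's code): a skip connection modulated by a
# content measure, in the spirit of the Content-net description above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentGatedSkip(nn.Module):
    """Blends encoder activations from two siamese branches, weighting the
    skip path by how similar their content is (channel-wise cosine)."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv fuses the concatenated virtual/real feature maps.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_virtual: torch.Tensor, feat_real: torch.Tensor):
        # Assumed content measure: cosine similarity of activations at each
        # spatial site, mapped from [-1, 1] to [0, 1] to act as a soft gate.
        sim = F.cosine_similarity(feat_virtual, feat_real, dim=1, eps=1e-8)
        gate = (sim.unsqueeze(1) + 1.0) / 2.0             # shape (B, 1, H, W)
        fused = self.fuse(torch.cat([feat_virtual, feat_real], dim=1))
        return gate * fused                                # gated skip features

# Usage: gate two matching encoder feature maps before the decoder consumes them.
skip = ContentGatedSkip(channels=64)
a = torch.randn(1, 64, 32, 32)   # virtual-branch activations
b = torch.randn(1, 64, 32, 32)   # intra-operative-branch activations
out = skip(a, b)                  # shape (1, 64, 32, 32)
```

The design choice here is that regions where the two branches agree on content pass through the skip connection largely unchanged, while dissimilar regions are attenuated; the paper's actual modulation scheme may differ.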
Publisher
Cold Spring Harbor Laboratory