Author:
Go Taesik, Lee Sangseung, You Donghyun, Lee Sang Joon
Abstract
Digital holographic microscopy enables the recording of sample holograms that contain 3D volumetric information. However, additional optical elements, such as a partially or fully coherent light source and a pinhole, are required to induce diffraction and interference. Here, we present a deep neural network based on a generative adversarial network (GAN) that performs image transformation from a defocused bright-field (BF) image acquired with a general white light source to a holographic image. A total of 11,050 training image pairs for image conversion were gathered using a hybrid BF and hologram imaging technique. The performance of the trained network was evaluated by comparing generated and ground-truth holograms of microspheres and erythrocytes distributed in 3D. Holograms generated from BF images through the trained GAN showed enhanced image contrast, with a 3–5 times higher signal-to-noise ratio than the ground-truth holograms, and provided 3D positional information and light-scattering patterns of the samples. The developed GAN-based method is a promising means for dynamic analysis of microscale objects, providing detailed 3D positional information and precise monitoring of biological samples even with a conventional BF microscopy setting.
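The abstract does not detail the network architecture or loss functions, so as a rough illustration of the kind of paired BF-to-hologram image translation the paper describes, the sketch below shows a minimal pix2pix-style GAN training step in PyTorch on dummy single-channel tensors. The Generator and Discriminator layouts, the L1 weight of 100, the tensor shapes, and the optimizer settings are all assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy BF-to-hologram generator (assumed architecture, not the paper's)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, bf):
        return self.net(bf)

class Discriminator(nn.Module):
    """Toy conditional discriminator over (BF, hologram) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-level logits
        )

    def forward(self, bf, holo):
        return self.net(torch.cat([bf, holo], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Dummy batch: defocused bright-field inputs and ground-truth holograms.
bf = torch.randn(4, 1, 64, 64)
holo = torch.randn(4, 1, 64, 64)

# Discriminator step: real pairs labeled 1, generated pairs labeled 0.
fake = G(bf).detach()
real_logits, fake_logits = D(bf, holo), D(bf, fake)
d_loss = adv_loss(real_logits, torch.ones_like(real_logits)) + \
         adv_loss(fake_logits, torch.zeros_like(fake_logits))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator plus an L1 reconstruction term
# (the 100x weight follows the common pix2pix default, assumed here).
fake = G(bf)
fake_logits = D(bf, fake)
g_loss = adv_loss(fake_logits, torch.ones_like(fake_logits)) + 100.0 * l1_loss(fake, holo)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

In practice, the generated hologram batch would be compared against the ground-truth holograms of microspheres and erythrocytes, for example by the signal-to-noise ratio comparison reported in the abstract.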
Publisher
Springer Science and Business Media LLC
Cited by
18 articles.