Abstract
A hologram, recorded using appropriate coherent illumination, captures all substantial volumetric information of the measured sample. This information is encoded in the interference patterns and, from these, the image of the sample objects can be reconstructed at different depths using standard techniques of digital holography. We claim that a 2D convolutional neural network (2D-CNN) cannot efficiently decode this volumetric information, which is spread across the whole image, because it inherently operates on local spatial features. Therefore, we propose a method in which we extract the volumetric information of the hologram by mapping it to a volume with a standard wavefield propagation algorithm, and then feed this volume to a 3D-CNN-based architecture. We apply this method to a challenging real-life classification problem and compare its performance with an equivalent 2D-CNN counterpart. Furthermore, we inspect the robustness of the methods to slightly defocused inputs and find that the 3D method is inherently more robust in such cases. Additionally, we introduce a hologram-specific augmentation technique, called hologram defocus augmentation, that improves the performance of both methods for slightly defocused inputs. The proposed 3D model outperforms the standard 2D method in classification accuracy both for in-focus and defocused input samples. Our results confirm and support our fundamental hypothesis that a 2D-CNN-based architecture is limited in the extraction of volumetric information globally encoded in the reconstructed hologram image.
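The core pipeline described above, reconstructing the hologram at several depths with a wavefield propagation algorithm and stacking the slices into a volume for a 3D-CNN, can be illustrated with a minimal sketch. This is not the authors' code: it assumes the angular spectrum method as the propagation algorithm, and all parameter values (wavelength, pixel pitch, depth range) and the 3D-CNN layers are illustrative placeholders.

```python
# Minimal sketch of mapping a hologram to a reconstructed volume and feeding it
# to a 3D-CNN. Assumptions: angular spectrum propagation, placeholder optics
# parameters, and a hypothetical toy classifier head.
import numpy as np
import torch
import torch.nn as nn


def angular_spectrum_propagate(hologram, wavelength, pixel_pitch, z):
    """Propagate a 2D wavefield by distance z using the angular spectrum method."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(hologram) * H)


def hologram_to_volume(hologram, wavelength, pixel_pitch, depths):
    """Reconstruct the hologram at several depths and stack the amplitudes into a volume."""
    slices = [np.abs(angular_spectrum_propagate(hologram, wavelength, pixel_pitch, z))
              for z in depths]
    return np.stack(slices, axis=0)  # shape: (num_depths, H, W)


# Hypothetical 3D-CNN consuming the reconstructed volume (not the paper's architecture).
model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, 2),  # e.g. a two-class problem
)

hologram = np.random.rand(256, 256)        # placeholder for a measured hologram
depths = np.linspace(-50e-6, 50e-6, 16)    # placeholder depth range around focus
# Defocus augmentation could be emulated by shifting this depth range by a random offset.
volume = hologram_to_volume(hologram, wavelength=532e-9, pixel_pitch=2e-6, depths=depths)
x = torch.from_numpy(volume).float()[None, None]   # (batch, channel, D, H, W)
logits = model(x)
```

In this sketch, the 2D-CNN baseline would instead receive a single reconstructed slice (or the raw hologram), while the 3D variant sees the whole depth stack, which is the distinction the abstract argues matters for decoding globally encoded volumetric information.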
Funder
Artificial Intelligence National Laboratory
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
4 articles.