Author:
Liu Shu-Bin, Xie Bing-Kun, Yuan Rong-Ying, Zhang Meng-Xuan, Xu Jian-Cheng, Li Lei, Wang Qiong-Hua
Abstract
High-performance imaging with parallel cameras is a worldwide challenge in computational optics. Existing solutions suffer from a fundamental trade-off among field of view (FOV), resolution, and bandwidth, in which system speed and FOV decrease as system scale increases. Inspired by the compound eyes of the mantis shrimp and by zoom cameras, we break these bottlenecks by proposing a deep learning-based parallel (DLBP) camera with an 8-μrad instantaneous FOV and 4× computational zoom at 30 frames per second. The DLBP camera captures 30-megapixel snapshots at 30 fps, yielding orders-of-magnitude reductions in system complexity and cost. Instead of directly capturing large-scale photographs, our interactive-zoom platform enhances resolution using deep learning. The proposed end-to-end model consists mainly of convolution layers, attention layers, and a deconvolution layer; it preserves more detail than well-known super-resolution methods while reconstructing images in real time, and it can be applied to any similar system without modification. Because computational zoom requires no additional drive or optical component, the DLBP camera offers unprecedented advantages in zoom response time (~100× improvement) over the comparison systems. With the experimental system described in this work, the DLBP camera provides a novel strategy for resolving the inherent contradiction among FOV, resolution, and bandwidth.
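The abstract names a convolution → attention → deconvolution pipeline for super-resolution. Below is a minimal NumPy sketch of those three stages for intuition only; the function names, the fixed sigmoid gate, and the nearest-neighbour stand-in for a learned transposed convolution are all illustrative assumptions, not the authors' trained model.

```python
import numpy as np

def conv2d(x, k):
    # Naive valid 2D convolution, single channel (stand-in for a learned conv layer).
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def channel_attention(feats):
    # Squeeze-and-excite-style gate (illustrative): global average pool per
    # channel, sigmoid, then rescale each channel. feats has shape (C, H, W).
    gap = feats.mean(axis=(1, 2))
    gate = 1.0 / (1.0 + np.exp(-gap))
    return feats * gate[:, None, None]

def deconv_upsample2x(x):
    # 2x spatial upsampling; a trained model would use a transposed
    # convolution here, nearest-neighbour expansion is the toy equivalent.
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

# Toy forward pass: feature extraction, channel gating, then upsampling.
patch = np.random.rand(8, 8)
feat = conv2d(patch, np.ones((3, 3)) / 9.0)          # (6, 6) smoothed features
gated = channel_attention(feat[None, ...])[0]        # attention over 1 channel
zoomed = deconv_upsample2x(gated)                    # (12, 12) upsampled output
```

In the actual DLBP model these stages are learned end to end and stacked over many channels; the sketch only shows how the three layer types compose into a resolution-enhancing forward pass.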
Funder
National Natural Science Foundation of China
Publisher
Springer Science and Business Media LLC
Subject
Atomic and Molecular Physics, and Optics; Electrical and Electronic Engineering; Engineering (miscellaneous)
Cited by
3 articles.