Abstract
We propose deep depth from focal stack (DDFS), which takes a focal stack as the input to a neural network that estimates scene depth. Defocus blur is a useful cue for depth estimation; however, the size of the blur depends not only on scene depth but also on camera settings such as focus distance, focal length, and f-number. Current learning-based methods without any defocus model cannot estimate a correct depth map when the camera settings differ between training and test time. Our method takes a plane sweep volume as input to encode the constraint among scene depth, defocus images, and camera settings, and this intermediate representation enables depth estimation with different camera settings at training and test times. This camera-setting invariance broadens the applicability of DDFS. The experimental results also indicate that our method is robust against a synthetic-to-real domain gap.
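The constraint the abstract refers to, between scene depth, defocus blur, and camera settings, is commonly expressed with the thin-lens model, in which the circle-of-confusion diameter is c = (f / N) · f · |d − d_f| / (d · (d_f − f)) for scene depth d, focus distance d_f, focal length f, and f-number N. The sketch below illustrates this relationship only; it is not the paper's implementation, and the function name, parameter values, and array shapes are hypothetical.

```python
import numpy as np

def coc_diameter(depth, focus_dist, focal_len, f_number):
    """Circle-of-confusion diameter under the thin-lens model.

    All distances share one unit (e.g., meters). This is an illustrative
    sketch of the depth/defocus/camera-settings constraint, not DDFS code.
    """
    aperture = focal_len / f_number  # aperture diameter A = f / N
    # c = A * f * |d - d_f| / (d * (d_f - f))
    return aperture * focal_len * np.abs(depth - focus_dist) / (
        depth * (focus_dist - focal_len)
    )

# Hypothetical usage: expected blur of each focal-stack slice at each
# depth hypothesis, the kind of per-hypothesis cue a plane sweep volume
# can encode.
depth_hypotheses = np.linspace(0.5, 5.0, 64)    # candidate depths (m)
focus_distances = np.array([0.7, 1.5, 3.0])     # one per stack slice (m)
blur = coc_diameter(depth_hypotheses[None, :],  # shape (3, 64)
                    focus_distances[:, None],
                    focal_len=0.05, f_number=2.0)
```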
Funder: Japan Society for the Promotion of Science
Publisher: Springer Science and Business Media LLC
Subject: Artificial Intelligence; Computer Vision and Pattern Recognition; Software