Adaptable 2D to 3D Stereo Vision Image Conversion Based on a Deep Convolutional Neural Network and Fast Inpaint Algorithm
Affiliation:
1. Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Krakow, Al. Mickiewicza 30, 30-059 Krakow, Poland
Abstract
Algorithms for converting 2D to 3D are gaining importance following the hiatus brought about by the discontinuation of 3D TV production; this is due to the high availability and popularity of virtual reality systems that use stereo vision. In this paper, several depth image-based rendering (DIBR) approaches using state-of-the-art single-frame depth generation neural networks and inpaint algorithms are proposed and validated, including a novel, very fast inpaint algorithm (FAST). FAST significantly exceeds the speed of currently used inpaint algorithms by reducing computational complexity, without degrading the quality of the resulting image. The role of the inpaint algorithm is to fill in pixels that are missing from the stereo pair estimated by DIBR. These missing pixels appear at the boundaries of areas that differ significantly in their estimated distance from the observer. In addition, we propose parameterizing DIBR with a single, easy-to-interpret adaptable parameter that can be adjusted online according to the preferences of the user viewing the visualization; this single parameter governs both the camera parameters and the maximum binocular disparity. The proposed solutions are also compared with a fully automatic 2D-to-3D mapping solution. The algorithm proposed in this work, which combines intuitive disparity steering, the foundational deep neural network MiDaS, and the FAST inpaint algorithm, received considerable acclaim from evaluators. Its mean absolute error does not differ in a statistically significant way from that of state-of-the-art approaches such as Deep3D and other DIBR-based approaches using different inpaint functions. Since both the source code and the generated videos are available for download, all experiments can be reproduced, and the algorithm can be applied to convert any selected video or single image.
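To make the DIBR idea described in the abstract concrete, the following minimal Python sketch shows one way such a pipeline can be organized: a per-pixel depth estimate (for example, from a MiDaS forward pass) is converted into a horizontal pixel shift bounded by a single disparity-strength parameter, and the resulting disocclusion holes are filled by a simple scanline fill. This is an illustrative sketch only, not the paper's implementation: the function and parameter names (synthesize_view, strength) are assumptions, and the naive hole filling merely stands in for a real inpaint algorithm such as the paper's FAST.

# Illustrative sketch only (not the paper's implementation): synthesize one view
# of a stereo pair from a single frame and its depth estimate via DIBR-style
# horizontal pixel shifting, then fill disocclusion holes with a naive
# scanline fill standing in for a real inpaint algorithm such as FAST.
import numpy as np


def synthesize_view(image, depth, strength=0.03, direction=+1):
    """Shift pixels horizontally in proportion to normalized inverse depth.

    image     -- H x W x 3 uint8 RGB frame
    depth     -- H x W float map, larger values = closer (as MiDaS outputs)
    strength  -- fraction of the image width used as the maximum disparity;
                 stands in for the paper's single adaptable parameter
    direction -- +1 for one eye, -1 for the other
    """
    h, w, _ = image.shape
    # Normalize depth to [0, 1] so that `strength` alone sets the disparity range.
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    shift = (direction * int(round(strength * w)) * d).astype(np.int32)

    view = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    cols = np.arange(w)
    for y in range(h):
        x_new = np.clip(cols + shift[y], 0, w - 1)
        view[y, x_new] = image[y, cols]   # forward warp (last write wins)
        filled[y, x_new] = True

    # Naive hole filling: propagate the last valid pixel along each scanline.
    for y in range(h):
        last = None
        for x in range(w):
            if filled[y, x]:
                last = view[y, x].copy()
            elif last is not None:
                view[y, x] = last
    return view


# Usage sketch: `depth` could be the output of a MiDaS forward pass on `frame`.
# left  = synthesize_view(frame, depth, strength=0.03, direction=+1)
# right = synthesize_view(frame, depth, strength=0.03, direction=-1)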
Subject
General Physics and Astronomy
Cited by
1 article.