Monocular Depth Estimation via Self-Supervised Self-Distillation
Authors:
Hu Haifeng 1, Feng Yuyang 1, Li Dapeng 1, Zhang Suofei 2, Zhao Haitao 2,3
Affiliations:
1. College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
2. College of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
3. Engineering Research Center of Health Service System Based on Ubiquitous Wireless Networks, Ministry of Education, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
Abstract
Self-supervised monocular depth estimation can achieve excellent performance in static environments because the multi-view consistency assumption holds during training. However, it is hard to maintain depth consistency in dynamic scenes because of the occlusions caused by moving objects. For this reason, we propose a method of self-supervised self-distillation for monocular depth estimation (SS-MDE) in dynamic scenes, in which a depth network with a multi-scale decoder and a lightweight pose network are designed to predict depth in a self-supervised manner from the disparity, the motion information, and the association between two adjacent frames in the image sequence. Meanwhile, in order to improve the depth estimation accuracy in static areas, pseudo-depth images generated by the LeReS network are used to provide pseudo-supervision, enhancing the refinement of depth in static areas. Furthermore, a forgetting factor is leveraged to alleviate the dependency on the pseudo-supervision. In addition, a teacher model is introduced to generate depth prior information, and a multi-view mask filter module is designed to perform feature extraction and noise filtering. This enables the student model to better learn the depth structure of dynamic scenes, enhancing the generalization and robustness of the entire model in a self-distillation manner. Finally, on four public datasets, the proposed SS-MDE method outperformed several state-of-the-art monocular depth estimation techniques, achieving an accuracy (δ1) of 89% with an error (AbsRel) of 0.102 on NYU-Depth V2 and an accuracy (δ1) of 87% with an error (AbsRel) of 0.111 on KITTI.
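The abstract's training objective — a self-supervised photometric loss combined with a pseudo-depth supervision term whose influence is reduced by a forgetting factor — can be sketched as follows. This is a minimal illustration of the general idea, not the authors' exact formulation: the forgetting factor is modeled here as an exponential decay `gamma ** epoch` on the pseudo-supervision weight, depths are flattened to plain lists, and the function names `l1` and `total_loss` are hypothetical.

```python
def l1(pred, target):
    """Mean absolute error between two equal-length depth lists."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def total_loss(photometric_loss, pred_depth, pseudo_depth, epoch, gamma=0.9):
    """Combine the self-supervised photometric loss with an L1 term to the
    LeReS pseudo-depth, weighted by a forgetting factor that shrinks each
    epoch so the model gradually stops relying on the pseudo-supervision.
    The exponential schedule is an illustrative assumption."""
    weight = gamma ** epoch
    return photometric_loss + weight * l1(pred_depth, pseudo_depth)
```

At epoch 0 the pseudo-supervision contributes at full strength; by late training its weight is close to zero, leaving the photometric term to dominate, which matches the stated goal of alleviating dependency on the pseudo-labels.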
Funder
National Natural Science Foundation of China
Cited by 1 article.