Unsupervised Depth Completion Guided by Visual Inertial System and Confidence
Affiliation:
1. School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001, China
Abstract
This paper addresses depth completion learning from sparse depth maps and RGB images. Specifically, we describe a real-time unsupervised depth completion method for dynamic scenes guided by a visual-inertial system and confidence. Our method better handles challenges such as occlusion in dynamic scenes, limited computational resources, and unlabeled training samples. The core of our method is a new compact network that uses image, pose, and confidence guidance to perform depth completion. Since visual-inertial information is the only source of supervision, we design a novel confidence-guided loss function. In particular, to address the pixel mismatches caused by object motion and occlusion in dynamic scenes, we divide images into static, dynamic, and occluded regions and design a loss function matched to each region. Experimental results on dynamic datasets and in real dynamic scenes show that this regularization alone is sufficient to train depth completion models. Our depth completion network exceeds the accuracy of prior unsupervised depth completion work while requiring only a small number of parameters.
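The region-matched, confidence-guided supervision described above can be sketched as a per-region weighted photometric residual. The sketch below is illustrative only: the function name, the fixed region weights, and the simple L1 residual are assumptions for clarity, not the paper's actual loss formulation.

```python
import numpy as np

def masked_photometric_loss(pred, target, confidence, masks):
    """Confidence-weighted photometric loss split over static, dynamic,
    and occluded region masks (illustrative, not the paper's exact loss)."""
    # Hypothetical per-region weights: static regions are fully trusted,
    # dynamic regions down-weighted, occluded pixels excluded entirely.
    weights = {"static": 1.0, "dynamic": 0.5, "occluded": 0.0}
    # Per-pixel residual, modulated by a per-pixel confidence map.
    residual = confidence * np.abs(pred - target)
    total, n_pixels = 0.0, 0
    for region, mask in masks.items():
        total += weights[region] * residual[mask].sum()
        n_pixels += int(mask.sum())
    # Average over all masked pixels to keep the loss scale stable.
    return total / max(n_pixels, 1)
```

In this sketch, occluded pixels contribute nothing to the gradient, which mirrors the idea that photometric consistency is meaningless where no valid correspondence exists.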
Funder
National Natural Science Foundation of China
Subject
Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics, and Optics, Analytical Chemistry
Cited by
1 article.
1. Autonomous Localization Method for Railway Trains Based on Multi-Source Information Fusion;2024 International Conference on Electronic Engineering and Information Systems (EEISS);2024-01-13