Abstract
Scientific research strength in the aerospace field has become an essential criterion for measuring a country's scientific and technological level and comprehensive national power, although many factors in this domain remain beyond direct human control. A well-known difficulty in rendezvous and docking with non-cooperative targets is that such targets cannot provide attitude information autonomously, and existing non-cooperative target pose estimation methods suffer from low accuracy and high resource consumption. This paper proposes a deep-learning-based pose estimation method to address these problems. The proposed method comprises two innovative components. First, You Only Look Once v5 (YOLOv5), a lightweight detection network, is used to pre-recognize non-cooperative targets. Second, concurrent spatial and channel squeeze-and-excitation modules are introduced into a lightweight High-Resolution Network (HRNet) to strengthen its real-time performance, yielding the spatial and channel Squeeze-and-Excitation Lightweight High-Resolution Network (scSE-LHRNet) for pose estimation. To verify the superiority of the proposed network, experiments were conducted on a publicly available dataset and compared against existing methods under multiple evaluation metrics. The experimental results show that the proposed method dramatically reduces model complexity, effectively decreases the amount of computation, and achieves strong pose estimation performance.
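The key architectural ingredient described in the abstract is the concurrent spatial and channel squeeze-and-excitation (scSE) block inserted into the lightweight HRNet backbone. Below is a minimal, illustrative PyTorch sketch of a generic scSE block in the spirit of Roy et al. (2018); the class names, reduction ratio, and max-based fusion are assumptions for illustration and are not taken from the paper's released code.

```python
# Illustrative scSE block: channel-wise and spatial recalibration applied
# concurrently to a feature map, then fused. Names and the fusion choice
# (element-wise max; addition is also common) are assumptions.
import torch
import torch.nn as nn


class ChannelSE(nn.Module):
    """Channel squeeze-and-excitation: global pooling + two FC layers."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-weight each channel


class SpatialSE(nn.Module):
    """Spatial squeeze-and-excitation: 1x1 conv producing a spatial mask."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))  # re-weight each location


class SCSE(nn.Module):
    """Concurrent scSE: fuse the two recalibrated maps element-wise."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.cse = ChannelSE(channels, reduction)
        self.sse = SpatialSE(channels)

    def forward(self, x):
        return torch.max(self.cse(x), self.sse(x))


if __name__ == "__main__":
    # Example: recalibrating one 32-channel HRNet branch feature map.
    feat = torch.randn(1, 32, 64, 48)
    print(SCSE(32)(feat).shape)  # torch.Size([1, 32, 64, 48])
```

In a detect-then-estimate pipeline like the one described, a block of this kind would sit inside the pose network's high-resolution branches; YOLOv5 would first localize the non-cooperative target, and the cropped region would then be passed to the scSE-augmented lightweight HRNet for keypoint/pose regression.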
Funder
National Science Foundation of Heilongjiang Province
National Science Foundation for Young Scientists of China