Affiliation:
1. Global Navigation Satellite System (GNSS) Research Center, Wuhan University, 299 Bayi Road, Wuchang District, Wuhan 430072, China
Abstract
Drones are typically built with integrated cameras and inertial measurement units (IMUs), and achieving drone attitude control through camera-based relative pose estimation is crucial. IMU drift can be ignored over short periods. Based on this premise, this paper proposes four methods for estimating relative pose and focal length across different application scenarios: when the camera’s focal length varies between adjacent moments and is unknown, the relative pose and focal length can be computed from four point correspondences; for planar motion where the focal length varies between adjacent moments and is unknown, they can be determined from three point correspondences; for planar motion where the focal length is equal between adjacent moments but unknown, they can be calculated from two point correspondences; and for scenarios where multiple cameras are employed for image acquisition but only one is calibrated, a proposed method estimates the pose and focal length of the uncalibrated cameras. The numerical stability and performance of these methods are compared and analyzed under various noise conditions on simulated datasets, and their performance is also assessed on real datasets captured by a drone in various scenes. The experimental results demonstrate that the proposed methods achieve superior accuracy and stability compared with classical methods.
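For context on the kind of classical baseline the abstract compares against, the sketch below implements the standard eight-point algorithm for estimating the essential matrix from calibrated (normalized) image correspondences. This is a generic textbook method, not one of the paper's proposed solvers; the function name and test setup are illustrative assumptions.

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Classical eight-point estimate of the essential matrix E from
    normalized image correspondences x1, x2 (N x 2 arrays, N >= 8),
    satisfying the epipolar constraint x2h^T E x1h = 0."""
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        # Each correspondence contributes one row of the linear system A e = 0.
        A[i] = [u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
    # The least-squares solution is the right singular vector of A
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential-matrix manifold: two equal singular
    # values and one zero singular value.
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```

Unlike this baseline, which assumes a fully calibrated camera, the paper's solvers also recover the unknown focal length from fewer correspondences by exploiting motion constraints (e.g., planar motion).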
Funder
National Key Research and Development Program of China
Ministry of Industry and Information Technology of China through the High-Precision Timing Service Project