Abstract
Precise position, velocity, and attitude estimates are essential for self-driving cars and unmanned aerial vehicles (UAVs). The integration of global navigation satellite system (GNSS) real-time kinematics (RTK) and inertial measurement units (IMUs) can provide high-accuracy navigation solutions in open-sky conditions, but the accuracy degrades severely in GNSS-challenged environments, especially when low-cost microelectromechanical system (MEMS) IMUs are used. To navigate in GNSS-denied environments, visual–inertial systems have been widely adopted for their complementary characteristics, but they suffer from error accumulation. In this contribution, we tightly integrate the raw measurements from single-frequency multi-GNSS RTK, a MEMS IMU, and a monocular camera through an extended Kalman filter (EKF) to enhance navigation performance in terms of accuracy, continuity, and availability. The visual measurement model from the well-known multistate constraint Kalman filter (MSCKF) is combined with the double-differenced GNSS measurement model to update the integration filter. A field vehicular experiment was carried out in GNSS-challenged environments to evaluate the performance of the proposed algorithm. Results indicate that both multi-GNSS and vision contribute significantly to centimeter-level positioning availability in GNSS-challenged environments. Meanwhile, velocity and attitude accuracy can be greatly improved by the tightly coupled multi-GNSS RTK/INS/Vision integration, especially for the yaw angle.
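The double-differenced GNSS measurement model mentioned above can be illustrated with a minimal sketch. The idea is that differencing observations between receivers cancels the satellite clock error, and differencing again between satellites cancels the receiver clock error. The function names, satellite IDs, and observation values below are hypothetical, for illustration only, and are not taken from the paper:

```python
def single_difference(rover_obs, base_obs):
    """Between-receiver single difference: cancels the satellite clock error."""
    return rover_obs - base_obs

def double_difference(rover, base, sat, ref_sat):
    """Between-satellite difference of two single differences.

    Also cancels the receiver clock error, leaving geometry, the
    integer carrier-phase ambiguity, and residual atmospheric and
    multipath terms as the remaining unknowns.
    """
    sd_sat = single_difference(rover[sat], base[sat])
    sd_ref = single_difference(rover[ref_sat], base[ref_sat])
    return sd_sat - sd_ref

# Hypothetical carrier-phase observations (in cycles); the reference
# satellite is typically the one at the highest elevation.
rover = {"G01": 12345678.25, "G05": 23456789.50}
base  = {"G01": 12345600.00, "G05": 23456700.00}

dd = double_difference(rover, base, "G01", ref_sat="G05")
print(dd)  # → -11.25
```

In a tightly-coupled filter such as the one described, residuals of these double-differenced observations (predicted from the INS-propagated position) would form part of the EKF measurement update alongside the MSCKF visual residuals.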
Funder
the National Key Research and Development Program of China
Subject
General Earth and Planetary Sciences
Cited by
100 articles.