Authors:
S. U. Suganthi, C. S. Hemanth Kumar, S. Aditya Gurjale, A. Mithunesh Rajan
Abstract
Navigation has traditionally served to determine one's position, locate destinations, and chart a course toward them, furnishing accurate information about where specific places or objects are. Despite numerous advances in navigation technology, there has been ongoing discussion about its potential for autonomy, that is, navigation that operates independently, without human intervention. Devices with this capability know their destination and chart the most efficient route to reach it. A crucial concept in this context is Visual Odometry (VO), which estimates the relative motion between successive image frames; the positioning of mobile robots relies on the same principle. A significant challenge, however, is that VO accumulates error over time, a phenomenon known as drift. To counteract this, an Inertial Measurement Unit (IMU), consisting of accelerometers, gyroscopes, and magnetometers, is added; its measurements improve accuracy and help suppress noise. Integrating the IMU with VO yields Visual-Inertial Odometry (VIO). Furthermore, fusing VIO with Global Positioning System (GPS) data through an Extended Kalman Filter (EKF) improves localization accuracy both locally and globally. Additionally, stereo disparity estimation is employed to generate a depth map for obstacle detection, which is then converted into a 2D occupancy grid map. A waypoint follower directs the robot toward its intended goal, while a local path-planning algorithm generates intermediate waypoints to avoid obstacles.
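The abstract describes fusing drifting but locally smooth VIO estimates with globally referenced GPS fixes through an Extended Kalman Filter. The sketch below only illustrates that idea under heavy simplification: it is not the authors' implementation, and it uses a linear constant-velocity model over a 2D position-velocity state rather than a full EKF over pose and IMU biases. The class name GpsVioEkf, the time step, and all noise covariances are illustrative assumptions.

import numpy as np

class GpsVioEkf:
    """Toy position-velocity filter fusing a drifting local VIO position
    with sparse, noisier GPS fixes. State: [x, y, vx, vy]."""

    def __init__(self, dt):
        self.x = np.zeros(4)                      # state estimate
        self.P = np.eye(4)                        # state covariance
        self.F = np.eye(4)                        # constant-velocity motion model
        self.F[0, 2] = dt
        self.F[1, 3] = dt
        self.Q = 0.01 * np.eye(4)                 # process noise (assumed)
        self.H = np.array([[1., 0., 0., 0.],      # both sensors observe (x, y)
                           [0., 1., 0., 0.]])

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, R):
        """Correct the state with a 2D position measurement z and its noise R."""
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

if __name__ == "__main__":
    ekf = GpsVioEkf(dt=0.05)
    for step in range(3):
        ekf.predict()
        ekf.update(np.array([0.1 * step, 0.0]), R=np.diag([0.02, 0.02]))  # frequent VIO position
    ekf.update(np.array([0.35, 0.05]), R=np.diag([1.5, 1.5]))             # occasional GPS fix
    print(ekf.x)

In a real VIO/GPS pipeline the measurement models are nonlinear (orientation, lever arms, geodetic-to-local conversion), which is what motivates the extended, rather than linear, Kalman filter.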
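The abstract also mentions converting stereo disparity into a depth map and then into a 2D occupancy grid for obstacle detection. The following sketch shows one common way to do that under a pinhole camera model; it is not taken from the paper, and the function names, camera intrinsics, baseline, cell size, grid dimensions, and the height band used to filter ground and overhead points are all illustrative assumptions.

import numpy as np

def disparity_to_depth(disparity, fx, baseline):
    """Depth in metres from disparity in pixels: Z = fx * B / d (invalid where d <= 0)."""
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0
    depth[valid] = fx * baseline / disparity[valid]
    return depth

def depth_to_occupancy(depth, fx, fy, cx, cy, cell_size=0.1, grid_dim=200,
                       min_height=-0.2, max_height=1.5):
    """Back-project depth pixels to camera-frame points and mark grid cells
    (rows = forward distance, columns = lateral offset) that contain obstacles."""
    grid = np.zeros((grid_dim, grid_dim), dtype=np.uint8)
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    finite = np.isfinite(depth)
    z = depth[finite]                              # forward distance
    x = (us[finite] - cx) * z / fx                 # lateral offset
    y = (vs[finite] - cy) * z / fy                 # image-down axis; -y is height
    band = (-y > min_height) & (-y < max_height)   # drop ground / overhead points
    gz = (z[band] / cell_size).astype(int)                  # forward cell index
    gx = (x[band] / cell_size).astype(int) + grid_dim // 2  # lateral index, centred
    ok = (gz >= 0) & (gz < grid_dim) & (gx >= 0) & (gx < grid_dim)
    grid[gz[ok], gx[ok]] = 1                       # 1 = occupied, 0 = free/unknown
    return grid

if __name__ == "__main__":
    d = np.zeros((240, 320))
    d[100:140, 150:170] = 25.0                     # synthetic obstacle at ~1.4 m
    depth = disparity_to_depth(d, fx=300.0, baseline=0.12)
    grid = depth_to_occupancy(depth, fx=300.0, fy=300.0, cx=160.0, cy=120.0)
    print(grid.sum(), "occupied cells")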