Affiliation:
1. Oxford University Mobile Robotics Group, Oxford, UK
Abstract
This paper is about long-term navigation in environments whose appearance changes over time, suddenly or gradually. We describe, implement and validate an approach which allows us to incrementally learn a model whose complexity varies naturally in accordance with variation of scene appearance. It allows us to leverage the state of the art in pose estimation to build, over many runs, a world model of sufficient richness to allow simple localisation despite a large variation in conditions. As our robot repeatedly traverses its workspace, it accumulates distinct visual experiences that, in concert, implicitly represent the scene variation: each experience captures a visual mode. When operating in a previously visited area, we continually try to localise in these previous experiences while simultaneously running an independent vision-based pose estimation system. Failure to localise in a sufficient number of prior experiences indicates an insufficient model of the workspace and instigates the laying down of the live image sequence as a new distinct experience. In this way, over time we can capture the typical time-varying appearance of an environment, and the number of experiences required tends to a constant. Although we focus on vision as a primary sensor throughout, the ideas we present here are equally applicable to other sensor modalities. We demonstrate our approach working on a road vehicle operating over a 3-month period at different times of day, in different weather and lighting conditions. We present extensive results analysing different aspects of the system and approach, in total processing over 136,000 frames captured from 37 km of driving.
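The core loop described in the abstract (attempt localisation against all stored experiences; when fewer than a threshold number succeed, lay down the live imagery as a new experience) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `Experience` class, the `localise` callback, and the `min_successful` threshold are hypothetical names standing in for the paper's localisers and its notion of a sufficient model.

```python
class Experience:
    """A stored visual experience: one recorded sequence of frames,
    implicitly capturing one appearance mode of the workspace."""
    def __init__(self):
        self.frames = []

def run_traverse(frames, experiences, localise, min_successful=2):
    """Process one traverse of the workspace.

    frames         -- iterable of live camera frames
    experiences    -- list of previously stored Experience objects (mutated)
    localise       -- callable (frame, experience) -> bool; True on a
                      successful localisation of the frame in that experience
    min_successful -- minimum number of prior experiences we must localise
                      in before the current model is deemed sufficient
    """
    new_exp = None  # experience currently being recorded, if any
    for frame in frames:
        successes = sum(1 for exp in experiences if localise(frame, exp))
        if successes >= min_successful:
            # Appearance is adequately modelled; close any open recording.
            if new_exp is not None:
                experiences.append(new_exp)
                new_exp = None
        else:
            # Insufficient model: record the live imagery as a new experience.
            if new_exp is None:
                new_exp = Experience()
            new_exp.frames.append(frame)
    if new_exp is not None:
        experiences.append(new_exp)
    return experiences
```

Because new experiences are only opened while localisation is failing and closed as soon as it recovers, repeated traverses under already-seen conditions add nothing, which is why the number of stored experiences tends to a constant once the typical appearance variation has been captured.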
Subject
Applied Mathematics, Artificial Intelligence, Electrical and Electronic Engineering, Mechanical Engineering, Modelling and Simulation, Software
Cited by
129 articles.
1. Terrain-based Place Recognition for Quadruped Robots with Limited Field-of-view LiDAR;2024 21st International Conference on Ubiquitous Robots (UR);2024-06-24
2. Appearance-invariant Visual Localization for Long-term Navigation;2024 10th International Conference on Electrical Engineering, Control and Robotics (EECR);2024-03-29
3. Assessing domain gap for continual domain adaptation in object detection;Computer Vision and Image Understanding;2024-01
4. BioSLAM: A Bioinspired Lifelong Memory System for General Place Recognition;IEEE Transactions on Robotics;2023-12
5. What to Learn: Features, Image Transformations, or Both?;2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS);2023-10-01