Affiliation:
1. School of Software Engineering, Tongji University, Shanghai, China
2. Artificial Intelligence Institute, Shanghai Jiao Tong University, Shanghai, China
3. Department of Computer and Information Science, University of Macau, Macau, China
Abstract
For the task of autonomous indoor parking, various Visual-Inertial Simultaneous Localization and Mapping (SLAM) systems are expected to achieve comparable results, benefiting from the complementary effects of visual cameras and Inertial Measurement Units. To compare these competing SLAM systems, publicly available datasets are necessary, offering an objective way to demonstrate the pros and cons of each system. However, the availability of such high-quality datasets is surprisingly limited, owing to the profound challenge of acquiring groundtruth trajectories in Global Positioning System (GPS)-denied indoor parking environments. In this article, we establish BeVIS, a large-scale Benchmark dataset with Visual (front-view), Inertial, and Surround-view sensors for evaluating the performance of SLAM systems developed for autonomous indoor parking; it is the first of its kind in which both the raw data and the groundtruth trajectories are available. In BeVIS, the groundtruth trajectories are obtained by tracking artificial landmarks scattered throughout the indoor parking environments, whose coordinates are recorded in a surveying manner with a high-precision Electronic Total Station. Moreover, the groundtruth trajectories are comprehensively evaluated in two respects: reprojection error and pose volatility. Apart from BeVIS, we propose VISSLAM-2, a novel tightly coupled semantic SLAM framework leveraging Visual (front-view), Inertial, and Surround-view sensor modalities, designed specifically for the task of autonomous indoor parking. It is the first work attempting to provide a general form for modeling various semantic objects on the ground. Experiments on BeVIS demonstrate the effectiveness of the proposed VISSLAM-2. Our benchmark dataset BeVIS is publicly available at https://shaoxuan92.github.io/BeVIS.
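As a concrete illustration of the reprojection-error metric mentioned in the abstract, the sketch below shows one way such an error could be computed for a single image, assuming surveyed 3D landmark coordinates, their detected 2D pixel observations, a pinhole intrinsic matrix, and a world-to-camera pose. This is not the authors' implementation; the function name and all toy values are our own assumptions.

# Minimal sketch of a per-image reprojection-error check (assumed, not from the article).
import numpy as np

def reprojection_error(points_w, pixels, K, R, t):
    """Mean pixel distance between projected surveyed landmarks and their detections.

    points_w : (N, 3) landmark coordinates, e.g. surveyed with an Electronic Total Station
    pixels   : (N, 2) detected landmark centers in the image
    K        : (3, 3) pinhole camera intrinsic matrix
    R, t     : world-to-camera rotation (3, 3) and translation (3,)
    """
    points_c = points_w @ R.T + t        # transform landmarks into the camera frame
    proj = points_c @ K.T                # apply the pinhole projection
    proj = proj[:, :2] / proj[:, 2:3]    # normalize by depth to obtain pixel coordinates
    return np.linalg.norm(proj - pixels, axis=1).mean()

# Hypothetical usage with illustrative values only:
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
points_w = np.array([[0.2, 0.1, 2.0], [-0.3, 0.0, 3.0]])
pixels = np.array([[400.0, 280.0], [240.0, 240.0]])
print(f"mean reprojection error: {reprojection_error(points_w, pixels, K, R, t):.2f} px")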
Funder
National Natural Science Foundation of China
Natural Science Foundation of Shanghai
Shanghai Science and Technology Innovation Plan
Dawn Program of Shanghai Municipal Education Commission
Shanghai Municipal Science and Technology Major Project
Fundamental Research Funds for the Central Universities
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Hardware and Architecture
Cited by
1 article.