METRIC—Multi-Eye to Robot Indoor Calibration Dataset

Published: 2023-05-29
Container-title: Information
Volume: 14, Issue: 6, Page: 314
ISSN: 2078-2489
Language: en

Author:
Davide Allegro 1, Matteo Terreran 1, Stefano Ghidoni 1
Affiliation:
1. Department of Information Engineering, University of Padova, Via Giovanni Gradenigo 6b, 35131 Padova, Italy
Abstract
Multi-camera systems are an effective solution for perceiving large areas or complex scenes with many occlusions. In such a setup, accurate camera network calibration is crucial in order to localize scene elements with respect to a single reference frame shared by all viewpoints of the network. This is particularly important in applications such as object detection and people tracking. Multi-camera calibration is also a critical requirement in several robotics scenarios, particularly those involving a robotic workcell equipped with a manipulator surrounded by multiple sensors. Within this scenario, robot-world hand-eye calibration is an additional crucial element for determining the exact position of each camera with respect to the robot, in order to provide information about the surrounding workspace directly to the manipulator. Despite the importance of the calibration process in the two scenarios outlined above, namely (i) a camera network and (ii) a camera network with a robot, there is a lack of standard datasets in the literature to evaluate and compare calibration methods. Moreover, the two problems are usually treated separately and tested on dedicated setups. In this paper, we propose a general standard dataset acquired in a robotic workcell where calibration methods can be evaluated in two use cases: camera network calibration and robot-world hand-eye calibration. The Multi-Eye To Robot Indoor Calibration (METRIC) dataset consists of over 10,000 synthetic and real images of ChArUco and checkerboard patterns, each rigidly attached to the robot end-effector, which was moved in front of the four cameras surrounding the manipulator from different viewpoints during image acquisition.
The real images in the dataset include several multi-view image sets captured by three different types of sensor network: Microsoft Kinect V2, Intel RealSense D455, and Intel RealSense LiDAR L515, in order to evaluate their advantages and disadvantages for calibration. Furthermore, to accurately analyze the effect of camera-robot distance on calibration, we acquired a comprehensive synthetic dataset, with associated ground truth, for three different camera network setups corresponding to three levels of calibration difficulty depending on the cell size. An additional contribution of this work is a comprehensive evaluation of state-of-the-art calibration methods on our dataset, highlighting their strengths and weaknesses, in order to outline benchmarks for the two aforementioned use cases.
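The robot-world hand-eye use case described above rests on a single kinematic constraint linking the pattern detections to the robot's forward kinematics. The following is a minimal NumPy sketch of that constraint on noise-free synthetic poses; the notation (`T_a_b` is the homogeneous transform of frame `b` expressed in frame `a`) and all numeric values are our own illustrative assumptions, not code or data from the paper.

```python
# Hypothetical sketch of the robot-world hand-eye constraint: the pattern is
# rigidly mounted on the end-effector and the camera is fixed in the workcell.
import numpy as np

def rot(axis, angle):
    """Rotation matrix from an axis and an angle (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def transform(R, t):
    """Build a 4x4 homogeneous transform from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Unknowns that a robot-world hand-eye method estimates (ground truth here):
T_cam_base = transform(rot([0, 0, 1], 0.4), [1.5, -0.2, 0.8])   # camera pose w.r.t. robot base
T_ee_target = transform(rot([1, 0, 0], 0.1), [0.0, 0.05, 0.1])  # pattern mounting offset on the end-effector

# For each robot pose i the dataset provides:
#   T_base_ee_i   from the robot's forward kinematics, and
#   T_cam_target_i from detecting the ChArUco/checkerboard pattern.
rng = np.random.default_rng(0)
residuals = []
for _ in range(5):
    T_base_ee = transform(rot(rng.normal(size=3), rng.uniform(0, np.pi)),
                          rng.uniform(-0.5, 0.5, size=3))
    # Simulated pattern detection, generated to be consistent with the chain:
    T_cam_target = T_cam_base @ T_base_ee @ T_ee_target
    # The constraint every calibration method must satisfy for all poses i:
    #   T_cam_target_i = T_cam_base @ T_base_ee_i @ T_ee_target
    residuals.append(np.linalg.norm(T_cam_target - T_cam_base @ T_base_ee @ T_ee_target))

print(max(residuals))  # ~0 on noise-free synthetic poses
```

On real images the pattern detections are noisy, so a calibration method estimates `T_cam_base` and `T_ee_target` by minimizing this residual over all poses rather than satisfying it exactly.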
Funder
European Union’s Horizon 2020 research and innovation program
Subject
Information Systems