Visual Odometry of a Low-Profile Pallet Robot Based on Ortho-Rectified Ground Plane Image from Fisheye Camera
Published: 2023-08-09
Issue: 16
Volume: 13
Page: 9095
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Author:
Park Soon-Yong 1, Lee Ung-Gyo 2, Baek Seung-Hae 3
Affiliation:
1. School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehak-ro, Puk-gu, Daegu 41566, Republic of Korea
2. HL Klemove, 224 Harmony-ro, Yeonsu-gu, Incheon 22011, Republic of Korea
3. KLA Corporation, 830 Dongtansunwhan-daero, Hwaseong-si 18468, Republic of Korea
Abstract
This study presents a visual-only odometry technique for a low-profile pallet robot based on image feature tracking in ground plane images generated from a fisheye camera. Fisheye cameras are widely used in robot vision applications because they provide a large field of view (FoV) around the robot. However, because of their large radial distortion, fisheye images are generally converted to pinhole images before visual feature tracking or matching. Although the radial distortion can be eliminated by undistorting the image with the lens calibration parameters, this undistortion has several side effects, such as degraded image resolution and a significant reduction in the FoV. In this paper, instead of using the pinhole model, we propose generating a ground plane image (GPI) from the fisheye image. The GPI is a virtual top-view image that contains only the ground plane in front of the robot. First, the original fisheye image is projected onto several virtual pinhole images to generate a cubemap. Second, the front and bottom faces of the cubemap are projected onto the GPI. Third, the GPI is transformed again by a homography to further reduce image distortion. As a result, an accurate ortho-rectified ground plane image is obtained from the virtual top-view camera. For visual odometry using the ortho-rectified GPI, a number of 2D motion vectors are obtained by extracting and tracking features between the previous and current GPI frames. By scaling these motion vectors, which serve as the measurements of a virtual wheel encoder of the mobile robot, we estimate the velocity and steering angle of the virtual wheel. Finally, we estimate the pose of the mobile robot by applying a kinematic model.
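The GPI construction described in the abstract (fisheye image, cubemap of virtual pinhole views, top-view projection, homography refinement) can be sketched in simplified form. The snippet below is a minimal sketch, not the authors' implementation: it assumes OpenCV fisheye calibration parameters `K` and `D` (placeholder values), collapses the cubemap step into a single virtual-pinhole reprojection, and ortho-rectifies with a homography estimated from four floor markers whose metric positions are assumed known.

```python
import cv2
import numpy as np

# Placeholder fisheye calibration (would come from cv2.fisheye.calibrate in practice).
K = np.array([[300.0, 0.0, 640.0],
              [0.0, 300.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.zeros((4, 1))  # k1..k4 fisheye distortion coefficients

def fisheye_to_pinhole(img_fisheye, balance=0.0):
    """Reproject the fisheye image onto a virtual pinhole camera."""
    h, w = img_fisheye.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(img_fisheye, map1, map2, interpolation=cv2.INTER_LINEAR)

def pinhole_to_gpi(img_pinhole, floor_pts_px, floor_pts_m, px_per_m=200, size=(800, 800)):
    """Warp the pinhole image to an ortho-rectified ground plane image (GPI).

    floor_pts_px: four image points of markers lying on the floor.
    floor_pts_m:  the same four points in metres, in a ground-plane frame.
    """
    dst = np.float32(floor_pts_m) * px_per_m            # metric -> GPI pixels
    H, _ = cv2.findHomography(np.float32(floor_pts_px), dst)
    return cv2.warpPerspective(img_pinhole, H, size)
```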
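For the motion-vector step, a hedged sketch of feature extraction and tracking between consecutive GPIs is shown below. The detector/tracker choice (Shi–Tomasi corners with pyramidal Lucas–Kanade), the use of a median flow vector, and the GPI axis convention are assumptions for illustration, not details taken from the paper.

```python
def gpi_motion_vector(gpi_prev, gpi_curr):
    """Return a robust average 2D motion vector (in GPI pixels) between two frames."""
    gray_prev = cv2.cvtColor(gpi_prev, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(gpi_curr, cv2.COLOR_BGR2GRAY)
    p0 = cv2.goodFeaturesToTrack(gray_prev, maxCorners=200,
                                 qualityLevel=0.01, minDistance=10)
    if p0 is None:
        return None
    p1, status, _ = cv2.calcOpticalFlowPyrLK(gray_prev, gray_curr, p0, None)
    good = status.ravel() == 1
    if not np.any(good):
        return None
    flow = (p1[good] - p0[good]).reshape(-1, 2)
    return np.median(flow, axis=0)

def motion_to_wheel_measurement(flow_px, px_per_m, dt):
    """Scale the GPI motion vector to a virtual-wheel speed and steering angle."""
    dx, dy = flow_px / px_per_m        # displacement on the ground plane [m]
    v = np.hypot(dx, dy) / dt          # virtual wheel speed [m/s]
    delta = np.arctan2(dx, -dy)        # steering angle, assuming forward = -y in the GPI
    return v, delta
```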
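The final step applies a kinematic model to the virtual-wheel measurements to obtain the robot pose. A minimal sketch using a standard rear-axle kinematic bicycle model with simple Euler integration is given below; the wheelbase value and the choice of reference point are illustrative assumptions, not parameters from the paper.

```python
def update_pose(pose, v, delta, dt, wheelbase=1.0):
    """One Euler step of a kinematic bicycle model (rear-axle reference).

    pose = (x, y, theta); v and delta are the virtual-wheel measurements.
    wheelbase=1.0 m is a placeholder, not a value from the paper.
    """
    x, y, theta = pose
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += (v / wheelbase) * np.tan(delta) * dt
    return (x, y, theta)
```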
Funder
Ministry of Education
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science