Camera localization with Siamese neural networks using iterative relative pose estimation
Authors:
Kim Daewoon 1,
Ko Kwanghee 1
Affiliation:
1. The School of Mechanical Engineering, Gwangju Institute of Science and Technology (GIST), Gwangju 61005, Republic of Korea
Abstract
This paper presents a novel deep learning-based camera localization method that uses iterative relative pose estimation to improve the accuracy of pose estimation from a single RGB image. Although most existing deep learning-based camera localization methods are more robust to textureless scenes, illumination changes, and occlusions, they are less accurate than non-deep-learning-based methods. The proposed method improves localization accuracy by exploiting the relative poses between the input image and images in the training dataset. A Siamese network is trained simultaneously for the absolute poses of the input images and their relative poses. In the inference stage, the absolute pose of a query image is first estimated and then iteratively updated using the relative pose information. Real-world experiments with widely used camera localization datasets and our own dataset were conducted to validate the proposed method, which exhibited higher localization accuracy than state-of-the-art deep learning-based camera localization methods. Finally, an application of the proposed method to augmented reality is presented.
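The inference procedure described above (estimate an absolute pose, then iteratively refine it via predicted relative poses to nearby training images) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the functions `absolute_branch` and `relative_branch` are hypothetical stand-ins for the two Siamese network heads, simulated here with known ground truth so the loop is runnable.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous pose matrix from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical stand-ins for the trained network branches (assumptions,
# not the authors' code): they are simulated with ground-truth poses.
def absolute_branch(true_pose, noise):
    """Simulate the absolute-pose regressor: a noisy pose estimate."""
    est = true_pose.copy()
    est[:3, 3] += noise  # perturb translation to mimic regression error
    return est

def relative_branch(true_pose, db_pose):
    """Simulate the relative-pose regressor: pose of the query
    expressed relative to a database (training) image."""
    return np.linalg.inv(db_pose) @ true_pose

def iterative_localization(db_poses, true_pose, noise, n_iters=3):
    """Estimate the absolute pose, then iteratively refine it using the
    relative pose to the nearest database image (nearest by the current
    pose estimate)."""
    pose = absolute_branch(true_pose, noise)
    for _ in range(n_iters):
        # Select the training image whose pose is closest to the estimate.
        dists = [np.linalg.norm(p[:3, 3] - pose[:3, 3]) for p in db_poses]
        db_pose = db_poses[int(np.argmin(dists))]
        # Update: compose the neighbor's known pose with the predicted
        # relative pose to obtain a refined absolute pose.
        pose = db_pose @ relative_branch(true_pose, db_pose)
    return pose
```

Because the simulated relative branch is exact, the loop converges in one iteration here; in practice both branches are noisy, which is why the update is iterated.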
Funder
Institute of Civil-Military Technology Cooperation
Defense Acquisition Program Administration
Ministry of Trade, Industry and Energy, Korea
Publisher
Oxford University Press (OUP)
Subject
Computational Mathematics, Computer Graphics and Computer-Aided Design, Human-Computer Interaction, Engineering (miscellaneous), Modeling and Simulation, Computational Mechanics