Large-Scale Indoor Camera Positioning Using Fiducial Markers
Authors:
Pablo García-Ruiz 1, Francisco J. Romero-Ramirez 2, Rafael Muñoz-Salinas 1,3, Manuel J. Marín-Jiménez 1,3, Rafael Medina-Carnicer 1,3
Affiliations:
1. Departamento de Informática y Análisis Numérico, Edificio Einstein, Campus de Rabanales, Universidad de Córdoba, 14071 Córdoba, Spain
2. Departamento de Teoría de la Señal y Comunicaciones y Sistemas Telemáticos y Computación, Campus de Fuenlabrada, Universidad Rey Juan Carlos, 28942 Fuenlabrada, Spain
3. Instituto Maimónides de Investigación en Biomedicina (IMIBIC), Avenida Menéndez Pidal s/n, 14004 Córdoba, Spain
Abstract
Estimating the pose of a large set of fixed indoor cameras is a requirement for certain applications in augmented reality, autonomous navigation, video surveillance, and logistics. However, accurately mapping the positions of these cameras remains an unsolved problem. While providing partial solutions, existing alternatives are limited by their dependence on distinct environmental features, the requirement for large overlapping camera views, or other restrictive operating conditions. This paper introduces a novel approach to estimating the pose of a large set of cameras using a small subset of fiducial markers printed on regular pieces of paper. By placing the markers in areas visible to multiple cameras, we can obtain an initial estimation of the pairwise spatial relationship between the cameras. The markers can be moved throughout the environment to obtain the relationships between all cameras, thus creating a graph connecting them. In the final step, our method performs a full optimization, minimizing the reprojection errors of the observed markers and enforcing physical constraints, such as camera and marker coplanarity and control points. We validated our approach using novel artificial and real datasets with varying levels of complexity. Our experiments demonstrated superior performance over existing state-of-the-art techniques and increased effectiveness in real-world applications. Accompanying this paper, we provide the research community with access to our code, tutorials, and an application framework to support the deployment of our methodology.
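To give a concrete sense of the pairwise step summarized in the abstract, the following is a minimal sketch, not the authors' released code: two fixed cameras that both observe the same printed fiducial marker each recover a camera-from-marker pose via PnP, and composing the two poses gives the relative pose between the cameras, i.e., one edge of the camera graph that is later refined by the global optimization. It assumes OpenCV with the ArUco module (version 4.7 or newer for cv2.aruco.ArucoDetector); the marker side length, dictionary choice, and the intrinsics K/dist are illustrative placeholders.

```python
# Minimal sketch of pairwise camera-to-camera pose estimation via a shared
# ArUco marker. Illustrative only; not the paper's implementation.
import cv2
import numpy as np

MARKER_LENGTH = 0.20  # marker side length in metres (assumed value)

# 3D corners of a square marker centred at its own origin, in the corner
# order returned by the ArUco detector (TL, TR, BR, BL).
OBJ_POINTS = np.array([
    [-MARKER_LENGTH / 2,  MARKER_LENGTH / 2, 0.0],
    [ MARKER_LENGTH / 2,  MARKER_LENGTH / 2, 0.0],
    [ MARKER_LENGTH / 2, -MARKER_LENGTH / 2, 0.0],
    [-MARKER_LENGTH / 2, -MARKER_LENGTH / 2, 0.0],
], dtype=np.float32)

DETECTOR = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))


def marker_in_camera(image, K, dist):
    """4x4 transform taking marker coordinates into this camera's frame,
    or None if no marker is detected (assumes one marker per view)."""
    corners, ids, _ = DETECTOR.detectMarkers(image)
    if ids is None:
        return None
    ok, rvec, tvec = cv2.solvePnP(OBJ_POINTS, corners[0], K, dist)
    if not ok:
        return None
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = tvec.ravel()
    return T  # T_cam<-marker


def relative_pose(img_a, img_b, K_a, dist_a, K_b, dist_b):
    """Pose of camera B expressed in camera A's frame, via the shared marker."""
    T_a = marker_in_camera(img_a, K_a, dist_a)   # T_camA<-marker
    T_b = marker_in_camera(img_b, K_b, dist_b)   # T_camB<-marker
    if T_a is None or T_b is None:
        return None
    return T_a @ np.linalg.inv(T_b)              # T_camA<-camB
```

In the method described by the paper, such pairwise estimates are accumulated as markers are moved through the environment until the camera graph is connected, and the final poses are then obtained by jointly minimizing reprojection error under the stated physical constraints; the sketch above covers only the single-edge building block.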
Funder
Spanish Ministry of Economy, Industry and Competitiveness and FEDER NextGeneration/PRTR