Authors:
Jia Shoujun, Liu Chun, Wu Hangbin, Chen Chen
Abstract
Visual localization and mapping have received considerable attention in the fields of computer vision, photogrammetry, and remote sensing, and image matching is key to both. However, lighting in real scenarios is complex and uneven, which makes feature extraction and matching difficult; the resulting feature mismatches and losses reduce the efficiency, accuracy, and robustness of visual localization and mapping. We developed a visual localization and mapping method for complex light scenarios based on image enhancement. Starting from the initial images, the irradiance and reflectance components were separated using a logarithmic transformation. Our method strengthened the high-frequency components and suppressed the low-frequency components with an improved homomorphic filter, restraining the illumination component and enhancing the more informative reflectance component. The SIFT algorithm was then used for feature detection and matching. The proposed method was tested on images with uneven illumination captured by a stereo vision camera in an indoor environment, focusing on visual localization and mapping. The experimental results show that the method improved the image localization rate and the number of reconstructed point clouds, and reduced the average reprojection error from 0.85 to 0.82. Thus, the proposed method is robust, rather than probabilistic, in improving visual localization and mapping under complex light conditions.
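The following is a minimal sketch of the kind of pipeline the abstract describes: a homomorphic-filtering enhancement step (log transform to separate illumination and reflectance, high-frequency emphasis and low-frequency suppression in the Fourier domain) followed by SIFT feature detection and matching. It is not the authors' exact "improved homomorphic filtering"; the Gaussian high-emphasis transfer function and the parameters gamma_l, gamma_h, c, d0, and the ratio-test threshold are illustrative assumptions, implemented here with standard NumPy and OpenCV calls.

```python
import cv2
import numpy as np

def homomorphic_enhance(gray, gamma_l=0.5, gamma_h=1.5, c=1.0, d0=30.0):
    """Suppress the low-frequency illumination component and boost the
    high-frequency reflectance component (classic homomorphic filtering;
    parameter values are illustrative, not the paper's improved filter)."""
    img = gray.astype(np.float64) + 1.0              # avoid log(0)
    log_img = np.log(img)                            # i(x,y)*r(x,y) -> log i + log r
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))

    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    V, U = np.meshgrid(v, u)
    d2 = U**2 + V**2                                 # squared distance from the spectrum centre
    # Gaussian high-emphasis transfer function: gamma_l at low frequencies,
    # rising toward gamma_h at high frequencies
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-c * d2 / (d0**2))) + gamma_l

    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(H * spectrum)))
    enhanced = np.exp(filtered) - 1.0                # back from the log domain
    return cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def match_sift(img1, img2, ratio=0.75):
    """Detect and match SIFT features between two enhanced images,
    keeping matches that pass Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return kp1, kp2, good
```

In this sketch the enhanced image pair would be fed to `match_sift`, and the resulting correspondences passed to the usual stereo localization and mapping stages (pose estimation and triangulation), which the abstract does not detail.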