Target Localization Method Based on Image Degradation Suppression and Multi-Similarity Fusion in Low-Illumination Environments
Published: 2023-07-27
Issue: 8
Volume: 12
Page: 300
ISSN: 2220-9964
Container-title: ISPRS International Journal of Geo-Information
Language: en
Short-container-title: IJGI
Author:
Tang Huapeng (1), Qin Danyang (1,2) ORCID, Yang Jiaqiang (1), Bie Haoze (1), Yan Mengying (1), Zhang Gengxin (1), Ma Lin (3) ORCID
Affiliation:
1. Department of Electronic and Communication Engineering, Heilongjiang University, Harbin 150080, China
2. National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China
3. Department of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150080, China
Abstract
Frame buildings are important nodes of urban space. They include high-speed railway stations, airports, residences, and office buildings, and they carry a wide variety of activities and functions. Owing to unreasonable lighting layouts and mutual occlusion between complex objects, low-illumination conditions frequently arise in these architectural environments, and the location information of a target then becomes difficult to determine; changes in the indoor electromagnetic environment also affect the target's location information. This paper therefore adopts a vision-based method that achieves target localization in low-illumination environments through feature matching against images collected in the offline state. However, images acquired under low illumination suffer from serious quality degradation, such as low brightness, low contrast, color distortion, and noise interference. As a result, local features are missing from the collected images, matching against the offline database images fails, and the location information of the target cannot be determined. Therefore, a Visual Localization with Multiple-Similarity Fusions (VLMSF) method is proposed, built on Nonlinear Enhancement And Local Mean Filtering (NEALMF) preprocessing enhancement. The NEALMF method addresses the problem of missing local features by improving the quality of the acquired images, thereby improving the robustness of the visual positioning system, while the VLMSF method addresses the low matching accuracy of similarity retrieval methods by extracting and matching feature information effectively. Experiments show that the average localization error of the VLMSF method is only 8 cm, which is 33.33% lower than that of the Keras-based VGG-16 similarity retrieval method and 75.76% lower than that of the Perceptual hash (Phash) retrieval method. The results show that the proposed method greatly alleviates the influence of low illumination on visual methods, thus helping city managers accurately grasp the location information of targets under complex illumination conditions.
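The abstract describes the pipeline only at a high level: NEALMF-style enhancement of the degraded query image, followed by retrieval against an offline image database using a fusion of several similarity measures. The sketch below is an illustrative Python/OpenCV approximation of that flow under stated assumptions, not the paper's implementation: the gamma-style nonlinear enhancement, the 3x3 local mean filter, and the choice of perceptual-hash plus ORB feature-matching similarities with fixed fusion weights all stand in for the NEALMF transfer function and VLMSF similarity measures, which are not specified here.

```python
# Illustrative sketch of a low-illumination visual localization pipeline:
# nonlinear enhancement + local mean filtering (NEALMF-style preprocessing),
# then retrieval from an offline database by fusing two similarity scores.
# All concrete choices below (gamma curve, kernel size, pHash + ORB, weights)
# are assumptions, not the paper's actual NEALMF/VLMSF design.
import cv2
import numpy as np

def enhance_low_light(img_bgr, gamma=0.5, kernel=3):
    """Nonlinear (gamma-style) brightness enhancement followed by local mean filtering."""
    norm = img_bgr.astype(np.float32) / 255.0
    enhanced = np.power(norm, gamma)                  # gamma < 1 brightens dark regions
    enhanced = (enhanced * 255.0).astype(np.uint8)
    return cv2.blur(enhanced, (kernel, kernel))       # mean filter suppresses amplified noise

def phash(img_bgr, hash_size=8):
    """Perceptual hash: DCT of a downscaled grayscale image, thresholded at the median."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (hash_size * 4, hash_size * 4)).astype(np.float32)
    dct = cv2.dct(small)[:hash_size, :hash_size]
    return (dct > np.median(dct)).flatten()

def phash_similarity(h1, h2):
    """1 minus the normalized Hamming distance between two hash bit vectors."""
    return 1.0 - np.count_nonzero(h1 != h2) / h1.size

def orb_similarity(img1_bgr, img2_bgr, max_features=500):
    """Fraction of ORB descriptors that find a cross-checked match between the two images."""
    orb = cv2.ORB_create(max_features)
    g1 = cv2.cvtColor(img1_bgr, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2_bgr, cv2.COLOR_BGR2GRAY)
    _, d1 = orb.detectAndCompute(g1, None)
    _, d2 = orb.detectAndCompute(g2, None)
    if d1 is None or d2 is None or len(d1) == 0 or len(d2) == 0:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    return len(matches) / max(len(d1), len(d2))

def localize(query_bgr, database, w_hash=0.4, w_feat=0.6):
    """Return the position of the database image with the highest fused similarity.

    database: list of (reference_image_bgr, (x, y)) pairs collected offline.
    """
    query = enhance_low_light(query_bgr)
    qh = phash(query)
    best_pos, best_score = None, -1.0
    for ref_img, position in database:
        score = w_hash * phash_similarity(qh, phash(ref_img)) \
              + w_feat * orb_similarity(query, ref_img)
        if score > best_score:
            best_score, best_pos = score, position
    return best_pos, best_score
```

A weighted sum is used for the fusion step simply because it is the most common way to combine heterogeneous similarity scores; the paper's actual fusion rule, similarity measures, and weights may differ.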
Funder
Open Research Fund of National Mobile Communications Research Laboratory, Southeast University; Outstanding Youth Program of Natural Science Foundation of Heilongjiang Province; National Natural Science Foundation of China; Fundamental Scientific Research Funds of Heilongjiang Province
Subject
Earth and Planetary Sciences (miscellaneous); Computers in Earth Sciences; Geography, Planning and Development
Cited by
1 article.