Co-Visual Pattern-Augmented Generative Transformer Learning for Automobile Geo-Localization
Published: 2023-04-22
Volume: 15
Issue: 9
Page: 2221
ISSN: 2072-4292
Container-title: Remote Sensing
Short-container-title: Remote Sensing
Language: en
Author:
Zhao Jianwei 1,2, Zhai Qiang 1,2, Zhao Pengbo 3, Huang Rui 1,2, Cheng Hong 1,2
Affiliation:
1. School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2. Center for Robotics, University of Electronic Science and Technology of China, Chengdu 611731, China
3. McCormick School of Engineering, Northwestern University, Evanston, IL 60611, USA
Abstract
Geolocation is a fundamental component of route planning and navigation for unmanned vehicles, but GNSS-based geolocation fails under denial-of-service conditions. Cross-view geo-localization (CVGL), which aims to estimate the geographic location of a ground-level camera by matching its image against a large database of geo-tagged aerial (e.g., satellite) images, has received considerable attention but remains extremely challenging due to the drastic appearance differences across aerial–ground views. Existing methods primarily extract global representations of the two views with Siamese-like architectures, seldom exploiting the interaction between them. In this paper, we present a novel approach that combines cross-view knowledge generation with transformers, namely mutual generative transformer learning (MGTL), for CVGL. Specifically, taking the initial representations produced by the backbone network, MGTL develops two separate generative sub-modules: one generates aerial-aware knowledge from ground-view semantics, and the other does the reverse, with the mutual benefits fully exploited through the attention mechanism. Moreover, to better capture the co-visual relationships between aerial and ground views, we introduce a cascaded attention masking algorithm that further boosts accuracy. Extensive experiments on challenging public benchmarks, i.e., CVACT and CVUSA, demonstrate the effectiveness of the proposed method, which sets new records compared with existing state-of-the-art models. Our code will be available upon acceptance.
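The mutual generation idea described above, where each view's representation is enriched by attending over the other view's semantics, can be sketched as plain scaled dot-product cross-attention. This is an illustrative assumption, not the paper's implementation; the function name, token counts, and feature dimension are made up for the example.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries from one view
    attend over keys/values from the other view."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)           # (Nq, Nk) similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ values                          # (Nq, d)

rng = np.random.default_rng(0)
ground = rng.standard_normal((16, 64))  # 16 ground-view tokens, dim 64
aerial = rng.standard_normal((16, 64))  # 16 aerial-view tokens, dim 64

# Mutual generation: each view queries the other view's tokens,
# producing a representation aware of the opposite view.
aerial_aware = cross_attention(ground, aerial, aerial)  # ground queries aerial
ground_aware = cross_attention(aerial, ground, ground)  # aerial queries ground
print(aerial_aware.shape, ground_aware.shape)  # (16, 64) (16, 64)
```

In the paper's full model these two directions run as separate generative sub-modules inside a transformer, and a cascaded attention mask restricts attention to co-visual regions; the sketch above shows only the unmasked cross-view attention step.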
Funder
National Natural Science Foundation of China; National Key Research and Development Program of China
Subject
General Earth and Planetary Sciences
References: 81 articles.
1. Saurer, O. Image-based geo-localization in the Alps. Int. J. Comput. Vis., 2016.
2. Senlet, T., and Elgammal, A. (2012, January 14–19). Satellite image-based precise robot localization on sidewalks. Proceedings of the IEEE International Conference on Robotics and Automation, St. Paul, MN, USA.
3. Xiao, Y. Multimodal end-to-end autonomous driving. IEEE Trans. Intell. Transp. Syst., 2020.
4. Wang, S., Zhang, Y., and Li, H. (2022). Satellite image based cross-view localization for autonomous vehicle. arXiv.
5. Thoma, J., Paudel, D.P., Chhatkuli, A., Probst, T., and Gool, L.V. (2019, October 27–November 2). Mapping, localization and path planning for image-based navigation using visual features and map. Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea.
Cited by: 4 articles.