Thangka Image–Text Matching Based on Adaptive Pooling Layer and Improved Transformer
Published: 2024-01-17
Issue: 2
Volume: 14
Page: 807
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Author:
Wang Kaijie 1, Wang Tiejun 1, Guo Xiaoran 1, Xu Kui 1, Wu Jiao 1
Affiliation:
1. Key Laboratory of China’s Ethnic Languages and Information Technology of Ministry of Education, Northwest Minzu University, Lanzhou 730030, China
Abstract
Image–text matching is a research hotspot in multimodal tasks that integrate image and text processing. To address the difficulty of associating image and text data in the Thangka multimodal knowledge graph, we propose an image–text matching method based on the Visual Semantic Embedding (VSE) model. The method introduces an adaptive pooling layer to better capture the semantic associations between Thangka images and texts. We also improve the traditional Transformer architecture by combining bidirectional residual concatenation with masked attention mechanisms, which stabilizes the matching process and strengthens the extraction of semantic information. In addition, we design a multi-granularity tag alignment module that maps global and local features of images and text into a common coding space, leveraging inter- and intra-modal semantic associations to improve image–text matching accuracy. Comparative experiments on the Thangka dataset show that our method achieves significant improvements over the VSE baseline: recall increases by 9.4% for image-to-text matching and by 10.5% for text-to-image matching. Furthermore, without any large-scale corpus pre-training, our method outperforms all non-pre-trained models and two of the four pre-trained models on the public Flickr30k dataset. The execution efficiency of our model is also an order of magnitude higher than that of the pre-trained models, highlighting the superior performance and efficiency of our model in the image–text matching task.
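To make the adaptive pooling idea described in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of a learned, softmax-weighted pooling layer that aggregates a variable number of local region or token features into a single joint-space embedding, in the spirit of VSE-style matching models. The class and parameter names (AdaptivePooling, feats, mask) and the feature shapes are assumptions for illustration, not the authors' implementation.

# A minimal sketch (not the authors' code) of an adaptive pooling layer for
# VSE-style image-text matching: it learns how to weight and aggregate local
# features instead of using fixed average or max pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptivePooling(nn.Module):
    """Learned softmax-weighted pooling over a set of local features."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores each local feature

    def forward(self, feats: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_local, dim); mask: (batch, n_local), True where valid
        logits = self.score(feats).squeeze(-1)             # (batch, n_local)
        logits = logits.masked_fill(~mask, float("-inf"))  # ignore padded regions
        weights = F.softmax(logits, dim=-1)                # adaptive per-feature weights
        pooled = torch.bmm(weights.unsqueeze(1), feats).squeeze(1)  # (batch, dim)
        return F.normalize(pooled, dim=-1)                 # unit-norm embedding for cosine matching

# Usage: pool 36 detected-region features of dimension 1024 for a batch of 2 images.
if __name__ == "__main__":
    pool = AdaptivePooling(dim=1024)
    regions = torch.randn(2, 36, 1024)
    valid = torch.ones(2, 36, dtype=torch.bool)
    print(pool(regions, valid).shape)  # torch.Size([2, 1024])

The same pooling module can be applied symmetrically to text-side token features before both modalities are compared by cosine similarity in the shared embedding space.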
Funder
National Natural Science Foundation of China; Natural Science Foundation of Gansu Province; Northwest University for Nationalities' Talent Recruitment Scientific Research Project
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science