An effective spatial relational reasoning network for visual question answering

Authors

Shen Xiang, Han Dezhi, Chen Chongqing, Luo Gaofeng, Wu Zhongdai

Abstract

Visual Question Answering (VQA) is the task of answering natural-language questions about the content of an image, and it has attracted wide attention from researchers. Existing VQA models focus mainly on attention mechanisms and multi-modal fusion; during image modeling they attend only to the visual semantic features of the image and ignore the importance of modeling the spatial relationships among visual objects. To address these problems, we propose an effective spatial relationship reasoning network that combines visual-object semantic reasoning with spatial relationship reasoning to achieve fine-grained multi-modal reasoning and fusion. In the semantic reasoning module, a sparse attention encoder is designed to capture contextual information effectively. In the spatial relationship reasoning module, a graph neural network attention mechanism models the spatial relationships among visual objects, enabling the model to correctly answer complex spatial-reasoning questions. Finally, a practical compact self-attention (CSA) mechanism reduces the redundancy of the linear transformations in self-attention and the number of model parameters, effectively improving the model's overall performance. Quantitative and qualitative experiments on the VQA 2.0 and GQA benchmark datasets demonstrate that the proposed method performs favorably against state-of-the-art approaches: our best single model achieves an overall accuracy of 71.18% on VQA 2.0 and 57.59% on GQA.
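The abstract describes the spatial relationship reasoning module only at a high level (graph neural network attention over visual objects). The following is a minimal, purely illustrative PyTorch sketch of one common way to realize such a module, biasing attention between region features with pairwise bounding-box geometry; the class name, the 4-dimensional relative-geometry encoding, and all sizes are assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGraphAttention(nn.Module):
    """Graph-style attention over object features, biased by box geometry.

    Hypothetical sketch: names and the geometry encoding are assumptions.
    """
    def __init__(self, dim: int, geo_dim: int = 4):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Maps the pairwise geometry of two boxes to a scalar attention bias.
        self.geo_bias = nn.Sequential(
            nn.Linear(geo_dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )
        self.scale = dim ** -0.5

    @staticmethod
    def pairwise_geometry(boxes: torch.Tensor) -> torch.Tensor:
        # boxes: (B, N, 4) as (cx, cy, w, h) -> (B, N, N, 4) relative geometry.
        cx, cy, w, h = boxes.unbind(-1)
        eps = 1e-6
        # Offsets normalized by the query box size; log-ratios of box scales.
        dx = (cx.unsqueeze(2) - cx.unsqueeze(1)) / (w.unsqueeze(2) + eps)
        dy = (cy.unsqueeze(2) - cy.unsqueeze(1)) / (h.unsqueeze(2) + eps)
        dw = torch.log(w.unsqueeze(2) / (w.unsqueeze(1) + eps) + eps)
        dh = torch.log(h.unsqueeze(2) / (h.unsqueeze(1) + eps) + eps)
        return torch.stack([dx, dy, dw, dh], dim=-1)

    def forward(self, feats: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, dim) region features; boxes: (B, N, 4) box geometry.
        scores = self.q(feats) @ self.k(feats).transpose(-2, -1) * self.scale
        geo = self.geo_bias(self.pairwise_geometry(boxes)).squeeze(-1)  # (B, N, N)
        attn = F.softmax(scores + geo, dim=-1)
        return attn @ self.v(feats)

For example, m = SpatialGraphAttention(dim=512); out = m(torch.randn(2, 36, 512), torch.rand(2, 36, 4)) attends over 36 region features per image, with the geometry bias letting the model favor spatially related objects (e.g., one region to the left of or above another) when answering relational questions.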

Funder

National Natural Science Foundation of China

Scientific Research Foundation of Hunan Provincial Education Department

Publisher

Public Library of Science (PLoS)

Subject

Multidisciplinary


Cited by 2 articles.
