Enhancing visual question answering with a two‐way co‐attention mechanism and integrated multimodal features

Author:

Agrawal Mayank¹, Jalal Anand Singh¹, Sharma Himanshu¹

Affiliation:

1. Department of Computer Engineering and Applications, GLA University, Mathura, India

Abstract

In visual question answering (VQA), a natural language answer is generated for a given image and a question related to that image. The VQA task has advanced significantly through the application of efficient attention mechanisms. However, current VQA models rely on region features or object features alone, which are not adequate to improve the accuracy of generated answers. To address this issue, we use a Two‐way Co‐Attention Mechanism (TCAM), which fuses different visual features (region, object, and concept) from diverse perspectives. These diverse features lead to different sets of answers, and there is also an inherent relationship among them. We develop a powerful attention mechanism that exploits these two critical aspects by using both bottom‐up and top‐down TCAM to extract discriminative feature information. We further propose a Collective Feature Integration Module (CFIM) to combine the multimodal attention features produced by TCAM and thus capture the valuable information in these visual features. Specifically, we formulate a Vertical CFIM for fusing features belonging to the same class and a Horizontal CFIM for combining features belonging to different types, thereby balancing the influence of top‐down and bottom‐up co‐attention. Experiments are conducted on two significant datasets, VQA 1.0 and VQA 2.0. On VQA 1.0, the overall accuracy of our proposed method is 71.23% on the test‐dev set and 71.94% on the test‐std set. On VQA 2.0, the overall accuracy is 75.89% on the test‐dev set and 76.32% on the test‐std set. These results clearly reflect the superiority of the proposed TCAM‐based approach over existing methods.
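The co-attention and fusion steps described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes simple dot-product co-attention between question tokens and two visual feature types (region and object), and illustrative choices for fusion (averaging for same-type "vertical" fusion, concatenation for cross-type "horizontal" fusion). All function names and the fusion operators are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(q, v):
    """Two-way (co-)attention: question tokens attend over visual
    features, and visual features attend over question tokens."""
    affinity = q @ v.T                       # (n_q, n_v) affinity matrix
    q_att = softmax(affinity, axis=1) @ v    # question attended by visual
    v_att = softmax(affinity.T, axis=1) @ q  # visual attended by question
    return q_att, v_att

rng = np.random.default_rng(0)
q = rng.standard_normal((5, 8))        # 5 question tokens, dim 8
region = rng.standard_normal((10, 8))  # 10 region features
obj = rng.standard_normal((6, 8))      # 6 object features

qa_r, _ = co_attention(q, region)
qa_o, _ = co_attention(q, obj)

# "vertical" fusion: combine attended features of the same class
# (averaging is an illustrative choice, not the paper's operator)
vertical = (qa_r + qa_o) / 2
# "horizontal" fusion: combine features of different types
horizontal = np.concatenate([qa_r, qa_o], axis=1)
print(vertical.shape, horizontal.shape)  # (5, 8) (5, 16)
```

The two softmax directions over the shared affinity matrix are what make the attention "two-way": each modality weights the other, rather than the question unilaterally attending over the image.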

Publisher

Wiley

Subject

Artificial Intelligence, Computational Mathematics

