Affiliation:
1. Department of Computer Engineering and Applications, GLA University, Mathura, India
Abstract
In visual question answering (VQA), a natural-language answer is generated for a given image and a question related to that image. The VQA task has seen significant growth through the application of efficient attention mechanisms. However, current VQA models use region features or object features, which alone are not adequate to improve the accuracy of the generated answers. To deal with this issue, we have used a Two-way Co-Attention Mechanism (TCAM), which can fuse different visual features (region, object, and concept) from diverse perspectives. These diverse features lead to different sets of answers, and there are also inherent relationships among them. We have developed a powerful attention mechanism that exploits these two critical aspects, using both bottom-up and top-down TCAM to extract discriminative feature information. We have proposed a Collective Feature Integration Module (CFIM) to combine multimodal attention features, thereby capturing the valuable information in these visual features through the TCAM. Further, we have formulated a Vertical CFIM for fusing features belonging to the same class and a Horizontal CFIM for combining features belonging to different types, thus balancing the influence of top-down and bottom-up co-attention. Experiments are conducted on two significant datasets, VQA 1.0 and VQA 2.0. On VQA 1.0, the overall accuracy of the proposed method is 71.23 on the test-dev set and 71.94 on the test-std set; on VQA 2.0, it is 75.89 on the test-dev set and 76.32 on the test-std set. These results clearly reflect the superiority of the proposed TCAM-based approach over existing methods.
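To make the bidirectional attention idea concrete, below is a minimal PyTorch sketch of one two-way co-attention step: the question attends to the visual features (top-down) and the visual features attend to the question (bottom-up), and the two attended vectors are then fused. The class name, dimensions, and the single-linear fusion step are illustrative assumptions for this sketch only, not the paper's actual TCAM/CFIM implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoWayCoAttention(nn.Module):
    """Hypothetical sketch: question-to-visual and visual-to-question attention, then fusion."""
    def __init__(self, dim: int):
        super().__init__()
        self.affinity = nn.Linear(dim, dim, bias=False)  # cross-modal affinity projection (assumed form)
        self.fuse = nn.Linear(2 * dim, dim)              # stand-in for the paper's CFIM fusion

    def forward(self, visual: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        # visual:   (batch, n_regions, dim)  e.g. region/object/concept features
        # question: (batch, n_tokens,  dim)  encoded question tokens
        # Affinity between every visual feature and every question token.
        A = torch.bmm(self.affinity(visual), question.transpose(1, 2))  # (B, R, T)
        # Top-down: question-guided attention over visual features.
        attn_v = F.softmax(A.max(dim=2).values, dim=1)                  # (B, R)
        v_att = (attn_v.unsqueeze(2) * visual).sum(dim=1)               # (B, dim)
        # Bottom-up: visual-guided attention over question tokens.
        attn_q = F.softmax(A.max(dim=1).values, dim=1)                  # (B, T)
        q_att = (attn_q.unsqueeze(2) * question).sum(dim=1)             # (B, dim)
        # Fuse the two attended vectors into one joint representation.
        return torch.tanh(self.fuse(torch.cat([v_att, q_att], dim=1)))

# Usage with random tensors in place of real features.
model = TwoWayCoAttention(dim=512)
v = torch.randn(8, 36, 512)   # 36 visual features per image (assumed)
q = torch.randn(8, 14, 512)   # 14 question tokens (assumed)
fused = model(v, q)           # (8, 512) joint representation

In the paper's full model, several such attended features (region, object, and concept) would be combined by the Vertical and Horizontal CFIMs; the single fusion layer above is only a placeholder for that step.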
Subject
Artificial Intelligence, Computational Mathematics