Affiliation:
1. Department of Computer Engineering and Applications, GLA University, Mathura, India
Abstract
Visual question answering (VQA) is a challenging task in computer vision. Recently, there has been growing interest in text-based VQA tasks, which highlight the important role of textual information in understanding images. Effectively utilizing the text that appears within an image is crucial for success in this task. However, existing approaches often overlook contextual information and fail to exploit the relationships between scene-text tokens and image objects; they simply feed the scene-text tokens mined from the image into the VQA model without considering these factors. In this paper, the proposed model first analyzes the image to extract text and identify scene objects. It then comprehends the question and mines relationships among the question, the OCRed text, and the scene objects, ultimately generating an answer through relational reasoning that combines semantic and positional attention. Our decoder, trained with an attention-map loss, enables prediction of complex answers and handles dynamic vocabularies, reducing the decoding space; it outperforms softmax-based cross-entropy loss in accuracy and efficiency by accommodating varying vocabulary sizes. We evaluated our model on the TextVQA dataset and achieved an accuracy of 53.91% on the validation set and 53.98% on the test set. Moreover, on the ST-VQA dataset, our model obtained ANLS scores of 0.699 on the validation set and 0.692 on the test set.
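To illustrate the dynamic-vocabulary decoding described above, the following is a minimal PyTorch sketch, not the paper's actual implementation: it assumes a hypothetical module (here called DynamicVocabDecoder) that, at each decoding step, scores a fixed answer vocabulary jointly with the OCR tokens detected in the current image, so the effective decoding space varies per image. All names, shapes, and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DynamicVocabDecoder(nn.Module):
    """Hypothetical sketch: score a fixed answer vocabulary together with
    per-image OCR tokens, so the decoding space changes with each image."""

    def __init__(self, hidden_dim: int, fixed_vocab_size: int):
        super().__init__()
        # Scores over the fixed answer vocabulary
        self.fixed_head = nn.Linear(hidden_dim, fixed_vocab_size)
        # Projection used to score the image's OCR token features
        self.ocr_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, dec_state: torch.Tensor, ocr_feats: torch.Tensor) -> torch.Tensor:
        # dec_state: (B, H) decoder hidden state at the current step
        # ocr_feats: (B, N_ocr, H) features of OCR tokens found in the image
        vocab_logits = self.fixed_head(dec_state)                  # (B, V_fixed)
        ocr_logits = torch.bmm(self.ocr_proj(ocr_feats),           # (B, N_ocr)
                               dec_state.unsqueeze(2)).squeeze(2)
        # Effective vocabulary = fixed words + this image's OCR tokens,
        # so the candidate set grows or shrinks with the detected text.
        return torch.cat([vocab_logits, ocr_logits], dim=1)

# Example: 4 images, 512-d states, 5000 fixed words, 30 OCR tokens per image
decoder = DynamicVocabDecoder(hidden_dim=512, fixed_vocab_size=5000)
logits = decoder(torch.randn(4, 512), torch.randn(4, 30, 512))
print(logits.shape)  # torch.Size([4, 5030])
```

Because the OCR logits are produced by a dot product against projected token features rather than a fixed output layer, the candidate set adapts to whatever text each image contains, which is the property the abstract refers to when contrasting the attention-map loss with a fixed softmax cross-entropy head.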