Affiliation:
1. Institute of Aeronautical Engineering, India
2. CMR Technical Campus, India
Abstract
Human beings can easily describe the scenes and objects in a picture through vision, whereas performing the same task with a computer is complicated. Generating captions for the objects in an image helps everyone understand the image's scenario better. Automatically describing the content of an image requires the integration of computer vision and natural language processing. This task has gained huge popularity, and a great deal of research is being carried out in the field. Recent works have been successful in identifying objects in an image but face many challenges in accurately generating captions that capture the scenario. To address this challenge, we propose a model to generate captions for images. A Residual Neural Network (ResNet) extracts features from an image, which are encoded as a vector of size 2048. A Long Short-Term Memory (LSTM) network then generates the caption from these features. The proposed model is evaluated on the Flickr8K dataset and achieves an accuracy of 88.4%. The experimental results indicate that our model produces more appropriate captions than state-of-the-art models.
Publisher
Vladimir Andrunachievici Institute of Mathematics and Computer Science
Subject
Artificial Intelligence, Computational Mathematics, Computational Theory and Mathematics, Control and Optimization, Computer Networks and Communications, Computer Science Applications, Modeling and Simulation, Software
Cited by
5 articles.