Scene description with context information using dense-LSTM

Author:

Singh Varsha1, Agrawal Prakhar1, Tiwary Uma Shanker1

Affiliation:

1. Department of Information Technology, Indian Institute of Information Technology Allahabad, Prayagraj, U.P., India

Abstract

Generating natural language descriptions for visual content is a technique for describing what an image contains, and it requires knowledge of both computer vision and natural language processing. Various models with different approaches have been proposed for this task; one of them is encoder-decoder-based description generation. Existing work has often used only the objects in an image to build descriptions, but the relationships between those objects are equally essential and require context information, which in turn calls for techniques such as Long Short-Term Memory (LSTM). This paper proposes an encoder-decoder methodology for generating human-like textual descriptions: a Dense-LSTM decoder, paired with a modified VGG19 encoder, captures the information needed to describe the scene. The standard Flickr8k and Flickr30k datasets are used for training and testing, and the generated text is evaluated with the BLEU (Bilingual Evaluation Understudy) score. A GUI (Graphical User Interface) was developed for the proposed model; it produces an audio rendering of the generated description and provides an interface for searching related visual content, including query-based search.
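The BLEU evaluation mentioned in the abstract can be illustrated with a minimal sketch of BLEU-1 (clipped unigram precision scaled by a brevity penalty). This is an illustrative single-reference implementation, not the authors' evaluation code; real caption evaluations typically use higher-order BLEU with multiple references and smoothing, e.g. via NLTK's `corpus_bleu`.

```python
import math
from collections import Counter

def bleu1(candidate: str, reference: str) -> float:
    """BLEU-1 for a single reference: clipped unigram precision x brevity penalty."""
    cand = candidate.split()
    ref = reference.split()
    cand_counts = Counter(cand)
    ref_counts = Counter(ref)
    # Clip each candidate word's count by its count in the reference,
    # so repeating a correct word cannot inflate the score.
    clipped = sum(min(count, ref_counts[word]) for word, count in cand_counts.items())
    precision = clipped / len(cand)
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

# A 5-word candidate matching a prefix of an 8-word reference has perfect
# clipped precision but is penalized by the brevity penalty exp(1 - 8/5).
score = bleu1("a man rides a horse", "a man rides a horse on the beach")
```

The clipping step is what distinguishes BLEU's "modified" precision from plain precision, and the brevity penalty is why a trivially short caption cannot score well.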

Publisher

IOS Press

Subject

Artificial Intelligence, General Engineering, Statistics and Probability

