A Systematic Literature Review on Using the Encoder-Decoder Models for Image Captioning in English and Arabic Languages

Authors:

Alsayed, Ashwaq 1; Arif, Muhammad 1; Qadah, Thamir M. 1; Alotaibi, Saud 2

Affiliation:

1. Computer Science Department, Umm Al-Qura University, Makkah 24230, Saudi Arabia

2. Information Systems Department, Umm Al-Qura University, Makkah 24230, Saudi Arabia

Abstract

With the explosion of visual content on the Internet, creating captions for images has become a necessary task and an exciting topic for many researchers. Furthermore, image captioning is becoming increasingly important as the number of people using social media platforms grows. While there is extensive research on English image captioning (EIC), studies on image captioning in other languages, especially Arabic, are limited, and no attempt has yet been made to survey Arabic image captioning (AIC) systematically. This research aims to systematically survey encoder-decoder EIC with respect to the following aspects: visual model, language model, loss functions, datasets, evaluation metrics, model comparison, and adaptability to the Arabic language. A systematic review of the literature on EIC and AIC approaches published over the past nine years (2015–2023) in well-known databases (Google Scholar, ScienceDirect, IEEE Xplore) was undertaken. We identified 52 primary English and Arabic studies relevant to our objectives (11 on Arabic captioning and 41 on English). The literature review shows that applying English-specific models to the Arabic language is possible, provided that a high-quality Arabic dataset is used and appropriate preprocessing is followed. Moreover, we discuss some limitations and propose ideas for addressing them as future directions.

Publisher

MDPI AG

Subject

Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science

References: 114 articles.


Cited by 1 article.


