A Reverse Positional Encoding Multi-Head Attention-Based Neural Machine Translation Model for Arabic Dialects

Authors

Laith H. Baniata, Sangwoo Kang, Isaac K. E. Ampomah

Abstract

Languages with free word order, such as Arabic dialects, are challenging for neural machine translation (NMT) models because of attached suffixes, affixes, and out-of-vocabulary words. This paper presents a new reverse positional encoding (RPE) mechanism for a multi-head attention (MHA) neural machine translation model that translates right-to-left texts, such as Arabic dialects (ADs), into modern standard Arabic (MSA). The proposed model builds on the recently introduced MHA mechanism. The new RPE mechanism, combined with sub-word units as input to the self-attention layer, improves the proposed model’s encoder by capturing the dependencies between words in right-to-left input sentences such as AD sentences. Experiments were conducted on Maghrebi Arabic to MSA, Levantine Arabic to MSA, Nile Basin Arabic to MSA, Gulf Arabic to MSA, and Iraqi Arabic to MSA. Experimental analysis showed that the proposed RPE MHA NMT model efficiently handles the free word-order structure of Arabic dialect sentences and improves translation quality for right-to-left texts such as Arabic dialects.
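The abstract describes the reverse positional encoding only at a high level. The sketch below is a minimal illustration, assuming RPE amounts to assigning standard sinusoidal position encodings in reversed (right-to-left) token order before the encoder's self-attention layer; the function names and this interpretation are assumptions for illustration, not the authors' published formulation.

```python
# Minimal sketch of a reverse positional encoding (RPE) layer, assuming the
# mechanism applies standard sinusoidal encodings in reversed (right-to-left)
# token order; the paper's exact formulation may differ.
import numpy as np

def sinusoidal_table(max_len: int, d_model: int) -> np.ndarray:
    """Standard Transformer sinusoidal position table of shape (max_len, d_model)."""
    positions = np.arange(max_len)[:, None]            # (max_len, 1)
    dims = np.arange(d_model)[None, :]                 # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                   # (max_len, d_model)
    table = np.zeros((max_len, d_model))
    table[:, 0::2] = np.sin(angles[:, 0::2])
    table[:, 1::2] = np.cos(angles[:, 1::2])
    return table

def reverse_positional_encoding(token_embeddings: np.ndarray) -> np.ndarray:
    """Add positional encodings in reverse order, so the right-most
    (first-read) token of a right-to-left sentence receives position 0.
    token_embeddings: (seq_len, d_model) sub-word embeddings."""
    seq_len, d_model = token_embeddings.shape
    table = sinusoidal_table(seq_len, d_model)
    reversed_table = table[::-1]                       # flip the position axis
    return token_embeddings + reversed_table

# Toy usage: 5 sub-word tokens with 16-dimensional embeddings.
if __name__ == "__main__":
    embeddings = np.random.randn(5, 16)
    encoded = reverse_positional_encoding(embeddings)
    print(encoded.shape)  # (5, 16)
```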

Publisher

MDPI AG

Subject

General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)


Cited by 4 articles.
