Low-resource Multilingual Neural Translation Using Linguistic Feature-based Relevance Mechanisms

Authors:

Abhisek Chakrabarty¹, Raj Dabre¹, Chenchen Ding¹, Masao Utiyama¹, Eiichiro Sumita¹

Affiliation:

1. National Institute of Information and Communications Technology (NICT), Japan

Abstract

This article investigates approaches to effectively harness source-side linguistic features for low-resource multilingual neural machine translation (MNMT). Previous works focus on using various features of a word such as lemma, part-of-speech tag, dependency label, and so on, to improve translation quality in a low-resource scenario. However, these studies deal with bilingual translation and do not focus on using features in multilingual training setups. Our work focuses on this particular point and experiments with low-resource multilingual models incorporating source-side linguistic features. Although techniques for integrating features into an NMT model such as concatenation and feature relevance perform quite well in bilingual settings, they do not work well in multilingual settings. To remedy this, we propose the use of dummy features and language indicator features in MNMT models. Experiments are conducted on English to Asian language translation on a multilingual, multi-parallel corpus spanning English and eight Asian languages where for each language pair, the training data size does not exceed 20,000 parallel sentences. After establishing strong bilingual baselines using feature relevance mechanisms and multilingual baselines without any features, we show that our proposed dummy features and language indicator features, in combination with feature relevance mechanisms, yield significant improvements in BLEU points for all language pairs. We then analyze our models from the perspectives of model sizes, the impact of individual linguistic features, validation perplexity computed during training, visualization of the activations of the relevance mechanisms, and exhaustive tuning of hyperparameters. We also report preliminary results for multilingual multi-way models using linguistic features.
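The abstract describes combining a word's embedding with embeddings of its linguistic features (lemma, POS tag, dependency label) through a relevance mechanism, with an added language indicator feature for the multilingual setting. The following is a minimal sketch of that idea, assuming a scalar sigmoid gate per feature computed from the word and feature embeddings; the paper's actual relevance mechanism, dimensions, and parameterization may differ, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding size

# Toy feature vocabularies; "lang" is the language indicator feature
feat_names = ["lemma", "pos", "dep", "lang"]
feat_emb = {f: rng.normal(size=(5, d)) for f in feat_names}
word_emb = rng.normal(size=(10, d))

# One gate weight vector per feature (learned in a real model)
gate_w = {f: rng.normal(size=(2 * d,)) * 0.1 for f in feat_names}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode_token(word_id, feat_ids):
    """Combine a word embedding with relevance-gated feature embeddings.

    Each feature embedding is scaled by a gate in (0, 1) computed from the
    word and feature embeddings, then summed into the token representation.
    """
    w = word_emb[word_id]
    out = w.copy()
    for f, i in feat_ids.items():
        e = feat_emb[f][i]
        g = sigmoid(gate_w[f] @ np.concatenate([w, e]))  # scalar relevance
        out += g * e
    return out

# One token with lemma/POS/dependency features plus a language indicator
vec = encode_token(3, {"lemma": 1, "pos": 2, "dep": 0, "lang": 4})
print(vec.shape)  # (8,)
```

The gating lets the model learn, per token, how much each feature should contribute; a "dummy" feature value could be realized here as a reserved index in each feature vocabulary.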

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science


Cited by 1 article:

1. Reading Scene Text with Aggregated Temporal Convolutional Encoder. ACM Transactions on Asian and Low-Resource Language Information Processing, 2023-11-20.
