Affiliation:
1. National Institute of Technology (NIT), Rourkela, India
2. Computer Science and Engineering Department, IIT BHU, Varanasi, India
3. Indian Institute of Technology (IIT), Patna, India
Abstract
Machine Translation (MT) is the automatic translation of text from one language to another without human intervention. Multilingual neural machine translation (MNMT) builds a single model for multiple languages; it is preferred over bilingual approaches because it reduces training time and improves translation quality in low-resource settings, i.e., for languages with insufficient parallel corpora. However, good-quality MT models have yet to be built for many scenarios, such as Indic-to-Indic language (IL-IL) translation. This article therefore develops baseline IL-IL models for 11 Indic languages (ILs) in a multilingual setting. The models are trained on the Samanantar corpus and evaluated on the FLORES-200 corpus using the standard Bilingual Evaluation Understudy (BLEU) score (range 0 to 100). The article examines the effect of grouping related languages, namely East Indo-Aryan (EI), Dravidian (DR), and West Indo-Aryan (WI), on the MNMT model. The experiments reveal that related-language grouping benefits only the WI group, is detrimental to the EI group, and has an inconclusive effect on the DR group. The role of pivot-based MNMT models in enhancing translation quality is also investigated. Owing to the availability of large, good-quality corpora from English (EN) to ILs, IL-IL MNMT models using EN as a pivot are built and examined. To this end, English-Indic language (EN-IL) models are developed with and without related-language grouping. Results show that related-language grouping is advantageous specifically for EN-to-IL translation, so related-language groups are used to develop the pivot MNMT models. Pivot models are also observed to improve greatly on the MNMT baselines.
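As background for the evaluation metric named above, the following is a minimal, unsmoothed sentence-level BLEU sketch (geometric mean of modified n-gram precisions times a brevity penalty); it is illustrative only, and real evaluation is typically done with a standard tool such as sacreBLEU:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU on a 0-100 scale (no smoothing)."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_grams, ref_grams = ngrams(hyp, n), ngrams(ref, n)
        # Clipped ("modified") n-gram matches against the reference.
        overlap = sum((hyp_grams & ref_grams).values())
        total = max(sum(hyp_grams.values()), 1)
        if overlap == 0:
            return 0.0  # a single zero precision zeroes the geometric mean
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return 100.0 * bp * math.exp(sum(log_precisions) / max_n)
```

A perfect match scores 100.0 (e.g., `bleu("the cat sat on the mat", "the cat sat on the mat")`), while a hypothesis sharing no n-grams with the reference scores 0.0.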
Furthermore, the effect of transliteration on ILs is analyzed. The best MNMT models from the previous approaches (in most cases, pivot models using related groups) are retrained on corpora transliterated from their native scripts to a modified Indian languages Transliteration (ITRANS) script. The experiments indicate that transliteration helps models built for lexically rich languages, with the largest BLEU gains observed for Malayalam (ML) and Tamil (TA), i.e., 6.74 and 4.72 points, respectively. BLEU scores of the transliteration models range from 7.03 to 24.29. The best model is the Punjabi (PA)-Hindi (HI) pair trained on the PA-WI transliterated corpus.
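To illustrate the transliteration step, here is a toy Devanagari-to-Roman sketch; the character map below is an assumption covering only a few letters, and the full ITRANS scheme additionally handles dependent vowel signs (matras), the virama, and conjunct consonants:

```python
# Toy character maps -- an illustrative assumption, not the full ITRANS table.
CONSONANTS = {"क": "k", "म": "m", "ल": "l", "न": "n", "र": "r"}
VOWELS = {"अ": "a", "इ": "i", "उ": "u"}

def transliterate(word: str) -> str:
    """Naively romanize a Devanagari word, character by character."""
    out = []
    for ch in word:
        if ch in CONSONANTS:
            # Devanagari consonant letters carry an inherent 'a' vowel.
            out.append(CONSONANTS[ch] + "a")
        elif ch in VOWELS:
            out.append(VOWELS[ch])
        else:
            out.append(ch)  # pass unmapped characters through unchanged
    return "".join(out)
```

For example, `transliterate("कमल")` yields `"kamala"`. Mapping all ILs into one shared Roman script in this way lets related languages share subword vocabulary, which is the motivation for the transliteration experiments above.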
Funder
MeitY (Ministry of Electronics and Information Technology, Government of India), project sanction
Publisher
Association for Computing Machinery (ACM)