Affiliation:
1. Indian Institute of Technology (BHU), Varanasi, Uttar Pradesh, India
Abstract
Machine translation (MT) systems have been built using numerous techniques to bridge language barriers. These techniques are broadly categorized into approaches such as Statistical Machine Translation (SMT) and Neural Machine Translation (NMT). End-to-end NMT systems significantly outperform SMT in translation quality on many language pairs, especially those with adequate parallel corpora. We report comparative experiments on baseline MT systems for Assamese and other Indo-Aryan languages (in both translation directions) using traditional Phrase-Based SMT as well as some of the more successful NMT architectures, namely a basic sequence-to-sequence model with attention, the Transformer, and a fine-tuned Transformer. The results are evaluated using the most prominent standard automatic metric, BLEU (BiLingual Evaluation Understudy), as well as other well-known metrics, to explore the performance of the different baseline MT systems, since this is the first such work involving Assamese. The evaluation scores of the SMT and NMT models are compared to assess the effectiveness of bidirectional language pairs involving Assamese and the other Indo-Aryan languages (Bangla, Gujarati, Hindi, Marathi, Odia, Sinhalese, and Urdu). The highest BLEU scores obtained are for Assamese to Sinhalese with SMT (35.63) and Assamese to Bangla with the NMT systems (seq2seq: 50.92, Transformer: 50.01, fine-tuned Transformer: 50.19). We also relate the results to language characteristics, distances, family trees, domains, data sizes, and sentence lengths, and find that the domain is the most important factor affecting the results for the given data domains and sizes. We compare our results with the only existing MT system for Assamese (Bing Translator) and also with pairs involving Hindi.
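As a minimal illustration of the corpus-level evaluation described above, the sketch below computes BLEU (and chrF, as one example of another well-known metric) with the sacrebleu library. The choice of sacrebleu, the chrF metric, and the file names are assumptions for illustration only; the paper does not state which toolkit or additional metrics were used.

```python
# Minimal sketch of corpus-level MT evaluation with sacrebleu (an assumed
# toolkit; the paper does not name one). File names are hypothetical.
import sacrebleu


def evaluate(hyp_path: str, ref_path: str) -> None:
    # One detokenized sentence per line; hypotheses aligned with references.
    with open(hyp_path, encoding="utf-8") as f:
        hypotheses = [line.strip() for line in f]
    with open(ref_path, encoding="utf-8") as f:
        references = [line.strip() for line in f]

    # corpus_bleu/corpus_chrf take a list of hypothesis strings and a list
    # of reference streams (here, a single reference per sentence).
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    chrf = sacrebleu.corpus_chrf(hypotheses, [references])
    print(f"BLEU = {bleu.score:.2f}")
    print(f"chrF = {chrf.score:.2f}")


if __name__ == "__main__":
    # Hypothetical Assamese-to-Bangla output and reference files.
    evaluate("hypotheses.asm-ben.txt", "references.ben.txt")
```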
Publisher
Association for Computing Machinery (ACM)
Cited by
8 articles.
1. Language System Path of Intelligent Machine Translation; 2024 Third International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE); 2024-04-26
2. Improving translation between English, Assamese bilingual pair with monolingual data, length penalty and model averaging; International Journal of Information Technology; 2024-01-30
3. A Study of Word Embedding Models for Machine Translation of North Eastern Languages; Communications in Computer and Information Science; 2023-11-30
4. Investigation of Data Augmentation Techniques for Assamese-English Language Pair Machine Translation; 2023 18th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP); 2023-11-27
5. Speech-to-speech Low-resource Translation; 2023 IEEE 24th International Conference on Information Reuse and Integration for Data Science (IRI); 2023-08