Enhancement of English-Bengali Machine Translation Leveraging Back-Translation
Published: 2024-08-05
Issue: 15
Volume: 14
Page: 6848
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Author:
Subrota Kumar Mondal 1, Chengwei Wang 1, Yijun Chen 1, Yuning Cheng 1, Yanbo Huang 1, Hong-Ning Dai 2, H. M. Dipu Kabir 3,4
Affiliation:
1. School of Computer Science and Engineering, Macau University of Science and Technology, Taipa, Macau 999078, China
2. Department of Computer Science, Hong Kong Baptist University, Hong Kong, China
3. AI and Cyber Futures Institute, Charles Sturt University, Orange, NSW 2800, Australia
4. Rural Health Research Institute, Charles Sturt University, Orange, NSW 2800, Australia
Abstract
An English-Bengali machine translation (MT) application converts English text into a corresponding Bengali translation. MT for high-resource language pairs, such as English-German, has been studied for decades; however, MT for language pairs with few parallel corpora remains challenging. In this study, we employ back-translation to improve translation accuracy. Back-translation yields a pseudo-parallel corpus, and the generated (pseudo) corpus can be added to the original dataset to obtain an augmented dataset. However, the generated data can be regarded as noisy, because they are produced by models that, unlike human translators, may be poorly trained or poorly evaluated. Since the output of a translation model is a probability distribution over candidate words, different decoding methods can be used to make the model more robust, such as beam search, top-k random sampling, and random sampling with temperature T. Notably, top-k random sampling and random sampling with temperature T are more commonly used for back-translation and often more effective than beam search. To this end, our study compares LSTM (Long Short-Term Memory, as a baseline) and the Transformer. Our results show that the Transformer (BLEU: 27.80 on validation, 1.33 on test) outperforms the LSTM (3.62 on validation, 0.00 on test) by a large margin on the English-Bengali translation task. (Evaluating LSTM and Transformer without any augmented data is our baseline study.) We also incorporate two decoding methods, top-k random sampling and random sampling with temperature T, into back-translation, which helps improve the translation accuracy of the model. The results show that data generated by back-translation without top-k or temperature sampling (“no strategy”) improve accuracy (BLEU 38.22, +10.42 on validation; 2.07, +0.74 on test). Back-translation with top-k sampling is less effective (k = 10: BLEU 29.43, +1.83 on validation; 1.36, +0.03 on test), while sampling with a suitable temperature achieves a higher score (T = 0.5: BLEU 35.02, +7.22 on validation; 2.35, +1.02 on test). This implies that in English-Bengali MT, the training set can be augmented through back-translation using random sampling with a proper temperature T.
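As an illustrative aside (not part of the paper's artifacts), the back-translation augmentation described above can be sketched in a few lines of Python; the reverse-translation function and both corpus arguments are hypothetical placeholders, not the authors' code or data.

# Hedged sketch of dataset augmentation via back-translation. The reverse
# model (translate_bn_to_en) and both corpora are hypothetical placeholders.
def augment_with_back_translation(parallel_pairs, bengali_monolingual,
                                  translate_bn_to_en):
    """parallel_pairs: list of (english, bengali) pairs; returns augmented list."""
    # A reverse (Bengali-to-English) model generates a pseudo-English source
    # for each monolingual Bengali sentence.
    pseudo_pairs = [(translate_bn_to_en(bn), bn) for bn in bengali_monolingual]
    # Pseudo pairs are noisier than human translations, so they are appended
    # to (not substituted for) the original parallel data.
    return parallel_pairs + pseudo_pairs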
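The two sampling-based decoding strategies compared in the abstract can likewise be sketched with NumPy; this is a minimal illustration under stated assumptions, and the toy logits vector plus the k and T values below are assumptions chosen only to mirror the reported settings.

# Minimal sketch of the two sampling-based decoding strategies used for
# back-translation: top-k random sampling and random sampling with
# temperature T. The logits below are a toy example, not real model output.
import numpy as np

rng = np.random.default_rng(0)

def sample_top_k(logits, k=10):
    """Restrict sampling to the k highest-scoring tokens, renormalize, sample."""
    k = min(k, len(logits))
    top = np.argpartition(logits, -k)[-k:]            # indices of the k best tokens
    probs = np.exp(logits[top] - logits[top].max())   # stable softmax over top-k
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))

def sample_with_temperature(logits, T=0.5):
    """Divide logits by T before the softmax: T < 1 sharpens, T > 1 flattens."""
    scaled = logits / T
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# Toy next-token scores over a 5-word vocabulary (hypothetical values).
logits = np.array([2.0, 1.0, 0.5, 0.2, -1.0])
print(sample_top_k(logits, k=3))               # one of the 3 most likely tokens
print(sample_with_temperature(logits, T=0.5))  # any token, biased toward high scores

At each decoding step, one of these samplers would pick the next token from the model's distribution, injecting the diversity that makes the resulting pseudo-parallel data useful for augmentation.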
Funder
The Science and Technology Development Fund of Macao, Macao SAR, China