Abstract
Machine translation, which employs computers to automatically convert text from a source language into a target language, is a central research area in artificial intelligence and natural language processing. This paper introduces a novel algorithm grounded in multi-task learning, aimed at improving the efficacy of Chinese-English neural machine translation. The approach addresses three key challenges: the scarcity of parallel Chinese-English corpora, substantial differences in sentence structure between the two languages, and the complex, variable word formation of Mongolian, which has influenced Chinese through historical language contact. To counter these issues, we devise a parameter transfer strategy. We first train a high-resource translation model using the encoder-decoder architecture standard in neural machine translation. The learned parameters are then used to initialise a low-resource model, giving its training a more informed starting point; the word embeddings and fully-connected layers of the low-resource model, however, are randomly initialised and updated continuously throughout training. Experimental results confirm the effectiveness of the proposed Dual-Task Multi-Task Learning (DFMTL) method, which achieves a BLEU score of 10.1, outperforming three established baseline models and exceeding models trained exclusively on a mixed corpus by 0.7 BLEU. These findings highlight the potential of our parameter transfer strategy to improve the accuracy and fluency of Chinese-English machine translation in resource-constrained settings.
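The parameter transfer described above can be made concrete with a short sketch. The following is a minimal illustration in PyTorch, assuming a generic encoder-decoder model; the class and function names (`TranslationModel`, `transfer_parameters`), layer choices, and hyperparameters are hypothetical stand-ins, not the paper's actual implementation.

```python
import torch.nn as nn

class TranslationModel(nn.Module):
    """Generic encoder-decoder translation model (illustrative only)."""
    def __init__(self, vocab_size: int, d_model: int = 512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, d_model)        # word embeddings
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.output_proj = nn.Linear(d_model, vocab_size)         # fully-connected layer

def transfer_parameters(hi_res_model: TranslationModel,
                        lo_res_model: TranslationModel) -> None:
    """Initialise the low-resource model from the high-resource one.

    The embeddings and fully-connected output layer are skipped, so they
    keep their random initialisation and are learned from scratch during
    low-resource training, as the abstract describes.
    """
    hi_state = hi_res_model.state_dict()
    lo_state = lo_res_model.state_dict()
    for name, param in hi_state.items():
        # Assumption: embeddings and the output projection stay random.
        if name.startswith(("embedding", "output_proj")):
            continue
        if name in lo_state and lo_state[name].shape == param.shape:
            lo_state[name] = param.clone()
    lo_res_model.load_state_dict(lo_state)

# Usage: first train hi_res_model on the high-resource language pair,
# then transfer its encoder/decoder weights and continue training
# lo_res_model on the low-resource Chinese-English data.
hi_res_model = TranslationModel(vocab_size=32000)
lo_res_model = TranslationModel(vocab_size=32000)
transfer_parameters(hi_res_model, lo_res_model)
```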