Affiliation:
1. Guangdong Police College, Guangzhou, Guangdong, China.
Abstract
The mainstream machine translation model, the Transformer, performs translation entirely through the self-attention mechanism, yet it still suffers from problems such as mistranslation and omission because it cannot incorporate the syntactic structure information of natural language. Traditional RNN- and attention-based machine translation models compute position encodings with a fixed formula, so the encodings carry no contextual information. To address this, this paper introduces a bidirectional long short-term memory network and a tree-structured long short-term memory network, trained horizontally and vertically respectively, to obtain source-language sequences that contain contextual positional information; the self-attention mechanism is applied within the Tree-LSTM so that the relative positional information between words is preserved to the greatest extent. On this basis, a Bi-Tree-LSTM translation model based on positional encoding optimization is constructed. The model's performance is tested on four types of text: general, legal, business, and film-and-television. Its BLEU scores are analyzed under low data resources and increasing sentence length, and a 4,000-sentence English long text is translated to count erroneous sentences and assess translation quality. The proposed model achieves BLEU scores of 33.5, 35.2, 31.7, and 34.4 on the four text types, the highest among the compared models. At a data volume of 5K sentence pairs, its BLEU score reaches 26.14, which is 2.72 points higher than the best score achieved by the other machine translation models even at 50K sentence pairs. For sentences of 8-18 words, its BLEU scores consistently remain above 45, and its peak performance exceeds that of the other models. In the 4,000-sentence long-text translation, 54 erroneous sentences are produced, accounting for 1.39% of the text, compared with 7.15% for the Transformer model, so the performance meets the expectations of the optimization design. This paper provides a new idea and a useful exploration for improving the accuracy of English machine translation.
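The architecture described above can be made concrete with a minimal sketch. This is an illustrative assumption of how the pieces fit together, not the paper's implementation: the class names BiContextPositionEncoder and ChildSumTreeLSTMCell are hypothetical, and a standard Child-Sum Tree-LSTM cell (Tai et al., 2015) stands in for the paper's tree network. The idea shown is that a bidirectional LSTM replaces the fixed-formula position encoding with context-dependent position representations, a Tree-LSTM cell supplies syntactic (vertical) context, and self-attention is then applied over the resulting states.

```python
# Minimal sketch (assumption: module names are hypothetical, not from the paper).
import torch
import torch.nn as nn


class BiContextPositionEncoder(nn.Module):
    """Bi-LSTM that turns token embeddings into context-aware position codes."""

    def __init__(self, d_model: int):
        super().__init__()
        self.bilstm = nn.LSTM(d_model, d_model // 2,
                              batch_first=True, bidirectional=True)

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (batch, seq_len, d_model); output keeps the same shape, so it
        # can be added to the embeddings in place of sinusoidal encodings.
        out, _ = self.bilstm(emb)
        return emb + out


class ChildSumTreeLSTMCell(nn.Module):
    """Child-Sum Tree-LSTM cell used here to model syntactic (tree) context."""

    def __init__(self, d_model: int):
        super().__init__()
        self.iou = nn.Linear(2 * d_model, 3 * d_model)  # input/output/update gates
        self.f = nn.Linear(2 * d_model, d_model)        # one forget gate per child

    def forward(self, x, child_h, child_c):
        # x: (d_model,); child_h, child_c: (num_children, d_model)
        h_sum = child_h.sum(dim=0)
        i, o, u = torch.chunk(self.iou(torch.cat([x, h_sum])), 3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.f(torch.cat([x.expand_as(child_h), child_h], dim=-1)))
        c = i * u + (f * child_c).sum(dim=0)
        return o * torch.tanh(c), c


if __name__ == "__main__":
    d = 64
    emb = torch.randn(2, 10, d)                       # toy batch of token embeddings
    pos_enc = BiContextPositionEncoder(d)(emb)        # contextual position encoding
    node_h, node_c = ChildSumTreeLSTMCell(d)(         # one tree node with 3 children
        torch.randn(d), torch.randn(3, d), torch.randn(3, d))
    attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
    fused, _ = attn(pos_enc, pos_enc, pos_enc)        # self-attention over the states
    print(pos_enc.shape, node_h.shape, fused.shape)
```

Under these assumptions, the encoder output would feed the translation decoder in place of fixed position encodings; how the paper actually fuses the sequence and tree states is not specified in the abstract.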