Affiliation:
1. Beijing Institute of Technology, and Beijing Engineering Research Center of High Volume Language Information Processing and Cloud Computing Applications, Beijing, China
Abstract
Statistical machine translation (SMT) models rely on word-, phrase-, and syntax-level alignments, but neural machine translation (NMT) models rarely learn phrase- and syntax-level alignments explicitly. In this article, we propose to improve NMT by explicitly learning bilingual syntactic constituent alignments. Specifically, we first use syntactic parsers to induce the syntactic structures of sentences, and then propose two ways to exploit the syntactic constituents in a perceptual (not adversarial) generator-discriminator training framework. One way uses them to measure the alignment score of sentence-level training examples; the other directly scores the alignments of constituent-level examples generated by an algorithm based on word-level alignments from SMT. In our framework, the discriminator is pre-trained to learn constituent alignments and to distinguish the ground-truth translation from fake ones, while the generative translation model is fine-tuned to absorb the alignment knowledge and to generate translations that best approximate the true ones. Experiments and analysis show that the learned constituent alignments improve translation quality.
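The discriminator pre-training step described above can be illustrated with a toy sketch. Everything below (the lexical-overlap features, the logistic-regression discriminator, the example constituent pairs) is an assumption for illustration, not the paper's actual model: a discriminator is trained to give high scores to ground-truth constituent alignments and low scores to fake ones, and its score could then serve as a reward when fine-tuning the generator.

```python
# Toy sketch of the perceptual discriminator pre-training described in the
# abstract. All feature definitions and names here are illustrative
# assumptions, not the paper's architecture.
import math
import random

random.seed(0)


def features(src, tgt):
    # Toy alignment features for a (source, target) constituent pair:
    # lexical overlap, length ratio, and a bias term.
    overlap = len(set(src) & set(tgt)) / max(len(src), 1)
    ratio = min(len(src), len(tgt)) / max(len(src), len(tgt), 1)
    return [overlap, ratio, 1.0]


def score(w, src, tgt):
    # Discriminator score: probability that the pair is a true alignment.
    z = sum(wi * fi for wi, fi in zip(w, features(src, tgt)))
    return 1.0 / (1.0 + math.exp(-z))


def pretrain_discriminator(pairs, labels, lr=1.0, epochs=200):
    # Logistic-regression discriminator over constituent pairs:
    # label 1 = ground-truth alignment, label 0 = fake alignment.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (src, tgt), y in zip(pairs, labels):
            g = score(w, src, tgt) - y  # gradient of the log loss
            f = features(src, tgt)
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
    return w


# Hypothetical constituent pairs: true alignments vs. shuffled fakes.
true_pairs = [(["the", "red", "house"], ["the", "red", "house"]),
              (["a", "small", "dog"], ["a", "small", "dog"])]
fake_pairs = [(["the", "red", "house"], ["quantum", "flux"]),
              (["a", "small", "dog"], ["seventeen"])]
pairs = true_pairs + fake_pairs
labels = [1, 1, 0, 0]

w = pretrain_discriminator(pairs, labels)

# After pre-training, the discriminator separates true from fake
# alignments; its score could then act as a reward signal when
# fine-tuning the generator (that fine-tuning step is omitted here).
good = score(w, *true_pairs[0])
bad = score(w, *fake_pairs[0])
```

In the paper's framework the discriminator operates over learned representations of syntactic constituents rather than these hand-crafted overlap features, but the training signal has the same shape: pull ground-truth alignments toward score 1 and fakes toward 0.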
Funder
National Natural Science Foundation of China
National Key Research and Development Program of China
Publisher
Association for Computing Machinery (ACM)
References: 56 articles.
1. Towards String-To-Tree Neural Machine Translation
2. Syntactically Supervised Transformers for Faster Neural Machine Translation
3. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR’15). Retrieved from http://arxiv.org/abs/1409.0473.
4. Wanxiang Che, Zhenghua Li, and Ting Liu. 2010. LTP: A Chinese language technology platform. In Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations. Association for Computational Linguistics, 13–16.
Cited by: 1 article.
1. Reusable Component Retrieval: A Semantic Search Approach for Low Resource Languages. ACM Transactions on Asian and Low-Resource Language Information Processing, 2022-09-22.