Author:
Wang Dongqi, Wei Haoran, Zhang Zhirui, Huang Shujian, Xie Jun, Chen Jiajun
Abstract
We study the problem of online learning with human feedback in human-in-the-loop machine translation, in which human translators revise machine-generated translations and the corrected translations are then used to improve the neural machine translation (NMT) system. However, previous methods require online model updating or additional translation memory networks to achieve high-quality performance, making them inflexible and inefficient in practice.
In this paper, we propose a novel non-parametric online learning method that requires no change to the model structure.
This approach introduces two k-nearest-neighbor (kNN) modules: one memorizes the human feedback, i.e., the corrected sentences provided by human translators,
while the other adaptively balances the use of the historical human feedback and the original NMT model.
Experiments conducted on the EMEA and JRC-Acquis benchmarks demonstrate that our proposed method obtains substantial improvements in translation accuracy and achieves better adaptation performance with fewer repeated human corrections.
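The two-module idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the datastore contents, the L2 distance, the softmax temperature, and the fixed interpolation weight `lam` (which in the paper is balanced adaptively by the second kNN module) are all illustrative assumptions.

```python
import math

# Hypothetical sketch of kNN-augmented decoding (not the authors' code):
# a datastore maps context vectors to target tokens taken from human-
# corrected translations; retrieval yields a token distribution that is
# interpolated with the base NMT distribution.

def knn_distribution(query, datastore, k=2, temperature=1.0):
    """Retrieve the k nearest (vector, token) pairs by squared L2
    distance and softmax over negative distances."""
    scored = sorted(
        ((sum((q - v) ** 2 for q, v in zip(query, vec)), tok)
         for vec, tok in datastore),
        key=lambda pair: pair[0],
    )[:k]
    weights = [math.exp(-dist / temperature) for dist, _ in scored]
    total = sum(weights)
    probs = {}
    for (dist, tok), w in zip(scored, weights):
        probs[tok] = probs.get(tok, 0.0) + w / total
    return probs

def interpolate(nmt_probs, knn_probs, lam):
    """Mix the NMT and kNN distributions; lam stands in for the
    adaptive balancing weight (fixed here for simplicity)."""
    vocab = set(nmt_probs) | set(knn_probs)
    return {t: (1 - lam) * nmt_probs.get(t, 0.0) + lam * knn_probs.get(t, 0.0)
            for t in vocab}

# Toy usage: a feedback datastore with two corrected-translation entries.
datastore = [([0.0, 1.0], "hello"), ([1.0, 0.0], "world")]
knn_p = knn_distribution([0.1, 0.9], datastore, k=2)
mixed = interpolate({"hello": 0.4, "world": 0.6}, knn_p, lam=0.5)
```

Because the query lies close to the "hello" entry, the retrieved distribution shifts probability mass toward the token seen in the stored human correction.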
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
3 articles.
1. Datastore Distillation for Nearest Neighbor Machine Translation;IEEE/ACM Transactions on Audio, Speech, and Language Processing;2024
2. An Adversarial Attack Considering Effectiveness and Concealment on Faster R-CNN;Proceedings of the 2023 5th Asia Pacific Information Technology Conference;2023-02-09
3. Faster and More Robust Low-Resource Nearest Neighbor Machine Translation;Natural Language Processing and Chinese Computing;2023