Affiliation: Utah Valley University
Abstract
In 2018, the first claims of empirical backing for human-machine parity in translation (HMPT) emerged at the WMT18 Conference on Machine Translation and in a study using WMT resources. Other researchers quickly refuted these claims, pointing to a flawed human evaluation campaign. Subsequent HMPT claims at WMT19 were likewise empirically refuted. This chapter traces the evolution of recommendations for the human evaluation of MT that grew out of these HMPT claims and assesses the possibility of HMPT at WMT20 in light of those recommendations. Finally, we summarize the criteria for human evaluation of MT proposed in the recent literature.
Publisher: John Benjamins Publishing Company