Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation

Authors:

Markus Freitag¹, George Foster², David Grangier³, Viresh Ratnakar⁴, Qijun Tan⁵, Wolfgang Macherey⁶

Affiliations:

1. Google Research. freitag@google.com

2. Google Research. fosterg@google.com

3. Google Research. grangier@google.com

4. Google Research. vratnakar@google.com

5. Google Research. qijuntan@google.com

6. Google Research. wmach@google.com

Abstract

Human evaluation of modern high-quality machine translation systems is a difficult problem, and there is increasing evidence that inadequate evaluation procedures can lead to erroneous conclusions. While there has been considerable research on human evaluation, the field still lacks a commonly accepted standard procedure. As a step toward this goal, we propose an evaluation methodology grounded in explicit error analysis, based on the Multidimensional Quality Metrics (MQM) framework. We carry out the largest MQM research study to date, scoring the outputs of top systems from the WMT 2020 shared task in two language pairs using annotations provided by professional translators with access to full document context. We analyze the resulting data extensively, finding among other results a substantially different ranking of evaluated systems from the one established by the WMT crowd workers, exhibiting a clear preference for human over machine output. Surprisingly, we also find that automatic metrics based on pre-trained embeddings can outperform human crowd workers. We make our corpus publicly available for further research.
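As a rough illustration of how MQM-style error annotations become the system scores the abstract refers to, the sketch below aggregates annotated error spans into a weighted per-segment penalty. The severity weights follow the scheme described in the paper (minor = 1, major = 5, with non-translation = 25 and minor fluency/punctuation discounted to 0.1), but the data structures and function names (`ErrorSpan`, `mqm_score`) are illustrative assumptions, not the authors' released tooling.

```python
from dataclasses import dataclass

# Assumed severity weights, mirroring the MQM scheme described in the paper:
# minor = 1, major = 5, plus two special cases (non-translation = 25,
# minor fluency/punctuation = 0.1). Adjust these if you use another variant.
WEIGHTS = {
    ("major", "non-translation"): 25.0,
    ("major", None): 5.0,
    ("minor", "fluency/punctuation"): 0.1,
    ("minor", None): 1.0,
}

@dataclass
class ErrorSpan:
    severity: str  # "major" or "minor"
    category: str  # e.g. "accuracy/mistranslation"

def span_penalty(span: ErrorSpan) -> float:
    """Look up the penalty for one annotated error span."""
    # Special-cased categories first, then fall back to the severity default.
    return WEIGHTS.get((span.severity, span.category),
                       WEIGHTS[(span.severity, None)])

def mqm_score(segments: list[list[ErrorSpan]]) -> float:
    """Average per-segment penalty over a test set (lower is better)."""
    totals = [sum(span_penalty(s) for s in seg) for seg in segments]
    return sum(totals) / len(segments)

# Example: one segment with a major mistranslation, one clean segment.
annotated = [
    [ErrorSpan("major", "accuracy/mistranslation")],
    [],
]
print(mqm_score(annotated))  # 2.5
```

Under this scheme lower scores are better, since they accumulate error penalties; averaging the per-segment penalties for each system is what produces rankings of the kind the abstract contrasts with the WMT crowd-worker results.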

Publisher

MIT Press - Journals

Subject

Artificial Intelligence, Computer Science Applications, Linguistics and Language, Human-Computer Interaction, Communication


Cited by 43 articles.

1. Using chat GPT to evaluate police threats, risk and harm. International Journal of Law, Crime and Justice, 2024-09.

2. Initial exploration into sarcasm and irony through machine translation. Natural Language Processing Journal, 2024-09.

3. Fine Tuning Language Models: A Tale of Two Low-Resource Languages. Data Intelligence, 2024-07-01.

4. Document-Level Machine Translation with Effective Batch-Level Context Representation. 2024 International Joint Conference on Neural Networks (IJCNN), 2024-06-30.

5. A comparative evaluation for question answering over Greek texts by using machine translation and BERT. Language Resources and Evaluation, 2024-06-19.
