How to evaluate machine translation: A review of automated and human metrics

Author:

Chatzikoumi, Eirini

Abstract

This article presents the most up-to-date and influential automated, semiautomated and human metrics used to evaluate the quality of machine translation (MT) output, and provides the necessary background for MT evaluation projects. Evaluation is widely acknowledged to be essential to the improvement of MT. The article is divided into three parts: the first is dedicated to automated metrics; the second, to human metrics; and the last, to the challenges posed by neural machine translation (NMT) regarding its evaluation. The first part includes reference translation–based metrics; confidence or quality estimation (QE) metrics, which are used as alternatives for quality assessment; and diagnostic evaluation based on linguistic checkpoints. Human evaluation metrics are classified according to whether human judges directly express a so-called subjective evaluation judgment, such as ‘good’ or ‘better than’, or not, as is the case in error classification. The former methods are based on directly expressed judgment (DEJ) and are therefore called ‘DEJ-based evaluation methods’, while the latter are called ‘non-DEJ-based evaluation methods’. In the DEJ-based evaluation section, tasks such as fluency and adequacy annotation, ranking and direct assessment (DA) are presented, whereas in the non-DEJ-based evaluation section, tasks such as error classification and postediting are detailed, with definitions and guidelines, thus rendering this article a useful guide for evaluation projects. Following the detailed presentation of these metrics, the specificities of NMT are set forth along with suggestions for its evaluation, according to the latest studies. As human translators are the best-qualified judges of translation quality, emphasis is placed on the human metrics seen from a translator-judge perspective, to provide useful methodology tools for interdisciplinary research groups that evaluate MT systems.
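The reference translation–based metrics surveyed in the article compare MT output against one or more human reference translations, most famously via n-gram overlap, as in BLEU. The Python sketch below is a minimal, simplified illustration of that idea only, assuming whitespace tokenization, a single reference and sentence-level scoring; the function names are hypothetical, and production evaluations rely on corpus-level, smoothed implementations such as sacrebleu.

import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams; empty when the sentence is shorter than n.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    # Simplified sentence-level BLEU: clipped n-gram precisions combined
    # by a geometric mean, scaled by a brevity penalty (no smoothing).
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        overlap = sum((ngrams(hyp, n) & ngrams(ref, n)).values())  # clipped counts
        total = max(sum(ngrams(hyp, n).values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0.0:
        return 0.0  # any missing n-gram order zeroes the unsmoothed score
    brevity_penalty = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return brevity_penalty * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(round(bleu("the cat sat on the mat", "the cat sat on a mat"), 3))  # ~0.537

The hard zero returned for any missing n-gram order is precisely why real implementations add smoothing, and why BLEU is normally reported over a whole test set rather than per sentence.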

Publisher

Cambridge University Press (CUP)

Subject

Artificial Intelligence, Linguistics and Language, Language and Linguistics, Software


Cited by 37 articles.

1. Low resource Twi-English parallel corpus for machine translation in multiple domains (Twi-2-ENG). Discover Computing, 2024-07-05.

2. Online English Machine Translation Algorithm Based on Large Language Model. 2024 3rd International Conference on Sentiment Analysis and Deep Learning (ICSADL), 2024-03-13.

3. Evaluation of Instagram's Neural Machine Translation for Literary Texts: An MQM-Based Analysis. GEMA Online® Journal of Language Studies, 2024-02-28.

4. English Kashmiri Machine Translation System related to Tourism Domain. 2024 11th International Conference on Computing for Sustainable Global Development (INDIACom), 2024-02-28.

5. Error Analysis of Pretrained Language Models (PLMs) in English-to-Arabic Machine Translation. Human-Centric Intelligent Systems, 2024-02-05.
