Fairness Testing of Machine Translation Systems

Authors:

Sun Zeyu¹, Chen Zhenpeng², Zhang Jie³, Hao Dan⁴

Affiliation:

1. Science & Technology on Integrated Information System Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China

2. Nanyang Technological University, Singapore, Singapore

3. King's College London, London, United Kingdom

4. Key Laboratory of High Confidence Software Technologies (Peking University), MoE, School of Computer Science, Peking University, Beijing, China

Abstract

Machine translation is integral to international communication and is extensively employed in human-facing applications. Despite remarkable progress, fairness issues persist in current machine translation systems. In this article, we propose FairMT, an automated fairness testing approach tailored to machine translation systems. FairMT operates on the assumption that translations of semantically similar sentences, containing protected attributes from different demographic groups, should have comparable meanings. It comprises three key steps: (1) test input generation, which produces inputs covering various demographic groups; (2) test oracle generation, which identifies potentially unfair translations based on semantic similarity measurements; and (3) regression, which distinguishes genuine fairness issues from those caused by low-quality translation. Using FairMT, we conduct an empirical study on three leading machine translation systems: Google Translate, T5, and Transformer. Our investigation uncovers up to 832, 1,984, and 2,627 unfair translations in the three systems, respectively. Intriguingly, we observe that fair translations tend to exhibit superior translation performance, challenging the conventional wisdom of a fairness-performance trade-off prevalent in the fairness literature.
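The three-step workflow summarized above can be illustrated with a short sketch. The Python code below is a minimal illustration of the testing loop, not the paper's implementation: the `translate` callback, the word-swap rules in `PROTECTED_PAIRS`, the token-overlap similarity, and the 0.8 threshold are all assumptions standing in for FairMT's actual components, and the regression step is only noted, not implemented.

```python
from typing import Callable, Iterable, List, Tuple

# Hypothetical protected-attribute pairs; FairMT's real mutation rules
# cover more demographic groups and richer sentence transformations.
PROTECTED_PAIRS = [("he", "she"), ("man", "woman"), ("Christians", "Muslims")]


def generate_test_inputs(sentence: str) -> Iterable[Tuple[str, str]]:
    """Step 1 (test input generation): build pairs of semantically similar
    sentences that differ only in a protected attribute (simple word swap)."""
    tokens = sentence.split()
    for a, b in PROTECTED_PAIRS:
        if a in tokens:
            mutant = " ".join(b if t == a else t for t in tokens)
            yield sentence, mutant


def semantic_similarity(x: str, y: str) -> float:
    """Toy token-overlap (Jaccard) similarity between two translations; a
    stand-in for the semantic similarity measurement the abstract refers to."""
    sx, sy = set(x.lower().split()), set(y.lower().split())
    return len(sx & sy) / max(len(sx | sy), 1)


def fairmt_check(sentence: str,
                 translate: Callable[[str], str],
                 threshold: float = 0.8) -> List[Tuple[str, str]]:
    """Step 2 (test oracle generation): translate both variants and flag
    pairs whose translations diverge semantically. Step 3 (regression),
    which separates genuine unfairness from ordinary low-quality
    translation, is omitted in this sketch."""
    suspicious = []
    for original, mutant in generate_test_inputs(sentence):
        t_orig, t_mut = translate(original), translate(mutant)
        if semantic_similarity(t_orig, t_mut) < threshold:  # illustrative threshold
            suspicious.append((t_orig, t_mut))  # candidate unfair translation
    return suspicious


if __name__ == "__main__":
    # Demo with a deliberately biased mock "translator" that degrades only
    # the mutant sentence, so the oracle flags the pair.
    def mock_translate(s: str) -> str:
        return s.replace("she is a doctor", "she is a nurse")

    print(fairmt_check("he is a doctor", mock_translate))
```

In the actual study, this loop would be driven by large sets of generated inputs against each system under test, with the regression step re-examining flagged pairs to rule out false positives caused by generally poor translation quality.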

Funder

National Natural Science Foundation of China

Publisher

Association for Computing Machinery (ACM)
