Addressing Gender Bias in Generative Large Language Models

Authors:

Zhou Hanqing 1, Inkpen Diana 1, Kantarci Burak 1

Affiliation:

1. University of Ottawa

Abstract

The examination of gender bias, alongside other demographic biases like race, nationality, and religion, within generative large language models (LLMs), is increasingly capturing the attention of both the scientific community and industry stakeholders. These biases often permeate generative LLMs, influencing widely used products and potentially compromising user experiences. A growing body of research is dedicated to enhancing gender representations in natural language processing (NLP) across a spectrum of generative LLMs. This paper explores the current research focused on identifying and evaluating gender bias in generative LLMs. A comprehensive investigation is conducted to assess and mitigate gender bias across five distinct generative LLMs. The mitigation strategies implemented yield significant improvements in gender bias scores, with performance enhancements of up to 46% compared to zero-shot text generation approaches. Additionally, we explore how different levels of LLM precision and quantization impact gender bias, providing insights into how technical factors influence bias mitigation strategies. By tackling these challenges and suggesting areas for future research, we aim to contribute to the ongoing discussion about gender bias in language technologies, promoting more equitable and inclusive NLP systems.
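As an illustration only (the abstract does not specify the bias metric, mitigation prompts, or models used), the following minimal Python sketch assumes the Hugging Face transformers library, the placeholder model gpt2, a crude lexical bias score, and a hypothetical debiasing prompt prefix. It shows how one might compare zero-shot generation against prompt-based mitigation, and reload the model at reduced precision to probe how quantization interacts with the bias score; it is not the paper's protocol.

import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

MODEL = "gpt2"  # placeholder model; the paper evaluates five generative LLMs

def gender_bias_score(text: str) -> float:
    # Crude lexical proxy: normalized gap between male- and female-associated word counts.
    words = re.findall(r"[a-z]+", text.lower())
    male = sum(words.count(w) for w in ("he", "him", "his", "man", "men"))
    female = sum(words.count(w) for w in ("she", "her", "hers", "woman", "women"))
    total = male + female
    return 0.0 if total == 0 else abs(male - female) / total

generator = pipeline("text-generation", model=MODEL)
prompt = "Write a short job advertisement for a software engineer."
debias_prefix = "Use gender-neutral language and avoid stereotypes. "  # hypothetical mitigation prompt

zero_shot = generator(prompt, max_new_tokens=80)[0]["generated_text"]
mitigated = generator(debias_prefix + prompt, max_new_tokens=80)[0]["generated_text"]
print("zero-shot bias:", gender_bias_score(zero_shot))
print("prompt-mitigated bias:", gender_bias_score(mitigated))

# To probe precision/quantization effects, the same comparison can be repeated
# with the model loaded in half precision (or via a quantization library).
model_fp16 = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
generator_fp16 = pipeline("text-generation", model=model_fp16, tokenizer=tokenizer)

In practice, the paper's own bias scores, prompts, and quantization settings would replace the toy word-count metric and prefix used here.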

Publisher

Springer Science and Business Media LLC

