Scaling Implicit Bias Analysis across Transformer-Based Language Models through Embedding Association Test and Prompt Engineering

Authors:

Bevara Ravi Varma Kumar 1, Mannuru Nishith Reddy 1, Karedla Sai Pranathi 2, Xiao Ting 1,2

Affiliations:

1. Department of Information Science, University of North Texas, Denton, TX 76205, USA

2. Department of Computer Science and Engineering, University of North Texas, Denton, TX 76205, USA

Abstract

In the evolving field of machine learning, deploying fair and transparent models remains a formidable challenge. This study builds on earlier research demonstrating that neural architectures exhibit inherent biases, analyzing a broad spectrum of transformer-based language models from base to x-large configurations. Leveraging the Word Embedding Association Test (WEAT), it investigates movie reviews for genre-based bias and finds that scaling models up tends to mitigate bias, with larger models showing up to a 29% reduction in prejudice. The study also underscores the effectiveness of prompt-based learning, a facet of prompt engineering, as a practical approach to bias mitigation: this technique reduces genre bias in reviews by more than 37% on average. These results suggest that development practices should incorporate the strategic use of prompts to shape model outputs, and they highlight the crucial role of ethical AI integration in weaving fairness into the core functionality of transformer models. Although the prompts employed in this research were basic, the findings point to structured prompt engineering as a path toward AI systems that are more ethical, equitable, and accountable.
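For readers unfamiliar with the test, the Python sketch below illustrates the effect-size computation that WEAT is built on (following Caliskan et al.'s 2017 formulation). It is a minimal sketch, not the authors' code: the genre and attribute sets named in the comments are hypothetical placeholders, and the embeddings are random stand-ins for vectors that would in practice come from the transformer models under test.

import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """s(w, A, B): w's mean similarity to attribute set A minus its
    mean similarity to attribute set B."""
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

def weat_effect_size(X, Y, A, B):
    """WEAT effect size d: difference between the mean associations of
    the two target sets, normalized by the standard deviation of the
    association scores over all targets."""
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y)

# Illustration with random stand-in embeddings (dimension 768 is typical
# of base-size transformers); real inputs would be model embeddings of
# the actual word lists.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 768))  # hypothetical targets: horror-genre terms
Y = rng.normal(size=(8, 768))  # hypothetical targets: romance-genre terms
A = rng.normal(size=(8, 768))  # hypothetical attributes: "pleasant" terms
B = rng.normal(size=(8, 768))  # hypothetical attributes: "unpleasant" terms
print(weat_effect_size(X, Y, A, B))

An effect size near zero means the two target sets are about equally associated with the attribute poles; under this reading, the bias reductions reported above correspond to effect sizes moving toward zero.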

Publisher

MDPI AG

