Affiliations:
1. Department of Information Science, University of North Texas, Denton, TX 76205, USA
2. Department of Computer Science and Engineering, University of North Texas, Denton, TX 76205, USA
Abstract
In the evolving field of machine learning, deploying fair and transparent models remains a formidable challenge. Building on earlier research demonstrating that neural architectures exhibit inherent biases, this study analyzes a broad spectrum of transformer-based language models, from base to x-large configurations. Using the Word Embedding Association Test (WEAT), it investigates genre-based bias in movie reviews and finds that scaling models up tends to mitigate bias, with larger models showing up to a 29% reduction in prejudice. The study also underscores the effectiveness of prompt-based learning, a facet of prompt engineering, as a practical approach to bias mitigation: this technique reduces genre bias in reviews by more than 37% on average. These results suggest that development practices should include the strategic use of prompts when shaping model outputs, highlighting the crucial role of ethical AI integration in weaving fairness into the core functionality of transformer models. Although the prompts employed in this research are basic, the findings point to structured prompt engineering as a path toward AI systems that are more ethical, equitable, and accountable.
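For illustration, the WEAT effect size referenced above can be computed from pre-trained word embeddings roughly as in the following minimal Python sketch. This is not the authors' implementation; the variable names and the use of the sample standard deviation are assumptions for the sake of the example.

# Minimal WEAT effect-size sketch (assumes pre-computed embedding vectors);
# illustrative only, not the authors' implementation.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B): mean similarity of w to attribute set A minus attribute set B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # X, Y: lists of target-term vectors (e.g., contrasting genre terms)
    # A, B: lists of attribute-term vectors (e.g., pleasant vs. unpleasant words)
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    pooled_std = np.std(s_X + s_Y, ddof=1)  # std over all target associations
    return (np.mean(s_X) - np.mean(s_Y)) / pooled_std

Here X and Y would hold embeddings of contrasting target terms drawn from the reviews and A and B embeddings of attribute terms; an effect size near zero indicates little measured association between targets and attributes.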