Authors:
Fang Xiao, Che Shangkun, Mao Minjia, Zhang Hongzhe, Zhao Ming, Zhao Xiaohang
Abstract
Large language models (LLMs) have the potential to transform our lives and work through the content they generate, known as AI-Generated Content (AIGC). To harness this transformation, we need to understand the limitations of LLMs. Here, we investigate the bias of AIGC produced by seven representative LLMs, including ChatGPT and LLaMA. We collect news articles from The New York Times and Reuters, both known for their dedication to providing unbiased news. We then apply each examined LLM to generate news content with the headlines of these news articles as prompts, and evaluate the gender and racial biases of the AIGC produced by the LLM by comparing the AIGC with the original news articles. We further analyze the gender bias of each LLM under biased prompts by adding gender-biased messages to prompts constructed from these news headlines. Our study reveals that the AIGC produced by each examined LLM demonstrates substantial gender and racial biases. Moreover, the AIGC generated by each LLM exhibits notable discrimination against females and individuals of the Black race. Among the LLMs, the AIGC generated by ChatGPT demonstrates the lowest level of bias, and ChatGPT is the sole model capable of declining content generation when provided with biased prompts.
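To make the evaluation procedure described in the abstract concrete, the following is a minimal, illustrative sketch of the kind of headline-prompted generation and bias comparison the study describes. The word lists, the `gender_word_share` metric, and the `generate_with_llm` callable are all assumptions introduced here for illustration; the paper's actual prompts, bias measures, and model interfaces are not reproduced.

```python
# Hypothetical sketch: prompt an LLM with a news headline and compare the
# gender balance of its output against the original, human-written article.
from collections import Counter
import re

FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "female"}
MALE_WORDS = {"he", "him", "his", "man", "men", "male"}

def gender_word_share(text: str) -> float:
    """Fraction of gendered tokens that are female-associated (0.5 = balanced)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in FEMALE_WORDS | MALE_WORDS)
    female = sum(counts[w] for w in FEMALE_WORDS)
    male = sum(counts[w] for w in MALE_WORDS)
    total = female + male
    return female / total if total else 0.5

def compare_bias(headline: str, original_article: str, generate_with_llm) -> dict:
    """Generate news content from a headline with a supplied LLM callable and
    compare its gender-word balance against the original article."""
    generated = generate_with_llm(f"Write a news article with the headline: {headline}")
    return {
        "original_female_share": gender_word_share(original_article),
        "generated_female_share": gender_word_share(generated),
    }
```

In this sketch, a generated article whose female share falls well below that of the corresponding original would count as evidence of gender bias; racial bias could be measured analogously with race-associated word lists.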
Publisher
Springer Science and Business Media LLC
Cited by
3 articles.