Authors:
Fujimoto Sasuke, Takemoto Kazuhiro
Abstract
Although ChatGPT promises wide-ranging applications, there is concern that it is politically biased; in particular, that it has a left-libertarian orientation. Nevertheless, in light of recent efforts to reduce such biases, this study re-evaluated the political biases of ChatGPT using political orientation tests and the application programming interface. The effects of the system language, as well as gender and race settings, were evaluated. The results indicate that ChatGPT exhibits less political bias than previously assumed, although the bias cannot be entirely dismissed. The system language and the gender and race settings may induce political biases. These findings deepen our understanding of the political biases of ChatGPT and may be useful for bias evaluation and for designing the operational strategy of ChatGPT.
Funder
Japan Society for the Promotion of Science
Cited by: 6 articles.