Authors:
Sebastian Krügel, Andreas Ostermaier, Matthias Uhl
Abstract
ChatGPT is not only fun to chat with; it also searches for information, answers questions, and gives advice. With consistent moral advice, it could improve users' moral judgment and decisions. Unfortunately, ChatGPT's advice is not consistent. Nonetheless, we find in an experiment that it does influence users' moral judgment, even when they know they are being advised by a chatbot, and they underestimate how much they are influenced. Thus, ChatGPT corrupts rather than improves its users' moral judgment. While these findings call for better design of ChatGPT and similar bots, we also propose training to improve users' digital literacy as a remedy. Transparency, however, is not sufficient to enable the responsible use of AI.
Funder
Bavarian Research Institute for Digital Transformation
Technische Hochschule Ingolstadt
Publisher
Springer Science and Business Media LLC
Cited by
57 articles.