Abstract
The emergence of large language models (LLMs) that leverage deep learning and web-scale corpora has made it possible for artificial intelligence (AI) to tackle many higher-order cognitive tasks, with critical implications for industry, government, and labor markets in the US and globally. Here, we investigate whether existing, openly available LLMs are capable of influencing humans’ political attitudes, an ability recently regarded as the unique purview of other humans. Across three preregistered experiments featuring diverse samples of Americans (total N=4,836), we find consistent evidence that messages generated by LLMs (OpenAI’s GPT-3 and GPT-3.5 models) can persuade humans across a range of policy issues, including highly polarized ones such as an assault weapon ban, a carbon tax, and a paid parental-leave program. Overall, LLM-generated messages were as persuasive as messages crafted by lay humans. Our results show that LLMs can persuade humans, even on highly polarized policy issues. As the capacity of LLMs is expected to improve substantially in the near future, these results suggest that LLMs may change political discourse, calling for immediate attention to the identification and regulation of potential misuses of LLMs.
Publisher
Research Square Platform LLC
Cited by
12 articles.