Affiliation:
1. Cornell University, USA
Abstract
Advances in machine learning have led to the creation of natural language models that can mimic human writing in both style and substance. Here we investigate the challenge that machine-generated content, such as that produced by the model GPT-3, presents to democratic representation by assessing the extent to which machine-generated content can pass as constituent sentiment. We conduct a field experiment in which we send both handwritten and machine-generated letters (a total of 32,398 emails) to 7132 state legislators. We compare legislative response rates for the human- versus machine-generated constituency letters to gauge whether language models can approximate inauthentic constituency voices at scale. Legislators were only slightly less likely to respond to artificial intelligence (AI)-generated content than to human-written emails; the 2% difference in response rate was statistically significant but substantively small. Qualitative evidence sheds light on the potential perils that this technology presents for democratic representation, but also suggests potential techniques that legislators might employ to guard against misuses of language models.
Subject
Sociology and Political Science, Communication
Cited by
8 articles.