Affiliation:
1. Department of Political Science and School of Public Policy, University College London, London, UK
Abstract
Synthetic content, which has been produced by generative artificial intelligence, is beginning to spread through the public sphere. Increasingly, we find ourselves exposed to convincing ‘deepfakes’ and powerful chatbots in our online environments. How should we mitigate the emerging risks to individuals and society? This article argues that labelling synthetic content in public forums is an essential first step. While calls for labelling have already been growing in volume, no principled argument has yet been offered to justify this measure (which inevitably comes with some additional costs). Rectifying that deficit, I conduct a close examination of our epistemic and expressive interests in identifying synthetic content as such. In so doing, I develop a cumulative case for social media platforms to enforce a labelling duty. I argue that this represents an important element of good platform governance, helping to shore up the integrity of our contemporary public discourse, which takes place increasingly online.
Funder
UK Research and Innovation