Abstract
This paper critically examines the political implications of Large Language Models (LLMs), focusing on the individual and collective ability to engage in political practices. The advent of AI-based chatbots powered by LLMs has sparked debates on their democratic implications. These debates typically focus on how LLMs spread misinformation and thus hinder the evaluative skills essential for informed decision-making and deliberation. This paper suggests that, beyond the spread of misinformation, the political significance of LLMs extends to the core of political subjectivity and action. It explores how LLMs contribute to political de-skilling by influencing the capacities for critical engagement and collective action. Put differently, we explore how LLMs shape political subjectivity. We draw on Arendt’s distinction between speech and language and Foucault’s work on counter-conduct to articulate in what sense LLMs give rise to political de-skilling and hence pose a threat to political subjectivity. The paper concludes by considering how to account for the impact of LLMs on political agency without succumbing to technological determinism, and by pointing to how the practice of parrhesia enables one to form one’s political subjectivity in relation to LLMs.
Funder
Technische Universiteit Delft
Publisher
Springer Science and Business Media LLC