Abstract
This article argues that large language models (LLMs) should be interpreted as a form of god. In a theological sense, a god is an immortal being that exists beyond time and space. This is clearly nothing like LLMs. In an anthropological sense, however, a god is defined rather as the personified authority of a group through time—a conceptual tool that molds a collective of ancestors into a unified agent or voice. This is exactly what LLMs are. They are products of vast volumes of data, literally traces of past human (speech) acts, synthesized into a single agency that is (falsely) experienced by users as extra-human. This reconceptualization, I argue, opens up new avenues of critique of LLMs by allowing the mobilization of theoretical resources from centuries of religious critique. For illustration, I draw on the Marxian religious philosophy of Martin Hägglund. From this perspective, the danger of LLMs emerges not only as bias or unpredictability, but as a temptation to abdicate our spiritual and ultimately democratic freedom in favor of what I call a tyranny of the past.
Funder
Wallenberg AI, Autonomous Systems and Software Programme – Humanities and Society
Uppsala University
Publisher
Springer Science and Business Media LLC