Abstract
The emergence of generative artificial intelligence, such as large language models and text-to-image models, has had a profound impact on society. The ability of these systems to simulate human capabilities such as text writing and image creation is radically redefining a wide range of practices, from artistic production to education. While there is no doubt that these innovations are beneficial to our lives, the pervasiveness of these technologies should not be underestimated, as it raises increasingly pressing ethical questions that require a radical resemantization of certain notions traditionally ascribed to humans alone. Among these notions, that of technological intentionality plays a central role. With regard to this notion, this paper first aims to highlight what we propose to define as the intentionality gap: insofar as, currently, (1) it is increasingly difficult to assign responsibility for the actions performed by AI systems to humans, as these systems are increasingly autonomous, and (2) it is increasingly complex to reconstruct the reasoning behind the results they produce as we move away from good old-fashioned AI, it is now even more difficult to trace the intentionality of AI systems back to the intentions of their developers and end users. This gap between human and technological intentionality requires a revision of the concept of intentionality; to this end, we propose to ascribe preter-intentional behavior to generative AI. We use this term to highlight how AI intentionality both incorporates and transcends human intentionality; that is, it goes beyond (preter) human intentionality while remaining linked to it. To show the merits of this notion, we first rule out the possibility that such preter-intentionality is merely an unintended consequence and then explore its nature by comparing it with some paradigmatic notions of technological intentionality present in the wider debate on the moral (and technological) status of AI.
Funder
Università degli Studi di Milano
Publisher
Springer Science and Business Media LLC