Abstract
Artificial General Intelligence (AGI) is said to pose many risks, be they catastrophic, existential, or otherwise. This paper discusses whether the notion of risk can apply to AGI, both descriptively and within the current regulatory framework. It argues that current definitions of risk are ill-suited to capture the existential risks AGI is claimed to pose, and that the risk-based framework of the EU AI Act is inadequate for dealing with truly general, agential systems.
Funder
Università degli Studi di Pavia
Publisher
Springer Science and Business Media LLC