Affiliation:
1. Center for Human-Compatible AI, University of California, Berkeley, Berkeley, CA, United States
Abstract
This article examines the challenges of regulating artificial intelligence (AI) systems and proposes an adapted model of regulation suited to AI’s novel features. Unlike past technologies, AI systems built with techniques such as deep learning cannot be directly analyzed, specified, or audited against regulations; their behavior emerges unpredictably from training rather than from intentional design. Nevertheless, the traditional model of delegating oversight to an expert agency, which has succeeded in high-risk sectors such as aviation and nuclear power, should not be wholly discarded. Instead, policymakers must contain the risks posed by today’s opaque models while supporting research into provably safe AI architectures. Drawing lessons from the AI safety literature and from past regulatory successes, the article argues that effective AI governance will likely require consolidated authority, licensing regimes, mandated disclosure of training data and modeling decisions, formal verification of system behavior, and the capacity for rapid intervention.
Publisher
Oxford University Press (OUP)