Risks, innovation, and adaptability in the UK’s incrementalism versus the European Union’s comprehensive artificial intelligence regulation
Author:
Asress Adimi Gikay
Abstract
The regulation of artificial intelligence (AI) should strike a balance between addressing the technology's risks and realizing its benefits by enabling useful innovation, whilst remaining adaptable to evolving risks. The European Union's (EU) overarching risk-based regulation subjects AI systems across industries to a set of regulatory standards determined by the risk category into which they fall, whereas the UK's sectoral approach advocates incremental regulation. By demonstrating the EU AI Act's inability to adapt to evolving risks and to regulate the technology proportionately, this article argues that the UK should avoid the EU AI Act's compartmentalized high-risk classification system. Instead, the UK should refine its incremental regulation by adopting a generic principle for risk classification that allows for contextual risk assessment whilst adapting to evolving risks. The article contends that, if refined appropriately, the UK's incremental approach, which relies on coordinated sectoral regulation, encourages innovation without undermining the UK technology sector's competitiveness in the global market for compliant AI, whilst also mitigating the potential risks presented by the technology.
Publisher
Oxford University Press (OUP)