Abstract
This paper critically evaluates the European Commission’s proposed AI Act’s approach to risk management and risk acceptability for high-risk artificial intelligence systems that pose risks to fundamental rights and safety. The Act aims to promote “trustworthy” AI with a proportionate regulatory burden. Its provisions on risk acceptability require residual risks from high-risk systems to be reduced or eliminated “as far as possible”, having regard for the “state of the art”. This criterion, especially if interpreted narrowly, is unworkable and promotes neither proportionate regulatory burden nor trustworthiness. By contrast, the Parliament’s most recent draft amendments to the risk management provisions introduce “reasonableness” and cost–benefit analyses and are more transparent regarding the value-laden and contextual nature of risk acceptability judgments. This paper argues that the Parliament’s approach is more workable and better balances the goals of proportionality and trustworthiness. It explains what reasonableness in risk acceptability judgments would entail, drawing on principles from negligence law and European medical devices regulation. It also contends that the approach to risk acceptability judgments needs a firm foundation of civic legitimacy, including detailed guidance or involvement from regulators and meaningful input from affected stakeholders.
Publisher
Cambridge University Press (CUP)
Cited by
5 articles.