Abstract
This article examines artificial intelligence (AI) co-regulation in the EU AI Act and the critical role of standards under this regulatory strategy. It engages with the foundation of democratic legitimacy in EU standardization, emphasizing the need for reform to keep pace with the rapid evolution of AI capabilities, as recently suggested by the European Parliament. The article highlights the challenges posed by interdisciplinarity and the lack of civil society expertise in standard-setting. It critiques the inadequate representation of societal stakeholders in the development of AI standards, posing pressing questions about the risks this entails for the protection of fundamental rights, given the lack of democratic oversight and the global composition of standard-developing organizations. The article scrutinizes how, under the AI Act, technical standards will define AI risks and mitigation measures, and questions whether technical experts are adequately equipped to standardize thresholds of acceptable residual risk in different high-risk contexts. More specifically, the article examines the complexities of regulating AI, drawing attention to the multi-dimensional nature of identifying risks in AI systems and the value-laden nature of the task. It questions the potential creation of a typology of AI risks and highlights the need for a nuanced, inclusive, and context-specific approach to risk identification and mitigation. Consequently, we underscore the imperative of continuous stakeholder involvement in developing, monitoring, and refining the technical rules and standards for high-risk AI applications. We also emphasize the need for rigorous training, certification, and surveillance measures to ensure the enforcement of fundamental rights in the face of AI developments.
Finally, we recommend greater transparency and inclusivity in risk identification methodologies, urging for approaches that involve stakeholders and require a diverse skill set for risk assessment. At the same time, we also draw attention to the diversity within the European Union and the consequent need for localized risk assessments that consider national contexts, languages, institutions, and culture. In conclusion, the article argues that co-regulation under the AI Act necessitates a thorough re-examination and reform of standard-setting processes, to ensure a democratically legitimate, interdisciplinary, stakeholder-inclusive, and responsive approach to AI regulation, which can safeguard fundamental rights and anticipate, identify, and mitigate a broad spectrum of AI risks.
Publisher
Oxford University Press (OUP)
Cited by
1 article.