Affiliation:
1. Department of Government, University of Bergen, Bergen, Norway
Abstract
The European Commission has pioneered the coercive regulation of artificial intelligence (AI), including a proposal to ban some applications altogether on moral grounds. Core to its regulatory strategy is a nominally "risk‐based" approach with interventions that are proportionate to risk levels. Yet neither standard accounts of risk‐based regulation as a rational problem‐solving endeavor nor theories of organizational legitimacy‐seeking, both prominently discussed in Regulation & Governance, fully explain the Commission's attraction to the risk heuristic. This article responds to this impasse with three contributions. First, it enriches risk‐based regulation scholarship—beyond AI—with a firm foundation in constructivist and critical political economy accounts of emerging tech regulation to capture the performative politics of defining and enacting risk vis‐à‐vis global economic competitiveness. Second, it conceptualizes the role of risk analysis within a Cultural Political Economy framework: as a powerful epistemic tool for the discursive and regulatory differentiation of an uncertain regulatory terrain (semiosis and structuration), which the Commission wields in its pursuit of a future common European AI market. Third, the paper offers an in‐depth empirical reconstruction of the Commission's risk‐based semiosis and structuration in AI regulation through qualitative analysis of a substantive sample of documents and expert interviews. This finds that the Commission's use of risk analysis—outlawing some AI uses as matters of deep value conflicts and tightly controlling (at least discursively) so‐called high‐risk AI systems—enables Brussels to fashion its desired trademark of European "cutting‐edge AI … trusted throughout the world" in the first place.
Subject
Law, Public Administration, Sociology and Political Science
Cited by
6 articles.