Affiliation:
1. Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
2. Department of Media, Communication and Cultural Studies, Goldsmiths, University of London, London, UK
Abstract
Focusing on artificial intelligence (AI) policy in the European Union (EU), we explore the dominant approach taken to data justice in policy. More specifically, we ask how the particular issue of discrimination is translated into policy goals and measures as a way to address prominent concerns about AI. Looking at the stage of policy formulation, we analyse how (non)discrimination is currently pursued within the EU's AI policy debate through a study of relevant policy documents and public consultations between 2017 and 2023. We argue that whilst the issue of discrimination has moved from the margins to the mainstream of policy debate, it has done so based on an understanding of discrimination as an inevitable risk of AI; such risk is specific to particular situations and to the technological features of AI; the nature of this risk can be assessed and managed through a set of procedural safeguards; and such safeguards can be supported by the creation of a trustworthy AI market. Whilst this translation of justice is important for contending with some of the critique surrounding the advancement of AI, it may also serve to contain and neutralize such critique in the interest of marketization.
Funder
Horizon 2020 Framework Programme
Cited by
1 article.