Abstract
Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS. In previous work, we proposed that ethics-based auditing (EBA)—that is, a structured process by which ADMS are assessed for consistency with relevant principles or norms—can (a) help organisations verify claims about their ADMS and (b) provide decision-subjects with justifications for the outputs produced by ADMS. In this article, we outline the conditions under which EBA procedures can be feasible and effective in practice. First, we argue that EBA is best understood as a ‘soft’ yet ‘formal’ governance mechanism. This implies that the main responsibility of auditors should be to spark ethical deliberation at key intervention points throughout the software development process and to ensure that there is sufficient documentation to respond to potential inquiries. Second, we frame ADMS as parts of larger sociotechnical systems to demonstrate that, to be feasible and effective, EBA procedures must link to intervention points that span all levels of organisational governance and all phases of the software lifecycle. The main function of EBA should, therefore, be to inform, formalise, assess, and interlink existing governance structures. Finally, we discuss the policy implications of our findings. To support the emergence of feasible and effective EBA procedures, policymakers and regulators could provide standardised reporting formats, facilitate knowledge exchange, provide guidance on how to resolve normative tensions, and create an independent body to oversee EBA of ADMS.
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Human-Computer Interaction, Philosophy
Cited by
9 articles.