Author:
Leila Methnani, Andrea Aler Tubella, Virginia Dignum, Andreas Theodorou
Abstract
As Artificial Intelligence (AI) continues to expand its reach, so does the demand for human control and for AI systems that adhere to our legal, ethical, and social values. Many national and international institutions have taken steps in this direction and published guidelines for the development and deployment of responsible AI systems. These guidelines, however, rely heavily on high-level statements that provide no clear criteria for system assessment, making effective control over systems a challenge. “Human oversight” is one of the requirements being put forward as a means to support human autonomy and agency. In this paper, we argue that human presence alone does not meet this requirement, and that such a misconception may limit the use of automation where it could otherwise provide substantial benefit across industries. We therefore propose the development of systems with variable autonomy—dynamically adjustable levels of autonomy—as a means of ensuring meaningful human control over an artefact by satisfying all three core values commonly advocated in ethical guidelines: accountability, responsibility, and transparency.
Funder
Knut och Alice Wallenbergs Stiftelse
Horizon 2020
Cited by: 20 articles.