Abstract
A number of governmental and nongovernmental organizations have made significant efforts to encourage the development of artificial intelligence in line with a series of aspirational concepts such as transparency, interpretability, explainability, and accountability. The difficulty at present, however, is that these concepts exist at a fairly abstract level, whereas in order for them to have the tangible effects desired they need to become more concrete and specific. This article undertakes precisely this process of concretisation, mapping how the different concepts interrelate and what in particular they each require in order to move from being high-level aspirations to detailed and enforceable requirements. We argue that the key concept in this process is accountability, since unless an entity can be held accountable for compliance with the other concepts, and indeed more generally, those concepts cannot do the work required of them. There is a variety of taxonomies of accountability in the literature. However, at the core of each account appears to be a sense of “answerability”: a need to explain or to give an account. It is this ability to call an entity to account which provides the impetus for each of the other concepts and helps us to understand what they must each require.
Funder
Engineering and Physical Sciences Research Council
Publisher
Cambridge University Press (CUP)
Cited by
14 articles.