Affiliation:
1. School of Information, University of California, Berkeley, Berkeley, CA, USA
Abstract
Contrary to the criticism that mysterious, unaccountable black-box software systems threaten to make the logic of critical decisions inscrutable, we argue that algorithms are fundamentally understandable pieces of technology. Software systems are designed to interact with the world in a controlled way and built or operated for a specific purpose, subject to choices and assumptions. Traditional power structures can and do turn systems into opaque black boxes, but technologies can always be understood at a higher level, intensionally in terms of their designs and operational goals and extensionally in terms of their inputs, outputs and outcomes. The mechanisms of a system's operation can always be examined and explained, but a focus on machinery obscures the key issue of power dynamics. While structural inscrutability frustrates users and oversight entities, system creators and operators always determine that the technologies they deploy are fit for certain uses, making no system wholly inscrutable. We investigate the contours of inscrutability and opacity, the way they arise from power dynamics surrounding software systems, and the value of proposed remedies from disparate disciplines, especially computer ethics and privacy by design. We conclude that policy should not accede to the idea that some systems are of necessity inscrutable. Effective governance of algorithms comes from demanding rigorous science and engineering in system design, operation and evaluation to make systems verifiably trustworthy. Rather than seeking explanations for each behaviour of a computer system, policies should formalize and make known the assumptions, choices, and adequacy determinations associated with a system.
This article is part of the theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.
Funder
Berkeley Center for Law and Technology
Subject
General Physics and Astronomy, General Engineering, General Mathematics
Cited by: 85 articles.