Abstract
Increasingly, laws are being proposed and passed by governments around the world to regulate artificial intelligence (AI) systems deployed in the public and private sectors. Many of these regulations address the transparency of AI systems and related citizen-centric issues, such as granting individuals the right to an explanation of how an AI system makes a decision that impacts them. Yet almost all AI governance documents to date have a significant drawback: they have focused on what to do (or what not to do) with respect to making AI systems transparent, but have left it to technologists to figure out how to build transparent systems. We fill this gap by proposing a stakeholder-first approach that assists technologists in designing transparent, regulatory-compliant systems. We also describe a real-world case study that illustrates how this approach can be used in practice.
Funder
National Science Foundation
Publisher
Cambridge University Press (CUP)
Cited by
12 articles.