Authors:
Philipp Hacker, Jan-Hendrik Passoth
Abstract
The quest to explain the output of artificial intelligence systems has clearly moved from a merely technical endeavor to one of significant legal and political relevance. In this paper, we provide an overview of legal obligations to explain AI and evaluate current policy proposals. In doing so, we distinguish between different functional varieties of AI explanations - such as multiple forms of enabling, technical and protective transparency - and show how different legal areas engage with and mandate such different types of explanations to varying degrees. Starting with the rights-enabling framework of the GDPR, we proceed to uncover technical and protective forms of explanations owed under contract, tort and banking law. Moreover, we discuss what the recent EU proposal for an Artificial Intelligence Act means for explainable AI, and review the proposal’s strengths and limitations in this respect. Finally, from a policy perspective, we advocate for moving beyond mere explainability towards a more encompassing framework for trustworthy and responsible AI that includes actionable explanations, values-in-design and co-design methodologies, interactions with algorithmic fairness, and quality benchmarking.
Publisher
Springer International Publishing
References: 106 articles.
Cited by: 25 articles.