Authors:
Philipp Hacker, Ralf Krestel, Stefan Grundmann, Felix Naumann
Abstract
This paper shows that the law, in subtle ways, may set hitherto unrecognized incentives for the adoption of explainable machine learning applications. In doing so, we make two novel contributions. First, on the legal side, we show that to avoid liability, professional actors, such as doctors and managers, may soon be legally compelled to use explainable ML models. We argue that the importance of explainability reaches far beyond data protection law and crucially influences questions of contractual and tort liability for the use of ML models. To this effect, we conduct two legal case studies in medical and corporate merger applications of ML. As a second contribution, we discuss the (legally required) trade-off between accuracy and explainability and demonstrate the effect in a technical case study in the context of spam classification.
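The spam-classification case study mentioned in the abstract suggests a simple way to see the accuracy/explainability trade-off in practice. The sketch below is purely illustrative and not the authors' actual pipeline: it compares an interpretable linear classifier, whose per-token weights can be read off directly, against a more opaque neural network trained on the same TF-IDF features. The file name spam.csv and its text/label columns are hypothetical placeholders for any labeled spam corpus.

```python
# Illustrative sketch only: an interpretable vs. an opaque spam classifier.
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical input: a CSV with a "text" column (message body) and a
# "label" column ("spam" / "ham").
df = pd.read_csv("spam.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=0)

vec = TfidfVectorizer(min_df=2)
Xtr = vec.fit_transform(X_train)
Xte = vec.transform(X_test)

# Interpretable model: each token's learned weight is directly inspectable,
# so the decision rule itself serves as the explanation.
linear = LogisticRegression(max_iter=1000).fit(Xtr, y_train)
print("logistic regression accuracy:",
      accuracy_score(y_test, linear.predict(Xte)))

# Opaque model: often somewhat more accurate, but auditing its decisions
# requires post-hoc explanation techniques (e.g., relevance propagation).
mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=50, random_state=0)
mlp.fit(Xtr, y_train)
print("MLP accuracy:", accuracy_score(y_test, mlp.predict(Xte)))

# Global explanation from the linear model: the tokens pushing hardest
# toward the positive class (alphabetically later label, here "spam").
tokens = vec.get_feature_names_out()
top = np.argsort(linear.coef_[0])[-10:]
print("most spam-indicative tokens:", [tokens[i] for i in top])
```

Comparing the two accuracy figures against the readability of the linear model's token weights is the trade-off in miniature: any accuracy gap the opaque model buys must be weighed against the extra machinery needed to explain its decisions.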
Publisher
Springer Science and Business Media LLC
Subject
Law, Artificial Intelligence
Cited by
78 articles.