Affiliation:
1. Department of Computer Science, ETH Zurich, Zurich, Switzerland
Abstract
Interpretability and explainability are crucial for machine learning (ML) and statistical applications in medicine, economics, law, and the natural sciences, and form an essential principle for ML model design and development. Although interpretability and explainability have escaped a precise and universal definition, many models and techniques motivated by these properties have been developed over the last 30 years, with the focus currently shifting toward deep learning. We consider concrete examples of the state of the art, including specially tailored rule-based, sparse, and additive classification models, interpretable representation learning, and methods for explaining black-box models post hoc. The discussion emphasizes the need for and relevance of interpretability and explainability, the divide between them, and the inductive biases behind the presented "zoo" of interpretable models and explanation methods.

This article is categorized under:
Fundamental Concepts of Data and Knowledge > Explainable AI
Technologies > Machine Learning
Commercial, Legal, and Ethical Issues > Social Considerations
Funder
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung
Cited by
35 articles.