Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning

Authors:

Marconato Emanuele 1,2, Passerini Andrea 1, Teso Stefano 1,3

Affiliation:

1. Dipartimento di Ingegneria e Scienza dell’Informazione, University of Trento, 38123 Trento, Italy

2. Dipartimento di Informatica, University of Pisa, 56126 Pisa, Italy

3. Centro Interdipartimentale Mente/Cervello, University of Trento, 38123 Trento, Italy

Abstract

Research on Explainable Artificial Intelligence has recently started exploring the idea of producing explanations that, rather than being expressed in terms of low-level features, are encoded in terms of interpretable concepts learned from data. How to reliably acquire such concepts is, however, still fundamentally unclear. An agreed-upon notion of concept interpretability is missing, with the result that concepts used by both post hoc explainers and concept-based neural networks are acquired through a variety of mutually incompatible strategies. Critically, most of these neglect the human side of the problem: a representation is understandable only insofar as it can be understood by the human at the receiving end. The key challenge in human-interpretable representation learning (HRL) is how to model and operationalize this human element. In this work, we propose a mathematical framework for acquiring interpretable representations suitable for both post hoc explainers and concept-based neural networks. Our formalization of HRL builds on recent advances in causal representation learning and explicitly models a human stakeholder as an external observer. This allows us to derive a principled notion of alignment between the machine’s representation and the vocabulary of concepts understood by the human. In doing so, we link alignment and interpretability through a simple and intuitive name transfer game, and clarify the relationship between alignment and a well-known property of representations, namely disentanglement. We also show that alignment is linked to the issue of undesirable correlations among concepts, also known as concept leakage, and to content-style separation, all through a general information-theoretic reformulation of these properties. Our conceptualization aims to bridge the gap between the human and algorithmic sides of interpretability and establish a stepping stone for new research on human-interpretable representations.
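As a rough illustration of what such an information-theoretic reformulation of concept leakage can look like (the notation below is assumed for exposition and is not the paper's exact formalization): writing $g = (g_1, \ldots, g_k)$ for the concepts in the human's vocabulary and $\hat{z}_i$ for the machine representation intended to capture $g_i$, leakage occurs whenever

\[
I(\hat{z}_i \,;\, g_j \mid g_i) > 0 \quad \text{for some } j \neq i,
\]

so that a leakage-free representation satisfies $I(\hat{z}_i \,;\, g_{\setminus i} \mid g_i) = 0$ for every $i$.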

Funder

NextGenerationEU

EU Horizon 2020 research and innovation programme

Publisher

MDPI AG

Subject

General Physics and Astronomy

