Abstract
Practitioners increasingly use machine learning (ML) models, yet these models have become more complex and harder to understand. To help, researchers have proposed techniques that explain model predictions. In practice, however, practitioners struggle to use explainability methods: they often do not know which explanation to choose or how to interpret it. Here we address this challenge by proposing TalkToModel, an interactive dialogue system that explains ML models through natural language conversations. TalkToModel consists of three components: an adaptive dialogue engine that interprets natural language and generates meaningful responses; an execution component that constructs the explanations used in the conversation; and a conversational interface. In real-world evaluations, 73% of healthcare workers agreed they would use TalkToModel over existing systems for understanding a disease prediction model, and 85% of ML professionals agreed TalkToModel was easier to use, demonstrating that TalkToModel is highly effective for model explainability.
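To make the three-component architecture concrete, the following is a minimal Python sketch of how such a pipeline could be wired together. All names here (DummyModel, dialogue_engine, execution_component, conversational_interface) are illustrative stand-ins, not the actual TalkToModel API; the real dialogue engine learns to map free-form text to a grammar of operations rather than matching keywords.

class DummyModel:
    """Stand-in for the ML model being explained (hypothetical)."""
    def predict(self, x):
        return [int(sum(x) > 10)]

def dialogue_engine(utterance: str) -> dict:
    """Component 1: interpret a natural language utterance as a structured
    query. A keyword rule stands in for the learned parser."""
    if "why" in utterance.lower() or "important" in utterance.lower():
        return {"op": "explain"}
    return {"op": "predict"}

def execution_component(query: dict, model, x) -> str:
    """Component 2: construct the explanation or prediction the query asks for."""
    if query["op"] == "explain":
        # A real implementation would compute feature attributions here.
        return "The largest input values contributed most to the prediction."
    return f"The model predicts {model.predict(x)[0]}."

def conversational_interface(utterance: str, model, x) -> str:
    """Component 3: the conversational loop tying the pieces together."""
    return execution_component(dialogue_engine(utterance), model, x)

print(conversational_interface("Why did the model predict this?",
                               DummyModel(), [4, 7, 2]))

Running this prints the canned explanation for the "why" question; the point of the sketch is only the flow (utterance, parsed query, executed explanation, response), not the explanation method itself.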
Funder
NSF | Directorate for Computer & Information Science & Engineering | Division of Information and Intelligent Systems
This work was supported in large part by a fellowship from the Hasso Plattner Institute.
Google, JP Morgan, Amazon, Harvard Data Science Initiative, D^3 Institute at Harvard
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Computer Networks and Communications, Computer Vision and Pattern Recognition, Human-Computer Interaction, Software
Cited by
21 articles.