The Need for Interpretable Features

Author:

Alexandra Zytek¹, Ignacio Arnaldo², Dongyu Liu¹, Laure Berti-Equille³, Kalyan Veeramachaneni¹

Affiliation:

1. MIT, Cambridge MA, USA

2. Corelight

3. IRD, ESPACE-DEV

Abstract

Through extensive experience developing and explaining machine learning (ML) applications for real-world domains, we have learned that ML models are only as interpretable as their features. Even simple, highly interpretable model types such as regression models can be difficult or impossible to understand if they use uninterpretable features. Different users, especially those using ML models for decision-making in their domains, may require different levels and types of feature interpretability. Furthermore, based on our experiences, we claim that the term "interpretable feature" is neither specific nor detailed enough to capture the full extent to which features impact the usefulness of ML explanations. In this paper, we motivate and discuss three key lessons: 1) more attention should be given to what we refer to as the interpretable feature space, or the state of features that are useful to domain experts taking real-world actions; 2) a formal taxonomy is needed of the feature properties that may be required by these domain experts (we propose a partial taxonomy in this paper); and 3) transforms that take data from the model-ready state to an interpretable form are just as essential as traditional ML transforms that prepare features for the model.
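The third lesson above can be illustrated with a minimal sketch of an "inverse" transform that maps a model-ready feature back to an interpretable form for display to a domain expert. All names here (the one-hot column convention and the `interpretable_view` helper) are illustrative assumptions, not the paper's actual API.

```python
def interpretable_view(row, onehot_prefix, readable_name):
    """Collapse one-hot encoded columns such as 'dept=ICU', 'dept=ER'
    back into a single human-readable feature, e.g. {'Department': 'ICU'}.

    This is the reverse direction of a traditional ML transform: instead
    of preparing features for the model, it prepares them for the user.
    """
    for key, value in row.items():
        if key.startswith(onehot_prefix + "=") and value == 1:
            return {readable_name: key.split("=", 1)[1]}
    return {readable_name: "unknown"}


# A model-ready row as the classifier sees it (hypothetical example):
model_row = {"dept=ICU": 1, "dept=ER": 0, "age_scaled": 0.37}

print(interpretable_view(model_row, "dept", "Department"))
# {'Department': 'ICU'}
```

A decision-support interface would apply such transforms to every explanation it shows, so that feature attributions refer to "Department" rather than to scaled or encoded model inputs.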

Publisher

Association for Computing Machinery (ACM)

Subject

General Medicine


Cited by 12 articles.
