FXAM: A unified and fast interpretable model for predictive analytics
Published: 2024-10
Volume: 252
Page: 123890
ISSN: 0957-4174
Container-title: Expert Systems with Applications
Language: en
Author:
Jiang Yuanyuan,
Ding Rui,
Qiao Tianchi,
Zhu Yunan,
Han Shi,
Zhang Dongmei
References (53 articles):
1. Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and intelligible systems: An hci research agenda. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1–18).
2. Abdul, A., von der Weth, C., Kankanhalli, M., & Lim, B. Y. (2020). COGAM: Measuring and Moderating Cognitive Load in Machine Learning Model Explanations. In Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1–14).
3. Achen, C. (1990). What does “explained variance” explain?: Reply. Political Analysis.
4. Agarwal, R., et al. (2020). Neural additive models: Interpretable machine learning with neural nets.
5. Arrieta, A. B., et al. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion.