What about the Latent Space? The Need for Latent Feature Saliency Detection in Deep Time Series Classification

Author:

Maresa Schröder 1,2; Alireza Zamanian 1,3; Narges Ahmidi 1

Affiliation:

1. Fraunhofer Institute for Cognitive Systems IKS, 80686 Munich, Germany

2. Department of Mathematics, TUM School of Computation, Information and Technology, Technical University of Munich, 80333 Munich, Germany

3. Department of Computer Science, TUM School of Computation, Information and Technology, Technical University of Munich, 80333 Munich, Germany

Abstract

Saliency methods are designed to provide explainability for deep image processing models by assigning feature-wise importance scores and thus detecting informative regions in the input images. Recently, these methods have been widely adapted to the time series domain, aiming to identify important temporal regions in a time series. This paper extends our prior work, which identified the systematic failure of such methods in the time series domain to produce relevant results when informative patterns derive from underlying latent information rather than temporal regions. First, we visually and quantitatively assess the quality of explanations produced by several state-of-the-art saliency methods, including Integrated Gradients, DeepLIFT, Kernel SHAP, and LIME, using univariate simulated time series data with temporal or latent patterns. In addition, to emphasize the severity of the latent feature saliency detection problem, we also run experiments on a real-world predictive maintenance dataset with known latent patterns. We identify Integrated Gradients, DeepLIFT, and the input-cell attention mechanism as potential candidates for refinement to yield latent saliency scores. Finally, we provide recommendations on using saliency methods for time series classification and suggest a guideline for developing latent saliency methods for time series.
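As a point of reference for the temporal-saliency setting the abstract describes, Integrated Gradients attributes a prediction to input features by integrating the model's gradient along a straight path from a baseline to the input. The sketch below implements this for a toy differentiable "classifier" over a univariate time series; the sigmoid model, the weight vector, and the choice of informative time steps are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w):
    # Toy differentiable classifier: probability that the series is class 1.
    return sigmoid(w @ x)

def model_grad(x, w):
    # Analytic gradient of the sigmoid model with respect to the input series.
    p = model(x, w)
    return p * (1.0 - p) * w

def integrated_gradients(x, w, baseline=None, steps=50):
    # Midpoint-rule approximation of the IG path integral from baseline to x:
    # IG_i = (x_i - x'_i) * mean_k grad_i(x' + alpha_k * (x - x')).
    if baseline is None:
        baseline = np.zeros_like(x)
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([model_grad(baseline + a * (x - baseline), w) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

rng = np.random.default_rng(0)
T = 20
x = rng.normal(size=T)
w = np.zeros(T)
w[5:10] = 1.5  # only time steps 5..9 carry class information (an assumption)

attr = integrated_gradients(x, w)
# Completeness axiom: attributions should sum to f(x) - f(baseline).
print(attr.sum(), model(x, w) - model(np.zeros(T), w))
```

For a model like this, whose informative pattern lives in identifiable temporal regions, IG correctly concentrates its scores on those time steps; the paper's point is that when the informative pattern is a latent property of the series (e.g. frequency or trend) rather than a temporal region, such per-time-step scores stop being meaningful.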

Funder

Bavarian Ministry for Economic Affairs, Regional Development and Energy

Publisher

MDPI AG

Subject

Artificial Intelligence, Engineering (miscellaneous)

References (53 articles; first 5 shown)

1. Guidotti, R., et al. (2018). A Survey of Methods for Explaining Black Box Models. ACM Comput. Surv.

2. Ismail, A.A., Gunady, M.K., Corrada Bravo, H., and Feizi, S. (2020, January 6–12). Benchmarking Deep Learning Interpretability in Time Series Predictions. Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS), Online.

3. Loeffler, C., Lai, W.C., Eskofier, B., Zanca, D., Schmidt, L., and Mutschler, C. (2022). Don’t Get Me Wrong: How to apply Deep Visual Interpretations to Time Series. arXiv.

4. Schlegel, U., Oelke, D., Keim, D.A., and El-Assady, M. (2020, January 11). An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks. Proceedings of the Pre-Registration Workshop NeurIPS (2020), Vancouver, BC, Canada.

5. Schröder, M., Zamanian, A., and Ahmidi, N. (2023, January 4). Post-hoc Saliency Methods Fail to Capture Latent Feature Importance in Time Series Data. Proceedings of the ICLR 2023 Workshop on Trustworthy Machine Learning for Healthcare, Online.

Cited by 1 article.
