1. Guidotti, R., et al. (2018). A Survey of Methods for Explaining Black Box Models. ACM Comput. Surv.
2. Ismail, A.A., Gunady, M.K., Corrada Bravo, H., and Feizi, S. (2020, January 6–12). Benchmarking Deep Learning Interpretability in Time Series Predictions. Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS), Online.
3. Loeffler, C., Lai, W.C., Eskofier, B., Zanca, D., Schmidt, L., and Mutschler, C. (2022). Don’t Get Me Wrong: How to apply Deep Visual Interpretations to Time Series. arXiv.
4. Schlegel, U., Oelke, D., Keim, D.A., and El-Assady, M. (2020, January 11). An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks. Proceedings of the Pre-Registration Workshop NeurIPS (2020), Vancouver, BC, Canada.
5. Schröder, M., Zamanian, A., and Ahmidi, N. (2023, January 4). Post-hoc Saliency Methods Fail to Capture Latent Feature Importance in Time Series Data. Proceedings of the ICLR 2023 Workshop on Trustworthy Machine Learning for Healthcare, Online.