1. Agarwal, C., et al.: OpenXAI: towards a transparent evaluation of model explanations (2022)
2. Alvarez-Melis, D., Jaakkola, T.S.: On the robustness of interpretability methods (2018)
3. Ancona, M., Ceolini, E., Öztireli, C., Gross, M.: Towards better understanding of gradient-based attribution methods for deep neural networks. arXiv preprint arXiv:1711.06104 (2018)
4. Anders, C.J., Neumann, D., Samek, W., Müller, K.R., Lapuschkin, S.: Software for dataset-wide XAI: from local explanations to global insights with Zennit, CoRelAy, and ViRelAy (2021)
5. Arya, V., et al.: AI explainability 360: impact and design (2022)