Affiliation:
1. ENAC (École Nationale de l’Aviation Civile), Université de Toulouse, 7, Avenue Edouard Belin, 31055 Toulouse, France
Abstract
Explainable Artificial Intelligence (XAI) and acceptable artificial intelligence are active research topics in machine learning. For critical applications, being able to prove, or at least to guarantee with high probability, the correctness of algorithms is of utmost importance. In practice, however, few theoretical tools are known that can be used for this purpose. Using the Fisher Information Metric (FIM) on the output space yields interesting indicators in both the input and parameter spaces, but the underlying geometry is not yet fully understood. In this work, an approach based on the pullback bundle, a well-known construction for describing bundle morphisms, is introduced and applied to the encoder–decoder block. Under a constant rank hypothesis on the derivative of the network with respect to its inputs, a description of the network's behavior is obtained. A further generalization is obtained through the introduction of the pullback generalized bundle, which takes into account the sensitivity with respect to the weights.
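To make the pullback construction concrete, the following is a minimal sketch under the assumption that the network is a smooth map f: X → Y between the input manifold X and the output manifold Y, the latter carrying the FIM; the symbols f, J_f, and G below are illustrative notation, not taken from the paper. The metric induced on the input space is the pullback

\[
(f^{*} g^{\mathrm{FIM}})_{x}(u, v) \;=\; g^{\mathrm{FIM}}_{f(x)}\bigl(\mathrm{d}f_{x}(u),\; \mathrm{d}f_{x}(v)\bigr),
\qquad u, v \in T_{x}X,
\]

or, in coordinates, with J_f(x) the Jacobian of f at x,

\[
G(x) \;=\; J_f(x)^{\top}\, G^{\mathrm{FIM}}\bigl(f(x)\bigr)\, J_f(x).
\]

Since J_f(x) need not be injective, the pullback is in general only a degenerate (semi-)metric; the constant rank hypothesis ensures that its kernel has constant dimension, so the construction behaves uniformly over the input space.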
Subject
General Physics and Astronomy