1. Alvarez-Melis, D., Jaakkola, T.S.: On the robustness of interpretability methods. CoRR abs/1806.08049 (2018). arxiv.org/abs/1806.08049
2. Alvarez-Melis, D., Jaakkola, T.S.: Towards robust interpretability with self-explaining neural networks. CoRR abs/1806.07538 (2018). arxiv.org/abs/1806.07538
3. Andersen, S., Olesen, K., Jensen, F., Jensen, F.: HUGIN - a shell for building Bayesian belief universes for expert systems. In: IJCAI, vol. 2, pp. 1080–1085 (1989)
4. Arya, V., et al.: One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. CoRR abs/1909.03012 (2019). arxiv.org/abs/1909.03012
5. Askira-Gelman, I.: Knowledge discovery: comprehensibility of the results. In: Proceedings of the Thirty-First Hawaii International Conference on System Sciences, vol. 5, p. 247. IEEE Computer Society, Los Alamitos, January 1998. https://doi.org/10.1109/HICSS.1998.648319