1. Amina Adadi and Mohammed Berrada. 2018. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access, Vol. 6 (2018), 52138--52160.
2. Jakob Ambsdorf, Alina Munir, Yiyao Wei, Klaas Degkwitz, Harm Matthias Harms, Susanne Stannek, Kyra Ahrens, Dennis Becker, Erik Strahl, Tom Weber, et al. 2022. Explain yourself! Effects of Explanations in Human-Robot Interaction. arXiv preprint arXiv:2204.04501 (2022).
3. Explainable navigation system using fuzzy reinforcement learning
4. Stefano Borgo and Claudio Masolo. 2009. Foundational choices in DOLCE. In Handbook on Ontologies. Springer, 361--381.
5. Aditya Chattopadhay, Anirban Sarkar, Prantik Howlader, and Vineeth N Balasubramanian. 2018. Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 839--847.