Affiliation:
1. Department of Medical Physics, School of Medicine, University of Patras, 26504 Patras, Greece
2. Department of Electrical and Computer Technology Engineering, University of Patras, 26504 Patras, Greece
Abstract
Currently, artificial intelligence faces several obstacles to practical deployment across application domains. The explainability of advanced artificial intelligence algorithms is a topic of paramount importance and has recently attracted extensive discussion. Both classical machine learning and deep learning models often behave as black boxes, limiting the logical interpretations that end users require. Artificial intelligence applications in industry, medicine, agriculture, and the social sciences depend on users' trust in the systems: users are entitled to know why and how a method reached a decision and which factors played a critical role; otherwise, they will remain wary of adopting new techniques. This paper discusses the nature of fuzzy cognitive maps (FCMs), a soft computing method for modeling human knowledge and making decisions under uncertainty. Although FCMs are not new to the field, they continue to evolve and now incorporate recent advances in artificial intelligence, such as learning algorithms and convolutional neural networks. The structure of FCMs makes them strong in transparency, interpretability, transferability, and other properties expected of explainable artificial intelligence (XAI) methods. The present study aims to demonstrate and defend the explainability properties of FCMs and to highlight their successful implementation in many domains. It then discusses how FCMs align with XAI guidelines and presents illustrative examples from the literature. The results show that FCMs both comply with XAI directives and have many successful applications in domains such as medical decision-support systems, precision agriculture, energy saving, environmental monitoring, and public-sector policy-making.
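To make the abstract's description concrete, the following is a minimal sketch of standard FCM inference: concepts hold activation values in (0, 1), a signed weight matrix encodes expert-declared causal influences, and the state is iterated through a sigmoid transfer function until it settles. The concept names and weight values below are purely illustrative assumptions, not taken from this paper.

```python
import numpy as np

# Hypothetical three-concept map (illustrative weights, not from the paper):
#   C0 = "symptom severity", C1 = "treatment intensity", C2 = "patient recovery"
# W[i, j] is the causal influence of concept i on concept j.
W = np.array([
    [0.0, 0.7, -0.4],  # C0 increases C1, decreases C2
    [0.0, 0.0,  0.8],  # C1 increases C2
    [0.0, 0.0,  0.0],  # C2 influences nothing
])

def sigmoid(x, lam=1.0):
    """Common FCM transfer function squashing activations into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_infer(W, a0, steps=50, tol=1e-5):
    """Iterate A(t+1) = f(A(t) + A(t) @ W) until the state stabilizes."""
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a_next = sigmoid(a + a @ W)
        if np.max(np.abs(a_next - a)) < tol:
            return a_next
        a = a_next
    return a

# Start with high symptom severity and observe the equilibrium state.
state = fcm_infer(W, [0.9, 0.0, 0.0])
```

Because every state transition is just a weighted sum of named concepts passed through a known function, each final activation can be traced back to the expert-assigned weights, which is the transparency property the paper emphasizes.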
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by: 5 articles.