Authors: Dujmović, Jozo; Allen, William L.
Abstract
All professional decisions prepared for a specific stakeholder can and must be explained. The primary role of explanation is to defend and reinforce the proposed decision, supporting stakeholder confidence in the validity of the decision. In this paper we present the methodology for explaining results of the evaluation of alternatives for water quality protection for a real-life project, the Upper Neuse Clean Water Initiative in North Carolina. The evaluation and comparison of alternatives is based on the Logic Scoring of Preference (LSP) method. We identify three explainability problems: (1) the explanation of LSP criterion properties, (2) the explanation of evaluation results for each alternative, and (3) the explanation of the comparison and ranking of alternatives. To solve these problems, we introduce a set of explainability indicators that characterize properties that are necessary for verbal explanations that humans can understand. In addition, we use this project to show the methodology for automatic generation of explainability reports. We recommend the use of explainability reports as standard supplements for evaluation reports containing the results of evaluation projects based on the LSP method.
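The LSP method referenced in the abstract aggregates attribute suitability scores with graded logic operators, commonly realized as weighted power means whose exponent tunes the degree of conjunction or disjunction. As an illustration only (the function name, weights, and exponent value below are hypothetical, not taken from the paper), a minimal sketch of such an aggregator might look like:

```python
import math

def lsp_aggregate(scores, weights, r):
    """Weighted power mean, a basic graded-logic aggregator in the LSP style.

    r > 1 leans toward disjunction (or-ness), r < 1 toward conjunction
    (and-ness); r = 1 gives the weighted arithmetic mean.
    """
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    if r == 0:  # limit case: weighted geometric mean
        return math.prod(x ** w for x, w in zip(scores, weights))
    return sum(w * x ** r for x, w in zip(scores, weights)) ** (1.0 / r)

# Two attribute scores aggregated with mild conjunction (r = 0.5)
print(lsp_aggregate([0.8, 0.6], [0.5, 0.5], 0.5))
```

Explainability indicators of the kind the paper proposes would then characterize such an aggregator (e.g., its and-ness) so that the numeric result can be narrated in terms a stakeholder understands.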
Cited by: 1 article.
1. Explainable Artificial Intelligence (XAI) in Manufacturing; 2023.