Affiliation:
1. School of Information Technology, Mae Fah Luang University, Chiang Rai, Thailand
Abstract
Machine learning has been implemented as part of many software systems to support data-driven decisions and recommendations. A prominent machine learning technique is the artificial neural network, which lacks an explanation of how it produces its output. However, many application domains require algorithmic decision-making to be transparent, so explainability in these systems has become an important challenge. This paper proposes an automated framework that elicits the contributing rules describing how a neural network model makes decisions. The explainability of the contributing rules can be measured, and the rules can reveal issues in the training dataset. With an ontology representation of the contributing rules, an individual decision can be automatically explained through ontology reasoning. We have developed a tool that supports applying our framework in practice. We evaluated the effectiveness of our framework using open datasets from different domains. The results show that our framework performs well in explaining the neural network models, achieving an average accuracy of 81% in explaining the subject models. Our framework also requires significantly less processing time than the technique it was compared against.
Publisher
World Scientific Pub Co Pte Ltd
Subject
Artificial Intelligence, Computer Graphics and Computer-Aided Design, Computer Networks and Communications, Software