Affiliation:
1. Department of Computer Engineering, Amirkabir University of Technology, Tehran, Iran
Abstract
In recent years, advanced machine learning and artificial intelligence techniques have gained popularity owing to their ability to solve problems across various domains with high performance and quality. However, these techniques are often so complex that they cannot provide simple, understandable explanations for the outputs they generate. The field of explainable artificial intelligence has recently emerged to address this issue. At the same time, most data generated in different domains are inherently structural; that is, they consist of parts and the relationships among them. Such data can be represented using either a simple data structure or form, such as a vector, or a complex data structure, such as a graph. The effect of this representation form on the explainability and interpretability of machine learning models has not been extensively discussed in the literature. In this survey article, we review efficient algorithms proposed for learning from inherently structured data, emphasizing how the representation form affects the explainability of the learning models. A conclusion of our literature review is that using complex forms or data structures for data representation improves not only learning performance but also the explainability and transparency of the model.
Publisher
Association for Computing Machinery (ACM)