Affiliation:
1. Department of Informatics, King’s College London
Abstract
Knowledge graphs are important in human-centered AI because of their ability to reduce the need for large labelled machine-learning datasets, facilitate transfer learning, and generate explanations. However, knowledge-graph construction has evolved into a complex, semi-automatic process that increasingly relies on opaque deep-learning models and vast collections of heterogeneous data sources to scale. The knowledge-graph lifecycle is not transparent, accountability is limited, and there are no accounts of, or indeed methods to determine, how fair a knowledge graph is in the downstream applications that use it. Knowledge graphs are thus at odds with AI regulation, such as the EU's upcoming AI Act, and with ongoing efforts elsewhere in AI to audit and debias data and algorithms. This paper reports on work in progress towards designing explainable (XAI) knowledge-graph construction pipelines with humans in the loop, and discusses research topics in this space. These topics are grounded in a systematic literature review, in which we studied tasks in knowledge-graph construction that are often automated, as well as the methods commonly used to explain how they work and the outcomes they produce. We identified three directions for future research: (i) identifying tasks in knowledge-graph construction where manual input remains essential and where there may be opportunities for AI assistance; (ii) integrating XAI methods into established knowledge-engineering practices to improve stakeholder experience; and (iii) evaluating how effective explanations genuinely are in making knowledge-graph construction more trustworthy.