Authors:
Jennifer D'Souza, Hassan Hussein, Julia Evans, Lars Vogt, Oliver Karras, Vinodh Ilangovan, Anna-Lena Lorenz, Sören Auer
Abstract
The Open Research Knowledge Graph (ORKG) is a digital library for machine-actionable scholarly knowledge, with a focus on structured research comparisons obtained through expert crowdsourcing. While the ORKG has attracted a community of more than 1,000 users, the curated data has not yet been subject to an in-depth quality assessment. Here, as a first exemplary step, we, a team of domain experts, evaluate the quality of six selected ORKG Comparisons based on three criteria: 1) the quality of semantic modelling; 2) the maturity of the Comparisons in terms of their completeness, syntactic representation, identifier stability, and the linkability mechanisms ensuring their interoperability and discoverability; and 3) the informative usefulness of the Comparisons to expert and lay users. We found that each criterion addresses a unique and independent aspect of quality. Backed by the observations of the quality evaluations presented in this paper, a fitting model of knowledge graph quality appears to be one that is, like ours, multidimensional.
Funder
Bundesministerium für Bildung und Forschung