Author:
Liu Hongbo, Chen Yue, He Peng, Zhang Chao, Wu Hao, Zhang Jiange
Abstract
Conventional knowledge graph representation learning methods learn representations of entities and relations by projecting the triples of a knowledge graph into a continuous vector space. These vector representations improve the precision of link prediction and the efficiency of downstream tasks. However, such methods cannot handle entities that were unseen during training and that appear as the knowledge graph evolves; in other words, a model trained on a source knowledge graph cannot be applied to a target knowledge graph containing new, unseen entities. Recently, a few subgraph-based link prediction models have achieved inductive ability, but they all neglect semantic information. In this work, we propose TGraiL, an inductive representation learning model that considers not only topological structure but also semantic information. First, distances within the extracted subgraph are used to encode each node's topological structure. Second, a projection matrix is used to encode entity type information. Finally, both kinds of information are fused during training to obtain the final vector representations of entities. The experimental results show that the model significantly outperforms the existing baseline models, demonstrating the method's effectiveness and superiority.
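To illustrate the two encoding steps described above, the following is a minimal sketch (not the authors' implementation): it assumes a GraIL-style one-hot labeling of each node by its distances to the head and tail entities, a learned projection matrix applied to a type embedding, and fusion by simple concatenation. All function names, dimensions, and the concatenation step are illustrative assumptions.

    # Minimal sketch: fusing distance-based structural features with
    # type-based features for a node in an extracted subgraph.
    # Names, dimensions, and the fusion scheme are assumptions for illustration.
    import numpy as np

    def distance_one_hot(dist_to_head, dist_to_tail, max_dist=4):
        """Encode a node's topological position by its shortest-path distances
        to the head and tail entities of the target triple (one-hot per distance)."""
        d_h = np.eye(max_dist + 1)[min(dist_to_head, max_dist)]
        d_t = np.eye(max_dist + 1)[min(dist_to_tail, max_dist)]
        return np.concatenate([d_h, d_t])        # shape: (2 * (max_dist + 1),)

    def type_projection(type_embedding, W):
        """Project a raw entity-type embedding with a projection matrix W
        (W would be learned during training)."""
        return W @ type_embedding                # shape: (proj_dim,)

    rng = np.random.default_rng(0)
    type_dim, proj_dim = 16, 8
    W = rng.normal(size=(proj_dim, type_dim))    # projection matrix

    struct_feat = distance_one_hot(dist_to_head=1, dist_to_tail=2)
    type_feat = type_projection(rng.normal(size=type_dim), W)

    # Fuse both views into the node's initial representation (here by concatenation).
    node_repr = np.concatenate([struct_feat, type_feat])
    print(node_repr.shape)                       # (18,)

In practice the fused representation would serve as the initial node feature for a subgraph-based scoring model; the sketch only shows how structural and type information can be combined into a single vector.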
Funder
National Natural Science Foundation of China
The National Social Science Fund of China
Publisher
Springer Science and Business Media LLC