Simple Knowledge Graph Completion Model Based on Differential Negative Sampling and Prompt Learning
Published: 2023-08-09
Issue: 8
Volume: 14
Page: 450
ISSN: 2078-2489
Container-title: Information
Language: en
Short-container-title: Information
Author:
Duan Li 1, Wang Jing 1, Luo Bing 1, Sun Qiao 1,2
Affiliation:
1. College of Electronic Engineering, Naval University of Engineering, Wuhan 430033, China
2. College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
Abstract
Knowledge graphs (KGs) are a crucial resource for numerous artificial intelligence tasks and have contributed significantly to the advancement of the field. However, the incompleteness of existing KGs limits their effectiveness in practical applications, which has motivated the task of KG completion. Embedding-based techniques currently dominate the field, as they exploit the structural information within KGs to infer missing facts. Nonetheless, these methods have limitations: they depend on the quality and quantity of the available structural information and cannot handle entities that are absent from the original KG. To overcome these challenges, researchers have integrated pretrained language models and textual data into KG completion, using the definition statements and descriptive text of entities to capture latent connections that are difficult for traditional methods to obtain. However, text-based methods still lag behind embedding-based models in performance. Our analysis reveals that the critical issue lies in how negative samples are selected. To enhance the performance of text-based methods, this study employs several differentiated negative sampling strategies. We introduce prompt learning to bridge the gap between the pretrained language model and the KG completion task and to improve the model's reasoning ability. In addition, a ranking strategy based on KG structural information is proposed so that structured data can assist inference. Experimental results demonstrate that our model is highly competitive and offers outstanding inference speed. By fully exploiting the internal structural information of KGs together with external descriptive text, we improve KG completion performance across various metrics.
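The abstract gives only a high-level view of the approach. The sketch below is a minimal, hypothetical illustration of the general idea behind text-based KG completion with prompts and negative sampling: a triple is verbalized through a prompt template, negative triples are generated by corrupting the tail entity, and candidates are ranked by a plausibility score. The toy KG, the prompt template, and the toy_score stand-in (which a real system would replace with a pretrained language model) are assumptions for illustration, not details taken from the paper.

# Hypothetical sketch: prompt-based triple scoring with tail-corruption negative sampling.
# All entity/relation names, the prompt template, and the toy scorer are illustrative
# assumptions; the paper's actual model scores prompts with a pretrained language model.
import random

# Tiny toy KG of (head, relation, tail) triples.
triples = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("Tokyo", "capital_of", "Japan"),
]
entities = sorted({e for h, _, t in triples for e in (h, t)})

def to_prompt(head: str, relation: str, tail: str) -> str:
    """Verbalize a triple as a natural-language prompt (template is an assumption)."""
    return f"{head} is the {relation.replace('_', ' ')} {tail}."

def toy_score(prompt: str) -> float:
    """Placeholder plausibility score; a real system would query a language model here."""
    return 1.0 if any(to_prompt(*t) == prompt for t in triples) else random.random() * 0.5

def sample_negatives(h: str, r: str, t: str, k: int = 2) -> list[tuple[str, str, str]]:
    """Corrupt the tail with other entities to build negative triples."""
    candidates = [e for e in entities if e != t]
    return [(h, r, e) for e in random.sample(candidates, k)]

# Rank a positive triple against its sampled negatives.
pos = triples[0]
scored = [(toy_score(to_prompt(*pos)), pos)] + [
    (toy_score(to_prompt(*neg)), neg) for neg in sample_negatives(*pos)
]
for score, triple in sorted(scored, reverse=True):
    print(f"{score:.3f}  {triple}")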
Subject
Information Systems
References: 42 articles