Abstract
Text generation is a key tool in natural language applications. Generating text that expresses rich ideas across several sentences requires a structured representation of its content. Many works adopt graph-based methods for graph-to-text generation, such as knowledge-graph-to-text generation. However, generating text from a knowledge graph still faces problems such as repetition and under-utilization of entity information in the generated text. In this paper, we focus on knowledge-graph-to-text generation and develop a multi-level entity fusion representation (MEFR) model to address these problems, aiming to generate high-quality text from a knowledge graph. Our model introduces a fusion mechanism that aggregates node representations at the word level and the phrase level to obtain rich entity representations of the knowledge graph. A Graph Transformer is then adopted to encode the graph and output contextualized node representations. In addition, we develop a comparison mechanism based on vanilla beam search during decoding, which further considers similarity to reduce repetitive information in the generated text. Experimental results show that the proposed MEFR model effectively improves generation performance and outperforms other baselines on the AGENDA and WebNLG datasets. The results also demonstrate the importance of further exploring the information contained in knowledge graphs.
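The abstract does not specify how the word-level and phrase-level views of an entity are combined. As a rough illustration only, the following PyTorch sketch shows one plausible gated fusion of a word-level (averaged token embeddings) and a phrase-level entity representation; the class name MultiLevelEntityFusion, the gating scheme, and all dimensions are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MultiLevelEntityFusion(nn.Module):
    """Hypothetical sketch: fuse word-level and phrase-level entity representations.

    Word level: mean of the embeddings of the tokens inside an entity mention.
    Phrase level: a single embedding for the entity treated as one phrase.
    A learned gate mixes the two views into one node representation.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, word_embs: torch.Tensor, phrase_emb: torch.Tensor) -> torch.Tensor:
        # word_embs: (num_tokens, hidden_dim) embeddings of the entity's tokens
        # phrase_emb: (hidden_dim,) embedding of the whole entity phrase
        word_level = word_embs.mean(dim=0)                    # aggregate the token view
        g = torch.sigmoid(self.gate(torch.cat([word_level, phrase_emb])))
        return g * word_level + (1 - g) * phrase_emb          # gated fusion of the two views


# Toy usage: a 3-token entity with 8-dimensional embeddings.
fusion = MultiLevelEntityFusion(hidden_dim=8)
node_repr = fusion(torch.randn(3, 8), torch.randn(8))
print(node_repr.shape)  # torch.Size([8])
```

The fused node representations would then be fed to the Graph Transformer encoder described in the abstract.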
Funder
National Natural Science Foundation of China
MOE (Ministry of Education in China) Project of Humanities and Social Sciences
Publisher
Springer Science and Business Media LLC
Subject
Computational Mathematics, Engineering (miscellaneous), Information Systems, Artificial Intelligence
Cited by
2 articles.