Affiliations:
1. Tianjin University, China
2. Harbin Institute of Technology (Shenzhen), China
3. Microsoft Research Asia, China
Abstract
RDF verbalization, which aims to generate natural language descriptions of a knowledge base, has received increasing interest. Transformer-based sequence-to-sequence models obtain strong performance on this task when equipped with pre-trained language models such as BART and T5. However, despite the general gains from pre-training, performance remains limited by the small scale of the training data. To address this problem, we propose two orthogonal strategies to enhance the representation learning of RDF triples, introducing two types of knowledge: descriptive knowledge and relational knowledge. Descriptive knowledge captures the semantic information of an entity's own definition, while relational knowledge captures the semantic information learned from its structural context. We further combine the two types of knowledge to enhance representation learning. Experimental results on the WebNLG and SemEval-2010 datasets show that both types of knowledge improve model performance, and that their combination obtains further improvements in most cases, yielding new state-of-the-art results.
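As a rough illustration of how the two knowledge sources described in the abstract might be fused into a triple representation, the sketch below combines a description-derived vector (e.g., produced by a pre-trained text encoder) with a structure-derived vector (e.g., from a TransE-style graph embedding) via a gated residual layer. This is a minimal sketch of one plausible fusion scheme, not the paper's actual architecture; all class, function, and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn


class KnowledgeEnhancedEmbedding(nn.Module):
    """Hypothetical fusion of descriptive and relational knowledge.

    `desc_emb` is assumed to come from encoding an entity's textual
    definition with a pre-trained encoder; `rel_emb` from a knowledge
    graph embedding model trained on the graph's structural context.
    """

    def __init__(self, hidden_size: int, desc_size: int, rel_size: int):
        super().__init__()
        # Project both knowledge sources into the model's hidden space.
        self.desc_proj = nn.Linear(desc_size, hidden_size)
        self.rel_proj = nn.Linear(rel_size, hidden_size)
        # Gate over the concatenation of token, descriptive, and
        # relational representations.
        self.gate = nn.Linear(3 * hidden_size, hidden_size)

    def forward(self, token_emb, desc_emb, rel_emb):
        d = self.desc_proj(desc_emb)
        r = self.rel_proj(rel_emb)
        # Gated fusion lets the model weigh each knowledge source
        # per token; the residual keeps the original input signal.
        fused = torch.tanh(self.gate(torch.cat([token_emb, d, r], dim=-1)))
        return token_emb + fused


if __name__ == "__main__":
    batch, seq_len, hidden = 2, 8, 768
    layer = KnowledgeEnhancedEmbedding(hidden, desc_size=768, rel_size=200)
    tokens = torch.randn(batch, seq_len, hidden)   # seq2seq input embeddings
    desc = torch.randn(batch, seq_len, 768)        # description-derived vectors
    rel = torch.randn(batch, seq_len, 200)         # structure-derived vectors
    print(layer(tokens, desc, rel).shape)          # torch.Size([2, 8, 768])
```

A gated residual is used here only because it lets the two knowledge sources be added without overwriting the pre-trained token embeddings; other fusion choices (concatenation, attention over knowledge vectors) would fit the same interface.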
Funder
National Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)
Cited by: 2 articles.