Multi-Hop Question Generation with Knowledge Graph-Enhanced Language Model
Published: 2023-05-07
Volume: 13
Issue: 9
Page: 5765
ISSN: 2076-3417
Container-title: Applied Sciences
Short-container-title: Applied Sciences
Language: en
Author:
Li Zhenping 1,2 (ORCID), Cao Zhen 3, Li Pengfei 3, Zhong Yong 1,2, Li Shaobo 1,4 (ORCID)
Affiliation:
1. Chengdu Institute of Computer Applications, Chinese Academy of Sciences, Chengdu 610041, China
2. School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, China
3. School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
4. Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang 550025, China
Abstract
The task of multi-hop question generation (QG) seeks to generate questions that require a complex reasoning process spanning multiple sentences and answers. Beyond the conventional challenges of what to ask and how to ask, multi-hop QG requires sophisticated reasoning over evidence dispersed across multiple sentences. To address these challenges, a knowledge graph-enhanced language model (KGEL) has been developed to imitate human reasoning for multi-hop questions. The initial step in KGEL encodes the input sentence with a pre-trained GPT-2 language model to obtain a comprehensive semantic context representation. Next, a knowledge graph is constructed from the entities identified within the context. The information in the graph most relevant to the answer is then used to update the context representations through an answer-aware graph attention network (GAT). Finally, a multi-head attention generation module (MHAG) operates over the updated latent representations of the context to generate coherent questions. Human evaluations demonstrate that KGEL generates more logical and fluent multi-hop questions than GPT-2. Furthermore, KGEL outperforms five prominent baselines in automatic evaluations, with a BLEU-4 score 27% higher than that of GPT-2.
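The abstract does not give the paper's exact equations, but the answer-aware GAT step it describes can be sketched as a single GAT-style attention head whose scores are biased toward answer entities. The following numpy sketch is illustrative only: the function names, the additive answer bias, and the LeakyReLU/ReLU choices are assumptions, not the authors' formulation.

```python
import numpy as np

def masked_softmax(x, mask):
    # Softmax over the last axis, restricted to entries where mask is True.
    x = np.where(mask, x, -1e9)
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    e = np.where(mask, e, 0.0)
    return e / e.sum(axis=-1, keepdims=True)

def answer_aware_gat(H, adj, answer_mask, W, a_src, a_dst, leak=0.2):
    """One attention head of an answer-aware graph attention update (sketch).

    H:           (n, d_in)  entity-node features from the encoder
    adj:         (n, n)     boolean adjacency, self-loops included
    answer_mask: (n,)       boolean flags marking answer entities
    W, a_src, a_dst:        learned projection / attention parameters
    """
    Z = H @ W                                              # project node features
    scores = Z @ a_src[:, None] + (Z @ a_dst[:, None]).T   # additive attention e_ij
    scores = np.where(scores > 0, scores, leak * scores)   # LeakyReLU, as in GAT
    scores = scores + answer_mask[None, :].astype(float)   # bias toward answer nodes (assumed form)
    alpha = masked_softmax(scores, adj)                    # normalize over each node's neighbors
    return np.maximum(alpha @ Z, 0.0)                      # aggregate neighbors, ReLU
```

In KGEL these updated node representations would then feed back into the context representation before the generation module; that fusion step is not specified in the abstract and is omitted here.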
Funder
AI industrial technology innovation platform of Sichuan Province
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by: 1 article.