Affiliation:
1. School of Information and Communication Engineering, Hainan University, Haikou 570100, China
2. Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Abstract
Extractive text summarization selects the most important sentences from a document, preserving their original meaning and producing an objective, fact-based summary. It is faster and less computationally intensive than abstractive summarization. Learning cross-sentence relations is crucial for extractive summarization; however, most current language models process text sequentially, which makes such relations difficult to capture, especially in long documents. This paper proposes an extractive summarization model based on graph neural networks (GNNs) to address this problem. The model represents cross-sentence relations effectively through a graph-structured document representation. In addition to sentence nodes, we introduce two node types of different granularity, words and topics, which contribute semantic information at different levels. Node representations are updated by a graph attention network (GAT), and the final summary is obtained by binary classification of the sentence nodes. Experiments on the CNN/DM and NYT datasets demonstrate the effectiveness of our method: it outperforms baseline models of the same type in ROUGE scores on both datasets, indicating the potential of the proposed model for text summarization tasks.
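To make the described pipeline concrete, the following is a minimal PyTorch sketch of the architecture the abstract outlines: word, sentence, and topic nodes share one graph, a graph attention layer updates all node embeddings, and a binary classifier scores each sentence node. All class names, dimensions, and the dense single-head attention are illustrative assumptions, not the authors' released implementation (node initialization, attention heads, and the topic model are not specified here).

```python
# Hypothetical sketch of a word/sentence/topic graph summarizer;
# not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GATLayer(nn.Module):
    """Single-head graph attention layer in the style of Velickovic et al. (2018)."""

    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)
        self.a = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h, adj):
        # h:   (N, dim) embeddings for all word, sentence, and topic nodes
        # adj: (N, N) binary adjacency (e.g., word-in-sentence and
        #      sentence-topic edges); should include self-loops so no
        #      attention row is fully masked
        Wh = self.W(h)
        N = Wh.size(0)
        # Attention logits e_ij = LeakyReLU(a([Wh_i || Wh_j]))
        src = Wh.unsqueeze(1).expand(N, N, -1)
        dst = Wh.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.a(torch.cat([src, dst], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))  # attend to neighbors only
        alpha = torch.softmax(e, dim=-1)
        return F.elu(alpha @ Wh)  # updated node embeddings


class GraphExtractor(nn.Module):
    def __init__(self, dim=128, layers=2):
        super().__init__()
        self.gat = nn.ModuleList([GATLayer(dim) for _ in range(layers)])
        self.classifier = nn.Linear(dim, 1)  # binary: in summary or not

    def forward(self, node_feats, adj, sent_idx):
        h = node_feats
        for layer in self.gat:
            h = layer(h, adj)
        # Score only the sentence nodes; sigmoid gives selection probability.
        return torch.sigmoid(self.classifier(h[sent_idx])).squeeze(-1)
```

In this sketch the summary is formed by taking the top-scoring sentence nodes; training would use a binary cross-entropy loss against oracle sentence labels, a common setup for extractive models, though the paper's exact objective is not reproduced here.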
Funder
National Natural Science Foundation of China
National Key R&D Program of China
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
Cited by
4 articles.