Fact-Aware Generative Text Summarization with Dependency Graphs
Published: 2024-08-15
Volume: 13
Issue: 16
Page: 3230
ISSN: 2079-9292
Container-title: Electronics
Language: en
Author:
Chen Ruoyu 1,2, Li Yan 2, Jiang Yuru 1,2, Sun Bochen 2, Wang Jingqi 2, Li Zhen 3
Affiliation:
1. Institute of Intelligent Information Processing, Beijing Information Science and Technology University, Beijing 100192, China
2. Computer School, Beijing Information Science and Technology University, Beijing 100192, China
3. School of Computer Science & Technology, Beijing Institute of Technology, Beijing 100081, China
Abstract
Generative text summaries often suffer from factual inconsistencies, where the summary deviates from the source text, which significantly reduces their usefulness. To address this issue, we propose a novel method for improving the factual consistency of Chinese summaries by leveraging dependency graphs. Our approach first parses the input text to build a dependency graph. The graph and the original text are then encoded by separate models: a Relational Graph Attention Network encodes the dependency graph, while a Transformer encodes the text itself. A Transformer decoder then generates the summary from both representations. We evaluate the factual consistency of the generated summaries using various methods. On the Chinese LCSTS dataset, experiments demonstrate that our approach improves on the baseline Transformer model by about 7.79 points in ROUGE-1 and by 4.48 points under the StructBERT factual consistency assessment model.
Funder
Beijing Natural Science Foundation; Beijing Information Science and Technology University