Improving ROUGE‐1 by 6%: A novel multilingual transformer for abstractive news summarization

Authors:

Sandeep Kumar¹, Arun Solanki¹

Affiliation:

1. Department of Computer Science and Engineering, SoICT, Gautam Buddha University, Greater Noida, India

Abstract

Natural language processing (NLP) has undergone a significant transformation, evolving from manually crafted rules to powerful deep learning techniques such as transformers. These advances have revolutionized domains including summarization and question answering. Statistical models such as hidden Markov models (HMMs) and supervised learning laid the foundation for this progress, and recent breakthroughs in transfer learning and the emergence of large-scale models such as BERT and GPT have pushed the boundaries of NLP research further. However, news summarization remains a challenging NLP task, often producing factual inaccuracies or losing the essence of the article. In this study, we propose a novel approach to news summarization utilizing a Transformer architecture fine-tuned from Google's mT5-small pre-trained model and its tokenizer. Our model demonstrates significant performance improvements over previous methods on the Inshorts English News dataset, achieving a 6% gain in ROUGE-1 score and a 50% reduction in training loss. This facilitates the generation of reliable and concise news summaries, enhancing information accessibility and user experience. Additionally, we conduct a comprehensive evaluation of the model using standard ROUGE metrics, with the proposed model achieving ROUGE-1: 54.6130, ROUGE-2: 31.1543, ROUGE-L: 50.7709, and ROUGE-LSum: 50.7907. Furthermore, we observe a substantial reduction in both training and validation losses, underscoring the effectiveness of the proposed approach.
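For readers who want to reproduce the general shape of this pipeline, the following is a minimal sketch using the Hugging Face transformers, datasets, and evaluate libraries. The abstract does not disclose the exact checkpoint, hyperparameters, or dataset layout, so the google/mt5-small checkpoint, the CSV file name, the news/headline column names, and all training settings below are illustrative assumptions, not the authors' published configuration.

```python
# Illustrative sketch only: checkpoint, column names, and hyperparameters
# are assumptions; the paper's exact settings are not given in the abstract.
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "google/mt5-small"  # assumed multilingual T5 checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Assumed CSV layout for Inshorts-style data: a full-article column
# ("news") and a reference-summary column ("headline").
raw = load_dataset("csv", data_files="inshorts_news.csv")["train"]
splits = raw.train_test_split(test_size=0.1, seed=42)

def preprocess(batch):
    # Tokenize articles as encoder inputs and summaries as decoder targets.
    model_inputs = tokenizer(batch["news"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["headline"], max_length=64, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = splits.map(
    preprocess, batched=True, remove_columns=splits["train"].column_names
)

rouge = evaluate.load("rouge")

def compute_metrics(eval_pred):
    # Decode generated ids and references, then score ROUGE-1/2/L/Lsum.
    preds, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_refs = tokenizer.batch_decode(labels, skip_special_tokens=True)
    return rouge.compute(predictions=decoded_preds, references=decoded_refs)

args = Seq2SeqTrainingArguments(
    output_dir="mt5-inshorts",
    per_device_train_batch_size=8,
    num_train_epochs=3,            # assumed; not reported in the abstract
    learning_rate=5e-5,            # assumed
    evaluation_strategy="epoch",
    predict_with_generate=True,    # ROUGE must score generated text, not logits
    generation_max_length=64,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    compute_metrics=compute_metrics,
)

trainer.train()
print(trainer.evaluate())  # reports rouge1 / rouge2 / rougeL / rougeLsum
```

With predict_with_generate enabled, the trainer decodes full summaries at each evaluation pass, so the reported rouge1, rouge2, rougeL, and rougeLsum values are directly comparable in kind to the ROUGE scores quoted in the abstract.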

Publisher

Wiley

