Affiliation:
1. National University of Sciences and Technology, Islamabad, Pakistan
2. National University of Sciences and Technology, Islamabad, Pakistan
Abstract
With the advent of Deep Learning-based Artificial Neural Network models, Natural Language Processing (NLP) has witnessed significant improvements in the efficiency and accuracy of textual data processing. However, research has largely been restricted to high-resource languages such as English, while low-resource languages still suffer from a scarcity of training datasets and of models with even baseline evaluation results. Considering this limited availability of resources, we propose a methodology for adapting self-attentive transformer-based models (mBERT, mT5) to low-resource summarization, supplemented by the construction of a new baseline dataset of 76.5k article-summary pairs in Urdu, a low-resource language. Choosing news, a publicly available source, as the application domain makes the proposed methodology readily reproducible for other languages with limited resources. Our adapted summarization model urT5, with up to a 44.78% reduction in size compared to mT5, captures the contextual information of a low-resource language effectively, with evaluation scores (up to 46.35 ROUGE-1, 77 BERTScore) on par with state-of-the-art models for the high-resource language English (PEGASUS: 47.21, BART: 45.14 ROUGE-1 on the XSUM dataset). The proposed method provides a baseline approach towards both extractive and abstractive summarization with competitive evaluation results in a limited-resource setup.
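The abstract does not spell out how the 44.78% size reduction is obtained, but in models like mT5 the multilingual vocabulary dominates the parameter count. The sketch below is a minimal illustration of one plausible approach, assuming the reduction comes from trimming the shared embedding matrix (and the untied LM head) to the SentencePiece tokens actually observed in an Urdu corpus; it is not the authors' exact procedure, and `urdu_corpus` is a hypothetical placeholder.

```python
# Hedged sketch: shrink mT5's vocabulary-dependent parameters to tokens seen in Urdu text.
import torch
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

urdu_corpus = ["یہ ایک مثال جملہ ہے۔"]  # placeholder; a real corpus would be streamed from disk

# Keep every special/sentinel token plus every id the tokenizer emits on the corpus.
keep_ids = set(tokenizer.all_special_ids)
for text in urdu_corpus:
    keep_ids.update(tokenizer(text).input_ids)
keep_ids = sorted(keep_ids)

d_model = model.config.d_model

# Slice the shared input embeddings down to the kept rows.
old_embed = model.get_input_embeddings().weight.data
new_embed = torch.nn.Embedding(len(keep_ids), d_model)
new_embed.weight.data = old_embed[keep_ids]
model.set_input_embeddings(new_embed)

# mT5 does not tie its output projection to the input embeddings, so trim it as well.
old_head = model.lm_head.weight.data
new_head = torch.nn.Linear(d_model, len(keep_ids), bias=False)
new_head.weight.data = old_head[keep_ids]
model.lm_head = new_head
model.config.vocab_size = len(keep_ids)

# The tokenizer's id-to-row mapping must also be rebuilt so old ids point to new rows;
# that step is omitted here, and without it the trimmed model cannot be used directly.
print(f"kept {len(keep_ids)} of {old_embed.shape[0]} vocabulary entries")
```

The reported ROUGE-1 and BERTScore figures can, in principle, be computed with the standard `rouge_score` and `bert_score` packages; the snippet below is a sketch of such an evaluation, with the candidate and reference summaries as hypothetical placeholders. Note that `rouge_score`'s default tokenizer keeps only Latin alphanumerics, so a simple whitespace tokenizer is supplied for Urdu script.

```python
# Hedged sketch: scoring a generated Urdu summary with ROUGE-1 and BERTScore.
from rouge_score import rouge_scorer
from bert_score import score as bertscore

reference = "حکومت نے نئی تعلیمی پالیسی کا اعلان کر دیا"   # placeholder reference summary
candidate = "حکومت کی جانب سے نئی تعلیمی پالیسی کا اعلان"  # placeholder model output

class WhitespaceTokenizer:
    """rouge_score's default tokenizer would erase non-Latin characters, so split on whitespace."""
    def tokenize(self, text):
        return text.split()

scorer = rouge_scorer.RougeScorer(["rouge1"], tokenizer=WhitespaceTokenizer())
rouge1 = scorer.score(reference, candidate)["rouge1"].fmeasure

# lang="ur" falls back to a multilingual BERT backbone inside bert_score.
precision, recall, f1 = bertscore([candidate], [reference], lang="ur")

print(f"ROUGE-1 F1: {rouge1:.4f}, BERTScore F1: {f1.mean().item():.4f}")
```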
Publisher
Association for Computing Machinery (ACM)