Abstract
Multilingual Transformer 5 (mT5) is a versatile natural language processing (NLP) architecture that performs well across many languages. This study aimed to improve the performance of the mT5 model on two key Arabic tasks: topic classification and headline generation. The datasets contained 183K and 294K samples, respectively. The classification task involved categorizing news articles, while the headline-generation task aimed to produce coherent and contextually relevant Arabic news headlines. Through careful fine-tuning and rigorous evaluation, the mT5 model substantially improved its ability to address complex challenges in Arabic NLP, and this study offers practical insights into real-world Arabic news processing. The model's performance was evaluated on several online platforms. The mT5-small model achieved an accuracy of 0.7858 and an F1 score of 0.7858, while the mT5-base model achieved an accuracy of 0.8230 and an F1 score of 0.8230. The generative headline approach was assessed with ROUGE-1, ROUGE-2, and ROUGE-L scores. These outcomes demonstrate the effectiveness of the fine-tuned mT5 model across various evaluation metrics and tasks, confirming its potential for practical applications in Arabic NLP.
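As background for the metrics cited above, ROUGE-L scores a generated headline against a reference by the length of their longest common subsequence (LCS). The sketch below is a minimal, illustrative implementation over whitespace tokens; it is not the authors' evaluation code, and published results typically use a standard ROUGE package with language-appropriate tokenization.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]  # dp[i][j] = LCS of a[:i], b[:j]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def rouge_l(reference, candidate):
    """ROUGE-L F1 between a reference and a candidate headline.

    Tokenization here is plain whitespace splitting, a simplifying
    assumption; real Arabic evaluation would normalize and tokenize first.
    """
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

ROUGE-1 and ROUGE-2 follow the same precision/recall pattern but count overlapping unigrams and bigrams instead of the LCS.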
Publisher
Research Square Platform LLC