ArMT-TNN: Enhancing natural language understanding performance through hard parameter multitask learning in Arabic

Authors:

Alkhathlan Ali1, Alomar Khalid2

Affiliations:

1. Computer Science Department, King Abdulaziz University, Jeddah, Saudi Arabia

2. Information Systems Department, King Abdulaziz University, Jeddah, Saudi Arabia

Abstract

Multitask learning (MTL) is a machine learning paradigm in which a single model is trained to perform several tasks simultaneously. Despite the considerable amount of research on MTL, most of it has centered on the English language, while other languages, such as Arabic, have received far less attention. Most existing Arabic NLP techniques rely on single-task learning or on multitask learning that shares only a small number of tasks, typically two or three. To address this gap, we present ArMT-TNN, an Arabic Multitask Learning model built on a Transformer Neural Network and designed for Arabic natural language understanding (ANLU) tasks. Our approach shares learned information across eight ANLU tasks, allowing a single model to solve all of them. We achieve this by fine-tuning all tasks simultaneously on top of pre-trained bidirectional Transformer language models, such as BERT, that are specifically designed for Arabic. Additionally, we explore the effectiveness of several Arabic language models (LMs) pre-trained on different types of Arabic text, such as Modern Standard Arabic (MSA) and Arabic dialects. Our approach outperformed all current models on four test sets within the ALUE benchmark, namely MQ2Q, OOLD, SVREG, and SEC, by margins of 3.9%, 3.8%, 10.1%, and 3.7%, respectively. However, it did not perform as well on the remaining tasks, owing to negative transfer of knowledge between tasks; this finding highlights the importance of carefully selecting tasks when constructing a benchmark. Our experiments also show that LMs pretrained on text types that differ from the text type of the fine-tuned tasks can still perform well.
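The core of the setup the abstract describes is hard parameter sharing: one transformer encoder whose weights are updated by every task, plus a lightweight task-specific head per task. The sketch below illustrates that pattern in PyTorch with the Hugging Face transformers library. It is a minimal sketch, not the authors' implementation: the encoder checkpoint (aubmindlab/bert-base-arabertv02), the task names, and the label counts are illustrative assumptions.

    # Minimal sketch of hard parameter sharing for multitask learning:
    # one shared encoder, one small head per task. Checkpoint name,
    # task names, and label counts are illustrative assumptions.
    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    class HardSharingMTL(nn.Module):
        def __init__(self, encoder_name, task_num_labels):
            super().__init__()
            # All tasks share the same transformer parameters.
            self.encoder = AutoModel.from_pretrained(encoder_name)
            hidden = self.encoder.config.hidden_size
            # Each task gets its own classification head.
            self.heads = nn.ModuleDict(
                {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()}
            )

        def forward(self, task, input_ids, attention_mask):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]  # [CLS] representation
            return self.heads[task](cls)

    if __name__ == "__main__":
        tasks = {"MQ2Q": 2, "OOLD": 2, "SEC": 11}  # illustrative label counts
        name = "aubmindlab/bert-base-arabertv02"   # assumed Arabic checkpoint
        tok = AutoTokenizer.from_pretrained(name)
        model = HardSharingMTL(name, tasks)
        opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
        # One joint fine-tuning step on a single-example MQ2Q batch:
        # the loss updates both the shared encoder and the MQ2Q head.
        batch = tok(["مثال"], return_tensors="pt", padding=True)
        logits = model("MQ2Q", batch["input_ids"], batch["attention_mask"])
        loss = nn.functional.cross_entropy(logits, torch.tensor([1]))
        loss.backward()
        opt.step()

In joint fine-tuning of this kind, each training step draws a batch from one task, so gradients from all tasks flow through the shared encoder over an epoch; negative transfer, as observed on the remaining ALUE tasks, arises when those gradients pull the shared weights in conflicting directions.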

Publisher

IOS Press

