Investigating cross-lingual training for offensive language detection

Authors:

Andraž Pelicon1,2, Ravi Shekhar3, Blaž Škrlj1,2, Matthew Purver1,3 (ORCID), Senja Pollak1

Affiliations:

1. Jožef Stefan Institute, Ljubljana, Slovenia

2. Jožef Stefan International Postgraduate School, Ljubljana, Slovenia

3. Queen Mary University of London, London, United Kingdom

Abstract

Platforms that feature user-generated content (social media, online forums, newspaper comment sections etc.) have to detect and filter offensive speech within large, fast-changing datasets. While many automatic methods have been proposed and achieve good accuracies, most of these focus on the English language, and are hard to apply directly to languages in which few labeled datasets exist. Recent work has therefore investigated the use of cross-lingual transfer learning to solve this problem, training a model in a well-resourced language and transferring to a less-resourced target language; but performance has so far been significantly less impressive. In this paper, we investigate the reasons for this performance drop, via a systematic comparison of pre-trained models and intermediate training regimes on five different languages. We show that using a better pre-trained language model results in a large gain in overall performance and in zero-shot transfer, and that intermediate training on other languages is effective when little target-language data is available. We then use multiple analyses of classifier confidence and language model vocabulary to shed light on exactly where these gains come from and gain insight into the sources of the most typical mistakes.

Funder

European Union’s Horizon

European Union’s Rights, Equality and Citizenship Program

EPSRC

Slovenian Research Agency

Publisher

PeerJ

Subject

General Computer Science

References (78 articles; first 5 shown)

1. Artetxe (2019). Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics.

2. Bai (2018). RuG @ EVALITA 2018: hate speech detection in Italian social media.

3. Basile (2018). CrotoneMilano for AMI at Evalita2018: a performant, cross-lingual misogyny detection system.

4. Basile (2019). SemEval-2019 task 5: multilingual detection of hate speech against immigrants and women in Twitter.

5. Beyrer (2017). Ethnic cleansing in Myanmar: the Rohingya crisis and human rights. The Lancet.

Cited by 4 articles

1. A survey on multi-lingual offensive language detection. PeerJ Computer Science, 2024-03-29.

2. A Comprehensive Review on Transformers Models for Text Classification. 2023 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC), 2023-09-27.

3. Investigating toxicity changes of cross-community redditors from 2 billion posts and comments. PeerJ Computer Science, 2022-08-18.

4. BERT Models for Arabic Text Classification: A Systematic Review. Applied Sciences, 2022-06-04.
