CoMix: Confronting with Noisy Label Learning with Co-training Strategies on Textual Mislabeling

Author:

Zhao Shu1ORCID,Zhao Zhuoer2ORCID,Xu Yangyang3ORCID,Sun Xiao4ORCID

Affiliation:

1. Anhui University, Hefei, China

2. School of Artificial Intelligence, Anhui University, Hefei, China

3. Institute of Advanced Technology, University of Science and Technology of China, Hefei, China

4. School of Computer and Information Engineering, Hefei University of Technology, Hefei, China and Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China

Abstract

The existence of noisy labels is inevitable in real-world large-scale corpora. Since deep neural networks are notably prone to overfitting noisy samples, the ability of language models to resist label noise is crucial for efficient training. However, little attention has been paid to alleviating the influence of label noise in natural language processing. To address this problem, we present CoMix, a robust noise-resistant training strategy that leverages co-training to handle annotation errors in text classification tasks. In our proposed framework, the original training set is first split into labeled and unlabeled subsets according to a sample partition criterion, and label refurbishment is then applied to the unlabeled subset. We perform textual interpolation in hidden space between samples of the updated subsets. Meanwhile, we train two diverged peer networks simultaneously with co-training strategies to avoid the accumulation of confirmation bias. Experimental results on three popular text classification benchmarks demonstrate the effectiveness of CoMix in strengthening the network's resistance to mislabeling under various noise types and ratios, where it also outperforms state-of-the-art methods.
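The hidden-space interpolation mentioned in the abstract follows the general Mixup/MixText idea: convexly combine the hidden representations of two samples and their label vectors with a Beta-distributed coefficient. The sketch below is only an illustration of that general technique; the function name, layer choice, and hyperparameters are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_hidden(h_a, h_b, y_a, y_b, alpha=0.75):
    """Interpolate two hidden states and their (soft) label vectors.

    lam ~ Beta(alpha, alpha); taking max(lam, 1 - lam) keeps the
    mixed sample dominated by the first input, a common convention.
    """
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1 - lam)
    h = lam * h_a + (1 - lam) * h_b  # mixed hidden representation
    y = lam * y_a + (1 - lam) * y_b  # matching soft label
    return h, y

# toy example: 4-dim hidden states, 3-class one-hot labels
h_a, h_b = np.ones(4), np.zeros(4)
y_a, y_b = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
h, y = mixup_hidden(h_a, h_b, y_a, y_b)
```

Because the same coefficient mixes both the representations and the labels, the soft label stays consistent with how far the mixed point sits between the two originals.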

Funder

National Key R&D Programme of China

Major Project of Anhui Province

General Program of the National Natural Science Foundation of China

University Synergy Innovation Program of Anhui Province

Publisher

Association for Computing Machinery (ACM)

