Authors:
Chu Zhendong, Ma Jing, Wang Hongning
Abstract
Crowdsourcing provides a practical way to obtain large amounts of labeled data at low cost. However, annotation quality varies considerably across annotators, which poses new challenges for learning a high-quality model from crowdsourced annotations. In this work, we provide a new perspective that decomposes annotation noise into common noise and individual noise, and differentiates the source of confusion based on instance difficulty and annotator expertise on a per-instance, per-annotator basis. We realize this new crowdsourcing model with an end-to-end learning solution built on two types of noise adaptation layers: one is shared across annotators to capture their commonly shared confusions, and the other is specific to each annotator to capture individual confusions. To recognize the source of noise in each annotation, we use an auxiliary network to choose between the two noise adaptation layers, conditioned on both the instance and the annotator. Extensive experiments on both synthesized and real-world benchmarks demonstrate the effectiveness of our proposed common noise adaptation solution.
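As a concrete illustration of the architecture the abstract describes, below is a minimal PyTorch-style sketch of a classifier with a shared ("common") noise adaptation layer, per-annotator adaptation layers, and an auxiliary gating network that decides, per instance-annotator pair, which noise source explains an annotation. All module names, dimensions, and the similarity-based gate are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommonNoiseModel(nn.Module):
    """Sketch: backbone classifier + shared and per-annotator noise
    adaptation layers, mixed by an auxiliary gating network.
    Names and dimensions are hypothetical, not from the paper's code."""

    def __init__(self, feat_dim, num_classes, num_annotators, emb_dim=20):
        super().__init__()
        self.backbone = nn.Linear(feat_dim, num_classes)  # stand-in classifier
        # Shared confusion matrix (common noise), parameterized in logit space
        # and initialized near the identity (mostly correct annotations).
        self.common_t = nn.Parameter(torch.eye(num_classes) * 2.0)
        # One confusion matrix per annotator (individual noise).
        self.annot_t = nn.Parameter(
            torch.eye(num_classes).repeat(num_annotators, 1, 1) * 2.0)
        # Auxiliary network: embeds instance and annotator; their similarity
        # gates which noise source explains the observed annotation.
        self.inst_emb = nn.Linear(feat_dim, emb_dim)
        self.annot_emb = nn.Embedding(num_annotators, emb_dim)

    def forward(self, x, annotator_ids):
        # Posterior over the (latent) true label from the backbone.
        p = F.softmax(self.backbone(x), dim=-1)                   # (B, C)
        # Row-stochastic transition matrices.
        t_common = F.softmax(self.common_t, dim=-1)               # (C, C)
        t_indiv = F.softmax(self.annot_t[annotator_ids], dim=-1)  # (B, C, C)
        # Gate in [0, 1]: high value attributes the noise to the common source.
        u = F.normalize(self.inst_emb(x), dim=-1)
        v = F.normalize(self.annot_emb(annotator_ids), dim=-1)
        w = torch.sigmoid((u * v).sum(-1, keepdim=True)).unsqueeze(-1)  # (B,1,1)
        t = w * t_common + (1.0 - w) * t_indiv                    # (B, C, C)
        # Distribution over the noisy label this annotator would produce.
        return torch.bmm(p.unsqueeze(1), t).squeeze(1)            # (B, C)

# Hypothetical usage: 8 instances, 10-dim features, 4 classes, 5 annotators.
model = CommonNoiseModel(feat_dim=10, num_classes=4, num_annotators=5)
noisy_pred = model(torch.randn(8, 10), torch.randint(0, 5, (8,)))
loss = F.nll_loss(torch.log(noisy_pred + 1e-8), torch.randint(0, 4, (8,)))
```

Training would then minimize this cross-entropy between the predicted noisy-label distribution and each observed annotation, so the backbone, both sets of transition matrices, and the gate are learned jointly end to end.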
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
15 articles.
1. Label Selection Approach to Learning from Crowds;Transactions of the Japanese Society for Artificial Intelligence;2024-09-01
2. Learning from Crowds with Crowd-Kit;Journal of Open Source Software;2024-04-06
3. Learning from Multiple Noisy Annotations via Trustable Data Mixture;Lecture Notes in Computer Science;2024
4. Label Selection Approach to Learning from Crowds;Communications in Computer and Information Science;2023-11-26
5. From Labels to Decisions: A Mapping-Aware Annotator Model;Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining;2023-08-04