Efficient crowdsourcing for multi-class labeling

Author:

David R. Karger (1), Sewoong Oh (2), Devavrat Shah (1)

Affiliation:

1. Massachusetts Institute of Technology, Cambridge, MA, USA

2. University of Illinois at Urbana-Champaign, Urbana, IL, USA

Abstract

Crowdsourcing systems like Amazon's Mechanical Turk have emerged as effective large-scale human-powered platforms for performing tasks in domains such as image classification, data entry, recommendation, and proofreading. Since workers are paid little (a few cents per task) and the tasks are monotonous, the answers obtained are noisy and hence unreliable. To obtain reliable estimates, it is essential to combine appropriate inference algorithms (e.g., majority voting) with structured redundancy through task assignment. Our goal is to obtain the best possible trade-off between reliability and redundancy. In this paper, we consider a general probabilistic model of noisy observations in crowdsourcing systems and pose the problem of minimizing the total price (i.e., redundancy) that must be paid to achieve a target overall reliability. Concretely, we show that it is possible to recover the correct answer to each task with probability 1-ε as long as the redundancy per task is O((K/q) log(K/ε)), where each task has one of K distinct answers, all equally likely, and q is a crowd-quality parameter defined through the probabilistic model. Moreover, this is effectively the best redundancy-accuracy trade-off that any system design can achieve. Such a crisp single-parameter characterization of the (order-)optimal trade-off between redundancy and reliability has various useful operational consequences. Further, we analyze the robustness of our approach in the presence of adversarial workers and bound their influence on the redundancy-accuracy trade-off. Unlike recent prior work [GKM11, KOS11, KOS11], our result applies to non-binary (i.e., K > 2) tasks. In effect, we use algorithms for binary tasks (with an inhomogeneous error model, unlike that in [GKM11, KOS11, KOS11]) as a key subroutine to obtain answers for K-ary tasks.
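As a rough illustration of the redundancy bound above, the per-task budget O((K/q) log(K/ε)) can be evaluated numerically; the constant hidden by the O(·) notation is not specified in the abstract, so it is an assumed parameter here:

```python
import math

def redundancy_per_task(K, q, eps, c=1.0):
    """Illustrative per-task redundancy budget c * (K/q) * log(K/eps).

    K   -- number of possible answers per task
    q   -- crowd-quality parameter from the probabilistic model
    eps -- target per-task error probability
    c   -- constant hidden by the O(.) notation (assumed, not from the paper)
    """
    return math.ceil(c * (K / q) * math.log(K / eps))

# e.g. binary tasks (K=2), crowd quality q=0.3, target error 5%:
# redundancy_per_task(2, 0.3, 0.05)
```

Note how the budget grows only logarithmically in 1/ε but linearly in the number of classes K and in 1/q, so a low-quality crowd is far more costly than a stringent accuracy target.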
Technically, the algorithm is based on a low-rank approximation of the weighted adjacency matrix of a random regular bipartite graph, weighted according to the answers provided by the workers.
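A minimal sketch of the low-rank idea for the binary (K = 2) subroutine, not the authors' exact procedure: the sparse assignment graph and the paper's specific update rules are simplified away, and the task-by-worker answer matrix is treated as dense with entries in {+1, -1} (0 where a task was not assigned). Task estimates are read off the sign of the leading left singular vector:

```python
import numpy as np

def spectral_estimates(A):
    """Rank-1 approximation of the task-by-worker answer matrix A.

    A[i, j] in {+1, -1} is worker j's answer on task i (0 if unassigned).
    Returns one {+1, -1} estimate per task, taken from the sign of the
    leading left singular vector of A.
    """
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    u = U[:, 0]
    # The global sign of a singular vector is arbitrary; align it with
    # the plain majority vote so the estimates are interpretable.
    if np.sign(u) @ A.sum(axis=1) < 0:
        u = -u
    return np.where(u >= 0, 1, -1)
```

Intuitively, the leading singular vector weights each worker by how consistent they are with the consensus, so reliable workers contribute more to each task's estimate than majority voting would allow.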

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications, Hardware and Architecture, Software

References (28 articles)

1. Casting Words. http://castingwords.com.

2. Crowd Flower. http://crowdflower.com.

3. Crowd Spring. http://www.crowdspring.com.

4. ESP game. http://www.espgame.org.

5. Soylent. http://projects.csail.mit.edu/soylent/.

Cited by 19 articles.

1. Graph Signal Processing Over a Probability Space of Shift Operators. IEEE Transactions on Signal Processing, 2023.

2. A Survey on Task Assignment in Crowdsourcing. ACM Computing Surveys, 2022-02-03.

3. Key Research Issues and Related Technologies in Crowdsourcing Data Collection. Wireless Communications and Mobile Computing, 2021-10-16.

4. Crowdsourcing: Descriptive Study on Algorithms and Frameworks for Prediction. Archives of Computational Methods in Engineering, 2021-04-04.

5. Privacy-preserving and verifiable online crowdsourcing with worker updates. Information Sciences, 2021-02.
