A Human-Centered Framework for Ensuring Reliability on Crowdsourced Labeling Tasks
Published: 2013-11-03
Volume: 1
Pages: 2-3
ISSN: 2769-1349
Container-title: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing
Short-container-title: HCOMP
Authors: Omar Alonso, Catherine Marshall, Marc Najork
Abstract
This paper describes an approach to improving the reliability of a crowdsourced labeling task for which there is no objective right answer. Our approach focuses on three contingent elements of the labeling task: data quality, worker reliability, and task design. We describe how we developed and applied this framework to the task of labeling tweets according to their interestingness. We use in-task CAPTCHAs to identify unreliable workers, and measure inter-rater agreement to decide whether subtasks have objective or merely subjective answers.
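As a hedged illustration of the agreement check the abstract describes: the paper does not name the statistic it uses, so the sketch below computes Fleiss' kappa over worker labels as one plausible choice. High agreement on a subtask suggests it has an objective answer; low agreement suggests it is subjective. The function name and the example data are hypothetical.

```python
def fleiss_kappa(label_matrix):
    """Fleiss' kappa for items each labeled by the same number of
    workers. label_matrix[i][j] = count of workers who assigned
    item i to category j."""
    N = len(label_matrix)         # number of items
    n = sum(label_matrix[0])      # workers per item
    k = len(label_matrix[0])      # number of label categories

    # Per-item agreement: fraction of worker pairs that agree.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1))
           for row in label_matrix]
    P_bar = sum(P_i) / N

    # Chance agreement from the marginal category proportions.
    p_j = [sum(row[j] for row in label_matrix) / (N * n)
           for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Hypothetical data: 4 tweets, 5 workers each, binary
# "interesting" / "not interesting" judgments.
ratings = [
    [5, 0],   # unanimous: looks objective
    [4, 1],
    [3, 2],   # split vote: likely a subjective subtask
    [1, 4],
]
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")
```

In practice one would compute such a statistic per subtask and treat low-kappa subtasks as subjective rather than discarding the disagreeing workers.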
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
1 article.