Abstract
Crowdsourcing is often used to gather annotated data for training and evaluating computational systems that attempt to solve cognitive problems, such as understanding natural language sentences. Crowd workers are asked to perform semantic interpretation of sentences to establish a ground truth. This has traditionally been done under the assumption that each task unit, e.g. each sentence, has a single correct interpretation that is contained in the ground truth. We have countered this assumption with CrowdTruth, and have shown that this disagreement-aware approach is better suited to tasks for which semantic interpretation is subjective. In this paper we investigate how worker metrics for detecting spam depend on the quality of the sentences in the dataset and on the quality of the target semantics. We show that worker quality metrics can improve significantly when the quality of these other aspects of semantic interpretation is taken into account.
Publisher
Human Computation Institute
Cited by
20 articles.