In Search of Ambiguity: A Three-Stage Workflow Design to Clarify Annotation Guidelines for Crowd Workers

Authors

Vivek Krishna Pradhan, Mike Schaekermann, Matthew Lease

Abstract

We propose a novel three-stage FIND-RESOLVE-LABEL workflow for crowdsourced annotation to reduce ambiguity in task instructions and, thus, improve annotation quality. Stage 1 (FIND) asks the crowd to find examples whose correct label seems ambiguous given task instructions. Workers are also asked to provide a short tag that describes the ambiguous concept embodied by the specific instance found. We compare collaborative vs. non-collaborative designs for this stage. In Stage 2 (RESOLVE), the requester selects one or more of these ambiguous examples to label (resolving ambiguity). The new label(s) are automatically injected back into task instructions in order to improve clarity. Finally, in Stage 3 (LABEL), workers perform the actual annotation using the revised guidelines with clarifying examples. We compare three designs using these examples: examples only, tags only, or both. We report image labeling experiments over six task designs using Amazon's Mechanical Turk. Results show improved annotation accuracy and further insights regarding effective design for crowdsourced annotation tasks.
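To make the three-stage pipeline described in the abstract concrete, here is a minimal sketch of how FIND-RESOLVE-LABEL could be orchestrated in code. Everything in it is an illustrative assumption, not the authors' implementation: the function names (find_stage, resolve_stage, inject_examples, label_stage), the AmbiguousExample record, and the stubbed worker responses are all hypothetical, standing in for tasks that the paper actually ran as crowdsourcing jobs on Amazon Mechanical Turk.

```python
# Hypothetical sketch of the FIND-RESOLVE-LABEL workflow from the abstract.
# All names and stubbed responses are illustrative assumptions; the paper's
# real stages are crowdsourcing tasks, not local function calls.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AmbiguousExample:
    item_id: str
    tag: str                      # short worker-provided tag for the ambiguous concept
    label: Optional[str] = None   # filled in by the requester during RESOLVE

def find_stage(items, instructions):
    """Stage 1 (FIND): workers flag items whose correct label seems ambiguous
    under the current instructions and tag the ambiguous concept.
    Stubbed here with a fixed result in place of real crowd responses."""
    return [AmbiguousExample(item_id=i, tag="borderline case") for i in items[:2]]

def resolve_stage(candidates, resolver):
    """Stage 2 (RESOLVE): the requester labels selected ambiguous examples,
    turning them into clarifying examples for the guidelines."""
    for ex in candidates:
        ex.label = resolver(ex)
    return candidates

def inject_examples(instructions, resolved, mode="both"):
    """Inject resolved examples back into the instructions. `mode` mirrors
    the three Stage-3 designs compared in the paper: examples only,
    tags only, or both."""
    lines = [instructions, "Clarifying examples:"]
    for ex in resolved:
        if mode == "examples":
            lines.append(f"- item {ex.item_id}: label = {ex.label}")
        elif mode == "tags":
            lines.append(f"- ambiguous concept: '{ex.tag}'")
        else:  # both
            lines.append(f"- item {ex.item_id} ('{ex.tag}'): label = {ex.label}")
    return "\n".join(lines)

def label_stage(items, revised_instructions):
    """Stage 3 (LABEL): workers annotate the full dataset using the revised
    guidelines. Stubbed with a trivial constant labeler."""
    return {i: "positive" for i in items}

if __name__ == "__main__":
    items = ["img_01", "img_02", "img_03", "img_04"]
    instructions = "Label each image 'positive' if it contains a dog."
    candidates = find_stage(items, instructions)
    resolved = resolve_stage(candidates, resolver=lambda ex: "positive")
    revised = inject_examples(instructions, resolved, mode="both")
    print(revised)
    print(label_stage(items, revised))
```

The key design point the sketch tries to capture is that Stage 2 output feeds mechanically back into Stage 3 input: resolved examples are appended to the task instructions before any large-scale labeling happens, so clarification cost is paid once per guideline revision rather than per annotation.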

Funder

Micron Foundation

University of Texas at Austin

Publisher

Frontiers Media SA

Subject

Artificial Intelligence


Cited by 1 article:

1. A Large Language Model Approach to Educational Survey Feedback Analysis. International Journal of Artificial Intelligence in Education, 2024-06-25.
