When Crowdsourcing Fails: A Study of Expertise on Crowdsourced Design Evaluation

Authors:

Alex Burnap (1), Yi Ren (2), Richard Gerth (3), Giannis Papazoglou (4), Richard Gonzalez (5), Panos Y. Papalambros (6)

Affiliations:

1. Design Science, University of Michigan, Ann Arbor, MI 48109

2. Research Fellow, Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109

3. Research Scientist, National Automotive Center, TARDEC-NAC, Warren, MI 48397

4. Department of Mechanical Engineering, Cyprus University of Technology, Limassol 3036, Cyprus

5. Professor, Department of Psychology, University of Michigan, Ann Arbor, MI 48109

6. Professor and Fellow ASME, Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109

Abstract

Crowdsourced evaluation is a promising method for evaluating engineering design attributes that require human input. The challenge is to correctly estimate scores using a large and diverse crowd, particularly when only a small subset of evaluators has the expertise to give correct evaluations. Since naively averaging evaluations across all evaluators can result in an inaccurate crowd evaluation, this paper benchmarks a crowd consensus model that aims to identify experts so that their evaluations may be given more weight. Simulation results indicate this crowd consensus model outperforms averaging when it correctly identifies experts in the crowd, under the assumption that only experts give consistent evaluations. However, empirical results from a real human crowd indicate this assumption may not hold even on a simple engineering design evaluation task, as clusters of consistently wrong evaluators are shown to exist alongside the cluster of experts. This suggests that both averaging evaluations and a crowd consensus model that relies only on evaluations may be inadequate for engineering design tasks, motivating further research into methods for identifying experts within the crowd.
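The abstract contrasts plain averaging with expertise-weighted consensus. The paper's actual model is not reproduced here; the sketch below is only a minimal illustration, under the assumption that evaluator weight is tied to agreement with the current consensus, of why the two can differ. All function and variable names (e.g., weighted_consensus) are illustrative, not from the paper.

```python
# Minimal sketch (NOT the paper's benchmarked model): iterative
# expertise-weighted consensus for continuous design ratings.
# Evaluators whose scores track the current consensus receive higher
# weight; plain averaging is the special case of uniform weights.
import numpy as np

def weighted_consensus(ratings, n_iter=50, eps=1e-6):
    """ratings: (n_evaluators, n_designs) array of scores in [0, 1]."""
    n_eval, _ = ratings.shape
    weights = np.ones(n_eval) / n_eval           # start from plain averaging
    for _ in range(n_iter):
        consensus = weights @ ratings            # weighted average per design
        # Evaluator "reliability": inverse mean squared deviation from consensus
        mse = np.mean((ratings - consensus) ** 2, axis=1)
        new_weights = 1.0 / (mse + eps)
        new_weights /= new_weights.sum()
        if np.allclose(new_weights, weights, atol=1e-8):
            break
        weights = new_weights
    return consensus, weights

# Toy example: 2 "experts" close to the true scores, 3 noisy evaluators.
rng = np.random.default_rng(0)
true_scores = np.array([0.2, 0.5, 0.9, 0.4])
experts = true_scores + rng.normal(0.0, 0.02, size=(2, 4))
novices = rng.uniform(0.0, 1.0, size=(3, 4))
scores, w = weighted_consensus(np.vstack([experts, novices]))
```

Note that this kind of consistency-based weighting exhibits exactly the failure mode the paper reports: a cluster of evaluators who are consistent with one another but systematically wrong can attract high weight and pull the consensus away from the true scores.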

Publisher

ASME International

Subject

Computer Graphics and Computer-Aided Design, Computer Science Applications, Mechanical Engineering, Mechanics of Materials


Cited by 35 articles.
