Affiliation:
1. MintViz Lab, MANSiD Center, Ştefan cel Mare University of Suceava, Suceava, Romania
2. The Information School | DUB Group, University of Washington, Seattle, WA, USA
Abstract
We clarify fundamental aspects of end-user elicitation, enabling such studies to be run and analyzed with confidence, correctness, and scientific rigor. To this end, our contributions are multifold. We introduce a formal model of end-user elicitation in HCI and identify three types of agreement analysis: expert, codebook, and computer. We show that agreement is a mathematical tolerance relation generating a tolerance space over the set of elicited proposals. We review current measures of agreement and show that all can be computed from an agreement graph. In response to recent criticisms, we show that chance agreement represents an issue solely for inter-rater reliability studies and not for end-user elicitation, where it is opposed by chance disagreement. We conduct extensive simulations of 16 statistical tests for agreement rates, and report Type I errors and power. Based on our findings, we provide recommendations for practitioners and introduce a five-level hierarchy for elicitation studies.
Publisher
Association for Computing Machinery (ACM)
Subject
Human-Computer Interaction
References
146 articles.
Cited by
16 articles.