Abstract
An increasing number of domains are providing us with detailed trace data on human decisions in settings where we can evaluate the quality of these decisions via an algorithm. Motivated by this development, an emerging line of work has begun to consider whether we can characterize and predict the kinds of decisions where people are likely to make errors.
To investigate what a general framework for human error prediction might look like, we focus on a model system with a rich history in the behavioral sciences: the decisions made by chess players as they select moves in a game. We carry out our analysis at a large scale, employing datasets with several million recorded games, and using chess tablebases to acquire a form of ground truth for a subset of chess positions that have been completely solved by computers but remain challenging for even the best players in the world.
We organize our analysis around three categories of features that we argue are present in most settings where the analysis of human error is applicable: the skill of the decision-maker, the time available to make the decision, and the inherent difficulty of the decision. We identify rich structure in all three of these categories of features, and find strong evidence that in our domain, features describing the inherent difficulty of an instance are significantly more powerful than features based on skill or time.
Funder
Google Research Grant
Simons Investigator Award
Facebook Faculty Research Grant
ARO MURI Grant
Publisher
Association for Computing Machinery (ACM)
Cited by
10 articles.
1. Positive and negative explanation effects in human–agent teams. AI and Ethics, 2024-01-10.
2. Human Satisfaction in Ad Hoc Human-Agent Teams. Artificial Intelligence in HCI, 2023.
3. Personalized Game Difficulty Prediction Using Factorization Machines. The 35th Annual ACM Symposium on User Interface Software and Technology, 2022-10-28.
4. Trucks Don't Mean Trump: Diagnosing Human Error in Image Analysis. 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022-06-20.
5. Evaluating Human and Agent Task Allocators in Ad Hoc Human-Agent Teams. Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems XV, 2022.