Author:
Liang Garston, Sloane Jennifer F., Donkin Christopher, Newell Ben R.
Abstract
In three experiments, we sought to understand when and why people use an algorithmic decision aid. Distinct from recent approaches, we explicitly enumerated the algorithm’s accuracy while also providing summary feedback and training that allowed participants to assess their own skills. Our results highlight that such direct performance comparisons between the algorithm and the individual encourage a strategy of selective reliance on the decision aid; individuals ignored the algorithm when the task was easier and relied on the algorithm when the task was harder. Our systematic investigation of summary feedback, training experience, and strategy hint manipulations shows that further opportunities to learn about the algorithm encourage not only increased reliance on the algorithm but also engagement in experimentation and verification of its recommendations. Together, our findings emphasize the decision-maker’s capacity to learn about the algorithm, providing insights into how we can improve the use of decision aids.
Funder
Australian Research Council Discovery Grant
Publisher
Springer Science and Business Media LLC
Subject
Cognitive Neuroscience, Experimental and Cognitive Psychology
Cited by
5 articles.