Abstract
Two major barriers to conducting user studies are the cost of recruiting participants and the researcher time required to run the studies. Typical solutions are to study convenience samples or to design studies that can be deployed on crowd-sourcing platforms. Both solutions have benefits but also drawbacks. Even in cases where these approaches make sense, it is still reasonable to ask whether we are using our resources (participants' time and our own) efficiently, and whether we can do better. Typically, user studies compare randomly assigned experimental conditions such that each condition receives the same number of trials. As has been demonstrated in clinical trials, this uniform sampling approach is sub-optimal. The goal of many Information Retrieval (IR) user studies is to determine which strategy (e.g., behaviour or system) performs best. In such a setup, it is not wise to waste participant and researcher time and money on conditions that are clearly inferior. In this work, we explore whether Best Arm Identification (BAI) algorithms provide a natural solution to this problem. BAI methods are a class of Multi-Armed Bandits (MABs) in which the only goal is to output a recommended arm, and the algorithms are evaluated by the average payoff of that recommended arm. Using three datasets associated with previously published IR-related user studies and a series of simulations, we test the extent to which the cost of running user studies can be reduced by employing BAI methods. Our results suggest that some BAI instances (racing algorithms) are promising devices for reducing the cost of user studies. One of the racing algorithms studied, the Hoeffding race, holds particular promise: it offered consistent savings across both the real and simulated datasets and only extremely rarely returned a result inconsistent with that of the full trial. We believe these results can have an important impact on how research is performed in this field. They show that the conditions assigned to participants could be changed dynamically and automatically to make efficient use of participant and experimenter time.
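As a rough illustration of the racing idea described above, the following is a minimal Python sketch of a Hoeffding-style race, assuming Bernoulli rewards in [0, 1]. The function name hoeffding_race, the specific anytime confidence correction, and the round-robin sampling schedule are illustrative assumptions, not the paper's exact procedure.

```python
import math
import random


def hoeffding_race(arms, delta=0.05, budget=10_000):
    """Racing-style best arm identification with Hoeffding bounds.

    arms   : list of zero-argument callables returning a reward in [0, 1],
             e.g. one simulated participant outcome per study condition.
    delta  : target probability of eliminating the true best condition.
    budget : hard cap on the total number of participant sessions.
    """
    k = len(arms)
    active = list(range(k))
    counts = [0] * k
    means = [0.0] * k
    pulls = 0

    def radius(i):
        # Anytime Hoeffding confidence radius with a crude union bound
        # over arms and rounds (one common choice among several).
        return math.sqrt(math.log(4.0 * k * counts[i] ** 2 / delta)
                         / (2.0 * counts[i]))

    while len(active) > 1 and pulls < budget:
        # One round: assign one participant to every surviving condition.
        for i in active:
            reward = arms[i]()
            counts[i] += 1
            means[i] += (reward - means[i]) / counts[i]
            pulls += 1

        # Drop any condition whose upper confidence bound falls below the
        # best surviving lower bound: with high probability it is not best.
        best_lower = max(means[i] - radius(i) for i in active)
        active = [i for i in active if means[i] + radius(i) >= best_lower]

    # Recommend the surviving arm with the highest empirical mean.
    return max(active, key=lambda i: means[i]), pulls


if __name__ == "__main__":
    # Hypothetical demo: three conditions with success rates 0.55/0.60/0.75.
    random.seed(7)
    conditions = [lambda p=p: float(random.random() < p)
                  for p in (0.55, 0.60, 0.75)]
    best, cost = hoeffding_race(conditions)
    print(best, cost)  # expect arm 2, usually at well below the cap
```

When one condition clearly dominates, the race eliminates the inferior conditions early, so the total number of participant sessions (cost) is typically far below what a fixed uniform-allocation design of the same statistical strength would require.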
Funder
Ministerio de Ciencia, Innovación y Universidades
Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia
Publisher
Springer Science and Business Media LLC
Cited by
2 articles.