Affiliation:
1. University of Amsterdam, Amsterdam, The Netherlands
2. University of Amsterdam and Ahold Delhaize, Zaandam, The Netherlands
3. Microsoft, Redmond, WA
Abstract
Online ranker evaluation is one of the key challenges in information retrieval. Although the preferences of rankers can be inferred by interleaving methods, the problem of how to effectively choose the ranker pair that generates the interleaved list without degrading the user experience too much remains challenging. On the one hand, if two rankers have not been compared enough, the inferred preference can be noisy and inaccurate. On the other hand, if two rankers are compared too many times, the interleaving process inevitably hurts the user experience too much. This dilemma is known as the exploration versus exploitation tradeoff. It is captured by the K-armed dueling bandit problem, a variant of the K-armed bandit problem in which the feedback comes in the form of pairwise preferences. Today's deployed search systems can evaluate a large number of rankers concurrently, and scaling effectively in the presence of numerous rankers is a critical aspect of K-armed dueling bandit problems.
In this article, we focus on solving the large-scale online ranker evaluation problem under the so-called Condorcet assumption, where there exists an optimal ranker that is preferred to all other rankers. We propose Merge Double Thompson Sampling (MergeDTS), which first utilizes a divide-and-conquer strategy that localizes the comparisons carried out by the algorithm to small batches of rankers, and then employs Thompson Sampling to reduce the comparisons between suboptimal rankers inside these small batches. The effectiveness (regret) and efficiency (time complexity) of MergeDTS are extensively evaluated using examples from the domain of online evaluation for web search. Our main finding is that for large-scale Condorcet ranker evaluation problems, MergeDTS outperforms the state-of-the-art dueling bandit algorithms.
Funder
Netherlands Organisation for Scientific Research
Innovation Center for Artificial Intelligence
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Science Applications,General Business, Management and Accounting,Information Systems
Cited by
3 articles.
1. Reinforcement online learning to rank with unbiased reward shaping;Information Retrieval Journal;2022-08-04
2. Human Preferences as Dueling Bandits;Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval;2022-07-06
3. Cascading Hybrid Bandits: Online Learning to Rank for Relevance and Diversity;Fourteenth ACM Conference on Recommender Systems;2020-09-22