Abstract
Multi-armed bandits achieve excellent long-term performance in practice and sublinear cumulative regret in theory. However, a real-world limitation of bandit learning is poor performance in early rounds due to the need for exploration, a phenomenon known as the cold-start problem. While this limitation may be necessary in the general classical stochastic setting, in practice, where “pre-training” data or knowledge is available, it is natural to attempt to “warm-start” bandit learners. This paper provides a theoretical treatment of warm-start contextual bandit learning, adopting Linear Thompson Sampling as a principled framework for flexibly transferring domain knowledge, as might be captured by bandit learning on a prior related task, a supervised pre-trained Bayesian posterior, or domain expert knowledge. Under standard conditions, we prove a general regret bound. We then apply our warm-start algorithmic technique to other common bandit learners: the $\epsilon$-greedy and upper-confidence-bound contextual learners. An upper regret bound is also provided for LinUCB. Our suite of warm-start learners is evaluated in experiments with both artificial and real-world datasets, including a motivating task of tuning a commercial database. A comprehensive range of experimental results is presented, highlighting the effect of different hyperparameters and quantities of pre-training data.
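To make the warm-start idea concrete, the following is a minimal illustrative sketch of Linear Thompson Sampling seeded with an informative Gaussian prior, where the prior encodes pre-training knowledge (e.g. a posterior fitted on a related task). This is a sketch under assumptions, not the paper's exact algorithm: the class name `WarmStartLinTS`, the posterior-inflation parameter `v`, and the use of a standard Bayesian linear-regression update are illustrative choices. With `mu_0 = 0` and `sigma_0 = I` it reduces to ordinary cold-start LinTS.

```python
import numpy as np

class WarmStartLinTS:
    """Illustrative warm-started Linear Thompson Sampling (hypothetical names).

    The prior N(mu_0, sigma_0) carries transferred knowledge; a tighter,
    well-placed prior should reduce early-round (cold-start) regret.
    """

    def __init__(self, dim, mu_0=None, sigma_0=None, noise_var=1.0, v=1.0):
        self.noise_var = noise_var  # assumed reward-noise variance
        self.v = v                  # scale inflating the sampling covariance
        self.mu = np.zeros(dim) if mu_0 is None else np.asarray(mu_0, float).copy()
        self.precision = (np.eye(dim) if sigma_0 is None
                          else np.linalg.inv(sigma_0))   # prior precision
        self.b = self.precision @ self.mu                # natural parameter

    def select_arm(self, contexts):
        """contexts: (num_arms, dim) feature matrix; returns chosen arm index."""
        cov = self.v ** 2 * np.linalg.inv(self.precision)
        theta = np.random.multivariate_normal(self.mu, cov)  # posterior sample
        return int(np.argmax(contexts @ theta))

    def update(self, x, reward):
        """Conjugate Bayesian linear-regression update for the played context x."""
        self.precision += np.outer(x, x) / self.noise_var
        self.b += reward * x / self.noise_var
        self.mu = np.linalg.solve(self.precision, self.b)

# Usage sketch: warm-start from a pretend pre-trained posterior (made-up numbers).
agent = WarmStartLinTS(dim=5, mu_0=np.full(5, 0.3), sigma_0=0.1 * np.eye(5))
contexts = np.random.randn(10, 5)   # 10 arms, 5-dimensional features
arm = agent.select_arm(contexts)
agent.update(contexts[arm], reward=1.0)
```

The same prior-seeding device carries over to the other learners mentioned above, e.g. initializing the ridge-regression statistics of LinUCB or the value estimates of $\epsilon$-greedy from pre-training data.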
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Hardware and Architecture, Human-Computer Interaction, Information Systems, Software