Affiliation:
1. Columbia Business School, Columbia University, New York, New York 10027;
2. Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139;
3. Google Research, New York, New York 10011
Abstract
In the classic contextual bandits problem, in each round t, a learner observes some context c, chooses some action i to perform, and receives some reward r_{i,t}(c). We consider the variant of this problem in which, in addition to receiving the reward r_{i,t}(c), the learner also learns the values of r_{i,t}(c′) for some other contexts c′ in a set S_i(c); that is, the rewards that would have been achieved by performing that action under the different contexts c′ ∈ S_i(c). This variant arises in several strategic settings, such as learning how to bid in nontruthful repeated auctions, which has gained a lot of attention lately as many platforms have switched to running first-price auctions. We call this problem the contextual bandits problem with cross-learning. The best algorithms for the classic contextual bandits problem achieve Õ(√(CKT)) regret against all stationary policies, where C is the number of contexts, K the number of actions, and T the number of rounds. We design and analyze new algorithms for the contextual bandits problem with cross-learning and show that their regret has better dependence on the number of contexts. Under complete cross-learning, in which the rewards for all contexts are learned when choosing an action (that is, the set S_i(c) contains all contexts), we show that our algorithms achieve regret Õ(√(KT)), removing the dependence on C. In all other cases, that is, under partial cross-learning, in which S_i(c) is a strict subset of the contexts for some context–action pair (i, c), the regret bounds depend on how the sets S_i(c) impact the degree to which cross-learning between contexts is possible. We simulate our algorithms on real auction data from an ad exchange running first-price auctions and show that they outperform traditional contextual bandit algorithms.
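To make the cross-learning idea concrete, below is a minimal illustrative sketch, not the algorithm analyzed in the paper, of a UCB-style policy under complete cross-learning: playing action i in one context also reveals that action's reward in every other context, so the empirical statistics of all contexts are updated from a single pull. All names here (CrossLearningUCB, n_contexts, n_actions, rewards_all_contexts) are hypothetical and chosen only for the example.

```python
import numpy as np

class CrossLearningUCB:
    """Illustrative UCB-style bandit with complete cross-learning (sketch only)."""

    def __init__(self, n_contexts, n_actions):
        self.counts = np.zeros((n_contexts, n_actions))  # observations per (context, action)
        self.means = np.zeros((n_contexts, n_actions))   # empirical mean rewards
        self.t = 0                                       # number of rounds played

    def select_action(self, context):
        self.t += 1
        counts = self.counts[context]
        if np.any(counts == 0):
            return int(np.argmin(counts))                # try each action at least once
        bonus = np.sqrt(2.0 * np.log(self.t) / counts)   # UCB exploration bonus
        return int(np.argmax(self.means[context] + bonus))

    def update(self, action, rewards_all_contexts):
        # Complete cross-learning: one pull yields a reward sample of the chosen
        # action for *every* context, so every row's statistics are refreshed.
        for c, r in enumerate(rewards_all_contexts):
            self.counts[c, action] += 1
            n = self.counts[c, action]
            self.means[c, action] += (r - self.means[c, action]) / n
```

In this regime each (context, action) pair accumulates observations at roughly the full rate T rather than about T/C, which is the intuition for why the √C factor can be removed from the regret bound; under partial cross-learning, the update would only touch the contexts in the revealed set.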
Publisher
Institute for Operations Research and the Management Sciences (INFORMS)
Subject
Management Science and Operations Research,Computer Science Applications,General Mathematics
Cited by
1 article.