1. Improved Algorithms for Linear Stochastic Bandits;Y Abbasi-Yadkori;Advances in Neural Information Processing Systems,2011
2. Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits;A Agarwal;Proceedings of the 31st International Conference on Machine Learning,2014
3. Analysis of Thompson Sampling for the Multi-armed Bandit Problem;S Agrawal;Proceedings of the 25th Annual Conference on Learning Theory,2012
4. A Near-Optimal Exploration-Exploitation Approach for Assortment Selection;S Agrawal;Proceedings of the 2016 ACM Conference on Economics and Computation (EC '16),2016
5. MNL-Bandit: A Dynamic Learning Approach to Assortment Selection;S Agrawal;Proceedings of the 2017 Conference on Learning Theory,2017