1. How expert confidence can improve collective decision-making in contextual multi-armed bandit problems;Abels,2020
2. Contextual bandit learning with predictable rewards;Agarwal,2012
3. Taming the monster: a fast and simple algorithm for contextual bandits;Agarwal,2014
4. Sample mean based index policies by O(log n) regret for the multi-armed bandit problem;Agrawal;Adv. Appl. Probab.,1995
5. Collective decision making in the social context of science;Aikenhead;Sci. Educ.,1985