Affiliation:
1. Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139;
2. Microsoft Research NYC, New York, New York 10012
Abstract
How do you incentivize self-interested agents to explore when they prefer to exploit? We consider complex exploration problems in which each agent faces the same (but unknown) Markov decision process (MDP). In contrast with traditional formulations of reinforcement learning (RL), agents control the choice of policies, whereas an algorithm can only issue recommendations. However, the algorithm controls the flow of information and can incentivize the agents to explore via information asymmetry. We design an algorithm that explores all reachable states in the MDP, with provable guarantees similar to those for incentivizing exploration in the static, stateless bandit settings studied previously. From the RL perspective, we design RL mechanisms: RL algorithms that interact with self-interested agents and are compatible with their incentives. To the best of our knowledge, this is the first work on RL mechanisms, that is, the first on any scenario that combines RL with incentives.
Publisher
Institute for Operations Research and the Management Sciences (INFORMS)
Subject
Management Science and Operations Research, Computer Science Applications