Author:
Beren Millidge, Mark Walton, Rafal Bogacz
Abstract
An influential theory posits that dopaminergic neurons in the mid-brain implement a model-free reinforcement learning algorithm based on temporal difference (TD) learning. A fundamental assumption of this model is that the reward function being optimized is fixed. However, for biological creatures the ‘reward function’ can fluctuate substantially over time depending on the internal physiological state of the animal. For instance, food is rewarding when you are hungry, but not when you are satiated. While a variety of experiments have demonstrated that animals can instantly adapt their behaviour when their internal physiological state changes, under current thinking this requires model-based planning, since the standard model of TD learning requires retraining from scratch if the reward function changes. Here, we propose a novel and simple extension to TD learning that allows zero-shot (instantaneous) generalization to changing reward functions. Mathematically, we show that if we assume the reward function is a linear combination of reward basis vectors, and if we learn a value function for each reward basis using TD learning, then we can recover the true value function as a linear combination of these value function bases. This representational scheme allows instant and perfect generalization to any reward function in the span of the reward basis vectors, and possesses a straightforward implementation in neural circuitry by parallelizing the standard circuitry required for TD learning. We demonstrate that our algorithm reproduces behavioural data on reward revaluation tasks, predicts dopamine responses in the nucleus accumbens, and learns as quickly as successor representations while requiring much less memory.
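The core claim can be sketched in a few lines of tabular TD learning. The following is a minimal illustration of our reading of the abstract, not the authors' implementation: a toy deterministic chain MDP with two hypothetical reward bases ("food" and "water"), one TD(0) value function learned per basis, and a new internal-state weighting applied zero-shot. All environment details and variable names are illustrative assumptions.

```python
import numpy as np

GAMMA = 0.9
N_STATES = 3          # chain: s0 -> s1 -> s2 (terminal)

# Two hypothetical reward bases, indexed by the state being entered.
reward_bases = np.array([
    [0.0, 0.0, 1.0],   # basis 0 ("food"): reward on reaching s2
    [0.0, 1.0, 0.0],   # basis 1 ("water"): reward on reaching s1
])

def td_learn(rewards, episodes=2000, alpha=0.1):
    """Tabular TD(0) policy evaluation on the deterministic chain."""
    V = np.zeros(N_STATES)
    for _ in range(episodes):
        for s in range(N_STATES - 1):
            s_next = s + 1
            # Bootstrap target; terminal state contributes no future value.
            not_terminal = s_next != N_STATES - 1
            target = rewards[s_next] + GAMMA * V[s_next] * not_terminal
            V[s] += alpha * (target - V[s])
    return V

# Learn one value function per reward basis, in parallel in principle.
V_bases = np.array([td_learn(r) for r in reward_bases])

# A change of internal state re-weights the bases (e.g. hungry, mildly thirsty).
w = np.array([2.0, 0.5])
V_combined = w @ V_bases                    # zero-shot linear recombination
V_retrained = td_learn(w @ reward_bases)    # TD from scratch, for comparison

print(np.allclose(V_combined, V_retrained, atol=1e-3))  # prints True
```

The recombined value function matches the one retrained from scratch on the re-weighted reward, which is the abstract's generalization claim for any reward in the span of the bases.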
Publisher
Cold Spring Harbor Laboratory
Cited by
2 articles.