Abstract
Finding the right amount of deliberation, between insufficient and excessive, is a hard decision-making problem that depends on the value we place on our time. Average-reward, putatively encoded by tonic dopamine, serves in existing reinforcement learning theory as the stationary opportunity cost of time, and of deliberation in particular. However, this cost often varies with environmental context that can change over time. Here, we introduce an opportunity cost of deliberation estimated adaptively on multiple timescales to account for non-stationary contextual factors. We use it in a simple decision-making heuristic based on average-reward reinforcement learning (AR-RL) that we call Performance-Gated Deliberation (PGD). We propose PGD as a strategy used by animals wherein deliberation cost is implemented directly as urgency, a previously characterized neural signal effectively controlling the speed of the decision-making process. We show PGD outperforms AR-RL solutions in explaining behaviour and urgency of non-human primates in a context-varying random walk prediction task and is consistent with relative performance and urgency in a context-varying random dot motion task. We make readily testable predictions for both neural activity and behaviour and call for an integrated research program in cognitive and systems neuroscience around the value of time.
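The abstract's core computational idea, an opportunity cost of deliberation tracked on multiple timescales and applied as an urgency signal that truncates deliberation, can be sketched as follows. This is an illustrative reading rather than the paper's implementation: the class and function names, the two exponential timescales, their learning rates, and the linear urgency-to-threshold coupling are all assumptions introduced here.

```python
import numpy as np

class MultiTimescaleCost:
    """Track the reward rate with fast and slow exponential moving averages."""

    def __init__(self, fast_rate=0.1, slow_rate=0.01):
        self.fast_rate = fast_rate    # adapts quickly to context changes (assumed value)
        self.slow_rate = slow_rate    # tracks the long-run reward rate (assumed value)
        self.fast_avg = 0.0
        self.slow_avg = 0.0

    def update(self, reward):
        # Exponential moving averages of obtained reward on two timescales.
        self.fast_avg += self.fast_rate * (reward - self.fast_avg)
        self.slow_avg += self.slow_rate * (reward - self.slow_avg)

    def cost_per_step(self, weight=0.5):
        # Opportunity cost of one more time step of deliberation,
        # blended across timescales (the blend weight is an assumption).
        return weight * self.fast_avg + (1.0 - weight) * self.slow_avg


def deliberate(evidence_stream, cost_tracker, commit_threshold=1.0):
    """Accumulate evidence until it crosses a threshold lowered by urgency.

    Urgency grows with elapsed deliberation time in proportion to the
    estimated opportunity cost, so a high-reward context cuts deliberation
    short, while a lean context allows longer deliberation.
    """
    accumulated, t = 0.0, 0
    for t, sample in enumerate(evidence_stream, start=1):
        accumulated += sample
        urgency = cost_tracker.cost_per_step() * t
        if abs(accumulated) + urgency >= commit_threshold:
            break
    return np.sign(accumulated), t


# Example: a richer recent reward history raises the cost estimate,
# which raises urgency and leads to earlier commitment.
rng = np.random.default_rng(0)
tracker = MultiTimescaleCost()
for r in rng.exponential(scale=0.2, size=200):   # simulated recent rewards
    tracker.update(r)
choice, steps = deliberate(rng.normal(0.05, 1.0, size=100), tracker)
print(f"chose {choice:+.0f} after {steps} evidence samples")
```

Under this sketch, a context with a higher reward rate inflates the fast estimate, raising the per-step cost and shortening deliberation, which is the qualitative behaviour the abstract attributes to performance-gated deliberation and to urgency.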
Publisher
Cold Spring Harbor Laboratory