Abstract
Primates exploring and exploiting a continuous sensorimotor space rely on maps in the dorsal stream that guide visual search, locomotion, and grasp. For example, an animal swinging from one tree limb to the next uses rapidly evolving sensorimotor representations to decide when to harvest a reward. We show that such exploration/exploitation depends on dynamic maps of competing option values in the human dorsal stream. Using a reinforcement learning (RL) model capable of rapid learning and efficient exploration and exploitation, we show that preferred options are selectively maintained on the map while the values of spatiotemporally distant alternatives are compressed. Consistent with biophysical models of cortical option competition, dorsal stream BOLD signal increased and posterior cortical β1/α oscillations desynchronized as the number of potentially valuable options grew, matching the predictions of information-compressing RL rather than of traditional RL, which caches long-term values. BOLD and β1/α responses were correlated and predicted the successful transition from exploration to exploitation. These option competition dynamics were observed across parietal and frontal dorsal stream regions, but not in occipito-temporal area MT+, which was sensitive to the average reward rate. Our results also illustrate that models' diverging predictions about information dynamics can help adjudicate between them based on population activity.
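To make the contrast between the two classes of model concrete, the sketch below is an illustrative toy example only, not the authors' model: it compares a standard value-caching delta-rule learner with a hypothetical "compressing" variant in which the values of unchosen options decay toward a common baseline, loosely mirroring the selective maintenance of preferred options described above. All parameter names and values (e.g., `alpha`, `decay`, the noisy-greedy choice rule) are assumptions made for illustration.

```python
# Illustrative sketch only; NOT the model used in the study.
import numpy as np

rng = np.random.default_rng(0)
n_options, alpha, decay = 4, 0.3, 0.2       # learning rate and compression strength are assumed
true_means = rng.uniform(0.0, 1.0, n_options)

q_cached = np.zeros(n_options)               # "traditional" RL: every option's value is cached
q_compressed = np.zeros(n_options)           # "compressing" RL: mainly the preferred option is maintained

for t in range(200):
    # Noisy greedy choice based on the compressed learner's values.
    choice = int(np.argmax(q_compressed + rng.normal(0, 0.05, n_options)))
    reward = rng.normal(true_means[choice], 0.1)

    # Standard delta-rule update for the chosen option in both learners.
    q_cached[choice] += alpha * (reward - q_cached[choice])
    q_compressed[choice] += alpha * (reward - q_compressed[choice])

    # Compression step: values of unchosen ("distant") options are squeezed
    # toward their mean, reducing the information held about alternatives.
    unchosen = np.arange(n_options) != choice
    q_compressed[unchosen] += decay * (q_compressed[unchosen].mean() - q_compressed[unchosen])

print("cached values:     ", np.round(q_cached, 2))
print("compressed values: ", np.round(q_compressed, 2))
```

In this toy setting the cached learner retains separate long-run value estimates for every option, whereas the compressing learner keeps a sharp estimate only for the option it currently prefers, with the alternatives collapsing toward an undifferentiated baseline.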