Abstract
Computational cognitive models are a fundamental tool in behavioral neuroscience. They instantiate in software precise hypotheses about the cognitive mechanisms underlying a particular behavior. Constructing these models is typically a difficult iterative process that requires both inspiration from the literature and the creativity of an individual researcher. Here, we adopt an alternative approach: learning parsimonious cognitive models directly from data. We fit behavioral data using a recurrent neural network that is penalized for carrying information forward in time, leading to sparse, interpretable representations and dynamics. When fitting synthetic behavioral data from known cognitive models, our method recovers the underlying form of those models. When fit to laboratory data from rats performing a reward-learning task, our method recovers simple and interpretable models that make testable predictions about neural mechanisms.
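The penalty described here, charging the network for information carried forward in time, can be made concrete with a small sketch. The following is a minimal illustration, not the authors' implementation: it assumes a Gaussian variational bottleneck on the hidden state at each timestep, and the model name, dimensions, and penalty weight are all illustrative choices.

```python
# Minimal sketch (illustrative, not the paper's code): a GRU whose hidden
# state passes through a noisy Gaussian bottleneck at every step, with a
# KL penalty that discourages carrying information forward in time.
import torch
import torch.nn as nn

class BottleneckRNN(nn.Module):
    def __init__(self, n_inputs, n_hidden, n_outputs):
        super().__init__()
        self.rnn_cell = nn.GRUCell(n_inputs, n_hidden)
        # Per-unit posterior: mean comes from the cell, log-variance is learned.
        self.log_var = nn.Parameter(torch.zeros(n_hidden))
        self.readout = nn.Linear(n_hidden, n_outputs)

    def forward(self, inputs):
        # inputs: (time, batch, n_inputs)
        h = inputs.new_zeros(inputs.shape[1], self.rnn_cell.hidden_size)
        logits, kl_total = [], 0.0
        for x_t in inputs:
            mu = self.rnn_cell(x_t, h)
            std = torch.exp(0.5 * self.log_var)
            # Reparameterized sample: the noisy state actually carried forward.
            h = mu + std * torch.randn_like(mu)
            # KL(q(h_t) || N(0, I)): the information cost of each hidden unit.
            kl = 0.5 * (mu.pow(2) + std.pow(2) - self.log_var - 1).sum(-1)
            kl_total = kl_total + kl.mean()
            logits.append(self.readout(h))
        return torch.stack(logits), kl_total

# Usage: fit observed choices with cross-entropy plus the weighted penalty.
model = BottleneckRNN(n_inputs=2, n_hidden=8, n_outputs=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
inputs = torch.randn(100, 16, 2)           # (time, batch, features), e.g. past choice/reward
targets = torch.randint(0, 2, (100, 16))   # observed binary choices
logits, kl = model(inputs)
loss = nn.functional.cross_entropy(logits.reshape(-1, 2), targets.reshape(-1)) + 0.1 * kl
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Under these assumptions, raising the penalty weight prunes hidden units whose posterior collapses onto the prior, which is one route by which sparse, interpretable representations and dynamics could emerge.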
Publisher
Cold Spring Harbor Laboratory
Cited by
7 articles.