Abstract
The ability to learn from past experience is an important adaptation, but how natural selection shapes learning is not well understood. Here, we investigate the evolution of associative learning (the association of stimuli with rewards) using a modelling approach based on the evolution of the neural networks (NNs) underlying learning. Individuals employ their genetically encoded NN to solve a learning task with fitness consequences. NNs inducing more efficient learning have a selective advantage and spread in the population. We show that in a simple learning task, the evolved NNs, even those with a very simple architecture, outperform well-studied associative learning rules such as the Rescorla-Wagner rule. During their evolutionary trajectory, NNs often pass through a transitional stage where they functionally resemble Rescorla-Wagner learning, but further evolution shapes them to approximate the theoretically optimal learning rule. Networks with a simple architecture evolve much faster and tend to outperform their more complex counterparts in the short term. Even in the long term, network complexity is not a reliable indicator of evolved network performance. These conclusions change somewhat when the learning task is more challenging: the performance of many evolved networks is then no better than that of the Rescorla-Wagner rule, and only some of the more complex networks reach a performance level close to that of the optimal Bayesian learning rule. In conclusion, we show that the mechanisms underlying learning influence the outcome of evolution. A neural-network approach allows for more flexibility and a wider set of evolutionary outcomes than most analytical studies, while at the same time providing a relatively straightforward and intuitive framework for studying the learning process.
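For readers unfamiliar with the baseline the abstract compares against, the Rescorla-Wagner rule updates the associative strength V of a stimulus toward the obtained reward in proportion to the prediction error. A minimal sketch (the function name and parameter values are illustrative, not taken from the paper):

```python
def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Track associative strength V under the Rescorla-Wagner update
    for a single stimulus: V <- V + alpha * (reward - V),
    where (reward - V) is the prediction error."""
    v = v0
    history = []
    for reward in rewards:
        v += alpha * (reward - v)  # move V toward the observed reward
        history.append(v)
    return history

# With a constant reward of 1, V rises from 0 toward 1:
trace = rescorla_wagner([1.0] * 10, alpha=0.1)
```

With a fixed learning rate alpha, the rule weights recent rewards exponentially; the optimal Bayesian rule mentioned in the abstract instead adapts how strongly each new observation is weighted.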
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.