Abstract
Decision-making and movement, whether of single animals or of groups, are often treated and investigated as separate processes. However, many decisions are made while moving through a given space. In other words, both processes are optimised at the same time, and optimal decision-making can only be understood in the light of movement constraints. To fully understand the rationale of decisions embedded in an environment (and therefore the underlying evolutionary processes), it is essential to develop theories of spatial decision-making. Here, we present a framework developed specifically to address this issue by means of artificial neural networks and genetic algorithms. Specifically, we investigate a simple task in which single agents must learn to explore a square arena without leaving its boundaries. We show that agents evolve increasingly effective strategies for solving this spatially embedded learning task without being given an arbitrary initial model of movement. Through this process, the agents learn how to move (i.e. to avoid the arena walls) in order to make increasingly good decisions (i.e. to improve their exploration of the arena). Ultimately, this framework yields predictions of potentially optimal behavioural strategies for tasks that combine learning and movement.
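The abstract does not specify the network architecture, genetic operators, or fitness function used by the authors. As an illustration only, the general scheme it describes (a neural-network controller whose weights are evolved by a genetic algorithm, rewarded for exploring a square arena without leaving it) can be sketched roughly as follows; the arena size, network shape, mutation parameters, and fitness definition below are all assumptions, not the paper's actual settings:

```python
import math
import random

random.seed(0)                # fixed seed so the illustration is reproducible
ARENA = 10                    # side length of the square arena (assumed)
STEPS = 200                   # simulation steps per fitness evaluation (assumed)
N_IN, N_H, N_OUT = 4, 6, 2    # tiny feed-forward controller (assumed shape)

def make_genome():
    """Genome = flat list of connection weights for the 4-6-2 network."""
    n = N_IN * N_H + N_H * N_OUT
    return [random.uniform(-1, 1) for _ in range(n)]

def act(genome, obs):
    """One feed-forward pass; obs = normalised distances to the four walls."""
    hidden = []
    for j in range(N_H):
        s = sum(obs[i] * genome[i * N_H + j] for i in range(N_IN))
        hidden.append(math.tanh(s))
    base = N_IN * N_H
    out = []
    for k in range(N_OUT):
        s = sum(hidden[j] * genome[base + j * N_OUT + k] for j in range(N_H))
        out.append(math.tanh(s))
    return out  # (dx, dy) movement command, each in [-1, 1]

def fitness(genome):
    """Count unique grid cells visited; stepping outside the arena ends the run."""
    x, y = ARENA / 2, ARENA / 2
    visited = {(int(x), int(y))}
    for _ in range(STEPS):
        obs = [x / ARENA, (ARENA - x) / ARENA, y / ARENA, (ARENA - y) / ARENA]
        dx, dy = act(genome, obs)
        x, y = x + dx, y + dy
        if not (0 <= x < ARENA and 0 <= y < ARENA):
            break  # agent left the arena: evaluation stops here
        visited.add((int(x), int(y)))
    return len(visited)

def evolve(pop_size=30, generations=30, mut_rate=0.1):
    """Simple elitist genetic algorithm over controller weights."""
    pop = [make_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        elite = ranked[: pop_size // 5]      # keep the best fifth unchanged
        pop = [list(g) for g in elite]
        while len(pop) < pop_size:
            child = list(random.choice(elite))
            for i in range(len(child)):      # Gaussian weight mutation
                if random.random() < mut_rate:
                    child[i] += random.gauss(0, 0.3)
            pop.append(child)
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Note the key feature the abstract emphasises: the agent has no built-in movement model; any wall-avoiding, space-covering behaviour emerges solely because genomes that leave the arena early visit fewer cells and are selected against.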
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.