Author:
Mollard Sami, Wacongne Catherine, Bohte Sander M., Roelfsema Pieter R.
Abstract
Many cognitive problems can be decomposed into series of subproblems that are solved sequentially by the brain. When subproblems are solved, relevant intermediate results need to be stored by neurons and propagated to the next subproblem, until the overarching goal has been completed. We will here consider visual tasks, which can be decomposed into sequences of elemental visual operations. Experimental evidence suggests that intermediate results of the elemental operations are stored in working memory as an enhancement of neural activity in the visual cortex. The focus of enhanced activity is then available for subsequent operations to act upon. The main question at stake is how the elemental operations and their sequencing can emerge in neural networks that are trained with only rewards, in a reinforcement learning setting.

We here propose a new recurrent neural network architecture that can learn composite visual tasks that require the application of successive elemental operations. Specifically, we selected three tasks for which electrophysiological recordings of monkeys’ visual cortex are available. To train the networks, we used RELEARNN, a biologically plausible four-factor Hebbian learning rule, which is local both in time and space. We report that networks learn elemental operations, such as contour grouping and visual search, and execute sequences of operations, solely based on the characteristics of the visual stimuli and the reward structure of a task.

After training was completed, the activity of the units of the neural network resembled the activity of neurons in the visual cortex of monkeys solving the same tasks. Relevant information that needed to be exchanged between subroutines was maintained as a focus of enhanced activity and passed on to the subsequent subroutines.
Our results demonstrate how a biologically plausible learning rule can train a recurrent neural network on multistep visual tasks.
Author Summary
Many visual problems, like finding your way on a map, are easier to solve when decomposed into a series of easier subproblems. To successfully decompose a problem into a sequence of easier subproblems, information must flow between them so that the solution of one subproblem can be used in the next ones. Experimental evidence indicates that, in the visual cortex of monkeys solving complex visual problems, outcomes of subproblems are made available as a focus of enhanced activity so that they can be used as inputs for the next subproblems. To understand how such strategies are learnt, we developed a recurrent artificial neural network that we trained in a reinforcement learning context, with a biologically plausible learning rule, on the same tasks that were presented to monkeys. We found that the activation of units of the networks resembled the spatiotemporal patterns of activity observed in the visual cortex of monkeys. Our results shed light on how recurrent neural networks trained with a biologically plausible learning rule can learn to propagate enhanced activity to solve complex visual tasks.
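The abstract describes training with a four-factor Hebbian rule that is local in time and space, modulated by reward. As an illustration of the general class of rule involved (not the authors' actual RELEARNN implementation, whose details are in the paper), the sketch below combines four factors: presynaptic activity, postsynaptic activity, a per-unit feedback (attention) signal, and a global reward-prediction error. All names and values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 8, 4
W = rng.normal(scale=0.1, size=(n_post, n_pre))  # synaptic weights

def four_factor_update(W, pre, post, feedback, delta, lr=0.01):
    """One hypothetical four-factor Hebbian step.

    Factors: presynaptic activity `pre`, postsynaptic activity `post`,
    a feedback/attention signal per postsynaptic unit `feedback`, and a
    global scalar reward-prediction error `delta`. Apart from the scalar
    `delta`, every quantity is local to the synapse's pre/post units.
    """
    # Hebbian co-activity, gated by the feedback signal, forms an
    # eligibility trace; the reward-prediction error converts it into
    # an actual weight change.
    eligibility = np.outer(post * feedback, pre)
    return W + lr * delta * eligibility

pre = rng.random(n_pre)
post = rng.random(n_post)
feedback = rng.random(n_post)
delta = 0.5  # positive surprise strengthens the tagged synapses

W_new = four_factor_update(W, pre, post, feedback, delta)
print(W_new.shape)  # (4, 8)
```

With a negative `delta` the same eligibility trace would weaken the tagged synapses, which is how reward-only training can shape both the elemental operations and their sequencing.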
Publisher
Cold Spring Harbor Laboratory
Cited by
2 articles.