Abstract
Biological constraints often impose restrictions on plausible plasticity rules, such as locality and reward-based rather than supervised learning. Two learning rules that comply with these restrictions are weight (WP) and node (NP) perturbation. NP is often used in learning studies, in particular as a benchmark; it is considered to be superior to WP and more likely neurobiologically realized, as the number of weights, and therefore their perturbation dimension, typically massively exceeds the number of nodes. Here we show that this conclusion no longer holds when we take two biologically relevant properties into account: First, tasks extend in time. This increases the perturbation dimension of NP but not WP. Second, tasks are low dimensional, with many weight configurations providing solutions. We analytically delineate regimes where these properties let WP perform as well as or better than NP. Further, we find qualitative features of the weight and error dynamics that allow one to distinguish which of the rules underlies a learning process: in WP, but not NP, weights mediating zero input diffuse, and gathering batches of subtasks within a trial decreases the number of trials required. These insights suggest new learning rules, which, for specific task types, combine the advantages of WP and NP. Using numerical simulations, we generalize the results to networks with various architectures solving biologically relevant and standard network learning tasks. Our findings suggest WP and NP as similarly plausible candidates for learning in the brain and as similarly important benchmarks.
Statement of significance
Neural networks can learn by first perturbing the network weights or the activity of neurons and thereafter consolidating perturbations that improve the network performance.
Weight perturbation learning is considered less efficient, less useful, and less biologically plausible, because there are many more connection weights than neurons, so that generating beneficial perturbations seems less likely. We show that this argument no longer holds when accounting for two features common in biology: tasks extend in time, and the neuronal dynamics are low dimensional. In particular, we find that perturbing the weights performs as well as or better in various biologically relevant and standard network learning applications. This indicates that weight perturbation learning is similarly useful and a similarly plausible candidate for learning in the brain.
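The perturb-and-consolidate scheme described above can be illustrated with a minimal sketch of weight perturbation on a one-layer linear network. All names, network sizes, and hyperparameters below are illustrative assumptions, not taken from the paper; node perturbation would instead add the noise to the units' activities rather than to the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 5, 2
W_teacher = rng.standard_normal((n_out, n_in))  # target mapping to be learned
W = np.zeros((n_out, n_in))                     # learned weights
x = rng.standard_normal((n_in, 100))            # fixed batch of inputs

def error(W):
    """Mean squared error of the network's outputs against the teacher."""
    return np.mean((W @ x - W_teacher @ x) ** 2)

sigma, eta = 0.01, 0.05                         # perturbation size, learning rate
for _ in range(3000):
    xi = rng.standard_normal(W.shape)           # random weight perturbation
    delta_e = error(W + sigma * xi) - error(W)  # error change it causes
    # Consolidate the perturbation in proportion to the improvement it
    # yields; on average this descends the gradient of the error.
    W -= eta / sigma * delta_e * xi
```

Because the update uses only the scalar error change and the locally generated noise, it is local and reward-based in the sense required above; the trade-off is that a single scalar must inform all weights at once.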
Publisher
Cold Spring Harbor Laboratory