Abstract
Brains learn new information while retaining previously acquired information. It is not known by what mechanisms synapses preserve previously stored memories while they remain plastic and absorb new content. To understand how this stability-plasticity dilemma might be resolved, we investigate a one-layer self-supervised neural network that incrementally learns to recognize new spatio-temporal spike patterns without overwriting existing memories. A plausible combination of Hebbian mechanisms, hetero-synaptic plasticity, and synaptic scaling enables unsupervised learning of spatio-temporal input patterns by single neurons. Acquisition of different patterns is achieved in networks where differentiation of selectivities is enforced by pre-synaptic hetero-synaptic plasticity. However, only when the training spikes are both jittered and stochastic do past memories persist despite ongoing learning. This input variability selects a subset of weights and drives them into a regime where synaptic scaling induces self-stabilization. Our model thereby provides a novel explanation for the stability of synapses encoding preexisting content despite ongoing plasticity, and suggests how nervous systems could incrementally learn and exploit temporally precise Poisson rate codes.

Significance statement

Activity-dependent changes in synaptic efficacy are thought to underlie learning. While ongoing synaptic plasticity is necessary for learning new content, it is detrimental to the traces of previously acquired memories. Here, we show how memories for spatio-temporal patterns can be protected from overwriting. A combination of biologically plausible synaptic plasticity mechanisms turns single neurons into detectors of statistically dominant input patterns. For networks, we find that memory stability is achieved when the patterns to be learned are temporally sloppy and noisy, as opposed to being frozen. This variability drives the relevant synaptic weights to large efficacies, where they become self-reinforcing and continue to support the initially learned patterns. As a result, such a network can incrementally learn one pattern after another.
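To make the described plasticity combination concrete, the sketch below is a minimal, hypothetical Python/NumPy toy, not the paper's actual model: a single rate-based neuron receives jittered, stochastic renditions of a fixed spatio-temporal spike pattern, and its weights are updated with a Hebbian term, heterosynaptic depression of non-co-active synapses, and multiplicative synaptic scaling toward a fixed total efficacy. All parameter values, the `present_pattern` helper, and the exact functional forms of the updates are assumptions for illustration only.

```python
# Toy sketch (assumptions, not the published model): Hebbian potentiation +
# heterosynaptic depression + multiplicative synaptic scaling, driven by
# jittered, stochastic presentations of one spatio-temporal spike pattern.
import numpy as np

rng = np.random.default_rng(0)

N, T = 100, 200                                # input channels, time steps per presentation
pattern_channels = np.arange(30)               # channels that carry the stored pattern
template = np.zeros((N, T), dtype=bool)
template[pattern_channels] = rng.random((30, T)) < 0.05   # fixed Poisson-like template

w = rng.uniform(0.1, 0.3, N)                   # synaptic weights
w_target = w.sum()                             # target total efficacy for scaling

eta_hebb = 0.01                                # Hebbian learning rate
eta_hetero = 0.002                             # heterosynaptic depression rate
tau_scale = 0.1                                # strength of synaptic scaling

def present_pattern(jitter_steps=2, drop_prob=0.2, noise_rate=0.01):
    """Jittered, stochastic copy of the template plus uncorrelated background spikes."""
    spikes = (rng.random((N, T)) < noise_rate).astype(float)
    chans, times = np.nonzero(template)
    for c, t in zip(chans, times):
        if rng.random() < drop_prob:           # stochastic omission of template spikes
            continue
        t_j = int(np.clip(t + rng.integers(-jitter_steps, jitter_steps + 1), 0, T - 1))
        spikes[c, t_j] = 1.0                   # temporal jitter of the remaining spikes
    return spikes

for trial in range(500):
    x = present_pattern()
    y = np.maximum(w @ x, 0.0)                 # rate-based postsynaptic response, shape (T,)
    hebb = eta_hebb * (x @ y) / T              # Hebbian term: input-output correlation per synapse
    hetero = eta_hetero * (hebb.max() - hebb)  # depress synapses that were not co-active
    w = np.clip(w + hebb - hetero, 0.0, None)
    # Multiplicative synaptic scaling toward the fixed total efficacy
    w *= 1.0 + tau_scale * (w_target / max(w.sum(), 1e-9) - 1.0)

mask = np.zeros(N, dtype=bool)
mask[pattern_channels] = True
print("mean weight on pattern channels:", w[mask].mean())
print("mean weight on background channels:", w[~mask].mean())
```

Under these assumptions, repeated presentations drive the weights of pattern-carrying channels up relative to background channels while scaling keeps the total efficacy bounded, loosely mirroring the selectivity and self-stabilization described in the abstract.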
Publisher
Cold Spring Harbor Laboratory