Abstract
Using formal methods complemented by large-scale simulations, we investigate information-theoretic properties of spiking neurons trained with Hebbian and STDP learning rules. It is shown that weight space contains meta-stable states, i.e. points where the average weight change under the learning rule vanishes. These points may capture the random walker transiently. The dwell time in the vicinity of a meta-stable state is either quasi-infinite or very short and depends on the level of noise in the system. Moreover, important information-theoretic quantities, such as the amount of information the neuron transmits, are determined by the meta-stable state. While the Hebbian learning rule reliably leads to meta-stable states, the STDP rule tends to be unstable: the weights are captured by meta-stable states only for a restricted set of hyper-parameter choices. It emerges that stochastic fluctuations play an important role in determining which meta-stable state the neuron settles into. To understand this, we model the trajectory of the neuron through weight space as an inhomogeneous Markovian random walk, where the transition probabilities between states are determined by the statistics of the input signal.
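As an illustration of the central quantity in the abstract, the Python sketch below estimates the average weight change of a single synapse under a weight-dependent STDP rule driven by Poisson spike trains; zero crossings of this average drift mark candidate meta-stable states. This is a minimal sketch under assumed settings: the rule, parameter values, and helper function `mean_drift` are illustrative and not taken from the paper.

```python
# Hypothetical sketch: Monte-Carlo estimate of the average weight drift <dw>
# of one synapse under a weight-dependent additive/multiplicative STDP rule
# with exponential traces, driven by independent Poisson pre/post spikes.
# Weights where <dw> crosses zero (with negative slope) are candidate
# meta-stable states.  All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the paper's values).
A_plus, A_minus = 0.010, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0    # STDP time constants (ms)
rate_pre, rate_post = 10.0, 10.0    # Poisson firing rates (Hz)
T, dt = 10.0, 1.0                   # simulated window (s), time step (ms)


def mean_drift(w, n_trials=50):
    """Average weight change per trial at weight w (Monte-Carlo estimate)."""
    steps = int(T * 1000 / dt)
    drifts = np.empty(n_trials)
    for k in range(n_trials):
        pre = rng.random(steps) < rate_pre * dt / 1000.0
        post = rng.random(steps) < rate_post * dt / 1000.0
        x_pre = x_post = 0.0        # exponential spike traces
        dw = 0.0
        for t in range(steps):
            x_pre *= np.exp(-dt / tau_plus)
            x_post *= np.exp(-dt / tau_minus)
            if pre[t]:
                x_pre += 1.0
                dw -= A_minus * x_post * w          # post-before-pre: depression
            if post[t]:
                x_post += 1.0
                dw += A_plus * x_pre * (1.0 - w)    # pre-before-post: potentiation
        drifts[k] = dw
    return drifts.mean()


# Scan the weight axis; sign changes of the drift locate the fixed points
# of the averaged learning dynamics.
for w in np.linspace(0.0, 1.0, 11):
    print(f"w = {w:.1f}  <dw> = {mean_drift(w):+.4f}")
```

With the weight-dependent amplitudes used here the drift is positive for small `w` and negative for large `w`, so a single stable fixed point appears; noise in the spike trains then determines how long the weight dwells near it, in the spirit of the dwell-time discussion above.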
Publisher
Springer Science and Business Media LLC
Subject
Computer Science Applications
Cited by
1 article.