Abstract
A primary objective of Spiking Neural Networks (SNNs) is energy-efficient computation. Given the event-driven nature of such computation, a low spike rate is highly beneficial toward this goal. A network that processes information encoded in spike timing can, by its nature, exhibit such a sparse event rate; however, as the network becomes deeper and larger, the spike rate tends to increase without any improvement in the final accuracy. If, on the other hand, a penalty on excess spikes is applied during training, the network may shift to a configuration where many neurons are silent, undermining the effectiveness of the training itself. In this paper, we present a learning strategy that keeps the final spike rate under control by modifying the loss function to penalize the spikes a neuron generates after its first one. Moreover, we propose a two-phase training strategy to avoid silent neurons during training, intended for benchmarks where this issue can effectively switch the network off.
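To make the idea concrete, the following is a minimal sketch of how such an excess-spike penalty could be combined with a task loss. The function name, the per-neuron spike-count input, and the penalty weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def excess_spike_penalty(spike_counts, weight=1.0):
    """Penalize every spike a neuron emits beyond its first one.

    spike_counts: array of per-neuron spike counts over the simulation window
    (a hypothetical input; how counts are gathered depends on the simulator).
    Returns a scalar penalty to be added to the task loss.
    """
    # Spikes after the first are "excess"; silent and single-spike
    # neurons contribute nothing to the penalty.
    excess = np.maximum(spike_counts - 1, 0)
    return weight * excess.mean()

# Example: three neurons firing 0, 1, and 4 times respectively.
counts = np.array([0, 1, 4])
print(excess_spike_penalty(counts))  # only neuron 2's 3 extra spikes are penalized -> 1.0
```

Because the first spike of each neuron is exempt, the penalty pushes the network toward sparse firing without directly rewarding fully silent neurons, which is the failure mode the two-phase training strategy is meant to avoid.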