Author:
Shiri Hodassman, Yuval Meir, Karin Kisos, Itamar Ben-Noam, Yael Tugendhaft, Amir Goldental, Roni Vardi, Ido Kanter
Abstract
Real-time sequence identification is a core use-case of artificial neural networks (ANNs), ranging from recognizing temporal events to identifying verification codes. Existing methods apply recurrent neural networks, which suffer from training difficulties; however, performing this function without feedback loops remains a challenge. Here, we present an experimental neuronal long-term plasticity mechanism for high-precision feedforward sequence identification networks (ID-nets) without feedback loops, wherein input objects have a given order and timing. This mechanism temporarily silences neurons following their recent spiking activity. Therefore, transitory objects act on different dynamically created feedforward sub-networks. ID-nets are demonstrated to reliably identify 10 handwritten digit sequences, and are generalized to deep convolutional ANNs with continuous activation nodes trained on image sequences. Counterintuitively, their classification performance, even with a limited number of training examples, is high for sequences but low for individual objects. ID-nets are also implemented for writer-dependent recognition, and suggested as a cryptographic tool for encrypted authentication. The presented mechanism opens new horizons for advanced ANN algorithms.
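To make the silencing idea concrete, the following Python snippet is a minimal sketch, not the authors' implementation: a feedforward layer whose units are masked for a fixed number of subsequent inputs after they fire, so each object in a sequence propagates through a different dynamically created sub-network. The class name SilencingLayer and all parameters (tau_silence, spike_threshold, layer sizes) are illustrative assumptions.

```python
# Minimal sketch of a neuronal silencing mechanism in a feedforward layer.
# A unit that "spikes" on one input is masked for the next tau_silence
# inputs, so consecutive objects in a sequence see different sub-networks.
import numpy as np

rng = np.random.default_rng(0)

class SilencingLayer:
    """Feedforward layer whose units go silent for `tau_silence` steps
    after their activation crosses `spike_threshold` (illustrative)."""

    def __init__(self, n_in, n_out, tau_silence=2, spike_threshold=0.5):
        self.W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_out))
        self.tau_silence = tau_silence
        self.spike_threshold = spike_threshold
        self.silent_for = np.zeros(n_out, dtype=int)  # remaining silent steps

    def forward(self, x):
        active = self.silent_for == 0          # units currently available
        a = np.maximum(x @ self.W, 0.0)        # ReLU pre-activation
        a[~active] = 0.0                       # silenced units output nothing
        spiked = active & (a > self.spike_threshold)
        # Count down existing silences, then start new ones for spiking units.
        self.silent_for = np.maximum(self.silent_for - 1, 0)
        self.silent_for[spiked] = self.tau_silence
        return a

# Present a sequence of objects one after another; units that fired on
# object t are silent for objects t+1 .. t+tau_silence.
layer = SilencingLayer(n_in=16, n_out=8)
sequence = [rng.normal(size=16) for _ in range(4)]
for t, obj in enumerate(sequence):
    out = layer.forward(obj)
    print(f"step {t}: active units ->", np.flatnonzero(out > 0))
```

Because the mask depends on recent firing history, presenting the same objects in a different order activates different sub-networks and yields different outputs; this order sensitivity is the property the abstract exploits for sequence identification.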
Publisher
Springer Science and Business Media LLC
Cited by
1 article.