Integration of Leaky-Integrate-and-Fire Neurons in Standard Machine Learning Architectures to Generate Hybrid Networks: A Surrogate Gradient Approach

Author:

Gerum, Richard C.¹; Schilling, Achim²

Affiliation:

1. Department of Physics and Center for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada; gerum@yorku.ca

2. Experimental Otolaryngology, Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Friedrich-Alexander University Erlangen-Nürnberg, 91054 Erlangen, Germany; and Laboratoire Neurosciences Sensorielles et Cognitives, Aix-Marseille University, 13331 Marseille, France; achim.schilling@fau.

Abstract

Up to now, modern machine learning (ML) has been based on approximating big data sets with high-dimensional functions, taking advantage of huge computational resources. We show that biologically inspired neuron models such as the leaky-integrate-and-fire (LIF) neuron provide novel and efficient ways of information processing. They can be integrated into machine learning models and are a potential target for improving ML performance. Thus, we have derived simple update rules for LIF units to numerically integrate the differential equations. We apply a surrogate gradient approach to train the LIF units via backpropagation. We demonstrate that tuning the leak term of the LIF neurons can be used to run the neurons in different operating modes, such as simple signal integrators or coincidence detectors. Furthermore, we show that the constant surrogate gradient, in combination with tuning the leak term of the LIF units, can be used to achieve the learning dynamics of more complex surrogate gradients. To prove the validity of our method, we applied it to established image data sets (the Oxford 102 flower data set and MNIST), implemented various network architectures, used several input data encodings, and demonstrated that the method is suitable to achieve state-of-the-art classification performance. We provide our method, as well as further surrogate gradient methods to train spiking neural networks via backpropagation, as an open-source KERAS package to make it available to the neuroscience and machine learning community. To increase the interpretability of the underlying effects and thus make a small step toward opening the black box of machine learning, we provide interactive illustrations, with the possibility of systematically monitoring the effects of parameter changes on the learning characteristics.
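The abstract's core ideas (a discrete-time LIF update rule, a leak term that switches the unit between integrator and coincidence-detector modes, and a constant surrogate gradient for backpropagation through the spike non-linearity) can be sketched as follows. This is a minimal illustration, not the paper's actual KERAS implementation; the function names, the reset-to-zero convention, and the surrogate window width are assumptions made for clarity.

```python
import numpy as np

def lif_forward(inputs, leak=0.9, threshold=1.0):
    """Hypothetical discrete-time LIF update: the membrane potential
    decays by `leak` each step, accumulates the input current, and
    emits a spike (with reset to 0) once it crosses `threshold`."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x                     # leaky integration
        s = 1.0 if v >= threshold else 0.0   # non-differentiable spike step
        spikes.append(s)
        v = v * (1.0 - s)                    # reset membrane after a spike
    return spikes

def constant_surrogate_grad(v, threshold=1.0, width=0.5):
    """Constant surrogate gradient (an assumed form): during the
    backward pass the derivative of the spike step is replaced by a
    constant inside a window around the threshold, zero outside."""
    return 1.0 if abs(v - threshold) < width else 0.0
```

With `leak` close to 1, weak inputs accumulate over many time steps (signal integrator); with `leak` close to 0, the potential decays almost immediately, so only near-simultaneous inputs can drive the unit above threshold (coincidence detector). For example, `lif_forward([0.4] * 5, leak=1.0)` fires on the third step, while `lif_forward([0.4] * 5, leak=0.0)` never fires.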

Publisher

MIT Press - Journals

Subject

Cognitive Neuroscience; Arts and Humanities (miscellaneous)


Cited by 13 articles.

1. Word class representations spontaneously emerge in a deep neural network trained on next word prediction. 2023 International Conference on Machine Learning and Applications (ICMLA), 2023-12-15.

2. Coincidence detection and integration behavior in spiking neural networks. Cognitive Neurodynamics, 2023-12-13.

3. Predictive coding and stochastic resonance as fundamental principles of auditory phantom perception. Brain, 2023-07-28.

4. Leaky-Integrate-and-Fire Neuron-Like Long-Short-Term-Memory Units as Model System in Computational Biology. 2023 International Joint Conference on Neural Networks (IJCNN), 2023-06-18.

5. Adaptive ICA for Speech EEG Artifact Removal. 2023 5th International Conference on Bio-engineering for Smart Technologies (BioSMART), 2023-06-07.
