Synaptic Activity and Hardware Footprint of Spiking Neural Networks in Digital Neuromorphic Systems

Authors:

Edgar Lemaire¹, Benoît Miramond², Sébastien Bilavarn², Hadi Saoud³, Nassim Abderrahmane²

Affiliation:

1. Thales Research & Technology, France and LEAT, Université Côte d'Azur, France

2. LEAT, Université Côte d'Azur, France

3. Thales Research & Technology, France

Abstract

Spiking neural networks are expected to bring high resource, power, and energy efficiency to machine-learning hardware implementations. They could thus facilitate the integration of artificial intelligence into highly constrained embedded systems, such as image classification on drones or satellites. While their logic-resource efficiency is widely accepted in the literature, their energy efficiency remains debated. In this article, a novel high-level metric, the Synaptic Activity Ratio (SAR), is used to characterize the energy-efficiency gain expected when hardware implementations use Spiking Neural Networks (SNNs) instead of Formal Neural Networks (FNNs). The metric is applied to a selection of classification tasks involving images and 1D signals. Moreover, a high-level estimator of logic resources, power usage, execution time, and energy is introduced for neural-network hardware implementations on FPGA, based on four existing accelerator architectures covering both sequential and parallel implementation paradigms in both the spiking and formal coding domains. This estimator is used to evaluate how reliably the Synaptic Activity Ratio characterizes the energy-efficiency gain of spiking neural networks on the proposed dataset benchmark. The study concludes that the spiking domain offers significant power and energy savings in sequential implementations, and that synaptic activity is a critical factor to take into account when designing low-energy systems.
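The abstract does not give the exact definition of the Synaptic Activity Ratio, so the following is only an illustrative sketch of how such a metric could be computed. It assumes SAR compares the event-driven synaptic operations of an SNN (one accumulation per emitted spike per outgoing synapse) against the fixed multiply-accumulate count of an equivalent dense FNN; the topology and spike rates below are hypothetical.

```python
# Illustrative sketch only: the paper's exact SAR definition is not reproduced here.
# Assumption: SAR = (SNN synaptic accumulations) / (FNN multiply-accumulates)
# for one inference on the same fully connected topology.

def fnn_synaptic_ops(layer_sizes):
    """MAC count for one dense-FNN inference: every synapse is used exactly once."""
    return sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

def snn_synaptic_ops(layer_sizes, spikes_per_neuron):
    """Accumulation count for one SNN inference: each spike drives its fan-out."""
    total = 0.0
    for i, fan_out in enumerate(layer_sizes[1:]):
        # mean spikes emitted by layer i, each reaching fan_out synapses
        total += spikes_per_neuron[i] * layer_sizes[i] * fan_out
    return total

layers = [784, 128, 10]   # hypothetical MLP topology (e.g. MNIST-sized input)
spikes = [0.3, 0.5]       # hypothetical mean spikes per neuron per inference
sar = snn_synaptic_ops(layers, spikes) / fnn_synaptic_ops(layers)
print(f"SAR = {sar:.3f}")  # a value below 1 indicates fewer synaptic ops in the SNN
```

Under these assumptions, an SAR below 1 is what would make the event-driven SNN cheaper per inference than its formal counterpart, which is consistent with the abstract's claim that synaptic activity is a critical factor for low-energy systems.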

Funder

Thales Research & Technology (France), ANRT (Association Nationale de la Recherche et de la Technologie), and Université Côte d'Azur

Publisher

Association for Computing Machinery (ACM)

Subject

Hardware and Architecture; Software

References: 43 articles.


Cited by 4 articles.
