Darwin3: a large-scale neuromorphic chip with a novel ISA and on-chip learning

Authors:

Ma De (1,2,3,4), Jin Xiaofei (1,2), Sun Shichun (2), Li Yitao (1,3), Wu Xundong (2), Hu Youneng (1), Yang Fangchao (2), Tang Huajin (1,2,3,4), Zhu Xiaolei (5,2), Lin Peng (1,3,4), Pan Gang (1,2,3,4)

Affiliation:

1. College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China

2. Research Center for Intelligent Computing Hardware, Zhejiang Lab, Hangzhou 311121, China

3. The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou 310027, China

4. MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou 310027, China

5. College of Micro-Nano Electronics, Zhejiang University, Hangzhou 311200, China

Abstract

Spiking neural networks (SNNs) are attracting increasing attention for their biological plausibility and potential for improved computational efficiency. To match the high spatial-temporal dynamics of SNNs, neuromorphic chips that execute SNNs directly in hardware-based neuron and synapse circuits are highly desirable. This paper presents Darwin3, a large-scale neuromorphic chip with a novel instruction set architecture comprising 10 primary instructions and a few extended instructions, which supports flexible programming of neuron models and the design of local learning rules. The Darwin3 architecture is organized as a mesh of computing nodes with an innovative routing algorithm, and it uses a compression mechanism to represent synaptic connections, significantly reducing memory usage. Darwin3 supports up to 2.35 million neurons, making it the largest of its kind in terms of neuron scale. Experimental results show that code density is improved by up to 28.3× in Darwin3, and that neuron-core fan-in and fan-out are improved by up to 4096× and 3072×, respectively, by connection compression relative to the physical memory depth. Darwin3 also provides memory savings of 6.8× to 200.8× when mapping convolutional spiking neural networks onto the chip, and demonstrates state-of-the-art accuracy and latency compared with other neuromorphic chips.
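
The direction of the memory savings from connection compression can be illustrated with a back-of-the-envelope comparison between storing a convolutional layer as an explicit per-synapse list and storing only the shared kernel plus a connection-rule descriptor. This is a minimal sketch under assumed layer shapes, word sizes, and descriptor overhead; it does not reflect Darwin3's actual on-chip format, and the function names below are hypothetical.

```python
# Hypothetical illustration of why rule-based (compressed) connectivity
# saves memory for convolutional SNN layers, compared with storing every
# synapse explicitly. Layer shapes and word sizes are assumptions, not
# Darwin3's actual on-chip representation.

def explicit_synapse_bytes(in_ch, out_ch, h, w, k, bytes_per_synapse=4):
    """Memory if each synapse stores its own (weight + target) entry."""
    # Each output neuron receives k*k*in_ch synapses (border effects ignored).
    synapses = out_ch * h * w * k * k * in_ch
    return synapses * bytes_per_synapse

def compressed_conv_bytes(in_ch, out_ch, k, bytes_per_weight=2, rule_overhead=64):
    """Memory if the shared kernel is stored once plus a small rule descriptor."""
    weights = out_ch * in_ch * k * k
    return weights * bytes_per_weight + rule_overhead

if __name__ == "__main__":
    # Example: a 32x32 feature map, 16 -> 32 channels, 3x3 kernel.
    dense = explicit_synapse_bytes(in_ch=16, out_ch=32, h=32, w=32, k=3)
    packed = compressed_conv_bytes(in_ch=16, out_ch=32, k=3)
    print(f"explicit:   {dense:>12,} bytes")
    print(f"compressed: {packed:>12,} bytes")
    print(f"saving:     {dense / packed:,.1f}x")
```

This toy arithmetic only shows why shared, rule-described connectivity scales far better than enumerating synapses; the 6.8×–200.8× savings quoted in the abstract are measured on real network mappings, where per-neuron state, routing information, and boundary handling also consume memory.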

Funder

National Key Research and Development Program

National Natural Science Foundation of China

Key Research and Development Program of Zhejiang Province

Publisher

Oxford University Press (OUP)


Cited by 4 articles.

1. Direct training high-performance deep spiking neural networks: a review of theories and methods. Frontiers in Neuroscience, 2024-07-31.

2. NARS: Neuromorphic Acceleration through Register-Streaming Extensions on RISC-V Cores. Proceedings of the 21st ACM International Conference on Computing Frontiers: Workshops and Special Sessions, 2024-05-07.

3. Human brain computing and brain-inspired intelligence. National Science Review, 2024-04-03.

4. Advancements in Affective Disorder Detection: Using Multimodal Physiological Signals and Neuromorphic Computing Based on SNNs. IEEE Transactions on Computational Social Systems, 2024.
