Integration of Ag-CBRAM crossbars and Mott ReLU neurons for efficient implementation of deep neural networks in hardware
Published: 2023-08-29
Volume: 3
Issue: 3
Page: 034007
ISSN: 2634-4386
Container title: Neuromorphic Computing and Engineering
Short container title: Neuromorph. Comput. Eng.
Authors:
Yuhan Shi,
Sangheon Oh,
Jaeseoung Park,
Javier del Valle,
Pavel Salev,
Ivan K. Schuller,
Duygu Kuzum
Abstract
In-memory computing with emerging non-volatile memory devices (eNVMs) has shown promising results in accelerating matrix-vector multiplications. However, activation function calculations are still implemented with general-purpose processors or large, complex neuron peripheral circuits. Here, we present the integration of Ag-based conductive bridge random access memory (Ag-CBRAM) crossbar arrays with Mott rectified linear unit (ReLU) activation neurons for a scalable, energy- and area-efficient hardware (HW) implementation of deep neural networks. We develop Ag-CBRAM devices that achieve a high ON/OFF ratio and multi-level programmability. Compact and energy-efficient Mott ReLU neuron devices implementing the ReLU activation function are directly connected to the columns of the Ag-CBRAM crossbars to compute the output from the weighted-sum current. We implement convolution filters and activations for VGG-16 using our integrated HW and demonstrate the successful generation of feature maps for CIFAR-10 images in HW. Our approach paves a new way toward building highly compact and energy-efficient eNVM-based in-memory computing systems.
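To make the compute pattern described in the abstract concrete, below is a minimal NumPy sketch (not the authors' implementation): read voltages applied to crossbar rows produce weighted-sum column currents via Ohm's and Kirchhoff's laws, and a ReLU stage stands in for the Mott neuron at each column. The conductance window, voltage range, and differential-pair mapping for signed weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed conductance window (siemens); a ~100x ON/OFF ratio is an
# illustrative stand-in for the multi-level Ag-CBRAM devices.
G_MIN, G_MAX = 1e-6, 1e-4

def weights_to_conductances(w):
    """Map signed weights onto differential conductance pairs (G+, G-)."""
    w_abs_max = float(np.max(np.abs(w))) or 1.0    # avoid divide-by-zero for all-zero weights
    scale = (G_MAX - G_MIN) / w_abs_max
    g_pos = G_MIN + scale * np.clip(w, 0.0, None)  # positive part on the "+" column
    g_neg = G_MIN + scale * np.clip(-w, 0.0, None) # negative part on the "-" column
    return g_pos, g_neg

def crossbar_relu(x_volts, w):
    """Weighted-sum column currents (Ohm's + Kirchhoff's laws), then ReLU."""
    g_pos, g_neg = weights_to_conductances(w)
    i_cols = x_volts @ (g_pos - g_neg)   # each column sums I = sum_i V_i * G_i
    return np.maximum(i_cols, 0.0)       # ideal ReLU standing in for the Mott neuron

# Example: four 3x3 convolution filters unrolled onto 9-row crossbar columns.
rng = np.random.default_rng(0)
w = rng.normal(size=(9, 4))              # 9 unrolled pixels x 4 filters
x = rng.uniform(0.0, 0.2, size=9)        # input patch encoded as read voltages
print(crossbar_relu(x, w))               # rectified column currents (amperes)
```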
Funder
U.S. Department of Energy
National Institutes of Health
National Science Foundation
Office of Naval Research Global
Subject
Psychiatry and Mental Health; Neuropsychology and Physiological Psychology
Cited by: 1 article.