Design of Synaptic Driving Circuit for TFT eFlash-Based Processing-In-Memory Hardware Using Hybrid Bonding
Published: 2023-01-29
Volume: 12, Issue: 3, Page: 678
ISSN: 2079-9292
Container title: Electronics
Language: en
Author:
Kim Younghee 1, Jin Hongzhou 1, Kim Dohoon 1, Ha Panbong 1, Park Min-Kyu 2, Hwang Joon 2, Lee Jongho 2, Woo Jeong-Min 3, Choi Jiyeon 4, Lee Changhyuk 4, Kwak Joon Young 4, Son Hyunwoo 3
Affiliation:
1. Department of Electronic Engineering, Changwon National University, Changwon 51140, Republic of Korea
2. Department of Electrical and Computer Engineering, Seoul National University, Seoul 08826, Republic of Korea
3. Department of Electronic Engineering, Engineering Research Institute (ERI), Gyeongsang National University, Jinju 52828, Republic of Korea
4. Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
Abstract
This paper presents a synaptic driving circuit design for processing-in-memory (PIM) hardware based on a thin-film transistor (TFT) embedded flash (eFlash) for binary/ternary-weight neural networks (NNs). An eFlash-based synaptic cell capable of programming negative weight values to store binary/ternary weights (i.e., ±1, 0), together with synaptic driving circuits for the erase, program, and read operations of the synaptic array, is proposed. The proposed synaptic driving circuits improve the computational accuracy of the PIM operation by precisely programming the sensing current of the eFlash synaptic cell to the target current (50 nA ± 0.5 nA) using a pulse train. In addition, during PIM operation, a pulse-width modulation (PWM) conversion circuit converts 8-bit input data into a single continuous PWM pulse to minimize non-linearity in the synaptic-sensing-current integration step of the neuron circuit. A prototype chip including the proposed synaptic driving circuit, PWM conversion circuit, neuron circuit, and digital blocks is designed and laid out as an accelerator for a binary/ternary-weight NN of size 324 × 80 × 10 in a 0.35 μm CMOS process. Hybrid bonding, combining bump bonding and wire bonding, is used to package the designed CMOS accelerator die and the TFT eFlash-based synapse array dies into a single chip package.
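The two mechanisms highlighted in the abstract can be illustrated with a rough behavioral sketch (this is not the authors' circuit): (a) program-and-verify with a pulse train that steps a cell's sensing current toward the 50 nA ± 0.5 nA target, and (b) mapping an 8-bit input to a proportional PWM pulse width, so that integrating a fixed synaptic current over that width yields a charge linear in the input. The linear cell model, step size, and full-scale pulse width below are all illustrative assumptions.

```python
# Behavioral sketch (illustrative only): pulse-train program-and-verify and
# 8-bit -> PWM conversion. The linear cell model and all constants here are
# assumptions, not the paper's device physics.

TARGET_NA = 50.0      # target sensing current (nA), from the abstract
TOLERANCE_NA = 0.5    # allowed deviation (nA), from the abstract

def program_with_pulse_train(cell_current_na, delta_per_pulse_na=0.2,
                             max_pulses=1000):
    """Apply program pulses until the sensed current is within tolerance.

    Assumes each pulse moves the cell current by a small fixed step toward
    the target (a crude stand-in for incremental program/erase pulses
    interleaved with verify reads).
    """
    pulses = 0
    while abs(cell_current_na - TARGET_NA) > TOLERANCE_NA and pulses < max_pulses:
        step = delta_per_pulse_na if cell_current_na < TARGET_NA else -delta_per_pulse_na
        cell_current_na += step
        pulses += 1
    return cell_current_na, pulses

def pwm_width_ns(x, full_scale_ns=2560.0):
    """Map an 8-bit input (0..255) to one continuous pulse width in ns.

    Integrating a fixed synaptic current over this single pulse gives a
    charge proportional to the input, keeping the neuron's integration
    step linear (the motivation stated in the abstract).
    """
    assert 0 <= x <= 255, "input must be 8-bit"
    return full_scale_ns * x / 255.0

if __name__ == "__main__":
    i_final, n = program_with_pulse_train(42.0)
    print(f"programmed to {i_final:.2f} nA in {n} pulses")
    print(f"input 128 -> {pwm_width_ns(128):.1f} ns pulse")
```

A real program-and-verify loop would adapt the pulse amplitude or width per iteration rather than use a fixed step, but the fixed-step version keeps the converge-within-tolerance idea visible.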
Funder
Institute of Information & Communications Technology Planning & Evaluation
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
Cited by: 1 article.