Affiliation:
1. School of Electronic Science and Engineering, Southeast University, Nanjing 210000, China
Abstract
With the development of artificial intelligence, the separation of memory and processor in the traditional von Neumann architecture has made the data-transfer bottleneck a major obstacle to energy-efficient computing. The computing-in-memory (CIM) paradigm is expected to alleviate the memory-wall and power-wall problems. In this work, we propose a time-domain (TD) computing scheme based on spin-transfer-torque magnetic random access memory (STT-MRAM). Basic Boolean logic operations, such as AND, OR, and full adder (FA), are implemented by converting the bit-line voltage into a time delay and digitizing it with a time-to-digital converter (TDC). The proposal is simulated using a 28 nm CMOS process and a 40 nm MTJ compact model. Monte Carlo simulations show that a computation accuracy of 94.2% to 100% can be obtained, and the delays of the AND/OR and FA operations are 2.5 ns and 3.5 ns, respectively. The energy consumption of AND/OR and FA is 59.43 fJ and 97.56 fJ, respectively.
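To illustrate the general idea behind such a TD scheme, the sketch below is a minimal behavioral model, not the paper's circuit: it assumes an RC-discharging bit line shared by two activated MTJ cells, where the equivalent resistance (parallel vs. anti-parallel states) sets the delay to cross a reference voltage, and two delay thresholds stand in for the TDC levels that distinguish AND and OR results. All device values (R_P, R_AP, C_BL, V_DD, V_REF) and the helper names are hypothetical.

```python
# Behavioral sketch of time-domain (TD) in-memory AND/OR on an STT-MRAM bit line.
# Assumptions: two cells discharge one bit line; delay to reach V_REF encodes data.
# Values are illustrative, not the 28 nm CMOS / 40 nm MTJ parameters of the paper.
import math

R_P, R_AP = 3e3, 6e3          # assumed MTJ resistances: parallel = 1, anti-parallel = 0 (ohms)
C_BL = 20e-15                 # assumed bit-line capacitance (F)
V_DD, V_REF = 1.0, 0.5        # assumed supply and comparator reference (V)

def cell_resistance(bit: int) -> float:
    """Map a stored bit to an MTJ resistance state."""
    return R_P if bit else R_AP

def bitline_delay(bits) -> float:
    """Delay for the bit line to discharge to V_REF through the
    parallel combination of the activated cells (simple RC model)."""
    conductance = sum(1.0 / cell_resistance(b) for b in bits)
    r_eq = 1.0 / conductance
    return r_eq * C_BL * math.log(V_DD / V_REF)

def td_and_or(a: int, b: int):
    """Digitize the delay with two thresholds (a stand-in for TDC levels):
    the shortest delay occurs only when both cells are low-resistance (AND),
    a short-or-medium delay when at least one is (OR)."""
    t = bitline_delay([a, b])
    t11 = bitline_delay([1, 1])                       # both parallel (fastest)
    t10 = bitline_delay([1, 0])                       # one parallel, one anti-parallel
    t00 = bitline_delay([0, 0])                       # both anti-parallel (slowest)
    and_out = int(t <= (t11 + t10) / 2)               # threshold between the 11 and 10/01 cases
    or_out  = int(t <= (t10 + t00) / 2)               # threshold between the 10/01 and 00 cases
    return and_out, or_out

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b} -> AND,OR = {td_and_or(a, b)}")
```

In the same spirit, a full adder would require additional TDC levels (or a third activated operand) so that the sum and carry can be resolved from finer delay intervals; the paper's reported 3.5 ns FA delay reflects that extra resolution step.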
Funder
National Natural Science Foundation of China
Subject
General Physics and Astronomy
Cited by
1 article.