FinFET 6T-SRAM All-Digital Compute-in-Memory for Artificial Intelligence Applications: An Overview and Analysis

Author:

Waqas Gul 1, Maitham Shams 1, Dhamin Al-Khalili 1

Affiliation:

1. Department of Electronics, Carleton University, 1125 Colonel By Drive, Ottawa, ON K1S 5B6, Canada

Abstract

Artificial intelligence (AI) has revolutionized present-day life through automation and independent decision-making capabilities. For AI hardware implementations, the 6T-SRAM cell is a suitable candidate due to its performance edge over its counterparts. However, modern AI hardware such as neural networks (NNs) frequently accesses off-chip data, degrading overall system performance. Compute-in-memory (CIM) reduces off-chip data access transactions. One CIM approach operates in the mixed-signal domain, but it suffers from limited bit precision and signal-margin issues. An alternative emerging approach uses the all-digital signal domain, which provides better signal margins and bit precision, although at the expense of hardware overhead. We have analyzed silicon-verified, all-digital 6T-SRAM CIM solutions, classifying them as SRAM-based accelerators, i.e., near-memory computing (NMC), and custom SRAM-based CIM, i.e., in-memory computing (IMC). We have focused on multiply-and-accumulate (MAC) as the most frequent operation in convolutional neural networks (CNNs) and compared state-of-the-art implementations. Neural networks with low weight precision, i.e., <12 b, show lower accuracy but higher power efficiency, while an input precision of 8 b satisfies implementation requirements. The maximum reported performance is 7.49 TOPS at 330 MHz, whereas custom SRAM-based implementations have shown a maximum of 5.6 GOPS at 100 MHz. The second part of this article analyzes the FinFET 6T-SRAM cell as one of the critical components in determining the overall performance of an AI computing system. We have investigated the FinFET 6T-SRAM cell's performance and limitations as dictated by FinFET technology-specific parameters, such as sizing, threshold voltage (Vth), supply voltage (VDD), and process and environmental variations. The HD FinFET 6T-SRAM cell shows 32% lower read access time and 1.09 times lower leakage power compared with the HC cell configuration. The minimum achievable supply voltage is 600 mV without any read- or write-assist scheme for all cell configurations, while temperature variations cause noise-margin deviations of up to 22% of the nominal values.
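For readers unfamiliar with the MAC operation the abstract refers to, the following minimal Python sketch shows the multiply-and-accumulate kernel that dominates CNN inference. The 8-bit operand width, 3x3 kernel size, and example values are illustrative assumptions and are not taken from the surveyed designs.

def mac(inputs, weights):
    """Accumulate products of 8-bit activations and 8-bit signed weights."""
    acc = 0  # wide accumulator (e.g., 32 bits in hardware) to avoid overflow
    for x, w in zip(inputs, weights):
        assert 0 <= x < 256 and -128 <= w < 128  # 8-bit operands (assumed precision)
        acc += x * w
    return acc

# Example: one output value of a 3x3 convolution, flattened to length-9 vectors.
activations = [12, 0, 255, 7, 98, 3, 64, 1, 30]
kernel      = [1, -2, 3, 0, 5, -1, 2, 4, -3]
print(mac(activations, kernel))  # single MAC-reduced partial sum

In a CIM macro, the products and their reduction are computed inside or next to the SRAM array rather than in a separate datapath, which is what reduces the off-chip and on-chip data movement discussed above.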

Funder

CURIE Fund, administered under MacOdrum Library

Publisher

MDPI AG

Subject

Electrical and Electronic Engineering, Mechanical Engineering, Control and Systems Engineering


Cited by 1 article.

1. Seepage Power Aware SBVL Based FinFET Design for SRAM Construction; 2023 International Conference on Ambient Intelligence, Knowledge Informatics and Industrial Electronics (AIKIIE); 2023-11-02
