Bulk‐Switching Memristor‐Based Compute‐In‐Memory Module for Deep Neural Network Training

Authors:

Wu Yuting1, Wang Qiwen1, Wang Ziyu1, Wang Xinxin1, Ayyagari Buvna2, Krishnan Siddarth2, Chudzik Michael2, Lu Wei D.1

Affiliation:

1. Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA

2. Applied Materials Inc., Santa Clara, CA 95054, USA

Abstract

The constant drive to achieve higher performance in deep neural networks (DNNs) has led to the proliferation of very large models. Model training, however, requires intensive computation time and energy. Memristor‐based compute‐in‐memory (CIM) modules can perform vector‐matrix multiplication (VMM) in place and in parallel, and have shown great promise in DNN inference applications. However, CIM‐based model training faces challenges due to non‐linear weight updates, device variations, and low precision. In this work, a mixed‐precision training scheme is experimentally implemented to mitigate these effects using a bulk‐switching memristor‐based CIM module. Low‐precision CIM modules are used to accelerate the expensive VMM operations, while high‐precision weight updates are accumulated in digital units. Memristor devices are only changed when the accumulated weight update value exceeds a pre‐defined threshold. The proposed scheme is implemented with a system‐on‐chip of fully integrated analog CIM modules and digital sub‐systems, showing fast convergence of LeNet training to 97.73% accuracy. The efficacy of training larger models is evaluated using realistic hardware parameters, verifying that CIM modules can enable efficient mixed‐precision DNN training with accuracy comparable to full‐precision software‐trained models. Additionally, models trained on chip are inherently robust to hardware variations, allowing direct mapping to CIM inference chips without additional re‐training.
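The threshold-gated update rule described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function name, learning rate, and threshold value are assumptions, and the coarse per-device programming step is simplified to a single fixed increment.

```python
import numpy as np

def mixed_precision_update(weights, grads, accumulator, lr=0.01, threshold=0.05):
    """Sketch of threshold-gated mixed-precision training.

    High-precision updates are accumulated in a digital buffer;
    a (simulated) memristor weight is only reprogrammed once its
    accumulated update crosses a pre-defined threshold.
    """
    # High-precision gradient accumulation in the digital unit
    accumulator += -lr * grads
    # Only devices whose accumulated update exceeds the threshold are programmed
    mask = np.abs(accumulator) >= threshold
    # Coarse device update: one threshold-sized step in the accumulated direction
    weights[mask] += np.sign(accumulator[mask]) * threshold
    # Subtract the applied portion; the remainder stays in the digital buffer
    accumulator[mask] -= np.sign(accumulator[mask]) * threshold
    return weights, accumulator
```

Small gradients thus never touch the devices, which limits the number of noisy, non-linear programming events while the digital accumulator preserves their cumulative effect.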

Funder

National Science Foundation

Publisher

Wiley

Subject

Mechanical Engineering, Mechanics of Materials, General Materials Science


Cited by 2 articles.

1. Memristor‐Based Neuromorphic Chips;Advanced Materials;2024-01-02

2. HyDe: A Hybrid PCM/FeFET/SRAM Device-Search for Optimizing Area and Energy-Efficiencies in Analog IMC Platforms;IEEE Journal on Emerging and Selected Topics in Circuits and Systems;2023-12
