Abstract
Recent advances in quantized neural networks (QNNs) have paved the way for energy-efficient hardware architectures for machine learning tasks. Binary and ternary QNNs are suitable for image classification and recognition applications on highly resource-constrained hardware. Because of their low precision, binary neural networks suffer a significant accuracy loss on dense networks and large datasets. This issue can be addressed by ternary neural networks (TNNs), which offer higher weight precision and better resource utilization. TNN implementation using conventional complementary metal-oxide-semiconductor and memristive devices shows limited improvement in area and energy efficiency. Spintronics-based magnetic random access memory (MRAM) devices are the most prominent choice among the various non-volatile memories for neural networks. This work presents the implementation of differential spin Hall effect (DSHE) MRAM-based two- and three-input ternary computation units (TCUs) for TNNs. Furthermore, a multilayer perceptron architecture with a synaptic crossbar array using the proposed TCU is implemented for Modified National Institute of Standards and Technology (MNIST) data classification. The results show that the DSHE-based TCU is 30% more energy efficient than a spin-transfer torque (STT)-MRAM-based design. Furthermore, the DSHE-MRAM-based TNN improves energy and area by 82% and 9%, respectively, compared to an STT-based TNN.
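As background for the ternary computation described above, a minimal sketch of ternary weight quantization is shown below: real-valued weights are mapped to the set {-1, 0, +1}. The thresholding scheme and the `threshold` value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ternarize(weights, threshold=0.05):
    """Quantize real-valued weights to {-1, 0, +1}.

    Magnitudes below `threshold` map to 0; the remaining weights
    keep only their sign. `threshold` is a hypothetical
    hyperparameter chosen for illustration.
    """
    w = np.asarray(weights, dtype=float)
    q = np.sign(w)
    q[np.abs(w) < threshold] = 0
    return q.astype(int)

# Example: a small synaptic weight vector
print(ternarize([0.8, -0.02, -0.6, 0.01]))  # -> [ 1  0 -1  0]
```

In a hardware TNN, each such ternary weight would correspond to one cell state in the synaptic crossbar array.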
Subject
Materials Chemistry; Electrical and Electronic Engineering; Condensed Matter Physics; Electronic, Optical and Magnetic Materials
Cited by
4 articles.