Affiliation:
1. Xi'an Microelectronics Technology Institute
2. Xidian University
Abstract
In-memory computing accelerators for deep neural networks still rely on field-programmable gate array (FPGA) assistance for non-convolutional computation. To address this issue, this study proposes a general-purpose hybrid static random-access memory (SRAM) in-memory computing (IMC) design that combines transposed 8T and 10T cells with vector-based, bit-serial in-memory arithmetic to support multiply-accumulate (MAC) operations on integer and fractional operands, both positive and negative, at various bit widths. This flexibility and programmability accommodate a range of software algorithms, from neural networks to signal processing, and reduce data transfer between the IMC array and the FPGA. The proposed design achieves an energy efficiency of 21.39 TOPS/W at 1.2 V and 500 MHz. By supporting flexible bit-width operations, the design broadens the range of deep learning workloads that IMC can serve and paves the way for more efficient computing systems.
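To illustrate the kind of bit-serial, signed MAC operation the abstract refers to, the following is a minimal Python sketch. It assumes 8-bit two's-complement weights processed one bit-plane at a time (as an SRAM column cycle would), with the most significant bit-plane carrying a negative weight; the function name `bit_serial_mac` and the use of NumPy are illustrative choices, not part of the paper's design.

```python
import numpy as np

def bit_serial_mac(weights, activations, bits=8):
    """Compute dot(weights, activations) one weight bit-plane at a time,
    mimicking bit-serial accumulation of signed (two's-complement) weights."""
    w = np.asarray(weights, dtype=np.int64)
    x = np.asarray(activations, dtype=np.int64)
    # Reinterpret signed weights as their unsigned two's-complement bit pattern.
    pattern = w & ((1 << bits) - 1)
    acc = 0
    for b in range(bits):
        plane = (pattern >> b) & 1           # one bit-plane, read in a single cycle
        partial = int(np.dot(plane, x))      # column-wise sum for this bit-plane
        # The MSB plane is weighted negatively under two's complement.
        weight = -(1 << b) if b == bits - 1 else (1 << b)
        acc += weight * partial
    return acc

# Example: 3*2 + (-5)*4 + 7*1 = -7
print(bit_serial_mac([3, -5, 7], [2, 4, 1]))
```

Accumulating shifted bit-plane partial sums in this way is what allows a single array to serve different weight bit widths: the loop bound, not the hardware datapath, determines the precision.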
Publisher
Research Square Platform LLC