An XOR-10T SRAM computing-in-memory macro with current MAC operations and time-to-digital conversion for BNN edge processors
Published: 2024-07
Volume: 182
Page: 155346
ISSN: 1434-8411
Container-title: AEU - International Journal of Electronics and Communications
Language: en
Authors:
Liu Yulan,
Zhao Ruiyong,
Xiao Han,
Liu Yuanzhen,
Chen Jing
References (39 articles):
1. Chen Y-H, Krishna T, Emer J, Sze V. 14.5 Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. In: 2016 IEEE international solid-state circuits conference. ISSCC, 2016, p. 262–3. http://dx.doi.org/10.1109/ISSCC.2016.7418007.
2. Lee, et al. UNPU: An energy-efficient deep neural network accelerator with fully variable weight bit precision. IEEE J Solid-State Circuits, 2019.
3. Yin, et al. An energy-efficient reconfigurable processor for binary- and ternary-weight neural networks with flexible data bit width. IEEE J Solid-State Circuits, 2019.
4. Khwa W-S, Chen J-J, Li J-F, Si X, Yang E-Y, Sun X, et al. A 65nm 4Kb algorithm-dependent computing-in-memory SRAM unit-macro with 2.3ns and 55.8TOPS/W fully parallel product-sum operation for binary DNN edge processors. In: 2018 IEEE international solid-state circuits conference. ISSCC, 2018, p. 496–8. http://dx.doi.org/10.1109/ISSCC.2018.8310401.
5. Kim J, Koo J, Kim T, Kim Y, Kim H, Yoo S, et al. Area-Efficient and Variation-Tolerant In-Memory BNN Computing using 6T SRAM Array. In: 2019 symposium on VLSI circuits. 2019, p. C118–9. http://dx.doi.org/10.23919/VLSIC.2019.8778160.