VerSA: Versatile Systolic Array Architecture for Sparse and Dense Matrix Multiplications
Published: 2024-04-15
Issue: 8
Volume: 13
Page: 1500
ISSN: 2079-9292
Container-title: Electronics
Language: en
Authors:
Seo Juwon 1, Kong Joonho 1
Affiliation:
1. School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
Abstract
Matrix multiplication is a key component of modern deep neural network (DNN) applications. As DNN applications become more diverse, both dense and sparse matrix multiplications need to be accelerated in hardware. However, most hardware accelerators are designed to accelerate only one of the two. In this paper, we propose VerSA, a versatile systolic array architecture for both dense and sparse matrix multiplications. VerSA employs intermediate paths and SRAM buffers between the rows of the systolic array (SA), enabling early termination in sparse matrix multiplication with negligible performance overhead when running dense matrix multiplication. When running sparse matrix multiplication, a 256 × 256 VerSA improves performance (i.e., the inverse of execution time) by 1.21×–1.60× and reduces energy by 7.5–30.2% compared to the conventional SA. When running dense matrix multiplication, VerSA incurs only a 0.52% performance overhead compared to the conventional SA.
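To make the early-termination idea concrete, the following is a minimal Python sketch of a purely illustrative cycle model, not the authors' RTL or a cycle-accurate simulator. The function names, the fill/drain cycle formula, and the structured-sparsity assumption (whole zero columns in the sparse operand) are all assumptions introduced for illustration; the paper's actual mechanism uses intermediate paths and SRAM buffers between SA rows, which this model only approximates by skipping reduction steps that contribute nothing.

```python
# Hypothetical sketch: cycles of an output-stationary systolic array with and
# without early termination on an operand with zeroed columns. Illustrative
# only; not the VerSA hardware model.

import numpy as np


def dense_sa_cycles(M: int, K: int, N: int) -> int:
    """Illustrative cycle model of a conventional output-stationary SA:
    all K reduction steps are streamed, plus pipeline fill/drain."""
    return K + M + N - 2


def early_term_cycles(A: np.ndarray, N: int) -> int:
    """Same model, but reduction steps whose column of A is entirely zero
    are skipped, as if bypass paths forward partial sums past idle rows."""
    M, K = A.shape
    useful_k = int(np.count_nonzero(np.any(A != 0, axis=0)))
    return useful_k + M + N - 2


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = K = N = 256
    A = rng.standard_normal((M, K))
    # Structured sparsity: zero out ~40% of A's columns (e.g., pruned channels).
    zero_cols = rng.choice(K, size=int(0.4 * K), replace=False)
    A[:, zero_cols] = 0.0

    base = dense_sa_cycles(M, K, N)
    fast = early_term_cycles(A, N)
    print(f"baseline: {base} cycles, early termination: {fast} cycles, "
          f"speedup: {base / fast:.2f}x")  # the product itself is unchanged
```

Under this toy model, the speedup depends only on how many reduction steps can be skipped; the paper's reported 1.21×–1.60× gains come from the actual hardware mechanism, which this sketch does not capture.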
Funder
National Research Foundation of Korea; Samsung Electronics Co., Ltd.; IC Design Education Center