Affiliation:
1. DGIST, Daegu, Republic of Korea
2. DGIST, Daegu, Republic of Korea
3. UC Irvine, Irvine, United States
Abstract
Hyperdimensional computing (HDC) is a computing paradigm inspired by the mechanisms of human memory, characterizing data through high-dimensional vector representations known as hypervectors. Recent advancements in HDC have explored its potential as a learning model, leveraging its straightforward arithmetic and high efficiency. Traditional HDC frameworks are hampered by two static elements: randomly generated encoders and fixed learning rates. These static components significantly limit model adaptability and accuracy. The static, randomly generated encoders, while ensuring high-dimensional representation, fail to adapt to evolving data relationships, constraining the model's ability to capture and learn from complex patterns. Similarly, a fixed learning rate does not account for the varying needs of the training process over time, hindering efficient convergence and optimal performance. This article introduces TrainableHD, a novel HDC framework that enables dynamic training of the randomly generated encoder based on feedback from the learning data, thereby addressing the static nature of conventional HDC encoders. TrainableHD also enhances training performance by incorporating adaptive optimizer algorithms when learning the hypervectors. We further refine TrainableHD with effective quantization to enhance efficiency, allowing the inference phase to execute on low-precision accelerators. Our evaluations demonstrate that TrainableHD significantly improves HDC accuracy by up to 27.99% (averaging 7.02%) without additional computational costs during inference, achieving a performance level comparable to state-of-the-art deep learning models. Furthermore, TrainableHD is optimized for execution speed and energy efficiency: compared to deep learning on a low-power GPU platform such as the NVIDIA Jetson Xavier, TrainableHD is 56.4 times faster and 73 times more energy efficient. This efficiency is further augmented through Encoder Interval Training (EIT) and adaptive optimizer algorithms, which enhance the training process without compromising the model's accuracy.
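To make the mechanism concrete, the following minimal PyTorch sketch illustrates the idea the abstract describes: a randomly initialized HDC encoder that remains trainable, class hypervectors learned with an adaptive optimizer, and encoder updates applied only at intervals (EIT). It is an assumption-laden illustration, not the authors' implementation: the names `TrainableEncoder`, `HDClassifier`, and `eit_interval`, the cosine encoding, the choice of Adam, and the gradient-accumulation reading of EIT are all placeholders for the paper's actual design.

```python
# Minimal sketch (not the authors' released code) of a TrainableHD-style
# pipeline: the random encoder is trainable, class hypervectors are learned
# with an adaptive optimizer, and EIT updates the encoder only at intervals.
import torch
import torch.nn as nn


class TrainableEncoder(nn.Module):
    """Projects features into HD space; initialized randomly as in classic
    HDC, but left trainable so feedback from the loss can refine it."""

    def __init__(self, in_features: int, hd_dim: int = 10000):
        super().__init__()
        self.proj = nn.Linear(in_features, hd_dim, bias=False)
        nn.init.normal_(self.proj.weight)  # conventional random projection

    def forward(self, x):
        return torch.cos(self.proj(x))  # nonlinear map to hypervectors


class HDClassifier(nn.Module):
    """One class hypervector per class; scores inputs by a dot product
    against each class hypervector."""

    def __init__(self, in_features: int, num_classes: int, hd_dim: int = 10000):
        super().__init__()
        self.encoder = TrainableEncoder(in_features, hd_dim)
        self.class_hvs = nn.Linear(hd_dim, num_classes, bias=False)

    def forward(self, x):
        return self.class_hvs(self.encoder(x))


def train(model, loader, epochs: int = 10, eit_interval: int = 5):
    # Adaptive optimizers in place of a fixed learning rate (Adam is an
    # assumption; the paper evaluates adaptive optimizer algorithms).
    opt_hvs = torch.optim.Adam(model.class_hvs.parameters(), lr=1e-3)
    opt_enc = torch.optim.Adam(model.encoder.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    step = 0
    for _ in range(epochs):
        for x, y in loader:
            opt_hvs.zero_grad()  # encoder grads keep accumulating between EIT steps
            loss_fn(model(x), y).backward()
            opt_hvs.step()  # class hypervectors update every step
            step += 1
            if step % eit_interval == 0:  # EIT: encoder updates only at intervals
                opt_enc.step()
                opt_enc.zero_grad()
```

After training, the learned encoder and class hypervectors could be quantized (for example, to low-bit integers) so that inference runs on low-precision accelerators, as the abstract notes; the quantization scheme itself is not shown in this sketch.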
Funder
National Research Foundation of Korea
Institute of Information & Communications Technology Planning & Evaluation
National Science Foundation
Semiconductor Research Corporation
Air Force Office of Scientific Research
Publisher
Association for Computing Machinery (ACM)
Cited by
1 article.
1. All You Need is Unary: End-to-End Unary Bit-stream Processing in Hyperdimensional Computing. In Proceedings of the 29th ACM/IEEE International Symposium on Low Power Electronics and Design, 2024-08-05.