Advancing Hyperdimensional Computing Based on Trainable Encoding and Adaptive Training for Efficient and Accurate Learning

Author:

Jiseung Kim¹, Hyunsei Lee², Mohsen Imani³, Yeseong Kim²

Affiliation:

1. DGIST, Daegu, Republic of Korea

2. DGIST, Daegu, Republic of Korea

3. UC Irvine, Irvine, United States

Abstract

Hyperdimensional computing (HDC) is a computing paradigm inspired by the mechanisms of human memory, representing data as high-dimensional vectors known as hypervectors. Recent work has explored HDC as a learning model, leveraging its straightforward arithmetic and high efficiency. Traditional HDC frameworks, however, are hampered by two static elements: randomly generated encoders and fixed learning rates, which significantly limit model adaptability and accuracy. A static, randomly generated encoder, while ensuring a high-dimensional representation, cannot adapt to evolving data relationships, constraining the model's ability to accurately capture and learn complex patterns. Similarly, a fixed learning rate does not account for the varying needs of the training process over time, hindering efficient convergence and optimal performance. This article introduces TrainableHD, a novel HDC framework that dynamically trains the randomly generated encoder based on feedback from the learning data, thereby addressing the static nature of conventional HDC encoders. TrainableHD also enhances training performance by incorporating adaptive optimizer algorithms when learning the hypervectors. We further refine TrainableHD with effective quantization to enhance efficiency, allowing the inference phase to run on low-precision accelerators. Our evaluations demonstrate that TrainableHD improves HDC accuracy by up to 27.99% (7.02% on average) without additional computational cost during inference, achieving a performance level comparable to state-of-the-art deep learning models. Furthermore, TrainableHD is optimized for execution speed and energy efficiency: compared to deep learning on a low-power GPU platform such as the NVIDIA Jetson Xavier, it is 56.4 times faster and 73 times more energy-efficient. This efficiency is further augmented by Encoder Interval Training (EIT) and the adaptive optimizer algorithms, which enhance the training process without compromising the model's accuracy.
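The abstract's core idea — a random projection encoder that is itself updated from classification feedback, instead of staying frozen — can be illustrated with a small toy sketch. This is not the authors' implementation: the synthetic data, dimensions, learning rates, tanh encoding, and perceptron-style update rule are all assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes in 4 features (synthetic, for illustration).
n, d, D = 200, 4, 1024          # samples, input features, hypervector dimension
X = np.vstack([rng.normal(-1, 1, (n // 2, d)), rng.normal(+1, 1, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

E = rng.normal(0, 1, (d, D))    # encoder: starts random, as in classic HDC
C = np.zeros((2, D))            # one class hypervector per class

def encode(X, E):
    # Differentiable nonlinearity so the encoder can receive gradient feedback.
    return np.tanh(X @ E)

lr_C, lr_E = 0.05, 1e-3
for epoch in range(20):
    H = encode(X, E)
    pred = (H @ C.T).argmax(axis=1)           # similarity to each class vector
    for i in np.where(pred != y)[0]:          # perceptron-style HDC update
        C[y[i]] += lr_C * H[i]                # pull toward the correct class
        C[pred[i]] -= lr_C * H[i]             # push away from the wrong class
        # The "trainable encoder" part: feedback also flows into E.
        # d(score_true - score_pred)/dE = outer(x_i, (C[y]-C[pred]) * (1 - H_i^2))
        g = (C[y[i]] - C[pred[i]]) * (1.0 - H[i] ** 2)
        E += lr_E * np.outer(X[i], g)

acc = ((encode(X, E) @ C.T).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In a conventional HDC pipeline only `C` would be updated and `E` would remain the fixed random matrix it was initialized as; the extra gradient step on `E` is what makes the encoder "trainable" in the sense the abstract describes. The paper's adaptive optimizers, EIT schedule, and quantization are not modeled here.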

Funder

National Research Foundation of Korea

Institute of Information & Communications Technology Planning & Evaluation

National Science Foundation

Semiconductor Research Corporation

Air Force Office of Scientific Research

Publisher

Association for Computing Machinery (ACM)


Cited by 1 article.

1. All You Need is Unary: End-to-End Unary Bit-stream Processing in Hyperdimensional Computing. In Proceedings of the 29th ACM/IEEE International Symposium on Low Power Electronics and Design, 2024-08-05.
