Authors
Alejandro Hernández-Cano, Yang Ni, Zhuowen Zou, Ali Zakeri, Mohsen Imani
Abstract
Introduction
Brain-inspired computing has become an emerging field in which a growing number of works develop algorithms that bring machine learning closer to the human brain at the functional level. As one promising direction, Hyperdimensional Computing (HDC) is centered on holographic, high-dimensional representations analogous to neural activity in the brain. Such representations are the fundamental enabler of HDC's efficiency and robustness. However, existing HDC-based algorithms suffer from limitations in the encoder: to some extent, they all rely on manually selected encoders, so the resulting representation is never adapted to the task at hand.
Methods
In this paper, we propose FLASH, a novel hyperdimensional learning method that incorporates an adaptive and learnable encoder design, aiming at better overall learning performance while maintaining the desirable properties of HDC representations. Current HDC encoders leverage Random Fourier Features (RFF) for kernel correspondence, enabling locality-preserving encoding. We propose to learn the encoder matrix distribution via gradient descent, effectively adapting the kernel to yield a more suitable HDC encoding.
Results
Our experiments on various regression datasets show that tuning the HDC encoder significantly boosts accuracy, surpassing the current HDC-based algorithms and providing faster inference than other baselines, including RFF-based kernel ridge regression.
Discussion
These results highlight the importance of an adaptive encoder and customized high-dimensional representations in HDC.
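The RFF-based HDC encoding the Methods section refers to can be sketched as follows. This is a generic Random Fourier Features map, not the authors' FLASH implementation; all dimensions, variable names, and the choice of a Gaussian projection matrix here are illustrative assumptions. In FLASH the distribution of the encoder matrix `W` would additionally be tuned by gradient descent rather than fixed.

```python
import numpy as np

def rff_encode(X, W, b):
    """Map inputs to high-dimensional hypervectors via Random Fourier
    Features: phi(x) = cos(W @ x + b). With W drawn from a Gaussian,
    inner products of these hypervectors approximate an RBF kernel,
    giving a locality-preserving encoding."""
    return np.cos(X @ W.T + b)

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 input features, 1000-dimensional hypervectors.
d_in, d_hv = 3, 1000
W = rng.normal(scale=1.0, size=(d_hv, d_in))      # encoder matrix
b = rng.uniform(0.0, 2.0 * np.pi, size=d_hv)      # random phase shifts

X = rng.normal(size=(5, d_in))                    # 5 toy input samples
H = rff_encode(X, W, b)                           # hypervectors, one per sample
print(H.shape)                                    # (5, 1000)
```

Each row of `H` is a dense hypervector whose entries lie in [-1, 1]; downstream HDC regression would operate on these rows instead of the raw inputs.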
Funder
Defense Advanced Research Projects Agency
National Science Foundation
Semiconductor Research Corporation
Office of Naval Research
Air Force Office of Scientific Research
Cisco Systems