Affiliation:
1. School of Electrical and Information Engineering, The University of Sydney, Australia
Abstract
Kernel adaptive filters (KAFs) are online machine learning algorithms which are amenable to highly efficient streaming implementations. They require only a single pass through the data and can act as universal approximators, i.e., they can approximate any continuous function to arbitrary accuracy. KAFs are members of a family of kernel methods which apply an implicit non-linear mapping of input data to a high dimensional feature space, permitting learning algorithms to be expressed entirely in terms of inner products. Such an approach avoids explicit projection into the feature space, enabling computational efficiency. In this paper, we propose the first fully pipelined implementation of the kernel normalised least mean squares (KNLMS) algorithm for regression. Independent training tasks necessary for hyperparameter optimisation fill the pipeline stages, so no stall cycles are required to resolve dependencies. Together with other optimisations to reduce resource utilisation and latency, our core achieves 161 GFLOPS on a Virtex 7 XC7VX485T FPGA for a floating point implementation and 211 GOPS for fixed point. Our PCI Express based floating-point system implementation achieves 80% of the core's speed, a speedup of 10× over an optimised implementation on a desktop processor and 2.66× over a GPU.
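To make the algorithm concrete, the following is a minimal NumPy sketch of the standard KNLMS update with a Gaussian kernel and a coherence-based dictionary, as commonly described in the kernel adaptive filtering literature. It illustrates the single-pass, inner-product-only structure the abstract refers to; the hyperparameter values (`eta`, `eps`, `mu0`, `gamma`) are illustrative defaults, not values from the paper, and this software model does not reflect the paper's pipelined FPGA architecture.

```python
import numpy as np

def gauss_kernel(x, y, gamma=0.5):
    """Gaussian kernel kappa(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class KNLMS:
    """Kernel normalised least mean squares with a coherence-sparsified dictionary.

    Hyperparameters are illustrative, not taken from the paper:
      eta  -- step size
      eps  -- regulariser in the normalisation term
      mu0  -- coherence threshold for dictionary growth
      gamma -- Gaussian kernel width
    """
    def __init__(self, eta=0.5, eps=1e-2, mu0=0.9, gamma=0.5):
        self.eta, self.eps, self.mu0, self.gamma = eta, eps, mu0, gamma
        self.dictionary = []          # stored input vectors (dictionary atoms)
        self.alpha = np.zeros(0)      # kernel expansion coefficients

    def update(self, u, d):
        """One online step: predict d from input u, then adapt. Returns the prediction."""
        u = np.asarray(u, dtype=float)
        if not self.dictionary:       # first sample seeds the dictionary
            self.dictionary.append(u)
            self.alpha = np.zeros(1)
        # Kernel evaluations against every dictionary atom (the only non-linear step).
        h = np.array([gauss_kernel(u, c, self.gamma) for c in self.dictionary])
        # Coherence criterion: admit u as a new atom only if it is sufficiently
        # dissimilar (max kernel value below mu0) from all existing atoms.
        if h.max() <= self.mu0:
            self.dictionary.append(u)
            h = np.append(h, gauss_kernel(u, u, self.gamma))
            self.alpha = np.append(self.alpha, 0.0)
        y = h @ self.alpha            # prediction (inner product in feature space)
        e = d - y                     # a-priori error
        # Normalised LMS update of the expansion coefficients.
        self.alpha = self.alpha + self.eta * e * h / (self.eps + h @ h)
        return y
```

Each update touches the data once and costs O(|dictionary|) kernel evaluations plus two inner products, which is what makes the algorithm amenable to the streaming, pipelined hardware implementation the paper proposes.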
Funder
Australian Research Council (Linkage Projects)
Zomojo Pty Ltd
Publisher
Association for Computing Machinery (ACM)
Cited by
5 articles.
1. Floating-Point Exponential;Application-Specific Arithmetic;2023-08-23
2. Hardware-accelerated Real-time Drift-awareness for Robust Deep Learning on Wireless RF Data;ACM Transactions on Reconfigurable Technology and Systems;2023-03-11
3. Algorithm and Architecture Design of Random Fourier Features-Based Kernel Adaptive Filters;IEEE Transactions on Circuits and Systems I: Regular Papers;2023-02
4. Kernel Normalised Least Mean Squares with Delayed Model Adaptation;ACM Transactions on Reconfigurable Technology and Systems;2020-06-30
5. Mixed-Precision Kernel Recursive Least Squares;IEEE Transactions on Neural Networks and Learning Systems;2020