Abstract
Multi-kernel learning methods are an important class of kernel learning methods. However, most multi-kernel learning methods select only base kernel functions with shallow structures, which perform poorly on large-scale uneven data. From a multidimensional perspective of the data, we propose two types of accelerated models: a neural tangent kernel (NTK)-based multi-kernel learning method, in which the NTK kernel regressor is shown to be equivalent to an infinitely wide neural network predictor and the deep-structured NTK serves as the base kernel function to enhance the learning ability of multi-kernel models; and a parallel computing kernel model based on data-partitioning techniques. An RBF- and POLY-based multi-kernel model is also proposed. All models use historical memory-based particle swarm optimization (HMPSO) to search the model parameters efficiently. Because the NTK's multi-layer structure incurs significant computational complexity, a Monotone Disjunctive Kernel (MDK) is used to store and train Boolean features in binary form, compressing the training time of NTK models by 15–60% across different datasets while improving accuracy by 1–25%.
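As a minimal illustration of the multi-kernel idea described above (a sketch, not the authors' implementation), a convex combination of RBF and polynomial base kernels can be plugged into kernel ridge regression; the mixing weight `w`, the RBF width `gamma`, and the polynomial degree are the kind of hyperparameters the paper searches with HMPSO, but here they are simply fixed by hand:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # K(x, y) = exp(-gamma * ||x - y||^2)
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2, c=1.0):
    # K(x, y) = (<x, y> + c)^degree
    return (X @ Y.T + c) ** degree

def multi_kernel(X, Y, w=0.5, gamma=1.0, degree=2):
    # Convex combination of the two base kernels; w is the kind of
    # parameter HMPSO would tune (hypothetical fixed value here).
    return w * rbf_kernel(X, Y, gamma) + (1 - w) * poly_kernel(X, Y, degree)

def fit_predict(X_train, y_train, X_test, w=0.5, reg=1e-3):
    # Kernel ridge regression: alpha = (K + reg*I)^{-1} y
    K = multi_kernel(X_train, X_train, w)
    alpha = np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)
    return multi_kernel(X_test, X_train, w) @ alpha

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
pred = fit_predict(X, y, X, w=0.5)
print(float(np.mean((pred - y) ** 2)))  # small in-sample error
```

Replacing the RBF/POLY pair with a deep-structured NTK as the base kernel follows the same pattern, only the Gram-matrix computation changes.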
Funder
Gansu Provincial Department of Education Research Project
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science