Author:
Belanche-Muñoz, Lluís A.; Wiejacha, Małgorzata
Abstract
Kernel methods have played a major role in the last two decades in the modeling and visualization of complex problems in data science. The choice of kernel function remains an open research area, and the reasons why some kernels perform better than others are not yet understood. Moreover, the high computational cost of kernel-based methods makes it extremely inefficient to use standard model selection methods, such as cross-validation, creating a need for careful kernel design and parameter choice. These reasons justify prior analysis of kernel matrices, i.e., the mathematical objects generated by kernel functions. This paper explores these topics from an entropic standpoint for the case of kernelized relevance vector machines (RVMs), pinpointing desirable properties of kernel matrices that increase the likelihood of obtaining good model performance in terms of generalization power, and relating these properties to the model's fitting ability. We also derive a heuristic for achieving close-to-optimal modeling results while keeping the computational costs low, thus providing a recipe for efficient analysis when processing resources are limited.
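The abstract does not specify which entropy measure is applied to the kernel matrix, so the following is a minimal sketch rather than the authors' method: it assumes a Gaussian (RBF) kernel and uses the von Neumann entropy of the trace-normalized kernel matrix, a common entropic summary of such matrices. The names rbf_kernel_matrix, von_neumann_entropy, and the parameter gamma are illustrative, not taken from the paper.

import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    # Gaussian (RBF) kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))  # clip round-off negatives

def von_neumann_entropy(K):
    # rho = K / trace(K) is positive semi-definite with unit trace, so its
    # eigenvalues form a probability distribution; the entropy is in nats.
    rho = K / np.trace(K)
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]  # drop numerical zeros
    return float(-np.sum(eigvals * np.log(eigvals)))

# Usage: the entropy varies with the kernel width, which illustrates how an
# entropic criterion could screen kernel parameters cheaply, before any
# expensive model fitting or cross-validation.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
for gamma in (0.01, 0.1, 1.0, 10.0):
    print(gamma, von_neumann_entropy(rbf_kernel_matrix(X, gamma)))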
Subject
General Physics and Astronomy
Cited by 1 article.