Abstract
Minimal complexity machines (MCMs) minimize the VC (Vapnik-Chervonenkis) dimension to obtain high generalization ability. However, because the objective function does not include a regularization term, the solution is not unique. In this paper, to solve this problem, we discuss fusing the MCM and the standard support vector machine (L1 SVM). This is realized by minimizing the maximum margin in the L1 SVM. We call this machine the minimal complexity L1 SVM (ML1 SVM). The associated dual problem has twice the number of dual variables, and the ML1 SVM is trained by alternately optimizing the dual variables associated with the regularization term and those associated with the VC dimension. We compare the ML1 SVM with other types of SVMs, including the L1 SVM, on several benchmark datasets and show that the ML1 SVM performs better than, or comparably to, the L1 SVM.
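For intuition only, the following is a minimal sketch of the kind of formulation the abstract describes: the standard soft-margin (L1 SVM) primal augmented with an upper bound h on the functional margins, whose penalty is minimized together with the usual objective. The penalty weight C_h, the linear kernel, the toy data, and the direct primal solution via a generic solver are all assumptions made for this sketch; the paper itself works in the dual, which has twice the number of variables and is trained by alternating optimization.

# Conceptual sketch (not the paper's exact formulation): a linear soft-margin
# SVM primal with an extra variable h bounding y_i (w . x_i + b) from above,
# penalized by an assumed weight C_h, solved with a generic SLSQP solver.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy 2-D data, two roughly separable classes.
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.8, size=(20, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.8, size=(20, 2))
X = np.vstack([X_pos, X_neg])
y = np.hstack([np.ones(20), -np.ones(20)])
n, d = X.shape

C = 1.0      # penalty on slack variables (standard L1 SVM term)
C_h = 0.1    # penalty on the margin upper bound h (assumed fusion term)

# Decision variable vector: [w (d), b (1), xi (n), h (1)]
def unpack(z):
    return z[:d], z[d], z[d + 1:d + 1 + n], z[-1]

def objective(z):
    w, _, xi, h = unpack(z)
    return 0.5 * w @ w + C * xi.sum() + C_h * h

constraints = []
for i in range(n):
    # y_i (w . x_i + b) >= 1 - xi_i  (standard soft-margin constraint)
    constraints.append({
        "type": "ineq",
        "fun": lambda z, i=i: y[i] * (unpack(z)[0] @ X[i] + unpack(z)[1]) - 1.0 + unpack(z)[2][i],
    })
    # y_i (w . x_i + b) <= h  (bound on the largest functional margin)
    constraints.append({
        "type": "ineq",
        "fun": lambda z, i=i: unpack(z)[3] - y[i] * (unpack(z)[0] @ X[i] + unpack(z)[1]),
    })

# w, b free; xi >= 0; h >= 1.
bounds = [(None, None)] * (d + 1) + [(0.0, None)] * n + [(1.0, None)]

z0 = np.zeros(d + 1 + n + 1)
z0[-1] = 1.0
res = minimize(objective, z0, method="SLSQP", bounds=bounds,
               constraints=constraints, options={"maxiter": 500})

w, b, xi, h = unpack(res.x)
pred = np.sign(X @ w + b)
print("training accuracy:", (pred == y).mean(), " h =", round(h, 3))

With C_h set to zero the sketch reduces to the ordinary L1 SVM primal, which is one way to read the fusion described in the abstract: the maximum-margin bound h acts as the MCM-style complexity term added on top of the standard SVM objective.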
Subject
Computer Networks and Communications, Human-Computer Interaction
Cited by
4 articles.