Affiliation:
1. School of Mathematical Science, University of Jinan, Jinan 250022, China
Abstract
In this paper, we study the learning performance of regularized large-margin unified machines (LUMs) for classification problems. The hypothesis space is taken to be a reproducing kernel Hilbert space $\mathcal{H}_K$, and the penalty term is the norm of the function in $\mathcal{H}_K$. Since the LUM loss functions are differentiable and convex, the data piling phenomenon can be avoided when dealing with high-dimensional, low-sample-size data. The error analysis of this classification learning machine rests mainly on the comparison theorem [3], which ensures that the excess classification error can be bounded by the excess generalization error. Under a mild source condition, which requires that the minimizer $f_V$ of the generalization error can be approximated by the hypothesis space $\mathcal{H}_K$, and by means of a leave-one-out variant technique proposed in [13], an error bound and a learning rate for the mean of the excess classification error are derived.
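As a concrete illustration (not taken from the paper itself), the following minimal Python sketch implements the kind of regularized LUM scheme the abstract describes: it minimizes the empirical LUM risk plus the squared $\mathcal{H}_K$-norm penalty over a Gaussian-kernel RKHS, using the representer theorem and plain gradient descent, which is possible precisely because the LUM loss is differentiable. The loss family V(u) with parameters a > 0 and c >= 0 follows the form commonly given in the LUM literature (Liu, Zhang and Wu); the function names (lum_loss, fit_lum), kernel choice, and all hyperparameter values are hypothetical choices for illustration only.

import numpy as np

def lum_loss(u, a=1.0, c=0.0):
    """LUM loss V(u): linear (hinge-like) for u < c/(1+c),
    smooth polynomially decaying tail otherwise."""
    v = np.maximum((1.0 + c) * u - c + a, 1e-12)  # guard the unused branch
    return np.where(u < c / (1.0 + c), 1.0 - u,
                    (1.0 / (1.0 + c)) * (a / v) ** a)

def lum_grad(u, a=1.0, c=0.0):
    """V'(u); continuous because both pieces equal -1 at the joint u = c/(1+c)."""
    v = np.maximum((1.0 + c) * u - c + a, 1e-12)
    return np.where(u < c / (1.0 + c), -1.0, -((a / v) ** (a + 1.0)))

def fit_lum(K, y, lam=0.1, lr=0.1, n_iter=2000, a=1.0, c=0.0):
    """Gradient descent on the regularized empirical risk
        (1/m) * sum_i V(y_i f(x_i)) + lam * ||f||_K^2
    with f = sum_j alpha_j K(x_j, .) by the representer theorem."""
    m = len(y)
    alpha = np.zeros(m)
    for _ in range(n_iter):
        u = y * (K @ alpha)  # margins y_i f(x_i)
        # gradient wrt alpha: (1/m) K (y * V'(u)) + 2 lam K alpha (K symmetric)
        grad = K @ (y * lum_grad(u, a, c) / m + 2.0 * lam * alpha)
        alpha -= lr * grad
    return alpha

# Toy usage: Gaussian kernel on synthetic two-class data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (30, 2)), rng.normal(1, 1, (30, 2))])
y = np.hstack([-np.ones(30), np.ones(30)])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)  # kernel matrix K(x_i, x_j)
alpha = fit_lum(K, y, lam=0.01)
print("training accuracy:", np.mean(np.sign(K @ alpha) == y))

Because V is convex and differentiable, the objective is smooth in alpha and gradient descent suffices; this smoothness is also what the abstract credits for avoiding the data piling phenomenon of non-differentiable losses such as the hinge loss.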
Publisher
American Institute of Mathematical Sciences (AIMS)
Subject
Artificial Intelligence, Computational Mathematics, Computational Theory and Mathematics, Theoretical Computer Science
Cited by
4 articles.