Affiliation:
1. Département de Mathématiques, Université Paris-Sud, 91405 Orsay Cedex, France; and Fraunhofer FIRST, Kekuléstr. 7, 12489 Berlin, Germany.
Abstract
Analyses of the success of ensemble methods in classification have highlighted the important role played by the margin distribution function on the training and test sets. While it is generally agreed that one should try to achieve high margins on the training set, which precise shape of the empirical margin distribution function to favor in practice remains a matter of competing approaches. We first present two competing philosophies for choosing the empirical margin profile: the minimax-margin paradigm and the mean-and-variance paradigm. The best-known representative of the first paradigm is the AdaBoost algorithm, and several authors have shown this philosophy to be closely related to the principle of the support vector machine. We show that the second paradigm is very close in spirit to Fisher's linear discriminant (in a feature space). We construct two boosting-type algorithms, very similar in form, each dedicated to one of the two philosophies. By interpolating between them, we then derive a very simple family of iterative reweighting algorithms that can be understood as realizing different trade-offs between the two paradigms, and we argue from experiments that this allows suitable adaptivity to different classification problems, particularly in the presence of noise or of excessively complex base classifiers.
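The contrast between the two paradigms can be made concrete with a small sketch. The following Python snippet is illustrative only: the function names, the weighting family, and the parameters eta and c are our own assumptions rather than the paper's notation. It computes normalized margins for a voting classifier, evaluates a minimax criterion and a mean/variance criterion on the resulting margin distribution, and shows a reweighting rule that interpolates between an exponential (AdaBoost-style) weight function and an affine (Fisher-style) one.

```python
import numpy as np

def margins(alphas, H, y):
    # Normalized margins rho_i = y_i * f(x_i) / sum_t |alpha_t| for a
    # voting classifier f(x) = sum_t alpha_t h_t(x).
    #   alphas : (T,) combination weights
    #   H      : (T, n) base-classifier outputs h_t(x_i) in {-1, +1}
    #   y      : (n,)  labels in {-1, +1}
    f = alphas @ H
    return y * f / np.abs(alphas).sum()

def minimax_criterion(rho):
    # Minimax-margin paradigm (AdaBoost / SVM style): judge the
    # ensemble by its smallest training margin.
    return rho.min()

def mean_variance_criterion(rho):
    # Mean/variance paradigm (Fisher-discriminant style): favor a high
    # mean margin relative to the spread of the margin distribution.
    return rho.mean() / rho.std()

def reweight(rho, eta, c=2.0):
    # Hypothetical interpolated reweighting (our assumption, not the
    # paper's exact family): eta = 1 gives exponential weights
    # w_i ~ exp(-c * rho_i) as in AdaBoost-type updates; eta = 0 gives
    # weights affine in the margin, as a Fisher-style update would;
    # intermediate eta trades off the two paradigms.
    w = eta * np.exp(-c * rho) + (1.0 - eta) * np.maximum(1.0 - c * rho, 0.0)
    return w / max(w.sum(), 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.choice([-1, 1], size=(5, 20))   # 5 base classifiers, 20 examples
    y = rng.choice([-1, 1], size=20)
    alphas = rng.uniform(0.1, 1.0, size=5)
    rho = margins(alphas, H, y)
    print("min margin      :", minimax_criterion(rho))
    print("mean/std margin :", mean_variance_criterion(rho))
    print("weights (eta=.5):", reweight(rho, eta=0.5)[:5])
```

Varying eta between 0 and 1 shifts the emphasis of the update from the bulk of the margin distribution toward its smallest values, which is the kind of trade-off between the two paradigms that the abstract describes.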
Subject
Cognitive Neuroscience, Arts and Humanities (miscellaneous)
Cited by
2 articles.