Abstract
The $\mathcal{C}$-bound is a tight bound on the true risk of a majority vote classifier that relies on the individual quality and pairwise disagreement of the voters, and it provides PAC-Bayesian generalization guarantees. Based on this bound, MinCq is a classification algorithm that returns a dense distribution over a finite set of voters by minimizing it. Introduced later and inspired by boosting, CqBoost uses a column generation approach to build a sparse, $\mathcal{C}$-bound-optimal distribution over a possibly infinite set of voters. However, both approaches have a high computational learning time because they minimize the $\mathcal{C}$-bound by solving a quadratic program. Still, one advantage of CqBoost is its experimentally observed ability to provide sparse solutions. In this work, we address the problem of accelerating the $\mathcal{C}$-bound minimization process while keeping the sparsity of the solution and without losing accuracy. We present CB-Boost, a computationally efficient classification algorithm relying on a greedy, boosting-based $\mathcal{C}$-bound optimization. An in-depth analysis proves the optimality of the greedy minimization process and quantifies the decrease of the $\mathcal{C}$-bound achieved by the algorithm. Generalization guarantees are then derived from existing PAC-Bayesian theorems. In addition, we experimentally evaluate the relevance of CB-Boost with respect to the three main properties we expect of it: accuracy, sparsity, and computational efficiency, compared to MinCq, CqBoost, AdaBoost, and other ensemble methods. As observed in these experiments, CB-Boost not only achieves results comparable to the state of the art, but also provides $\mathcal{C}$-bound sub-optimal weights at very low computational cost while preserving the sparsity property of CqBoost.
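For context, the $\mathcal{C}$-bound referred to throughout the abstract is commonly stated in the PAC-Bayesian literature in terms of the first and second moments of the margin of the $\mathcal{Q}$-weighted majority vote. The notation below ($D$ for the data distribution, $\mathcal{Q}$ for the distribution over voters, $\mu_1$ and $\mu_2$ for the margin moments) is supplied here for illustration and is not taken from this page:

$$\mathcal{C}_{\mathcal{Q}} \;=\; 1 \;-\; \frac{\Big(\underset{(x,y)\sim D}{\mathbb{E}}\;\underset{h\sim\mathcal{Q}}{\mathbb{E}}\; y\,h(x)\Big)^{2}}{\underset{(x,y)\sim D}{\mathbb{E}}\;\underset{h,h'\sim\mathcal{Q}^{2}}{\mathbb{E}}\; h(x)\,h'(x)} \;=\; 1 \;-\; \frac{\mu_{1}^{2}}{\mu_{2}},$$

with the risk of the majority vote $B_{\mathcal{Q}}$ satisfying $R(B_{\mathcal{Q}}) \le \mathcal{C}_{\mathcal{Q}}$ whenever $\mu_1 > 0$.

The sketch below is a minimal, illustrative Python rendering of the greedy, boosting-based principle described in the abstract: at each round, one voter and one weight are chosen to decrease the empirical $\mathcal{C}$-bound of the current weighted vote. It is not CB-Boost itself; the paper derives the per-round weight analytically, whereas a plain grid search stands in for it here, and all identifiers (empirical_c_bound, greedy_c_bound_boosting, H, y) are hypothetical.

import numpy as np

def empirical_c_bound(margins):
    """Empirical C-bound 1 - mu_1^2 / mu_2, computed from per-example margins."""
    mu1 = margins.mean()
    mu2 = (margins ** 2).mean()
    if mu1 <= 0:
        return 1.0  # the bound is only informative when the first margin moment is positive
    return 1.0 - mu1 ** 2 / mu2

def greedy_c_bound_boosting(H, y, n_rounds=10, weight_grid=None):
    """Illustrative greedy minimization of the empirical C-bound.

    H : (n_samples, n_voters) array of voter outputs in {-1, +1}
    y : (n_samples,) array of labels in {-1, +1}

    At each round, the (voter, weight) pair that most decreases the empirical
    C-bound of the current weighted majority vote is added. The grid search
    over the weight is a stand-in for the paper's analytical step.
    """
    if weight_grid is None:
        weight_grid = np.linspace(0.01, 2.0, 100)
    n_samples, n_voters = H.shape
    weights = np.zeros(n_voters)
    vote = np.zeros(n_samples)   # current unnormalized weighted vote
    total_weight = 0.0
    for _ in range(n_rounds):
        best_j, best_a, best_cb = None, None, np.inf
        for j in range(n_voters):
            for a in weight_grid:
                margins = y * (vote + a * H[:, j]) / (total_weight + a)
                cb = empirical_c_bound(margins)
                if cb < best_cb:
                    best_j, best_a, best_cb = j, a, cb
        weights[best_j] += best_a
        vote += best_a * H[:, best_j]
        total_weight += best_a
    return weights / weights.sum()

Under these assumptions, the returned distribution is sparse by construction: only the voters selected during the n_rounds iterations receive nonzero mass, which mirrors the sparsity property highlighted in the abstract.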
Funder
Natural Sciences and Engineering Research Council of Canada
Agence Nationale de la Recherche
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Software
Cited by
1 article.