Title

On the Rate of Convergence of Regularized Boosting Classifiers.

Authors

Blanchard, Gilles; Lugosi, Gábor; Vayatis, Nicolas; Graepel, Thore; Herbrich, Ralf

Abstract

A regularized boosting method is introduced, for which regularization is obtained through a penalization function. It is shown through oracle inequalities that this method is model adaptive. The rate of convergence of the probability of misclassification is investigated. It is shown that for quite a large class of distributions, the probability of error converges to the Bayes risk at a rate faster than n^{-(V+2)/(4(V+1))}, where V is the VC dimension of the "base" class whose elements are combined by boosting methods to obtain an aggregated classifier. The dimension-independent nature of the rates may partially explain the good behavior of these methods in practical problems. Under Tsybakov's noise condition the rate of convergence is even faster. We investigate the conditions necessary to obtain such rates for different base classes. The special case of boosting using decision stumps is studied in detail. We characterize the class of classifiers realizable by aggregating decision stumps. It is shown that some versions of boosting work especially well in high-dimensional logistic additive models. It appears that adding a limited labelling noise to the training data may in certain cases improve the convergence, as has been also suggested by other authors.
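
To make the kind of classifier discussed above concrete, the following is a minimal, hypothetical Python sketch of penalized boosting over a dictionary of decision stumps: a convex surrogate (exponential) loss is minimized by greedy coordinate steps, with an l1-style penalty on the aggregation weights. This is not the authors' exact estimator or penalization function; the function names, the penalty weight lam, the step size, and the toy data are illustrative assumptions.

# Hypothetical sketch: penalized aggregation of decision stumps.
# NOT the paper's exact method; illustrates a convex surrogate loss plus
# an l1-type penalty on the weights of the aggregated stumps.
import numpy as np

def build_stump_dictionary(X):
    """Enumerate axis-aligned stumps (feature, threshold, sign) and return an
    n x M matrix of their +/-1 predictions, one column per stump."""
    columns, stumps = [], []
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (+1.0, -1.0):
                columns.append(s * np.where(X[:, j] > t, 1.0, -1.0))
                stumps.append((j, t, s))
    return np.column_stack(columns), stumps

def penalized_boost(X, y, lam=0.05, n_rounds=200, step=0.1):
    """Greedy coordinate descent on the exponential surrogate loss plus an
    l1 penalty lam * ||alpha||_1 over the stump dictionary (rough sketch)."""
    H, stumps = build_stump_dictionary(X)
    alpha = np.zeros(H.shape[1])      # aggregation weights over stumps
    margin = np.zeros(X.shape[0])     # current margins y * f(x)
    for _ in range(n_rounds):
        w = np.exp(-margin)           # exponential-loss weights
        # (sub)gradient of the penalized empirical surrogate risk
        grad = -(H * (y * w)[:, None]).mean(axis=0) + lam * np.sign(alpha)
        k = np.argmax(np.abs(grad))   # steepest coordinate
        alpha[k] -= step * np.sign(grad[k])
        margin = y * (H @ alpha)
    return alpha, stumps

# Toy usage: two overlapping Gaussian blobs in the plane.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])
alpha, _ = penalized_boost(X, y)
H, _ = build_stump_dictionary(X)
print("training error:", np.mean(np.sign(H @ alpha) != y))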

Subjects

DISTRIBUTION (Probability theory); STOCHASTIC convergence; NOISE; BEHAVIOR; PROBABILITY theory; GRAPH labelings

Publication

Journal of Machine Learning Research, 2004, Vol 4, Issue 5, p861

ISSN

1532-4435

Publication type

Academic Journal

DOI

10.1162/1532443041424319
