AdaBoost is Universally Consistent

Posted in Science on August 25, 2008


We consider the risk, or probability of error, of the classifier produced by AdaBoost, and in particular the stopping strategy needed to ensure universal consistency. (A classification method is universally consistent if the risk of the classifiers it produces approaches the Bayes risk, the minimal possible risk, as the sample size grows.) Several related algorithms, namely regularized versions of AdaBoost, have been shown to be universally consistent, but AdaBoost's own universal consistency had not been established. Jiang demonstrated that, for each probability distribution satisfying certain smoothness conditions, there is a distribution-dependent stopping time t_n for sample size n such that, if AdaBoost is stopped after t_n iterations, its risk approaches the Bayes risk for that distribution. Our main result is that if AdaBoost is stopped after n^(1−ε) iterations, it is universally consistent, where n is the sample size and ε ∈ (0, 1).
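As a rough illustration of the stopping rule, the sketch below runs AdaBoost with decision stumps and caps the number of boosting rounds at n^(1−ε). It uses scikit-learn's AdaBoostClassifier as a stand-in for the algorithm analyzed in the lecture; the synthetic dataset, the choice ε = 0.5, and the train/test split are illustrative assumptions, not details from the talk.

```python
import math

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative values (assumptions, not from the lecture).
n = 2000   # training sample size
eps = 0.5  # epsilon in (0, 1)

# Stopping rule from the result above: at most n^(1 - eps) boosting rounds.
t_n = max(1, math.floor(n ** (1 - eps)))

# Synthetic data; half is held out to estimate the test error.
X, y = make_classification(n_samples=2 * n, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=n, random_state=0
)

# AdaBoost with decision stumps (scikit-learn's default base learner),
# stopped after t_n iterations.
clf = AdaBoostClassifier(n_estimators=t_n, random_state=0)
clf.fit(X_train, y_train)

# Held-out error as an empirical proxy for the risk discussed above.
test_error = 1.0 - accuracy_score(y_test, clf.predict(X_test))
print(f"stopped after {t_n} iterations, test error = {test_error:.3f}")
```

The point of the rule is that t_n = n^(1−ε) grows without bound as n grows, but slowly enough relative to n; that balance is what the consistency result relies on.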

Author: Peter L. Bartlett, University of California, Berkeley


Tags: Science, Lectures, Computer Science, Machine Learning, VideoLectures.Net, Ensemble Methods