# The Asymptotic Performance of AdaBoost

Google Tech Talks

May 24, 2007

ABSTRACT

Many popular classification algorithms, including AdaBoost and the support vector machine, minimize a cost function that can be viewed as a convex surrogate of the 0-1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. In this talk, we consider the universal consistency of such methods: does the risk, or expectation of the 0-1 loss, approach its optimal value, no matter what i.i.d. process generates the data?
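As a rough illustration of the surrogate losses the abstract refers to (not part of the original talk), the sketch below compares the 0-1 loss with the exponential loss minimized by AdaBoost and the hinge loss minimized by the SVM, evaluated on a grid of margin values. The variable names are purely illustrative.

```python
import numpy as np

# Margin values y * f(x): positive means the classifier is correct.
margins = np.linspace(-2.0, 2.0, 9)

# 0-1 loss: 1 when the sign of f(x) disagrees with the label y.
zero_one = (margins <= 0).astype(float)

# Convex surrogates of the 0-1 loss:
exponential = np.exp(-margins)          # surrogate minimized by AdaBoost
hinge = np.maximum(0.0, 1.0 - margins)  # surrogate minimized by the SVM

for m, z, e, h in zip(margins, zero_one, exponential, hinge):
    print(f"margin={m:+.2f}  0-1={z:.0f}  exp={e:.3f}  hinge={h:.3f}")
```

Both surrogates upper-bound the 0-1 loss and are convex in the margin, which is what makes the resulting optimization problems tractable.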
