The Asymptotic Performance of AdaBoost

Posted in Conferences, Companies, Science on June 19, 2007

Google Tech Talks
May 24, 2007


Many popular classification algorithms, including AdaBoost and the support vector machine, minimize a cost function that can be viewed as a convex surrogate of the 0-1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. In this talk, we consider the universal consistency of such methods: does the risk, or expectation of the 0-1 loss, approach its optimal value, no matter what i.i.d. process generates the data?
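To make the idea concrete, here is a minimal sketch (not from the talk) of the two convex surrogates mentioned in the abstract, written as functions of the margin y·f(x). AdaBoost minimizes the exponential loss and the SVM minimizes the hinge loss; both upper-bound the 0-1 loss, which is what lets surrogate minimization say anything about classification risk:

```python
import math

def zero_one_loss(margin):
    # 0-1 loss: 1 if the margin y*f(x) is non-positive (misclassified), else 0
    return 1.0 if margin <= 0 else 0.0

def exponential_loss(margin):
    # Convex surrogate minimized by AdaBoost: exp(-y*f(x))
    return math.exp(-margin)

def hinge_loss(margin):
    # Convex surrogate minimized by the support vector machine: max(0, 1 - y*f(x))
    return max(0.0, 1.0 - margin)

# Each surrogate dominates the 0-1 loss at every margin, so driving the
# surrogate risk down also pushes down the (non-convex) classification risk.
for m in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert exponential_loss(m) >= zero_one_loss(m)
    assert hinge_loss(m) >= zero_one_loss(m)
```

The question of universal consistency is whether this domination, together with the structure of the surrogate, is enough for the 0-1 risk to converge to the Bayes-optimal value as the sample size grows.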

Watch Video

Tags: Techtalks, Google, Conferences, Science, Lectures, Computer Science, Broadcasting, Companies