Generalization bounds

Posted in Companies, Science on August 26, 2008

When a learning algorithm produces a classifier, a natural question to ask is: "How well will it perform in the future?" To make statements about the future given the past, some assumption must be made. If we assume only that all examples are drawn independently and identically from some (unknown) distribution, the question can be answered. The answer is directly applicable to classifier testing and confidence reporting. It also provides a simple, general explanation of "overfitting" and influences algorithm design.
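To make the connection to classifier testing and confidence reporting concrete, here is a minimal sketch of a test-set bound based on Hoeffding's inequality. It is an illustrative assumption of mine, not taken from the lecture: given `n` held-out examples drawn i.i.d. from the same (unknown) distribution as future examples, it returns an upper bound on the true error rate that holds with probability at least `1 - delta`.

```python
import math

def hoeffding_test_set_bound(errors: int, n: int, delta: float = 0.05) -> float:
    """Upper bound on the true error rate, holding with probability >= 1 - delta.

    Assumes the n test examples are drawn independently and identically from
    the same (unknown) distribution as future examples, which is exactly the
    assumption discussed above. Uses Hoeffding's inequality:
        true_error <= empirical_error + sqrt(ln(1/delta) / (2n))
    """
    empirical_error = errors / n
    slack = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return min(1.0, empirical_error + slack)

# Hypothetical example: 12 errors on 1000 held-out examples, 95% confidence.
bound = hoeffding_test_set_bound(12, 1000, delta=0.05)  # roughly 0.051
```

The slack term shrinks as `1/sqrt(n)`, which is why larger test sets give tighter confidence statements; tighter bounds for small `n` exist (e.g. inverting the binomial tail directly), but the Hoeffding form keeps the sketch short.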

Author: John Langford, Yahoo! Research

Tags: Yahoo!, Science, Lectures, Computer Science, Machine Learning, VideoLectures.Net