# Robustness properties of support vector machines and related methods

The talk brings together methods from two disciplines: machine learning theory and robust statistics. We argue that robustness is an important aspect, and we show that many existing machine learning methods based on convex risk minimization have, besides other good properties, the advantage of being robust if the kernel and the loss function are chosen appropriately. Our results cover classification and regression problems. We give assumptions under which the influence function exists and is bounded. Kernel logistic regression, support vector machines, least squares, and the AdaBoost loss function are treated as special cases. We also consider Robust Learning from Bites, a simple method that makes some convex risk minimization methods applicable to huge data sets for which currently available algorithms are much too slow. As an example we use a data set from 15 German insurance companies.
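The contrast between bounded and unbounded influence can be illustrated with a minimal sketch (not the talk's own code): a single outlier shifts the mean, the location estimate under the least squares loss, by an amount that grows with the outlier, while the median, a robust estimator with bounded influence function, barely moves. This mirrors the role of the loss function in the robustness results above.

```python
import numpy as np

# Illustrative sketch (an assumption of this note, not the author's method):
# compare the empirical sensitivity of a non-robust estimator (the mean,
# i.e. the least squares location estimate) with a robust one (the median)
# when a single gross outlier is added to the sample.

rng = np.random.default_rng(0)
clean = rng.normal(loc=0.0, scale=1.0, size=100)

def sensitivity(estimator, data, outlier):
    """Shift in the estimate caused by adding one contaminating point."""
    return abs(estimator(np.append(data, outlier)) - estimator(data))

outlier = 1000.0
mean_shift = sensitivity(np.mean, clean, outlier)      # grows with the outlier
median_shift = sensitivity(np.median, clean, outlier)  # stays bounded

print(f"mean shift:   {mean_shift:.3f}")
print(f"median shift: {median_shift:.3f}")
```

The same qualitative picture holds for convex risk minimization: an unbounded loss such as least squares lets a single point dominate the fit, whereas a suitably chosen loss and bounded kernel keep the influence function bounded.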

*Author: Andreas Christmann, Department of Mathematics, University of Bayreuth*