Optimization for Machine Learning

Posted in Conferences, Companies, Science on April 07, 2008

Google Tech Talks
March 25, 2008


S.V.N. Vishwanathan - Research Scientist

Regularized risk minimization is at the heart of many machine learning algorithms. The underlying objective function to be minimized is convex, and often non-smooth. Classical optimization algorithms cannot handle this efficiently. In this talk we present two algorithms for dealing with convex non-smooth objective functions. First, we extend the well-known BFGS quasi-Newton algorithm to handle non-smooth functions. Second, we show how bundle methods can be applied in a machine learning context. We present both theoretical and experimental justification of our algorithms.
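To make the setting concrete, here is a minimal sketch (not the speaker's algorithm) of the kind of convex, non-smooth objective the talk is about: a linear SVM whose regularized risk combines a smooth L2 regularizer with the non-smooth hinge loss. Since the hinge loss has kinks, we take a subgradient step wherever the gradient is undefined; the data and step-size rule below are illustrative assumptions.

```python
import numpy as np

def subgradient_descent(X, y, lam=0.1, steps=200):
    """Minimize lam/2 * ||w||^2 + mean(max(0, 1 - y * (X @ w))).

    The hinge loss is convex but non-smooth, so we use a subgradient:
    -y_i * x_i for examples with margin < 1, and 0 otherwise.
    """
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, steps + 1):
        margins = y * (X @ w)
        active = margins < 1            # examples inside the margin
        g = lam * w - (y[active] @ X[active]) / n
        w -= g / (lam * t)              # 1/(lam*t) step size (illustrative)
    return w

# Toy linearly separable data (assumed for illustration).
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.5, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = subgradient_descent(X, y)
```

Plain subgradient descent like this converges slowly; the point of the talk is that quasi-Newton (BFGS-style) and bundle methods can handle such non-smooth objectives far more efficiently.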

Speaker: S.V.N. Vishwanathan - Research Scientist - Zurich
S.V.N. Vishwanathan is a principal researcher in the Statistical Machine Learning program, National ICT Australia, with an adjunct appointment at the College of Engineering and Computer Science (CECS), Australian National University. He received his Ph.D. in 2002 from the Department of Computer Science and Automation (CSA) at the Indian Institute of Science.

Watch Video

Tags: Techtalks, Google, Conferences, Science, Lectures, Computer Science, engEDU, Education, Google Tech Talks, Broadcasting, Companies