Multiplicative Updates for L1-Regularized Linear and Logistic Regression

Posted in Science on October 15, 2008

Multiplicative update rules have proven useful in many areas of machine learning. Simple to implement and guaranteed to converge, they account in part for the widespread popularity of algorithms such as nonnegative matrix factorization and Expectation-Maximization. In this paper, we show how to derive multiplicative updates for problems in L1-regularized linear and logistic regression. For L1-regularized linear regression, the updates are derived by reformulating the required optimization as a problem in nonnegative quadratic programming (NQP). The dual of this problem, itself an instance of NQP, can also be solved using multiplicative updates; moreover, the observed duality gap can be used to bound the error of intermediate solutions. For L1-regularized logistic regression, we derive similar updates using an iteratively reweighted least squares approach. We present illustrative experimental results and describe efficient implementations for large-scale problems of interest (e.g., with tens of thousands of examples and over one million features).
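To make the NQP step concrete, here is a minimal NumPy sketch of the kind of multiplicative update the abstract describes, using the closed-form fixed-point rule from the authors' earlier work on multiplicative updates for NQP. The function name `nqp_multiplicative` and the toy lasso reduction below are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def nqp_multiplicative(A, b, num_iters=500, eps=1e-12):
    """Minimize 0.5 * v'Av + b'v subject to v >= 0 via multiplicative updates.

    A: symmetric (n, n) matrix; b: (n,) vector. Sketch only.
    """
    # Split A into nonnegative parts so that A = A_pos - A_neg.
    A_pos = np.maximum(A, 0.0)
    A_neg = np.maximum(-A, 0.0)
    v = np.ones(A.shape[0])  # strictly positive starting point
    for _ in range(num_iters):
        pos = A_pos @ v      # (A+ v)_i, always nonnegative
        neg = A_neg @ v      # (A- v)_i, always nonnegative
        # Multiplicative factor: the update keeps v nonnegative by construction.
        v *= (-b + np.sqrt(b * b + 4.0 * pos * neg)) / (2.0 * pos + eps)
    return v

# Toy use: L1-regularized least squares, 0.5*||Xw - y||^2 + gamma*||w||_1,
# reduced to NQP via the standard split w = u - z with u, z >= 0
# (the reformulation the abstract refers to).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(50)
gamma = 1.0
Q = X.T @ X
A = np.block([[Q, -Q], [-Q, Q]])
b = np.concatenate([gamma - X.T @ y, gamma + X.T @ y])
v = nqp_multiplicative(A, b)
w = v[:10] - v[10:]  # recover the signed weight vector
```

Because each factor is nonnegative whenever the current iterate is, the updates never leave the feasible region, which is what makes them attractive compared to projected gradient steps for this class of problems.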

Coauthor: Lawrence Saul, University of California, San Diego

Tags: Science, Lectures, Computer Science, Machine Learning, VideoLectures.Net, Linear Models, Regression