Regularized regression and classification methods fit a linear model to
data, based on some loss criterion, subject to a constraint on the
coefficient values. As special cases, ridge regression, the lasso, and
subset selection all use squared-error loss with different particular
constraint choices. For large problems, the general choice of
loss/constraint combinations is usually limited by the computation
required to obtain the corresponding solution estimates, especially when
nonconvex constraints are used to induce very sparse solutions. A fast
algorithm is presented that produces solutions closely approximating
those for any convex loss and a wide variety of convex and nonconvex
constraints, permitting application to very large problems. The benefits
of this generality are illustrated by examples.
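As a small illustration of the loss/constraint framework described above (not the speaker's algorithm), the sketch below contrasts two of the special cases mentioned: ridge regression (squared-error loss with an L2 penalty, solved in closed form) and the lasso (squared-error loss with an L1 penalty, solved here by simple cyclic coordinate descent). All data and parameter choices are hypothetical.

```python
import numpy as np

# Synthetic data with a sparse ground truth (illustrative only).
rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]          # only 3 nonzero coefficients
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 1.0                                  # regularization strength (assumed)

# Ridge: L2-constrained least squares has a closed-form solution.
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def soft_threshold(z, t):
    """Proximal operator of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Lasso: L1-constrained least squares, via cyclic coordinate descent
# with soft-thresholding updates on each coefficient in turn.
beta_lasso = np.zeros(p)
col_sq = (X ** 2).sum(axis=0)
for _ in range(200):
    for j in range(p):
        # Partial residual excluding coordinate j.
        r = y - X @ beta_lasso + X[:, j] * beta_lasso[j]
        beta_lasso[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]

# The L1 constraint zeros out small coefficients; L2 only shrinks them.
print("nonzero (ridge):", int((np.abs(beta_ridge) > 1e-6).sum()))
print("nonzero (lasso):", int((np.abs(beta_lasso) > 1e-6).sum()))
```

The contrast this is meant to show: the L2 penalty leaves all coefficients nonzero, while the L1 penalty produces a sparse solution; nonconvex penalties, as discussed in the talk, push this sparsity further at greater computational cost.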
Reception to follow the talk. Location to be announced.