Fast Proximal Point Optimization for Solving Penalized Regression Problems

   December 23, 2015, 11am

Recent proximal point optimization techniques have been very successful for solving penalized regression problems defined with nonsmooth functions. In this talk, we discuss two types of penalized regression in the context of convex regularization, inspired by Tikhonov and Morozov. We first briefly introduce recent developments for the former type. In the second part, we focus on a particular instance of the latter type, the generalized Dantzig selector (GDS), presenting our recent contribution: a fast proximal point algorithm based on a convex-concave saddle-point reformulation. Experimental results will be shown for an instance of the GDS defined with the ordered $\ell_1$-norm regularizer, which enjoys a provable false discovery rate (FDR) control property in high-dimensional model selection, similar to that of the Benjamini-Hochberg procedure.
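For readers unfamiliar with the formulation, the following is a rough sketch of the GDS in standard notation (the notation here is my own and may differ from the talk): given a design matrix $X$, response $y$, a norm regularizer $R$ with dual norm $R^*$, and a tolerance parameter $\lambda \ge 0$,
$$
\min_{\beta} \; R(\beta) \quad \text{subject to} \quad R^*\big(X^\top(y - X\beta)\big) \le \lambda,
$$
which recovers the classical Dantzig selector when $R = \|\cdot\|_1$ and $R^* = \|\cdot\|_\infty$. The ordered $\ell_1$-norm mentioned above is $\|\beta\|_w = \sum_{i=1}^p w_i\,|\beta|_{(i)}$, where $w_1 \ge \cdots \ge w_p \ge 0$ and $|\beta|_{(i)}$ denotes the $i$-th largest entry of $\beta$ in absolute value.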


The talk will begin with a brief introduction of the Collaborative Research Center SFB 876 at TU Dortmund University and some of Dr. Lee's recent work.