Transformed L1 Function, Sparse Optimization Algorithms and Applications
A non-convex sparsity-promoting penalty function, the transformed L1 (TL1), is studied in optimization problems with applications to compressed sensing (CS) and matrix completion. The TL1 penalty interpolates the l_0 and l_1 norms through a positive parameter a \in (0,+\infty), similar to l_p with p \in (0,1]. TL1 is known in the statistics literature to enjoy three desirable properties: unbiasedness, sparsity, and Lipschitz continuity.
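For concreteness, the TL1 penalty in the form commonly used in this line of work (a standard expression consistent with the interpolation property stated above) is, for a scalar x and parameter a > 0,

\[
\rho_a(x) = \frac{(a+1)\,|x|}{a+|x|}, \qquad
\lim_{a\to 0^+}\rho_a(x) = \mathbf{1}_{\{x\neq 0\}}, \qquad
\lim_{a\to +\infty}\rho_a(x) = |x|,
\]

applied entrywise and summed, \(P_a(x) = \sum_i \rho_a(x_i)\), for a vector x. The two limits make the interpolation between l_0 and l_1 explicit.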
For compressed sensing problems, a RIP condition for TL1 exact recovery is proposed and proved. Next, difference-of-convex-functions algorithms for TL1 (DCATL1) are presented for computing TL1-regularized constrained and unconstrained problems. For the unconstrained problem, we prove convergence of DCATL1 to a stationary point satisfying the first-order optimality condition. In numerical experiments, we identify the optimal parameter value a=1 and compare DCATL1 with other CS algorithms. An explicit fixed point representation is also developed for the TL1-regularized minimization problem. The TL1 thresholding functions are in closed form for all parameter values.
The TL1 threshold values differ between the subcritical and supercritical parameter regimes: in the subcritical regime the TL1 thresholding functions are continuous, similar to soft-thresholding, while in the supercritical regime they are discontinuous, similar to half-thresholding. We propose TL1 iterative thresholding algorithms and compare them with hard- and half-thresholding algorithms on CS test problems.
The TS1 penalty, a matrix quasi-norm defined on the singular values of a matrix, interpolates the rank and the nuclear norm through a positive parameter a \in (0, +\infty). We consider the unconstrained TS1-regularized low-rank matrix recovery problem and develop a fixed point representation for its global minimizer. The TS1 thresholding functions are in closed analytical form for all parameter values. We propose TS1 iterative thresholding algorithms and compare them with some state-of-the-art algorithms on matrix completion test problems.
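The matrix analogue applies a scalar thresholding map to the singular values at every iteration. The sketch below uses singular value soft-thresholding (the nuclear-norm baseline) inside a simple proximal-gradient loop for matrix completion; a TS1 scheme would apply the closed-form TS1 thresholding map to the singular values instead. All sizes and parameters are illustrative assumptions.

```python
import numpy as np

def sv_threshold(M, tau):
    # singular value soft-thresholding: prox of tau * nuclear norm
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M_obs, mask, tau, steps=1000):
    """Iterative singular-value thresholding for matrix completion:
    gradient step on 0.5*||P_Omega(X - M)||_F^2, then threshold the
    singular values. TS1 variants change only the thresholding map."""
    X = np.zeros_like(M_obs)
    for _ in range(steps):
        X = sv_threshold(X + mask * (M_obs - X), tau)
    return X

# rank-1 completion demo with ~60% of entries observed (illustrative)
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 1)) @ rng.standard_normal((1, 20))  # rank 1
mask = rng.random(M.shape) < 0.6
X_hat = complete(mask * M, mask, tau=0.5)
```

On this well-posed rank-1 instance the iteration recovers the matrix up to the small shrinkage bias of the soft threshold; non-convex thresholding maps such as TS1 are designed to reduce exactly that bias.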