UC Irvine Electronic Theses and Dissertations

Nonconvex Models and Algorithms for Sparse Regularization in Deep Learning and Image Segmentation

Creative Commons Attribution (CC BY) 4.0 license
Abstract

In this thesis, we propose sparse, nonconvex optimization models in two areas, deep learning and image segmentation, and develop algorithms to solve them. Specifically, we design optimization algorithms that perform channel pruning of convolutional neural networks (CNNs), compressing them without degrading their accuracy, and we investigate a nonconvex alternative to total variation (TV) that yields sharper segmentation results. Numerical experiments showcase the effectiveness and performance of the proposed nonconvex models.

In Part I, we take a sparse-optimization approach to channel pruning of overparameterized CNNs. In one class of models, we propose a family of nonconvex sparse group lasso penalties that blend nonconvex regularization (e.g., transformed $\ell_1$, $\ell_1 - \ell_2$, and $\ell_0$), which induces sparsity on the individual weights, with $\ell_{2,1}$ regularization on the output channels of a layer. Next, we provide two directions for improving network slimming, a channel pruning method that applies $\ell_1$ regularization to the scaling factors in the batch normalization layers. In one direction, we replace $\ell_1$ regularization with nonconvex alternatives, such as transformed $\ell_1$ and $\ell_p$, $0 < p < 1$, and derive their subgradient formulas to perform subgradient descent during training. In the other direction, because subgradient descent, the default optimization algorithm for network slimming, has theoretical and numerical flaws, we propose a provably convergent algorithm called proximal network slimming that trains CNNs to be more robust against channel pruning. As a result, fine-tuning CNNs after channel pruning becomes optional when they are trained with the proposed proximal network slimming.
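
To make these ingredients concrete, the following NumPy sketch shows (i) the subgradient of the transformed $\ell_1$ penalty used in place of $\ell_1$ on the batch normalization scaling factors, and (ii) one proximal-gradient update of the kind underlying proximal network slimming, where the exact $\ell_1$ proximal step (soft-thresholding) replaces a subgradient step on the penalty. The function names, the choice of subgradient at zero, and the plain-NumPy setting are our illustration, not the thesis code.

```python
import numpy as np

def tl1_subgradient(gamma, a=1.0):
    """Subgradient of the transformed l1 penalty T_a(x) = (a + 1)|x| / (a + |x|),
    applied elementwise to the BN scaling factors gamma. For x != 0,
    T_a'(x) = a(a + 1) sign(x) / (a + |x|)^2; at x = 0 we pick the
    subgradient 0 (np.sign(0) = 0), an illustrative choice."""
    return a * (a + 1.0) * np.sign(gamma) / (a + np.abs(gamma)) ** 2

def prox_l1(x, t):
    """Proximal operator of t * ||.||_1, i.e., soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_slimming_step(gamma, grad_loss, lr, lam):
    """One proximal-gradient update of the BN scaling factors: a gradient
    step on the data loss followed by the exact l1 prox with penalty
    weight lam. Factors driven exactly to zero mark channels that can
    be pruned."""
    return prox_l1(gamma - lr * grad_loss, lr * lam)
```

Because the prox sets small scaling factors exactly to zero during training (rather than merely shrinking them, as a subgradient step would), the network converges toward a channel-sparse configuration, which is why pruning afterwards costs little accuracy.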

In Part II, we examine applications of the weighted anisotropic--isotropic total variation (AITV), the $\ell_1 - \alpha \ell_2$ variant of TV, to image segmentation. In one direction, we replace the TV regularization in the Chan--Vese segmentation model and in a fuzzy region competition model with AITV. To deal with the nonconvexity of AITV, we apply the difference-of-convex algorithm (DCA), whose subproblems can be minimized by the primal-dual hybrid gradient method with line search. In the other direction, we design an efficient, multi-stage image segmentation framework that incorporates AITV. The framework consists of two stages: smoothing and thresholding. In the first stage, a smoothed image is obtained from an AITV-regularized Mumford--Shah model, which can be solved efficiently by the alternating direction method of multipliers (ADMM) using the closed-form proximal operator of the $\ell_1 - \alpha \ell_2$ regularizer. In the second stage, we threshold the smoothed image by $k$-means clustering to obtain the final segmentation result.
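
Both directions hinge on the proximal operator of $\ell_1 - \alpha \ell_2$, which admits a closed form (derived by Lou and Yan). Below is a minimal NumPy sketch of that operator; the function name, the default $\alpha$, and the pixelwise usage example are our illustration rather than the thesis implementation.

```python
import numpy as np

def prox_l1_minus_alpha_l2(y, lam, alpha=0.5):
    """Closed-form proximal operator of lam * (||x||_1 - alpha * ||x||_2),
    0 <= alpha <= 1, following the Lou--Yan formula. In the AITV setting
    it would be applied pixelwise to the two-component gradient vector."""
    y = np.asarray(y, dtype=float)
    max_abs = np.max(np.abs(y))
    if max_abs <= (1.0 - alpha) * lam:
        return np.zeros_like(y)  # small inputs are thresholded to zero
    if max_abs > lam:
        # soft-threshold, then rescale to account for the -alpha*l2 term
        z = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
        return z * (np.linalg.norm(z) + alpha * lam) / np.linalg.norm(z)
    # (1 - alpha) * lam < max_abs <= lam: a 1-sparse minimizer
    x = np.zeros_like(y)
    i = np.argmax(np.abs(y))
    x[i] = np.sign(y[i]) * (max_abs + (alpha - 1.0) * lam)
    return x

# Example: prox of the gradient vector at one pixel
print(prox_l1_minus_alpha_l2(np.array([1.5, -0.2]), lam=0.5))
```

Having this prox in closed form is what makes the ADMM subproblem of the smoothing stage cheap: each pixel's update is a few arithmetic operations rather than an inner iterative solve.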
