eScholarship
Open Access Publications from the University of California

Large-Scale Optimization and Deep Learning Techniques for Data-Driven Signal Processing

  • Author(s): DeGuchy, Omar
  • Advisor(s): Marcia, Roummel F
Abstract

The collection of data has become an integral part of our everyday lives, and the algorithms used to process this information are paramount to our ability to interpret this resource. Such data are typically recorded as a variety of signals, including images, sounds, time series, and bioinformatics data. In this work, we develop a number of algorithms to recover these types of signals across a variety of modalities. The work is presented in two main parts.

Initially, we apply and develop large-scale optimization techniques for signal processing. This includes the use of quasi-Newton methods to approximate second-derivative information in a trust-region setting in order to solve regularized sparse signal recovery problems. We also formulate the compact representation of a large family of quasi-Newton methods known as the Broyden class. This extension of the classic quasi-Newton compact representation allows different updates to be used at every iteration, and we develop algorithms to perform efficient solves with these representations. Within the realm of sparse signal recovery, and in particular for photon-limited imaging applications, we also propose three novel algorithms for signal recovery in a low-light regime. First, we recover the support and lifetime decay of a fluorophore from time-dependent measurements; this modality is useful for identifying different types of molecular structures in tissue samples. The second algorithm identifies and implements the Shannon entropy function as a regularization technique for promoting sparsity in signals reconstructed from noisy downsampled observations. Finally, we present an algorithm that addresses the difficulty of choosing optimal parameters when solving the sparse signal recovery problem. Two parameters affect the quality of the reconstruction: the norm being used and the intensity of the penalization imposed by that norm. Our algorithm uses a parallel asynchronous search, together with a suitable metric, to find the optimal pair.
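To make the compact-representation idea concrete, the sketch below shows it for the classic BFGS member of the Broyden class: the matrix built by applying m BFGS updates to B0 can equivalently be written as B0 minus a low-rank correction Psi M^{-1} Psi^T (the Byrd-Nocedal-Schnabel form). This is a minimal NumPy illustration of that standard identity, not the dissertation's implementation; the function names and test data are ours.

```python
import numpy as np

def bfgs_direct(B0, S, Y):
    # Apply standard BFGS updates one (s, y) pair at a time:
    #   B <- B - (B s)(B s)^T / (s^T B s) + y y^T / (y^T s)
    B = B0.copy()
    for i in range(S.shape[1]):
        s, y = S[:, i], Y[:, i]
        Bs = B @ s
        B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
    return B

def bfgs_compact(B0, S, Y):
    # Compact representation of the same matrix:
    #   B = B0 - Psi M^{-1} Psi^T,
    # with Psi = [B0 S, Y],  M = [[S^T B0 S, L], [L^T, -D]],
    # where L is the strictly lower-triangular part of S^T Y
    # and D = diag(S^T Y).  Only low-rank factors are stored.
    SY = S.T @ Y
    D = np.diag(np.diag(SY))
    L = np.tril(SY, -1)
    Psi = np.hstack([B0 @ S, Y])
    M = np.block([[S.T @ B0 @ S, L], [L.T, -D]])
    return B0 - Psi @ np.linalg.solve(M, Psi.T)
```

Because only the thin factors Psi and the small matrix M need to be stored, products and solves with B cost O(nm) rather than O(n^2), which is what makes these representations attractive at large scale.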

The second portion of the dissertation draws on our experience with large-scale optimization and looks toward deep learning as an alternative for solving signal recovery problems. We first seek to improve the standard gradient-based techniques used to train deep neural networks by presenting two novel optimization algorithms for deep learning. The first algorithm takes advantage of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method in a trust-region setting in order to address the large-scale minimization problem associated with deep learning. The second algorithm uses second-derivative information in a trust-region setting where the Hessian is never explicitly stored; a conjugate gradient-based method is then used to solve the trust-region subproblem.
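The Hessian-free approach described above can be sketched with a Steihaug-style conjugate gradient solver for the trust-region subproblem: it minimizes the quadratic model g^T p + (1/2) p^T H p subject to ||p|| <= delta using only Hessian-vector products, stopping at the boundary if the radius is reached or negative curvature is detected. This is a generic textbook sketch under our own naming and defaults, not the dissertation's algorithm.

```python
import numpy as np

def steihaug_cg(hess_vec, g, delta, tol=1e-10, max_iter=None):
    """Approximately solve min_p g^T p + 0.5 p^T H p, ||p|| <= delta,
    where H is accessed only through the callable hess_vec(v) = H v."""
    n = g.size
    if max_iter is None:
        max_iter = 2 * n
    p = np.zeros(n)
    r = g.copy()   # residual of the model gradient: H p + g
    d = -r         # conjugate search direction
    for _ in range(max_iter):
        Hd = hess_vec(d)
        dHd = d @ Hd
        if dHd <= 0:
            # Negative curvature: follow d to the trust-region boundary.
            return _to_boundary(p, d, delta)
        alpha = (r @ r) / dHd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:
            # Step leaves the region: truncate at the boundary.
            return _to_boundary(p, d, delta)
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p

def _to_boundary(p, d, delta):
    # Positive root tau of ||p + tau d||^2 = delta^2.
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p + tau * d
```

Since only `hess_vec` is required, the same routine works when H is available only implicitly, e.g. via automatic differentiation of a network's loss, which is the setting that motivates avoiding explicit Hessian storage.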

Finally, we apply deep learning techniques to a variety of applications in signal recovery. These applications include: revisiting the photon-limited regime, where we recover signals from noisy downsampled observations; image disambiguation, which involves recovering two signals that have been superimposed; deep learning for synthetic aperture radar (SAR), where we recover information describing the imaging system and evaluate the impact of reconstruction on the ability to perform target detection; and signal variation detection in the human genome, where we leverage the relationships between subjects to provide better detection.
