UC San Diego Electronic Theses and Dissertations

Super-resolution under Extreme Sampling Constraints: Theory and Algorithms

Abstract

High-dimensional inverse problems are at the heart of numerous modern signal processing and machine learning applications, where the goal is to sense the physical environment and infer parameters of interest residing in a high-dimensional ambient space from low-dimensional (non)linear measurements. Despite the rapid growth in the volume of data generated in the form of images, videos, and sensor streams, the physical constraints of the sensing system impose restrictions. For instance, in a scanning microscopy system, the frame rate, and hence the temporal resolution, is limited by the speed of the scanning mirrors and the size of the field of view (FOV). Similarly, in modern hybrid mmWave systems, although a massive number of antennas are deployed, the number of Radio Frequency (RF) chains is limited by their high cost and power consumption. It is therefore crucial to design sensing paradigms that can reliably recover the information of interest under such stringent sampling budgets. This requires leveraging the underlying "low-dimensional" geometry of the signal (known as priors) to enable reconstruction from relatively few measurements. Over the last two decades, several theoretical and algorithmic techniques have been developed for tackling these under-determined systems, the most well-known among them being sparse and low-rank signal/image reconstruction.
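
As a minimal point of reference for the sparse-recovery techniques just mentioned (and not any specific method from this thesis), the sketch below recovers a sparse vector from under-determined Gaussian measurements via iterative soft-thresholding (ISTA); the matrix, dimensions, and regularization parameter are illustrative choices of our own.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L          # gradient step on the data-fit term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                           # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # generic (random) measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
y = A @ x_true                                 # m << n: under-determined system
x_hat = ista(A, y)
print("true support:     ", np.flatnonzero(x_true))
print("estimated support:", np.flatnonzero(x_hat > 0.5))
```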

In this thesis, we primarily focus on the "ill-posed" inverse problem of super-resolution under "extreme spatial/temporal sampling constraints" arising from real-world applications in Neural Spike Deconvolution and Sensor Array Signal Processing. We especially focus on unconventional regimes where existing approaches based on sparsity, correlation, and/or low-rank priors may fail. Broadly, super-resolution is concerned with the reconstruction of temporally/spatially localized events (or spikes) from samples of their convolution with a low-pass filter. The problem becomes ill-posed due to the systematic attenuation of the high-frequency content of the underlying spikes. Unlike classical compressed sensing, the under-sampling operation in super-resolution does not correspond to observing random linear projections of the unknown signal of interest, which prevents direct application of existing theoretical guarantees from the compressed sensing literature. In contrast to prior work on super-resolution that exploits sparsity and/or non-negativity priors to solve the resulting ill-posed problem, this thesis explores the problem of Binary Super-resolution, i.e., when the spike amplitudes are known a priori to be binary-valued. We demonstrate that enforcing binary priors in under-determined linear inverse problems can allow exact recovery (in the absence of noise) in the non-standard regime where the sparsity level of the spikes far exceeds the number of acquired measurements, henceforth referred to as the "extreme compression" regime. Past works have shown that it is possible to operate in this regime by leveraging multiple snapshots and statistical priors. In contrast, this thesis shows that multiple measurements may not be necessary to operate in extreme compression when binary constraints are available. We also show that standard convex relaxations of the binary constraint, such as box constraints (widely used in binary compressed sensing), are inadequate in the extreme compression regime and can introduce a bias in the support of the recovered spikes. We overcome the computational challenges of enforcing binary constraints by exploiting the special structure of the measurements, which allows us to reformulate the problem as a binary search.
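
To make the role of the binary prior concrete, here is a toy numerical illustration, not the thesis's actual algorithm: a hand-picked low-pass kernel g, a downsampling factor of 4, and a brute-force consistency check showing that a single snapshot can pin down more spikes than there are measurements.

```python
import numpy as np
from itertools import product

# Toy forward model: binary spikes x in {0,1}^n are convolved with a
# low-pass kernel g and uniformly downsampled by a factor of 4.
n, factor = 12, 4
g = np.array([0.31, 0.58, 1.0, 0.47, 0.19])    # an illustrative generic kernel

def measure(x):
    return np.convolve(x, g)[::factor]          # 4 measurements for n = 12

x_true = np.zeros(n)
x_true[[0, 2, 3, 5, 6, 9, 11]] = 1.0            # 7 spikes > 4 measurements
y = measure(x_true)

# Enumerate all binary signals consistent with y; a unique match means the
# binary prior alone resolves the linearly under-determined problem.
matches = [b for b in product([0.0, 1.0], repeat=n)
           if np.allclose(measure(np.array(b)), y)]
print("measurements:", y.size, "| spikes:", int(x_true.sum()),
      "| consistent binary signals:", len(matches))   # 1 => unique recovery
```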

Despite the ability to recover finite-valued signals from uniformly downsampled measurements for most filters, there may exist certain "adversarial filters" that create ambiguity under uniform downsampling. We show that the additional flexibility of designing the measurement matrix (beyond uniform sampling) can mitigate the effect of such adversarial filters. We propose a novel algorithm-measurement co-design framework in which the measurement matrix is designed as a function of the filter. In the absence of noise, this framework provably achieves the optimal measurement complexity, which is independent of both the dimension and the sparsity of the binary signal. Moreover, recovery can be performed using a greedy sequential decoding algorithm with low computational complexity.
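
The co-design framework itself is not reproduced here; as a loose analogue, the following sketch uses textbook dyadic weights (an assumption of ours, not the thesis's design) to show how a designed measurement can encode a binary vector in a single noiseless sample, with measurement cost independent of dimension and sparsity, and be decoded greedily one coefficient at a time.

```python
import numpy as np

# Illustrative analogue only: with freedom to design the measurement weights,
# a single noiseless measurement with dyadic weights w_i = 2^i encodes any
# binary vector, and a greedy pass decodes it bit by bit.
n = 16
w = 2.0 ** np.arange(n)                        # designed measurement weights

x_true = (np.random.default_rng(2).random(n) < 0.5).astype(int)
y = float(w @ x_true)                          # one measurement, for any n or sparsity

x_hat = np.zeros(n, dtype=int)
for i in reversed(range(n)):                   # greedy sequential decoding:
    if y >= w[i]:                              # take the largest weight that fits,
        x_hat[i] = 1                           # subtract it, and continue
        y -= w[i]
print("exact recovery:", np.array_equal(x_hat, x_true))
```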

The second half of this dissertation studies another class of problems with stringent requirements on available spatial and temporal measurements, arising in passive sensing scenarios with sparse sensor arrays, where the goal is to perform source localization with very few spatial sensors. Sparse array geometries such as nested arrays and co-prime arrays have gained popularity due to their ability to identify more sources than sensors and to offer much higher resolution than a uniform array with the same number of physical sensors. This benefit is attributed to the ability of sparse arrays to exploit certain correlation structures in the source signals without increasing the spatial sensing budget. However, it is commonly believed that sparse arrays inherently require a large number of temporal snapshots to obtain reliable correlation estimates, and therefore that they may not be preferred in so-called "sample-starved" regimes where temporal snapshots are scarce. In applications such as automotive radar and mmWave communication, the sources/multipaths may be coherent and the environment is dynamic due to the high mobility of the sources. Motivated by these challenging scenarios, it is desirable to identify the sources or multipath components with superior resolution while using very few temporal snapshots (only a single snapshot in the extreme case). The techniques developed for these sample-starved scenarios have largely been based on heuristics. This thesis debunks some of the myths associated with sparse arrays in the limited-snapshot regime by providing provable ways to leverage the benefits of deterministic sparse arrays (nested arrays) in two largely unexplored regimes: (i) with few snapshots (of the order of the number of sources), and (ii) extreme sample-starved scenarios with only a single snapshot.
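
The coarray mechanism behind these gains can be illustrated with the standard two-level nested-array construction; the sketch below (with illustrative parameters N1 = N2 = 3) computes the difference coarray, whose long contiguous virtual aperture is what lets 6 physical sensors behave like a much larger uniform array and resolve more sources than sensors.

```python
import numpy as np

# Two-level nested array (N1 = N2 = 3, 6 physical sensors): an inner ULA at
# positions 1..N1 and an outer ULA at positions (N1+1)*{1..N2}.
N1, N2 = 3, 3
positions = np.concatenate([np.arange(1, N1 + 1),
                            (N1 + 1) * np.arange(1, N2 + 1)])  # [1 2 3 4 8 12]

# Difference coarray: the set of all pairwise sensor-position differences.
diffs = np.unique(positions[:, None] - positions[None, :])

print("physical sensors:", positions.size)                    # 6
print("coarray lags:", diffs.size,
      "| contiguous:", np.array_equal(diffs, np.arange(-11, 12)))  # 23 lags, True
```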
