Scalable Front End Designs for Communication and Learning
In this work we present three examples of estimation/detection problems for which customizing the front end to the specific application makes the system more efficient and scalable. The three problems we consider are all classical, but they face new scalability challenges.
These challenges introduce additional constraints; accounting for them leads to front end designs that are quite distinct from conventional approaches. The first two case studies pertain to the canonical problems of synchronization and equalization for communication links. As system bandwidths scale up, challenges arise from the limited resolution of analog-to-digital converters (ADCs). We discuss system designs that respond to this bottleneck by drastically relaxing the precision requirements of the front end and correspondingly modifying the back end algorithms using Bayesian principles. The third problem belongs to the field of computer vision. Inspired by neuroscience research on the mammalian visual system, we redesign the front end of a machine vision system to be neuro-mimetic, following it with layers of unsupervised learning based on simple k-means clustering. The result is a framework that is intuitive, more computationally efficient than supervised deep networks, and well matched to the increasing availability of large amounts of unlabeled data.
We first consider the problem of blind carrier phase and frequency synchronization in order to obtain insight into the performance limitations imposed by severe quantization constraints.
We adopt a mixed-signal analog front end that coarsely quantizes the phase and employs digitally controlled feedback to apply a phase shift prior to the ADC; this shift acts as a controllable dither signal and aids the estimation process. We propose a control policy for the feedback and show that, combined with blind Bayesian algorithms, it yields excellent performance, close to that of an unquantized system.
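As a rough illustration of this architecture, the sketch below runs a grid-based Bayesian phase estimator fed by a coarse phase quantizer, with a controllable dither applied before quantization. The sector count, noise level, Monte Carlo likelihood, and the boundary-seeking dither rule are all illustrative assumptions, not the control policy proposed in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

B = 3                                  # coarse phase quantizer: 2**B uniform sectors
K = 2 ** B
GRID = np.linspace(0, 2 * np.pi, 256, endpoint=False)   # posterior support over phase

def quantize_phase(theta):
    """Map a phase to one of K uniform sectors covering [0, 2*pi)."""
    return int((theta % (2 * np.pi)) // (2 * np.pi / K))

def sector_likelihood(grid, dither, sector, noise_std=0.3):
    """P(observed sector | candidate phase), estimated by Monte Carlo
    over the phase noise (done numerically here for simplicity)."""
    lo = sector * 2 * np.pi / K
    hi = lo + 2 * np.pi / K
    noise = rng.normal(0.0, noise_std, size=(200, 1))
    shifted = (grid[None, :] + dither + noise) % (2 * np.pi)
    return ((shifted >= lo) & (shifted < hi)).mean(axis=0) + 1e-12

true_phase = 1.234
posterior = np.full(GRID.size, 1.0 / GRID.size)

for _ in range(50):
    # hypothetical dither rule: rotate the current MAP estimate onto a
    # quantizer boundary, so the next coarse observation is most informative
    map_est = GRID[np.argmax(posterior)]
    dither = -map_est
    sector = quantize_phase(true_phase + rng.normal(0.0, 0.3) + dither)
    posterior = posterior * sector_likelihood(GRID, dither, sector)
    posterior /= posterior.sum()           # renormalize each step (avoids underflow)

estimate = GRID[np.argmax(posterior)]
```

Even with only 3-bit phase observations, the dither lets the posterior concentrate near the true phase, which is the qualitative effect exploited above.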
Next, we take up the problem of channel equalization with severe limits on the number of slicers available for the ADC. We find that the standard flash ADC architecture can be highly suboptimal in the presence of such constraints. Hence, we explore a ``space-time'' generalization of the flash architecture, allowing a fixed number of slicers to be dispersed in time (sampling phase) as well as space (i.e., amplitude). We show that optimizing the slicer locations, conditioned on the channel, yields significant gains in bit error rate (BER) performance.
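A toy evaluation loop in the spirit of this idea is sketched below: a two-tap channel, a small slicer budget, and a lookup-table detector trained on labeled samples, used to compare an all-slicers-on-one-sample (``flash''-style) placement against the same budget split across two sampling instants. The channel taps, threshold locations, and detector are hand-picked illustrative assumptions, not the optimization procedure developed here.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy two-tap dispersive channel (hypothetical taps): y[n] = b[n] + 0.7*b[n-1] + noise
N, SNR_DB = 20000, 12
bits = rng.integers(0, 2, N) * 2 - 1               # BPSK symbols, +/-1
sigma = 10 ** (-SNR_DB / 20)
y = bits + 0.7 * np.roll(bits, 1) + rng.normal(0.0, sigma, N)
y_next = np.roll(y, -1)            # the following sample also carries b[n] as ISI

def lut_ber(views):
    """BER of a lookup-table detector for b[n], fed the joint outputs of all
    slicers. `views` is a list of (signal, thresholds) pairs; each threshold
    set is a group of slicers observing one sampling instant."""
    codes = np.stack([np.digitize(sig, thr) for sig, thr in views], axis=1)
    half = N // 2
    _, inv = np.unique(codes, axis=0, return_inverse=True)
    inv = inv.ravel()
    table = np.ones(inv.max() + 1)                 # default decision: +1
    for k in range(inv.max() + 1):
        mask = inv[:half] == k                     # train on the first half
        if mask.any():
            table[k] = np.sign(bits[:half][mask].sum()) or 1.0
    return float(np.mean(table[inv[half:]] != bits[half:]))   # test on the rest

# ``flash'': the whole four-slicer budget observes the on-time sample
ber_flash = lut_ber([(y, [-1.0, -0.5, 0.0, 0.5])])
# ``space-time'': the same four slicers split across two sampling instants
ber_st = lut_ber([(y, [-0.5, 0.0]), (y_next, [0.0, 0.5])])
```

In this kind of harness, one would search over threshold locations and time allocations conditioned on the channel taps; here the placements are fixed by hand simply to show the evaluation mechanics.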
Finally, we explore alternative ways of learning convolutional nets for machine vision, yielding networks that are easier to interpret and simpler to implement than the purely supervised nets in current use. In particular,
we investigate a framework that combines a neuro-mimetic front end (designed in collaboration with neuroscientists from the psychology department at UCSB)
together with unsupervised feature extraction based on clustering. Supervised classification,
using a generic support vector machine (SVM), is applied at the end.
We obtain competitive classification results on standard image databases,
beating the state of the art for NORB (uniform-normalized) and approaching it for MNIST.
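The unsupervised stage of such a pipeline can be mimicked on synthetic data as follows: patch extraction, a k-means dictionary, triangle-style encoding with average pooling, and a linear readout. The least-squares readout stands in for the SVM, and the image sizes, patch parameters, and dictionary size are all illustrative assumptions rather than the configuration used on NORB or MNIST.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, cls):
    """Tiny synthetic 8x8 images: class 0 = horizontal bar, class 1 = vertical bar."""
    imgs = rng.normal(0.0, 0.3, (n, 8, 8))
    for im in imgs:
        pos = rng.integers(1, 7)
        if cls == 0:
            im[pos, :] += 1.0
        else:
            im[:, pos] += 1.0
    return imgs

def patches(img, size=4, stride=2):
    """Extract overlapping square patches, flattened to vectors."""
    out = []
    for i in range(0, img.shape[0] - size + 1, stride):
        for j in range(0, img.shape[1] - size + 1, stride):
            out.append(img[i:i + size, j:j + size].ravel())
    return np.array(out)

def kmeans(X, k=8, iters=15):
    """Plain Lloyd's algorithm: the unsupervised dictionary-learning stage."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (lab == j).any():
                C[j] = X[lab == j].mean(0)
    return C

def encode(img, C):
    """Triangle-style activation against the dictionary, average-pooled."""
    P = patches(img)
    d = np.sqrt(((P[:, None, :] - C[None]) ** 2).sum(-1))
    act = np.maximum(0.0, d.mean(1, keepdims=True) - d)
    return act.mean(0)

train = [(im, c) for c in (0, 1) for im in make_images(60, c)]
test = [(im, c) for c in (0, 1) for im in make_images(40, c)]

C = kmeans(np.vstack([patches(im) for im, _ in train]))
Xtr = np.array([encode(im, C) for im, _ in train])
ytr = np.array([c for _, c in train])
Xte = np.array([encode(im, C) for im, _ in test])
yte = np.array([c for _, c in test])

# least-squares linear readout (a stand-in for the SVM classifier)
A = np.hstack([Xtr, np.ones((len(Xtr), 1))])
w, *_ = np.linalg.lstsq(A, 2 * ytr - 1, rcond=None)
pred = (np.hstack([Xte, np.ones((len(Xte), 1))]) @ w > 0).astype(int)
accuracy = float((pred == yte).mean())
```

Note that only the final readout sees labels; the dictionary is learned entirely without supervision, which is what makes the framework amenable to large pools of unlabeled data.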