eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Approximation in Synchronization and Computation

Abstract

Approximate solutions achieve lower complexity and cost than exact solutions by tolerating a limited distortion. This thesis is centered on two primary problems: synchronization and computation. We seek approximate solutions to these two problems throughout the thesis.

The first part of the thesis is concerned with approximation in file synchronization. File synchronization plays an important role in data-sharing applications, where several users own edited versions of an original file and need to synchronize their copies with the original. Previous works have studied bounds and algorithms for exact reconstruction, where the goal is to synchronize the copies with the original file exactly. In contrast, a more challenging scenario is one where the copies need not be perfectly synchronized, i.e. it suffices to reconstruct them to within a pre-defined distance under some notion of distortion. In this part, we address approximate synchronization from an information-theoretic viewpoint. We model editing via a binary deletion channel. The transmitter owns a binary file, which can represent any type of data (text, image, video, etc.), and feeds it to the deletion channel. The receiver obtains an edited version of the string and approximately reconstructs the original sequence using extra information received from the transmitter.
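The editing model above can be sketched in a few lines: an i.i.d. binary deletion channel drops each bit independently with probability d, so the receiver observes a subsequence of the original string. This is a minimal simulation for intuition, not code from the thesis:

```python
import random

def deletion_channel(bits, d, seed=0):
    """Pass a binary string through an i.i.d. deletion channel:
    each bit is deleted independently with probability d."""
    rng = random.Random(seed)
    return "".join(b for b in bits if rng.random() >= d)

original = "1011001110001011"
edited = deletion_channel(original, d=0.2)
# The receiver sees a (typically shorter) subsequence of the original.
```

Note that, unlike a substitution channel, the receiver loses synchronization: it does not know which positions were deleted, which is what makes reconstruction hard.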

In this thesis, we study the approximate synchronization problem, a more relaxed scenario in which the final reconstructed file need not be identical to the original file. We study the case where a binary file undergoes deletion errors at some small deletion rate (so that the total number of deletions is linear in the file length). We derive an upper bound on the optimal rate of information that the transmitter (owner of the original file) must provide to the receiver (owner of the edited file) so that the receiver can reconstruct the original file to within a predefined target distortion.

The second part of the thesis focuses on approximation in computation, with Hamming distance calculation as a specific instance. Performing computation inside the memory unit (rather than fetching data to the processing unit) brings several benefits, e.g. energy and time savings and avoidance of the data-transfer bottleneck. Memristors serve as the memory units, storing data in resistive arrays. Computation in memory is performed by measuring the resistance of the resistive elements, each representing one 0/1 bit. However, noisy measurements challenge previously proposed schemes. We explicitly take the effect of noise into consideration. Confidence bounds quantify how accurately one can perform the computation in the noisy setup compared to the noise-free scenario. Depending on the context of the problem, we model the noise in two different ways. The first is bit-flipping noise, in which resistive components are read as through a binary symmetric channel with a certain error probability. The second is a Gaussian model, in which the output of each measurement is a continuous random variable. We provide confidence bounds for these two noise models, in both single- and multiple-measurement settings.
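The bit-flipping model and the benefit of multiple measurements can be illustrated with a toy simulation: each per-cell comparison is read through a BSC(p), and repeating the readout with a per-cell majority vote drives the error probability down. The function names and parameters are illustrative, not the thesis's actual measurement circuit:

```python
import random

def hamming(x, y):
    """Exact Hamming distance between two equal-length bit lists."""
    return sum(a != b for a, b in zip(x, y))

def noisy_hamming(x, y, p, reps, seed=1):
    """Estimate the Hamming distance when each per-cell readout is
    flipped with probability p (a BSC(p) read). Taking reps independent
    readings per cell and a majority vote suppresses per-cell errors."""
    rng = random.Random(seed)
    est = 0
    for a, b in zip(x, y):
        true_bit = int(a != b)
        votes = sum(true_bit ^ (rng.random() < p) for _ in range(reps))
        est += int(votes > reps / 2)
    return est

rng = random.Random(2)
x = [rng.randint(0, 1) for _ in range(200)]
y = [rng.randint(0, 1) for _ in range(200)]
exact = hamming(x, y)
approx = noisy_hamming(x, y, p=0.1, reps=15)
```

With a single measurement (reps=1) the estimate inherits the full flip probability p per cell; the confidence bounds in this part quantify that trade-off for both the bit-flipping and Gaussian noise models.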
