eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Coding Theoretic Techniques for Analysis and Mitigation of the Effects of Noise on Algorithms

Abstract

Coding theory provides techniques that ensure the error-free transmission and storage of data. This data is often used as input to algorithms that run on hardware, and any information about the algorithm can help augment coding-theoretic techniques to protect the data. This thesis studies the performance of two algorithms under noise and how coding-theoretic techniques can mitigate the effects of that noise.

The first part of the thesis discusses the effects of noise on decoding hardware. In particular, we study the effect of radiation-induced noise on low-density parity-check (LDPC) decoders and model the arrival and duration of errors induced by radiation events. We accomplish this by proposing a multi-state radiation channel that accounts for the duration of, and dependence among, radiation-induced errors. This model subsumes some previously studied cases and allows for a more refined analysis. We introduce a corresponding combined Gallager B/E LDPC decoder and perform a density evolution analysis to characterize its idealized performance. We also present results for the finite-length case.
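To illustrate what a density evolution analysis of a Gallager B decoder computes, the sketch below tracks the message error probability for a regular (dv, dc) LDPC ensemble over a plain binary symmetric channel. This is a minimal illustration only: the (3, 6) ensemble, the fixed flipping threshold b, and the memoryless BSC are stand-in assumptions, not the thesis's multi-state radiation channel or combined B/E decoder.

```python
from math import comb

def gallager_b_de(p0, dv=3, dc=6, b=2, iters=200):
    """Track the variable-to-check message error probability p_t under
    Gallager B decoding of a regular (dv, dc) LDPC ensemble on a BSC
    with crossover probability p0.  Threshold rule: a variable node
    flips its channel value when at least b of its dv-1 incoming check
    messages contradict it."""
    p = p0
    for _ in range(iters):
        # probability that a check-to-variable message is in error
        q = (1.0 - (1.0 - 2.0 * p) ** (dc - 1)) / 2.0
        # probability that at least b of the dv-1 incoming messages are wrong
        flip_bad = sum(comb(dv - 1, j) * q ** j * (1 - q) ** (dv - 1 - j)
                       for j in range(b, dv))
        # probability that at least b of the dv-1 incoming messages are right
        flip_good = sum(comb(dv - 1, j) * (1 - q) ** j * q ** (dv - 1 - j)
                        for j in range(b, dv))
        # variable either keeps a wrong channel value or flips a right one
        p = p0 * (1.0 - flip_good) + (1.0 - p0) * flip_bad
    return p
```

Below the ensemble's decoding threshold (roughly 0.039 for (3, 6) with b = 2) the recursion drives the error probability to zero; above it, the recursion converges to a nonzero fixed point, which is how the idealized decoder performance is characterized.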

The second part of the thesis discusses the effects of noisy feature data on the performance of linear regression. Machine learning requires large amounts of data to train learning algorithms, and this data must be protected from noise when it is stored or transmitted. Until now, most techniques have protected the data agnostic of the application for which it is to be used. We study the effect of Gaussian noise in the feature data on the output of linear regression and present coding-theoretic techniques to reduce it. Using the expected square loss to measure the effect of noise on the regression output under repetition coding, we present a technique to optimally allocate units of redundancy across features so as to minimize the expected loss given the regression coefficients. We also use submodular optimization to jointly optimize the regression parameters and the redundancy allocation at the training stage. We demonstrate the advantage of our technique in optimizing the redundancy allocation for protecting features.
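As one concrete illustration of redundancy allocation under repetition coding, suppose that averaging r_i independently noisy copies of feature i shrinks its noise variance from sigma_i^2 to sigma_i^2 / r_i, so the expected excess square loss is proportional to the sum of beta_i^2 * sigma_i^2 / r_i. Because this objective is separable and convex in each r_i, a greedy marginal-gain rule attains the optimal integer allocation. The loss model and the function below are illustrative assumptions, not the thesis's exact formulation.

```python
import heapq

def allocate_redundancy(beta, sigma, budget):
    """Greedily assign integer repetition counts r_i >= 1 to features to
    minimize sum_i beta_i**2 * sigma_i**2 / r_i subject to sum_i r_i == budget.
    Each extra copy of feature i reduces the loss by a diminishing amount,
    so always taking the largest marginal reduction is optimal."""
    n = len(beta)
    assert budget >= n, "need at least one copy per feature"
    r = [1] * n

    def gain(i):
        # loss reduction from storing one more copy of feature i
        return beta[i] ** 2 * sigma[i] ** 2 * (1.0 / r[i] - 1.0 / (r[i] + 1))

    # max-heap (negated gains) of candidate next copies
    heap = [(-gain(i), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(budget - n):
        _, i = heapq.heappop(heap)
        r[i] += 1
        heapq.heappush(heap, (-gain(i), i))
    return r
```

For example, with coefficients beta = [3, 1], unit noise variances, and a budget of 4 copies, the greedy rule spends both extra copies on the first feature, matching the intuition that features with larger regression coefficients deserve more protection.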
