Open Access Publications from the University of California

Parallel and high-speed data compression


We address parallel and high-speed lossless data compression. Data compression attempts to reduce the size of an input file by removing redundancy. We consider the class of dictionary (or substitutional) compression methods, which remove redundancy by replacing substrings of the input with references to strings stored in a dictionary. We describe parallel designs for implementing dictionary compression methods on the systolic array and Xnet architectures. These designs are optimal in the sense that they require linear time. On the Parallel Random Access Machine (PRAM) model we give sublinear-time algorithms for dictionary compression using optimal, greedy, and longest-fragment-first (LFF) parsing strategies. Dictionary compression methods output a sequence of fixed-length references. Compression can be improved by replacing the fixed-length references with variable-length codes. We present a collection of fixed-to-variable-length coding schemes which can be implemented to work at very high speeds while still yielding compression that is close to Huffman coding.
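To make the dictionary-compression idea concrete, here is a minimal sequential sketch of greedy parsing against a static dictionary. This is an illustration only: the function name, the static dictionary, and the literal/reference tagging are all assumptions for the example, not the thesis's parallel designs, which target systolic arrays, Xnet, and the PRAM model.

```python
def greedy_parse(text, dictionary):
    """Greedy parsing: at each position, emit a reference to the longest
    dictionary entry that matches, or a literal if nothing matches.

    Returns a list of tuples: ('D', index) is a fixed-length reference
    into the dictionary; ('L', char) is an uncompressed literal.
    """
    out = []
    i = 0
    while i < len(text):
        best = None
        for idx, entry in enumerate(dictionary):
            if text.startswith(entry, i) and (
                best is None or len(entry) > len(dictionary[best])
            ):
                best = idx
        if best is None:
            out.append(('L', text[i]))  # no match: emit the character itself
            i += 1
        else:
            out.append(('D', best))     # match: emit a dictionary reference
            i += len(dictionary[best])
    return out

# Example: "abcabb" parses into three references with this dictionary.
refs = greedy_parse("abcabb", ["ab", "abc", "b"])
```

Greedy parsing is simple but not always optimal in the number of references emitted; the optimal and LFF strategies mentioned above can produce shorter parses, at the cost of more work per input symbol.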
