eScholarship
Open Access Publications from the University of California

UC San Diego Electronic Theses and Dissertations

On communication complexity and universal compression

Abstract

We first consider communication complexity, which arises in applications where a system needs to compute a function whose value depends on data distributed among several parties. Since no single party has access to all the data, the parties need to communicate with each other to accomplish their task. Communication complexity is the least amount of communication required to compute the function. The concept is relevant in many practical applications, such as VLSI circuit design and sensor networks, where one wants to minimize the energy used by the various components of the system. Since the components consume energy to communicate, it is important to minimize the total amount of communication by decreasing the number of signals they exchange. In this dissertation we consider a novel model of communication in which silence is used to convey information. For this model we study the worst-case and average-case complexities of symmetric functions. For binary-input functions we determine the average- and worst-case complexities and describe the protocols achieving them. For functions of non-binary inputs we consider one-round communication, where each party is restricted to communicate in consecutive stages. We analyze the extra amount of communication required by one-round over multi-round communication for symmetric and asymmetric functions. For the special case of ternary-input functions we provide close lower and upper bounds on the worst-case one-round complexity and describe protocols achieving them. Protocols achieving the average-case one-round complexity for ternary-input functions are also described. These protocols can be generalized to inputs of arbitrary size.

We then consider universal compression, i.e., compression when the statistics of the data are unknown, for data drawn from sources with memory over large alphabets. It has long been known that data generated by sources over large alphabets, such as i.i.d. and Markov distributions, incur an unbounded number of extra bits over their entropy. We consider a recently introduced concept of compressing a string by separately conveying its pattern: the order in which its symbols appear. For example, the pattern of "abracadabra" is 12314151231. It was shown that the patterns of i.i.d. strings can be losslessly compressed with diminishing per-symbol redundancy. We first consider the pattern redundancy of distributions with memory. We show that patterns of Markov sources cannot be compressed with diminishing redundancy. We next consider hidden Markov distributions and establish close lower and upper bounds on the pattern redundancy of HMM sources, showing in particular that they can be compressed as well as when their source distribution is known.
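To make the pattern notion concrete, here is a minimal Python sketch (the function name and structure are illustrative, not taken from the dissertation) that computes the pattern of a string by assigning each distinct symbol the next unused index in order of first appearance; applied to "abracadabra" it reproduces the pattern 1 2 3 1 4 1 5 1 2 3 1 quoted above.

```python
def pattern(s):
    """Return the pattern of s: each symbol is replaced by the index of its
    first appearance (1 for the first distinct symbol, 2 for the second, ...)."""
    first_seen = {}   # symbol -> index of its first appearance
    result = []
    for symbol in s:
        if symbol not in first_seen:
            first_seen[symbol] = len(first_seen) + 1  # next unused index
        result.append(first_seen[symbol])
    return result

# Example from the abstract: "abracadabra" -> 1 2 3 1 4 1 5 1 2 3 1
print(pattern("abracadabra"))  # [1, 2, 3, 1, 4, 1, 5, 1, 2, 3, 1]
```

The pattern discards the identity of the symbols but retains their repetition structure, which is what allows it to be compressed with diminishing per-symbol redundancy even over large alphabets.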
