This dissertation investigates communication over the binary symmetric channel with noiseless feedback of the received symbols from the decoder to the encoder. The binary symmetric channel takes a binary symbol as input and produces a binary symbol as output, which may differ from the input symbol. The channel is symmetric in the sense that the probability that the output symbol differs from the input symbol is the same for both input symbols. The general communication problem consists of reliably relaying a message from a source (where it is generated) to a destination (where it is needed) while using the least possible system resources. Reliability is defined here as a small error probability, the probability that the message at the destination differs from that at the source. System resources include forward channel symbol transmissions, encoder and decoder complexity, and possibly other resources that may carry a cost, such as the number of times the feedback channel is accessed for a feedback transmission.
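The channel model above can be made concrete with a short sketch. The function below is an illustrative simulation of a binary symmetric channel with crossover probability p (the function name and interface are assumptions for illustration, not part of the dissertation's framework), followed by an empirical check of the symmetry property:

```python
import random

def bsc(bit, p, rng):
    """Binary symmetric channel: flip the input bit with crossover probability p."""
    return bit ^ int(rng.random() < p)

# Empirically verify the symmetry: the flip probability is the same
# regardless of whether the input symbol is 0 or 1.
rng = random.Random(0)
p = 0.1
n = 100_000
flip_rate_0 = sum(bsc(0, p, rng) != 0 for _ in range(n)) / n
flip_rate_1 = sum(bsc(1, p, rng) != 1 for _ in range(n)) / n
```

Both empirical flip rates concentrate around p, which is the defining property of the symmetric channel.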
The system operates as follows: the source delivers its message to an encoder, which computes channel input symbols; the channel produces output symbols that are noisy versions of the input symbols; a decoder uses the channel output symbols to compute an estimate of the message at the encoder; and a noiseless feedback channel delivers the output symbols to the encoder, which affords the encoder all the information available to the decoder. The objectives of this dissertation are (1) to design encoding methods that can be implemented in simulations; (2) to explore simplifications that make the implementations more efficient; (3) to analyze the expected rate achievable with all the proposed encoding methods, including the simplifications; (4) to analyze converse bounds, the lowest rates that cannot be achieved with the system while satisfying an error probability constraint; and (5) to explore communication under additional constraints, such as source causality constraints and feedback sparsity constraints.
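As a minimal illustration of the encoder-channel-decoder-feedback loop described above, the toy scheme below transmits a single bit by retransmitting it until the decoder's log-likelihood ratio crosses a confidence threshold. This is a simple textbook-style sequential test, not any of the encoders proposed in this dissertation; the function name, the threshold rule, and the parameter choices are assumptions for illustration only.

```python
import math
import random

def send_bit_with_feedback(bit, p, threshold, rng):
    """Toy variable-length scheme: retransmit one bit over a binary symmetric
    channel with crossover probability p until the decoder's log-likelihood
    ratio (LLR) exceeds `threshold` in magnitude. Because feedback is
    noiseless, the encoder can track the decoder's state exactly."""
    llr = 0.0                             # decoder's LLR for bit = 1 vs bit = 0
    step = math.log((1 - p) / p)          # LLR contribution of one channel output
    n_uses = 0
    while abs(llr) < threshold:
        y = bit ^ int(rng.random() < p)   # binary symmetric channel use
        llr += step if y == 1 else -step  # Bayes update toward the observed y
        n_uses += 1                       # count forward channel uses
    estimate = int(llr > 0)               # decode to the more likely bit
    return estimate, n_uses
```

Raising the threshold lowers the error probability at the cost of more channel uses, which mirrors the rate-reliability trade-off studied in the later chapters.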
Chapter 1 lays out the background and motivation of the research problems studied in this dissertation and summarizes previous results on related problems. Chapter 1 then briefly summarizes the contents of each chapter.
Chapter 2 provides efficient algorithms and a simulation framework that implements previously proposed encoders as well as new ones. Simulation results validate previous achievability bounds and motivate tighter ones. Chapter 2 also proposes a simpler encoding rule that greatly lowers runtime and memory complexity while exhibiting a rate performance indistinguishable from that of the encoders with the highest existing achievability bounds. The performance of the new encoder is analyzed further in Chapter 3.
Chapter 3 presents a new analysis of the expected blocklength needed to transmit a fixed-length information sequence with bounded error probability. The new analysis relaxes the sufficient encoding constraints that guarantee an expected rate above the highest achievability bounds previously developed for this model. To tighten the upper bound on the expected duration of the first phase of a two-phase process, the chapter analyzes a ``surrogate process'' for which a tighter bound can be shown. The surrogate process is carefully constructed from the original process so that its decoding time upper bounds that of the original process. This property guarantees that the tighter bound on the surrogate process applies to the original one and yields a significantly higher achievability bound. The bounds are tightened further by jointly optimizing both phases of the two-phase analysis used to obtain the original bounds. A proof that the simple encoder of Ch. 2 satisfies the relaxed constraints is provided. Chapter 3 then proposes a converse bound, an upper bound on the highest expected rate achievable by an encoder that enforces the stopping rule used by the proposed encoders.
Chapter 4 extends the study of feedback codes over the binary symmetric channel to the ``causal encoding'' setting, where the source information sequence becomes progressively available during transmission, in contrast to the traditional setting where the entire source sequence is available before transmission begins. In causal encoding, the encoder seeks to minimize the average decoding delay under the same frame error rate constraint considered in Ch. 2 and Ch. 3. The sub-block combining algorithm is proposed as a causal encoder that starts the transmission with a segment of the information sequence and adds new segments as they become available. The chapter identifies lower bounds on the expected decoding time imposed by the channel and the source, as well as the region where a causal encoder may outperform existing non-causal encoders. Simulation results show that, under certain conditions, the sub-block combining algorithm outperforms non-causal encoders in average decoding time, both in their original form and with natural modifications that make them better causal encoders. The performance of the sub-block combining algorithm is improved further by a method that optimizes the block sizes for the specific operating point set by the source and channel symbol rates.
Chapter 5 studies the ``sparse feedback'' setting, where feedback symbols are available to the transmitter only at sparse time instances instead of before the transmission of each symbol. A new encoding rule is introduced that also satisfies the relaxed constraints proposed in Ch. 3. The ``look-ahead algorithm'' is then proposed to satisfy the encoding constraints for a few transmissions in advance without additional feedback. Simulation results show that the look-ahead algorithm attains an average feedback delay that increases with message length, from slightly above one transmission at a message length of about ten bits to about five or six transmissions when the message size reaches 80 bits. The sub-block combining algorithm of Ch. 4 is modified to operate on several blocks simultaneously and used as a sparse-feedback encoder. This algorithm exhibits much lower complexity, which makes it suitable for message sizes of up to a few hundred bits, which are not feasible with the look-ahead algorithm.
Chapter 6 provides the conclusions of this dissertation and highlights future research directions.