Coded Content Caching for Wireless Networks
- Author(s): Hachem, Jad
- Advisor(s): Diggavi, Suhas N
The increase in mobile Internet usage has created a need to optimize content delivery in cellular networks.
Emerging technologies such as 5G propose heterogeneous networks, which combine sparsely deployed cellular base stations (BSs) offering wide coverage but low data rates with a dense network of wireless access points (APs) offering small coverage but relatively high data rates.
We envisage equipping the APs with a local cache.
When a group of users request some files, their demands are served by a (common) broadcast from one or more BSs, aided by side information placed a priori in the AP caches.
Therefore, there is a tradeoff between the size of the cache and the size of the broadcast.
Our goal is to design schemes that optimize this tradeoff.
Traditional caching techniques, which have proved efficient in the wired Internet, are insufficient to handle the explosion in multimedia demand in wireless networks.
The seminal work by Maddah-Ali and Niesen [``Fundamental limits of caching,'' IEEE Transactions on Information Theory, May 2014] introduced an information-theoretic framework to study this problem and proposed the so-called ``coded caching'' technique that takes advantage of the broadcast nature of wireless to send coded multicast messages to many users at once, thus greatly improving the transmission rate.
This original work assumed a caching system with an error-free broadcast link between the content library and the users, focused on equally popular files, and assumed that each user has one exclusive local cache.
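The gain from coded multicasting can be seen in the smallest non-trivial instance of the Maddah-Ali–Niesen setup. The following is a minimal sketch for two users, two files, and a cache of one file each; the file contents, variable names, and `xor` helper are illustrative, not taken from the thesis.

```python
# Minimal sketch of the Maddah-Ali--Niesen coded caching scheme:
# K = 2 users, N = 2 files, cache size M = 1 file per user.

def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Library: two equally sized files, each split into two subfiles.
A = b"AAAAAAAA"
B = b"BBBBBBBB"
A1, A2 = A[:4], A[4:]
B1, B2 = B[:4], B[4:]

# Placement (before demands are known): user k caches the k-th subfile
# of every file in the library.
cache = {1: {"A": A1, "B": B1},
         2: {"A": A2, "B": B2}}

# Delivery: suppose user 1 requests A and user 2 requests B.
# A single coded multicast message serves both requests at once:
# user 1 is missing A2 and user 2 is missing B1.
msg = xor(A2, B1)

# Each user cancels the subfile it already caches from the XOR.
user1_A = cache[1]["A"] + xor(msg, cache[1]["B"])   # A1 + (A2^B1)^B1 = A
user2_B = xor(msg, cache[2]["A"]) + cache[2]["B"]   # (A2^B1)^A2 + B2 = B

assert user1_A == A and user2_B == B
```

With the same caches, uncoded delivery would have to send both missing subfiles (one full file's worth); the coded message transmits only half a file, illustrating the rate improvement.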
Inspired by this work, this thesis studies more general coded caching problems within this information-theoretic framework.
Broadly speaking, we focus on non-uniform content popularity as well as more general network topologies.
First, we consider a system where the files desired by the users are not all equally popular.
We adopt a ``multi-level'' popularity model where files are partitioned into multiple popularity classes.
Under this model, we study the behavior of the system as the ratio of the total number of users to the number of caches varies.
Furthermore, we allow a more complex topology by requiring some users to connect to multiple caches at once.
We find approximately optimal strategies for the two extreme cases: when the number of users per cache is very large, and when each cache has exactly one user.
An interesting dichotomy is observed where the approximately optimal strategies required for these two extremes are very different.
Finally, we provide a heuristic for ``discretizing'' common popularity distributions such as Zipf into multiple levels, and numerically evaluate its performance.
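One natural way to discretize a Zipf distribution into levels, sketched below, is to group files whose popularities are within a constant factor of each other into the same class. This is an illustrative heuristic with assumed parameters (`ratio`, `alpha`), not necessarily the exact procedure used in the thesis.

```python
# Illustrative heuristic: partition a Zipf(alpha) popularity profile over
# n_files into levels, starting a new level whenever a file's popularity
# drops below a factor `ratio` of the current level's most popular file.

def discretize_zipf(n_files: int, alpha: float, ratio: float = 2.0):
    pops = [i ** (-alpha) for i in range(1, n_files + 1)]
    total = sum(pops)
    pops = [p / total for p in pops]          # normalized popularities

    levels, current, anchor = [], [1], pops[0]
    for i in range(1, n_files):
        if pops[i] >= anchor / ratio:
            current.append(i + 1)             # comparable popularity
        else:
            levels.append(current)            # close the current level
            current, anchor = [i + 1], pops[i]
    levels.append(current)
    return levels  # each level: file indices of comparable popularity

levels = discretize_zipf(n_files=100, alpha=1.0, ratio=2.0)
print([len(l) for l in levels])   # level sizes roughly double down the tail
```

For a Zipf exponent of 1, the level sizes grow roughly geometrically, so a library of 100 files collapses into only a handful of popularity classes.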
Second, we study the caching problem when we are allowed to assign users to caches after their demands are known, under some restrictions.
Specifically, we divide the caches into several clusters, and we assume that each user can be assigned to one cache from a specific cluster and that each cache can serve no more than one user.
Focusing on a stochastic Zipf popularity model, we find that there are regimes in which coded caching is no longer efficient.
Instead, a strategy that replicates files across clusters and performs uncoded delivery dominates in certain regimes.
We compare these two schemes and find the regimes in which each is more efficient, as a function of cache memory, cluster size, and skewness of popularity.
Finally, we show that each scheme is approximately optimal in some of these regimes.
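The replication-based alternative can be sketched as follows: popular files are replicated across a cluster's caches, each user is matched to a free cache in its cluster that already holds its request, and any cache miss falls back to an uncoded transmission from the BS. The cluster sizes, file ids, and greedy matching below are assumptions for illustration, not the thesis's exact scheme.

```python
# Illustrative sketch of replication plus post-demand user-to-cache
# assignment within one cluster (each cache serves at most one user).

def assign_users(cluster_caches, demands):
    """cluster_caches: list of sets of cached file ids (one per cache);
    demands: requested file id per user. Returns a greedy assignment of
    users to caches and the list of files the BS must send uncoded."""
    free = list(range(len(cluster_caches)))
    assignment = {}
    broadcast = []                 # cache misses served uncoded
    for user, want in enumerate(demands):
        hit = next((c for c in free if want in cluster_caches[c]), None)
        if hit is not None:
            free.remove(hit)
            assignment[user] = hit  # served locally from the cache
        else:
            broadcast.append(want)
    return assignment, broadcast

# A cluster of 3 caches, each replicating the 2 most popular files.
caches = [{1, 2}, {1, 2}, {1, 2}]
assignment, bs_load = assign_users(caches, demands=[1, 2, 5])
print(assignment, bs_load)   # first two users hit caches; file 5 is broadcast
```

Under a skewed Zipf profile most requests concentrate on the replicated files, so the broadcast load stays small without any coding, which is the regime where this scheme can dominate coded caching.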
Third, we return to the uniform popularity model in order to study more complicated networks than the error-free broadcast network.
Our main focus is on Gaussian interference networks, where caches are placed at both the transmitters (BSs) and the receivers (APs).
We propose a separation-based approach that creates separate network and physical layers, with a multiple-multicast message set to act as an interface between them.
At the physical layer, we focus on transmitting this message set across the interference channel; at the network layer, we solve the caching problem using the message set as a set of error-free links replacing the channel.
We show that this architecture is approximately optimal under high SNR.
Among the implications of this result is that placing common information between the transmitters cannot give more than a constant-factor benefit.
Moreover, we show that, when the receiver memory is large, a small number of transmitters is enough to obtain most of the benefits.