A modern version of the idea that the area of event horizons gives $4G$ times
an entropy is the Hubeny-Rangamani Causal Holographic Information (CHI)
proposal for holographic field theories. Given a region $R$ of a holographic
QFT, CHI computes $A/4G$ on a certain cut of an event horizon in the
gravitational dual. The result is naturally interpreted as a coarse-grained
entropy for the QFT. CHI is known to be finitely greater than the fine-grained
Hubeny-Rangamani-Takayanagi (HRT) entropy when $\partial R$ lies on a Killing
horizon of the QFT spacetime, and in this context satisfies other non-trivial
properties expected of an entropy. Here we present evidence that it also
satisfies the quantum null energy condition (QNEC), which bounds the second
derivative of the entropy of a quantum field theory on one side of a
non-expanding null surface by the flux of stress-energy across the surface. In
particular, we show CHI to satisfy the QNEC in 1+1 holographic CFTs when
evaluated in states dual to conical defects in AdS$_3$. This surprising result
further supports the idea that CHI defines a useful notion of coarse-grained
holographic entropy, and suggests unprecedented bounds on the rate at which
bulk horizon generators emerge from a caustic. To supplement our motivation, we
include an appendix deriving a corresponding coarse-grained generalized second
law for 1+1 holographic CFTs perturbatively coupled to dilaton gravity.

We consider the universal sector of a $d$-dimensional large-$N$
strongly-interacting holographic CFT on a black hole spacetime background $B$.
When our CFT$_d$ is coupled to dynamical Einstein-Hilbert gravity with Newton
constant $G_{d}$, the combined system can be shown to satisfy a version of the
thermodynamic Generalized Second Law (GSL) at leading order in $G_{d}$. The
quantity $S_{CFT} + \frac{A(H_{B, \text{perturbed}})}{4G_{d}}$ is
non-decreasing, where $A(H_{B, \text{perturbed}})$ is the (time-dependent) area
of the new event horizon in the coupled theory. Our $S_{CFT}$ is the notion of
(coarse-grained) CFT entropy outside the black hole given by causal holographic
information -- a quantity in turn defined in the AdS$_{d+1}$ dual by the
renormalized area $A_{ren}(H_{\rm bulk})$ of a corresponding bulk causal
horizon. A corollary is that the fine-grained GSL must hold for finite
processes taken as a whole, though local decreases of the fine-grained
generalized entropy are not obviously forbidden. Another corollary, given by
setting $G_{d} = 0$, states that no finite process taken as a whole can
increase the renormalized free energy $F = E_{out} - T S_{CFT} - \Omega J -
\Phi Q$, with $T, \Omega, \Phi$ constants set by $H_B$. This latter corollary
constitutes a second law for appropriate non-compact AdS event horizons.
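Schematically, and only as a sketch in the notation already introduced, the free-energy corollary follows by combining the coarse-grained GSL with a physical-process first law for the perturbed horizon. Writing the GSL as
$$
\Delta S_{CFT} + \frac{\Delta A(H_{B,\text{perturbed}})}{4 G_d} \;\ge\; 0,
$$
and noting that energy, angular momentum, and charge crossing the horizon are subtracted from the exterior quantities, the first law gives
$$
\frac{T \, \Delta A}{4 G_d} \;=\; -\bigl(\Delta E_{out} - \Omega\, \Delta J - \Phi\, \Delta Q \bigr),
$$
so that the GSL becomes $\Delta E_{out} - T\, \Delta S_{CFT} - \Omega\, \Delta J - \Phi\, \Delta Q \le 0$, i.e. $\Delta F \le 0$ with $F = E_{out} - T S_{CFT} - \Omega J - \Phi Q$ as above.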

In this paper, we design a robust scheduling algorithm that guarantees worst-case
optimal performance when only coarse-grained channel state information (i.e.,
bounds on channel errors, but not the fine-grained error pattern) is available.
To solve this problem, we consider two coarse-grained channel error models and
take a zero-sum game-theoretic approach, in which the scheduler and the channel
error act as non-cooperative adversaries in the scheduling process. Our results
show that in the heavy channel error case, the optimal scheduler takes a
threshold form: to maximize system revenue, it does not schedule a flow if the
price the flow is willing to pay is too small. Among the scheduled flows, the
scheduler selects a flow with probability inversely proportional to the flow's
price, so that the risk of being caught by the channel error adversary is
minimized. We also show that in the mild channel error model, the robust
scheduling policy exhibits a balanced trade-off between greedy and conservative
decisions: the scheduler is likely to make a greedy decision when it judges the
current risk of encountering the channel error adversary to be small. Robust
scheduling therefore does not always imply conservative decisions; in some
scenarios the scheduler is willing to take "risks" in the expectation of higher
gain. Our solution also shows that probabilistic scheduling can achieve higher
worst-case performance than traditional deterministic policies. Finally, these
results demonstrate the feasibility of a probabilistic approach to coping with
dynamic channel error conditions.
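The heavy-error-case policy described above can be sketched in code. The following is a minimal illustration, not the paper's actual algorithm: the function names, the `threshold` parameter, and the exact normalization are assumptions introduced purely to make the threshold form and the inverse-price randomization concrete.

```python
import random


def robust_schedule_probs(prices, threshold):
    """Hypothetical sketch of the threshold-form robust scheduler.

    Flows whose price falls below `threshold` are never scheduled
    (their revenue does not justify the risk); among the remaining
    flows, the scheduling probability is inversely proportional to
    the flow's price, normalized to sum to 1, mimicking the policy
    that hedges against the channel-error adversary.
    """
    eligible = {flow: p for flow, p in prices.items() if p >= threshold}
    if not eligible:
        return {}
    # Inverse-price weights: cheaper eligible flows are picked more often.
    weights = {flow: 1.0 / p for flow, p in eligible.items()}
    total = sum(weights.values())
    return {flow: w / total for flow, w in weights.items()}


def pick_flow(probs, rng=random):
    """Sample one flow to schedule according to the mixed strategy."""
    flows = list(probs)
    return rng.choices(flows, weights=[probs[f] for f in flows], k=1)[0]
```

For example, with prices `{"a": 1.0, "b": 2.0, "c": 0.1}` and a threshold of `0.5`, flow `c` is excluded and flow `a` is scheduled more often than the pricier flow `b`.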