The enormous amount of data embodied in a video signal places one of the heaviest burdens on existing wireless communication systems. Adopting an efficient video transmission strategy is thus crucial for delivering video data at the lowest possible bit rate and the highest possible quality. Unequal error protection (UEP) is a powerful tool in this regard: its goal is to assign stronger protection to the more important portions of the video data and weaker protection to the less important ones. Efficient video delivery techniques become even more important when 3D video content is transmitted over a wireless channel, since it contains roughly twice as much data as 2D video. In this dissertation, we consider the UEP problem for transmission of 3D video over wireless channels. The proposed UEP techniques entail relatively high computational complexity, which makes them better suited to video-on-demand delivery, where the time-consuming computations can be carried out offline at the transmitter/encoder side.
To adopt UEP for 3D video, we consider a general joint source-channel coding (JSCC) problem. Solving the JSCC problem yields the optimum amount of 3D video compression as well as the optimum forward error correction (FEC) code rates used for UEP. We first need to estimate the perceived quality of the reconstructed video at the receiver. The lack of a good objective quality metric for 3D video makes adopting UEP more challenging than for 2D video. Fortunately, quality thresholds for 3D video have been derived in the literature through experimental tests based on the peak signal-to-noise ratio (PSNR). These thresholds allow us to formulate the JSCC optimization problem using PSNR in a straightforward way that differs from the typical optimization problems in the literature: we place the constraints on the quality of the reconstructed 3D video and minimize the total bit rate, as sketched below. We adopt the multiview video coding (MVC) extension of H.264/AVC. We also propose a scalable variant of MVC and formulate and solve the corresponding JSCC optimization problem. We show that significant gains are obtained when the proposed UEP scheme is combined with asymmetric coding.
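To make the structure of this formulation concrete, the following sketch casts the rate-minimization problem with quality constraints in a generic form; the notation is chosen here for exposition and is not the dissertation's exact formulation. Let $Q_v$ denote the quantization level of view $v$, $R_s(Q_v)$ the resulting source rate, $r_v$ the FEC code rate protecting that view, and $T_v$ the experimentally derived PSNR threshold:
\[
\begin{aligned}
\min_{\{Q_v\},\,\{r_v\}} \quad & \sum_{v \in \{L,R\}} \frac{R_s(Q_v)}{r_v} \\
\text{subject to} \quad & \mathbb{E}\!\left[\mathrm{PSNR}_v\right] \ge T_v, \qquad v \in \{L, R\},
\end{aligned}
\]
where the expectation is taken over the channel-loss statistics. Under asymmetric coding, the threshold of one view may be set lower than the other's, which reduces the rate that view requires.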
We also tackle the UEP problem for the video plus depth (V+D) format. We employ the structural similarity (SSIM) metric for designing UEP for V+D, since it has been shown that PSNR does not properly characterize the perceived quality of a 3D video represented in the V+D format. Moreover, the synthesized right view always exhibits a large PSNR loss (even in the absence of compression), which prevents us from using the asymmetric coding PSNR thresholds. This motivated us to adopt the classical JSCC problem formulation, in which the goal is to maximize the quality of the reconstructed left and right views subject to a constraint on the total number of source and FEC bits (see the sketch below). We show that UEP provides significant gains over equal error protection. We also derive several interesting results; some of them are consistent with what has already been published in the literature and some are not. We show that the reason for this inconsistency is that we solve the UEP problem in a more general setting, which yields novel solutions.
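For comparison, a minimal sketch of this classical, rate-constrained JSCC formulation, again in illustrative notation rather than the dissertation's: the source bits $R_{s,i}$ and FEC bits $R_{c,i}$ of the video and depth components $i \in \{V, D\}$ are chosen to maximize the expected quality of the two reconstructed views under a total bit budget $R_{\max}$:
\[
\begin{aligned}
\max_{\{R_{s,i}\},\,\{R_{c,i}\}} \quad & \mathbb{E}\!\left[\mathrm{SSIM}_L + \mathrm{SSIM}_R\right] \\
\text{subject to} \quad & \sum_{i \in \{V,D\}} \left(R_{s,i} + R_{c,i}\right) \le R_{\max}.
\end{aligned}
\]
Whether the qualities of the two views are combined as a sum, an average, or some other function is a modeling choice not fixed by this sketch.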
Lastly, we focus on UEP for video broadcasting over wireless channels. Our goal here is to design a UEP-based video broadcasting system that serves all users within the service area of a base station well. A service area contains heterogeneous users with different display resolutions operating at different bit rates. Spatially scalable video is an excellent compression format for this scenario, since it allows each user to decode the portion of the scalable bit stream that matches its operating bit rate and display resolution. We tackle this problem for a multiple-input multiple-output (MIMO) channel, which enables us to exploit either spatial diversity or spatial multiplexing in a multipath fading channel to increase channel reliability or throughput, respectively. We employ a spatial diversity technique, namely the Alamouti code, to encode the base layer, and a spatial multiplexing technique, namely V-BLAST, to encode the enhancement layer. By controlling the power allocation between the base layer and the enhancement layer, we control the level of protection provided to each. We also show that adopting scalable video in our system yields much higher gains than non-scalable video.
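As a brief illustration of the two MIMO transmission modes, in standard textbook notation rather than the specifics of our system: over two symbol periods and two transmit antennas, the Alamouti code sends
\[
\mathbf{X} =
\begin{bmatrix}
s_1 & s_2 \\
-s_2^{*} & s_1^{*}
\end{bmatrix},
\]
with rows indexed by time and columns by antenna, providing full transmit diversity at a rate of one symbol per channel use, whereas V-BLAST transmits independent symbols from the two antennas simultaneously, doubling throughput at the cost of transmit diversity. If $P$ denotes the total transmit power and $\alpha \in [0,1]$ a power-allocation factor (a symbol introduced here for illustration), assigning power $\alpha P$ to the Alamouti-coded base layer and $(1-\alpha)P$ to the V-BLAST-coded enhancement layer makes $\alpha$ the knob that trades protection of the base layer against throughput of the enhancement layer.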