Open Access Publications from the University of California

UC San Diego Electronic Theses and Dissertations

Signal processing for 3D videos and displays


Consumer 3D television is poised to take off in the near future. Any 3D technology that promises a natural viewing experience must reproduce a continuous lightfield, akin to the real visual world, in a form suitable for the human eyes to sample. The input to such a goggles-free 3D display technology is a set of discrete views captured with a set of cameras; the display then recreates the continuous 3D scene, or lightfield, from these views. Many issues arise in the 3D scene capture and display pipeline, some of which are analyzed and solved from a signal processing perspective in this dissertation. First, we present a frequency domain analysis of the 3D scene sampling problem for multiview displays. Next, we consider the reconstruction of a lightfield on the display from a set of views. Two issues arise here: aliasing due to undersampling (too few views), and light leakage due to the physical construction of the display. We analyze both issues and provide a joint solution, such that the final reconstructed 3D image is free of aliasing artifacts and also looks sharp on the 3D display. For any practical 3D video system, compression is a crucial issue. There have been recent efforts to extend standard 2D H.264 video coding to the 3D scenario; we propose an extension to such a coding system that takes the display properties into account. Low spatial resolution is an issue in multiview displays due to view multiplexing. Moreover, a lightfield signal analysis shows that objects away from the zero disparity plane appear aliased on the multiview display, and these become blurry when subjected to antialias prefiltering, resulting in a further loss of spatial resolution.
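The disparity-dependent antialiasing described above can be illustrated with a minimal sketch: pixels whose disparity lies outside the display's alias-free range are blurred in proportion to the excess, while the zero-disparity plane stays sharp. The function below is a hypothetical illustration (a simple box blur with made-up parameters `d_max` and `max_kernel`), not the dissertation's actual prefilter.

```python
import numpy as np

def prefilter_view(view, disparity, d_max=1.0, max_kernel=7):
    """Illustrative sketch: blur each pixel in proportion to how far its
    disparity lies outside the display's alias-free range [-d_max, d_max].
    `view` is a 2D grayscale image, `disparity` a same-shaped map.
    (A box blur stands in for the actual display-adapted antialias filter.)"""
    out = view.astype(float).copy()
    h, w = view.shape
    for y in range(h):
        for x in range(w):
            # Blur radius grows with the disparity excess beyond d_max.
            excess = max(abs(disparity[y, x]) - d_max, 0.0)
            r = min(int(round(excess)), max_kernel // 2)
            if r > 0:
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                out[y, x] = view[y0:y1, x0:x1].mean()
    return out
```

Content near the zero-disparity plane passes through untouched, which is why such prefiltering trades sharpness only in the out-of-range depth regions.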
We propose two alternative techniques for view resizing that go beyond conventional sampling theorems, by extending content-adaptive resizing algorithms to the 3D scenario and by exploiting the interplay between 2D and 3D visual cues. The advantage of these algorithms is that they retain high 2D spatial resolution without compromising much of the 3D depth perception provided by multiview displays. Finally, we revisit the 3D scene sampling problem from the display point of view, i.e., the problem of processing a set of camera views to make them suitable for a given multiview 3D display. To make this possible, we address the view interpolation problem with limited geometry information using a multiscale overcomplete operator framework. We also propose a metric for checking the quality of geometry-based view interpolation in non-occluded regions. We conclude with some comments on the theoretical and practical feasibility of such a view interpolation metric.
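The core idea behind geometry-based view interpolation can be sketched as a disparity-driven forward warp: a reference view is shifted by a fraction of its per-pixel disparity to synthesize an intermediate camera position, with disocclusions left as holes. This toy example (names and interface are assumptions for illustration) is far simpler than the multiscale overcomplete operator framework developed in the dissertation.

```python
import numpy as np

def interpolate_view(left, disparity, alpha):
    """Illustrative sketch: forward-warp the left view by a fraction
    `alpha` (0..1) of its per-pixel horizontal disparity to synthesize
    an intermediate view. Disoccluded pixels remain NaN (holes), which
    is exactly where geometry-based interpolation quality degrades."""
    h, w = left.shape
    out = np.full((h, w), np.nan)
    for y in range(h):
        for x in range(w):
            xt = int(round(x + alpha * disparity[y, x]))
            if 0 <= xt < w:
                out[y, xt] = left[y, x]
    return out
```

In non-occluded regions the warp is well defined, which is why a quality metric restricted to those regions is a natural first step.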
