UC San Diego
Visual exploration in volume rendering for multi-channel data
- Author(s): Kim, Han Suk; et al.
Volume rendering has long been an important tool for understanding scientific 3D data. Traditional volume data contain only one value per voxel, but light microscopy captures multiple protein expressions labeled with different fluorophores. This technology generates multi-channel data, where several values are defined at each voxel. Because traditional volume rendering systems have assumed a single channel of information, i.e., opacity, there is a significant gap between the technology available for single-channel and for multi-channel data, especially in visual exploration methods for better understanding the data. In this dissertation, we bridge this gap by investigating the characteristics of multi-channel data. Visual exploration in volume rendering has been approached in many ways, but two of the most critical and effective approaches for multi-channel data are transfer function design and viewpoint selection.

We first propose a new method for multi-channel transfer function design. The main challenge in designing multi-dimensional transfer functions is the dimensionality of the domain. Multi-channel data often contain more than three values per voxel, which prevents users from directly manipulating color and opacity over a multi-dimensional domain. Moreover, adding attributes such as gradient, second-order derivatives, or textural information further increases the dimension of the domain. We apply recently developed nonlinear dimensionality reduction algorithms to reduce the high dimensionality of the domain; in this work, we use Isomap and Locally Linear Embedding as well as Principal Component Analysis.

Furthermore, we present a real-time viewpoint selection algorithm for multi-channel data. Because the transfer function dramatically changes the appearance of multi-channel data, users see different objects in the data depending on the transfer function being explored. This characteristic of multi-channel visualization necessitates real-time viewpoint selection.
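To make the transfer-function-domain reduction above concrete, here is a minimal sketch of the linear baseline (PCA via SVD) applied to toy per-voxel channel vectors. The data, sizes, and function name are illustrative assumptions, not the dissertation's implementation; the nonlinear methods it uses (Isomap, Locally Linear Embedding) would replace this linear projection step.

```python
import numpy as np

def pca_embed(X, n_components=2):
    """Project rows of X onto the top principal components.
    Linear baseline only; the dissertation additionally uses
    nonlinear Isomap and Locally Linear Embedding."""
    Xc = X - X.mean(axis=0)  # center each channel
    # SVD of centered data: rows of Vt are principal directions,
    # ordered by decreasing explained variance
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
# Toy stand-in: 500 voxels, each with 5 channel values
# (e.g., intensities of 5 fluorophores)
voxels = rng.random((500, 5))
emb = pca_embed(voxels)
print(emb.shape)  # each voxel now has 2-D transfer-function coordinates
```

After such a reduction, users can paint color and opacity on the 2-D embedding instead of the original five-dimensional channel space.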
Automatic viewpoint selection in the course of transfer function exploration enables users to quickly understand the data. Our algorithm runs in under a second on various volume data sets, which is about 40 to 80 times faster than previous approaches. This allows the algorithm to be integrated with real-time transfer function exploration.
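A viewpoint selection loop of this kind can be sketched with an entropy-style quality score: sample candidate view directions, splat post-transfer-function opacities onto each view's image plane, and pick the direction whose projection is most "informative". All names, sizes, and the scoring choice below are illustrative assumptions, not the dissertation's algorithm.

```python
import numpy as np

def fibonacci_sphere(n):
    """n roughly uniform candidate view directions on the unit sphere."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def view_entropy(points, opacity, direction, bins=16):
    """Shannon entropy of opacity splatted onto a plane orthogonal to
    `direction` -- a crude stand-in for a viewpoint-quality measure."""
    d = direction / np.linalg.norm(direction)
    # Build an orthonormal basis (u, v) spanning the image plane
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    img, _, _ = np.histogram2d(points @ u, points @ v, bins=bins, weights=opacity)
    p = img.ravel() / img.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
points = rng.normal(size=(2000, 3))  # toy voxel positions
opacity = rng.random(2000)           # toy opacities after a transfer function
dirs = fibonacci_sphere(64)
scores = np.array([view_entropy(points, opacity, d) for d in dirs])
best = dirs[np.argmax(scores)]       # highest-entropy candidate view
print(best, scores.max())
```

Because the opacities change every time the transfer function changes, a real-time system must re-evaluate the candidate scores fast enough to keep up with interactive exploration, which is the constraint the dissertation's algorithm targets.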