The 3D Morse-Smale complex is a fundamental topological construct that partitions the domain of a real-valued function into regions of uniform gradient-flow behavior. In this paper, we consider the construction and selective presentation of cells of the Morse-Smale complex and their use in the analysis and visualization of scientific datasets. We take advantage of the fact that cells of different dimension often characterize different types of features present in the data. For example, critical points pinpoint changes in topology by showing where components of the level sets are created, destroyed, or modified in genus. Edges of the Morse-Smale complex extract filament-like features that are not explicitly modeled in the original data. Interactive selection and rendering of portions of the Morse-Smale complex introduce fundamental data management challenges because the complex is unstructured even for structured inputs. We describe a data structure that stores the Morse-Smale complex and allows efficient selective traversal of regions of interest. Finally, we illustrate the practical use of this approach by applying it to cryo-electron microscopy data of protein molecules.
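The role of critical points described above can be illustrated on sampled data. The sketch below is illustrative only: the field, the grid, and the simple sign-change test are our own assumptions, not the paper's algorithm. It classifies interior grid vertices of a 2D piecewise-linear field as minima, maxima, or saddles by counting sign changes of the function around each vertex.

```python
import numpy as np

def classify_critical_points(f):
    """Classify interior vertices of a 2D sampled scalar field.

    Uses the sign changes of f(neighbor) - f(vertex) around the 8-neighbor
    ring, a standard piecewise-linear criterion: 0 sign changes means an
    extremum, 4 sign changes means a simple saddle.
    """
    rows, cols = f.shape
    # 8 neighbors, listed in cyclic order around the vertex
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    found = {}
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            signs = [np.sign(f[i + di, j + dj] - f[i, j]) for di, dj in ring]
            changes = sum(signs[k] != signs[(k + 1) % 8] for k in range(8))
            if changes == 0:
                found[(i, j)] = "minimum" if signs[0] > 0 else "maximum"
            elif changes == 4:
                found[(i, j)] = "saddle"
    return found

# A made-up smooth field with maxima, minima, and saddles in the interior.
x = np.linspace(0.0, 5.0, 80)
X, Y = np.meshgrid(x, x, indexing="ij")
points = classify_critical_points(np.sin(1.3 * X) * np.sin(1.7 * Y))
```

In a full Morse-Smale computation these critical points become the 0-cells of the complex, with edges traced along gradient paths between them.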

We consider the problem of generating a map between two triangulated meshes, M and M', with arbitrary and possibly differing genus. This problem has rarely been tackled in its full generality. Early schemes considered only topological spheres. Recent algorithms allow inputs with an arbitrary number of tunnels but require M and M' to have equal genus, mapping tunnel to tunnel. Other schemes allow more general inputs but are not guaranteed to work, and their authors do not characterize the input meshes that can be processed successfully. Moreover, these techniques have difficulty dealing with coarse meshes that have many tunnels. In this paper we present the first robust approach to building a map between two meshes of arbitrary, unequal genus. We also provide a simplified method for setting the initial alignment between M and M', reducing reliance on landmarks and allowing the user to select "landmark tunnels" in addition to the standard landmark vertices. After computing the map, we automatically derive a continuous deformation from M to M', using a variational implicit approach to describe the evolution of non-landmark tunnels. Overall, we achieve a cross-parameterization scheme that is provably robust, in the sense that it can map M to M' without constraints on their relative genus or on the density of the triangulation with respect to the number of tunnels. To demonstrate the practical effectiveness of our scheme, we provide a number of examples of inter-surface parameterizations between meshes of different genus and shape.

We introduce local and global comparison measures for a collection of $k \leq d$ real-valued smooth functions on a common $d$-dimensional Riemannian manifold. For $k = d = 2$ we relate the measures to the set of critical points of one function restricted to the level sets of the other. The definition of the measures extends to piecewise linear functions for which they are easy to compute. The computation of the measures forms the centerpiece of a software tool which we use to study scientific datasets.
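For intuition on the $k = d = 2$ case: the critical points of one function restricted to the level sets of the other are exactly the points where the two gradients are parallel, so the magnitude of the gradient cross product is one natural local comparison density. The sketch below computes such a quantity on made-up example fields; it is an illustration of the underlying geometry, not necessarily the paper's precise definition of the measures.

```python
import numpy as np

# Example fields on a uniform grid. For f = x^2 + y^2 and g = x, the cross
# product of the gradients is |fx*gy - fy*gx| = |2y|: it vanishes exactly on
# the line y = 0, where f restricted to each vertical level set g = const
# attains its minimum.
x = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, x, indexing="ij")
f = X**2 + Y**2
g = X

# Finite-difference gradients; axis 0 is the x-direction with indexing="ij".
fx, fy = np.gradient(f, x, x)
gx, gy = np.gradient(g, x, x)
comparison = np.abs(fx * gy - fy * gx)
```

Thresholding `comparison` near zero recovers the restricted critical points; integrating it over the domain gives a single global number comparing the two functions.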

Visualization is a highly data-intensive science: visualization algorithms take as input vast amounts of data produced by simulations or experiments and transform that data into imagery. As we shall explore in this chapter, visualization reveals a somewhat different view of scientific data management challenges than those examined elsewhere in this book. For example, a data ordering and storage layout that works well for saving data from memory to disk may not be well suited to subsequent visual data analysis algorithms. This chapter presents four broad topic areas under this general rubric: (1) a view of SDM-related issues from the perspective of implementing a production-quality, parallel-capable visual data analysis infrastructure; (2) novel data storage formats for multi-resolution, streaming data movement, access, and use by post-processing tools; (3) data models, formats, and APIs for performing efficient I/O for both simulations and post-processing tools, along with a discussion of issues and previous work in this space; and (4) how combining state-of-the-art techniques from scientific data management and visualization enables visual data analysis of truly massive datasets.

Topology provides a foundation for the development of mathematically sound tools for processing and exploration of scalar fields. Existing topology-based methods can be used to identify interesting features in volumetric data sets, to find seed sets for accelerated isosurface extraction, or to treat individual connected components as distinct entities for isosurfacing or interval volume rendering. We describe a framework for direct volume rendering based on segmenting a volume into regions of equivalent contour topology and applying separate transfer functions to each region. Each region corresponds to a branch of a hierarchical contour tree decomposition, and a separate transfer function can be defined for it. The novel contributions of our work are 1) a volume rendering framework and interface where a unique transfer function can be assigned to each subvolume corresponding to a branch of the contour tree, 2) a runtime method for adjusting data values to reflect contour tree simplifications, 3) an efficient way of mapping a spatial location into the contour tree to determine the applicable transfer function, and 4) an algorithm for hardware-accelerated direct volume rendering that visualizes the contour tree-based segmentation at interactive frame rates using graphics processing units (GPUs) that support loops and conditional branches in fragment programs.
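To illustrate contribution (1), the sketch below assigns a separate transfer function to each contour-tree branch and shades samples through the function of their branch. The branch ids, colors, and segmentation here are hypothetical stand-ins for a precomputed contour-tree segmentation; they are not the paper's data structures.

```python
import numpy as np

def make_tf(rgb):
    """1D transfer function: fixed hue, opacity ramping with scalar value."""
    def tf(value):  # value assumed normalized to [0, 1]
        r, g, b = rgb
        return np.array([r, g, b, value])  # alpha = value
    return tf

# Hypothetical per-branch transfer functions: each branch of the contour tree
# (here just ids 0 and 1) gets its own mapping from scalar value to RGBA.
branch_tf = {
    0: make_tf((1.0, 0.2, 0.2)),  # e.g. one connected component: reddish
    1: make_tf((0.2, 0.4, 1.0)),  # another component: bluish
}

def shade(values, branch_ids):
    """Map each sample to RGBA using the transfer function of its branch."""
    out = np.zeros(values.shape + (4,))
    for b, tf in branch_tf.items():
        mask = branch_ids == b
        if not np.any(mask):
            continue
        out[mask] = np.array([tf(v) for v in values[mask]])
    return out

values = np.array([0.1, 0.9, 0.5])      # normalized scalar samples
branch_ids = np.array([0, 1, 0])        # branch id per sample (made up)
rgba = shade(values, branch_ids)
```

In the GPU renderer described above, the branch lookup and the per-branch transfer functions would live in textures queried by the fragment program rather than in a Python dictionary.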

We develop a new method for segmentation of molecular surfaces. Topological analysis of a scalar function defined on the surface and of its associated gradient field reveals the relationship between the features of interest and the critical points of the scalar function. The segmentation is obtained by associating segments with local minima/maxima. Controlled simplification of the function merges segments, resulting in a hierarchical segmentation of the molecular surface. This segmentation is used to identify rigid components of protein molecules and to study the role of cavities and protrusions in protein-protein interactions.

Multiresolution methods provide a means for representing data at multiple levels of detail. They are typically based on a hierarchical data organization scheme and update rules needed for data value computation. We use a data organization that is based on what we call $\sqrt[n]{2}$ subdivision. The main advantage of $\sqrt[n]{2}$ subdivision, compared to quadtree (n=2) or octree (n=3) organizations, is that the number of vertices is only doubled in each subdivision step instead of multiplied by a factor of four or eight, respectively. To update data values we use n-variate B-spline wavelets, which yield better approximations for each level of detail. We develop a lifting scheme for n=2 and n=3 based on the $\sqrt[n]{2}$-subdivision scheme. We obtain narrow masks that provide a basis for out-of-core techniques as well as view-dependent visualization and adaptive, localized refinement.
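The growth-rate advantage can be made concrete with a schematic count (the numbers track only the per-step growth factor, not exact boundary bookkeeping): for n = 3, three $\sqrt[3]{2}$-subdivision steps reach the resolution of a single octree step while exposing two intermediate levels of detail along the way.

```python
# Schematic vertex counts per subdivision step; the starting count of 1000
# is an arbitrary made-up example.
def counts(factor, start, steps):
    """Vertex counts after each subdivision step with a fixed growth factor."""
    out = [start]
    for _ in range(steps):
        out.append(out[-1] * factor)
    return out

# sqrt[3]{2} subdivision doubles the vertex count per step; octree
# refinement multiplies it by 8 in one step.
sqrt2_3d = counts(2, 1_000, 3)  # three fine-grained steps
octree = counts(8, 1_000, 1)    # one coarse step to the same resolution
```

Both sequences end at 8000 vertices, but the $\sqrt[3]{2}$ hierarchy inserts the 2000- and 4000-vertex levels in between, which is what enables the finer-grained out-of-core and view-dependent refinement mentioned above.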