eScholarship
Open Access Publications from the University of California

IDAV Publications

Towards Sensor-Aided Multi-View Reconstruction for High Accuracy Applications

(2014)

We present the general idea of a computer vision structure-from-motion framework that uses sensor fusion to provide highly accurate and efficient multi-view reconstruction results that can capture internal geometry. Given the increasing ubiquity and cost-effectiveness of embedding sensors, such as positional sensors, into objects, it has become feasible to fuse such sensor data with camera-acquired data to vastly improve reconstruction quality and to enable a number of novel applications for structure-from-motion. Application areas that require very high accuracy include medicine, robotics, security, and additive manufacturing (3D printing). Specific examples and initial results are discussed, followed by a discussion of proposed future work.

GPU-Accelerated and Efficient Multi-View Triangulation for Scene Reconstruction

(2014)

This paper presents a framework for GPU-accelerated N-view triangulation in multi-view reconstruction that improves processing time and final reprojection error with respect to methods in the literature. The framework optimizes an angular error-based L1 cost function, and it is shown how adaptive gradient descent can be applied for convergence. The triangulation algorithm is mapped onto the GPU, and two approaches to parallelization are compared: one thread per track and one thread block per track. Which approach performs better depends on the number of tracks and the lengths of the tracks in the dataset. Furthermore, the algorithm uses statistical sampling based on confidence levels to reduce the number of feature track positions needed to triangulate an entire track. Sampling aids load balancing on the GPU's SIMD architecture and exploitation of the GPU's memory hierarchy. Compared to a serial implementation, a typical speedup of 3--4x can be achieved on a 4-core CPU. On a GPU, large track counts are favorable, and speedups of up to 40x can be achieved. Results on real and synthetic data show that reprojection errors are similar to those of the best current triangulation methods while requiring only a fraction of the computation time, allowing for efficient and accurate triangulation of large scenes.
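The abstract does not give the exact sampling formula; as an illustration of the idea, a standard confidence-level sample-size computation with finite-population correction (an assumed formula, not necessarily the authors') shows why long feature tracks can be triangulated from far fewer rays:

```python
import math
from statistics import NormalDist

def sample_size(track_len, confidence=0.95, margin=0.1, p=0.5):
    """Number of rays to sample from a feature track of length track_len,
    using the normal-approximation sample-size formula with a
    finite-population correction."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided z-score
    n0 = z * z * p * (1 - p) / (margin * margin)    # infinite-population size
    n = n0 / (1 + (n0 - 1) / track_len)             # finite-population correction
    return min(track_len, math.ceil(n))

# A 100-view track needs only about half its rays at 95% confidence:
print(sample_size(100))   # 50
print(sample_size(1000))  # grows slowly with track length
```

Because the required sample size saturates, the savings grow with track length, which is consistent with the load-balancing benefit described above.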

Uncertainty, Baseline, and Noise Analysis for L1 Error-Based Multi-View Triangulation

(2014)

A comprehensive uncertainty, baseline, and noise analysis of computing 3D points using a recent L1-based triangulation algorithm is presented. This method is shown to be not only faster and more accurate than its main competitor, linear triangulation, but also more stable under noise and baseline changes. A Monte Carlo analysis of covariance and a confidence ellipsoid analysis were performed over a large range of baselines and noise levels for different camera configurations to compare the performance of angular error-based and linear triangulation. Furthermore, the effect of baseline and noise was analyzed for true multi-view triangulation versus pairwise stereo fusion. Results on real and synthetic data show that L1 angular error-based triangulation has a positive effect on confidence ellipsoids, lowers covariance values, and results in more accurate pairwise and multi-view triangulation for varying numbers of cameras and configurations.
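As a toy 2D illustration of such a Monte Carlo covariance analysis (hypothetical geometry and noise levels, not the paper's experimental setup), one can repeatedly perturb the viewing angles of two cameras, triangulate, and observe that widening the baseline shrinks the total variance of the recovered point:

```python
import math
import random
from statistics import variance

def triangulate(baseline, theta1, theta2):
    """Intersect two 2D bearing rays from cameras at (0, 0) and (baseline, 0)."""
    t1, t2 = math.tan(theta1), math.tan(theta2)
    x = baseline * t2 / (t2 - t1)
    return x, x * t1

def total_variance(baseline, point=(2.0, 5.0), sigma=0.002, trials=2000, seed=0):
    """Trace of the sample covariance of the triangulated point
    under Gaussian angular noise on both bearings."""
    rng = random.Random(seed)
    th1 = math.atan2(point[1], point[0])
    th2 = math.atan2(point[1], point[0] - baseline)
    samples = [triangulate(baseline, th1 + rng.gauss(0, sigma),
                           th2 + rng.gauss(0, sigma)) for _ in range(trials)]
    xs, ys = zip(*samples)
    return variance(xs) + variance(ys)

# Widening the baseline from 0.5 to 4.0 shrinks the covariance dramatically:
print(total_variance(0.5) > 5 * total_variance(4.0))  # True
```

The ratio grows as the rays approach parallel, matching the expected instability of narrow-baseline triangulation.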

Future Challenges for Ensemble Visualization

(2014)

The simulation of complex events is a challenging task and often requires careful selection of simulation parameters. With the availability of vast computational resources, it has become possible to run several alternative parameter settings or simulation models in parallel, creating an 'ensemble' of possible outcomes for a given event of interest. Recently, the visual analysis of such ensemble data has repeatedly been identified as one of the most important new areas of visualization, and it is expected to have a wide impact on the field in the next few years. The main challenge is to develop expressive visualizations of the properties of this set of solutions, the ensemble, to support scientists in this challenging parameter-space exploration task. This paper presents and explores future challenges for ensemble visualization.

Point-Based Rendering of Forest LiDAR

(2014)

Airborne Light Detection And Ranging (LiDAR) is an increasingly important modality for remote sensing of forests. Unfortunately, the lack of smooth surfaces complicates visualization of LiDAR data and of the results of fundamental analysis tasks that interest environmental scientists. In this paper, we use multi-pass point-cloud rendering to produce shadows, approximate occlusion, and a non-photorealistic silhouette effect that enhances the perception of three-dimensional structure. We employ these techniques to provide visualizations for evaluating two analysis techniques: tree segmentation and forest structure clustering.
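The paper's GPU render passes are not reproduced here, but a minimal screen-space sketch (synthetic depth buffer, assumed threshold) shows the usual principle behind such silhouette effects: flag pixels whose depth differs sharply from a neighbor's:

```python
def silhouette_mask(depth, threshold=0.5):
    """Flag pixels where the depth buffer has a large discontinuity to
    any 4-neighbor -- a simple screen-space silhouette test."""
    h, w = len(depth), len(depth[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and \
                        abs(depth[y][x] - depth[ny][nx]) > threshold:
                    mask[y][x] = True
    return mask

# Toy scene: a tree crown (depth 2) in front of the ground plane (depth 10).
depth = [[10.0] * 5 for _ in range(5)]
for y in range(1, 4):
    for x in range(1, 4):
        depth[y][x] = 2.0
mask = silhouette_mask(depth)
print(mask[0][0], mask[1][1])  # False True
```

Only the boundary pixels of the crown are flagged, so darkening them outlines the shape without needing a reconstructed surface.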

Topological Aspects of Material Interface Reconstruction: Challenges and Perspectives

(2013)

Multi-fluid simulations, especially volume-of-fluid datasets, confront visualization experts with the challenge of reconstructing appropriate material interfaces that accurately delimit fluid boundaries. In general, this reconstruction problem does not have a unique solution, leading to possible spatial and temporal inconsistencies in the reconstructed interfaces. In this paper, we present and discuss challenges and directions for topology-based analysis of volume-of-fluid data and its interfaces. We investigate the suitability of established topological methods for solving these challenges, analyze their potential and drawbacks, and propose future research directions.

Extracting and Visualizing Topological Information from Large High-Dimensional Data Sets

(2013)

This doctoral dissertation explores and advances topology-based data analysis and visualization, a field concerned with creating tools for gaining insight from scientific data, thus supporting the process of scientific knowledge discovery. In particular, the study proposes two novel analytical techniques, inspired by domain-specific problems, and presents a study of approximation error for these techniques.

The first part of the dissertation focuses on a specific problem that arises in computational chemistry. Analysis of transformation pathways is a well-known tool for the investigation of chemical systems and has implications for the design of chemical reactions and materials. However, existing techniques for analyzing transformation pathways either lack the required level of detail for such analyses or are limited to low-dimensional data. These issues, complicated by noise in the data and by the handling of periodic boundaries, are addressed by a novel technique that extracts a topological structure, the ``Morse complex,'' and visualizes it as a graph augmented with additional information, enabling the desired end-user analysis. The technique is then successfully applied to the analysis of two different types of chemical data, demonstrating its utility.

The second part of the dissertation concentrates on enabling the comparison of data sets in terms of their topologies. In particular, the focus is on comparing different instances of the same topological structure, namely the contour tree. One possible solution to this problem is to correlate contour trees in terms of the geometric proximity of their critical points. To visualize this correlation, a novel technique combines contour tree extraction, dimensionality reduction, graph drawing, and contour construction. The technique produces a visual metaphor called a ``geometry-preserving topological landscape.'' The utility of the technique is demonstrated through a comparative analysis of data sets based on their corresponding landscapes.

The remainder of the dissertation is dedicated to the problem of error quantification for the proposed techniques, as well as for more general settings. In particular, the focus is on approximation methods used to reconstruct a domain. By studying the ability of these methods to preserve topological information, one can derive method-selection recommendations that are potentially generalizable to various topological data analysis techniques. To address this problem, a novel definition of a difference measure for a topological abstraction, the ``merge tree,'' is presented and subsequently used to evaluate the previously mentioned approximation methods. The resulting recommendations are found to support the selection of approximation methods for the two proposed techniques.
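The dissertation's merge-tree difference measure is not reproduced here, but the underlying structure can be illustrated on a 1D toy example (an analogue, not the dissertation's algorithm): sweeping function values upward and merging sublevel-set components with union-find pairs each non-global minimum with the value at which its branch of the merge tree dies:

```python
def merge_pairs(values):
    """Sublevel-set persistence pairs of a 1D sequence: each non-global
    minimum is paired with the value at which its component merges into
    an older (deeper) one -- the branch structure of the merge tree."""
    parent = {}
    comp_min = {}  # representative index -> value of the component's minimum

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    pairs = []
    for i in sorted(range(len(values)), key=values.__getitem__):
        parent[i] = i
        comp_min[i] = values[i]
        for j in (i - 1, i + 1):
            if j in parent:  # neighbor already swept in
                ri, rj = find(i), find(j)
                if ri != rj:
                    # the younger branch (higher minimum) dies at values[i]
                    young, old = sorted((ri, rj),
                                        key=comp_min.__getitem__, reverse=True)
                    if comp_min[young] < values[i]:  # skip zero-persistence pairs
                        pairs.append((comp_min[young], values[i]))
                    parent[young] = old
    return pairs

print(merge_pairs([0.0, 2.0, 1.0, 3.0, 0.5, 4.0]))  # [(1.0, 2.0), (0.5, 3.0)]
```

Comparing two functions then amounts to comparing such pairings, which is the intuition behind measuring differences between merge trees.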

Statistical Angular Error-Based Triangulation for Efficient and Accurate Multi-View Scene Reconstruction

(2013)

This paper presents a framework for N-view triangulation of scene points, which improves processing time and final reprojection error with respect to standard methods such as linear triangulation. The framework introduces an angular error-based cost function that is robust to outliers, inexpensive to compute, and designed so that simple adaptive gradient descent can be applied for convergence. Our method also includes a statistical sampling component, based on confidence levels, that reduces the number of rays used to triangulate a given feature track. It is shown how the statistical component yields a meaningful yet much smaller set of representative rays for triangulation, and how applying the cost function to the reduced sample can yield faster and more accurate solutions. Results are demonstrated on real and synthetic data, where the method is shown to significantly increase triangulation speed and to reduce reprojection error in most cases. This makes it especially attractive for efficient triangulation of large scenes, given its speed and low memory requirements.
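A simplified, stand-alone sketch of this kind of minimization (made-up camera geometry, a numerical rather than analytic gradient, and assumed step-size constants) sums the angle between each observed ray and the ray from the camera to the candidate point, and adapts the step size by growing it on improvement and shrinking it otherwise:

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]

def norm(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def angular_l1_cost(point, centers, rays):
    """Sum of angles between observed unit rays and rays to the candidate point."""
    cost = 0.0
    for c, d in zip(centers, rays):
        dot = sum(a * b for a, b in zip(norm(sub(point, c)), d))
        cost += math.acos(max(-1.0, min(1.0, dot)))
    return cost

def triangulate(centers, rays, start, iters=500, lr=0.5, eps=1e-6):
    """Adaptive gradient descent on the L1 angular cost (numerical gradient)."""
    x, fx = list(start), angular_l1_cost(start, centers, rays)
    for _ in range(iters):
        grad = []
        for k in range(3):  # forward-difference gradient
            x[k] += eps
            grad.append((angular_l1_cost(x, centers, rays) - fx) / eps)
            x[k] -= eps
        cand = [xi - lr * g for xi, g in zip(x, grad)]
        fc = angular_l1_cost(cand, centers, rays)
        if fc < fx:
            x, fx, lr = cand, fc, lr * 1.2  # accept step, grow step size
        else:
            lr *= 0.5                       # reject step, shrink step size
    return x

# Hypothetical setup: two cameras observing a point at (0.5, 0.5, 2.0).
centers = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
target = [0.5, 0.5, 2.0]
rays = [norm(sub(target, c)) for c in centers]
est = triangulate(centers, rays, start=[0.0, 0.0, 1.0])
print(max(abs(a - b) for a, b in zip(est, target)) < 0.05)  # True
```

The accept/reject step-size rule is what makes plain gradient descent workable on this non-smooth cost: rejected steps shrink the step until progress resumes.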

Visualization Methods for Computer Vision Analysis

(2013)

We present the general idea of using common tools from the field of scientific visualization to aid in the design, implementation, and testing of computer vision algorithms, as a complementary and educational counterpart to purely mathematics-based algorithms and results. Interaction between these two broad disciplines has been largely non-existent in the literature, and through initial work we have been able to show the benefits of integrating visualization techniques into vision for analyzing patterns in computed parameters. Specific examples and initial results are discussed, such as scalar field-based renderings for scene reconstruction uncertainty and sensitivity analysis, as well as feature tracking summaries, followed by a discussion of proposed future work.