Object-space text, although desirable for its correct occlusion behavior, often appears blurry or "shimmery" under head-tracked binocular stereo viewing because its apparent stroke thickness alternates rapidly. Text thickness varies because it depends on scan conversion, which in turn depends on the user's location in a head-tracked environment, and the user almost never stays perfectly still. This paper describes a simple method of eliminating such blurriness for object-space text that need not have a fixed location in the virtual environment, such as menu-system and annotation text. Our approach positions text relative to the user's view frustums (one per eye), adjusting the 3D position of each piece of text as the user moves, so that the text occupies a constant place in each view frustum and projects to the same pixels regardless of the user's location.
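The frustum-relative repositioning described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the helper names and the view-space parameterization are assumptions. A label is pinned at a constant view-space offset (right, up, ahead), and its world position is recomputed each frame from the tracked camera pose, so it projects to the same pixels as the head moves.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def anchor_text(cam_pos, forward, up, view_offset):
    """World-space position for a constant view-space offset (right, up, ahead)."""
    f = normalize(forward)
    r = normalize(cross(f, up))      # camera right axis
    u = cross(r, f)                  # re-orthogonalized camera up axis
    x, y, z = view_offset            # constant place in the view frustum
    return tuple(cam_pos[i] + x * r[i] + y * u[i] + z * f[i] for i in range(3))
```

In a stereo setting this would be called once per eye per frame, with each eye's own pose, so the text holds a fixed place in both frustums.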


## Scholarly Works (19 results)

This thesis describes new parallel GPU algorithms that accelerate fundamental CAD operations such as spline evaluations, surface-surface intersections, minimum distance computations, moment computations, etc., thereby improving the interactivity of a CAD system.

CAD systems (such as SolidWorks, AutoCAD, ProE, etc.) create graphical user interfaces for solid modeling, which build on fundamental CAD operations that are performed by a modeling kernel. However, since many of these fundamental operations are compute-intensive, the CAD systems make the designer wait until a particular operation is completed before providing visual feedback and allowing new operations to be performed, reducing interactivity. The broad objective of this research is to develop new parallel algorithms for CAD that run on Graphics Processing Units to provide order-of-magnitude better performance than current CPU implementations.

A critical operation that all CAD systems have to perform is the evaluation of Non-Uniform Rational B-Splines (NURBS) surfaces. We developed a unified parallel algorithm to evaluate and render a NURBS surface directly using the GPU. The GPU algorithm can render over 100 NURBS surfaces at 30 frames per second, significantly enhancing interactivity.
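For readers unfamiliar with the operation being accelerated, the following is a minimal CPU reference sketch of NURBS curve evaluation via the Cox-de Boor recursion (it is not the thesis's GPU algorithm, which evaluates many samples in parallel; a surface evaluation is the tensor product of two such basis computations).

```python
def bspline_basis(i, p, u, knots):
    """B-spline basis function N_{i,p}(u) by the Cox-de Boor recursion."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    val = 0.0
    d = knots[i + p] - knots[i]
    if d > 0:
        val += (u - knots[i]) / d * bspline_basis(i, p - 1, u, knots)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0:
        val += (knots[i + p + 1] - u) / d * bspline_basis(i + 1, p - 1, u, knots)
    return val

def nurbs_curve_point(u, ctrl, weights, knots, degree):
    """Evaluate one point of a NURBS curve as a rational weighted average."""
    num, den = [0.0, 0.0, 0.0], 0.0
    for i, (pt, w) in enumerate(zip(ctrl, weights)):
        b = bspline_basis(i, degree, u, knots) * w
        den += b
        for k in range(3):
            num[k] += b * pt[k]
    return tuple(c / den for c in num)
```

Each surface sample is independent of the others, which is exactly what makes the evaluation amenable to GPU parallelism.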

Fundamental modeling operations (such as surface intersections, separation-distance computations, etc.) are performed repeatedly in a CAD system during modeling. We have developed GPU-accelerated algorithms that perform surface-surface intersections more than 50 times faster than the commercial solid modeling kernel ACIS. We have also developed GPU algorithms to perform minimum-distance computations, which have applications in multi-axis machining, path planning, and clearance analysis. These algorithms are not only more than two orders of magnitude faster than the CPU implementations but also often have much tighter error bounds.

We have also developed algorithms for computing accurate geometric moments of solid models that are represented using multiple trimmed-NURBS surfaces. We have developed a framework that makes use of NURBS surface data to evaluate surface integrals of trimmed NURBS surfaces in real time. With our framework, we can compute volume and moments of solid models with error estimates. The framework also supports local geometry changes, which is useful for providing interactive feedback to the designer while the solid model is being designed.
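The underlying idea of computing moments from surface data alone can be illustrated on the simpler polyhedral case. The sketch below applies the divergence theorem to a closed, outward-oriented triangle mesh (the thesis evaluates the analogous surface integrals directly on trimmed NURBS surfaces, on the GPU); each triangle contributes a signed tetrahedron against the origin.

```python
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def _dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def mesh_volume_and_centroid(triangles):
    """Volume and centroid from signed tetrahedra (origin, v0, v1, v2)."""
    vol, mom = 0.0, [0.0, 0.0, 0.0]
    for v0, v1, v2 in triangles:
        sv = _dot(v0, _cross(v1, v2)) / 6.0   # signed tetrahedron volume
        vol += sv
        for k in range(3):
            # tetrahedron centroid = (origin + v0 + v1 + v2) / 4
            mom[k] += sv * (v0[k] + v1[k] + v2[k]) / 4.0
    return vol, tuple(m / vol for m in mom)
```

Because each face contributes independently, a local geometry change only requires recomputing the contributions of the affected faces, which is what makes interactive feedback feasible.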

Finally, the ultimate objective of this research is to provide a generalized framework to overcome some of the GPU programming challenges in CAD. Using this framework, a programmer could easily develop complex CAD algorithms that utilize the GPU to improve the performance of CAD systems.

This thesis describes geometric algorithms for checking the cleanability of a design during the manufacturing process. The automotive industry needs a computational tool to determine how to clean its products, given the trend toward miniaturization and increasing geometric complexity of mechanical parts. A newly emerging concept in product design, Design-for-Cleanability, calls for algorithms that help designers create parts that are easy to clean during manufacturing. In this thesis, we consider cleaning the surfaces of workpieces with high-pressure water jets. Specifically, we solve the following two problems purely from a geometric perspective: predicting the water-trap regions of a workpiece and finding a rotation axis that drains a workpiece.

Finding an orientation that minimizes potential water-trap regions and/or controls their locations when the workpiece is fixtured for water-jet cleaning is important for increasing cleaning efficiency. Trapped water leads to stagnation areas, preventing efficient flow cleaning. Minimizing the potential water traps also reduces draining time and effort after cleaning. We propose a new pool-segmentation data structure and algorithm based on topological changes of 2D slices with respect to the gravity direction. We can then quickly predict potential water-trap regions of a given geometry by analyzing a directed graph built on the segmented pools.

Given a workpiece filled with water after cleaning, to minimize the subsequent drying time, our industrial partner first mounts workpieces on a slowly rotating carrier so that gravity can drain out as much water as possible. We propose an algorithm to find a rotation axis that drains the workpiece when the rotation axis is set parallel to the ground and the workpiece is rotated around the axis. Observing that all water traps contain a concave vertex, we solve our problem by constructing and analyzing a directed "draining graph" whose nodes correspond to concave vertices of the geometry and whose edges are set according to the transition of trapped water when we rotate the workpiece around the given axis. We first introduce an algorithm to test whether a given rotation axis can drain the workpiece. We then extend these concepts to design an algorithm to find the set of all rotation axes that drain the workpiece. If such a rotation axis does not exist, our algorithm will also detect that. To the best of our knowledge, our work is the first to tackle the draining problem and to give an algorithm for the problem.
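Once the draining graph exists, the axis test reduces to a reachability question: can every concave-vertex node pass its water along graph edges until it leaves the workpiece? The sketch below shows only that final reachability check (the node and edge construction from the geometry and the rotation axis, which is the thesis's actual contribution, is assumed to have been done already; the `"drained"` sink name is hypothetical).

```python
from collections import deque

def drains_completely(nodes, edges, sink="drained"):
    """True if every concave-vertex node can reach the sink that
    represents water leaving the workpiece."""
    rev = {n: [] for n in list(nodes) + [sink]}
    for a, b in edges:
        rev.setdefault(b, []).append(a)   # reverse edges: search backwards from sink
    seen, q = {sink}, deque([sink])
    while q:
        for p in rev.get(q.popleft(), []):
            if p not in seen:
                seen.add(p)
                q.append(p)
    return all(n in seen for n in nodes)
```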

Minkowski sums are a fundamental operation for many applications in Computer-Aided Design and Manufacturing, such as solid modeling (offsetting and sweeping), collision detection, toolpath planning, assembly/disassembly planning, and penetration depth computation. Configuration spaces (C-spaces) are closely related to Minkowski sums; we analyze accessibility for waterjet cleaning processes as an example to illustrate the important relationship between them. We describe an algorithm for finding all the cleanable regions given the geometry of a workpiece. Minkowski sums are used to compute the C-spaces and cleanable regions are then found by visibility analysis.

Computing the Minkowski sum of two arbitrary polyhedra in R^{3} is difficult because of high combinatorial complexity. We present two algorithms for directly computing a voxelization of the Minkowski sum of two closed watertight polyhedra that run on the Graphics Processing Unit (GPU) and do not need to compute a complete boundary representation (B-rep).
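The output the voxelization algorithms compute can be defined by a brute-force reference, sketched below: every voxel of A translated by every voxel of B. The GPU algorithms in the thesis avoid this O(|A| x |B|) enumeration, but the resulting voxel set is the same.

```python
def minkowski_voxels(a, b):
    """Brute-force voxelized Minkowski sum of two voxel sets."""
    return {(ax + bx, ay + by, az + bz)
            for (ax, ay, az) in a
            for (bx, by, bz) in b}
```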

For the first voxelization algorithm, we put forward a new formula that decomposes the Minkowski sum of two polyhedra into the union of the Minkowski sum of their boundaries and a translation of each input polyhedron. The union is then voxelized on the GPU using the stencil shadow volume technique. The performance of this algorithm depends on the numbers of faces of the two polyhedra.

For Minkowski sums in cases where we do not need to consider enclosed voids, we propose the second voxelization algorithm, which has much faster running times and also achieves higher resolution. It first robustly culls primitives that cannot contribute to the final boundary of the Minkowski sum, and then uses flood fill to find all the outer voxels. The performance of this algorithm depends on both the numbers of faces of the input polyhedra and the shape complexity of the Minkowski sum.
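The flood-fill step mentioned above can be sketched in a few lines (a CPU illustration only, with assumed data representations): starting from a corner of the bounding box, visit all empty voxels reachable from the outside; empty voxels never reached are enclosed voids and are ignored.

```python
from collections import deque

def outer_empty_voxels(solid, lo, hi):
    """Empty voxels connected to the outside of the inclusive bounding
    box [lo, hi]; `solid` is a set of occupied voxel coordinates."""
    if lo in solid:
        return set()
    seen, q = {lo}, deque([lo])
    while q:
        x, y, z = q.popleft()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (x + dx, y + dy, z + dz)
            if n in seen or n in solid:
                continue
            if any(n[i] < lo[i] or n[i] > hi[i] for i in range(3)):
                continue
            seen.add(n)
            q.append(n)
    return seen
```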

We demonstrate applications of the voxelized Minkowski sums in solid modeling, motion planning, and penetration depth computation. Compared with existing B-rep based algorithms, our voxelization algorithms are easy to implement and avoid the extra sampling process required in many applications.

The objective of this research is to enable real-time in-situ monitoring for the Selective Laser Melting (SLM) process, by providing diagnostic feedback from monitoring that can be used to automate and adjust SLM system parameter settings. The ultimate goal is to improve SLM product quality and manufacturing productivity. We propose a deep learning approach to monitoring that takes in-situ videos as input data fed to convolutional neural networks (CNNs). We describe the entire monitoring framework, including running SLM experiments, collecting SLM video data, image processing for ex-situ generated height maps, generating labels for in-situ data from ex-situ measurement, and training CNNs with labeled in-situ video data. Experimental results show that our approach successfully recognizes the desired SLM process metrics (e.g. size, continuity) from in-situ video data.

In order to train effective CNNs, besides collecting extensive SLM video data, we also need to label it. We have automated the process of generating labels from ex-situ measurements of the corresponding finished SLM experimental output. The ex-situ measurements provide high-precision height maps for the product surface, to which we apply our proposed image processing algorithm to calculate process quality metrics as labels.
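To make the labeling step concrete, the sketch below shows one hypothetical way a height profile from the ex-situ measurement could be reduced to scalar labels. The metric names and the thresholding scheme are assumptions for illustration; the thesis's actual image-processing algorithm operates on full height maps and may define its metrics differently.

```python
def track_metrics(height_profile, min_height):
    """Reduce one height profile along a printed track to two labels:
    a size metric (mean height where solid) and a continuity metric
    (fraction of samples at or above the threshold)."""
    solid = [h for h in height_profile if h >= min_height]
    continuity = len(solid) / len(height_profile)
    size = sum(solid) / len(solid) if solid else 0.0
    return size, continuity
```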

However, our proposed automated labeling approach requires high-precision height maps, which are generated by an expensive Structured Light Microscope. Such a microscope might not be readily available to other researchers and institutions, or there may not be enough machine time to label every experiment for which video exists; thus there might not be enough labeled data to train effective CNNs. This research therefore combines semi-supervised learning with our original approach, enabling effective training without requiring a large amount of labeled data.

In addition, in practice, another issue is label noise. Even though the data labels were generated using high-precision height maps, the labels are not perfect and might still contain incorrect labels, known as "noisy" labels. We propose novel approaches to improve neural networks' performance when they are trained under label noise. The proposed approaches can be easily combined with other existing approaches that address the label noise problem to further improve the prediction accuracy, with very few additional hyperparameters that need to be tuned. Experimental results demonstrate that our approaches can significantly improve CNN models' prediction accuracy when training neural networks with noisy labels.

During the development of a new product, it is difficult for designers to predict how their design decisions will impact manufacturability and manufacturing cost of the individual parts in their product. Additive manufacturing is increasingly becoming a viable option to produce high fidelity prototypes and even small-scale production part runs. However, as an emerging technology, there are few resources available to help designers make design decisions regarding quality and manufacturability for additive manufacturing. Most information developed to help designers ensure manufacturability is in the form of general guidelines that designers must interpret and then use their best judgment to scrutinize their design. Designers can only guess, based on previous experience, if the process can produce part features that meet their specified geometric tolerances. However, by using algorithms to analyze part geometry, it is possible to predict additive manufacturing outcomes. This thesis describes the development of two software tools to analyze part geometry in near real-time: one that predicts manufacturability, and another that predicts achievable quality.

These tools are used to explore how automated part geometry analysis influences the effectiveness of design for additive manufacturing feedback. The research hypothesis of this thesis is that part geometry analysis improves the practicality, accuracy, and usefulness of design for additive manufacturing feedback. To test this hypothesis, three research thrusts were conducted: evaluating the performance of the newly developed tools relative to existing tools, experimental verification of the predictions of the tools, and a user study evaluating usage of the manufacturability tool during a design task. Comparison with existing tools indicated that both tools described in this thesis have computation times similar to those of existing solutions, while providing greater potential to allow designers to analyze manufacturing trade-offs, with a more comprehensive approach to modeling sources of errors in the manufacturing process. A range of parts were printed using fused deposition modeling and then inspected. The experimental results showed that the predictions of both tools were relatively accurate, and highlighted several additional process parameters that can be included in the modeling approach to improve accuracy. Lastly, a user study demonstrated that use of the software tool reduced the number of manufacturability problems in participants' designs while requiring a similar amount of time to use, compared with using a list of design heuristics. The findings of the thesis support the practicality, accuracy, and usefulness of geometry analysis software tools to support design for additive manufacturing.

The objective of this research is to advance the state of the art in image matching algorithms, especially with regard to input image pairs that include dramatically inconsistent appearance (e.g., different sensor modalities, significant intensity/color changes, different times such as day/night and years apart, etc.). We denote this range of input as disparate input. To handle disparate input, one should be able to capture the underlying aspects not affected by superficial changes to appearance.

To this end, we present a novel image descriptor based on the distribution of line segments in an image; we call it DUDE (DUality Descriptor). By exploiting line-point duality, DUDE descriptors are computationally efficient and robust to unstable line segment detection. Our experiments show that DUDE can provide more true-positive correspondences for challenging disparate datasets.
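The classic line-point duality that DUDE builds on can be shown in a few lines. This is only the textbook transform, not DUDE's actual parameterization (which is not specified here): the supporting line of each segment maps to a single dual point, and nearby lines map to nearby dual points, which is what lends robustness to unstable segment detection.

```python
def dual_point(segment):
    """Map the supporting line y = m*x + c of a (non-vertical) segment
    to the dual point (m, -c)."""
    (x1, y1), (x2, y2) = segment
    m = (y2 - y1) / (x2 - x1)   # slope; assumes x1 != x2
    c = y1 - m * x1             # intercept
    return (m, -c)
```

A descriptor can then be built from the distribution of these dual points, e.g. as a histogram over a local region.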

Beyond traditional image matching, we have designed an effective autograding system for multiview engineering drawings that also uses DUDE to improve its performance. The autograding system needs to be able to compare drawings that may include appearance changes due to students' mistakes, but also needs to differentiate between allowable and erroneous translation and/or scale changes.

In addition to hand-crafted descriptors, this research also investigates data-driven descriptors generated by new deep learning based approaches. Due to the lack of labeled disparate imagery datasets, it is still challenging to effectively target disparate input using deep learning approaches. Therefore we introduce an aggressive data augmentation strategy called Artificial Intensity Remapping (AIR). By applying AIR to standard datasets, one can obtain models that are more effective for registration of disparate data. Finally, we compare the DUDE descriptor to a deep learning based descriptor powered by AIR.
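The core idea of an intensity-remapping augmentation can be sketched as below. This captures one plausible reading of AIR, a random but monotonic remap of pixel intensities that preserves structure while changing appearance; the exact curve family used in the thesis may differ.

```python
import random

def air_remap(pixels, rng=None, n_knots=6):
    """Apply a random monotonic piecewise-linear remap to 8-bit intensities.
    Monotonicity (sorted knot outputs) preserves image structure while the
    random curve simulates a disparate appearance change."""
    rng = rng or random.Random()
    xs = [round(i * 255 / (n_knots - 1)) for i in range(n_knots)]  # fixed inputs
    ys = sorted(rng.randrange(256) for _ in range(n_knots))        # random outputs
    def remap(v):
        for i in range(n_knots - 1):
            if v <= xs[i + 1]:
                t = (v - xs[i]) / (xs[i + 1] - xs[i])
                return round(ys[i] + t * (ys[i + 1] - ys[i]))
        return ys[-1]
    return [remap(v) for v in pixels]
```

Each training image would get a fresh random curve, multiplying the effective appearance diversity of a standard dataset.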

We present our work on analyzing and improving the energy efficiency of multi-axis CNC milling processes.

Due to the differences in energy-consumption behavior, we treat 3- and 5-axis CNC machines separately in our work. For 3-axis CNC machines, we first propose an energy model that estimates the energy required to machine a component on a specified 3-axis CNC milling machine. Our model makes machine-specific predictions of energy requirements while also considering the geometric aspects of the machining toolpath. Our model and the associated software tool facilitate direct comparison of various alternative toolpath strategies based on their energy-consumption performance.
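The basic structure of such an energy model can be sketched as follows. The coefficients and the linear power model (idle power plus a term proportional to material removal rate) are illustrative assumptions; the thesis's actual model is machine-specific and accounts for more geometric detail of the toolpath.

```python
def toolpath_energy(segments, idle_power, cut_coeff):
    """Estimate machining energy over a toolpath.
    segments: (length_mm, feed_mm_per_s, mrr_mm3_per_s) per toolpath segment.
    Power model (assumed): P = idle_power + cut_coeff * MRR, integrated
    over the time spent on each segment."""
    total = 0.0
    for length, feed, mrr in segments:
        t = length / feed                        # time on this segment [s]
        total += (idle_power + cut_coeff * mrr) * t
    return total                                 # energy in joules
```

Because the estimate is a sum over segments, two candidate toolpaths for the same part can be compared directly by totaling their segment lists.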

Further, we identify key factors in toolpath planning that affect energy consumption in CNC machining. We then use this knowledge to propose and demonstrate a novel toolpath planning strategy, inspired by research on digital micrography (a form of computational art), that generates toolpaths that are inherently energy-efficient.

For 5-axis CNC machines, the process planning problem consists of several sub-problems that researchers have traditionally solved separately to obtain an approximate solution. After illustrating the need to solve all sub-problems simultaneously for a truly optimal solution, we propose a unified formulation based on configuration space theory. We apply our formulation to solve a problem variant that retains key characteristics of the full problem but has lower dimensionality, allowing visualization in 2D. Given the complexity of the full 5-axis toolpath planning problem, our unified formulation represents an important step towards obtaining a truly optimal solution.

With this work on the two types of CNC machines, we demonstrate that without changing the current infrastructure or business practices, machine-specific, geometry-based, customized toolpath planning can save energy in CNC machining.