The Computer Aided Design and Manufacturing Laboratory, in the Department of Mechanical Engineering, encompasses research in geometric and solid modeling, CAD/CAM, computer-aided process planning, solid freeform fabrication, computer graphics and visualization, virtual prototyping, and virtual reality. A primary focus is efficient geometric algorithms. Current research includes developing new techniques for accessibility analysis and collision detection, with applications in haptic design environments, design for manufacturing of injection-molded parts, layered manufacturing, and machining.
Object-space text, although desirable for its correct occlusion behavior, often appears blurry or “shimmery” under head-tracked binocular stereo viewing because its rendered thickness alternates rapidly. The thickness varies because it depends on scan conversion, which in turn depends on the user’s location in a head-tracked environment, and the user almost never stays perfectly still. This paper describes a simple method for eliminating such blurriness for object-space text that need not have a fixed location in the virtual environment, such as menu and annotation text. Our approach positions text relative to the user’s view frustums (one frustum per eye), adjusting the 3D position of each piece of text as the user moves, so that the text occupies a constant place in each view frustum and projects to the same pixels regardless of the user’s location.
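The core idea can be sketched as follows: each frame, re-express a fixed eye-space offset in world coordinates using the current (per-eye) view basis, so the text stays at a constant position inside the frustum. This is a minimal illustration, not the paper's implementation; the function name, the right/up/forward basis convention, and the tuple representation are assumptions made for the sketch.

```python
def frustum_locked_position(eye, right, up, forward, offset):
    """Return the world-space point at a constant eye-space offset.

    eye              -- viewpoint position (x, y, z), updated each frame
    right, up, forward -- orthonormal view-basis vectors for this eye
    offset           -- fixed (x, y, z) placement of the text in eye space

    Because `offset` never changes in eye space, the text projects to the
    same pixels no matter where the head-tracked viewpoint moves.
    """
    ox, oy, oz = offset
    # world position = eye + ox*right + oy*up + oz*forward
    return tuple(e + ox * r + oy * u + oz * f
                 for e, r, u, f in zip(eye, right, up, forward))
```

In a stereo setting this would be evaluated twice per frame, once with each eye's position and basis, so that each frustum carries its own copy of the text at the same relative location.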