eScholarship
Open Access Publications from the University of California

Displaying readable object-space text in a head tracked, stereoscopic virtual environment

  • Author(s): Karasuda, Eric
  • McMains, Sara
  • et al.
Abstract

Object-space text, although desirable for its correct occlusion behavior, often appears blurry or “shimmery” when used with head-tracked binocular stereo viewing, due to rapidly alternating text thickness. Text thickness tends to vary because it depends on scan conversion, which in turn depends on the user’s location in a head-tracked environment, and the user almost never stays perfectly still. This paper describes a simple method of eliminating such blurriness for object-space text that need not have a fixed location in the virtual environment, such as menu-system and annotation text. Our approach positions text relative to the user’s view frustums (one frustum per eye), adjusting the 3D position of each piece of text as the user moves, so that the text occupies a constant place in each view frustum and projects to the same pixels regardless of the user’s location.
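The core idea in the abstract can be sketched as follows: hold the text at a fixed position in eye (view-frustum) space and, each frame, transform that fixed point back into world space using the inverse of the current view matrix. Because the eye-space position never changes, the text projects to the same pixels under a fixed projection matrix no matter where the tracked head moves. This is a minimal illustrative sketch (not the authors' implementation); the function names and the numpy-based math are assumptions for illustration.

```python
import numpy as np

def look_at(eye, target, up):
    # Build a standard world-to-eye view matrix (gluLookAt-style).
    f = target - eye
    f = f / np.linalg.norm(f)          # forward
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)          # right
    u = np.cross(s, f)                 # true up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye        # translate world origin to eye
    return m

def anchor_text_in_frustum(view, eye_space_pos):
    # World-space position that keeps the text at a fixed eye-space
    # location, so it projects to the same pixels for any head pose
    # (one call per eye in a stereo setup, using that eye's view matrix).
    world = np.linalg.inv(view) @ np.append(eye_space_pos, 1.0)
    return world[:3]

# Two head poses; the text stays at the same eye-space point in both.
text_in_eye = np.array([0.2, -0.1, -1.0])   # hypothetical frustum anchor
v1 = look_at(np.array([0.0, 0.0, 5.0]), np.zeros(3), np.array([0.0, 1.0, 0.0]))
v2 = look_at(np.array([1.0, 0.3, 4.0]), np.zeros(3), np.array([0.0, 1.0, 0.0]))
w1 = anchor_text_in_frustum(v1, text_in_eye)   # world position, pose 1
w2 = anchor_text_in_frustum(v2, text_in_eye)   # world position, pose 2
```

The world positions `w1` and `w2` differ, but mapping each back through its own view matrix recovers the identical eye-space point, which is what guarantees pixel-stable scan conversion of the glyphs.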
