eScholarship: Open Access Publications from the University of California

UC Merced Electronic Theses and Dissertations

Integrating Wearable and Haptic Devices for Enhanced Input and Interaction in Virtual Reality

Creative Commons Attribution 4.0 (CC BY) license
Abstract

As the affordability and availability of virtual reality hardware increase, its adoption is growing rapidly. Yet input and interaction in virtual reality remain a significant challenge. Existing interaction techniques often lack the intuitiveness, precision, and spatial feedback users are accustomed to in the real world. They can also be cumbersome and lack mobility, further limiting usability and immersion. Virtual reality also lacks effective and efficient methods for text input. These limitations restrict its widespread adoption, confining it primarily to entertainment and training simulations. Moreover, because virtual reality is still in its infancy, there is a lack of design guidelines to help designers and developers, leading to decisions driven primarily by intuition. This dissertation investigates the use of wearable and haptic technologies to overcome these limitations and create more intuitive and efficient virtual reality experiences.

First, addressing the challenges of text input in virtual reality, this dissertation begins with an investigation into the impact of key shape and dimension on text entry performance and preference. The aim is to contribute to the standardization of design practices in virtual reality through empirical data. We compare three common key shapes, hexagonal, round, and square, each in both two-dimensional (2D) and three-dimensional (3D) forms. The results indicate that 3D square keys provide superior accuracy and are preferred by users. This suggests that replicating familiar real-world elements can significantly enhance usability, especially when a technology is in its infancy.

Second, building on these findings, we investigate mid-air text input, a common scenario in virtual reality environments. To address the lack of spatial feedback resulting from the absence of a physical surface, we utilize an ultrasonic haptic feedback device. In addition to incorporating the design insights from the first study, we develop three ultrasonic haptic feedback methods: feedback only on keypress, feedback on both touch and keypress, and gradual feedback that increases in intensity as users push down a key. A pilot study revealed that the touch-and-press feedback performed significantly better, both quantitatively and qualitatively. We therefore compare a mid-air keyboard with and without touch-and-press feedback in a user study. Results reveal that haptic feedback improves entry speed by 16% and reduces the error rate by 26%. In addition, most participants felt that it enhances presence and spatial awareness in the virtual world by maintaining higher consistency with the real world, and that it significantly reduces mental demand, effort, and frustration.
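
As a point of reference for how entry speed and error rate are typically measured: the abstract does not state the exact formulations, so the standard text-entry metrics below (words per minute and a minimum-string-distance error rate) are an assumption based on common practice in this literature.

\mathrm{WPM} = \frac{|T| - 1}{S} \times \frac{60}{5}, \qquad \mathrm{Error\ Rate} = \frac{\mathrm{MSD}(P, T)}{\max(|P|, |T|)} \times 100\%

where T is the transcribed string, P the presented string, S the entry time in seconds from the first to the last input event, and MSD the minimum string (edit) distance.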

Third, extending mid-air interaction further, we investigate the effectiveness of different selection gestures augmented with ultrasonic haptic feedback. In a Fitts' law experiment, we compare four commonly used mid-air target selection methods, Push, Tap, Dwell, and Pinch, with two types of ultrasonic haptic feedback: feedback upon selection only, and feedback on both hover and selection. Results reveal that Tap is the fastest, among the most accurate, and one of the least physically and cognitively demanding selection methods. Pinch is relatively fast but error-prone and physically and cognitively demanding. Dwell is the slowest by design, yet the most accurate and the least physically and cognitively demanding. Both haptic feedback methods improve selection performance by increasing users' spatial awareness, and participants perceive the selection methods as faster, more accurate, and physically and cognitively more comfortable when haptic feedback is present. Based on these findings, we provide guidelines for choosing optimal mid-air selection gestures in light of technological limitations and task requirements.
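
For context on the Fitts' law methodology: the abstract does not give the exact formulation, so the conventional Shannon formulation and ISO 9241-9 style throughput shown below are an assumption about the standard analysis such experiments typically use.

ID = \log_2\!\left(\frac{D}{W} + 1\right), \qquad MT = a + b \cdot ID, \qquad TP = \frac{ID_e}{MT}

where D is the distance to the target, W the target width, MT the movement time, a and b empirically fitted constants, and ID_e the effective index of difficulty computed from the observed endpoint spread.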

Fourth, we further extend selection gestures and input methods in virtual reality by developing a custom wearable device that does not occupy the hands, leaving them free for other tasks. We introduce a novel finger-worn device for gesture typing in virtual reality, termed the "digital thimble," which users wear on the index finger. The thimble uses an optical sensor to track finger movement and a pressure sensor to detect touch and contact force. We also introduce Shapeshifter, a technique that enables text entry in virtual reality through gestures and varying contact force on any opaque, diffusely reflective surface, including the human body. In a week-long in-the-wild pilot study of text composition tasks, Shapeshifter yields, on average, 11 words per minute (wpm) on flat surfaces (e.g., a desk), 9 wpm on the lap while sitting, and 8 wpm on the palm and back of the hand while standing. In a simulation study, Shapeshifter achieves 27 wpm for text transcription tasks, outperforming current gesture typing techniques in virtual reality.
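
To illustrate how varying contact force can be mapped to the discrete touch and press states a gesture typing technique relies on, the following minimal Python sketch uses hypothetical thresholds and a hypothetical normalized force reading; it illustrates the general idea only, not the implementation described in the dissertation.

from enum import Enum, auto

class ContactState(Enum):
    HOVER = auto()   # no contact: the finger is moving above the surface
    TOUCH = auto()   # light contact: trace the word-gesture path
    PRESS = auto()   # firm contact: confirm a selection or switch modes

TOUCH_THRESHOLD = 0.10   # hypothetical normalized force thresholds in [0, 1]
PRESS_THRESHOLD = 0.45

def classify_force(normalized_force: float) -> ContactState:
    """Map a normalized pressure-sensor reading to a discrete contact state."""
    if normalized_force < TOUCH_THRESHOLD:
        return ContactState.HOVER
    if normalized_force < PRESS_THRESHOLD:
        return ContactState.TOUCH
    return ContactState.PRESS

# Example: a stream of force samples yields a state sequence that a gesture
# decoder could combine with the optical sensor's motion data.
states = [classify_force(f) for f in (0.02, 0.15, 0.30, 0.60, 0.20, 0.05)]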

Finally, we further extend the functionality of the digital thimble and explore its usability in the context of target selection, sorting, and teleportation. We start with a Fitts' law study that compares the digital thimble with a commercial wearable finger mouse (previously unexplored in virtual reality contexts) and a traditional controller, using two selection methods: press and touch-release. A second user study investigates the devices in sorting and teleportation tasks. While the finger mouse demonstrated superior throughput and task completion speed, the digital thimble showed greater accuracy and precision. Participants also favored the digital thimble for its comfort, convenience, and overall user-friendliness. These findings highlight the digital thimble's potential as a versatile and comfortable input device for virtual reality applications, offering valuable advantages over traditional alternatives.

This dissertation makes significant progress in addressing the core challenges associated with input and interaction in virtual reality, setting the foundation for more intuitive and natural interactions within virtual environments. The insights gained have the potential to enhance the accessibility and applicability of virtual reality across a broad spectrum of fields, such as education, training, collaboration, and healthcare, thereby broadening its impact and utility.
