eScholarship
Open Access Publications from the University of California

UC San Diego Electronic Theses and Dissertations

Seamless and Efficient Interactions within a Mixed-Dimensional Information Space

Abstract

Mediated by today's visual displays, an information space allows users to discover, access and interact with a wide range of digital and physical information. The information presented in this space may be digital, physical or a blend of both, and it appears across different dimensions - such as text, images, 3D content and physical objects embedded within the real-world environment. Navigating the information space often involves interacting with mixed-dimensional entities visually represented in both 2D and 3D, and at times transitioning among entities represented in different dimensions. We introduce the concept of a mixed-dimensional information space, which encompasses entities represented in both 2D and 3D and promises to harness the strengths of entities visually represented in different dimensions.

Interactions within the mixed-dimensional information space should be seamless and efficient: users should be able to focus on their primary tasks without being distracted by interactions with, or transitions between, entities. While incorporating 3D representations into the mixed-dimensional information space offers intuitive and immersive ways to interact with complex information, it is important to address the potential seams and inefficiencies that arise when interacting with both 2D and 3D entities. For example, in a mixed-dimensional information space rendered on a 2D display, navigating the viewing camera with a mouse to explore and identify relevant perspectives - guided by 2D content such as text and images - can be tedious. Creating design feedback on a 3D model can be challenging when trying to find suitable reference images that align with a specific perspective of the 3D model. When navigating a mixed-dimensional information space viewed through Mixed Reality (MR), seams can arise from suboptimal placement of virtually rendered 2D content within the physical 3D space. This can cause inefficiencies in an instructional MR experience, where text- and image-based guidance is often rendered in mid-air to guide novices through procedural tasks, as users must repeatedly switch between the 2D instructional content and the physical 3D workspace. Navigation can also place users inside the information space itself. In the context of analyzing medical images in a Virtual Reality (VR) headset, seams can arise from the dimensional incongruence between the visually rendered content and the user's mental focus.

Grounded in real-world applications for both general and specialized users - creating reference images for 3D design feedback, carrying out procedural tasks with instructions, and exploring and contouring medical images - this dissertation introduces new interactive techniques and systems that realize seamless and efficient interactions within the mixed-dimensional information space. While 3D is often among the key representations within such a space, the interactive experiences introduced in this dissertation include those rendered on a traditional 2D display and those experienced through emerging extended reality headsets. Grounded in user-centered design, this dissertation introduces three interactive systems: MemoVis, which uses emerging generative AI to help users create reference images for 3D design feedback; PaperToPlace, which demonstrates how paper-based instruction documents can be transformed and spatialized into a context-aware MR experience; and VRContour, which explores how the contour delineation workflow - an indispensable task in today's radiotherapy treatment planning in the field of radiation oncology - can be brought into VR. The approaches and design insights presented in this dissertation can guide future efforts in designing interactive workflows that enable users to engage more efficiently with mixed-dimensional information spaces, where content is visually represented in both 2D and 3D.