Please note: This PhD defence will be given online.
Jeremy Hartmann, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Daniel Vogel
We design, build, and investigate interaction methods that merge the real with the virtual. An initial investigation examines spatial augmented reality (SAR) and its effects on pointing with a real mobile phone. A study reveals a set of trade-offs among the raycast, viewport, and direct pointing techniques.
To further investigate the manipulation of virtual content within a SAR environment, we design an interaction technique based on the distance at which a user holds a mobile phone from their body. The technique lets users push virtual content from a mobile phone to an external SAR environment, interact with that content, rotate, scale, and translate it, and pull it back onto the phone, all while ensuring a seamless transition between the mobile phone and the virtual SAR environment.
To investigate the issues that arise when the physical environment is hidden by a fully immersive VR HMD, we design and evaluate a system that merges a real-time 3D reconstruction of the real world with a virtual environment. This allows users to freely move, manipulate, observe, and communicate with people and objects in their physical surroundings without losing their sense of immersion or presence inside the virtual world.
A study with VR users demonstrates the affordances the system provides and how it can enhance current VR experiences. We then move to AR to investigate the limitations of optical see-through HMDs and the problem of communicating the internal state of the virtual world to unaugmented users. To address these issues and enable new ways to visualize, manipulate, and share virtual content, we propose a system that combines an optical see-through HMD with a wearable SAR projector. Demonstrations showcase ways to use the projected and head-mounted displays together, such as expanding the field of view, distributing content across depth surfaces, and enabling bystander collaboration.
We then turn to videogames to investigate how spectatorship of these virtual environments can be enhanced through expanded video rendering techniques. We extract and combine additional data from the live game to form a cumulative 3D representation of the game environment, enabling each spectator to individually control a personal view into the stream while in VR. A study shows that users prefer spectating in VR over a comparable desktop rendering.
To join this PhD defence on Zoom, please go to https://uwaterloo.zoom.us/j/97651426463?pwd=cy8rbVcvakdGVjFnQlhzK2dYTEJNQT09.