Cinematic virtual reality with head-motion parallax

Abstract/Contents

Abstract
Even as virtual reality has rapidly gained popularity over the past decade, visual fatigue, an imperfect sense of immersion, and nausea remain significant barriers to its wide adoption. A key cause of this discomfort is the failure of current technology to render accurate perspective changes, or parallax, resulting from the viewer's head motion. This mismatch induces a visual-vestibular conflict. Moreover, rendering accurate head-motion parallax is essential for making the computer-generated experience immersive and more like reality. The lack of this perceptual cue degrades the feeling of presence and makes the overall experience less compelling. This work addresses the issue by proposing an end-to-end framework that can capture, store, and render natural scenery with accurate head-motion parallax. At the core of the problem is the trade-off between storing enough scene information to facilitate fast, high-fidelity rendering of head-motion parallax and keeping the representation compact enough to be practically viable. To navigate this trade-off, we explore several novel scene representations, compare them with qualitative and quantitative evaluations, and discuss their advantages and disadvantages. We demonstrate the practical applicability of the proposed representations by developing an end-to-end virtual reality system that can render real-time head-motion parallax for natural environments. To that end, we build a two-level camera rig and present an algorithm to construct the proposed representations using the images captured by our camera system. Furthermore, we develop a custom OpenGL renderer that uses the constructed intermediate representations to synthesize full-resolution stereo frames in a head-mounted display, updating the rendered perspective in real time based on the viewer's head position and orientation. Finally, we propose a theoretical model for understanding the disocclusion behavior in depth-based novel-view synthesis and analyze the impact of the choice of intermediate representation and camera geometry on the synthesized views in terms of quantitative image quality metrics and the occurrence of disocclusion holes.
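To make the disocclusion behavior described in the abstract concrete, below is a minimal sketch of depth-image-based reprojection: a single RGB-D view is forward-warped to a viewpoint shifted by a small head translation, and target pixels that receive no source pixel are flagged as disocclusion holes. This is an illustrative toy model only, assuming a pinhole camera and pure translation; the function name `reproject_with_depth` and its parameters are hypothetical and do not reflect the thesis's actual scene representations or OpenGL renderer.

```python
import numpy as np

def reproject_with_depth(rgb, depth, K, t, eps=1e-6):
    """Forward-warp an RGB-D view to a translated viewpoint.

    rgb   : (H, W, 3) color image
    depth : (H, W) metric depth along the camera z-axis
    K     : (3, 3) pinhole intrinsics
    t     : (3,) head translation in camera coordinates
    Returns the warped image and a boolean mask of disocclusion holes.
    """
    H, W = depth.shape

    # Back-project every pixel to a 3D point in the source camera frame.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    pts = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)

    # Shift the points into the translated camera frame (rotation omitted).
    pts_new = pts - t[None, :]

    # Project back to pixel coordinates in the new view.
    proj = (K @ pts_new.T).T
    z = proj[:, 2]
    valid = z > eps
    uv = np.round(proj[valid, :2] / z[valid, None]).astype(int)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    uv, z_in = uv[inside], z[valid][inside]
    colors = rgb.reshape(-1, 3)[valid][inside]

    # Z-buffered splatting: where several source pixels land on the same
    # target pixel, the nearest surface wins.
    out = np.zeros_like(rgb)
    zbuf = np.full((H, W), np.inf)
    for (x, y), zz, c in zip(uv, z_in, colors):
        if zz < zbuf[y, x]:
            zbuf[y, x] = zz
            out[y, x] = c

    # Target pixels that no source pixel reached are disocclusion holes.
    hole_mask = np.isinf(zbuf)
    return out, hole_mask
```

In this toy model, holes open up behind depth discontinuities as the head translation grows, which illustrates the abstract's point that the choice of intermediate representation and capture-camera geometry governs how many disocclusion holes a viewer can expose.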

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource
Place [Stanford, California]
Publisher [Stanford University]
Copyright date ©2020
Publication date 2020
Issuance monographic
Language English

Creators/Contributors

Author Thatte, Jayant Purushottam
Degree supervisor Girod, Bernd
Thesis advisor Girod, Bernd
Thesis advisor Wandell, Brian A.
Thesis advisor Wetzstein, Gordon
Degree committee member Wandell, Brian A.
Degree committee member Wetzstein, Gordon
Associated with Stanford University, Department of Electrical Engineering

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Jayant Thatte
Note Submitted to the Department of Electrical Engineering
Thesis Ph.D., Stanford University, 2020
Location https://purl.stanford.edu/gd337dj1396
Location https://doi.org/10.25740/gd337dj1396

Access conditions

Copyright
© 2020 by Jayant Purushottam Thatte
License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license (CC BY-NC-SA 3.0).
