Vision modeling tools for evaluating next-generation displays

Abstract

Nearly all metrics for evaluating display image quality operate on the RGB representations used to control conventional flat displays. One computes image quality by comparing an idealized version of the RGB image (reference) with a version rendered by the display under design (test). This framework has been important historically, and it remains suitable for developing metrics that evaluate several aspects of conventional flat display image quality, such as spatial resolution, intensity quantization, and color gamut. The RGB methods have several limitations, however. First, standard RGB image quality metrics do not include display characterization, which limits their ability to generalize across ambient viewing conditions, display size, viewing distance, or biological differences between viewers. Second, the RGB representation does not extend to advanced displays where the image is delivered using more complex technology, such as light field, augmented reality, or multi-planar displays. To evaluate the quality of these 3D display technologies, it is necessary to develop a new framework that accommodates the diversity of emerging display technologies. In this dissertation, I adopt a universal representation at the initial stages of the visual pathways by representing display images as retinal photoreceptor (cone) excitations. This is a display-independent representation: all display technologies initiate vision with the retinal images incident on the two eyes. The ISETBio and ISET3d software provides a platform for developing next-generation image quality metrics for displays based on cone excitations. The software includes tools for computing not only the retinal image, but also the cone excitations in response to display images and 3D scenes. In this dissertation, I describe the development of these two toolboxes to model and evaluate next-generation displays that cannot be fully represented by RGB data. First, the existing vision modeling tools were expanded from 2D image formation (planar, 2D stimuli) to 3D image formation (3D virtual models) through the creation and development of ISET3d. Next, the modeled cone excitations from ISETBio/ISET3d were used as input to existing image quality metrics, improving generalization and extending the calculations to novel displays. Lastly, the expanded vision modeling tools were used to model a next-generation, multi-focal display.
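
The pipeline the abstract describes, from display RGB through a display-characterized spectral radiance to cone excitations, can be illustrated with a short sketch. The Python/NumPy code below is a minimal, hypothetical illustration, not the ISETBio API (ISETBio itself is a MATLAB toolbox): the display primaries and the L, M, S cone fundamentals are placeholder Gaussians standing in for measured spectra, and the eye's optics and cone-mosaic sampling that ISETBio models are omitted. It computes cone excitations for a reference and a test image and scores their difference in cone-excitation space.

    # Minimal, hypothetical sketch (not the ISETBio API): map a display
    # RGB image through the display's spectral primaries to radiance,
    # project onto L, M, S cone fundamentals, and compare test vs.
    # reference in cone-excitation space. All spectra are placeholders.
    import numpy as np

    wls = np.arange(400, 701, 10)  # wavelength samples, nm

    def gaussian(center, width):
        """Placeholder spectral curve standing in for measured data."""
        return np.exp(-0.5 * ((wls - center) / width) ** 2)

    # Display characterization: spectral power of the R, G, B primaries
    # at each wavelength (n_wl x 3).
    primaries = np.stack([gaussian(620, 20),   # R
                          gaussian(540, 25),   # G
                          gaussian(460, 20)],  # B
                         axis=1)

    # Cone fundamentals (n_wl x 3): L, M, S spectral sensitivities.
    cones = np.stack([gaussian(565, 40),       # L
                      gaussian(535, 40),       # M
                      gaussian(445, 25)],      # S
                     axis=1)

    def cone_excitations(rgb):
        """Linear RGB (H x W x 3) -> LMS cone excitations (H x W x 3).

        Radiance at each pixel is the RGB-weighted sum of the display
        primaries; excitations are its inner product with the cone
        fundamentals (a discrete integral over wavelength).
        """
        radiance = rgb @ primaries.T   # H x W x n_wl
        return radiance @ cones        # H x W x 3

    # Reference (idealized) vs. test (as-rendered) image; random
    # placeholders stand in for real content.
    rng = np.random.default_rng(0)
    reference = rng.random((64, 64, 3))
    test = np.clip(reference + 0.01 * rng.standard_normal(reference.shape), 0, 1)

    # A display-independent error score on the cone representation.
    err = np.sqrt(np.mean((cone_excitations(reference) - cone_excitations(test)) ** 2))
    print(f"RMSE in cone-excitation space: {err:.4f}")

Because the comparison is made after display characterization, the same score applies, in principle, to any technology that delivers a spectral radiance to the eyes; ISETBio and ISET3d replace the placeholder spectra with calibrated display models and add the optics and photoreceptor-sampling stages that this sketch leaves out.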

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource
Place [Stanford, California]
Publisher [Stanford University]
Copyright date ©2020
Publication date 2020
Issuance monographic
Language English

Creators/Contributors

Author Lian, Trisha Pei-Wei
Degree supervisor Wandell, Brian A
Thesis advisor Wandell, Brian A
Thesis advisor Girod, Bernd
Thesis advisor Wetzstein, Gordon
Degree committee member Girod, Bernd
Degree committee member Wetzstein, Gordon
Associated with Stanford University, Department of Electrical Engineering.

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Trisha Pei-Wei Lian
Note Submitted to the Department of Electrical Engineering.
Thesis Ph.D., Stanford University, 2020.
Location electronic resource

Access conditions

Copyright
© 2020 by Trisha Pei-Wei Lian
License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).
