Data-driven tools for scene modeling



Abstract
Detailed digital environments are crucial to achieving a sense of immersion in video games, virtual worlds, and cinema. The modeling tools currently used to create these environments rely heavily on single-object modeling: designers must repeatedly search for, place, align, and scale each new object in a scene that may contain thousands of models. This style of scene design is enabled by the large collections of 3D models which are becoming available on the web. While these databases make it possible for designers to incorporate existing content into new scenes, the process can be slow and tedious: the rate at which we can envision new content greatly exceeds the rate at which we can realize these imagined constructs as digital creations. In this dissertation, we aim to alleviate this bottleneck by developing tools that accelerate the modeling of 3D scenes. We rely upon a data-driven approach, where we learn common scene modeling design patterns from examples of 3D environments. We show which properties of scene databases such as Google 3D Warehouse are most important for data-driven tasks, and how to transform existing scene databases into a form that is amenable to pattern learning. We also describe a custom scene modeling program which serves as a testbed for the modeling tools we develop, and which we use to create a curated corpus of scenes that enables the development of powerful modeling tools. Our tools require the ability to compare arrangements of objects. We present several techniques to do so, including kernel density estimation and graph kernels, and show how these approaches can be applied to produce practical modeling tools. We use this machinery to support basic modeling operations such as searching for or orienting single models. We show how to use a corpus of 3D scenes to automatically categorize and align collections of objects by grouping them into contextual categories.
Finally, we combine these contextual categories and our arrangement comparison algorithm to enable example-based 3D scene synthesis, where the artist provides a small number of examples and we generate a diverse and plausible set of similar scenes. All of the methods we develop use a data-driven approach in order to enable the rapid construction of large virtual environments without the need for an artist to try to specify the "rules of design" for each possible domain.
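To give a flavor of the arrangement comparison the abstract refers to, the sketch below scores a candidate object placement with a simple kernel density estimate over placements observed in a scene corpus. This is a generic illustration, not the dissertation's actual algorithm; the lamp/desk scenario, the sample offsets, and the bandwidth value are all invented for the example.

```python
import math

def kde_score(point, samples, bandwidth=0.1):
    """Mean of isotropic Gaussian kernels centered at each observed sample."""
    dims = len(point)
    norm = (2.0 * math.pi * bandwidth ** 2) ** (dims / 2.0)
    total = 0.0
    for sample in samples:
        sq_dist = sum((p - s) ** 2 for p, s in zip(point, sample))
        total += math.exp(-sq_dist / (2.0 * bandwidth ** 2))
    return total / (len(samples) * norm)

# Hypothetical (x, z) offsets of a lamp relative to a desk, pooled
# from a scene corpus (values made up for illustration).
offsets = [(0.30, 0.10), (0.28, 0.12), (0.35, 0.08),
           (0.31, 0.15), (0.27, 0.09), (0.33, 0.11)]

plausible = kde_score((0.30, 0.11), offsets)    # near the observed cluster
implausible = kde_score((2.00, 2.00), offsets)  # far from any observation
assert plausible > implausible
```

A modeling tool could use such a density as a plausibility score: placements in high-density regions agree with patterns seen in the corpus, while low-density placements are flagged as unusual.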

Description

Type of resource text
Form electronic; electronic resource; remote
Extent 1 online resource.
Publication date 2013
Issuance monographic
Language English

Creators/Contributors

Associated with Fisher, Matthew
Associated with Stanford University, Department of Computer Science.
Primary advisor Hanrahan, P. M. (Patrick Matthew)
Thesis advisor Hanrahan, P. M. (Patrick Matthew)
Thesis advisor Klemmer, Scott
Thesis advisor Tversky, Barbara

Subjects

Genre Theses

Bibliographic information

Statement of responsibility Matthew Fisher.
Note Submitted to the Department of Computer Science.
Thesis Ph.D. Stanford University 2013
Location electronic resource

Access conditions

Copyright
© 2013 by Matthew David Fisher
License
This work is licensed under a Creative Commons Attribution Non Commercial 3.0 Unported license (CC BY-NC).
