Modeling virtual environments 3D assets from visual inputs


Abstract/Contents

Abstract
Modeling the physical world is crucial for many applications that significantly impact society, including robotics, autonomous vehicles, and virtual and augmented reality (AR/VR). Research works commonly leverage 3D virtual environments as a proxy for real environments to perform experiments. By using virtual environments, experiments such as robotic simulations, navigation, and interactions can be performed safely and at a lower cost than in real-world environments. 3D geometric and material assets are the building blocks of virtual environments. However, the process of creating 3D assets from scratch is mostly manual, time-consuming, and requires significant expertise. This challenge motivates the development of automated methods for 3D asset generation. We introduce a computer vision framework for generating 3D assets from visual inputs, including images and 3D point clouds of real scenes. Visual inputs from a scene are parsed into individual objects, structures, and materials, which can then be processed into 3D assets. This framework involves four key tasks: 3D semantic segmentation, shape completion, shape modeling, and material prediction. We introduce representations, algorithms, machine learning models, and datasets to address the challenges associated with each of these tasks. This thesis introduces new models for 3D semantic segmentation and shape completion. It also introduces a graph representation and search algorithm that facilitate generating multiple variations of a given shape. Last, it contributes a large-scale dataset of procedural materials along with a neural network model that generates procedural materials from images. The result of this thesis is a set of methods addressing the aforementioned tasks and supporting the creation of realistic virtual environments at scale.

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date 2023; ©2023
Publication date 2023
Issuance monographic
Language English

Creators/Contributors

Author Tchapmi Petse, Lyne
Degree supervisor Savarese, Silvio
Thesis advisor Savarese, Silvio
Thesis advisor Bambos, Nicholas
Thesis advisor Pauly, John (John M.)
Degree committee member Bambos, Nicholas
Degree committee member Pauly, John (John M.)
Associated with Stanford University, School of Engineering
Associated with Stanford University, Department of Electrical Engineering

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Lyne P. Tchapmi.
Note Submitted to the Department of Electrical Engineering.
Thesis Thesis (Ph.D.)--Stanford University, 2023.
Location https://purl.stanford.edu/vx419pw4129

Access conditions

Copyright
© 2023 by Lyne Tchapmi Petse
License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license (CC BY-NC-ND 3.0).
