Using generative deep learning to create high-quality models from 3D scans
Abstract/Contents
- Abstract
- With recent developments in both commodity range sensors and mixed reality devices, capturing and creating 3D models of the world around us has become increasingly important. As the world around us lives in a three-dimensional space, such 3D models will not only facilitate capture and display for content creation but also provide a basis for fundamental scene understanding, from semantic understanding to virtual interactions. Leveraging data from commodity range sensors to reconstruct 3D scans of a scene has shown significant promise towards 3D model creation of real-world environments. However, the quality of reconstructed 3D scans has yet to reach that of artist-created 3D models -- in particular, 3D scans always suffer from incompleteness, due to the many occlusions in real-world scenes as well as physical limitations of range sensors. Such incomplete 3D models are not only visually unsuitable but also provide only a limited basis for higher-level scene reasoning. This work focuses on the task of scan completion: from a 3D scan with partial geometry due to occlusions and scanning patterns, we aim to infer the missing geometry. We introduce a generative formulation for scan completion, leveraging deep learning techniques to create high-quality, complete models from 3D scans. We approach this problem as a conditional generative task, where we condition on an input partial scan and aim to learn 'part'-wise similarity between scans to infer the complete model. First, we begin by focusing on the more constrained problem of completing scans of isolated shapes. We then expand upon this to design a generative approach for completion of general 3D scans of arbitrary scenes, directly addressing the challenge of varying scene sizes in 3D.
This not only provides scan completion at scale, producing geometrically complete 3D models, but also provides a basis for higher-level scene reasoning such as that required for virtual interactions or physical simulations.
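To make the conditional-completion setup concrete, the sketch below models a partial scan as a voxel occupancy grid in which unobserved voxels are marked unknown, and a completion function fills those voxels conditioned on the observed geometry. All names and the fill rule here are illustrative assumptions; the thesis uses a learned 3D convolutional generative network, not this placeholder heuristic.

```python
# Hypothetical sketch of the scan-completion interface: a voxel grid with
# observed FREE/OCCUPIED voxels and UNKNOWN holes, plus a stand-in
# "generator" that fills unknowns conditioned on observed neighbors.
# (Placeholder rule only -- the actual method is a learned 3D network.)

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def complete_scan(grid):
    """Fill UNKNOWN voxels conditioned on observed 6-neighbors.

    Placeholder rule: an unknown voxel becomes OCCUPIED if any observed
    6-neighbor is occupied, else FREE. A learned generative model would
    replace this rule in practice.
    """
    dz, dy, dx = len(grid), len(grid[0]), len(grid[0][0])
    out = [[[v for v in row] for row in plane] for plane in grid]
    for z in range(dz):
        for y in range(dy):
            for x in range(dx):
                if grid[z][y][x] != UNKNOWN:
                    continue  # keep observed geometry untouched
                neighbors = []
                for oz, oy, ox in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    nz, ny, nx = z + oz, y + oy, x + ox
                    if 0 <= nz < dz and 0 <= ny < dy and 0 <= nx < dx:
                        neighbors.append(grid[nz][ny][nx])
                out[z][y][x] = OCCUPIED if OCCUPIED in neighbors else FREE
    return out

# A tiny 2x2x2 partial "scan": one occupied voxel observed, one unknown.
scan = [[[OCCUPIED, UNKNOWN], [FREE, FREE]],
        [[FREE, FREE], [FREE, FREE]]]
done = complete_scan(scan)
print(done[0][0][1])  # unknown voxel next to occupied geometry -> 1
```

The point of the sketch is the interface, not the rule: completion takes the partial observation as conditioning input and outputs a grid with no remaining unknowns, which is what enables the downstream scene reasoning the abstract describes.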
Description
Type of resource | text |
---|---|
Form | electronic resource; remote; computer; online resource |
Extent | 1 online resource. |
Place | California |
Place | [Stanford, California] |
Publisher | [Stanford University] |
Copyright date | 2018; ©2018 |
Publication date | 2018 |
Issuance | monographic |
Language | English |
Creators/Contributors
Author | Dai, Angela | |
---|---|---|
Degree supervisor | Hanrahan, P. M. (Patrick Matthew) | |
Thesis advisor | Hanrahan, P. M. (Patrick Matthew) | |
Thesis advisor | Funkhouser, Thomas | |
Thesis advisor | Savarese, Silvio | |
Degree committee member | Funkhouser, Thomas | |
Degree committee member | Savarese, Silvio | |
Associated with | Stanford University, Computer Science Department. |
Subjects
Genre | Theses |
---|---|
Genre | Text |
Bibliographic information
Statement of responsibility | Angela Dai. |
---|---|
Note | Submitted to the Computer Science Department. |
Thesis | Thesis (Ph.D.)--Stanford University, 2018. |
Location | electronic resource |
Access conditions
- Copyright
- © 2018 by Angela Dai
- License
- This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).