Physics-based sound synthesis using time-domain methods

Abstract
Physics-based sound synthesis is an increasingly popular technique in computer graphics to automatically generate realistic sounds associated to (otherwise silent) visual events, such as a spolling green plastic bowl or a dripping faucet. Previous work has shown very promising results; however, these algorithms still suffer from several shortcomings, such as long precomputation time or difficult integration for complex sound sources. In this thesis, we explore new simulation frameworks that leverage time-domain methods and insights to improve both the quality and speed of physics-based sound synthesis algorithms. First, we introduce KleinPAT, a new time-domain algorithm that rapidly estimates acoustic transfer fields of a vibrating rigid object (modeled by the linear modal model). Instead of estimating the transfer fields by (sequentially) solving the frequency-domain Helmholtz equations, our method partitions all vibration modes into chords using optimal mode conflation, performs a single time-domain wave simulation for each chord, and then separates the per-mode transfer fields using a deconflation solver. We show that our method achieves thousand-fold speedup compared to the more traditional fast boundary element methods, and maintains accuracy suitable for sound synthesis. Second, we present an integrated time-domain acoustic wavesolver to support sound rendering of a wide variety of physics-based simulation models and computer animated phenomena. We target high-quality offline rendering, and introduce methods including a sharp-interface boundary handling method, the acoustic shaders abstraction to integrate various sound sources, and a parallel-in-time synthesis algorithm for this task. We demonstrate the generality and quality of the solver by rendering sound sources of dynamic, multi-physics nature, such as vibrating solids, thin shells, water, and character. 
Finally, we switch gears and introduce a new method that enriches standard rigid-body impact models with spatially varying coefficient-of-restitution maps, or Bounce Maps. We demonstrate that the commonly accepted hypothesis of a single, constant restitution value per object is wildly incorrect, and propose a fast precomputation algorithm to sample and compute these maps. The resulting Bounce Maps can be queried in negligible time and can easily be used to enhance existing solvers. Although not directly related to sound synthesis, we show that a dominant factor behind the varying restitution response is post-impact vibration, which is also what produces impact sound.
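The Bounce Map idea above can be pictured as a precomputed lookup from surface location to restitution coefficient, consulted at contact time. The sketch below is a hypothetical nearest-sample version for illustration only; the actual thesis precomputation (impact simulation per surface sample) is not shown, and all names here are assumptions.

```python
import numpy as np

def build_bounce_map(points, restitutions):
    """Pair precomputed surface sample points (N x 3) with their
    measured restitution coefficients e in [0, 1]."""
    return np.asarray(points, dtype=float), np.asarray(restitutions, dtype=float)

def query_restitution(bounce_map, contact_point):
    """Nearest-sample lookup: return e at the stored sample closest
    to the contact point (negligible cost at solver runtime)."""
    pts, e = bounce_map
    i = int(np.argmin(np.linalg.norm(pts - np.asarray(contact_point, dtype=float), axis=1)))
    return float(e[i])

# Two hypothetical samples: a stiff corner bounces more than a floppy face.
bm = build_bounce_map([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], [0.3, 0.9])
e = query_restitution(bm, [0.9, 0.0, 0.0])
```

A rigid-body solver would then scale the outgoing normal velocity at the contact by the queried value, v_out = -e * v_in, instead of using one constant e for the whole object.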

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date ©2019
Publication date 2019
Issuance monographic
Language English

Creators/Contributors

Author Wang, Jui-Hsien
Degree supervisor James, Doug L.
Thesis advisor James, Doug L.
Thesis advisor Hanrahan, P. M. (Patrick Matthew)
Thesis advisor Smith, Julius O. (Julius Orion)
Degree committee member Hanrahan, P. M. (Patrick Matthew)
Degree committee member Smith, Julius O. (Julius Orion)
Associated with Stanford University, Institute for Computational and Mathematical Engineering.

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Jui-Hsien Wang.
Note Submitted to the Institute for Computational and Mathematical Engineering.
Thesis Thesis (Ph.D.), Stanford University, 2019.
Location electronic resource

Access conditions

Copyright
© 2019 by Jui-Hsien Wang
