Computational analysis and modeling of expressive timing in music performance


Abstract/Contents

Abstract
This thesis presents a machine learning model of expressive performance of piano music (specifically of Chopin Mazurkas) and a critical analysis of its output based upon statistical analyses of the musical scores and of recorded performances. Given the multidimensionality of the task, generating compelling computer-generated interpretations of a musical score represents a formidable challenge, and a significant goal of music information retrieval (MIR) and computer music research. Here I seek to characterize the problems and suggest solutions. Performers' distortion of notated rhythms in a musical score is a significant factor in the production of convincingly expressive musical interpretations. Sometimes exaggerated and sometimes subtle, these distortions are driven by a variety of factors, including schematic features (both structural, such as phrase boundaries, and surface events, such as recurrent rhythmic patterns), as well as relatively rare veridical events that characterize the individuality and uniqueness of a particular piece. Performers tend to adopt similar pervasive approaches to interpreting schemas, resulting in common performance practices, while often formulating less common approaches to the interpretation of veridical events; furthermore, some performers choose anomalous interpretations of schemas. This thesis presents statistical analyses of timings of recorded human performances of selected Mazurkas by Frédéric Chopin, including a dataset of 456 expressive piano performances of historical piano rolls that I automatically translated to MIDI format, as well as timing data of acoustic recordings from an available collection. I compared these analyses to performances of the same works generated by a neural network trained on recorded human performances of the entire corpus.
This thesis demonstrates that while machine learning succeeds, to some degree, in the expressive interpretation of schemas, convincingly capturing performance characteristics remains very much a work in progress.

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date 2021; ©2021
Publication date 2021
Issuance monographic
Language English

Creators/Contributors

Author Shi, Zhengshan
Degree supervisor Berger, Jonathan, 1954-
Thesis advisor Berger, Jonathan, 1954-
Thesis advisor Arul, Kumaran
Thesis advisor Smith, Julius O. (Julius Orion)
Degree committee member Arul, Kumaran
Degree committee member Smith, Julius O. (Julius Orion)
Associated with Stanford University, Department of Music

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Zhengshan Shi.
Note Submitted to the Department of Music.
Thesis Thesis (Ph.D.)--Stanford University, 2021.
Location https://purl.stanford.edu/yf745wn1098

Access conditions

Copyright
© 2021 by Zhengshan Shi
License
This work is licensed under a Creative Commons Attribution 3.0 Unported license (CC BY).
