Algorithms for high-performance brain-computer interfaces

Abstract/Contents

Abstract
Brain-computer interfaces (BCIs) are technologies that enable people to interact with devices by directly translating neural activity into command signals. Noninvasive methods such as EEG and fMRI offer minimal-risk access to brain signals but are limited by low spatial and temporal resolution, respectively. By contrast, intracortical BCIs (iBCIs) provide recordings of local neuronal ensembles with unparalleled spatial and temporal resolution, resulting in some of the highest-performance communication systems to date. While promising, several key challenges remain for wider adoption of these technologies. First, iBCIs in particular suffer from signal instabilities that cause performance degradation over time; a key challenge for clinical translation of iBCI cursor systems is ensuring robust, long-term control for end users without manual retraining. Second, while these technologies have enabled point-and-click typing interfaces approaching 8-10 words per minute, an ideal interface would match the bandwidth of conversational speech (roughly 150 words per minute). Third, intracortical systems require brain surgery, which carries inherent risks due to the invasive nature of the procedure; noninvasive approaches to silent speech decoding may therefore be more suitable for a subset of patients. In this work, we discuss three advances that tackle these key areas by leveraging modern machine learning methods alongside high signal-to-noise ratio (SNR) recording systems. First, we present an unsupervised recalibration procedure for improving cursor BCI robustness. Using data from our clinical trial participant, we characterize the extent and timescales of neural feature drift. We demonstrate, using both offline data and simulation models, that existing state-of-the-art procedures are seemingly ill-equipped to handle long-term drift. We then introduce a novel procedure that leverages task structure to recalibrate the system automatically, and demonstrate superior performance both in silico and in closed-loop sessions with our participant. Second, we lay the groundwork for intracortical speech BCI efforts by prototyping a system offline using microelectrode arrays in dorsal motor cortex. We demonstrate that, despite being located in a nontraditional brain area for speech decoding, our arrays provide high SNR compared to existing approaches. Third, we leverage skin-like flexible electronics and deep learning for silent speech decoding from electromyography (EMG) signals.
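(The abstract above describes unsupervised, task-structure-based decoder recalibration only at a high level. The following is a minimal illustrative sketch, not the thesis's actual procedure, of the general family of retrospective recalibration methods it builds on: intended cursor velocities are inferred post hoc by assuming the user aimed at the eventually acquired target, and a linear velocity decoder is refit on those inferred labels. All names and parameters here are hypothetical.)

import numpy as np

def infer_intended_velocities(cursor_pos, target_pos, speed=1.0):
    """Illustrative retrospective labeling: assume the user always intended to
    move straight toward the acquired target, so the 'label' at each time step
    is a unit vector pointing from cursor to target."""
    vec = target_pos - cursor_pos                 # (T, 2) cursor-to-target vectors
    norms = np.linalg.norm(vec, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                       # avoid division by zero at the target
    return speed * vec / norms                    # inferred intended velocities, (T, 2)

def recalibrate_decoder(neural_features, cursor_pos, target_pos, lam=1e-3):
    """Refit a ridge-regression velocity decoder on the inferred labels.
    neural_features: (T, N) binned firing rates; returns decoder weights W of shape (N, 2)."""
    Y = infer_intended_velocities(cursor_pos, target_pos)
    X = neural_features
    # Closed-form ridge solution: W = (X^T X + lam * I)^-1 X^T Y
    N = X.shape[1]
    W = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ Y)
    return W

if __name__ == "__main__":
    # Synthetic shapes only (not real neural recordings), just to show the call pattern.
    T, N = 500, 96
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(T, N))
    cursor = rng.normal(size=(T, 2))
    target = np.tile(np.array([[5.0, 5.0]]), (T, 1))
    W = recalibrate_decoder(feats, cursor, target)
    print(W.shape)  # (96, 2)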

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date ©2023
Publication date 2023
Issuance monographic
Language English

Creators/Contributors

Author Wilson, Guy
Degree supervisor Druckmann, Shaul
Thesis advisor Druckmann, Shaul
Thesis advisor Gardner, Justin, 1971-
Thesis advisor Linderman, Scott
Thesis advisor Sussillo, David
Degree committee member Gardner, Justin, 1971-
Degree committee member Linderman, Scott
Degree committee member Sussillo, David
Associated with Stanford University, School of Medicine
Associated with Stanford University, Neurosciences Program

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Guy Wilson.
Note Submitted to the Neurosciences Program.
Thesis Thesis (Ph.D.)--Stanford University, 2023.
Location https://purl.stanford.edu/yn442ff4231

Access conditions

Copyright
© 2023 by Guy Wilson
License
This work is licensed under a Creative Commons Attribution 3.0 Unported license (CC BY).
