Computational models in fMRI: from the neural basis of visual recognition memory to predicting brain activation for arbitrary tasks
Abstract/Contents
- Abstract
- This dissertation explores specific and generalized approaches in the computational modeling of human brain data, from localizing brain regions important for visual recognition memory to predicting cortex-wide activation for arbitrary tasks. In Part 1, we investigate recognition memory for naturalistic scenes, focusing on subsequent memory effects (SMEs), which reveal areas in the brain whose activity during an experience is associated with whether that experience is later remembered or forgotten. Behaviorally, people are remarkably consistent in the images they remember or forget, indicating that inherent properties of images heavily influence human vision and memory. Emerging evidence suggests that this phenomenon of image memorability may be an automatic, stimulus-driven, high-level perceptual signal represented along the path from perception to memory. If true, memorability is an item-level confound present in decades of studies showing SMEs in a consistent set of brain regions. To address this issue, we used a recent large-scale functional magnetic resonance imaging (fMRI) dataset in which 8 subjects were each shown up to 30,000 images over the course of a year, with image repetition delays ranging from seconds to months. We found that, after controlling for image memorability and encoding-related reaction time, SMEs nevertheless persisted in the expected brain regions, providing a critical validation of decades of research into visual recognition memory. Additionally, while we replicated prior work showing image memorability effects (IMEs) in high-level visual cortex, we also found IMEs in early visual cortex, a surprising novel finding that was perhaps uniquely detectable with this dataset. Finally, we observed an overlap of SMEs and IMEs in areas thought to be selective for either effect, disrupting a narrative in the literature of a clean neural separation between these stimulus-driven and observer-driven effects.
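The confound-control logic described above can be illustrated with a minimal sketch. All names, dimensions, and data here are hypothetical (simulated, not from the dataset): trial-level activity in a region is residualized against image memorability and encoding reaction time before contrasting later-remembered against later-forgotten trials.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trial-level data (simulated for illustration only):
# per-image encoding activity in one region, later memory outcome,
# and the two item-level confounds controlled for in the text.
n_trials = 500
memorability = rng.random(n_trials)               # intrinsic image memorability
reaction_time = rng.normal(1.0, 0.2, n_trials)    # encoding-related RT
activity = 0.5 * memorability + rng.standard_normal(n_trials)  # ROI activity

# Later memory depends on both encoding activity and memorability.
p = 1 / (1 + np.exp(-(1.5 * activity + 2.0 * memorability - 1.5)))
remembered = (rng.random(n_trials) < p).astype(float)

def sme_contrast(activity, remembered, confounds):
    """SME after confound control: residualize activity against the
    confounds, then contrast residuals for remembered vs. forgotten."""
    X = np.column_stack([np.ones(len(activity))] + confounds)
    beta, *_ = np.linalg.lstsq(X, activity, rcond=None)
    resid = activity - X @ beta
    return resid[remembered == 1].mean() - resid[remembered == 0].mean()

raw_sme = activity[remembered == 1].mean() - activity[remembered == 0].mean()
adj_sme = sme_contrast(activity, remembered, [memorability, reaction_time])
print(f"raw SME: {raw_sme:.3f}, confound-adjusted SME: {adj_sme:.3f}")
```

In this toy simulation the SME survives confound adjustment because activity influences memory directly, mirroring the finding that SMEs persist after controlling for memorability and reaction time.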
Part 2 generalizes the analyses in Part 1 with a more flexible, predictive modeling framework that learns to map from a comprehensive space of psychological functions (perceptual, motor, and cognitive) to cortex-wide patterns of activation. Fundamentally, a deep understanding of the brain basis of cognition should enable accurate prediction of brain activity patterns for any psychological task, based on the cognitive functions engaged by that task. Encoding models (EMs), a class of computational models that predict neural responses from known features (e.g., stimulus properties), have succeeded in circumscribed domains like visual neuroscience, but implementing domain-general EMs that predict brain-wide activity for arbitrary tasks has been limited mainly by the availability of datasets that 1) sufficiently span a large space of psychological functions, and 2) are sufficiently annotated with such functions to allow robust EM specification. To address these issues, we introduce cognitive encoding models (CEMs), which predict cortical activation patterns for arbitrary tasks based on their perceptual, motor, and cognitive demands, as specified by a formal ontology. CEMs were trained and tested using the Multi-Domain Task Battery dataset of 24 subjects engaging in 44 diverse task conditions over the course of 32 fMRI scans (5.5 hours of task data per subject). We found that CEMs predicted cortical activation maps of held-out tasks with high accuracy, and we probed the trained models for insights into a) hierarchical relationships between psychological functions, b) the degree to which brains of different individuals similarly encode such functions, and c) the functional specialization of large-scale resting-state networks. Our implementation and validation of CEMs provides a proof of principle for the utility of formal ontologies in cognitive neuroscience and motivates the use of CEMs in the further testing of cognitive theories.
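The encoding-model idea above can be sketched in miniature. This is not the dissertation's implementation; it is a toy linear model on simulated data, where each task is annotated with a hypothetical ontology feature vector, a ridge regression maps features to voxel-wise activation, and held-out tasks are scored by leave-one-task-out spatial correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 44 task conditions, 20 ontology features,
# 1000 "voxels" (all simulated for illustration).
n_tasks, n_features, n_voxels = 44, 20, 1000

# F[t] = ontology annotation of task t (which psychological functions it engages).
F = rng.random((n_tasks, n_features))
true_W = rng.standard_normal((n_features, n_voxels))
# Y[t] = cortical activation map for task t, simulated as F @ W + noise.
Y = F @ true_W + 0.1 * rng.standard_normal((n_tasks, n_voxels))

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ Y)

# Leave-one-task-out: predict the activation map of each held-out task
# from its ontology features alone.
scores = []
for t in range(n_tasks):
    train = np.delete(np.arange(n_tasks), t)
    W = fit_ridge(F[train], Y[train])
    pred = F[t] @ W
    # Score = spatial correlation between predicted and observed maps.
    scores.append(np.corrcoef(pred, Y[t])[0, 1])

print(f"mean held-out prediction correlation: {np.mean(scores):.2f}")
```

The key design point this sketch shares with the CEM framework is that generalization to unseen tasks is possible only because tasks are expressed in a shared feature space (the ontology), so a task never seen during training still has a predictable activation map.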
Taken together, these modeling approaches set the stage for several fruitful research directions, in particular the development of stimulus-computable encoder-decoder models of brain and behavior that continually use the latest models in machine learning and artificial intelligence to characterize, in a data-driven manner, how representations in the brain are transformed from perception to memory in service of future behavior.
Description
| Type of resource | text |
| --- | --- |
| Form | electronic resource; remote; computer; online resource |
| Extent | 1 online resource. |
| Place | California |
| Place | [Stanford, California] |
| Publisher | [Stanford University] |
| Copyright date | 2023; ©2023 |
| Publication date | 2023 |
| Issuance | monographic |
| Language | English |
Creators/Contributors
| Author | Walters, Jonathon |
| --- | --- |
| Degree supervisor | Poldrack, Russell A |
| Thesis advisor | Poldrack, Russell A |
| Thesis advisor | McClelland, James L |
| Thesis advisor | Wagner, Anthony David |
| Degree committee member | McClelland, James L |
| Degree committee member | Wagner, Anthony David |
| Associated with | Stanford University, School of Humanities and Sciences |
| Associated with | Stanford University, Department of Psychology |
Subjects
| Genre | Theses |
| --- | --- |
| Genre | Text |
Bibliographic information
| Statement of responsibility | Jonathon Walters. |
| --- | --- |
| Note | Submitted to the Department of Psychology. |
| Thesis | Thesis (Ph.D.)--Stanford University, 2023. |
| Location | https://purl.stanford.edu/yb752th1520 |
Access conditions
- Copyright
- © 2023 by Jonathon Walters
- License
- This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).