Advanced methods for photon attenuation and scatter correction for combined positron emission tomography and magnetic resonance imaging (PET/MRI)


Abstract/Contents

Abstract
Combined positron emission tomography and magnetic resonance (PET/MR) imaging pairs the molecular information from PET with the structural information from MR, allowing regions that show biological pathways of disease on PET to be localized with respect to anatomical structures in the body seen on MR. To obtain qualitatively and quantitatively accurate PET images, the PET data must be corrected for photon attenuation due to both patient tissue and hardware components in the field-of-view (FOV) of the PET/MR system. Unlike in combined PET/CT, this correction is challenging to perform in PET/MR because the MR signal is not directly related to the attenuation properties of the tissues. Traditional methods to correct for photon attenuation include atlas- and segmentation-based methods; however, these methods have limitations, such as requiring additional scan time or not being patient specific. This thesis presents alternative attenuation correction methods based on machine learning and traditional image processing. In the machine learning case, scatter correction follows as a natural consequence of the methodology.

Previous groups have shown promising results generating synthetic CT images from MR data for photon attenuation correction (AC) using a U-shaped convolutional neural network (U-Net) or conditional generative adversarial networks (cGANs). The synthetic CT images can be used to correct for attenuation during image reconstruction in the same manner as in PET/CT. Other groups have shown the ability to generate CT images from uncorrected PET data. In this thesis, we build upon this work and conclude that the most accurate synthetic CT images are obtained when both uncorrected PET data and MR data are included in the input to a cGAN model. Our proposed attenuation and scatter corrected (ASC) solution produced PET images with superior image quality compared to the commercially available atlas method, with an average structural similarity index (SSIM) of 0.941 ± 0.004 vs. 0.911 ± 0.006 and an average peak signal-to-noise ratio (PSNR) of 47.3 ± 0.4 vs. 44.3 ± 0.3 for a model that uses image data from two MR pulse sequences as well as uncorrected PET data as input.

Our second proposed method directly generates ASC PET images from non-attenuation and non-scatter corrected (NASC) PET data and/or MR data, avoiding the conventional approach of generating a pseudo-CT followed by image reconstruction. The mean pixel value differences between the generated ASC PET and the gold-standard ASC PET obtained from a PET/CT scan of the same patient, for the four models we trained, were 1.5% ± 0.8% (MR Dixon Water input), 2% ± 1% (MR Diffusion Weighted Imaging input), 1.0% ± 0.8% (NASC PET input), and 0.9% ± 0.6% (MR Dixon Water, MR Dixon Fat, and NASC PET input).

We also describe the development and results of three novel alternative GAN methods, a vision transformer GAN (ViT-GAN), a shifted-window GAN (Swin-GAN), and an attention-gated Pix2Pix (AG-Pix2Pix), that directly generate ASC PET images from single- or multi-modality image inputs. The average PSNR and SSIM values were 39.1 ± 5.5 and 0.98 (IQR 0.98-1.00), respectively, for the multi-modal Swin-GAN input, and 39.3 ± 5.6 and 0.99 (IQR 0.98-0.99) for the single-modal Swin-GAN input.
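The reported image-quality comparisons rely on PSNR, SSIM, and mean pixel value differences between a generated ASC PET image and the reference ASC PET obtained from PET/CT. The following is a minimal sketch of how such metrics can be computed with scikit-image and NumPy; the array names and the exact definition of the mean pixel value difference are illustrative assumptions, not taken from the thesis.

    # Minimal sketch: compare a generated ASC PET volume against the reference
    # ASC PET from a PET/CT scan of the same patient.
    # Array names (asc_pred, asc_ref) are illustrative placeholders.
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def compare_asc(asc_pred: np.ndarray, asc_ref: np.ndarray) -> dict:
        data_range = float(asc_ref.max() - asc_ref.min())
        psnr = peak_signal_noise_ratio(asc_ref, asc_pred, data_range=data_range)
        ssim = structural_similarity(asc_ref, asc_pred, data_range=data_range)
        # One plausible definition of mean pixel value difference, expressed
        # as a percentage of the reference mean (assumed, not from the thesis).
        mean_diff = 100.0 * abs(asc_pred.mean() - asc_ref.mean()) / asc_ref.mean()
        return {"PSNR": psnr, "SSIM": ssim, "mean_diff_percent": mean_diff}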
Lastly, we present a novel method to perform AC of the radio-frequency (RF) coil located in the FOV of PET/MR systems. This method locates markers placed on an RF coil using a setup in which several cameras are positioned just outside the PET/MR system FOV. We also present results from synthetic data demonstrating the degradation of the PET image due to inaccurate position estimation of a flexible RF coil. The proposed method reduces the PET signal error due to the presence of a flexible RF coil by roughly 14%.
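Camera-based marker localization implies estimating the pose of the coil from matched 3-D marker coordinates. As a simplified illustration only (the thesis's actual pipeline, and any handling of coil flexibility, are not detailed in this record), the sketch below estimates a rigid rotation and translation from template to measured marker positions using the standard Kabsch (SVD) algorithm.

    # Simplified illustration (not the thesis's specific method): estimate the
    # rigid transform mapping marker positions on a coil template to their
    # camera-measured positions. Inputs are matched N x 3 coordinate arrays.
    import numpy as np

    def rigid_transform(template_pts: np.ndarray, measured_pts: np.ndarray):
        """Return rotation R and translation t such that R @ p + t maps
        template marker positions onto the measured marker positions."""
        c_t = template_pts.mean(axis=0)
        c_m = measured_pts.mean(axis=0)
        H = (template_pts - c_t).T @ (measured_pts - c_m)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = c_m - R @ c_t
        return R, t

A flexible coil would in practice require a non-rigid or piecewise model; this rigid alignment is shown only to illustrate the basic marker-to-pose step.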

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place [Stanford, California]
Publisher [Stanford University]
Copyright date ©2023
Publication date 2023
Issuance monographic
Language English

Creators/Contributors

Author Anaya, Emily Alexandra
Degree supervisor Levin, Craig
Thesis advisor Levin, Craig
Thesis advisor Boyd, Stephen
Thesis advisor Nishimura, Dwight
Degree committee member Boyd, Stephen
Degree committee member Nishimura, Dwight
Associated with Stanford University, School of Engineering
Associated with Stanford University, Department of Electrical Engineering

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Emily Anaya.
Note Submitted to the Department of Electrical Engineering.
Thesis Thesis (Ph.D.)--Stanford University, 2023.
Location https://purl.stanford.edu/hg457zy1742

Access conditions

Copyright
© 2023 by Emily Alexandra Anaya
License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC).
