Leveraging large scale data sets: a transfer learning approach for 7T super resolution

Abstract

Brain morphometry on data from multi-scanner and multi-site studies can suffer from non-biological variance due to scanner and acquisition differences. Harmonization methods, such as ComBat, have been introduced to remove unwanted variance in structural neuroimaging data. However, statistical methods for harmonizing structural data operate on derived morphological measurements to remove site-related effects, rather than operating at the voxel level to remove scanner-related effects. This study works toward a deep learning-based image harmonization method by training and evaluating a generative adversarial network for transforming 3T images to a standard 7T-like image quality. 7T MRI achieves better tissue contrast and tissue segmentation results but lacks the widespread availability of 3T MRI, resulting in limited dataset sizes for deep learning. Transfer learning from a 3T synthesis task to a 7T synthesis task was hypothesized to improve synthesis results by greatly increasing dataset size and diversity with multi-site longitudinal data. The 7T synthesis dataset comprised 9 subjects, each with a 3T MPRAGE and a 7T MP2RAGE T1-weighted scan. Leave-one-out cross-validation was used, and performance evaluation metrics were reported as the mean across all validation folds. The transfer learning dataset consisted of 419 subjects and 1124 T1-weighted images spanning a wide variety of sites, scanners, and acquisition sequences. An independent testing set of 17 subjects with paired 1.5T and 3T scans from the transfer learning dataset was used for evaluating the 3T synthesis task. Image similarity metrics, namely the Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR), were used to evaluate synthesis performance. The Dice Similarity Coefficient (DSC) and Jaccard Similarity Coefficient (JSC) were used to evaluate the synthesized and 3T segmentation results, with the 7T segmentation as ground truth. The 7T synthesis network with transfer learning weights achieved an SSIM of 0.950 ± 0.02 and a PSNR of 25.44 ± 0.61, improved over the 3T image, which had an SSIM of 0.909 ± 0.01 and a PSNR of 21.83 ± 0.92. For the synthesized validation images, the DSC was 0.810 ± 0.02 for grey matter regions of interest and 0.916 ± 0.004 for white matter regions of interest, an improvement of 0.053 DSC (p = 0.011) and 0.017 DSC (p = 0.0039) over the 3T results, respectively. The JSC was 0.693 ± 0.03 for grey matter regions of interest and 0.842 ± 0.01 for white matter regions of interest, an improvement of 0.066 JSC (p = 0.011) and 0.026 JSC (p = 0.0039), respectively. Future work will evaluate the ability of the 7T synthesis models to remove non-biological variance, particularly in longitudinal studies where the imaging protocol or scanner was updated.
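
For reference, a minimal sketch of how the reported evaluation metrics can be computed is shown below. This is an illustrative example rather than the thesis code: it assumes the synthesized, 7T reference, and segmentation volumes are already co-registered and loaded as NumPy arrays, and it uses scikit-image's structural_similarity and peak_signal_noise_ratio for the image similarity metrics.

```python
# Illustrative sketch of the evaluation metrics described in the abstract
# (SSIM, PSNR, DSC, JSC); not the thesis implementation.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio


def image_similarity(synth, ref_7t):
    """SSIM and PSNR between a synthesized volume and the 7T reference.

    Assumes both volumes are co-registered 3D NumPy arrays with the same shape.
    """
    data_range = float(ref_7t.max() - ref_7t.min())
    ssim = structural_similarity(ref_7t, synth, data_range=data_range)
    psnr = peak_signal_noise_ratio(ref_7t, synth, data_range=data_range)
    return ssim, psnr


def overlap_metrics(seg_pred, seg_true):
    """Dice (DSC) and Jaccard (JSC) for a binary region-of-interest mask,
    using the 7T-derived segmentation as ground truth."""
    pred = seg_pred.astype(bool)
    true = seg_true.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    dsc = 2.0 * intersection / (pred.sum() + true.sum())
    jsc = intersection / union
    return dsc, jsc
```

In the study design described above, these metrics would be computed per validation subject and then averaged across the leave-one-out cross-validation folds.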
