Deep Neural Networks for Cardiovascular Magnetic Resonance Imaging


UCLA Electronic Theses and Dissertations



Magnetic Resonance Imaging (MRI) is a powerful diagnostic imaging modality known to provide high soft-tissue contrast and spatial resolution. Much of the versatility of MRI stems from the fact that the signal from different tissue types can be weighted differently by manipulating the sequence in which radiofrequency (RF) and gradient events are played out during data acquisition. However, data acquisition for most MRI measurements is sequential, limiting its speed and increasing its susceptibility to motion artifacts. This is particularly the case for cardiovascular applications, where cardiac and respiratory motion complicate all aspects of the data acquisition and signal processing pathways. Moreover, following data acquisition and image reconstruction, clinically relevant post-processing may require substantial time and effort, increasing the burden on clinical centers and medical staff. Thus, general algorithms should be customized to accelerate image acquisition, reconstruction, and post-processing, with the goal of expanding the speed, scope, and reliability of cardiovascular MRI applications. This dissertation describes several deep learning-based methods for tailored image reconstruction, respiratory motion correction, blood vessel segmentation, and fast T1/T2 map calculation. The first application is the acceleration of dynamic cardiac MRI. Modern approaches to speeding up MR image acquisition use significantly undersampled k-space data (with a proportional reduction in acquisition time), such that the Nyquist limit of traditional signal sampling is violated and the missing k-space data must be estimated by other means.
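The undersampling idea can be illustrated with a short sketch: retrospectively discarding phase-encode lines from a fully sampled 2D image's k-space and zero-filling produces the aliased image that any reconstruction method must then clean up. This is a didactic NumPy sketch, not the dissertation's acquisition model; the function name and the uniform line-skipping pattern are illustrative assumptions.

```python
import numpy as np

def undersample_kspace(image, acceleration=4):
    """Retrospectively undersample a 2D image's k-space by keeping
    every `acceleration`-th phase-encode line, then zero-fill and
    reconstruct by inverse FFT. Returns the aliased image and the
    sampling mask over phase-encode lines."""
    kspace = np.fft.fft2(image)
    mask = np.zeros(image.shape[0], dtype=bool)
    mask[::acceleration] = True          # sampled phase-encode lines
    undersampled = kspace * mask[:, None]  # zero-fill unsampled lines
    return np.fft.ifft2(undersampled), mask
```

With `acceleration=1` (fully sampled) the round trip returns the original image; higher factors introduce the characteristic fold-over aliasing.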
The missing data are typically recovered either by incorporating independently acquired surface coil spatial sensitivity maps (parallel imaging) or through iterative reconstruction with optimized approximations that enforce both sparsity in a transform domain and consistency with the explicitly acquired data (compressed sensing). Although both parallel imaging and compressed sensing (CS) have proved powerful, they hit hard limits as the degree of undersampling increases. Moreover, even with fast modern processors and dedicated reconstruction hardware, image reconstruction times can become prohibitive. Deep learning methods have the potential to address several of these limitations and to expand the scope of clinical applications. Our first task was to develop a deep Convolutional Neural Network (CNN) to reconstruct 2D dynamic cine images from highly undersampled (e.g., 8X-10X) k-space data. Our platform exploits redundant information in the temporal dimension and imposes data consistency in the k-space domain; the CNN is used only to learn an effective spatio-temporal regularizer from historical data. The learnable parameters (weights and biases) of the network were optimized during an offline training process and tested on unseen data. Inference time was ~40 ms per frame, whereas more than 1 s is usually required for a conventional combined parallel imaging and compressed sensing reconstruction. Our next task was to correct the respiratory motion artifacts superimposed on images acquired during free-breathing 2D cardiac cine scans. Although segmented (multi-shot) cardiac cine is the gold standard in cardiac imaging, it requires breath-holding throughout data acquisition, which may not be feasible in all patients.
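The k-space data-consistency step described above can be sketched as follows: wherever a k-space point was actually sampled, the measured value overrides the network's estimate. This is a minimal single-coil NumPy illustration; the function name and interface are assumptions, not the dissertation's implementation.

```python
import numpy as np

def kspace_data_consistency(cnn_image, acquired_kspace, sampling_mask):
    """Enforce fidelity to the measured data in k-space.

    cnn_image       : 2D complex image predicted by the network
    acquired_kspace : 2D complex k-space, valid only at sampled points
    sampling_mask   : boolean 2D array, True where k-space was sampled
    """
    predicted_kspace = np.fft.fft2(cnn_image)
    # Keep the CNN's estimates only at unsampled locations; use the
    # actually acquired values at sampled locations.
    merged = np.where(sampling_mask, acquired_kspace, predicted_kspace)
    return np.fft.ifft2(merged)
```

In an unrolled network this operation is typically interleaved with the learned regularization at every iteration.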
For this reason, this dissertation studied the performance of deep neural networks in removing respiratory artifacts from affected 2D cardiac cine images. To achieve that, we trained an adversarial autoencoder network using unpaired training data (healthy volunteers and patients who underwent clinically indicated cardiac MRI examinations). We used a U-Net structure for the encoder and decoder parts of the autoencoder, with an adversarial objective to regularize the autoencoder's code space. To ensure that the network reduces respiratory motion artifacts without losing accuracy or introducing new spurious features, we first examined its performance on artificially corrupted data with simulated rigid motion. Then, we demonstrated the feasibility of the proposed approach in vivo by training on actual respiratory motion-corrupted images in an unpaired manner and testing on volunteer and patient data. We showed that it is feasible to correct respiratory motion-related image artifacts without access to paired, motion-artifact-free targets. Quantitatively, in this feasibility study, the mean structural similarity index (SSIM) values for the simulated motion-corrupted and motion-corrected images were 0.76 and 0.93 (out of 1), respectively. Concerning image sharpness, the proposed method improved the Tenengrad focus measure of the motion-corrupted images by 12% in the simulation study and 7% in the in vivo study. Subjective image quality assessments showed that the average overall image quality scores for the motion-corrupted, motion-corrected, and breath-hold images were 2.5, 3.5, and 4.1 (out of 5.0), respectively.
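The Tenengrad focus measure cited above is the mean (or sum) of the squared Sobel gradient magnitude over the image, so sharper edges yield higher scores. A minimal NumPy sketch with illustrative function names:

```python
import numpy as np

def tenengrad(image):
    """Tenengrad sharpness: mean squared Sobel gradient magnitude."""
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T

    def xcorr2(img, k):
        # 'valid' 2D cross-correlation via sliding windows
        windows = np.lib.stride_tricks.sliding_window_view(img, k.shape)
        return np.einsum('ijkl,kl->ij', windows, k)

    gx = xcorr2(image, sobel_x)
    gy = xcorr2(image, sobel_y)
    return np.mean(gx**2 + gy**2)
```

A flat image scores zero, and a sharp step edge scores higher than a smooth ramp of the same overall intensity range, which is why the measure is a reasonable proxy for motion-induced blurring.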
Statistically, there was a significant difference between the image quality scores of the motion-corrupted and breath-held images (P < 0.05); however, after respiratory motion correction, there was no significant difference between the image quality scores of the motion-corrected and breath-held images. Our next application is the joint compensation of respiratory motion artifacts and reconstruction of high-quality 3D images from undersampled acquisitions in 3D dynamic cardiac cine MRI. Imaging acceleration and respiratory motion compensation remain two significant challenges in MRI, particularly for cardiothoracic, abdominal, and pelvic applications. This dissertation implemented a novel 3D generative adversarial network (GAN)-based technique to jointly reconstruct the image and compensate for the respiratory motion artifacts of 4D (time-resolved 3D) cardiac MRI. We trained the 3D GAN with a combination of a pixel-wise content loss, an adversarial loss, and a novel data-driven temporal-aware loss function. Aside from image reconstruction, the proposed method also compensates for the respiratory motion of free-breathing scans. We adopted a novel progressive growing-based strategy to achieve a stable and sample-efficient training process for the 3D GAN. We thoroughly evaluated the performance of the proposed method qualitatively and quantitatively on a relatively large patient population (3D cardiac cine data from 42 patients). Our radiological assessments showed that the proposed method achieved significantly better scores for general image quality and image artifacts at 10.7X-15.8X acceleration than the SG CS-WV approach at 3.5X-7.9X acceleration (4.53 ± 0.540 vs. 3.13 ± 0.681 for general image quality, 4.12 ± 0.429 vs. 2.97 ± 0.434 for image artifacts, p < 0.05 for both).
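The combined generator objective described above (pixel-wise content loss, adversarial loss, and a temporal-aware loss) can be sketched schematically. The exact formulation and weights are not given in this summary, so the terms and defaults below are assumptions for illustration only:

```python
import numpy as np

def generator_loss(pred, target, disc_score, w_pix=1.0, w_adv=0.01, w_tmp=0.1):
    """Illustrative combined generator objective (names and weights
    are assumptions, not the dissertation's exact formulation).

    pred, target : arrays shaped (frames, z, y, x) for one 4D cine series
    disc_score   : discriminator outputs in (0, 1) for the predictions
    """
    # Pixel-wise content loss: mean absolute error against the target.
    l_pix = np.mean(np.abs(pred - target))
    # Adversarial loss: push discriminator scores toward "real" (1).
    l_adv = -np.mean(np.log(disc_score + 1e-8))
    # Temporal-aware loss: frame-to-frame differences of the prediction
    # should match those of the target, preserving cardiac dynamics.
    l_tmp = np.mean(np.abs(np.diff(pred, axis=0) - np.diff(target, axis=0)))
    return w_pix * l_pix + w_adv * l_adv + w_tmp * l_tmp
```

The temporal term is what distinguishes such an objective from a purely frame-wise GAN loss: it penalizes reconstructions whose cardiac motion pattern deviates from the data even when individual frames look plausible.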
Radiological evaluations confirmed that the reconstructed images were free of spurious anatomical structures and that the functional analysis was in good agreement with the conventional SG CS-WV approach. We showed promising results for high-resolution (1 mm³) free-breathing 4D cardiac MR data acquisition with simultaneous respiratory motion compensation and fast reconstruction times, which might pave the way for future 4D MR research. The fourth application is the fast and accurate calculation of myocardial T1 and T2 values. The Modified Look-Locker inversion recovery (MOLLI) pulse sequence is a widely used MR pulse sequence that allows the measurement and mapping of myocardial T1 values. Modeling the signal evolution of such sequences is required to compute accurate relaxometry parameters. The Bloch equation simulation with slice profile correction (BLESSPC) algorithm can account for non-rectangular 2D RF excitation slice profile effects, B1+ errors, and imperfect inversion and T2 preparation pulses. Nonetheless, BLESSPC is computationally expensive, which limits its applicability in practice. We sought to implement a deep neural network for fast and accurate computation of myocardial T1/T2 relaxometry values by training the network on simulated data computed with the BLESSPC algorithm. We trained two separate neural networks based on simulated radial T1-T2 values. The trained T1-T2 models were evaluated for stability across different noise levels and compared against the BLESSPC algorithm. Testing and comparison were performed at different levels, including simulation, phantom, and in vivo data acquired with the MOLLI sequence at 1.5 T and a radial T1-T2 sequence at 3 T. In the phantom studies, the trained models achieved accuracy and precision similar to the BLESSPC algorithm for T1-T2 estimation with both MOLLI and radial T1-T2 (P > 0.05).
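As background to such signal modeling, the classical three-parameter Look-Locker model for MOLLI-style inversion recovery is S(TI) = A - B*exp(-TI/T1*), with the standard correction T1 = T1*(B/A - 1). The sketch below fits this model with a coarse grid search over T1*; it is a simplified illustration, not BLESSPC, which additionally models slice profiles and pulse imperfections:

```python
import numpy as np

def molli_signal(ti, a, b, t1_star):
    """Three-parameter Look-Locker model: S(TI) = A - B * exp(-TI / T1*)."""
    return a - b * np.exp(-ti / t1_star)

def look_locker_t1(a, b, t1_star):
    """Standard Look-Locker correction: T1 = T1* * (B/A - 1)."""
    return t1_star * (b / a - 1.0)

def fit_t1_grid(ti, signal, t1_star_grid):
    """Coarse grid fit: for each candidate T1*, solve A and B by least
    squares (the model is linear in A and B), keep the best residual."""
    best = None
    for t1s in t1_star_grid:
        basis = np.stack([np.ones_like(ti), -np.exp(-ti / t1s)], axis=1)
        coef = np.linalg.lstsq(basis, signal, rcond=None)[0]
        err = np.sum((basis @ coef - signal) ** 2)
        if best is None or err < best[0]:
            best = (err, coef[0], coef[1], t1s)
    _, a, b, t1s = best
    return look_locker_t1(a, b, t1s)
```

A dictionary- or network-based approach replaces this per-pixel search with a single fast inference pass, which is the speedup the dissertation targets.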
For in vivo data, the trained models and BLESSPC produced similar myocardial T1/T2 values for radial T1-T2 at 3 T (T1: 1366 ± 31 ms for both methods, P > 0.05; T2: 37.4 ± 0.9 ms for both methods, P > 0.05), and similar myocardial T1 values for the MOLLI sequence at 1.5 T (1044 ± 20 ms for both methods, P > 0.05). Our proposed method can compute a T1/T2 map in less than 1 second (CPU-based) with accuracy and precision similar to BLESSPC, the computationally expensive but comprehensive reference algorithm. The model developed in this dissertation offers a fast and promising approach for accurate computation of myocardial T1/T2 values, replacing BLESSPC for both MOLLI and radial T1-T2 sequences. The fifth application is automatic peripheral artery and vein segmentation in the lower extremities based on ferumoxytol-enhanced magnetic resonance angiography (FE-MRA). The post-processing of FE-MRA images mainly consists of segmenting the peripheral vasculature and classifying the vessels into arteries and veins, often performed by an experienced radiologist via visual inspection and manual delineation. Due to the large size of the high-resolution volumetric peripheral MRA (e.g., 560 × 940 × 240), manual annotation is a time-consuming and tedious process. Because manual labeling is subjective and depends on the physician's experience and knowledge, it can also introduce high inter-observer variability. To achieve accurate and reproducible segmentation of peripheral arteries and veins, we developed an automatic platform in this dissertation. The proposed platform first segments the vascular network from FE-MRA volumetric images and then classifies the vessels into arteries and veins. For the segmentation, we used a local attention-gated 3D U-Net trained with a deep supervision mechanism based on a linear combination of the focal Tversky loss and the region mutual loss.
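The focal Tversky loss mentioned above generalizes the Dice loss by weighting false negatives and false positives asymmetrically and by focusing training on hard examples. A common formulation is sketched below; the hyperparameter values are conventional defaults, not necessarily those used in the dissertation:

```python
import numpy as np

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-8):
    """Focal Tversky loss for a binary segmentation map.

    pred   : predicted foreground probabilities in [0, 1]
    target : binary ground-truth mask
    alpha/beta weight false negatives / false positives; gamma < 1
    amplifies the gradient on poorly segmented examples. These defaults
    are assumptions taken from common usage.
    """
    tp = np.sum(pred * target)            # soft true positives
    fn = np.sum((1.0 - pred) * target)    # soft false negatives
    fp = np.sum(pred * (1.0 - target))    # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma
```

Setting alpha > beta penalizes missed vessel voxels more than spurious ones, which suits thin-structure segmentation where recall is at a premium.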
For the classification, we performed a region-growing algorithm starting from initial arterial seeds obtained from time-resolved images to separate the arteries from the veins. Quantitatively, our platform achieved a competitive F1 = 0.8087 and recall = 0.8410 for blood vessel segmentation, compared with F1 = (0.7604, 0.7573, 0.7651) and recall = (0.7791, 0.7570, 0.7774) obtained with Volumetric-Net, DeepVesselNet-FCN, and Uception, respectively. For the artery and vein classification stage, the proposed method achieved F1 = (0.8274 / 0.7863) for arteries and veins, respectively, in the calf region – the most challenging region in peripheral artery and vein segmentation. The platform described in this dissertation is fully automatic, requires no human interaction, and can extract and label the peripheral vessels from FE-MRA volumes in less than 4 minutes. This improves upon manual segmentation by radiologists, which routinely takes several hours – an endeavor that is often time- and cost-prohibitive.
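The seeded region-growing classification described above can be sketched as a breadth-first flood fill over the segmented vessel mask: every vessel voxel connected to an arterial seed is labeled artery, and the remaining vessel voxels default to vein. A simplified 6-connectivity NumPy sketch with illustrative names:

```python
from collections import deque
import numpy as np

def region_grow(vessel_mask, seeds):
    """Label every vessel voxel 6-connected to an arterial seed.

    vessel_mask : 3D boolean array of segmented vessels
    seeds       : list of (z, y, x) arterial seed coordinates
    Returns a boolean array marking the artery component.
    """
    artery = np.zeros_like(vessel_mask, dtype=bool)
    queue = deque(s for s in seeds if vessel_mask[s])
    for s in queue:
        artery[s] = True
    while queue:
        z, y, x = queue.popleft()
        # Visit the six face-adjacent neighbors inside the volume.
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < d for c, d in zip(n, vessel_mask.shape)) \
                    and vessel_mask[n] and not artery[n]:
                artery[n] = True
                queue.append(n)
    return artery
```

In practice the seeds would come from the early arterial phase of the time-resolved acquisition, where veins have not yet enhanced.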
