Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations

Using AI to Mitigate Variability in CT Scans: Improving Consistency in Medical Image Analysis


Computed tomography (CT) plays an integral role in diagnosing and screening a wide range of diseases. A growing number of machine learning (ML) models have been developed for prediction and classification using derived quantitative image features, thanks in part to the availability of large CT datasets and advances in medical image analysis. Researchers have classified disease severity using quantitative image features such as hand-crafted radiomic features and deep features. Despite reporting high classification performance, these models typically do not generalize well. Variations in the appearance of CT scans caused by differences in acquisition and reconstruction parameters adversely affect the reproducibility of quantitative image features and the performance of machine learning algorithms. As a result, few ML algorithms have been adopted in clinical settings. Mitigating the effects of varying CT acquisition and reconstruction parameters is a challenging inverse problem. Recent advances in deep learning have demonstrated that image translation and denoising models can achieve high per-pixel similarity metrics when compared to a target image.

The purpose of this dissertation is to develop and evaluate two conditional generative models that mitigate the effects of working with CT scans acquired and reconstructed with a variety of parameters. The overarching hypothesis is that improved image quality yields more consistent nodule detection. In essence, these models attempt to learn the underlying conditional distribution of normalized (high-quality) images given their un-normalized (low-quality) counterparts. First, I propose a novel CT image normalization method based on a 3D conditional generative adversarial network (GAN) that utilizes a spectral-normalization algorithm. My model provides an end-to-end solution for normalizing scans acquired using different doses, slice thicknesses, and reconstruction kernels.
This study demonstrates that the GAN mitigates variability in image quality, quantitative image features, and lung nodule detection by an automated computer-aided detection (CAD) algorithm. The GAN improved perceptual similarity by 22% and yielded a 16% increase in features with a good level of agreement based on concordance correlation coefficient analysis. As a result, the existing nodule detection model performed up to 75% more consistently with respect to the reference scan. Second, I explore a conditional normalizing-flow-based model that incorporates uncertainty information during image translation. The model learns the explicit conditional density and can generate several plausible image outputs, providing a means to reduce the distortions introduced by existing methods. I show that the normalizing flow method achieves a 6% improvement in perceptual quality over the state-of-the-art GAN-based method and improves the agreement level of the detection task by 13%. This dissertation compares these two generative approaches, identifying their strengths and limitations in normalizing heterogeneous CT images and mitigating the effect of different acquisition and reconstruction parameters on downstream clinical tasks.
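The key property that distinguishes a normalizing flow from a GAN is exact, explicit density evaluation via the change-of-variables formula, which is what enables both likelihood training and sampling multiple plausible outputs. The sketch below shows this for the simplest possible one-dimensional affine flow; the function names and parameter values are illustrative assumptions, not the dissertation's (conditional, image-scale) model, where the shift and scale would instead be predicted by a network from the low-quality input.

```python
import numpy as np

def affine_flow_logpdf(y, shift, log_scale):
    """Exact log-density of y = exp(log_scale) * z + shift with z ~ N(0, 1).

    Change of variables: log p(y) = log p_z(z) - log |dy/dz|,
    where z is recovered by inverting the flow and
    log |dy/dz| = log_scale for this affine transform.
    """
    z = (y - shift) * np.exp(-log_scale)
    log_pz = -0.5 * (z ** 2 + np.log(2.0 * np.pi))  # standard normal log-density
    return log_pz - log_scale

def affine_flow_sample(n, shift, log_scale, seed=0):
    """Draw plausible outputs by pushing base-noise samples through the flow."""
    z = np.random.RandomState(seed).randn(n)
    return np.exp(log_scale) * z + shift

# Evaluate the exact density at the mode and draw a few samples.
mode_logp = affine_flow_logpdf(2.0, shift=2.0, log_scale=0.5)
samples = affine_flow_sample(5, shift=2.0, log_scale=0.5)
```

In a conditional flow the same two operations coexist: the inverse direction gives an exact likelihood for training, while the forward direction generates several candidate normalized images for one input, which is the source of the uncertainty information described above.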
