Label-efficient Representation Learning for Medical Image Analysis

Abstract

This thesis addresses a central challenge of data-hungry deep learning methods for medical image analysis: the scarcity of annotated training data in the medical domain. It investigates solutions drawn from few-shot learning, multiple-instance learning, and self-supervised learning, focusing on histopathology images for coherence.

The first part uses contrastive learning (CL) and latent augmentation (LA) to improve the efficiency and generalizability of few-shot learning on histology images. The study examines the conditions under which self-supervised learning (SSL) models outperform supervised ones and explores the potential of self-supervised representations. It finds that SSL models pre-trained on pathology images outperform supervised models in few-shot classification: SSL models learn class-agnostic information, whereas supervised models concentrate on discriminative features and are therefore sensitive to shifts in data distribution. It further shows that LA, which introduces semantic variations in an unsupervised way, can significantly improve few-shot classification performance.

The second part presents ReMix, a novel framework for multiple-instance learning (MIL) based whole-slide image (WSI) classification. ReMix tackles training efficiency and data diversity by substituting instances with instance prototypes (patch cluster centroids) and applying online, stochastic, and flexible latent-space augmentations that enforce invariance to semantic perturbations. This technique boosts both the performance and the efficiency of spatial-agnostic and spatial-aware MIL methods.

Finally, the thesis studies self-supervised learning for dense prediction tasks in pathology images. It introduces Concept Contrastive Learning (ConCL), a new SSL framework shown to outperform previous state-of-the-art SSL methods. ConCL aims to improve detection and segmentation in computational pathology, tasks that typically depend heavily on annotated data and are therefore difficult to perform efficiently and accurately. The thesis provides a roadmap for pre-training a strong encoder for downstream dense prediction tasks and proposes a simple, dependency-free concept-generating method that requires neither external segmentation algorithms nor saliency detection models.

In summary, this thesis broadens the understanding of deep learning applications in healthcare, demonstrating the power of data augmentation and representation learning for medical image analysis across diverse settings. It encourages further work on these challenges to improve the speed and accuracy of diagnoses, support treatment decisions, and reduce medical errors.
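To make the latent augmentation (LA) idea from the first part concrete, the sketch below shows one common way to augment features directly in latent space: cluster the features of a large unlabeled base set, then perturb each few-shot support feature with noise drawn from the covariance of its nearest cluster. The function names, the use of k-means, and the Gaussian sampling are illustrative assumptions, not the thesis's exact recipe.

    # Sketch of latent augmentation (LA) for few-shot classification.
    # Assumption: features come from a frozen self-supervised encoder.
    import numpy as np
    from sklearn.cluster import KMeans

    def fit_base_statistics(base_features, n_clusters=16, seed=0):
        """Cluster base-set features; keep each cluster's centroid and covariance."""
        km = KMeans(n_clusters=n_clusters, random_state=seed).fit(base_features)
        stats = []
        for c in range(n_clusters):
            members = base_features[km.labels_ == c]
            stats.append((km.cluster_centers_[c], np.cov(members, rowvar=False)))
        return stats

    def latent_augment(feature, stats, n_aug=10, scale=0.5, seed=0):
        """Create 'virtual' support features by adding noise sampled from the
        covariance of the nearest base cluster (an unsupervised semantic perturbation)."""
        rng = np.random.default_rng(seed)
        centers = np.stack([mean for mean, _ in stats])
        nearest = int(np.argmin(np.linalg.norm(centers - feature, axis=1)))
        _, cov = stats[nearest]
        noise = rng.multivariate_normal(np.zeros(feature.shape[0]), scale * cov, size=n_aug)
        return np.vstack([feature, feature + noise])  # original plus augmented copies

The augmented features can then train a simple few-shot head, for example a nearest-centroid or logistic-regression classifier, on the handful of labeled support examples.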
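The ReMix recipe from the second part, reducing a whole-slide bag to a few prototypes and then mixing them online in latent space, can be sketched as follows. The two augmentation modes shown here (intra-cluster Gaussian noise and cross-bag prototype interpolation) and all names are illustrative assumptions about the general idea rather than the thesis's exact implementation.

    # Sketch of ReMix-style bag reduction and online latent augmentation.
    import numpy as np
    from sklearn.cluster import KMeans

    def reduce_bag(patch_features, n_prototypes=8, seed=0):
        """Replace a bag of patch features (N x D) with K centroids (K x D),
        keeping per-cluster covariances for later augmentation."""
        km = KMeans(n_clusters=n_prototypes, random_state=seed).fit(patch_features)
        covs = np.stack([np.cov(patch_features[km.labels_ == c], rowvar=False)
                         for c in range(n_prototypes)])
        return km.cluster_centers_, covs

    def remix_augment(protos, covs, other_protos=None, noise_scale=0.1,
                      mix_prob=0.3, rng=None):
        """Stochastic, online latent augmentation of a reduced bag: add
        covariance-shaped noise to each prototype and, with some probability,
        interpolate it toward a prototype from another same-class bag."""
        rng = rng or np.random.default_rng()
        out = protos.copy()
        for i in range(len(out)):
            out[i] += noise_scale * rng.multivariate_normal(np.zeros(out.shape[1]), covs[i])
            if other_protos is not None and rng.random() < mix_prob:
                j = rng.integers(len(other_protos))
                lam = rng.uniform(0.0, 0.5)
                out[i] = (1 - lam) * out[i] + lam * other_protos[j]
        return out  # fed to any MIL aggregator in place of the full bag

Because each bag shrinks from thousands of patches to a handful of prototypes, both spatial-agnostic and spatial-aware MIL aggregators train far more efficiently, while the stochastic perturbations restore some of the diversity lost in the reduction.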
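Finally, the concept-contrastive idea behind ConCL can be illustrated with a heavily simplified sketch: group the spatial features of one view into "concepts" by clustering (a dependency-free choice that needs no external segmenter or saliency model), pool both views' features per concept, and apply an InfoNCE loss that pulls matching concepts together. The shared-grid assumption (photometric augmentation only) and every name below are simplifications for illustration, not the thesis's exact formulation.

    # Sketch of concept-level contrastive learning for dense prediction pre-training.
    import torch
    import torch.nn.functional as F

    def concept_assignments(feat_key, n_concepts=8, iters=10):
        """Cluster a (C, H, W) feature map into (H*W,) concept labels with a tiny k-means."""
        C, H, W = feat_key.shape
        x = feat_key.reshape(C, H * W).t()                    # (HW, C)
        centers = x[torch.randperm(H * W)[:n_concepts]].clone()
        for _ in range(iters):
            assign = torch.cdist(x, centers).argmin(dim=1)
            for k in range(n_concepts):
                if (assign == k).any():
                    centers[k] = x[assign == k].mean(dim=0)
        return assign

    def concept_contrastive_loss(feat_q, feat_k, n_concepts=8, tau=0.2):
        """Pool each view's features per concept; contrast matching concepts (InfoNCE)."""
        assign = concept_assignments(feat_k.detach(), n_concepts)
        C, H, W = feat_q.shape
        q = feat_q.reshape(C, H * W).t()
        k = feat_k.reshape(C, H * W).t()
        q_pool, k_pool = [], []
        for c in range(n_concepts):
            if (assign == c).any():
                q_pool.append(q[assign == c].mean(dim=0))
                k_pool.append(k[assign == c].mean(dim=0))
        q_pool = F.normalize(torch.stack(q_pool), dim=1)
        k_pool = F.normalize(torch.stack(k_pool), dim=1)
        logits = q_pool @ k_pool.t() / tau
        return F.cross_entropy(logits, torch.arange(len(q_pool), device=logits.device))

Pooling at the concept level, rather than only at the image level, gives the pre-trained encoder a training signal that is closer to what downstream detection and segmentation heads need.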
