Self-Supervised Contrastive Learning for Multi-Organ Segmentation
Open Access Publications from the University of California


UC Irvine Electronic Theses and Dissertations


Creative Commons 'BY-NC' version 4.0 license

Medical imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) play an important role in clinical workflows because they allow radiologists to analyze a patient's anatomy in great detail while being minimally invasive. Organ segmentation, in particular, is often performed as a preliminary step for treatment planning, diagnosis, and prognosis. Manual organ segmentation is, however, an expensive and time-consuming process, so there is great demand for computer-assisted or automated organ segmentation methods. In recent years, thanks in part to the advent of large-scale labeled datasets such as ImageNet, deep convolutional neural networks (CNNs) have become the dominant approach to segmentation tasks in the natural imaging domain. Because comparably large labeled datasets are scarce in the medical domain, it is difficult to train a deep CNN from scratch to its full potential. Transfer learning from ImageNet is also suboptimal because medical images differ inherently from natural images.

In this thesis we aim to overcome two main challenges in deep learning-based medical image analysis: insufficient labeled data and domain shift. We propose a self-supervised contrastive learning framework for pre-training CNNs on unlabeled medical datasets in order to learn generic representations that can be fine-tuned for a wide range of multi-organ segmentation tasks. We introduce a novel contrastive loss for dense self-supervised pre-training on local regions. Finally, we conduct extensive experiments on three multi-organ datasets and demonstrate that our method consistently improves on current supervised and self-supervised pre-training approaches.
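To make the core idea concrete, the sketch below shows an InfoNCE-style contrastive loss of the kind commonly used for this sort of pre-training. This is a minimal illustrative implementation, not the thesis's dense local loss: it assumes each row of `z1` and `z2` holds the embeddings of two augmented views of the same region (positives), with all other rows in the batch serving as negatives; the function name and `temperature` default are hypothetical.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Illustrative InfoNCE contrastive loss (not the thesis's exact loss).

    z1, z2: (N, D) arrays; row i of z1 and row i of z2 are embeddings of
    two views of the same region (a positive pair); other rows in the
    batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # (N, N) similarity matrix, scaled by the temperature
    logits = (z1 @ z2.T) / temperature
    # subtract the row max for numerical stability before the softmax
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positive pairs lie on the diagonal; minimize their negative log-prob
    return -np.mean(np.diag(log_prob))
```

Pulling matched views together and pushing mismatched ones apart in this way is what lets the network learn region-level representations without any segmentation labels; the fine-tuning stage then adapts those representations to a specific organ-labeling task.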
