Shape and motion analysis in medical imaging
- Author(s): Liu, Wenyang
- Advisor(s): Ruan, Dan
Medical images are increasingly used to facilitate applications such as computer-aided diagnosis, quantitative functional analysis, treatment planning, and image-guided interventions. In these applications, multiple images are usually acquired from one or more subjects at different time points, or in real time, for comparison, progression monitoring, or guidance during interventions. Such acquisitions offer a unique opportunity to study patient anatomy across multiple realizations, permitting both the construction of patient-specific (or sub-population cohort) context and motion analysis. To fully benefit from these applications, both shape and motion analysis are needed. The former falls within the general research area of shape estimation and segmentation from an image-processing perspective; the latter addresses motion estimation and prediction, two problems of immediate clinical importance. These are challenging problems, especially when images are acquired with different modalities, subject to low SNR, and/or exhibit intensity and contrast variations as a consequence of contrast dynamics. Predicting anatomical motion is also a well-recognized challenge, owing to the complexity of motion patterns combined with the curse of dimensionality inherent in high-dimensional anatomy representations. This thesis aims to address these challenges. One key and fundamental component of our development is the extraction of robust shape features from images, motivated by the observation that boundaries and shapes are usually better preserved and more consistent even under changing or challenging imaging conditions. The main contribution of this thesis is the development of variational frameworks that extract shape features automatically and reliably and build proper low-dimensional embeddings, upon which motion estimation and prediction methods are constructed.
The first part of this thesis focuses on motion estimation/tracking based on extracting and registering shape features. We have developed and validated a robust motion estimation method that registers shape features extracted by a variational segmentation method. Extracted with length and temporal shape regularizations, the shape features are robust to intensity and contrast variations and to low SNR. The continuous representation of the shape features further eliminates the need to build and register explicit correspondences among features, together with the associated risk of large errors. To evaluate its clinical value, we have applied the proposed method to compensate for respiratory motion in MR urography, and quantitatively evaluated its performance in estimating functional renal parameters. To further complement the shape extraction step in the proposed method, while also serving as a fundamental methodological development, we have developed a unified segmentation framework incorporating a novel sparse composite shape prior, which is especially advantageous for shape extraction when images are subject to high noise and/or contain signal voids. We have evaluated this framework and demonstrated its clinical value on several challenging segmentation problems, including corpus callosum segmentation in 2D MR, liver segmentation in 3D CT, and left ventricle segmentation in cine MRI.
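To give a flavor of this style of variational segmentation (a generic Chan-Vese-like sketch, not the thesis's exact formulation), the following illustrates a two-phase level-set evolution with a length-type regularization; the synthetic image, the parameters, and the Laplacian surrogate for the curvature term are all assumptions made for the sketch:

```python
import numpy as np

def chan_vese_like(img, n_iter=300, mu=0.2, dt=0.5):
    """Minimal two-phase variational segmentation with a length-type
    regularization (a simplified Chan-Vese-style evolution; illustrative only)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # initialize the level set as a centered disk
    phi = min(h, w) / 3.0 - np.sqrt((xx - w / 2) ** 2 + (yy - h / 2) ** 2)
    for _ in range(n_iter):
        inside, outside = phi > 0, phi <= 0
        c1 = img[inside].mean() if inside.any() else 0.0    # mean intensity inside
        c2 = img[outside].mean() if outside.any() else 0.0  # mean intensity outside
        # Laplacian as a crude surrogate for the curvature/length term
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
               + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
        # gradient step: length penalty plus region-competition data terms
        phi += dt * (mu * lap - (img - c1) ** 2 + (img - c2) ** 2)
    return phi > 0

# synthetic noisy test image: bright square on a dark background
rng = np.random.default_rng(0)
img = 0.1 * rng.standard_normal((64, 64))
img[20:44, 20:44] += 1.0
seg = chan_vese_like(img)
```

The length term penalizes contour roughness, which is what makes such shape features stable under noise; the thesis additionally employs temporal shape regularization across frames.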
We then address the shape extraction problem when point clouds are acquired by photogrammetry systems in image-guided radiotherapy. We represent and reconstruct continuous shapes/surfaces from the acquired point clouds by minimizing a regularized variational functional, such that the resulting surfaces are robust to noise and missing measurements. To further speed up the reconstruction, we have developed a real-time surface reconstruction method that exploits the overcomplete nature of respiratory motion, representing each point cloud as a sparse combination of a training set and propagating this linear relation to the continuous surface space.
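The sparse-combination idea can be sketched with a generic L1-regularized (lasso) solver: express a new measurement as a sparse mix of training samples, then reuse the same coefficients to combine the corresponding (pre-reconstructed) surfaces. Everything here, the random dictionary, the ISTA solver, and the parameters, is a hypothetical stand-in for the actual training data and optimizer:

```python
import numpy as np

def ista_lasso(D, y, lam=0.05, n_iter=500):
    """Solve min_w 0.5*||D w - y||^2 + lam*||w||_1 via ISTA (illustrative)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ w - y)              # gradient of the smooth data term
        z = w - g / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return w

# hypothetical training set: each column is a vectorized point-cloud snapshot
rng = np.random.default_rng(0)
n_points, n_train = 300, 20
D = rng.normal(size=(n_points, n_train))

# a new noisy measurement that truly is a sparse mix of two training samples
w_true = np.zeros(n_train)
w_true[[2, 7]] = [0.8, -0.5]
y = D @ w_true + 0.01 * rng.normal(size=n_points)

w = ista_lasso(D, y)
surface = D @ w   # propagate the sparse coefficients to the surface space
```

Because the expensive variational reconstruction is done once per training sample offline, the online cost reduces to one sparse-coding solve plus a linear combination, which is what makes real-time operation plausible.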
Finally, we investigate how to efficiently model the temporal dynamics of high-dimensional motion by learning low-dimensional embeddings. We have developed a unified prediction framework for high-dimensional states that applies manifold learning to construct a low-dimensional feature sub-manifold, where efficient prediction can be performed. A pre-image estimation method is also explored to map the predicted value on the sub-manifold back to the original high-dimensional space.
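One plausible instantiation of this pipeline (not necessarily the thesis's exact choices) uses kernel PCA as the manifold-learning step, a simple linear autoregressive predictor on the embedding, and a nearest-neighbor weighted average as the pre-image estimate; the RBF kernel, the synthetic trajectory, and the predictor are all assumptions for the sketch:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# hypothetical training trajectory: periodic low-dim dynamics embedded in R^50
rng = np.random.default_rng(1)
T, dim = 120, 50
t = np.linspace(0, 6 * np.pi, T)
basis = rng.normal(size=(2, dim))
X = np.column_stack([np.sin(t), np.cos(t)]) @ basis

# kernel PCA: embed states into a low-dimensional feature sub-manifold
K = rbf_kernel(X, X, gamma=1.0 / dim)
N = K.shape[0]
H = np.eye(N) - np.ones((N, N)) / N
Kc = H @ K @ H                         # center the kernel matrix
vals, vecs = np.linalg.eigh(Kc)
idx = np.argsort(vals)[::-1][:2]       # keep the top-2 components
alphas = vecs[:, idx] / np.sqrt(vals[idx])
Z = Kc @ alphas                        # low-dimensional embedding of the states

# efficient prediction on the sub-manifold: fit z_{t+1} ~ z_t A by least squares
A, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
z_pred = Z[-1] @ A                     # predicted next embedding

# pre-image estimation: weighted average of the nearest training states
d = np.linalg.norm(Z - z_pred, axis=1)
nn = np.argsort(d)[:5]
wts = np.exp(-d[nn]); wts /= wts.sum()
x_pred = wts @ X[nn]                   # mapped back to the high-dimensional space
```

The point of the construction is that the predictor only ever operates on a two-dimensional state, sidestepping the curse of dimensionality; the pre-image step then returns the prediction to the original anatomy representation.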
Our methodologies for shape and motion analysis are general, and their clinical value extends beyond the DCE-MRI and image-guided radiotherapy applications demonstrated in this thesis. We also expect the development of robust shape descriptors to have substantial impact in the broader field of computer vision wherever motion estimation and prediction are needed, especially when images are captured under non-ideal conditions.