Feature Extraction in Volumetric Bioimages
The automatic analysis of biological image datasets is an important achievement of applied image processing research. Feature extraction in volumetric bioimages obtained from the many biomedical imaging techniques is becoming critical for biologists and medical professionals seeking answers to a wide range of problems. In this work, we show how to extract effective features from two kinds of volumetric bioimages: plant shoot apical meristem (SAM) cell images acquired by Confocal Laser Scanning Microscopy (CLSM), and nematode images acquired by Differential Interference Contrast Microscopy.
For actively developing tissues, a computational platform that can automatically segment and track cells in volumetric image stacks is critical to obtaining high-throughput, quantitative spatiotemporal measurements of a range of cell behaviors, which in turn lead to a better understanding of the underlying dynamics of morphogenesis. The cells in the SAM are tightly clustered in space and have very similar shapes and intensity distributions, so choosing reliable features to compute cell correspondences in space and time is challenging. We propose a local graph matching based method that tracks the cells both spatially and temporally and identifies cell divisions. The geometric structure and topology of the cells' relative positions are efficiently exploited as the basic feature for matching cells. Furthermore, we build a joint segmentation and tracking system, in which the tracking output acts as an indicator of segmentation quality and, in turn, the segmentation can be refined to obtain better tracking results. Finally, the cell correspondences across multiple slices and time windows are fused to obtain the final cell lineages. Experiments on multiple plant datasets show that the proposed image analysis pipeline effectively segments and tracks SAM cells.
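To make the idea of matching cells by the geometry of their local neighborhoods concrete, the sketch below shows one simple (hypothetical) variant, not the exact method of this work: each cell centroid is described by the sorted distances to its k nearest neighbors, and correspondences between two time points are solved globally with the Hungarian algorithm. The descriptor choice and the value of k are illustrative assumptions.

```python
# Hypothetical sketch of local-graph-based cell matching between two frames;
# the descriptor (sorted k-NN distances) is a simplified stand-in for the
# relative-position topology described in the text.
import numpy as np
from scipy.spatial import cKDTree
from scipy.optimize import linear_sum_assignment

def local_descriptor(points, k=4):
    """Describe each cell centroid by the sorted distances to its k nearest
    neighbors -- a simple, rotation-invariant local-graph feature."""
    tree = cKDTree(points)
    # query k+1 neighbors because each point's nearest neighbor is itself
    dists, _ = tree.query(points, k=k + 1)
    return dists[:, 1:]  # drop the zero self-distance

def match_cells(points_t0, points_t1, k=4):
    """Match cells across two time points by comparing local descriptors,
    with the assignment solved globally by the Hungarian algorithm."""
    d0 = local_descriptor(points_t0, k)
    d1 = local_descriptor(points_t1, k)
    # cost = L2 distance between neighborhood-distance profiles
    cost = np.linalg.norm(d0[:, None, :] - d1[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```

A real pipeline would also compare neighborhood angles and intensities, handle cells that divide or leave the field of view, and fuse matches across slices and time windows as described above; the sketch only conveys why local geometric context disambiguates cells that look alike individually.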
Another contribution of this work on volumetric bioimage analysis lies in multilinear feature extraction and classification for nematode Digital Multi-focal Images (DMI). In such images, morphological information for a transparent specimen is captured as a stack of high-quality images, each representing an individual focal plane through the specimen's body. We present a method that effectively exploits all of the information in the stack using 3D X-Ray Transform projections at different viewing angles. These DMI stacks reflect the interaction of several factors: shape, texture, viewpoint, different instances within the same class, and different classes of specimens. To model these factors, we embed the 3D X-Ray Transform within a multilinear framework and propose a Multilinear X-Ray Transform feature representation. Experimental results on nematode DMI data show that the proposed feature extraction and analysis method achieves reliable recognition rates on a real-life database.
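The two building blocks of this representation can be sketched in a few lines: parallel-beam (sum) projections of a volume at several viewing angles, stacked into a tensor, followed by the mode unfoldings and per-mode SVDs that underlie multilinear (HOSVD/Tucker-style) analysis. This is a minimal illustration under simplifying assumptions (rotation about a single axis, sum projection as the X-Ray Transform), not the full method of this work.

```python
# Minimal sketch: X-ray (sum) projections at several angles, arranged as a
# tensor, plus the mode unfoldings used in multilinear analysis.
import numpy as np
from scipy.ndimage import rotate

def xray_projections(volume, angles_deg):
    """Parallel-beam X-ray projections of a 3D volume, rotating about the
    first axis. Returns a 3-mode tensor of shape (angles, rows, cols)."""
    projs = []
    for a in angles_deg:
        rotated = rotate(volume, a, axes=(1, 2), reshape=False, order=1)
        projs.append(rotated.sum(axis=2))  # integrate along the beam
    return np.stack(projs)

def mode_unfold(tensor, mode):
    """Mode-n unfolding: matricize the tensor along one mode."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd_factors(tensor):
    """Factor matrices of a higher-order SVD: the left singular vectors
    of each mode unfolding."""
    return [np.linalg.svd(mode_unfold(tensor, m), full_matrices=False)[0]
            for m in range(tensor.ndim)]
```

In the actual setting, further modes (specimen instance, class, texture) would be added to the projection tensor so that the multilinear decomposition separates the contribution of each factor; the sketch shows only the viewpoint mode.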