This dissertation addresses the challenges of cell segmentation and tracking in time-lapse microscopy using advanced deep-learning techniques. Analyzing microscopy images poses several challenges: segmenting and tracking cells accurately, handling artifacts and noise, and resolving cells that lie close together or overlap in densely populated images. A further obstacle is obtaining the large annotated datasets needed to train robust deep-learning models, since manually annotating microscopy images is time-consuming and labor-intensive. To overcome these issues, we first developed DeepSea, a deep-learning model for efficient cell segmentation and tracking. DeepSea incorporates auxiliary models for cell edge detection, residual blocks for efficiency, and progressive learning techniques, achieving high segmentation accuracy and effective cell tracking. Second, we developed cGAN-Seg, a CycleGAN-based model that generates synthetic images to enhance the training of cell segmentation models; it incorporates style generation paths, linear attention mechanisms, differentiable image augmentation, and a VGG perceptual loss, and it substantially improves segmentation performance on limited datasets across a range of cell types and imaging modalities. Third, we introduced a GAN-based super-resolution video generator that produces high-quality, realistic, annotated time-lapse microscopy videos, further mitigating the scarcity of annotated data for training live single-cell tracking models. Finally, we applied our quantitative single-cell image analysis pipeline to gain insights into cell size regulation, morphological diversity, and the distribution of cells' spatial- and frequency-domain features.
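To illustrate the residual blocks mentioned above, the following is a minimal PyTorch sketch of a generic residual block of the kind commonly used in segmentation encoders. The channel count, normalization, and activation choices here are assumptions for illustration, not DeepSea's published layer configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A generic residual block (illustrative sketch only;
    not DeepSea's exact architecture)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip connection lets gradients bypass the conv stack,
        # which is what makes deeper, parameter-efficient encoders trainable.
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

# Example: a 64-channel feature map passes through with its shape unchanged.
features = torch.randn(1, 64, 128, 128)
print(ResidualBlock(64)(features).shape)  # torch.Size([1, 64, 128, 128])
```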
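The VGG perceptual loss used in cGAN-Seg compares generated and real images in a pretrained network's feature space rather than pixel space. Below is a minimal sketch of that general technique; the choice of VGG16, the truncation point (`layer_index`), and the L1 distance are assumptions for illustration and may differ from cGAN-Seg's exact formulation.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class VGGPerceptualLoss(nn.Module):
    """L1 distance between frozen VGG16 feature maps (illustrative sketch)."""
    def __init__(self, layer_index: int = 16):
        super().__init__()
        # Truncate VGG16 at an intermediate conv layer and freeze its weights;
        # only the generator being trained receives gradients.
        features = vgg16(weights=VGG16_Weights.DEFAULT).features[:layer_index]
        for p in features.parameters():
            p.requires_grad = False
        self.features = features.eval()
        self.criterion = nn.L1Loss()

    def forward(self, generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Comparing feature maps penalizes differences in texture and structure
        # that a plain pixel-wise loss tends to blur away.
        return self.criterion(self.features(generated), self.features(target))

# Example with random 3-channel inputs (single-channel microscopy frames
# would be replicated to 3 channels to match VGG's expected input).
loss_fn = VGGPerceptualLoss()
fake = torch.rand(1, 3, 224, 224)
real = torch.rand(1, 3, 224, 224)
print(loss_fn(fake, real).item())
```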