An Advanced Deep Learning and Computer Vision Framework for Precipitation Retrieval from Multi-spectral Satellite Information
- Author(s): Hayatbini, Negin
- Advisor(s): Sorooshian, Soroosh; Hsu, Kuo-lin; et al.
Among all natural phenomena, precipitation is the main driver of the hydrological cycle, and its accurate estimation is a challenging task that is essential for hydrological and meteorological applications. Recent developments in satellite technologies, which provide higher temporal, spatial, and spectral resolutions, together with advancements in Machine Learning (ML) algorithms and computational power, open up great opportunities to develop analytical, data-driven tools that characterize such natural phenomena and their future behavior more efficiently and accurately. In this dissertation, state-of-the-art data-driven frameworks based on advanced deep learning algorithms and computer vision tools are proposed to extract the most useful features from single or multiple spectral bands of satellite information.
Specifically, in the first part of the dissertation, a novel gradient-based cloud segmentation algorithm is proposed to effectively identify clouds and monitor their evolution, toward more accurate quantitative precipitation estimation and forecasting. The algorithm uses morphological image gradient magnitudes to delineate the boundaries of cloud systems and patches in single- or multi-spectral imagery. The method improves rain detection and estimation skill, identifying cloud regions with an accuracy of up to 98% compared to the existing cloud-patch-based segmentation technique implemented in the operational PERSIANN-CCS (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network - Cloud Classification System) algorithm. The method also extends to hurricane simulations and to synthetic satellite imagery produced by high-resolution weather prediction models.
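The core operation behind this kind of boundary-based segmentation is the morphological image gradient: the per-pixel difference between a local dilation (neighborhood maximum) and erosion (neighborhood minimum), which is large exactly where cloud edges fall. The following is a minimal pure-Python sketch of that operation on a toy brightness-temperature field; the 3x3 structuring element, the field values, and the zero threshold are illustrative assumptions, not details taken from the dissertation.

```python
def morphological_gradient(image):
    """Return dilation(image) - erosion(image) using a 3x3 neighborhood."""
    rows, cols = len(image), len(image[0])
    grad = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Gather the 3x3 neighborhood, clipped at the image borders.
            neigh = [
                image[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))
            ]
            # Dilation minus erosion: large only near sharp boundaries.
            grad[r][c] = max(neigh) - min(neigh)
    return grad

# Toy brightness-temperature field (K): a cold "cloud" block in a warm background.
field = [
    [280, 280, 280, 280, 280],
    [280, 220, 220, 220, 280],
    [280, 220, 220, 220, 280],
    [280, 220, 220, 220, 280],
    [280, 280, 280, 280, 280],
]
g = morphological_gradient(field)
# Nonzero gradient marks boundary pixels separating cloud from background.
boundary = [[1 if v > 0 else 0 for v in row] for row in g]
```

In the homogeneous cloud interior the gradient vanishes, while every pixel whose neighborhood straddles the cloud edge gets a large value, giving a closed boundary that a segmentation step can then trace.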
In the second part, an end-to-end deep learning precipitation estimation framework is developed from multiple sources of remotely sensed information to provide half-hourly, 4-km by 4-km precipitation estimates over the contiguous United States (CONUS). In the first stage, a Rain/No Rain (R/NR) binary mask is generated by classifying the pixels; a Fully Convolutional Network (FCN), U-Net, is then used as a regressor to predict precipitation rates for the rainy pixels. Because of the complex structure of precipitation and the inability of traditional objective functions to capture the true rainfall distribution, a novel distribution-matching approach is designed and implemented: the network is trained with both a conditional Generative Adversarial Network (cGAN) loss term and a Mean Squared Error (MSE) loss term, matching the distribution of the generated results to that of the observed data and relaxing the strict assumptions of conventional loss functions in deep neural networks (DNNs). The newly developed precipitation estimation algorithm is introduced as an augmentation of the PERSIANN-CCS algorithm for estimating global precipitation and is termed PERSIANN-cGAN. Statistics and visualizations of the evaluation metrics show that PERSIANN-cGAN improves precipitation retrieval accuracy compared to both the operational PERSIANN-CCS product and a baseline model trained with the conventional MSE loss term.
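The two-stage retrieval above can be reduced to a simple masking step: the classifier's R/NR probabilities gate the regressor's rain rates, so pixels flagged as No Rain contribute zero. The sketch below illustrates that gating on toy vectors; the 0.5 threshold, the probabilities, and the rain rates are illustrative stand-ins for the classifier and U-Net outputs, not values from the dissertation.

```python
def apply_rnr_mask(rnr_probs, rain_estimates, threshold=0.5):
    """Keep the regressor's estimate only where the R/NR classifier flags rain."""
    return [
        est if prob >= threshold else 0.0
        for prob, est in zip(rnr_probs, rain_estimates)
    ]

probs = [0.9, 0.2, 0.7, 0.1]       # toy R/NR probabilities from the classifier
estimates = [5.1, 0.4, 2.2, 0.1]   # toy rain rates (mm/h) from the regressor
rates = apply_rnr_mask(probs, estimates)
```

Separating detection from estimation this way lets the regressor specialize on rainy pixels instead of spending capacity predicting the overwhelmingly common zero-rain case.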
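The composite objective described above combines a pixel-wise MSE term, which pulls estimates toward the observations, with an adversarial term, which rewards outputs the discriminator scores as realistic. The following is a minimal pure-Python sketch of such a generator loss under stated assumptions: the non-saturating -log D(G(x)) form of the adversarial term, the lambda_adv weight, and all toy numbers are illustrative choices, not values or formulations taken from the dissertation.

```python
import math

def mse(pred, obs):
    """Mean squared error between predicted and observed rain rates."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred)

def adversarial_loss(disc_scores):
    """Non-saturating generator loss, -log D(G(x)), averaged over the batch.

    disc_scores are discriminator probabilities in (0, 1) for generated samples;
    the loss shrinks as the discriminator is fooled into scoring them as real.
    """
    return -sum(math.log(s) for s in disc_scores) / len(disc_scores)

def generator_loss(pred, obs, disc_scores, lambda_adv=0.01):
    """Weighted sum of the MSE term and the adversarial (cGAN) term."""
    return mse(pred, obs) + lambda_adv * adversarial_loss(disc_scores)

# Toy rain-rate vectors (mm/h) and discriminator scores for the generated batch.
pred = [0.0, 1.2, 3.4]
obs = [0.0, 1.0, 4.0]
scores = [0.4, 0.6, 0.5]
loss = generator_loss(pred, obs, scores)
```

The MSE term alone tends to oversmooth and miss heavy-rain extremes; the adversarial term pushes the output distribution toward that of the observations, which is the distribution-matching idea behind PERSIANN-cGAN.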