UC Irvine Electronic Theses and Dissertations

An Advanced Deep Learning Framework for Short-Term Precipitation Forecasting from Satellite Information

Abstract

Short-term Quantitative Precipitation Forecasting is important for aviation and navigation safety, flood forecasting, early flood warning, and natural hazard management. Obtaining accurate and timely precipitation forecasts at short lead times (0-6 hours) is a challenging task and remains an open question in hydrometeorology; addressing it is a major objective of this dissertation. This dissertation introduces a machine learning, specifically deep learning, framework to accurately forecast both high- and low-intensity precipitation events. The framework comprises (1) an infrared cloud-top brightness temperature forecasting model and (2) an infrared-to-rainfall-intensity mapping model built on satellite and radar data. It is effective because it (1) solves a physical problem using continuous infrared data whose evolution is governed by the continuity law of heat transfer, (2) provides forecasts for a wide range of rainfall intensities, and (3) has the potential to become a quasi-global-scale product.
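The two-stage structure described above can be summarized in a short sketch. This is an illustrative outline only: the function names, array shapes, and the persistence and threshold placeholders are assumptions standing in for the learned models developed in the dissertation.

```python
import numpy as np

def forecast_infrared(ir_history: np.ndarray, lead_steps: int) -> np.ndarray:
    """Stage 1: extrapolate cloud-top brightness temperature.

    ir_history has shape (t, h, w); the return value has shape
    (lead_steps, h, w). Persistence is used here only as a placeholder
    for the learned LSTM / GAN forecasters described below.
    """
    return np.repeat(ir_history[-1:], lead_steps, axis=0)

def ir_to_rain(ir_frames: np.ndarray) -> np.ndarray:
    """Stage 2: map each infrared frame to a rainfall-intensity field,
    the role played by PERSIANN (and later PERSIANN-GAN)."""
    # Placeholder mapping: colder cloud tops -> heavier rain (illustrative only).
    return np.clip(253.0 - ir_frames, 0.0, None) * 0.1

def forecast_rain(ir_history: np.ndarray, lead_steps: int) -> np.ndarray:
    """End-to-end short-term forecast: chain the two stages."""
    return ir_to_rain(forecast_infrared(ir_history, lead_steps))
```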

As the initial attempt to develop the precipitation forecasting model, a forecasting model was proposed that extrapolates infrared imagery using an advanced Deep Neural Network (DNN) and feeds the forecasted infrared into an effective rainfall retrieval algorithm to obtain short-term precipitation forecasts. For these two tasks we propose a Long Short-Term Memory (LSTM) network and the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm, respectively. The precipitation forecasts obtained from LSTM combined with PERSIANN were compared with a Recurrent Neural Network (RNN), the persistence method, and Farnebäck optical flow, each combined with PERSIANN, as well as with numerical forecasts from the first version of Rapid Refresh (RAPv1.0), over three regions in the United States: Oregon, Oklahoma, and Florida.

Furthermore, to improve the forecasting skill of the proposed method, a new infrared forecasting method was developed by improving the LSTM model: using neighborhood pixel information more efficiently, resolving the loss-of-resolution problem, and introducing objectives more effective than maximum likelihood estimates. The new infrared forecasting algorithm is a semi-conditional Generative Adversarial Network (GAN) consisting of convolutional, recurrent (LSTM), and convolutional-recurrent (ConvLSTM) layers that forecasts temporally and spatially coherent infrared images (a minimal ConvLSTM cell is sketched below). Compared with the non-adversarial version of the proposed model, the results demonstrate superior performance.

In addition to this forecasting improvement, a new precipitation estimation algorithm is introduced to replace PERSIANN, increasing the infrared-to-rainfall translation accuracy and enabling the framework to become an end-to-end model. The new estimation model is a conditional GAN, termed PERSIANN-GAN, which translates 0.25° × 0.25° infrared data into precipitation estimates at the same resolution by defining a more flexible objective function (an assumed form of such an objective follows the ConvLSTM sketch below). The PERSIANN-GAN results are compared with two Convolutional Neural Network (CNN) baseline models, one without the adversarial part but with bypass connections and the other without the adversarial part and without bypass connections, and with the well-known operational PERSIANN product. The comparisons demonstrate the higher visual and statistical performance of PERSIANN-GAN.
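The abstract names convolutional-recurrent (ConvLSTM) layers as one building block of the semi-conditional GAN. As a point of reference, here is a minimal ConvLSTM cell in PyTorch; the kernel size, channel counts, and usage shapes are assumptions, and the dissertation's generator combines several such layers with convolutional and LSTM layers.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: the LSTM gates are computed with convolutions,
    so the hidden state keeps the spatial layout of the infrared frames."""

    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces all four gates (i, f, g, o) at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)   # update the cell state
        h = o * torch.tanh(c)           # emit the new spatial hidden state
        return h, c

# One step on a batch of 64x64 single-channel infrared frames (shapes assumed):
# cell = ConvLSTMCell(in_ch=1, hid_ch=16)
# h = c = torch.zeros(8, 16, 64, 64)
# h, c = cell(torch.randn(8, 1, 64, 64), (h, c))
```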
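The abstract attributes part of PERSIANN-GAN's gain to "a more flexible objective function" without spelling it out. A common conditional-GAN objective for image-to-image translation, given here purely as an assumed, pix2pix-style illustration rather than the dissertation's exact formulation, pairs an adversarial term with a pixel-wise content term:

```python
import torch
import torch.nn.functional as F

def persiann_gan_generator_loss(disc, ir, rain_obs, rain_est, lam=100.0):
    """Assumed pix2pix-style generator objective (not confirmed as the
    dissertation's exact loss). disc scores (condition, output) pairs."""
    pred_fake = disc(torch.cat([ir, rain_est], dim=1))
    # Adversarial term: push the discriminator to score estimates as real.
    adv = F.binary_cross_entropy_with_logits(
        pred_fake, torch.ones_like(pred_fake))
    # Content term: keep estimates close to the radar-based reference field.
    content = F.l1_loss(rain_est, rain_obs)
    return adv + lam * content
```

Relative to a pure pixel-wise loss, the adversarial term rewards outputs that are statistically indistinguishable from observed rain fields, which is one plausible reading of the "more flexible" objective.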
