Successive earthquakes can drive landscape evolution. However, the mechanisms and pace of the landscape response remain poorly understood. Offset channels in the Carrizo Plain capture the fluvial response to lateral slip on the San Andreas Fault on millennial timescales. In Chapter 1, we developed and tested a model that quantifies the competition between fault slip, which elongates channels, and aggradation, which causes channel infilling and, ultimately, abandonment. Validation of this model supports a transport-limited fluvial response and implies that measurements derived from present-day channel geometry are sufficient to quantify the rate of bedload transport relative to slip rate. Extension of the model identifies the threshold at which a persistent change in transport capacity, obliquity in slip, or advected topography reorganizes the drainage network.
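As a rough illustration only (not the chapter's actual formulation), the sketch below lets a constant slip rate lengthen an offset channel while a constant aggradation rate fills it, and flags abandonment once the fill reaches the channel depth; all parameter values are hypothetical placeholders.

```python
# Illustrative toy model of the slip-aggradation competition; values are placeholders.

slip_rate = 0.034         # m/yr, lateral slip rate (order of Carrizo Plain values)
aggradation_rate = 0.001  # m/yr, assumed net infilling rate
channel_depth = 1.5       # m, assumed channel depth
dt = 10.0                 # yr, time step

offset = fill = t = 0.0
while fill < channel_depth:       # channel survives until infilling outpaces incision
    offset += slip_rate * dt      # fault slip elongates the channel
    fill += aggradation_rate * dt # aggradation fills the channel
    t += dt

print(f"Channel abandoned after ~{t:.0f} yr at an offset of ~{offset:.1f} m")
```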
Chapters 2-4 shift focus to earthquake statistics. Earthquakes follow well-known and remarkably robust empirical laws. The intensity of aftershocks after a large earthquake decreases rapidly with time according to Omori's law, and the total abundance of aftershocks depends strongly on the magnitude of the mainshock. The relative abundance of large and small events is commonly modeled using the Gutenberg-Richter relationship. These empirical laws provide a first-order description of seismicity and underlie current operational earthquake forecasts. However, very large fluctuations from these statistical models suggest non-stationarity in time and space. Are these fluctuations random, or can they be explained? Chapters 2-4 aim to diagnose the causes of this variability and to identify tools that better characterize these features in the context of earthquake forecasting.
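For reference, the standard forms of these relations, as they are usually written, are

\[ n(t) = \frac{K}{(c + t)^{p}} \quad \text{(Omori-Utsu)}, \qquad \log_{10} N(\ge M) = a - b\,M \quad \text{(Gutenberg-Richter)}, \]

where the productivity \(K\) scales with mainshock magnitude roughly as \(K \propto 10^{\alpha M_{\mathrm{main}}}\).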
In Chapter 2, we examine aftershock productivity relative to the global average for all mainshocks (MW>6.5) from 1990 to 2019. A global map of earthquake productivity highlights the influence of tectonic regime: earthquake depth, lithosphere age, and plate boundary type correspond well with aftershock productivity. We investigate the role of mainshock attributes by compiling source dimensions, radiated seismic energy, stress drop, and a measure of slip heterogeneity based on finite-fault source inversions for the largest earthquakes from 1990 to 2017. Taken individually, stress drop, normalized rupture width, and aspect ratio correlate most strongly with aftershock productivity. A multivariate analysis shows that a particular set of parameters (dip, lithospheric age, and normalized rupture area) combines well to improve predictions of aftershock productivity on a cross-validated data set. Our overall analysis is consistent with a model in which the volumetric abundance of nearby stressed faults, rather than variations in source stress, controls aftershock productivity. Thus, we suggest a complementary approach to aftershock forecasting based on geological and rupture properties rather than local calibration alone.
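A minimal sketch of the kind of cross-validated multivariate fit described above is given below. The feature names mirror the text (dip, lithospheric age, normalized rupture area), but the data are synthetic placeholders, not the study's catalog, and the linear model is only one reasonable choice.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(5, 90, n),     # dip (degrees), placeholder values
    rng.uniform(0, 180, n),    # lithospheric age (Myr), placeholder values
    rng.lognormal(0, 0.5, n),  # normalized rupture area, placeholder values
])
# Synthetic log-productivity with noise, for illustration only.
y = 0.01 * X[:, 0] - 0.005 * X[:, 1] + 0.3 * np.log(X[:, 2]) + rng.normal(0, 0.3, n)

# Cross-validated skill of the multivariate predictor.
scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(2))
```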
Recognizing earthquakes as foreshocks in real time would provide a valuable forecasting capability. In a recent study, Gulia and Wiemer (2019) proposed a traffic-light system that relies on abrupt changes in b-values relative to background values. The approach uses high-resolution earthquake catalogs to monitor localized regions around the largest events and to distinguish foreshock sequences (reduced b-values) from aftershock sequences (increased b-values). In Chapter 3, we use the recent, well-recorded foreshock sequences in Ridgecrest, California, and Maria Antonia, Puerto Rico, as an opportunity to test the procedure. For Ridgecrest, a b-value time series may have indicated an elevated risk of a larger impending earthquake during the MW6.4 foreshock sequence and provided an ambiguous identification of the onset of the MW7.1 aftershock sequence. The exact result depends strongly on expert judgment, and Monte Carlo sampling across a range of reasonable decisions most often results in ambiguous warning levels. In the case of the Puerto Rico sequence, we record significant drops in b-value both before and after the largest event (MW6.4) in the sequence, and the b-value has still not returned to background levels (as of 12 February 2020). The Ridgecrest sequence roughly conforms to expectations; the Puerto Rico sequence will only do so if a larger event occurs in the future with an ensuing b-value increase. Any real-time implementation of this approach will require dense instrumentation, consistent (versioned) low-completeness catalogs, well-calibrated maps of regionalized background b-values, systematic real-time catalog production, and robust decision-making about the event source volumes to analyze.
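A minimal sketch of such a b-value traffic light is shown below, assuming the Aki (1965) maximum-likelihood estimator and +/-10% thresholds relative to background in the spirit of Gulia and Wiemer (2019); the catalog, completeness magnitude, and thresholds are placeholders rather than the chapter's actual settings.

```python
import numpy as np

def b_value(mags, mc):
    """Aki (1965) maximum-likelihood b-value (a bin-width correction is usually
    added for binned catalogs)."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - mc)

def traffic_light(b_now, b_background):
    """Map the relative b-value change to a warning level (assumed thresholds)."""
    ratio = b_now / b_background
    if ratio >= 1.10:
        return "green"   # aftershock-like: elevated b-value
    if ratio <= 0.90:
        return "red"     # foreshock-like: depressed b-value
    return "orange"      # ambiguous

# Example with synthetic Gutenberg-Richter magnitudes (b = 1), for illustration only.
rng = np.random.default_rng(1)
mags = 2.0 + rng.exponential(scale=1.0 / np.log(10), size=500)
b = b_value(mags, mc=2.0)
print(f"b = {b:.2f}, level = {traffic_light(b, b_background=1.0)}")
```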
Seismology is witnessing explosive growth in the diversity and scale of earthquake catalogs owing to improved seismic networks and increasingly automated data augmentation techniques. A key assumption in this community effort is that more detailed observations should translate into improved earthquake forecasts. Current operational earthquake forecasts build on seminal work designed for sparse earthquake records. Advances over the past decades have focused mainly on the regionalization of the models, the recognition of catalog peculiarities, and the extension to spatial forecasts, but have failed to leverage the wealth of new geophysical data. In Chapter 4, we develop a neural-network-based earthquake forecasting model that leverages the new data in an adaptable forecasting framework: the Recurrent Earthquake foreCAST (RECAST). We benchmark temporal forecasts generated by RECAST against the widely used Epidemic Type Aftershock Sequence (ETAS) model using both synthetic and observed earthquake catalogs. We consistently find improved model fit and forecast accuracy for Southern California earthquake catalogs with more than 10,000 events. The approach provides a flexible and scalable path forward to incorporate additional data into earthquake forecasts.
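For context, the benchmark ETAS model reduces, in its temporal form, to the conditional intensity sketched below (background rate plus Omori-decaying, magnitude-scaled contributions from past events); parameter values are placeholders, and the RECAST architecture itself is not reproduced here.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags,
                   mu=0.1, k=0.05, alpha=1.0, c=0.01, p=1.1, mc=3.0):
    """Temporal ETAS rate at time t: background mu plus Omori-decaying
    contributions from all prior events, scaled by their magnitudes."""
    past = event_times < t
    triggers = (k * 10.0 ** (alpha * (event_mags[past] - mc))
                / (t - event_times[past] + c) ** p)
    return mu + triggers.sum()

# Example: rate one day after a single M5 event on a quiet background.
times = np.array([0.0])   # days
mags = np.array([5.0])
print(etas_intensity(1.0, times, mags))   # events per day
```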