We present a simple method for long- and short-term earthquake forecasting (estimating earthquake rate per unit area, time, and magnitude). For illustration we apply the method to the Pacific plate boundary region and the Mediterranean area surrounding Italy and Greece. Our ultimate goal is to develop forecasting and testing methods to validate or falsify common assumptions regarding earthquake potential. Our immediate purpose is to extend the forecasts we made starting in 1999 for the northwest and southwest Pacific to include somewhat smaller earthquakes and then adapt the methods to apply in other areas. The previous forecasts used the CMT earthquake catalog to forecast magnitude 5.8 and larger earthquakes. Like our previous forecasts, the new ones here are based on smoothed maps of past seismicity and assume spatial clustering. Our short-term forecasts also assume temporal clustering. An important adaptation in the new forecasts is to abandon the use of tensor focal mechanisms, which permits use of earthquake catalogs that reliably report many smaller quakes for which no mechanism estimates are available. As a result we can forecast earthquakes at higher spatial resolution and down to a magnitude threshold of 4.7. The new forecasts can be tested far more quickly because smaller events are considerably more frequent. In addition, our previous method used the focal mechanisms of past earthquakes to estimate the preferred directions of earthquake clustering; however, it relied on assumptions that generally hold only in subduction zones. The new approach escapes those assumptions. In the northwest Pacific the new method gives an estimated earthquake rate density very similar to that of the previous forecast.
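The smoothed-seismicity construction can be illustrated compactly. The sketch below grids a catalog of past epicenters with a radially symmetric power-law kernel; the kernel form, the smoothing scale d_km, the flat-Earth distance approximation, and the grid normalization are illustrative assumptions, not the exact choices of the published forecasts.

```python
import numpy as np

def long_term_rate(cat_lon, cat_lat, grid_lon, grid_lat, d_km=10.0):
    """Smoothed-seismicity map on a regular grid: each past epicenter
    contributes a radially symmetric kernel decaying as 1/(r^2 + d^2).
    Kernel form and scale d_km are assumed for illustration."""
    km_per_deg = 111.32                          # crude degrees-to-km factor
    glon, glat = np.meshgrid(grid_lon, grid_lat)
    rate = np.zeros_like(glon, dtype=float)
    for lon, lat in zip(cat_lon, cat_lat):
        dx = (glon - lon) * km_per_deg * np.cos(np.radians(lat))
        dy = (glat - lat) * km_per_deg
        rate += 1.0 / (dx**2 + dy**2 + d_km**2)  # power-law smoothing kernel
    return rate / rate.sum()                     # normalize: cells sum to 1
```

Multiplying the normalized map by the observed regional rate of events above the magnitude threshold converts it into a rate density suitable for testing.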

We present estimates of future earthquake rate density (probability per unit area, time, and magnitude) on a 0.1-degree grid for a region including California and Nevada, based only on data from past earthquakes. Our long-term forecast is not explicitly time-dependent, but it can be updated at any time to incorporate information from recent earthquakes. The present version, founded on several decades' worth of data, is suitable for testing without updating over a five-year period as part of the experiment conducted by the Collaboratory for the Study of Earthquake Predictability (CSEP). The short-term forecast is meant to be updated daily and tested against similar models by CSEP. It comprises a fraction of our long-term forecast plus time-dependent contributions from all previous earthquakes; those contributions decrease with time according to the Omori law, in proportion to the reciprocal of the elapsed time. Both forecasts estimate rate density using a radially symmetric spatial smoothing kernel that decreases approximately as the reciprocal of the square of epicentral distance, weighted according to the magnitude of each past earthquake. We made two versions of both the long- and short-term forecasts, based on the Advanced National Seismic System (ANSS) and Preliminary Determinations of Epicenters (PDE) catalogs, respectively. The two versions are quite consistent, but for testing purposes we prefer those based on the ANSS catalog, since it covers a longer time interval, is complete to a lower magnitude threshold, and has more precise locations. Both forecasts apply to shallow earthquakes only (depth 25 km or less) and assume a tapered Gutenberg-Richter magnitude distribution extending down to a threshold of 4.0.
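Two ingredients of this forecast lend themselves to compact illustration: the short-term rate, built from a fraction of the long-term map plus Omori-decaying contributions from past events, and the tapered Gutenberg-Richter magnitude distribution. In the sketch below the fraction frac, the Omori offset c_days, the b-value, and the corner magnitude are assumed example values, not the fitted ones, and the magnitude weighting of the smoothing kernel is omitted for brevity.

```python
import numpy as np

def short_term_rate(long_term_map, quake_cells, quake_times, t_now,
                    frac=0.1, c_days=0.1):
    """Short-term map: a fraction of the long-term rate plus Omori-law
    (~1/elapsed-time) contributions from past events, each deposited in
    its grid cell.  frac and c_days are illustrative assumptions."""
    rate = frac * long_term_map.copy()
    for cell, t_i in zip(quake_cells, quake_times):
        rate[cell] += 1.0 / (t_now - t_i + c_days)   # Omori decay, p = 1
    return rate

def tapered_gr_survival(m, m_t=4.0, b=0.95, m_corner=8.0):
    """Tapered Gutenberg-Richter law: fraction of events with magnitude
    >= m, expressed via scalar seismic moments (a Pareto law with an
    exponential taper at a corner moment).  b and m_corner are assumed
    example values."""
    moment = lambda mag: 10.0 ** (1.5 * mag + 9.05)  # N*m, standard scaling
    M, Mt, Mc = moment(np.asarray(m)), moment(m_t), moment(m_corner)
    return (Mt / M) ** (b / 1.5) * np.exp((Mt - M) / Mc)
```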

We compute the stress tensor in the upper crust of southern California as a function of time and compare observed seismicity with the estimated stress at the time of each earthquake. Several recent developments make it possible to do this much more realistically than before: (1) a wealth of new geodetic and geologic data for southern California and (2) a catalog of moment tensors for all earthquakes with magnitudes larger than 6 since 1850 and larger than 5 since 1910. We model crustal deformation using both updated geodetic data and geologically determined fault slip rates. We subdivide the crust into elastic blocks, delineated by faults that slip freely below a locking depth at a constant rate determined by the relative block motion. We compute normal and shear stresses on the nodal planes of each earthquake in the catalog. We consider stress increments from previous earthquakes ("seismic stress") and aseismic tectonic stress, both separately and in combination. The locations and mechanisms of earthquakes are best correlated with the aseismic shear stress. Including the cumulative coseismic effects of past earthquakes does not significantly improve the correlation. Correlations between normal stress and earthquakes are always very sensitive to the start date of the catalog, to whether we exclude earthquakes very close to others, and to whether we evaluate stress at the hypocenter or over the entire rupture surface of an earthquake. Although the correlation of tectonic stress with earthquake triggering is robust, other results are unstable, apparently because the catalog contains so few earthquakes.
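Resolving a modeled stress tensor onto an earthquake's nodal planes reduces to projecting the traction vector onto the plane normal and the slip direction. A minimal sketch follows, using the standard Aki-Richards strike/dip/rake conventions in (north, east, down) coordinates; the coordinate frame and sign conventions are assumptions for illustration and may differ from those used in the paper.

```python
import numpy as np

def resolve_stress(sigma, strike, dip, rake):
    """Normal and shear (slip-parallel) stress on a fault plane.
    sigma: 3x3 stress tensor in (north, east, down) coordinates,
    tension positive.  Angles in degrees, Aki-Richards convention;
    conventions are assumed for illustration."""
    phi, delta, lam = np.radians([strike, dip, rake])
    n = np.array([-np.sin(delta) * np.sin(phi),
                   np.sin(delta) * np.cos(phi),
                  -np.cos(delta)])                  # unit normal to the plane
    s = np.array([np.cos(lam) * np.cos(phi) + np.cos(delta) * np.sin(lam) * np.sin(phi),
                  np.cos(lam) * np.sin(phi) - np.cos(delta) * np.sin(lam) * np.cos(phi),
                 -np.sin(lam) * np.sin(delta)])     # unit slip direction
    t = sigma @ n                                   # traction on the plane
    sigma_n = n @ t                                 # normal stress
    tau = s @ t                                     # shear stress along slip
    return sigma_n, tau
```

A Coulomb-style failure stress would then be tau - mu_f * sigma_n for an assumed effective friction coefficient mu_f.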

We extend existing branching models for earthquake occurrences by incorporating potentially important estimates of tectonic deformation and by allowing the parameters in the models to vary across different tectonic regimes. We partition the Earth's surface into five regimes: trenches (including subduction zones and oceanic convergent boundaries, together with earthquakes in the outer rise or overriding plate); fast spreading ridges and oceanic transforms; slow spreading ridges and transforms; active continental zones; and plate interiors (everything not included in the previous categories). Our purpose is to specialize the models to give them the greatest possible predictive power for use in earthquake forecasts. We expected the parameters of the branching models to differ significantly among the various tectonic regimes, because earlier studies (Bird and Kagan, Bull Seismol Soc Am 94(6):2380–2399, 2004) found that the magnitude limits and other parameters differed between similar categories. We compiled subsets of the CMT and PDE earthquake catalogs corresponding to each tectonic regime and optimized the parameters for each regime, and for the whole Earth, using a maximum likelihood procedure. We also analyzed branching models for California and Nevada using regional catalogs. Our estimates of parameters that can be compared to those of other models were consistent with published results; examples include the proportion of triggered earthquakes and the exponent describing the temporal decay of triggered earthquakes. We also estimated epicentral location uncertainty and rupture zone size, and our results are consistent with independent estimates. Contrary to our expectation, we found no dramatic differences in the branching parameters for the various tectonic regimes. We did find some modest differences between regimes that were robust under changes in earthquake catalog and lower magnitude threshold. Subduction zones have the highest earthquake rates, the largest upper magnitude limit, and the highest proportion of triggered events. Fast spreading ridges have the smallest upper magnitude limit and the lowest proportion of triggered events. The statistical significance of these variations cannot be assessed until methods are developed for estimating confidence limits reliably. Some results apparently depend on arbitrary decisions adopted in the analysis. For example, the proportion of triggered events decreases as the lower magnitude limit is increased, possibly because our procedure for assigning independence probability favors larger earthquakes. In some tests we censored earthquakes occurring near and just after a previous event, to account for the fact that most such earthquakes will be missing from the catalog. Fortunately the branching model parameters were hardly affected, suggesting that the inability to detect immediate aftershocks does not cause a serious estimation bias. We compare our branching model with the ETAS model and discuss differences in the models' parametrization and in the results of the earthquake-catalog analyses.
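The maximum likelihood fitting described here can be illustrated with the purely temporal version of an ETAS-style branching model; the full space-time-magnitude models compared in the paper are more elaborate, so this is a schematic, and the magnitude threshold m0 and the starting parameter values are assumed example choices.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, times, mags, t_end, m0=4.7):
    """Negative log-likelihood of a temporal ETAS-style branching model
    with conditional intensity
      lambda(t) = mu + sum_{t_i < t} K * exp(a*(m_i - m0)) * (t - t_i + c)**(-p).
    times must be sorted in increasing order."""
    mu, K, a, c, p = params
    if min(mu, K, c) <= 0.0 or p <= 1.0:
        return np.inf                            # keep parameters in range
    log_lam_sum = 0.0
    for j in range(len(times)):
        dt = times[j] - times[:j]
        lam = mu + np.sum(K * np.exp(a * (mags[:j] - m0)) * (dt + c) ** (-p))
        log_lam_sum += np.log(lam)
    # the integral of lambda over [0, t_end] has a closed form for p > 1
    dt_end = t_end - times
    integral = mu * t_end + np.sum(
        K * np.exp(a * (mags - m0)) *
        (c ** (1.0 - p) - (dt_end + c) ** (1.0 - p)) / (p - 1.0))
    return integral - log_lam_sum

# usage sketch: fit by maximum likelihood with a derivative-free optimizer
# res = minimize(neg_log_likelihood, x0=[0.1, 0.05, 1.0, 0.01, 1.2],
#                args=(times, mags, t_end), method="Nelder-Mead")
```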

The concept of background seismicity is closely related to the identification of spontaneous and triggered earthquakes. The definition of foreshocks, main shocks, and aftershocks currently rests on procedures whose parameter values are commonly set by subjective criteria. We propose a method for recognizing background and triggered seismicity statistically. Rather than dividing events into these two categories in a binary way, we assign to each event a probability of being independent or triggered. This probability comes from an algorithm based on the ETAS model. A certain degree of subjectivity remains in this procedure, but it is limited by the ability to adjust the free parameters of the algorithm with rigorous statistical criteria such as maximum likelihood. We applied the method to the seismicity of southern California and analyzed the sensitivity of the results to the free parameters of the algorithm. Finally, we show how our statistical declustering algorithm may be used to map the background seismicity, or the moment rate, in a seismic area.
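Under a fitted ETAS model, the independence probability of each event is the ratio of the background intensity to the total conditional intensity at the event's time (and, in the full model, location). A minimal temporal sketch follows, reusing the intensity form above; the parameter values are assumed to come from a prior maximum-likelihood fit, and the spatial part of the model is omitted for brevity.

```python
import numpy as np

def independence_probabilities(times, mags, mu, K, a, c, p, m0=4.0):
    """Probability that each event is a background (independent) event
    under a temporal ETAS-style model: p_bg[j] = mu / lambda(t_j), where
    lambda includes triggering contributions from all earlier events.
    times must be sorted; parameters are assumed to be pre-fitted."""
    p_bg = np.empty(len(times))
    for j in range(len(times)):
        dt = times[j] - times[:j]
        triggered = np.sum(K * np.exp(a * (mags[:j] - m0)) * (dt + c) ** (-p))
        p_bg[j] = mu / (mu + triggered)
    return p_bg
```

Summing p_bg over the events falling in each grid cell (optionally weighting by seismic moment) yields a declustered map of background seismicity or moment rate, analogous to the mapping described above.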