High‐resolution Regional Climate Models (RCMs) are driven at their lateral boundaries by information from global models (usually coupled Atmosphere/Ocean General Circulation Models – AOGCMs). RCMs are generally driven by 6‐hourly data from the AOGCM, and the spatial AOGCM data are interpolated to the boundaries of the RCM grid. When driven by observed (reanalysis) data, RCMs show high skill in their simulations of present‐day climate within their domain, attributable largely to the improved resolution of surface boundary conditions (especially orography) relative to global models. For projections of future climate, however, when the RCM is driven by future climate‐change output from an AOGCM, the skill of the RCM will depend to some degree on the skill of the AOGCM. The best RCM results are therefore likely to be produced by the best driver AOGCMs, which raises the question of how to decide which AOGCMs are best.
There are different ways to assess the relative skill of different AOGCMs. We consider four methods here. First, we investigate how well different AOGCMs simulate present‐day climate – better models are those that simulate present climate better. Second, we compare projections of future climate across a range of AOGCMs, and judge models whose projections differ most from the multi‐model mean (outlier models) to be the least reliable. Third, we consider ENSO performance. Present and future climate over the California region is strongly linked to the El Niño/Southern Oscillation (ENSO) phenomenon, so AOGCMs that produce poorer simulations of ENSO should be judged less useful as RCM drivers.
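As a simple illustration of the outlier criterion (using hypothetical model names and projected changes, not values from this study), one could rank models by how far their projected change departs from the multi‐model mean:

```python
import numpy as np

# Hypothetical projected changes in a regional-mean quantity, one per AOGCM.
changes = {"ModelA": 2.1, "ModelB": 2.4, "ModelC": 3.8, "ModelD": 1.9}

mean_change = np.mean(list(changes.values()))
departure = {m: abs(dx - mean_change) for m, dx in changes.items()}

# Models listed from closest to the multi-model consensus to most outlying.
for model, d in sorted(departure.items(), key=lambda kv: kv[1]):
    print(f"{model}: departure from multi-model mean = {d:.2f}")
```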
As a fourth criterion we consider the western boundary fluxes directly. Climate and climate changes within an RCM domain must depend to a large degree on the fluxes of mass, momentum, heat and moisture into the domain along its western boundary. It is important, therefore, to assess how well AOGCMs can simulate present‐day lateral boundary conditions, and this provides a fourth criterion for selecting the AOGCMs best suited as RCM drivers. We consider both real fluxes calculated using 6‐hourly data (viz. $\overline{uX}$ for a variable $X$, where $u$ is the westerly wind speed and the overbar denotes a time average) and ‘pseudo fluxes’ defined by $\bar{u}\,\bar{X}$, which require only monthly data for their calculation. We show that, in terms of their implications for validation of model fluxes, pseudo fluxes give the same results as real fluxes and so may be used in their place.
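To make the distinction concrete, the following sketch (with hypothetical array shapes, variable names and synthetic data, not taken from this study) contrasts the real flux, computed from 6‐hourly data, with the pseudo flux, computed from monthly means:

```python
import numpy as np

def real_flux(u_6h, x_6h):
    """Real flux: time average of the instantaneous product, bar(u*X),
    computed from 6-hourly data."""
    return np.mean(u_6h * x_6h, axis=0)

def pseudo_flux(u_6h, x_6h):
    """Pseudo flux: product of the monthly means, bar(u)*bar(X),
    which needs only monthly-mean output."""
    return np.mean(u_6h, axis=0) * np.mean(x_6h, axis=0)

# Synthetic example: 120 six-hourly steps (one month) at 17 latitude
# points along the western boundary.
rng = np.random.default_rng(0)
u = 10.0 + 5.0 * rng.standard_normal((120, 17))   # westerly wind (m/s)
q = 5e-3 + 1e-3 * rng.standard_normal((120, 17))  # moisture (kg/kg)

print("real  :", real_flux(u, q)[:3])
print("pseudo:", pseudo_flux(u, q)[:3])  # differs by the transient-eddy term bar(u'X')
```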
For the validation of boundary fluxes, we use the Mahalanobis Distance as a metric for determining how well a model matches the observations, and we develop statistical tests to determine whether model/observed differences are statistically significant.
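A minimal sketch of such a metric is given below. It assumes that the observed interannual variability supplies the covariance matrix and that a chi‐square test is used to judge significance; the function names, array layout and the specific test are illustrative assumptions, not the procedure used in this study.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis
from scipy.stats import chi2

def mahalanobis_distance(model_mean, obs_years):
    """model_mean: (n_points,) model climatology along the boundary.
    obs_years: (n_years, n_points) observed annual-mean fluxes."""
    obs_mean = obs_years.mean(axis=0)
    cov = np.cov(obs_years, rowvar=False)   # interannual covariance of observations
    cov_inv = np.linalg.pinv(cov)           # pseudo-inverse for numerical stability
    return mahalanobis(model_mean, obs_mean, cov_inv)

def p_value(d, n_points):
    """Crude significance check, assuming approximate normality: compare the
    squared distance with a chi-square distribution (n_points degrees of freedom)."""
    return chi2.sf(d**2, df=n_points)
```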
We have assessed 20 models from the AR4/CMIP3 database. For the validation, outlier and ENSO test criteria we are able to divide the models into three groups. Superior models are CCSM3.0, GFDL2.0, GFDL2.1, IPSL, MIROCmedres and HadCM3. Inferior models that cannot be recommended as RCM drivers are CNRM, FGOALS, GISS‐EH, GISS‐ER, INM and PCM. Intermediate models are CCCMA, MRI, ECHO‐G, BCCR, CSIRO, MIROChires, ECHAM5 and HadGEM1. We note that CCCMA, MRI and ECHO‐G are flux‐adjusted models, which may produce a favorable bias in their validation performance, so these should be used, if at all, with caution.
For direct validation of western boundary flux performance we have examined only CCSM3.0, GFDL2.1, PCM, GISS‐EH and MIROCmedres. We find major errors in all of these models in their simulations of the strengths of the subtropical and polar jets – all models produce jets that are too strong. Moisture flux simulations are better. Here, MIROCmedres is the best model, followed in order by GFDL2.1, CCSM3.0, GISS‐EH and PCM. The last two models here cannot be recommended as RCM drivers.