eScholarship
Open Access Publications from the University of California

The International Land Model Benchmarking (ILAMB) System: Design, Theory, and Implementation

Author(s):

  • Collier, Nathan
  • Hoffman, Forrest M
  • Lawrence, David M
  • Keppel‐Aleks, Gretchen
  • Koven, Charles D
  • Riley, William J
  • Mu, Mingquan
  • Randerson, James T
  • et al.

Published Web Location

http://dx.doi.org/10.1029/2018MS001354
No data is associated with this publication.
Abstract

©2018. The Authors. The increasing complexity of Earth system models has inspired efforts to quantitatively assess model fidelity through rigorous comparison with best available measurements and observational data products. Earth system models exhibit a high degree of spread in predictions of land biogeochemistry, biogeophysics, and hydrology, which are sensitive to forcing from other model components. Based on insights from prior land model evaluation studies and community workshops, the authors developed an open source model benchmarking software package that generates graphical diagnostics and scores model performance in support of the International Land Model Benchmarking (ILAMB) project. Employing a suite of in situ, remote sensing, and reanalysis data sets, the ILAMB package performs comprehensive model assessment across a wide range of land variables and generates a hierarchical set of web pages containing statistical analyses and figures designed to provide the user insights into strengths and weaknesses of multiple models or model versions. Described here is the benchmarking philosophy and mathematical methodology embodied in the most recent implementation of the ILAMB package. Comparison methods unique to a few specific data sets are presented, and guidelines for configuring an ILAMB analysis and interpreting resulting model performance scores are discussed. ILAMB is being adopted by modeling teams and centers during model development and for model intercomparison projects, and community engagement is sought for extending evaluation metrics and adding new observational data sets to the benchmarking framework.
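The scoring idea described in the abstract can be illustrated with a toy example. The sketch below is a hypothetical simplification for illustration only, not the ILAMB package's actual formulation: it maps the relative bias between a model series and an observational series to a score in (0, 1] via an exponential, so that perfect agreement scores 1.0 and larger relative errors decay toward 0.

```python
import math

def bias_score(model, obs):
    """Score model fidelity against observations on a 0-1 scale.

    A simplified, ILAMB-style exponential scoring of relative bias
    (illustrative only; the ILAMB package uses more elaborate,
    spatially resolved metrics).
    """
    n = len(obs)
    mean_obs = sum(obs) / n
    mean_mod = sum(model) / n
    # Bias of the model mean relative to the observational mean.
    bias = mean_mod - mean_obs
    # Relative error: bias normalized by the observed mean magnitude.
    rel_err = abs(bias) / abs(mean_obs)
    # Map relative error to a score in (0, 1]; zero bias gives 1.0.
    return math.exp(-rel_err)

# Hypothetical monthly means of some land variable (e.g., GPP).
obs = [2.0, 2.5, 3.0, 2.8]
mod = [2.1, 2.4, 3.2, 2.9]
print(round(bias_score(mod, obs), 3))  # → 0.971
```

In practice such per-variable scores are aggregated across many data sets and variables into the hierarchical summary tables and web pages the abstract describes.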
