Methodological Developments in the International Land Model Benchmarking (ILAMB) Effort

Abstract

As earth system models (ESMs) become increasingly complex, there is a growing need for comprehensive, multi-faceted evaluation of model performance. The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration effort designed to improve the performance of land models and the design of new measurement campaigns aimed at reducing uncertainties associated with key land surface processes. Although the effort has been established for over a decade, we continue to make developments and improvements in order to incorporate new datasets and to adapt our scoring methodology so that it is more useful to model developers in identifying and diagnosing model errors. More information about our methodology, reference data, and catalogs of results can be found at https://www.ilamb.org.

In addition to an overview, we present a shift in the ILAMB scoring methodology. To make errors comparable across different areas of the globe, ILAMB originally normalized errors by the variability of the reference quantity at each location. For many variables, this choice strongly weights performance in the tropics and consequently does not give a balanced perspective on model performance globally. We therefore propose normalizing errors by regional error quantiles instead. We select regions that conform to the traditional understanding of biomes, focusing on areas where we expect errors to be of commensurate magnitude. We then choose a normalizing quantity by taking a quantile of the errors within each biome across a selection of CMIP5 and CMIP6 era models. In this way, the scores are contextualized by the performance of recent generations of models.
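A minimal sketch of the quantile-based normalization idea is shown below, assuming simple biome masks on a latitude-longitude grid, a pooled 0.98 quantile of absolute errors across a reference ensemble, and illustrative function names. It is not the published ILAMB implementation, only an illustration of the approach described above.

```python
import numpy as np

def regional_quantile_normalizers(ensemble_errors, region_masks, q=0.98):
    """One normalizer per region: the q-th quantile of absolute errors
    pooled across a reference ensemble of models (e.g., CMIP5/CMIP6 era).

    ensemble_errors : list of 2D arrays (lat x lon), one per model
    region_masks    : dict mapping region name -> boolean 2D mask
    """
    stacked = np.stack([np.abs(e) for e in ensemble_errors])  # (model, lat, lon)
    return {name: np.nanquantile(stacked[:, mask], q)
            for name, mask in region_masks.items()}

def normalized_error_map(model_error, region_masks, normalizers):
    """Divide a model's absolute error by its region's normalizer so that
    errors become comparable in magnitude across biomes."""
    out = np.full_like(model_error, np.nan, dtype=float)
    for name, mask in region_masks.items():
        out[mask] = np.abs(model_error[mask]) / normalizers[name]
    return out

# Toy usage with two hypothetical biome regions on a small grid
rng = np.random.default_rng(0)
lat, lon = 10, 20
masks = {"tropics": np.zeros((lat, lon), bool), "boreal": np.zeros((lat, lon), bool)}
masks["tropics"][:5, :] = True
masks["boreal"][5:, :] = True
ensemble = [rng.normal(0, 3, (lat, lon)) for _ in range(5)]  # stand-in ensemble errors
norms = regional_quantile_normalizers(ensemble, masks, q=0.98)
scores = normalized_error_map(rng.normal(0, 3, (lat, lon)), masks, norms)
```

Because the normalizer is derived from the error distribution of recent model generations within each biome, rather than from the local variability of the reference data, no single region dominates the aggregate score.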

Category
Metrics, Benchmarks and Credibility of model output and data for science and end users
Biogeochemistry (Processes and Feedbacks)