Methodological Developments in the International Land Model Benchmarking Effort

Presentation Date
Tuesday, December 12, 2023, 2:10pm - 6:30pm
Location
MC - Poster Hall A-C - South
Abstract

As Earth system models (ESMs) become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model performance. The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration effort designed to improve the performance of land models and the design of new measurement campaigns that reduce uncertainties associated with key land surface processes. Although the project has been established for over a decade, we continue to develop and improve it, incorporating new datasets and adapting our scoring methodology to be more useful to model developers in identifying and diagnosing model errors.

The version 2.7 release of the ILAMB Python software includes new datasets for gross primary productivity and latent/sensible heat fluxes (WECANN: Water, Energy, and Carbon with Artificial Neural Networks), growing season net carbon flux (Loechli2023), biomass (ESACCI: the European Space Agency Climate Change Initiative Biomass product, and XuSaatchi2021), methane (FLUXNET), and surface soil moisture (Wang2021). In addition to these land-focused datasets, ILAMB v2.7 marks the first release of the International Ocean Model Benchmarking (IOMB) package. While the codebase is the same as that used for land benchmarking, this release adds datasets and a configuration file for benchmarking ocean models.
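
As an illustration of what such a configuration might contain, the sketch below follows the layout of ILAMB-style configure files; the heading names, variable name, and dataset path shown here are hypothetical placeholders and are not taken from the release itself.

```ini
# Hypothetical IOMB configure-file entry (placeholder names and paths)
[h1: Ocean State]

[h2: Sea Surface Temperature]
variable = "tos"

[WOA]
source = "DATA/tos/WOA/tos.nc"
```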

Finally, we present a shift in the ILAMB scoring methodology. To make errors comparable across different areas of the globe, ILAMB originally normalized errors by the variability of the reference quantity at each location. For many variables, this choice strongly weights performance in the tropics and consequently does not give a balanced perspective on performance across the globe. We therefore propose normalizing errors by regional error quantiles instead. We select regions that conform to the traditional understanding of biomes, focusing on areas where we expect errors to be commensurate in order of magnitude. We then choose a normalizing quantity by taking a quantile of the errors within these biomes across a selection of CMIP5- and CMIP6-era models. In this way, the scores are contextualized by the performance of recent generations of models.
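
To make the proposed normalization concrete, the following is a minimal sketch (not ILAMB code) of pooling errors within biome-like regions across several models, taking a quantile of the pooled errors as the normalizer, and scaling each model's error map by its region's normalizer. The array names, the quantile level, and the exponential mapping of normalized error to a score are illustrative assumptions rather than the values or functions used by ILAMB.

```python
# A minimal sketch of quantile-based error normalization, assuming:
#  - `errors` maps model name -> 2D array of absolute errors (model minus
#    reference) on a common lat/lon grid,
#  - `biome_mask` is an integer array on the same grid labeling each cell
#    with a biome/region id,
#  - the quantile level (0.98) is illustrative only.
import numpy as np

def regional_quantile_norms(errors, biome_mask, q=0.98):
    """For each biome, pool errors across all models and take a quantile."""
    norms = {}
    for biome in np.unique(biome_mask):
        pooled = np.concatenate(
            [err[biome_mask == biome].ravel() for err in errors.values()]
        )
        norms[biome] = np.nanquantile(pooled, q)
    return norms

def normalized_error_map(error, biome_mask, norms):
    """Scale each grid cell's error by its biome's quantile-based normalizer."""
    scaled = np.full_like(error, np.nan, dtype=float)
    for biome, norm in norms.items():
        sel = biome_mask == biome
        scaled[sel] = error[sel] / norm
    return scaled

# Example usage with synthetic data on a 10x20 grid and two hypothetical models
rng = np.random.default_rng(0)
biome_mask = rng.integers(0, 3, size=(10, 20))         # three fake biome regions
errors = {m: np.abs(rng.normal(size=(10, 20))) for m in ("modelA", "modelB")}
norms = regional_quantile_norms(errors, biome_mask)
scaled = normalized_error_map(errors["modelA"], biome_mask, norms)
score = np.exp(-np.nanmean(scaled))                    # exponential error-to-score mapping
```

In this sketch a score near one indicates errors well below the cross-model regional quantile, so a model is judged relative to how the recent generation of models performs in the same biome rather than relative to local reference variability.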
