As earth system models (ESMs) become increasingly complex, there is a growing need for comprehensive, multi-faceted evaluation of model projections. To advance understanding of terrestrial biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry–climate feedbacks and ecosystem processes in these models are essential for reducing the substantial acknowledged uncertainties in 21st-century climate change projections.
Building upon past model evaluation studies, the goals of the International Land Model Benchmarking (ILAMB) project are to:
- Develop internationally accepted benchmarks for land model performance by drawing upon international expertise and collaboration
- Promote the use of these benchmarks by the international community for model intercomparison
- Strengthen linkages among experimental, remote sensing, and climate modeling communities in the design of new model tests and new measurement programs
- Support the design and development of open source benchmarking tools
The second ILAMB Workshop in the United States was convened May 16–18, 2016, in Washington, D.C., USA. Sponsored by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research, the workshop was organized by the Biogeochemistry–Climate Feedbacks Scientific Focus Area (BGC Feedbacks SFA) project. The overarching goals of the workshop were to engage the international research community in defining scientific priorities for (1) the design of new metrics, (2) the improvement of model development and workflow practices, and (3) Coupled Model Intercomparison Project (CMIP) evaluation, and to learn about new observational data sets and measurement campaigns.
The workshop drew more than 60 on-site participants, and between 20 and 30 individuals—including students and postdocs—attended online at any given time during the plenary sessions. Participants came from Australia, Canada, China, Germany, Japan, the Netherlands, Sweden, the United Kingdom, and the United States, and represented 10 major modeling centers. Plenary presentations focused on model benchmarking, emergent constraints, evaluation metrics, uncertainty quantification, and field experiments and measurement networks.