
Publication Date
19 June 2020

Benchmarking Simulated Precipitation in Earth System Models

Authors

Angeline Pendergrass, Peter J. Gleckler, L. Ruby Leung, Christian Jakob

Earth system models (ESMs) bridge observationally based and theoretical understanding of the Earth system. They are among the most frequently used tools to study a variety of questions related to variability and changes in Earth’s climate. For many applications, ESMs must realistically simulate the observed large-scale patterns and seasonal cycles of precipitation, which have a multitude of societal and national security implications. Despite steady improvement in the simulation of precipitation, model errors in many aspects of precipitation characteristics limit the use of ESMs both in understanding Earth system variability and change and for decision-making.

The impetus for this workshop was to accelerate efforts to improve ESMs by designing a capability to comprehensively evaluate simulated precipitation in ESMs—a capability that will help ESM developers better understand their models and provide them with quantitative targets for demonstrating model improvements. A group of experts with diverse interests participated in the workshop, including model developers, observational experts, scientists with expertise in diagnosing or evaluating simulated precipitation and related processes, and several with experience in objectively summarizing model performance with metrics. Two goals steered the workshop discussions:

  1. Identify a holistic set of observed rainfall characteristics that could be used to define metrics that gauge the consistency between ESMs and observations
  2. Assess state-of-the-science methods used to evaluate simulated rainfall and identify areas of research for exploratory metrics to improve the understanding of model biases and meet stakeholder needs

The first challenge was to identify a set of observed characteristics that can reliably be used for benchmarking models—establishing observational targets and determining how far models are from these targets. Multiple viable approaches were discussed, and workshop participants agreed that it was most important to establish a starting point that, while imperfect, can be useful and provide a foundation for future improvement and expansion. A set of six precipitation characteristics was agreed upon as an appropriate starting point for developing baseline precipitation metrics: the spatial distribution of mean state precipitation (including snow), the seasonal cycle, variability on time scales from diurnal to decadal, intensity and frequency distributions, extremes, and drought. Expansion of these basic characteristics is envisioned through a tiered system that includes a wider range of quantitative measures providing significantly more detail than the basic metrics. All metrics and diagnostics are designed to be applied to a common set of simulations from the current phase of the Coupled Model Intercomparison Project (CMIP6).
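The workshop summary does not prescribe a specific implementation of these baseline metrics, but the general idea of comparing a simulated field against an observational target can be illustrated with a minimal sketch. The example below computes an area-weighted RMSE of a model's annual-mean precipitation climatology against an observational reference; the file paths, variable names (pr, precip), unit conversion, and the choice of RMSE as the measure are illustrative assumptions, not the workshop's specification.

```python
# Minimal sketch of a baseline mean-state precipitation metric:
# area-weighted RMSE of a model's time-mean precipitation against an
# observational reference, assuming both are already on a common lat/lon grid.
import numpy as np
import xarray as xr

def annual_mean(da: xr.DataArray) -> xr.DataArray:
    """Time-mean climatology of a monthly precipitation field."""
    return da.mean(dim="time")

def area_weighted_rmse(model: xr.DataArray, obs: xr.DataArray) -> float:
    """Global, area-weighted RMSE; cos(lat) approximates grid-cell area."""
    weights = np.cos(np.deg2rad(model["lat"]))
    sq_err = (model - obs) ** 2
    return float(np.sqrt(sq_err.weighted(weights).mean(dim=("lat", "lon"))))

# Illustrative usage (file paths and variable names are placeholders):
# model_pr = xr.open_dataset("cmip6_model_pr.nc")["pr"] * 86400  # kg m-2 s-1 -> mm/day
# obs_pr = xr.open_dataset("obs_precip.nc")["precip"]            # mm/day
# score = area_weighted_rmse(annual_mean(model_pr), annual_mean(obs_pr))
# print(f"Mean-state precipitation RMSE: {score:.2f} mm/day")
```

A single scalar like this supports the benchmarking use case described above: it provides a quantitative distance from an observational target that can be tracked across model versions and generations.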

While the primary function of the baseline metrics is to benchmark model simulations of precipitation for documenting model performance and improvements over time, precipitation metrics are also useful for a broader community of researchers and stakeholders. The second part of the workshop focused on developing a more diagnostics-oriented set of precipitation metrics, which were referred to as “exploratory.” These exploratory metrics target critical precipitation-related characteristics and processes that are actively being researched but to date lack widely adopted measures for established benchmarking. They can be useful for model developers in guiding model development, for Earth system scientists investigating precipitation variability and change, and for researchers and stakeholders interested in specific aspects of precipitation relevant to their applications. Exploratory metrics were grouped into three types according to their functions and characteristics: process-oriented metrics, regime-oriented metrics, and use-inspired metrics. Over time, these exploratory metrics may inform further baseline metrics that can be included in the set of benchmarks. The group plans to move forward by incorporating the initial set of benchmarks into a common analysis framework and applying it to CMIP6 and earlier generations of climate models, and in parallel continuing to develop exploratory metrics. Ultimately, this effort aims to provide a guide to modelers as they strive to improve simulated precipitation.

Pendergrass, Angeline, Peter J. Gleckler, L. Ruby Leung, and Christian Jakob. 2020. “Benchmarking Simulated Precipitation in Earth System Models.” Bulletin of the American Meteorological Society 101: E814–E816. doi:10.1175/BAMS-D-19-0318.1.