
Publication Date
27 January 2022

Evaluating Climate Models’ Cloud Feedbacks Against Expert Judgment

Subtitle
The science is now mature enough that we can evaluate cloud feedbacks with ground-truth values determined from independent lines of evidence.
Science

Lawrence Livermore National Laboratory scientists have evaluated how well models simulate cloud feedbacks that agree with those determined by expert assessment of observational, theoretical, and high-resolution modeling studies. They also determined the extent to which simulating current-climate cloud properties in close agreement with observations leads to better prediction of cloud feedbacks. Finally, they provide a code base that modeling groups can use to perform these diagnostics, facilitating close evaluation of the primary mechanism determining model sensitivity.

Impact

For decades, climate models have disagreed markedly about how much warming will occur in response to increasing carbon dioxide. This wide disagreement is rooted in uncertainty about how clouds will respond to warming, known as the cloud feedback. Therefore, identifying where models fall short in simulating individual cloud feedbacks is a crucial step toward improving the reliability of their future climate predictions. In general, models with erroneously large overall cloud feedbacks (and hence too-high climate sensitivity) tend to have several individual cloud feedback components that are erroneously large; no single component is the culprit. It is also shown that simulating better clouds in the current climate does not guarantee that a model will simulate cloud feedbacks in closer agreement with expert judgment, though simulating poor current-climate clouds tends to preclude simulating realistic feedbacks.

Summary

The persistent and growing spread in effective climate sensitivity (ECS) across global climate models necessitates rigorous evaluation of their cloud feedbacks. Here we evaluate several cloud feedback components simulated in 19 climate models against benchmark values determined via an expert synthesis of observational, theoretical, and high-resolution modeling studies. We find that models with the smallest feedback errors relative to these benchmark values generally have moderate total cloud feedbacks (0.4–0.6 W m⁻² K⁻¹) and ECS (3–4 K). Those with the largest errors generally have total cloud feedback and ECS values that are too large or too small. Models tend to achieve large positive total cloud feedbacks by having several cloud feedback components that are systematically biased high rather than by having a single anomalously large component, and vice versa. In general, better simulation of mean-state cloud properties leads to stronger but not necessarily better cloud feedbacks. The Python code base provided herein could be applied to developmental versions of models to assess cloud feedbacks and cloud errors and place them in the context of other models and of expert judgment in real time during model development.
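The kind of diagnostic described above can be illustrated with a minimal sketch: score a model's individual cloud feedback components against expert-assessed benchmark values using a root-mean-square error. Note that the component names and all numerical values below are placeholders for illustration, not the actual benchmarks or the paper's code base.

```python
# Hypothetical sketch of scoring cloud feedback components against
# expert-assessed benchmarks. All numbers are illustrative placeholders,
# not values from the study.
import math

# Expert-synthesis benchmark feedback values (W m-2 K-1) -- placeholders.
EXPERT = {
    "high_cloud_altitude": 0.20,
    "tropical_marine_low_cloud": 0.25,
    "land_cloud_amount": 0.08,
    "midlatitude_marine_low_cloud": 0.12,
    "high_latitude_optical_depth": 0.00,
}

def feedback_rmse(model_feedbacks, expert=EXPERT):
    """Root-mean-square error of a model's cloud feedback components
    relative to the expert benchmark values (W m-2 K-1)."""
    sq_errs = [(model_feedbacks[k] - expert[k]) ** 2 for k in expert]
    return math.sqrt(sum(sq_errs) / len(sq_errs))

# Example: a model whose components are each biased high by 0.1 W m-2 K-1,
# mirroring the finding that several components tend to be biased together.
model = {k: v + 0.10 for k, v in EXPERT.items()}
print(round(feedback_rmse(model), 3))  # -> 0.1
```

A per-component error breakdown, rather than a single RMSE, would show whether a model's bias comes from one component or from several biased in the same direction, which is the distinction the study emphasizes.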

Point of Contact
Stephen Klein
Institution(s)
Lawrence Livermore National Laboratory (LLNL)