Developing Metrics to Evaluate the Skill and Credibility of Downscaling

The climate science community has developed a variety of dynamical and statistical techniques to downscale relatively coarse-resolution reanalysis or global climate model (GCM) output. Dynamical downscaling frameworks vary in their dynamical cores, grid-refinement strategies, and ability to influence the coarser-scale solution. Statistical models vary in the large-scale predictor variables they employ and in the form of the relationships linking those predictors to fine-scale predictands. Unfortunately, the relative strengths and weaknesses of these techniques are poorly understood. There are also reasons to question their ability to recover fine-scale data accurately. For dynamical downscaling, these include: lack of conservation of energy, water, and momentum; absence of scale interaction; and parameterizations derived from GCMs that may be ill-suited for higher resolution modeling. For statistical modeling, these reasons include: the subjectivity inherent in developing statistical relationships; difficulties capturing statistical properties of fine-scale fields such as spatial coherence, temporal variability, and extremes; and, perhaps most crucially for climate change, the "stationarity assumption": that statistical relationships derived from training on the historical record hold unchanged as the climate changes.
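To make the stationarity assumption concrete, the following minimal Python sketch (illustrative only, not project code) trains a linear transfer function between a coarse GCM predictor and a fine-scale predictand on a synthetic historical record, then applies the fitted coefficients unchanged to warmer future output. All variable names and data here are hypothetical.

```python
# Minimal sketch of the stationarity assumption in statistical downscaling.
# Synthetic data stand in for a coarse GCM grid-cell series (predictor)
# and a co-located fine-scale station series (predictand).
import numpy as np

rng = np.random.default_rng(0)

# Historical training period (deg C).
coarse_hist = rng.normal(15.0, 3.0, size=1000)
fine_hist = 0.8 * coarse_hist + 2.0 + rng.normal(0.0, 1.0, size=1000)

# Fit the statistical transfer function on the historical record.
slope, intercept = np.polyfit(coarse_hist, fine_hist, deg=1)

# Future coarse-scale output (here, a simple uniform warming shift).
coarse_future = coarse_hist + 2.5

# The stationarity assumption: the historically trained (slope, intercept)
# are assumed to remain valid under the changed climate.
fine_future = slope * coarse_future + intercept
print(f"downscaled future mean: {fine_future.mean():.2f} deg C")
```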

Against this background of confusion about the quality of downscaled data, the scientific community is under tremendous pressure to produce fine-scale information suitable for climate change impacts assessment. Meanwhile, users typically lack the scientific background to evaluate the information’s quality themselves. As a result, important decisions relating to climate change impacts are informed by data whose quality has not been vetted by the scientific community and is only loosely evaluated by users. We will address this problem by developing metrics for a systematic and definitive evaluation of the performance of downscaling techniques.

Leveraging previously funded projects at each institution, we have assembled a team with complementary expertise in the development and application of dynamical and statistical techniques. We will draw on a number of existing data sets covering the conterminous U.S. derived from our previous downscaling efforts, and generate new data sets to support specific project aims. We will quantify and compare the abilities of downscaling techniques to reproduce the country’s historical climate record, with an eye toward evaluating the effects of recent advances in dynamical and statistical downscaling on data quality. We will also quantify and investigate the techniques’ differences in projecting future U.S. climate change. Through a carefully designed exercise meant to produce an “apples to apples” comparison of statistically and dynamically derived climate change data, we will test the stationarity assumption of statistical downscaling. Finally, to determine the relevance of differences across downscaling techniques for a key application of climate data, we will force a hydrologic model widely used for climate change impacts assessment with the various downscaled products, and quantify the differences in outcomes. The project’s payoff will be high. At its end, we will have a quantitative understanding of the value added by downscaling techniques, and will be in a position to advance these techniques to make them more skillful and credible. To impart this understanding to the community, we will produce a review article that will be the definitive resource for best practices in climate downscaling.
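As one illustration of the kind of historical-record evaluation described above, the short Python sketch below computes a few simple skill measures for a downscaled series against observations: mean bias, root-mean-square error, and an upper-tail quantile error as a crude check on extremes. The data and the metric choices are assumptions for illustration, not the project’s metric suite.

```python
# Sketch of evaluating a downscaled product against observations using
# simple skill measures. Arrays are synthetic placeholders for
# co-located daily precipitation series.
import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=3.0, size=3650)  # observed daily precip (mm)
# An imperfect downscaled product: multiplicative noise around the truth.
downscaled = obs * rng.normal(1.05, 0.25, size=obs.size).clip(0)

bias = downscaled.mean() - obs.mean()
rmse = np.sqrt(np.mean((downscaled - obs) ** 2))
p95_err = np.percentile(downscaled, 95) - np.percentile(obs, 95)

print(f"mean bias:      {bias:+.2f} mm/day")
print(f"RMSE:           {rmse:.2f} mm/day")
print(f"95th-pct error: {p95_err:+.2f} mm/day")
```

A realistic evaluation would compute such statistics per station or grid cell and per season before aggregating; the scalar versions above only illustrate the form of the comparison.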
