Pinpointing the culprits responsible for errors in large-scale climate models can take a mathematical magnifier. To identify these transgressors, scientists at Pacific Northwest National Laboratory, Sandia National Laboratories, and the University of Michigan developed a novel technique that efficiently measures and identifies time-resolution errors in the most complex weather and climate models. Applying the new method to the Community Atmosphere Model version 5 (CAM5) revealed that the primary source of time-resolution error lies in the calculations that represent stratiform clouds, those ubiquitous and dreary-day rain-makers.
“Until now, the lack of effective methods to analyze the time resolution problem has prevented thorough investigation of this type of model sensitivity,” said Dr. Hui Wan, atmospheric scientist at PNNL and lead author of the paper. “Our technique has made it possible to understand certain complex interactions between model components and identify the root cause of the numerical artifacts.”
With the new technique, a time step convergence test, the researchers conducted multiple short simulations with CAM5 in which the model's calculation frequency was varied from the default time step of 30 minutes down to as short as 1 second. Time-resolution errors were diagnosed against the simulation with the most frequent calculations, that is, the shortest time step. The same test was then repeated with various model components evaluated in isolation. The team identified the major sources of time-integration error by comparing the absolute errors associated with the different model components, as well as how those errors depended on the time-step length.
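The logic of such a convergence test can be sketched with a toy problem. The sketch below is illustrative only: a forward-Euler decay equation stands in for an actual climate model component, and all names are hypothetical. It runs the same short "simulation" with the time step halved repeatedly, diagnoses each run's error against the finest-step run, and estimates the convergence rate from successive error ratios:

```python
import numpy as np

def run_model(dt, t_end=1.0, k=3.0, y0=1.0):
    """Toy 'model': exponential decay dy/dt = -k*y advanced with forward
    Euler. A stand-in for one model component whose time truncation
    error we want to probe."""
    y = y0
    for _ in range(int(round(t_end / dt))):
        y += dt * (-k * y)
    return y

# Time step convergence test (sketch): repeat the short simulation with
# the step halved each time, from coarsest to finest.
dts = [0.1 / 2**i for i in range(8)]
solutions = [run_model(dt) for dt in dts]

# Diagnose errors against the run with the most frequent calculations.
reference = solutions[-1]
errors = [abs(s - reference) for s in solutions[:-1]]

# Estimate the convergence rate: for a first-order scheme, the error
# should roughly halve when the step is halved, i.e. rates near 1.
rates = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
```

Repeating this test with individual components isolated, and comparing both the size of each component's error and how quickly it shrinks as the step is refined, flags the components whose errors dominate or converge more slowly than expected.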
Today’s computers can represent weather and climate only in finite, often large, increments of time. Simplifications and judicious approximations serve as “stand-ins” to ease the burden on calculation time. But, compared to reality, these stand-ins are inevitably flawed, and how frequently the stand-in calculations take place can make a real difference to the accuracy of the results, a problem well known to climate modelers. Previous work has demonstrated that the calculation frequency alone could change the estimated global-mean surface temperature increase caused by a doubling of carbon dioxide concentration by a factor of two (a scenario often tested by those exploring future climate possibilities). This study offers a simple and effective strategy, a time step convergence test, to identify the model components that are responsible for the missteps.
The authors thank Balwinder Singh, Heng Xiao, Kai Zhang, Jin-ho Yoon, Minghuai Wang, and Po-Lun Ma for valuable discussions. The two anonymous reviewers are thanked for their comments and suggestions. H. Wan acknowledges support from the Linus Pauling Distinguished Postdoctoral Fellowship of the Pacific Northwest National Laboratory (PNNL) and the PNNL Laboratory Directed Research and Development Program. PNNL is operated by Battelle Memorial Institute for the U.S. Department of Energy (DOE) under contract DE-AC05-76RL01830. P. J. Rasch was supported by the DOE Office of Science as part of the Scientific Discovery through Advanced Computing (SciDAC) Program. M. A. Taylor was supported by the DOE Office of Biological and Environmental Research, work package 11-014996, “Climate Science for a Sustainable Energy Future”. C. Jablonowski was supported by the DOE Office of Science SciDAC award DE-SC0006684. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of DOE under contract DE-AC05-00OR22725. The allocation was awarded under DOE’s ASCR Leadership Computing Challenge (ALCC) program in support of the Aerosol Clouds and Precipitation Scientific Focus Area of the DOE Earth System Modeling Program. Data discussed in the paper are available upon request from the corresponding author.