Scientists often use computer-based mathematical models, such as multisector human-Earth systems models, to study future changes in energy use and supply, as well as their co-evolution with other systems. Most studies featuring these models employ scenarios to account for future uncertainty. Researchers developed a framework to evaluate and update the future energy system scenarios generated by these models. This approach consists of three parts: a qualitative description of future energy system changes, metrics to measure model outcomes, and evaluation criteria to compare narrative expectations against numerical model results. Visualization techniques that simultaneously display results for many regions and automatically flag anomalous results can streamline this evaluation process.
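The automatic flagging of anomalous regional results mentioned above can be illustrated with a minimal sketch. The region names, values, and the simple z-score rule below are all hypothetical stand-ins for whatever flagging logic a real evaluation workflow would use:

```python
# Illustrative sketch (not from the study): automatically flagging
# regional results that deviate strongly from the rest, so they stand
# out during scenario evaluation. All regions and values are hypothetical.

from statistics import mean, stdev

regional_results = {
    "Region A": 0.31, "Region B": 0.29, "Region C": 0.33,
    "Region D": 0.30, "Region E": 0.78,
}

values = list(regional_results.values())
mu, sigma = mean(values), stdev(values)

# Flag any region more than 1.5 standard deviations from the mean.
flagged = {region: value for region, value in regional_results.items()
           if abs(value - mu) > 1.5 * sigma}
print(flagged)  # regions needing a closer look
```

A plotting step would typically accompany this, coloring or annotating the flagged regions so an analyst can review them quickly.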
Evaluating energy system models and the scenarios they produce maintains their credibility as scientific tools. A structured evaluation framework helps scientists identify and correct model deficiencies, refine their understanding of future changes in the energy system, and highlight interesting exceptions to anticipated trends. The flexible framework can be applied to a variety of regions and sectors (e.g., electricity, transportation). Carefully evaluating model scenarios with this framework gives users more confidence in the analytical insights they generate.
Energy system futures are highly uncertain and dynamically evolving, often requiring modelers to make expert judgments that can influence model results. Researchers developed a three-part scenario evaluation framework to increase the credibility of model-based energy scenarios. First, a user develops a qualitative description of the system's evolution, which informs choices about model structures and parameters. Second, metrics quantify model outcomes that correspond to key elements of the scenario storyline. Finally, evaluation criteria judge consistency between the numerical model results (metrics) and the scenario storyline. Each criterion consists of a comparison point and a numerical threshold for deviation from that comparison point, informed by the changes detailed in the qualitative description.
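The structure of an evaluation criterion described above can be sketched in a few lines. This is an illustrative example, not code from the study; the metric name, comparison point, and threshold values are hypothetical:

```python
# Illustrative sketch (not from the study): an evaluation criterion pairs
# a comparison point (the value expected from the scenario storyline)
# with a numerical threshold for acceptable deviation from that point.

from dataclasses import dataclass

@dataclass
class Criterion:
    metric: str              # hypothetical metric name
    comparison_point: float  # expected value implied by the storyline
    threshold: float         # allowed absolute deviation

    def evaluate(self, model_result: float) -> bool:
        """Return True if the model result is consistent with the storyline."""
        return abs(model_result - self.comparison_point) <= self.threshold

# Hypothetical example: the storyline expects roughly a 40% share for
# some technology by 2050, with +/-10 percentage points judged acceptable.
criterion = Criterion("tech_share_2050", comparison_point=0.40, threshold=0.10)
print(criterion.evaluate(0.45))  # consistent with the storyline
print(criterion.evaluate(0.60))  # flagged as inconsistent
```

In practice, one criterion would be defined per metric, and the set of pass/fail results would guide which model structures or parameters to revisit.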
Researchers applied this new framework to the electric power sector reference scenario in GCAM-USA, a version of the multi-sector Global Change Analysis Model with state-level detail in the US. This application demonstrated that (1) having a well-defined set of criteria for measuring scenario performance makes evaluating model developments simpler and more rigorous; (2) model developments may improve scenario consistency for some metrics but decrease consistency for others; and (3) not all outlier results indicate poor scenario consistency.