With growing global investment in climate modeling systems and Earth observing networks, the wealth of climate data has expanded rapidly over the past several decades. While these data are essential to a variety of users, including model developers, scientists, and stakeholders, ascertaining data quality and understanding the reasons for model deficiencies remain outstanding challenges. Although tools for evaluating available datasets have grown in number, disparate development efforts have produced specialized software packages that are difficult to employ outside of a single research group. To this end, projects such as the Coordinated Model Evaluation Capabilities (CMEC) and the Model Diagnostics Task Force (MDTF) are advancing standards and building the connective framework needed for truly comprehensive model evaluation. We argue that these efforts are important for identifying connections across research fields and are ultimately needed to make progress in understanding and improving climate model performance.
We discuss our work on a multi-pronged effort to advance comprehensive model evaluation capabilities: first, establishing robust standards for developing metrics and diagnostics packages; second, developing accompanying tools for the coordinated execution of metrics packages and for visualizing and interacting with their output; and third, building connections across research groups.