
A verification framework for interannual-to-decadal predictions experiments

Solomon, Amy B.; Kumar, A.; Goddard, Lisa M.; Smith, Doug J.; Boer, George J.; Gonzalez, Pablo Salvador; Kharin, Viatcheslav V.; Merryfield, William J.; Deser, Clara; Mason, Simon J.; Kirtman, Ben P.; Msadek, Rym; Sutton, Rowan T.; Hawkins, Ed D.; Fricker, Thomas E.; Hegerl, Gabriele C.; Stephenson, David B.; Ferro, Christopher A. T.; Meehl, Gerald A.; Stockdale, Timothy N.; Burgman, R.; Greene, Arthur M.; Kushnir, Yochanan; Newman, Matthew J.; Carton, James A.; Fukumori, Ichiro; Delworth, Thomas L.

Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill, and there is no agreed protocol for estimating it. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to uninitialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts, comparing different ways of ascribing forecast uncertainty. Verification is advocated at smoothed regional scales, which can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on the skill of CMIP5 decadal hindcasts is not the aim of this paper; the results presented are only illustrative of the framework, which would enable such studies.
However, broad conclusions that are beginning to emerge from the CMIP5 results include: (1) most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) the initial conditions add skill beyond that imparted by external forcing alone, though the addition is moderate, and in some regions initialization may yield overall worse predictions than uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point for comparing prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. It also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how best to approach forecast uncertainty.
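The deterministic comparison described above, initialized hindcasts versus uninitialized projections, is commonly expressed as a skill score based on mean squared error, with the uninitialized run serving as the reference forecast. The sketch below illustrates that general form only; the function name, the toy numbers, and the specific score are illustrative assumptions, not the paper's prescribed metric set.

```python
import numpy as np

def mse_skill_score(forecast, reference, obs):
    """Skill of `forecast` relative to `reference`, both verified against `obs`.
    1 = perfect; 0 = no better than the reference; negative = worse."""
    mse_f = np.mean((forecast - obs) ** 2)
    mse_r = np.mean((reference - obs) ** 2)
    return 1.0 - mse_f / mse_r

# Toy anomaly series (hypothetical values). The initialized ensemble mean
# tracks the observations more closely than the uninitialized, forcing-only
# series, so the skill score comes out positive.
obs           = np.array([0.10, 0.30, 0.20, 0.50, 0.40])
initialized   = np.array([0.15, 0.25, 0.25, 0.45, 0.35])
uninitialized = np.array([0.30, 0.30, 0.30, 0.30, 0.30])

print(round(mse_skill_score(initialized, uninitialized, obs), 3))  # prints 0.875
```

A negative value of such a score in a given region corresponds to conclusion (2) above: initialization degrading the prediction relative to the uninitialized projection.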

Also Published In

Title
Climate Dynamics
DOI
https://doi.org/10.1007/s00382-012-1481-2