Affiliation:
1. Barcelona Supercomputing Center, Barcelona, Spain
2. Barcelona Supercomputing Center, and Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
Abstract
Subseasonal predictions bridge the gap between medium-range weather forecasts and seasonal climate predictions. This time scale is crucial for operations and planning in many sectors, such as energy and agriculture. For users to trust these predictions and efficiently make use of them in decision-making, the quality of predicted near-surface parameters needs to be systematically assessed. However, the method to follow in a probabilistic evaluation of subseasonal predictions is not trivial. This study aims to illustrate the impact that the verification setup can have on the calculated skill scores, thus providing guidelines for subseasonal forecast evaluation. To this end, several forecast verification setups to calculate the fair ranked probability skill score (RPSS) for tercile categories have been designed. These setups use different numbers of samples to compute the fair RPSS, as well as different ways to define the climatology, characterized by different averaging periods (week or month). The setups have been tested by evaluating 2-m temperature in ECMWF-Ext-ENS 20-yr hindcasts for all of the initializations in 2016 against the ERA-Interim reanalysis, and the implications of each setup for the skill score values are analyzed. Results show that several start dates need to be employed to obtain a robust skill score. It is also shown that a constant monthly climatology over each calendar month may introduce spurious skill associated with the seasonal cycle. A weekly climatology yields similar results to a monthly running-window climatology; however, the latter provides a better reference climatology when bias adjustment is applied.
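As a minimal sketch of the verification measure discussed in the abstract, the fair ranked probability score of Ferro (2014) for category forecasts, and the resulting skill score against a climatological reference, could be computed as below. The function names, array shapes, and the two-function layout are illustrative, not the authors' implementation:

```python
import numpy as np

def fair_rps(ens_counts, obs_cat, m):
    """Fair RPS (Ferro 2014) for a single forecast.

    ens_counts : array of length K with the number of ensemble members
                 falling in each category (e.g. K = 3 for terciles);
                 entries sum to the ensemble size m.
    obs_cat    : index (0..K-1) of the category observed to occur.
    m          : ensemble size; the fair correction needs m >= 2.
    """
    K = len(ens_counts)
    Ek = np.cumsum(ens_counts)[:-1]            # cumulative member counts, k = 1..K-1
    Ok = np.arange(K - 1) >= obs_cat           # cumulative observation indicator
    # Second term removes the expected penalty from the finite ensemble size,
    # making the score "fair" to ensembles interpreted as random samples.
    return np.sum((Ek / m - Ok) ** 2 - Ek * (m - Ek) / (m ** 2 * (m - 1)))

def fair_rpss(rps_forecast, rps_reference):
    """Skill score: 1 - <RPS_forecast> / <RPS_reference>, averaged over
    the verification sample (e.g. start dates and hindcast years)."""
    return 1.0 - np.mean(rps_forecast) / np.mean(rps_reference)
```

A forecast placing all members in the observed tercile scores a fair RPS of 0; the RPSS is positive when the forecast's mean fair RPS beats that of the chosen climatological reference, which is exactly where the choice of climatology (weekly, calendar-monthly, or monthly running window) enters.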
Publisher
American Meteorological Society
Cited by
18 articles.