- Title
Verification in the presence of observation errors: Bayesian point of view.
- Authors
Duc, Le; Saito, Kazuo
- Abstract
Verification in the presence of observation errors is approached from the Bayesian point of view. Like data assimilation (DA), Bayesian verification is shown to have a robust foundation established by Bayesian inference. Together, DA and Bayesian verification form two different levels of Bayesian inference. Evaluation of a model is equivalent to inference on the plausibility of this model given observations. Relative performances between different models are measured by ratios of posterior plausibilities, which become ratios of likelihoods in the case of no prior information. These ratios are called the Bayes factors and are the standard verification method in Bayesian model comparison. Since verification scores are used intensively in numerical weather prediction, verification scores derived from likelihoods are proposed to replace the Bayes factors in Bayesian verification. Under the two requirements that verification scores be both strictly proper and local, the logarithm score, i.e. the log‐likelihood, and its linear transformations are shown to be the unique class. Log‐likelihoods in Bayesian verification are determined by the form of the forecast probability distributions from models. The empirical form is preferable because of its flexibility in incorporating not only observation errors but also other uncertainties in observation biases or observation error variances into the calculation, yielding closed forms for log‐likelihoods. When applied to observations with Gaussian errors, the logarithm score induces the weighted mean‐squared error, which is non‐dimensional and can be used for both univariate and multivariate observations. The most interesting application of Bayesian verification is to offer a new explanation for rank histograms and to quantify the flatness of rank histograms by a metric, which turns out to be the Kullback–Leibler divergence between the rank distribution observed in reality and a uniform rank distribution.
It is worth noting that these two very different metrics both derive from the logarithm score.
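The two metrics named in the abstract can be sketched in a few lines of code. The block below is a minimal illustration, not the authors' implementation: `weighted_mse` computes the quadratic form induced by the logarithm score for Gaussian observation errors (the normalization by the observation dimension is an assumption here; conventions vary), and `rank_histogram_flatness` computes the Kullback–Leibler divergence between an empirical rank distribution and the uniform distribution, which is zero exactly when the rank histogram is flat.

```python
import numpy as np

def weighted_mse(y, x, r_inv):
    """Weighted mean-squared error induced by the logarithm score for
    Gaussian observation errors: (y - x)^T R^{-1} (y - x) / dim.
    Non-dimensional, valid for univariate and multivariate observations.
    Dividing by the dimension is an illustrative choice, not from the paper."""
    d = np.asarray(y, dtype=float) - np.asarray(x, dtype=float)
    return float(d @ np.asarray(r_inv, dtype=float) @ d) / d.size

def rank_histogram_flatness(ranks, n_bins):
    """KL divergence between the empirical rank distribution and the
    uniform rank distribution. Returns 0 for a perfectly flat histogram;
    larger values indicate stronger departure from flatness."""
    counts = np.bincount(np.asarray(ranks, dtype=int), minlength=n_bins)
    p = counts / counts.sum()          # observed rank probabilities
    q = 1.0 / n_bins                   # uniform rank probability
    mask = p > 0                       # 0 * log 0 = 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q)))
```

For example, ranks spread evenly over all bins give a flatness metric of zero, while ranks concentrated in a single bin (a severely under-dispersed ensemble) give log(n_bins).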
- Subjects
WEATHER forecasting; HYPERGEOMETRIC series; CLIMATOLOGY; GAUSSIAN distribution; HISTOGRAMS
- Publication
Quarterly Journal of the Royal Meteorological Society, 2018, Vol 144, Issue 713, p1063
- ISSN
0035-9009
- Publication type
Article
- DOI
10.1002/qj.3275