tag:blogger.com,1999:blog-5822805028291837738.post3197311423359829270..comments2023-06-06T04:43:15.996-04:00Comments on Various Consequences: Bayesian Climate Model AveragingJoshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.comBlogger11125tag:blogger.com,1999:blog-5822805028291837738.post-66511048862456308942010-04-10T15:29:53.200-04:002010-04-10T15:29:53.200-04:00In Quantifying Uncertainty in Projections of Regio...In <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.76.1308&rep=rep1&type=pdf" rel="nofollow">Quantifying Uncertainty in Projections of Regional Climate Change: A Bayesian Approach to the Analysis of Multi-model Ensembles</a> they introduce fat-tail (Student's t) distributions to 'robustify' (really, that's the term they use) their modeling (<a href="http://j-stults.blogspot.com/2009/12/jaynes-on-outliers-and-robustness.html?showComment=1270913584275#c7258980328155912985" rel="nofollow">similar to the approach in this set of slides</a>).<br /> <br />I like this part of their conclusion:<br /><i>In contrast, we think that the Bayesian approach is not only flexible but facilitates an open debate on the assumptions that generate probabilistic forecasts.</i><br />Making the assumptions explicit is a big step towards <a href="http://j-stults.blogspot.com/2009/11/converging-and-diverging-views.html" rel="nofollow">productive discussion and consensus building</a>.Joshua Stultshttps://www.blogger.com/profile/03506970399027046387noreply@blogger.comtag:blogger.com,1999:blog-5822805028291837738.post-79953352397829724982010-03-14T14:25:25.008-04:002010-03-14T14:25:25.008-04:00James Annan has a short comment [pdf] in press, he...<a href="http://www.jamstec.go.jp/frcgc/research/d5/jdannan/" rel="nofollow">James Annan</a> has <a href="http://www.jamstec.go.jp/frcgc/research/d5/jdannan/wires_revised.pdf" rel="nofollow">a short comment</a> [pdf] in press, here's an interesting paragraph:<br /><i>Min and Hense [2006] suggest 
another alternative to the reporting of probabilities, explicitly treating the issue as a decision problem in which the expected loss is to be minimised and thus emphasising the close link between Bayesian probability and decision theory. The companion paper Min and Hense [2007] considers the issue of D&A on a regional and seasonal basis. Uncertainties are relatively higher at smaller scales, and moreover it is on a local basis that climate change will actually impact the environment. Therefore, this area of research is likely to remain important long after the main questions of climate change on the global scale are considered settled.</i><br /><br /><a href="http://www.easterbrook.ca/steve/?p=1531#comment-1916" rel="nofollow">Decision theory is the way to go</a>; vague arguments for action based on hand-wavy applications of the precautionary principle are sub-optimal and generally <a href="http://en.wikipedia.org/wiki/Coherence_%28statistics%29" rel="nofollow">incoherent</a>.Joshua Stultshttps://www.blogger.com/profile/03506970399027046387noreply@blogger.comtag:blogger.com,1999:blog-5822805028291837738.post-24896984796762826102010-02-17T13:22:31.037-05:002010-02-17T13:22:31.037-05:00A Bayesian Framework for Multimodel Regression
Abs...<a href="http://ams.allenpress.com/perlserv/?request=get-document&doi=10.1175%2FJCLI4179.1&ct=1&SESSID=f9b7659563807d314d02d7a97409d71f" rel="nofollow">A Bayesian Framework for Multimodel Regression</a><br /><b>Abstract</b>:<br /><i>This paper presents a framework based on Bayesian regression and constrained least squares methods for incorporating prior beliefs in a linear regression problem. Prior beliefs are essential in regression theory when the number of predictors is not a small fraction of the sample size, a situation that leads to overfitting—that is, to fitting variability due to sampling errors. Under suitable assumptions, both the Bayesian estimate and the constrained least squares solution reduce to standard ridge regression. New generalizations of ridge regression based on priors relevant to multimodel combinations also are presented. In all cases, the strength of the prior is measured by a parameter called the ridge parameter. A “two-deep” cross-validation procedure is used to select the optimal ridge parameter and estimate the prediction error. <br /><br />The proposed regression estimates are tested on the Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction (DEMETER) hindcasts of seasonal mean 2-m temperature over land. Surprisingly, none of the regression models proposed here can consistently beat the skill of a simple multimodel mean, despite the fact that one of the regression models recovers the multimodel mean in a suitable limit. This discrepancy arises from the fact that methods employed to select the ridge parameter are themselves sensitive to sampling errors. It is plausible that incorporating the prior belief that regression parameters are “large scale” can reduce overfitting and result in improved performance relative to the multimodel mean. 
Despite this, results from the multimodel mean demonstrate that seasonal mean 2-m temperature is predictable for at least three months in several regions.<br /></i><br /><br />So, not really a win for Bayes Model Averaging (BMA) for climate prediction in this one. It is a good example of how even <a href="http://j-stults.blogspot.com/2010/02/predictions-and-entropy-in-ensembles.html" rel="nofollow">climate prediction can still be an IVP-type problem</a>, which depends on the accuracy of the initialization. <br /><br />It is also a good illustration of Jaynes' claim that properly applied probability theory (Bayesian) does away with the multitude of ad-hoceries in the standard statistical toolbox:<br /><i>The purpose of this paper is to clarify the fact that a wide variety of methods for reducing overfitting in linear regression problems, including many of those mentioned above, can be interpreted in a single Bayesian framework. Bayesian theory allows one to incorporate “prior knowledge” in the estimation process.</i>Joshua Stultshttps://www.blogger.com/profile/03506970399027046387noreply@blogger.comtag:blogger.com,1999:blog-5822805028291837738.post-47985490493021964812009-12-25T19:00:10.420-05:002009-12-25T19:00:10.420-05:00Cross-validation and proper physical interpretatio...Cross-validation and proper physical interpretation of a complex hierarchical model / inference is hard. <br /><br /><a href="http://ba.stat.cmu.edu/journal/2008/vol03/issue01/sanso.pdf" rel="nofollow">Inferring Climate System Properties Using a Computer Model</a><br /><br /><a href="http://ba.stat.cmu.edu/journal/2008/vol03/issue01/rougier.pdf" rel="nofollow">Comment on article by Sanso et al.</a>:<br />"But GCM natural variability is a property of the GCM: it does not proxy the difference between the GCM and the climate system. 
In climate science this has been appreciated and discussed, but only recently has there been a genuine effort to determine a variance for the model structural error that is not based on internal variability (<a href="http://rsta.royalsocietypublishing.org/content/365/1857/1993.abstract" rel="nofollow">Murphy et al. 2007</a>)."<br /><br />"Sanso et al. present us with diagnostics based on holding-out 43 of the 426 evaluations from Y, and then predicting the model response on the hold-out and comparing it with the actual values. However, I suspect that there is plenty of information about MIT2DCM from the 383 evaluations that remain in Y. The experimental design for Y was a multi-level grid, so the evaluations that remain will almost certainly still do a good job of spanning the three-dimensional model-parameter space. Therefore I am not surprised that the diagnostics show that the hold-out sample is predicted well, but I am not sure that this tells us much about the statistical model for θ∗, W, Y, z, or the reliability of Sanso et al.’s conclusions about the updated distribution for θ∗: the verdict on the statistical model from the evidence in the paper is ‘unproven’."<br /><br />"I particularly commend the use of a statistical model to link model evaluations, model parameters, and system observations. This, and the inclusion of an explicit term for model structural error, are major steps forward for Climate Science."Joshua Stultshttps://www.blogger.com/profile/03506970399027046387noreply@blogger.comtag:blogger.com,1999:blog-5822805028291837738.post-20196881864444037862009-12-25T16:11:03.452-05:002009-12-25T16:11:03.452-05:00But really, how large and dynamic is the systemati...<i>But really, how large and dynamic is the systematic error? 
Believing it large or small seems here a matter of faith -- a prior.</i><br /><br /><a href="http://www.springerlink.com/content/cg44458n5054k5n8/" rel="nofollow">Here's a paper</a> that treats the bias problem that way (from the abstract):<br />"[...] In addition, unlike previous studies, our methodology explicitly considers model biases that are allowed to be time-dependent (i.e. change between control and scenario period). More specifically, the model considers additive and multiplicative model biases for each RCM and introduces two plausible assumptions (‘‘constant bias’’ and ‘‘constant relationship’’) about extrapolating the biases from the control to the scenario period. The resulting identifiability problem is resolved by using informative priors for the bias changes. A sensitivity analysis illustrates the role of the informative prior. [...] Our results show the necessity to consider potential bias changes when projecting climate under an emission scenario. Further work is needed to determine how bias information can be exploited for this task."Joshua Stultshttps://www.blogger.com/profile/03506970399027046387noreply@blogger.comtag:blogger.com,1999:blog-5822805028291837738.post-20755310652603884682009-12-25T15:43:41.188-05:002009-12-25T15:43:41.188-05:00By assuming the absence of significant systematic ...<i>By assuming the absence of significant systematic error are we not, in effect, assuming validation?</i><br /><br />That's my concern.<br /><br /><i>... how large and dynamic? Believing it large or small seems here a matter of faith...</i><br /><br />Not quite, <a href="http://www.leif.org/EOS/2009GL038082.pdf" rel="nofollow">this paper</a> that I linked in the post seems to indicate that the systematic bias is significant. 
Unfortunately, since we <i>can't</i> (or are too impatient to) do validation testing for climate models like we normally would for numerical weather prediction, or CFD, or [pick your simulation], we can't estimate the sign or magnitude of the bias (and then of course control for it in our new and improved model).<br /><br /><i>...since Bayesian priors can be anything...</i><br /><br />I think that's why they chose uninformative priors: they want to avoid criticism that they are 'cooking the books'. <br /><br /><i>...the climate modelers basically behave as a "herd"...all used the same historical climate data...</i><br /><br />That was one of the validation problems identified in that <a href="http://www.inscc.utah.edu/%7Ejkim/publications/papers/Reichler_and_Kim_07_BAMS_CMIP.pdf" rel="nofollow">Reichler and Kim 2007</a> paper. <br /><br /><i>...a basic consistency among the various models will only strengthen this prior. Not weaken it.<br /><br />What am I missing here?</i><br /><br />I don't think you are missing anything; the cross-validation approach still doesn't protect us from fooling ourselves the way real empirical validation would (the similarity of terminology is unfortunate, too, because the two 'validations' are not the same thing at all).<br /><br />Also, that 'herd' behaviour and 'spread-skill' relationship (less spread means better predictions, more spread means worse predictions) is exhibited by the weather prediction ensembles, but the model that tends to perform well on the training set changes as the training set moves forward in time (and the optimal length of the training set changes based on the thing you are trying to forecast and how far you are trying to forecast). The BMA approach works well there because we have a chance to close the loop with new observations every day (and gradually change the weights we give to each model). 
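That daily loop-closing can be sketched as a toy Bayesian update of the BMA weights (a minimal sketch only; the three models, their forecasts, and the common Gaussian forecast error with sigma = 1.0 are made-up illustrations, not values from any of the papers cited here):

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian density, standing in for a member's predictive PDF."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def update_weights(weights, forecasts, sigma, observed):
    """One Bayesian update: multiply each model's weight by the likelihood
    it assigned to the verifying observation, then renormalize."""
    posterior = [w * normal_pdf(observed, f, sigma) for w, f in zip(weights, forecasts)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Three hypothetical models start with equal weight; each "day" an
# observation verifies the forecasts, and the weights drift toward
# the model that has been tracking reality best.
weights = [1.0 / 3.0] * 3
for forecasts, obs in [([14.0, 15.5, 17.0], 15.2),
                       ([13.5, 15.0, 16.5], 15.1)]:
    weights = update_weights(weights, forecasts, 1.0, obs)
```

After two updates the second model, whose forecasts sit closest to the observations on both days, carries most of the weight; without new observations to condition on, the weights never move, which is the waiting-to-close-the-loop problem for climate-scale forecasts.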
<br /><br />I think it's still applicable to climate model forecasting, but I don't think we have the political will to do validation because we have to wait much longer to close the loop. Unfortunately, calling for a decade or two of climate forecast validation and tying policy decisions to gradual changes over decades aren't exactly compatible with urgent calls for decisive action (even if they are compatible with rational decision making; after all, we're talking about a process with time-scales on the order of decades, centuries, and millennia, right?).Joshua Stultshttps://www.blogger.com/profile/03506970399027046387noreply@blogger.comtag:blogger.com,1999:blog-5822805028291837738.post-36645474197285290322009-12-24T23:58:11.458-05:002009-12-24T23:58:11.458-05:00By assuming the absence of significant systematic ...By assuming the absence of significant systematic error are we not, in effect, assuming validation? That is, it seems that by ignoring the need to consider systematic error we then get to ignore the need for validation.<br /><br />Rather we get this concept of "cross-validation." A process that seems to involve taking a collection of unvalidated models and using their output to validate some other unvalidated model. (Repeat until all unvalidated models are validated.)<br /><br />But really, how large and dynamic is the systematic error? Believing it large or small seems here a matter of faith -- a prior. <br /><br />It is almost like using an ensemble of religions to "cross-validate" a belief in the existence of God. And using it to counter the "argument sometimes raised by so-called [religious] skeptics [...] that disagreements among existing [religions] are sufficient reason to doubt the correctness of any of their conclusions."<br /><br />Also, since Bayesian priors can be anything, this affects the outcome of conditioning. 
If my prior is a near certain belief that the climate modelers basically behave as a "herd" (say, because they all used the same historical climate data for tuning their differing parameterizations, etc.) then a basic consistency among the various models will only strengthen this prior. Not weaken it.<br /><br />What am I missing here?<br /><br />GeorgeGeorge M. Crewshttps://www.blogger.com/profile/05795380097849494251noreply@blogger.comtag:blogger.com,1999:blog-5822805028291837738.post-20318245686453950632009-12-24T11:42:59.279-05:002009-12-24T11:42:59.279-05:00"There are of course some limitations to what..."There are of course some limitations to what these procedures can achieve. Although the different climate modeling groups are independent in the sense that they consist of disjoint groups of people, each developing their own computer code, <b>all the GCMs are based on similar physical assumptions and if there were systematic errors affecting future projections in all the GCMs, our procedures could not detect that</b>. On the other hand, another argument sometimes raised by so-called climate skeptics is that disagreements among existing GCMs are sufficient reason to doubt the correctness of any of their conclusions. 
The methods presented in this paper provide some counter to that argument, because we have shown that by making reasonable statistical assumptions, we can calculate a posterior density that captures the variability among all the models, but that <b>still results in posterior-predictive intervals that are narrow enough to draw meaningful conclusions about probabilities of future climate change</b>."<br /><br />From <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142.7869&rep=rep1&type=pdf" rel="nofollow">Bayesian modeling of uncertainty in ensembles of climate models</a><br /><br />In other words, the predictive distributions are informative (it's not just a uniform distribution), but we still can't protect ourselves from systematic bias (which the results cited in the post above seem to indicate). This unquantified risk to decision making is the fundamental problem that lack of model validation admits.Joshua Stultshttps://www.blogger.com/profile/03506970399027046387noreply@blogger.comtag:blogger.com,1999:blog-5822805028291837738.post-15401287839313391042009-12-24T10:36:16.288-05:002009-12-24T10:36:16.288-05:00"A difficulty with this kind of Bayesian anal..."A difficulty with this kind of Bayesian analysis is how to validate the statistical assumptions. <b>Of course, direct validation based on future climate is impossible.</b> However the following alternative viewpoint is feasible: if we think of the given climate models as a random sample from the universe of possible climate models, we can ask ourselves how well the statistical approach would do in predicting the response of a new climate model. This leads to a cross-validation approach. 
In effect, this makes an assumption of exchangeability among the available climate models."<br /><br />From <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142.7869&rep=rep1&type=pdf" rel="nofollow">Bayesian modeling of uncertainty in ensembles of climate models</a><br /><br />This approach is similar to what <a href="http://j-stults.blogspot.com/2009/12/jaynes-on-outliers-and-robustness.html" rel="nofollow">Jaynes suggested for the treatment of outliers</a>.Joshua Stultshttps://www.blogger.com/profile/03506970399027046387noreply@blogger.comtag:blogger.com,1999:blog-5822805028291837738.post-34234879154805981612009-12-24T09:41:12.339-05:002009-12-24T09:41:12.339-05:00Bayesian modeling of uncertainty in ensembles of c...<a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142.7869&rep=rep1&type=pdf" rel="nofollow">Bayesian modeling of uncertainty in ensembles of climate models</a><br /><br />Abstract:<br />Projections of future climate change caused by increasing greenhouse gases depend critically on numerical climate models coupling the ocean and atmosphere (GCMs). However, different models differ substantially in their projections, which raises the question of how the different models can best be combined into a probability distribution of future climate change. For this analysis, we have collected both current and future projected mean temperatures produced by nine climate models for 22 regions of the earth. We also have estimates of current mean temperatures from actual observations, together with standard errors, that can be used to calibrate the climate models. We propose a Bayesian analysis that allows us to combine the different climate models into a posterior distribution of future temperature increase, for each of the 22 regions, while allowing for the different climate models to have different variances. 
Two versions of the analysis are proposed, a univariate analysis in which each region is analyzed separately, and a multivariate analysis in which the 22 regions are combined into an overall statistical model. A cross-validation approach is proposed to confirm the reasonableness of our Bayesian predictive distributions. The results of this analysis allow for a quantification of the uncertainty of climate model projections as a Bayesian posterior distribution, substantially extending previous approaches to uncertainty in climate models.<br /><br />The <a href="http://www.image.ucar.edu/~nychka/REA" rel="nofollow">R code and data</a> used in this paper are publicly available.Joshua Stultshttps://www.blogger.com/profile/03506970399027046387noreply@blogger.comtag:blogger.com,1999:blog-5822805028291837738.post-28184518634220883812009-12-23T19:27:45.896-05:002009-12-23T19:27:45.896-05:00Using Bayesian model averaging to calibrate foreca...<a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.80.8442&rep=rep1&type=pdf" rel="nofollow">Using Bayesian model averaging to calibrate forecast ensembles</a><br /><br />First paragraph of the abstract:<br />Ensembles used for probabilistic weather forecasting often exhibit a spread-skill relationship, but they tend to be underdispersive. This paper proposes a principled statistical method for postprocessing ensembles based on Bayesian model averaging (BMA), which is a standard method for combining predictive distributions from different sources. The BMA predictive probability density function (PDF) of any quantity of interest is a weighted average of PDFs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts, and reflect the models’ skill over the training period. The BMA PDF can be represented as an unweighted ensemble of any desired size, by simulating from the BMA predictive distribution. 
The BMA weights can be used to assess the usefulness of ensemble members, and this can be used as a basis for selecting ensemble members; this can be useful given the cost of running large ensembles.Joshua Stultshttps://www.blogger.com/profile/03506970399027046387noreply@blogger.com
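The weighted-average-of-PDFs construction and the "unweighted ensemble of any desired size" trick from that abstract can be sketched in a few lines. This is a minimal illustration only: the weights, member forecasts, and the common Gaussian kernel width are made-up numbers, whereas the paper fits the weights and variances over a training period.

```python
import math
import random

def bma_pdf(x, weights, forecasts, sigma):
    """BMA predictive density: a weighted mixture of Gaussian kernels
    centered on the individual (bias-corrected) member forecasts."""
    norm = sigma * math.sqrt(2.0 * math.pi)
    return sum(w * math.exp(-0.5 * ((x - m) / sigma) ** 2) / norm
               for w, m in zip(weights, forecasts))

def bma_ensemble(n, weights, forecasts, sigma, rng=None):
    """Represent the BMA PDF as an unweighted ensemble of size n:
    pick a member in proportion to its weight, then draw from that
    member's kernel."""
    rng = rng or random.Random(0)
    members = rng.choices(forecasts, weights=weights, k=n)
    return [rng.gauss(m, sigma) for m in members]

# Hypothetical 3-member 2-m temperature forecast (deg C) with BMA
# weights assumed to have been learned over some training period.
weights = [0.5, 0.3, 0.2]
forecasts = [15.0, 16.2, 13.8]
sigma = 1.1

density_at_15 = bma_pdf(15.0, weights, forecasts, sigma)
ensemble = bma_ensemble(1000, weights, forecasts, sigma)
```

The simulated ensemble's mean converges on the weight-averaged forecast (here 15.12), while its spread reflects both the kernel width and the disagreement among members, which is how BMA corrects an underdispersive raw ensemble.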