Comments on Various Consequences: Dueling Bayesians

Joshua Stults (2009-12-14):

"sometimes the most important thing to come out of an inference is the rejection of the model on which it is based" <a href="http://www.stat.columbia.edu/~cook/movabletype/archives/2009/05/bayes_jeffreys.html" rel="nofollow">-- Andrew Gelman</a>

Joshua Stults (2009-12-11):

gmcrews said: "Shouldn't these concepts be applied at the model verification/validation stage?"

Absolutely.

"Before the policy decision makers get the models' outputs?"

Yes, but also after. Here again Gelman's quote about 'predicting everything' comes into play (it's not either/or, it's both). Once you've decided that a model is 'validated enough' for a particular use, decision theory gets applied again, based on the predictive distributions for the costs (this requires an economic model that maps the climate model's predictive distributions into predictive distributions for the costs).

I like this approach to policy (deciding what resources to allocate to avoid uncertain costs) because our uncertainty about the underlying physical and economic systems is explicitly taken into account.
Such a policy response based on decision theory would be coherent (consistent with our state of knowledge about the future 'payouts'), and it would tend to avoid foolish extremes.

"...how you would apply these concepts if it were up to you for, say, the climate models."

Those papers about <a href="http://j-stults.blogspot.com/2009/12/bayesian-climate-model-averaging.html" rel="nofollow">Bayes Climate Model Averaging</a> are a start. BMA yields a predictive distribution from an ensemble of climate models, so in principle you could apply the ideas demonstrated in the Xiang & Mahadevan papers, with their structural/reliability models, to validate the predictions of an ensemble of climate models.

The results of one of the papers cited in the BMA post show that we've still got some systematic biases to iron out. <a href="http://www.drroyspencer.com/Lindzen-and-Choi-GRL-2009.pdf" rel="nofollow">Lindzen & Choi's recent paper</a> is a promising development. Comparing the ensemble's performance to satellite data (of all sorts; launch more sensors!) should bear a lot of fruit (proxy and ground-station data are too uninformative/low-fidelity to support much further progress).

It is a shame that the modellers and the experimentalists in this field seem to be so adversarial. The aerodynamics community went through those growing pains during the early development of CFD, but thankfully people realized that simulation and experimentation are mutually dependent, and that focusing on one at the expense of the other is detrimental to progress in the field as a whole.
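The two ideas in this thread, a BMA predictive distribution over an ensemble of models and a decision-theoretic choice that minimizes expected cost under that distribution, can be sketched in a few lines. This is a toy illustration only: the three "models", their weights, and both cost functions are invented for the example and are not taken from any climate or economic model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior model probabilities (BMA weights) for three models.
weights = np.array([0.5, 0.3, 0.2])

# Each model's predictive distribution for some quantity of interest
# (say, warming in deg C): normal with its own mean and spread. Made up.
means = np.array([1.5, 2.5, 3.5])
sds = np.array([0.4, 0.5, 0.6])

# Sample the BMA mixture: pick a model by its weight, then draw from it.
n = 100_000
which = rng.choice(len(weights), size=n, p=weights)
samples = rng.normal(means[which], sds[which])

# Two hypothetical policy actions with different cost structures:
# "mitigate" pays a fixed cost plus a small residual damage term;
# "wait" pays a damage cost that grows quadratically with warming.
def cost(action, t):
    if action == "mitigate":
        return 2.0 + 0.2 * np.maximum(t, 0.0)
    return 0.5 * np.maximum(t, 0.0) ** 2

# Coherent (Bayes) decision: minimize expected cost under the
# predictive distribution, so model uncertainty enters explicitly.
actions = ["mitigate", "wait"]
expected = {a: cost(a, samples).mean() for a in actions}
best = min(expected, key=expected.get)
print(expected, best)
```

Because the expectation is taken over the full mixture, a low-weight, high-warming model can still shift the decision; that is the sense in which uncertainty about the underlying system is "explicitly taken into account", while the expectation itself keeps the response away from worst-case extremes.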