Suppose you need to use an empirical closure for, say, the viscosity of your fluid, or for the equation of state. Usually you develop this sort of relation with physical insight based on kinetic theory, plus lab tests of various types to get fits over a useful range of temperatures and pressures; then you use the relation in your code, generally without modification based on the code's output. An alternative way to approach the closure problem would be to run your code with variations in viscosity models and parameter values, and pick the set whose outputs best agree with high-configurational-entropy functionals (like an average solution state: many different parameter sets can reproduce the same averaged answer, with nothing to choose between them) for a particular set of flows. This would be a sort of inverse modeling approach. Either way gives you an answer that can demonstrate consistency with your data, but there is probably a big difference in the predictive capability of the models so developed.

That is a surprisingly accurate description of the process actually used to tune parameters in climate general circulation models (GCMs).
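The non-identifiability problem with tuning to averaged functionals can be sketched with a toy example. Everything here is made up for illustration: a hypothetical power-law viscosity closure mu(T) = mu0 * (T/300)**n, a fabricated "observed" average, and nothing resembling real GCM code. The point is only that many different (mu0, n) pairs reproduce the same averaged value exactly while disagreeing pointwise:

```python
# Toy illustration (hypothetical model, fabricated target): many
# (mu0, n) pairs in a power-law viscosity closure mu(T) = mu0*(T/300)**n
# match the SAME averaged functional, yet disagree pointwise.

temps = [260 + 5 * i for i in range(17)]  # sample temperatures, 260-340 K

def avg_viscosity(mu0, n):
    """Average of mu(T) = mu0*(T/300)**n over the sample temperatures."""
    return sum(mu0 * (t / 300.0) ** n for t in temps) / len(temps)

target = 1.8e-5  # the "observed" averaged value we calibrate against

def calibrate_mu0(n):
    """For ANY exponent n, choose mu0 so the averaged functional matches."""
    return target / avg_viscosity(1.0, n)

for n in (0.5, 1.5):
    mu0 = calibrate_mu0(n)
    print(f"n={n}: mu0={mu0:.3e}, avg={avg_viscosity(mu0, n):.3e}, "
          f"mu(340 K)={mu0 * (340 / 300.0) ** n:.3e}")
# Both fits hit the target average exactly, but they predict different
# viscosities at 340 K -- goodness of fit against the averaged
# functional cannot choose between them.
```

Both calibrated models are "consistent with the data" by construction; nothing in the averaged functional distinguishes them, which is exactly why agreement of that sort says little about predictive capability.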
Here is a relevant section from an overview paper [pdf]:
The CAPT premise is that, as long as the dynamical state of the forecast remains close to that of the verifying analyses, the systematic forecast errors are predominantly due to deficiencies in the model parameterizations. [...] In themselves, these differences do not automatically determine a needed parameterization change, but they can provide developers with insights as to how this might be done. Then if changing the parameterization is able to render a closer match between parameterized variables and the evaluation data, and if this change also reduces the systematic forecast errors or any compensating errors that are exposed, the modified parameterization can be regarded as more physically realistic than its predecessor.

The highlighted conclusion is the troublesome leap. The process is an essentially post-hoc procedure based on goodness of fit rather than physical insight, which is contrary to established best practice in developing simulations with credible predictive capability. Sound physics rather than extensive empirical tuning is paramount if we're to have confidence in predictions. The paper also provides some discussion that goes to the IVP/BVP distinction:
But will the CAPT methodology enhance the performance of the GCM in climate simulations? In principle, yes: modified parameterizations that reduce systematic forecast errors should also improve the simulation of climate statistics, which are just aggregations of the detailed evolution of the model [see my comment here]. [...] Some systematic climate errors develop more slowly, however. [...] It follows that slow climate errors such as these are not as readily amenable to examination by a forecast-based approach.

Closing the loop on long-term predictions is tough (this point is often made in the literature, but rarely mentioned in the press). The paper continues:
Thus, once parameterization improvements are provisionally indicated by better short-range forecasts, enhancements in model performance also must be demonstrated in progressively longer (extended-range, seasonal, interannual, decadal, etc.) simulations. GCM parameterizations that are improved at short time scales also may require some further "tuning" of free parameters in order to achieve radiative balance in climate mode.

The discussion of data assimilation, initialization and transfer between different grid resolutions that follows that section is interesting and worth a read. They do address one of the concerns I brought up in my discussion with Robert, which was model comparison based on high-configurational-entropy functionals (like a globally averaged state):
If a modified parameterization is able to reduce systematic forecast errors (defined relative to high-quality observations and NWP analyses), it then can be regarded as more physically realistic than its predecessor.

Please don't misunderstand my criticism: the process described is a useful part of the model diagnostic toolbox. However, it can easily fool us into overconfidence in the simulation's predictive capability, because we may mistake what is essentially an extensive and continuous model calibration process for validation.
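The calibration-versus-validation distinction can also be made concrete with a deliberately simple, entirely invented example: fit a structurally wrong (linear) model to data generated by a quadratic "truth" over a calibration range, then judge it on a held-out extrapolation point. Good agreement in-range says nothing about skill out-of-range:

```python
# Minimal sketch (fabricated data, not from any real model): a linear
# model tuned to quadratic "truth" on [0, 1] looks well calibrated
# in-range but fails badly when asked to predict outside that range.

xs = [i / 10.0 for i in range(11)]  # calibration range: 0..1

def truth(x):
    """The 'real physics' the tuned model does not actually contain."""
    return x * x

ys = [truth(x) for x in xs]

# Ordinary least-squares fit of y = a*x + b (closed form).
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

fit_err = max(abs(a * x + b - truth(x)) for x in xs)  # in-sample error
pred_err = abs(a * 2.0 + b - truth(2.0))              # held-out point, x = 2

print(f"calibration-range error: {fit_err:.3f}")
print(f"extrapolation error at x=2: {pred_err:.3f}")
# The tuned model demonstrates consistency with the calibration data,
# but its predictive capability outside that range is an order of
# magnitude worse -- calibration is not validation.
```

Mistaking the small in-range error for validated predictive skill is precisely the overconfidence trap described above, scaled down to a two-parameter fit.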