Sunday, February 20, 2011

Red Hawks Host Black Knights


I went to watch some collegiate and amateur bouts down in Oxford on Saturday. Miami University was hosting the cadets from West Point (last year's collegiate champs, nice write-up in NY Times). If you're used to the glammed-up barroom brawling of UFC or even the raw knock-out power of professional boxing, then the style and speed of amateur boxing might come as quite a surprise. I really like the amateur fights because they tend to pivot on conditioning, thinking and skillful execution rather than landing lucky or brutal head-shots.

The 14-bout evening started out with a couple of tough young ladies, one from Cincinnati, one from Oxford, going three rounds. It's hard to do match-ups for women because there are just fewer boxers in an already small pool of athletes (since boxing is no longer an NCAA sport). The winner of this bout threw very disciplined, quick, straight punches which her clearly less experienced opponent was ill-equipped to catch or counter. This fight was followed by a few match-ups with local fighters out of Cincinnati, OSU and Miami University. The early fights consisted of lots of off-balance brawling, the result of "first fight" jitters and inexperience for many of these young athletes.

The cadets from West Point fought out of the blue corner for the remainder of the evening against a line-up consisting mainly of Miami University fighters, with the occasional fighter from OSU or Xavier thrown into the mix. From the first cadet to fight on up to the "main event," it was clear why these gentlemen have won three championships in a row. A string of cadets won judges' decisions handily over their opponents, demonstrating solid fundamentals and coolness under the frequent early, but generally dissipative, aggressiveness of their foes.

Then, in the first bout at 132 lbs (the second being the "main event" of the night), Lang Clarke of Army landed a solid combination to the head, followed up with a deliberate two-two that sent the man from Xavier to the mat (the referee was in the midst of "stop" as the second right landed). It was the first and only knock-out of the evening. The Xavier athlete was back on his feet (to the relieved cheers of the crowd) after a quick nap and a check from the ring-side doc.

The only heavyweight bout of the night was stopped by the referee near the end of the first round. The fighter from West Point landed repeated hooks to the head, which the young man from Oxford was not defending.

The fight at 195 lbs was relatively surprising, not in outcome (the cadet won), but in tactics. Previously the cadets had employed a shorter and simpler version of the rope-a-dope tactic when their less disciplined and less well-conditioned opponents came out swinging. Rather than stand and brawl toe-to-toe, they defended and let the other fighter tire, then exploited that self-inflicted weakness with steady "work" for the rest of the round. This cadet stood up inside his opponent's early windmill and landed straight punches and uppercuts to the head. One or two furious windmillings, punctuated by deliberately thrown and well-landed counters, were all it took for the windmill's blades to drop and the hub to wobble on its axis. Referee stops contest.

The crowd was ready for their main event. A nice cheer went up for the wiry 132-pounder from Oxford as he stepped in the ring. Yours truly was the only one to cheer when the young man from West Point entered the ring (much to my wife's embarrassment). After she heard the raucous cheer when they actually introduced the Oxford fighter, she said, "OK, you can go ahead and yell for that West Point guy," so I did.

Both fighters were about equally conditioned, which made for a much more exciting fight. They were both able to work (with varying levels of effectiveness) for the majority of each round. The fighter from Oxford threw a great volume of widely-arcing punches, most of which seemed ineffective to me due to the cadet's competent defense. After the repeated, straight head-shots landed by the cadet in the third round, I thought the decision would go his way (if the fight was not stopped sooner, which had been the outcome of this sort of pounding previously). The judges had a different perspective on the bout, so the decision went to the man from Oxford, who, much to his credit, was able to recover repeatedly from the wobbliness induced by these direct blows and swing away until the bell relieved him.

The coaches, medical and officiating crew at Miami University should be congratulated for putting on such a professional event that took good care of these young athletes, and allowed them to further develop their skills.

Thought this was funny; why you don't want guys from the same gym on the card:

Sparring partners are endowed with habitual consideration and forbearance, and they find it hard to change character. A kind of guild fellowship holds them together, and they pepper each other's elbows with merry abandon, grunting with pleasure like hippopotamuses in a beer vat.
The Sweet Science

Saturday, February 19, 2011

Historical Hydraulics

Venturi's drawings of eddies look really modern too, kind of neat:
[h/t Dan Hughes]

Tuesday, February 15, 2011

Comments on Spatio-Temporal Chaos

Some comments from a guest post on Dr. Curry's site. I think she has a couple of dueling chat bots who've taken up residence in her comments (see if you can guess who they are). This provides a bit more motivation for getting to the forced system results we started talking about earlier. The paper and discussion that Arthur Smith links are well worth a read (even though it isn't actually responsive ;-).

Tomas – you claimed to focus on my comment, but *completely ignored* the central element, which you even quoted:
“small random variations in solar input (not to mention butterflies)” [as what makes weather random over the long term]
Chaos as you have discussed it requires fixed control parameters (absolutely constant solar input) and no external sources of variation not accounted for in the equations (no butterflies). You gave zero attention in your supposed response to my comment to this central issue. Others here have been accused of being non-responsive, but I have to say that is pretty non-responsive on your part.
The fact is as soon as there is any external perturbation of a chaotic system not accounted for in the dynamical equations, you have bumped the system from one path in phase space to another. Earth’s climate is continually getting bumped by external perturbations small and large. The effect of these is to move the actual observed trajectory of the system randomly – yes randomly – among the different possible states available for given energy/control parameters etc.
The randomness comes not from the chaos, but from external perturbation. Chaos amplifies the randomness so that at a time sufficiently far in the future after even the smallest perturbation, the actual state of the system is randomly sampled from those available. That random sampling means it has real statistics. The “states available” are constrained by boundaries – solar input, surface topography, etc. which makes the climate problem – the problem of the statistics of weather – a boundary value problem (BVP). There are many techniques for studying BVP’s – one of which is simply to randomly sample the states using as physical a model as possible to get the right statistics. That’s what most climate models do. That doesn’t mean it’s not a BVP.

This isn’t anything new – almost every physical dynamical system, if it’s not trivially simple, displays chaos under most conditions. Statistical mechanics, one of the most successful of all physical theories, relies fundamentally on the reliability of a statistical description of what is actually deterministic (and chaotic – way-more-than-3-body) dynamics of immense numbers of atoms and molecules. This goes back to Gibbs over a century ago, and Poincare’s work was directly related.
Tomas’ comment that the 3-body system is not even “predictable statistically (e.g. you can not put a probability on the event ‘Mars will be ejected from the solar system in N years’)” is true in the strict sense of the exact mathematics, assuming no external perturbations. That’s simply because for a deterministic system something will either happen or it won’t; there’s no issue of probability about it at all. But as soon as you add any sort of noise, your perfect chaotic system becomes a mere stochastic one over long time periods, and probabilities really do apply.
A nice review of the relationships between chaos, probability and statistics is this article from 1992:
“Statistics, Probability and Chaos” by L. Mark Berliner, Statist. Sci. Volume 7, Number 1 (1992), 69-90.
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177011444
and see some of the discussion that followed in that journal (comments linked on that Project Euclid page).

jstults
Arthur Smith, while that is a very good paper that you linked (thank you for finding one that everyone can access), it only had a very short section on ergodic theory, and you’re back to the same hand-waving analogy about statistical mechanics and turbulent flows. The [lack of] success for simple models (based on analogy to kinetic theory btw) for turbulent flows of any significant complexity indicates to me that I can’t take your analogy very seriously.
Where’s the meat? Where’s the results for the problems we care about? I can calculate results for logistic maps and Lorenz ’63 on my laptop (and the attractor for that particular toy exists).
A more well-phrased attempt to explain why hand-waving about statistical mechanics is a diversion from the questions of significance for this problem (with apologies to Ruelle): what are the measures describing climate?
If one is optimistic, one may hope that the asymptotic measures will play for dissipative systems the sort of role which the Gibbs ensembles have played for statistical mechanics. Even if that is the case, the difficulties encountered in statistical mechanics in going from Gibbs ensembles to a theory of phase transitions may serve as a warning that we are, for dissipative systems, not yet close to a real theory of turbulence.
What Are the Measures Describing Turbulence?
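
For concreteness, here's the sort of laptop-scale calculation mentioned above, sketched in Python (the step size, run length and noise level are my own choices for illustration, not anything from the comment thread): integrate Lorenz '63 with the standard parameters twice, once exactly and once with a tiny random kick added every step. The two runs diverge pointwise, but their long-time statistics on the attractor come out essentially the same, which is the sense in which external perturbation plus chaotic amplification gives you "real statistics" to sample.

```python
# Minimal sketch: Lorenz '63 with and without a tiny random "external
# perturbation".  The perturbed trajectory diverges pointwise from the
# reference run, but the long-time statistics on the attractor are
# essentially unchanged.  Standard parameters (sigma=10, r=28, b=8/3).
import numpy as np

def lorenz_rhs(state, sigma=10.0, r=28.0, b=8.0/3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])

def integrate(nsteps, dt=0.01, noise=0.0, seed=0):
    rng = np.random.default_rng(seed)
    state = np.array([1.0, 1.0, 20.0])
    traj = np.empty((nsteps, 3))
    for i in range(nsteps):
        # classical fourth-order Runge-Kutta step
        k1 = lorenz_rhs(state)
        k2 = lorenz_rhs(state + 0.5 * dt * k1)
        k3 = lorenz_rhs(state + 0.5 * dt * k2)
        k4 = lorenz_rhs(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
        # the "butterfly": an unmodeled random kick (zero for the reference run)
        state = state + noise * rng.standard_normal(3)
        traj[i] = state
    return traj

ref = integrate(100000)                 # fixed control parameters, no noise
bumped = integrate(100000, noise=1e-6)  # same equations, tiny external kicks

# pointwise divergence, but similar long-time statistics of z
print("final-state difference:", np.abs(ref[-1] - bumped[-1]))
print("mean z (ref, bumped): %.2f %.2f" % (ref[:, 2].mean(), bumped[:, 2].mean()))
print("std  z (ref, bumped): %.2f %.2f" % (ref[:, 2].std(), bumped[:, 2].std()))
```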

Friday, February 4, 2011

Validation and Calibration: more flowcharts

In a previous post we developed a flow-chart for model verification and validation (V&V) activities. One thing I noted in the update on that post was that calibration activities were absent. My Google alerts just turned up a new paper (it references the Thacker et al. paper the previous post was based on; I think you'll notice the resemblance of the flow-charts), which adds the calibration activity in much the way we discussed.



Figure 1: Model Calibration Flow Chart of Youn et al. [1]

The distinction between calibration and validation is clearly highlighted, “In many engineering problems, especially if unknown model variables exist in a computational model, model improvement is a necessary step during the validation process to bring the model into better agreement with experimental data. We can improve the model using two strategies: Strategy 1 updates the model through calibration and Strategy 2 refines the model to change the model form.”



Figure 2: Flow chart from previous post

The well-founded criticism of calibration-based arguments for simulation credibility is that calibration provides no indication of the predictive capability of a model so tuned. The statistician might use the term generalization risk to talk about the same idea. There is no magic here. Applying techniques such as cross-validation merely adds a (hyper)parameter to the model (this becomes readily apparent in a Bayesian framework). Such techniques, while certainly useful, are no silver bullet against over-confidence. This is a fundamental truth that will not change with improving technique or technology, and that is because all probability statements are conditional on (among other things) the choice of model space (particular choices of which must by necessity be finite, though the space of all possible models is countably infinite).
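To make that point concrete, here is a small sketch (the toy data, the ridge-regression surrogate and the hand-rolled k-fold loop are all invented for illustration): cross-validation picks the regularization strength, but that strength is itself one more parameter estimated from the same finite sample, so any credibility statement is still conditional on the chosen model space.

```python
# Sketch: k-fold cross-validation doesn't remove the tuning problem, it
# moves it up a level -- the CV-selected penalty lambda is an extra
# (hyper)parameter of the overall procedure.  Toy data for illustration.
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 8
X = rng.standard_normal((n, p))
beta_true = np.array([1.5, -2.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)

def ridge_fit(X, y, lam):
    # closed-form ridge estimate: (X'X + lam*I)^{-1} X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_error(lam, k=5):
    # k-fold cross-validation estimate of out-of-sample squared error
    idx = np.arange(n)
    err = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        b = ridge_fit(X[train], y[train], lam)
        err += np.sum((y[fold] - X[fold] @ b) ** 2)
    return err / n

lambdas = np.logspace(-3, 2, 20)          # grid for the new (hyper)parameter
cv = [cv_error(lam) for lam in lambdas]
print("CV-selected lambda:", lambdas[int(np.argmin(cv))])
```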
One of the other interesting things in that paper is their argument for a hierarchical framework for model calibration / validation. A long time ago, in a previous life, I made a similar argument [2]. Looking back on that article is a little embarrassing. I wrote that before I had read Jaynes (or much else of the Bayesian analysis and design of experiments literature), so it seems very technically naive to me now. The basic heuristics for product development discussed in it are sound though. They’re based mostly on GAO reports [3, 4, 5, 6], a report by NAS [7], lessons learned from Live Fire Test and Evaluation [8] and personal experience in flight test. Now I understand better why some of those heuristics have sound theoretical underpinnings.
There are really two hierarchies though. There is the physical hierarchy of system, sub-system and component that Youn et al. emphasize, but there is also a modeling hierarchy. This modeling hierarchy is delineated by the level of aggregation, or the amount of reductiveness, in the model. All models are reductive (that’s the whole point of modeling: massage the inordinately complex and ill-posed into tractability); some are just more reductive than others.



Figure 3: Modeling Hierarchy (from [2])

Figure 3 illustrates why I care about Bayesian inference. It’s really the only way to coherently combine information from the bottom of the pyramid (computational physics simulations) with information from higher up the pyramid, which relies on component and subsystem testing.
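Here is a deliberately simple sketch of what "coherently combine" can look like (conjugate normal updates with made-up numbers; the machinery in [1] and in real programs is richer than this): a simulation-based prior from the bottom of the pyramid gets updated first by component-level test data and then by a couple of expensive system-level tests, with each level's information weighted by its precision.

```python
# Sketch of combining evidence across the pyramid with conjugate normal
# updates (all numbers invented for illustration).  A computational physics
# simulation supplies a broad prior on some performance parameter; component
# tests and then a few system-level tests update it; precisions simply add.
import numpy as np

def normal_update(prior_mean, prior_var, data, data_var):
    """Posterior for a normal mean with known observation variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / data_var)
    return post_mean, post_var

# bottom of the pyramid: simulation-based prior (cheap, plentiful, uncertain)
mean, var = 10.0, 4.0

# component-level tests (moderately expensive, moderately informative)
component_data = np.array([11.2, 10.5, 10.9, 11.4])
mean, var = normal_update(mean, var, component_data, data_var=1.0)

# system-level tests (few, expensive, most representative)
system_data = np.array([10.1, 10.6])
mean, var = normal_update(mean, var, system_data, data_var=0.5)

print("combined estimate: %.2f +/- %.2f" % (mean, np.sqrt(var)))
```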
A few things I don’t like about the approach in [1]:
  • The partitioning of parameters into “known” and “unknown” based on what level of the hierarchy (component, subsystem, system) you are at in the “bottom-up” calibration process. Our (properly formulated) models should tell us how much information different types of test data give us about the different parameters. Parameters should always be described by a distribution rather than discrete switches like known or unknown.
  • The approach is based entirely on the likelihood (but they do mention something that sounds like expert priors in passing).
  • They claim that the proposed calibration method enhances “predictive capability” (section 3); however, this is a misleading abuse of terminology. Certainly the in-sample performance is improved by calibration, but the whole point of making a distinction between calibration and validation is based on recognizing that this says little about the out-of-sample performance (in fairness, they do equivocate a bit on this point: “The authors acknowledge that it is difficult to assure the predictive capability of an improved model without the assumption that the randomness in the true response primarily comes from the randomness in random model variables.”). A toy illustration of this in-sample versus out-of-sample gap follows below.
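The sketch promised in that last bullet (the sine-wave "truth", the polynomial surrogate and the data are all invented for this example): tuning a flexible model to data from one operating regime drives the in-sample residual down nicely, while saying almost nothing about performance in a regime the calibration data never sampled.

```python
# Toy illustration of the in-sample / out-of-sample distinction.
# "Calibrating" a flexible model to data from one operating regime makes
# the in-sample residual small, but the error in an unsampled regime can
# be arbitrarily bad.  All data and model choices are for illustration.
import numpy as np

rng = np.random.default_rng(2)
truth = lambda x: np.sin(x)

x_cal = np.linspace(0.0, 2.0, 15)              # calibration regime
y_cal = truth(x_cal) + 0.05 * rng.standard_normal(x_cal.size)

x_new = np.linspace(2.0, 4.0, 15)              # prediction regime
y_new = truth(x_new) + 0.05 * rng.standard_normal(x_new.size)

coef = np.polyfit(x_cal, y_cal, deg=6)         # the "calibrated" model

rmse = lambda y, yhat: np.sqrt(np.mean((y - yhat) ** 2))
print("in-sample RMSE:     %.3f" % rmse(y_cal, np.polyval(coef, x_cal)))
print("out-of-sample RMSE: %.3f" % rmse(y_new, np.polyval(coef, x_new)))
```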
Otherwise, I find this a valuable paper that strikes a pragmatic chord, and that’s why I wanted to share my thoughts on it.
[Update: This thesis that I linked at Climate Etc. has a flow-chart too.]

References

[1]   Youn, B. D., Jung, B. C., Xi, Z., Kim, S. B., and Lee, W., “A hierarchical framework for statistical model calibration in engineering product development,” Computer Methods in Applied Mechanics and Engineering, Vol. 200, No. 13-16, 2011, pp. 1421 – 1431.
[2]   Stults, J. A., “Best Practices for Developmental Testing of Modern, Complex Munitions,” ITEA Journal, Vol. 29, No. 1, March 2008, pp. 67–74.
[3]   "Defense Acquisitions: Assessment of Major Weapon Programs," Tech. Rep. GAO-03-476, U.S. General Accounting Office, May 2003.
[4]   "Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes," Tech. Rep. GAO-06-110, U.S. General Accounting Office, 2006.
[5]   "Precision-Guided Munitions: Acquisition Plans for the Joint Air-to-Surface Standoff Missile," Tech. Rep. GAO/NSIAD-96-144, U.S. General Accounting Office, 1996.
[6]   "Best Practices: A More Constructive Test Approach is Key to Better Weapon System Outcomes," Tech. Rep. GAO/NSIAD-00-199, U.S. General Accounting Office, July 2000.
[7]   Cohen, M. L., Rolph, J. E., and Steffey, D. L., editors, Statistics, Testing and Defense Acquisition: New Approaches and Methodological Improvements, National Academy Press, Washington D.C., 1998.
[8]   O’Bryon, J. F., editor, Lessons Learned from Live Fire Testing: Insights Into Designing, Testing, and Operating U.S. Air, Land, and Sea Combat Systems for Improved Survivability and Lethality, Office of the Director, Operational Test and Evaluation, Live Fire Test and Evaluation, Office of the Secretary of Defense, January 2007.