The purpose of the workshop is to bring together a diverse group of computational scientists working in fields in which reliability of predictive computational models is important. Via formal presentations, structured discussions, and informal conversations, we seek to heighten awareness of the importance of reliable computations, which are becoming ever more critical in our world.

It looks very interesting.
The intended audience is computational scientists and decision makers in fields as diverse as earth/atmospheric sciences, computational biology, engineering science, applied mechanics, applied mathematics, astrophysics, and computational chemistry.
Tuesday, October 11, 2011
Monday, October 3, 2011
So, I used mplayer to dump frames from the video (30 fps) to jpg files (about 1800 images), and the Python Imaging Library to crop to a rectangle focused on the splashes.
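For anyone who wants to reproduce this sort of thing, here's a minimal sketch of the cropping step using Pillow (the maintained fork of the Python Imaging Library). The crop box coordinates below are placeholders, not the ones actually used for the splash video:

```python
from pathlib import Path
from PIL import Image  # Pillow, the maintained PIL fork

# Hypothetical crop box (left, upper, right, lower) focused on the
# region of interest; these coordinates are illustrative placeholders.
CROP_BOX = (200, 100, 520, 340)

def crop_frames(src_dir, dst_dir, box=CROP_BOX):
    """Crop every dumped JPEG frame in src_dir and save to dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for jpg in sorted(Path(src_dir).glob("*.jpg")):
        Image.open(jpg).crop(box).save(dst / jpg.name)
```

The frames themselves can be dumped with something like `mplayer -vo jpeg:outdir=frames video.avi`, which writes numbered JPEGs to the given directory.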
There's a text file if you want to play with the data (the series reminded me of recurrence plots for Lorenz63 trajectories). I'm pretty sure this is not significant to engine development in any way, but I thought it was a neat source of time series data.
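In case anyone wants to try that comparison themselves, a recurrence plot is simple to compute: mark the point (i, j) whenever states x_i and x_j are within some threshold of each other. A minimal sketch for a scalar series (the signal and threshold here are illustrative, not the splash data):

```python
import math

def recurrence_matrix(series, eps):
    """R[i][j] = 1 when |series[i] - series[j]| < eps, else 0."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

# A periodic signal (period 10 samples) produces the characteristic
# diagonal-stripe pattern in the recurrence matrix.
signal = [math.sin(0.2 * math.pi * k) for k in range(100)]
R = recurrence_matrix(signal, eps=0.1)
```

For a trajectory from a system like Lorenz63, the same idea applies with the distance taken between state vectors rather than scalars.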
Saturday, October 1, 2011
A significant part of the discussion was about a post by Steve Easterbrook on the (in)appropriateness of IV&V ("I" stands for independent) or commercial/industrial flavored V&V for climate models. As George Crews points out in discussion on Climate Etc., there's a subtle rhetorical sleight of hand that Easterbrook uses (which works for winning web-points with the uncritical, because different communities use the V-words differently; see the introduction to this post of mine). He claims it would be inappropriate to uncritically apply the IV&V processes and formal methods he's familiar with from developing flight control software for NASA to climate model development. Of course, he's probably right (though we should always be on the look-out to steal good tricks from wherever we can find them). This basically correct argument gets turned into "Easterbrook says V&V is inappropriate for climate models" (by all sorts of folks with various motivations; see the discussion thread on Climate Etc.). The obvious question for anyone who's familiar with even my little Separation of V&V post is: which definition of which V?
Easterbrook is now on record agreeing that the sort of V&V done by the computational physics community is appropriate. This is good. Easterbrook seemed to be arguing for a definition of "valid" that meant "implements the theory faithfully". This is what I'd call "verification" (are you solving the governing equations correctly?). The problem with the argument built on that definition is the conflation I pointed out at the end of this comment, which is similar to the rhetorical leap mentioned in the previous paragraph and displayed on the thread at Climate Etc.
Now, his disagreement with Dan Hughes (who recommends an approach I find makes a great deal of sense, and that I've used in anger to commit arithmurgical damage on various and sundry PDEs) is that Dan thinks we should have independent V&V, and Steve thinks not. If all you care about is posing complex hypotheses, then IV&V seems a waste. If you care about decision support, then it would probably be wise to invest in it (and in our networked age much of the effort could probably be crowd-sourced, so the investment can conceivably be quite small). This actually has some parallels with the decision-support dynamics I highlighted in No Fluid Dynamicist Kings in Flight Test.
One of the other themes in the discussion is voiced by Nick Stokes. He argues that all this V&V stuff doesn't result in successful software, or that people calling for developing IV&V'd code should shut up and do it themselves. One of the funny things is that if the code is general enough that the user can specify arbitrary boundary and interior forcings, then any user can apply the method of manufactured solutions (MMS) to verify the code. What makes Nick's comments look even sillier is that nearly all available commercial CFD codes are capable of being verified in this way. This is thanks in part to the efforts of folks like Patrick Roache, and now it is a significant marketing point, as Dan Hughes and Steven Mosher point out. Nick goes on to say that he'd be glad to hear of someone trying all this crazy V&V stuff. The fellows doing ice sheet modeling that I pointed out on the thread at Easterbrook's are doing exactly that. This is because the activities referred to by the term "verification" are a useful set of tools for the scientific simulation developer, in addition to being a credibility-building exercise (as I mentioned in the previous post; see the report by Salari and Knupp).
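To make the MMS idea concrete, here's a toy example of my own (not anyone's production solver): verify a second-order finite-difference solver for u'' = f on [0,1] by manufacturing the solution u = sin(πx), deriving the forcing f = -π² sin(πx) it implies, and checking that the observed convergence order under grid refinement approaches the theoretical value of 2.

```python
import math

def solve_poisson(n, f, ua=0.0, ub=0.0):
    """Solve u'' = f on (0,1), u(0)=ua, u(1)=ub, with second-order
    central differences on n interior points (Thomas algorithm)."""
    h = 1.0 / (n + 1)
    b = [-2.0] * n                      # main diagonal; off-diagonals are 1
    d = [h * h * f((i + 1) * h) for i in range(n)]
    d[0] -= ua
    d[-1] -= ub
    for i in range(1, n):               # forward elimination
        m = 1.0 / b[i - 1]
        b[i] -= m
        d[i] -= m * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        u[i] = (d[i] - u[i + 1]) / b[i]
    return u, h

def observed_orders():
    """Manufactured solution u = sin(pi x) implies f = -pi^2 sin(pi x);
    compare max-norm errors on successively refined grids."""
    exact = lambda x: math.sin(math.pi * x)
    f = lambda x: -math.pi ** 2 * math.sin(math.pi * x)
    errs, hs = [], []
    for n in (16, 32, 64):
        u, h = solve_poisson(n, f)
        errs.append(max(abs(u[i] - exact((i + 1) * h)) for i in range(n)))
        hs.append(h)
    return [math.log(errs[k] / errs[k + 1]) / math.log(hs[k] / hs[k + 1])
            for k in range(len(errs) - 1)]
```

The same recipe works for any code that accepts arbitrary forcings and boundary values, which is the point: no access to the developer's internals is needed, only the ability to specify the source term.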
Of course doing calculation verification is the responsibility of the person doing the analysis (in sane fields of science and engineering anyway). So the “shut-up and do it yourself” response only serves to undermine the credibility of people presenting results to decision makers bereft of such analysis. On the other hand, the analyst properly has little to say on the validation question. That part of the process is owned by the decision maker.
 Curry, J., "Verification, Validation and Uncertainty Quantification in Scientific Computing," Climate Etc., Sunday 25th September, 2011. http://judithcurry.com/2011/09/25/verification-validation-and-uncertainty-quantification-in-scientific-computing/
 Roache, P., "Building PDE Codes to be Verifiable and Validatable," Computing in Science and Engineering, IEEE, Sept.-Oct. 2004.