Thursday, August 30, 2012

RIP John Hunter

Fernando Perez <fperez.net@gmail.com> via scipy.org
To: the SciPy and NumFOCUS mailing lists

Dear friends and colleagues,

I am terribly saddened to report that yesterday, August 28, 2012 at 10am, John D. Hunter died from complications arising from cancer treatment at the University of Chicago hospital, after a brief but intense battle with this terrible illness. John is survived by his wife Miriam, his three daughters Rahel, Ava and Clara, his sisters Layne and Mary, and his mother Sarah.

Note: If you decide not to read any further (I know this is a long message), please go to this page for some important information about how you can thank John for everything he gave in a decade of generous contributions to the Python and scientific communities: http://numfocus.org/johnhunter.

Saturday, August 18, 2012

Experimental Design Criteria

I used the AlgDesign package in R to generate some optimal designs for a 6th-order 2D response surface. The I-optimal design still clusters points near the boundaries (as shown in this post), but there are a few more points "in the middle" of the parameter space.
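For reference, the call looks something like this. The 21-by-21 candidate grid, the raw-polynomial formula, and the 40-run budget are illustrative choices, not necessarily the settings behind the plots:

    library(AlgDesign)

    # Candidate set: a 21 x 21 grid over the 2D parameter space
    cand <- expand.grid(x1 = seq(-1, 1, length.out = 21),
                        x2 = seq(-1, 1, length.out = 21))

    # Full 6th-order polynomial model in two factors
    frml <- ~ polym(x1, x2, degree = 6, raw = TRUE)

    # I-optimal exact design chosen from the candidate grid
    des <- optFederov(frml, data = cand, nTrials = 40, criterion = "I")

    # Where did the selected points land?
    plot(des$design$x1, des$design$x2, pch = 16, xlab = "x1", ylab = "x2")

The I criterion minimizes the average prediction variance over the candidate region, which is why it tends to pull a few more support points toward the interior than the D criterion does.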

Wednesday, August 15, 2012

Interactive Curiosity Rover Panorama

Curiosity rover: Martian solar day 2 (interactive panorama)

Saturday, August 11, 2012

Validating the Prediction of Unobserved Quantities

I was just going to post a bit of the abstract and a link to this recent tech report as a comment on my V&V workshop notes, but Validating the Prediction of Unobserved Quantities was interesting enough to deserve its own post. The basic approach is well-founded in probability theory, and there are some novel concepts as well.

Here's the abstract:
In predictive science, computational models are used to make predictions regarding the response of complex systems. Generally, there is no observational data for the predicted quantities (the quantities of interest or QoIs) prior to the computation, since otherwise predictions would not be necessary. Further, to maximize the utility of the predictions it is necessary to assess their reliability, i.e., to provide a quantitative characterization of the discrepancies between the prediction and the real world. Two aspects of this reliability assessment are judging the credibility of the prediction process and characterizing the uncertainty in the predicted quantities. These processes are commonly referred to as validation and uncertainty quantification (VUQ), and they are intimately linked. In typical VUQ approaches, model outputs for observed quantities are compared to experimental observations to test for consistency. While this consistency is necessary, it is not sufficient for extrapolative predictions because, by itself, it only ensures that the model can predict the observed quantities in the observed scenarios. Indeed, the fundamental challenge of predictive science is to make credible predictions with quantified uncertainties, despite the fact that the predictions are extrapolative. At the PECOS Center, a broadly applicable approach to VUQ for prediction of unobserved quantities has evolved. The approach incorporates stochastic modeling, calibration, validation, and predictive assessment phases where uncertainty representations are built, informed, and tested. This process is the subject of the current report, as well as several research issues that need to be addressed to make it applicable in practical problems.