Tuesday, December 8, 2009

Simulation-based Engineering Science

gmcrews has a couple of interesting posts on model verification and validation (V&V), and his comment on scientific software has links to a couple of ’state of the practice’ reports [1] [2]. The reports are about something called Simulation-based Engineering Science (SBES), which is the (common?) jargon they use to describe doing research and development with computational modelling and simulation.

1 Notes and Excerpts from [1]

Below are some excerpts from the executive summary along with a little commentary.

Simulation has today reached a level of predictive capability that it now firmly complements the traditional pillars of theory and experimentation/observation. Many critical technologies are on the horizon that cannot be understood, developed, or utilized without simulation. At the same time, computers are now affordable and accessible to researchers in every country around the world. The near-zero entry-level cost to perform a computer simulation means that anyone can practice SBE&S, and from anywhere.

1.1 Major Trends Identified

  1. Data-intensive applications, including integration of (real-time) experimental and observational data with modelling and simulation to expedite discovery and engineering solutions, were evident in many countries, particularly Switzerland and Japan.
  2. Achieving millisecond time-scales with molecular resolution for proteins and other complex matter is now within reach using graphics processors, multicore CPUs, and new algorithms.
  3. The panel noted a new and robust trend towards increasing the fidelity of engineering simulations through inclusion of physics and chemistry.
  4. The panel sensed excitement about the opportunities that petascale speeds and data capabilities would afford.

1.2 Threats to U.S. Leadership

  1. The world of computing is flat, and anyone can do it. What will distinguish us from the rest of the world is our ability to do it better and to exploit new architectures we develop before those architectures become ubiquitous.

    Furthermore, already there are more than 100 million NVIDIA graphics processing units with CUDA compilers distributed worldwide in desktops and laptops, with potential code speedups of up to a thousand-fold in virtually every sector to whomever rewrites their codes to take advantage of these new general programmable GPUs.

  2. Inadequate education and training of the next generation of computational scientists threatens global as well as U.S. growth of SBE&S. This is particularly urgent for the United States; unless we prepare researchers to develop and use the next generation of algorithms and computer architectures, we will not be able to exploit their game-changing capabilities.

    Students receive no real training in software engineering for sustainable codes, and little training if any in uncertainty quantification, validation and verification, risk assessment or decision making, which is critical for multiscale simulations that bridge the gap from atoms to enterprise.

  3. A persistent pattern of subcritical funding overall for SBE&S threatens U.S. leadership and continued needed advances amidst a recent surge of strategic investments in SBE&S abroad that reflects recognition by those countries of the role of simulations in advancing national competitiveness and its effectiveness as a mechanism for economic stimulus.

I don’t know of any engineering curriculum that has a good program for training people in all of the areas (the physics, numerical methods, design of experiments, statistics and software carpentry) needed to be competent high-performance simulation developers (in scientific computing the users and developers tend to be the same people). It requires knowledge that is multi-disciplinary and deeply technical at the same time. The groups that try to go broad with the curriculum tend to treat the simulations as a black box. Those sorts of programs tend to produce people who can turn the crank on a code, but who don’t have the deeper technical understanding needed to add the next increment of physics, apply the newer, more efficient solver, or adapt the current code to take advantage of new hardware. Right now that sort of expertise is achieved in an ad hoc or apprenticeship manner (see for example MIT’s program). That works for producing a few experts at a time (after a lot of time), but it doesn’t scale well.

1.3 Opportunities for Investment

  1. There are clear and urgent opportunities for industry-driven partnerships with universities and national laboratories to hardwire scientific discovery and engineering innovation through SBE&S.
  2. There is a clear and urgent need for new mechanisms for supporting R&D in SBE&S.

    investment in algorithm, middleware, and software development lags behind investment in hardware, preventing us from fully exploiting and leveraging new and even current architectures. This disparity threatens critical growth in SBE&S capabilities needed to solve important worldwide problems as well as many problems of particular importance to the U.S. economy and national security.

  3. There is a clear and urgent need for a new, modern approach to educating and training the next generation of researchers in high performance computing specifically, and in modeling and simulation generally, for scientific discovery and engineering innovation.

    Particular attention must be paid to teaching fundamentals, tools, programming for performance, verification and validation, uncertainty quantification, risk analysis and decision making, and programming the next generation of massively multicore architectures. At the same time, students must gain deep knowledge of their core discipline.

The third finding is interesting, but it is a tall order. So we need to train subject matter experts who are also experts in V&V, decision theory, software development and exploiting unique (and rapidly evolving) hardware. Show me the curriculum that accomplishes that, and I’d be quite impressed (really, post a link in the comments if you know of one).

More on validation:

Experimental validation of models remains difficult and costly, and uncertainty quantification is not being addressed adequately in many of the applications. Models are often constructed with insufficient data or physical measurements, leading to large uncertainty in the input parameters. The economics of parameter estimation and model refinement are rarely considered, and most engineering analyses are conducted under deterministic settings. Current modeling and simulation methods work well for existing products and are mostly used to understand/explain experimental observations. However, they are not ideally suited for developing new products that are not derivatives of current ones.

One of the mistakes that the scientific computing community made early on was in letting the capabilities of our simulations be over-sold without stressing the importance of concurrent, supporting efforts in theory and experiment. There are a significant number of consultants who make outrageous claims about replacing testing with modeling and simulation. It is far too easy for our decision makers to be impressed by the really awesome movies we can make from our simulations, and the claims from the consultants begin to get traction. It is our job to make sure the decision makers understand that the simulation is only as real as our empirical validation of it.

2 Notes and Excerpts from [2]

Below are some excerpts from the executive summary along with a little commentary.

Major Findings:

  1. SBES is a discipline indispensable to the nation’s continued leadership in science and engineering. […] There is ample evidence that developments in these new disciplines could significantly impact virtually every aspect of human experience.
  2. Formidable challenges stand in the way of progress in SBES research. These challenges involve resolving open problems associated with multiscale and multi-physics modeling, real-time integration of simulation methods with measurement systems, model validation and verification, handling large data, and visualization. Significantly, one of those challenges is education of the next generation of engineers and scientists in the theory and practices of SBES.
  3. There is strong evidence that our nation’s leadership in computational engineering and science, particularly in areas key to Simulation-Based Engineering Science, is rapidly eroding. Because competing nations worldwide have increased their investments in research, the U.S. has seen a steady reduction in its proportion of scientific advances relative to that of Europe and Asia. Any reversal of those trends will require changes in our educational system as well as changes in how basic research is funded in the U.S.

The ’Principal Recommendations’ in the report amount to ’Give the NSF more money’, which is not surprising if you consider the source. It is interesting that their finding about education is largely the same as in the other report.

References

[1] A Report of the National Science Foundation Blue Ribbon Panel on Simulation-Based Engineering Science: Revolutionizing Engineering Science through Simulation, National Science Foundation, May 2006, http://www.nsf.gov/pubs/reports/sbes_final_report.pdf

[2] WTEC Panel Report on International Assessment of Research and Development in Simulation-Based Engineering and Science, 2009, http://www.wtec.org/sbes/SBES-GlobalFinalReport.pdf

Monday, December 7, 2009

Final Causes

This is the conclusion of the comments section in the model comparison chapter (Chapter 20) from Jaynes’ book (emphasis original).


It seems that every discussion of scientific inference must deal, sooner or later, with the issue of belief or disbelief in final causes. Expressed views range all the way from Jacques Monod (1970) forbidding us even to mention purpose in the Universe, to the religious fundamentalist who insists that it is evil not to believe in such a purpose. We are astonished by the dogmatic, emotional intensity with which opposite views are proclaimed, by persons who do not have a shred of supporting factual evidence for their positions.

But almost everyone who has discussed this has supposed that by a ’final cause’ one means some supernatural force that suspends natural law and takes over control of events (that is, alters positions and velocities of molecules in a way inconsistent with the equations of motion) in order to ensure that some desired final condition is attained. In our view, almost all past discussions have been flawed by failure to recognize that operation of a final cause does not imply controlling molecular details.

When the author of a textbook says: ’My purpose in writing this book was to…’, he is disclosing that there was a true ’final cause’ governing many activities of writer, pen, secretary, word processor, extending usually over several years. When a chemist imposes conditions on his system which forces it to have a certain volume and temperature, he is just as truly the wielder of a final cause dictating the final thermodynamic state that he wished it to have. A bricklayer and a cook are likewise engaged in the art of invoking final causes for definite purposes. But – and this is the point almost always missed – these final causes are macroscopic; they do not determine any particular ’molecular’ details. In all cases, had those fine details been different in any one of billions of ways, the final cause would have been satisfied just as well.

The final cause may then be said to possess an entropy, indicating the number of microscopic ways in which its purpose can be realized; and the larger that entropy, the greater is the probability that it will be realized. Thus the principle of maximum entropy applies also here.

In other words, while the idea of a microscopic final cause runs counter to all the instincts of a scientist, a macroscopic final cause is a perfectly familiar and real phenomenon, which we all invoke daily. We can hardly deny the existence of purpose in the Universe when virtually everything we do is done with some definite purpose in mind. Indeed, anybody who fails to pursue some definite long-term purpose in the conduct of his life is dismissed as an idler by his colleagues. Obviously, this is just a familiar fact with no religious connotations – and no anti-religious ones. Every scientist believes in macroscopic final causes without thereby believing in supernatural contravention of the laws of physics. The wielder of the final cause is not suspending physical law; he is merely choosing the Hamiltonian with which some system evolves according to physical law. To fail to see this is to generate the most fantastic, mystical nonsense.


So we have the wager from Pascal, and God as the ultimate ’Hamiltonian chooser’ from Jaynes?

Sunday, December 6, 2009

Sociology of Science

This is the first part of the comments section in the model comparison chapter (Chapter 20) from Jaynes’ book (emphasis mine).

Actual scientific practice does not really obey Ockham’s razor, either in its previous ’simplicity’ form or in our revised ’plausibility’ form. As so many of us have deplored, the attractive new hypothesis or model, which accounts for the facts in such a neat, plausible way that you want to believe it at once, is usually pooh-poohed by the official Establishment in favor of some drab, complicated, uninteresting one; or, if necessary, in favor of no alternative at all. The progress of science is carried forward mostly by the few fundamental dissenting innovators, such as Copernicus, Galileo, Newton, Laplace, Darwin, Mendel, Pasteur, Boltzmann, Einstein, Wegener, Jeffreys – all of whom had to undergo this initial rejection and attack. In the cases of Galileo, Laplace, and Darwin, these attacks continued for more than a century after their deaths. This is not because their new hypotheses were faulty – quite the contrary – but because this is part of the sociology of science (and, indeed, of all scholarship). In any field, the Establishment is seldom in pursuit of the truth, because it is composed of those who sincerely believe that they are already in possession of it.

The sociology of science is an interesting topic that’s been brought forcefully into the public perception by the recent kerfuffle over the leaked UEA CRU emails. Hans von Storch has an interesting guest post over on Roger Pielke’s site discussing some of the concerns along with suggestions for improving the sustainability of science.

I think ’sustainability of science’ is his way of saying maintaining long-term credibility. Being honest about the uncertainties and not using science to support a ’preconceived political agenda of something good’. This is an unarguably good thing. A hard thing for sure, but something no one would argue against out loud. The term Pielke gives for the behaviour exhibited by the CRU scientists is ’stealth advocacy’. When you wrap the mantle of Science (relevant Anchorman audio clip, it really is relevant, the relevant part is at the very end) around your advocacy and misrepresent the actual state of knowledge to decision makers and laypeople, then you aren’t living up to that particular sort of honesty that Feynman exhorted scientists to uphold.

Friday, December 4, 2009

Bayesian Climate Model Averaging

Another instalment in the 'lack-of-climate-model-validation-bothers-me' series (see Lindzen's talk for a good intro). I've been reading Jaynes' book lately so naturally the Bayesian approach to the issue seems most germane. The whole climate science / public policy intersection can be viewed as one big decision theory problem (acting to maximize utility under uncertainty). To come out of that game well you generally need to have good models (hopefully with nice, tight predictive distributions) and smooth, gently sloping loss functions. I'll leave the loss functions for now and focus on the modelling aspect (since that's what I'm familiar with).
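To make the decision-theory framing concrete, here's a minimal expected-loss sketch. Everything in it is made up for illustration: the predictive distributions for warming, the two candidate actions, and the loss functions are assumptions, not anything from the papers discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictive distributions for warming over some horizon (deg C):
# a 'tight' one and a 'wide' one, both centered on the same value.
tight = rng.normal(loc=2.0, scale=0.3, size=100_000)
wide  = rng.normal(loc=2.0, scale=1.5, size=100_000)

# Hypothetical loss functions for two candidate actions.
def loss_do_little(dT):
    # cheap if warming turns out small, grows quickly if it turns out large
    return 0.2 * np.maximum(dT, 0.0) ** 2

def loss_do_much(dT):
    # fixed up-front cost, much less sensitive to how much warming occurs
    return 1.0 + 0.02 * np.maximum(dT, 0.0) ** 2

for name, samples in [("tight", tight), ("wide", wide)]:
    e_little = loss_do_little(samples).mean()  # expected loss, action 1
    e_much = loss_do_much(samples).mean()      # expected loss, action 2
    best = "do little" if e_little < e_much else "do much"
    print(f"{name} predictive distribution: "
          f"E[loss | do little] = {e_little:.2f}, E[loss | do much] = {e_much:.2f} -> {best}")
```

With these made-up numbers the cheap action wins under the tight predictive distribution and loses under the wide one; the point is only that the width of the predictive distribution can flip the decision, which is why validated models with quantified uncertainty matter at the policy end of the problem.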

Validation (comparing the model predictions to experimental observations) is generally what allows you to find out if you've made good choices of what model structure to use, what physics to include and what physics to neglect. In most applications of computational physics this is a straightforward (if sometimes expensive) process. The problem is harder with climate models. We can't do designed experiments on the Earth.
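Here's a minimal sketch of what I mean by that comparison, with made-up numbers (the predictions, observations, and uncertainty estimates below are all hypothetical, and real V&V practice treats numerical error and experimental uncertainty much more carefully):

```python
import numpy as np

# Hypothetical data: model predictions and experimental observations at a few
# conditions, with an estimate of the observation uncertainty (1-sigma).
prediction  = np.array([1.02, 1.48, 2.10, 2.95])
observation = np.array([1.00, 1.52, 2.25, 3.10])
obs_sigma   = np.array([0.05, 0.05, 0.08, 0.10])

# Two simple validation metrics: relative error, and how many standard
# deviations the prediction sits from the measurement.
rel_error = (prediction - observation) / observation
z_score   = (prediction - observation) / obs_sigma

for i, (r, z) in enumerate(zip(rel_error, z_score)):
    flag = "ok" if abs(z) < 2 else "disagreement beyond the observational noise"
    print(f"case {i}: rel. error = {100 * r:+.1f}%, z = {z:+.1f} ({flag})")
```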

Here are a couple of choice quotes from Reichler and Kim 2007 about the difficulties of climate model validation.
Several important issues complicate the model validation process. First, identifying model errors is difficult because of the complex and sometimes poorly understood nature of climate itself, making it difficult to decide which of the many aspects of climate are important for a good simulation. Second, climate models must be compared against present (e.g., 1979-1999) or past climate, since verifying observations for future climate are unavailable. Present climate, however, is not an independent data set since it has already been used for the model development (Williamson 1995). On the other hand, information about past climate carries large inherent uncertainties, complicating the validation process of past climate simulations (e.g., Schmidt et al. 2004). Third, there is a lack of reliable and consistent observations for present climate, and some climate processes occur at temporal or spatial scales that are either unobservable or unresolvable. Finally, good model performance evaluated from the present climate does not necessarily guarantee reliable predictions of future climate (Murphy et al. 2004).

The above quoted paper is a comparison of three generations of IPCC-family models. The study shows improvement in prediction of modern climate as the models improve from 1990 to 2001 to 2007. It also shows that the ensemble mean is more skilled than any individual model (more on this later). The reasons given to explain the improvement make intuitive sense:
Two developments, more realistic parameterizations and finer resolutions, are likely to be most responsible for the good performance seen in the latest model generation. For example, there has been a constant refinement over the years in how sub-grid scale processes are parameterized in models. Current models also tend to have higher vertical and horizontal resolution than their predecessors. Higher resolution reduces the dependency of models on parameterizations, eliminating problems since parameterizations are not always entirely physical. That increased resolution improves model performance has been shown in various previous studies (e.g., Mullen and Buizza 2002, Mo et al. 2005, Roeckner et al. 2006).

A problem faced by climate modelers is that it is unlikely that we'll be able to run grid-resolved solutions of the climate within the lifetime of anyone now living (us CFD guys have the same problem with grid resolution scaling for high Reynolds number flows). There will always be a need for 'sub-grid' parameterizations; the hope is that eventually they will become "entirely physical" and well calibrated (if you think they already are, then you have been taken in by someone's propaganda).
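As an aside on why grid-resolved solutions are so far out of reach, here's a back-of-the-envelope sketch using the standard Kolmogorov-scaling estimate that resolving all the scales of a turbulent flow takes on the order of Re^(9/4) grid points; the Reynolds numbers are just illustrative round numbers, not specific to any particular flow or to the climate system.

```python
# Back-of-the-envelope: DNS-style grid requirements scale roughly like
# Re**(9/4) (the Kolmogorov estimate).  The Reynolds numbers below are just
# illustrative round numbers.
for Re in (1e4, 1e6, 1e8):
    points = Re ** 2.25
    print(f"Re = {Re:.0e}: ~{points:.1e} grid points to resolve all scales")
```

Even at petascale, the top of that range is hopeless, which is why 'sub-grid' models (turbulence closures for us, physics parameterizations for the climate folks) aren't going away.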

Bayesian model averaging (BMA) is one way to account for our uncertainty in model structure / physics choices. Instead of choosing a 'right' model, we get predictive distributions for things we care about by marginalizing over the uncertain model structures (and the uncertain parameters too); there's a small sketch of the idea after the excerpt below. This paper shows that it is a useful procedure for short-term forecasting. The benefit with short-term forecasts is that we can evaluate the accuracy by closing the loop between predictions and observations. Min and Hense apply this idea to the IPCC AR4 coupled-climate models. Here's a short snippet from that paper providing some motivation for the use of BMA:
However, more than 50% of the models with anthropogenic-only forcing cannot reproduce the observed warming reasonably. This indicates the important role of natural forcing although other factors like different climate sensitivity, forcing uncertainty, and a climate drift might be responsible for the discrepancy in anthropogenic-only models. Besides, Bayesian and conventional skill comparisons demonstrate that a skill-weighted average with the Bayes factors (Bayesian model averaging, BMA) overwhelms the arithmetic ensemble mean and three other weighted averages based on conventional statistics, illuminating future applicability of BMA to climate predictions.
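Here's the small sketch of the BMA idea mentioned above. It is not the Min and Hense procedure; the 'models', the calibration data, the Gaussian error model standing in for the marginal likelihood, and the equal prior model probabilities are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up truth and a short calibration record with observation noise.
truth = np.linspace(0.0, 1.0, 10)
obs = truth + rng.normal(scale=0.1, size=truth.size)

# Three hypothetical 'models': the truth plus a model-specific bias and noise.
models = {
    "M1": truth + 0.02 + rng.normal(scale=0.05, size=truth.size),
    "M2": truth - 0.03 + rng.normal(scale=0.05, size=truth.size),
    "M3": truth + 0.15 + rng.normal(scale=0.05, size=truth.size),
}

# Log-likelihood of the observations under each model with a Gaussian error
# model -- a crude stand-in for the marginal likelihood a real BMA would use.
sigma = 0.1
log_like = {k: float(-0.5 * np.sum(((obs - m) / sigma) ** 2)) for k, m in models.items()}

# Posterior model weights, assuming equal prior probability for each model.
shift = max(log_like.values())
w = {k: np.exp(v - shift) for k, v in log_like.items()}
total = sum(w.values())
w = {k: v / total for k, v in w.items()}
print("BMA weights:", {k: round(v, 3) for k, v in w.items()})

# The BMA prediction marginalizes over model choice: a weight-averaged mixture
# instead of a single 'best' model.
bma = sum(w[k] * models[k] for k in models)
print("RMS error of the BMA mixture vs truth:", round(float(np.sqrt(np.mean((bma - truth) ** 2))), 3))
```

A full treatment would also marginalize over each model's uncertain parameters and would report the whole mixture distribution rather than just its mean, but the weighting-by-evidence mechanism is the same.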

The ensemble means or Bayesian averages tend to outperform individual models, but why is this? Here's what R&K2007 has to say:
Our results indicate that multi-model ensembles are a legitimate and effective means to improve the outcome of climate simulations. As yet, it is not exactly clear why the multi-model mean is better than any individual model. One possible explanation is that the model solutions scatter more or less evenly about the truth (unless the errors are systematic), and the errors behave like random noise that can be efficiently removed by averaging. Such noise arises from internal climate variability (Barnett et al. 1994), and probably to a much larger extent from uncertainties in the formulation of models (Murphy et al. 2004; Stainforth et al. 2005).
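The 'errors behave like random noise' explanation is easy to demonstrate with a toy Monte Carlo; everything here is made up. Give each of N models an error that is part shared systematic bias and part independent scatter, and watch the ensemble-mean error drop with N until it hits the bias floor.

```python
import numpy as np

rng = np.random.default_rng(2)

truth = 1.0
systematic_bias = 0.05   # shared by all models; averaging cannot remove it
random_error = 0.5       # independent model-to-model scatter

for n in (1, 4, 16, 64, 256):
    trials = []
    for _ in range(2000):
        ensemble = truth + systematic_bias + rng.normal(scale=random_error, size=n)
        trials.append(abs(ensemble.mean() - truth))
    print(f"N = {n:3d}: mean |ensemble-mean error| = {np.mean(trials):.3f}")
```

The independent part of the error averages away roughly like 1/sqrt(N), but the shared bias does not, which is consistent with the systematic-bias point in the ensemble-size quote below.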

Another interesting paper that explores this finds that models which have good scores on the calibration data do not tend to outperform other models over a subsequent validation period.
Error in the ensemble mean decreases systematically with ensemble size, N, and for a random selection as approximately 1/N^a, where a lies between 0.6 and 1. This is larger than the exponent of a random sample (a = 0.5) and appears to be an indicator of systematic bias in the model simulations.

This should not be surprising; it is very difficult to get all of the physics right (and remove the systematic bias) when you aren't able to do no-kidding validation experiments. They begin their conclusion with:
In our analysis there is no evidence of future prediction skill delivered by past performance-based model selection. There seems to be little persistence in relative model skill, as illustrated by the percentage turnover in Figure 3. We speculate that the cause of this behavior is the non-stationarity of climate feedback strengths. Models that respond accurately in one period are likely to have the correct feedback strength at that time. However, the feedback strength and forcing is not stationary, favoring no particular model or groups of models consistently.

This means it is very difficult to protect ourselves from 'over-fitting' the models to our available historical record, and it certainly indicates that we should be cautious in basing policy decisions on climate model forecasts. The 'science is settled' crowd, while busy banging the consensus drum and clamouring for urgent action (NOW!), never seems to offer this sort of nuanced approach to policy though.

If you have read any good, recent climate model validation papers please post them in the comments. Please don't post polemics about polar bears and arctic sea ice, my skepticism is honest, your activism should be too.

For some further Bayes Model Averaging / Model Selection check out:

(isn't Google Books cool?)

Sunday, November 29, 2009

CRU emails

An interesting thought from an old engineer (emphasis mine):
These guys called climate scientists have not done any more physics or chemistry than I did. A lifetime in engineering gives you a very good antenna. It also cures people of any self belief they cannot be wrong. You clear up a lot of messes during a lifetime in engineering. I could be wrong on global warming – I know that – but the guys on the other side don't believe they can ever be wrong.
-- David Holland, FOI requester, electrical engineer

The big clue is hiding the raw data. No engineer or scientist worth his salt should expect anyone to believe his analysis if he is unwilling to share the raw data along with enough detail that someone else could reproduce (or not) his results. Hiding the data betrays astounding arrogance; when you do that you are basically saying, "I get the last word, no one else could possibly come up with a better analysis method than what I've done."

A couple of climate scientists provide the sort of level-headed response I'd expect from an honest researcher. Note especially Judy Curry's thoughts on the data (emphasis mine):
The HADCRU surface climate dataset needs public documentation that details the time period and location of individual station measurements used in the data set, statistical adjustments to the data, how the data were analyzed to produce the climatology, and what measurements were omitted and why. If these data and metadata are unavailable, I would argue that the data set needs to be reprocessed (presumably the original raw data is available from the original sources). Climate data sets should be regularly reprocessed as new data becomes available and analysis methods improve.

More on data sharing from the emails themselves (emphasis mine):
And the issue of with-holding data is still a hot potato, one that affects both you and Keith (and Mann). Yes, there are reasons -- but many *good* scientists appear to be unsympathetic to these. The trouble here is that with-holding data looks like hiding something, and hiding means (in some eyes) that it is bogus science that is being hidden.
-- Tom Wigley, 1254756944.txt

Color me unsympathetic.

It's too bad those fine fellows didn't listen to words from a wise man back in 1999 (emphasis mine):
I have worked with the UEA group for 20+ years and have great respect for them and for their work. Of course, I don’t agree with everything they write, and we often have long (but cordial) arguments about what they think versus my views, but that is life. Indeed, I know that they have broad disagreements among themselves, so to refer to them as "the UEA group", as though they all march in lock-step seems bizarre.

As for thinking that it is "Better that nothing appear, than something unnacceptable to us" .....as though we are the gatekeepers of all that is acceptable in the world of paleoclimatology seems amazingly arrogant. Science moves forward whether we agree with individiual articles or not....
-- Raymond S. Bradley, 0924532891.txt

This arrogance is the symptom of a group of folks who have convinced each other that theirs is a righteous cause to advocate rather than an intellectual position to hold with humility in the face of honest uncertainty and new understanding.

Wednesday, November 25, 2009

Converging and Diverging Views

I was brushing up on my maximum entropy and probability theory the other day and came across a great passage in Jaynes' book about convergence and divergence of views. He applies basic Bayesian probability theory to the concept of changing public opinion in the face of new data, especially the effect prior states of knowledge (prior probabilities) can have on the dynamics. The initial portion of section 5.3 is reproduced below (a little numerical sketch of the divergence mechanism follows the excerpt).

5.3 Converging and diverging views (pp. 126 – 129)

Suppose that two people, Mr A and Mr B have differing views (due to their differing prior information) about some issue, say the truth or falsity of some controversial proposition S. Now we give them both a number of new pieces of information or ’data’, D_1, D_2, ..., D_n, some favorable to S, some unfavorable. As n increases, the totality of their information comes to be more nearly the same, therefore we might expect that their opinions about S will converge toward a common agreement. Indeed, some authors consider this so obvious that they see no need to demonstrate it explicitly, while Howson and Urbach (1989, p. 290) claim to have demonstrated it.

Nevertheless, let us see for ourselves whether probability theory can reproduce such phenomena. Denote the prior information by I_A, I_B, respectively, and let Mr A be initially a believer, Mr B be a doubter:



P(S|I_A) ≃ 1,     P(S|I_B) ≃ 0     (5.16)

after receiving data D, their posterior probabilities are changed to



P(S|D I_A) = P(S|I_A) P(D|S I_A) / P(D|I_A)     (5.17)




P(S|D I_B) = P(S|I_B) P(D|S I_B) / P(D|I_B)     (5.17)

If D supports S, then since Mr A already considers S almost certainly true, we have P(D|S I_A) ≃ P(D|I_A), and so



P(S|D I_A) ≃ P(S|I_A)     (5.18)

Data D have no appreciable effect on Mr A’s opinion. But now one would think that if Mr B reasons soundly, he must recognize that P(D|S I_B) > P(D|I_B), and thus



P(S|D I_B) > P(S|I_B)     (5.19)

Mr B’s opinion should be changed in the direction of Mr A’s. Likewise, if D had tended to refute S, one would expect that Mr B’s opinions are little changed by it, whereas Mr A’s will move in the direction of Mr B’s. From this we might conjecture that, whatever the new information D, it should tend to bring different people into closer agreement with each other, in the sense that



|P(S|D I_A) - P(S|D I_B)| < |P(S|I_A) - P(S|I_B)|     (5.20)

Although this can be verified in special cases, it is not true in general.

Is there some other measure of ‘closeness of agreement’ such as log[P(S|D I_A)/P(S|D I_B)], for which this converging of opinions can be proved as a general theorem? Not even this is possible; the failure of probability theory to give this expected result tells us that convergence of views is not a general phenomenon. For robots and humans who reason according to the consistency desiderata of Chapter 1, something more subtle and sophisticated is at work.

Indeed, in practice we find that this convergence of opinions usually happens for small children; for adults it happens sometimes but not always. For example, new experimental evidence does cause scientists to come into closer agreement with each other about the explanation of a phenomenon.

Then it might be thought (and for some it is an article of faith in democracy) that open discussion of public issues would tend to bring about a general consensus on them. On the contrary, we observe repeatedly that when some controversial issue has been discussed vigorously for a few years, society becomes polarized into opposite extreme camps; it is almost impossible to find anyone who retains a moderate view. The Dreyfus affair in France, which tore the nation apart for 20 years, is one of the most thoroughly documented examples of this (Bredin, 1986). Today, such issues as nuclear power, abortion, criminal justice, etc., are following the same course. New information given simultaneously to different people may cause a convergence of views; but it may equally well cause a divergence.

This divergence phenomenon is observed also in relatively well-controlled psychological experiments. Some have concluded that people reason in a basically irrational way; prejudices seem to be strengthened by new information which ought to have the opposite effect. Kahneman and Tversky (1972) draw the opposite conclusion from such psychological tests, and consider them an argument against Bayesian methods.

But now in view of the above ESP example, we wonder whether probability theory might also account for this divergence and indicate that people may be, after all, thinking in a reasonably rational, Bayesian way (i.e. in a way consistent with their prior information and prior beliefs). The key to the ESP example is that our new information was not

S ≡ fully adequate precautions against error or deception were taken, and Mrs Stewart did in fact deliver that phenomenal performance.

It was that some ESP researcher has claimed that S is true. But if our prior probability for S is lower than our prior probability that we are being deceived, hearing this claim has the opposite effect on our state of belief from what the claimant intended.

The same is true in science and politics; the new information a scientist gets is not that an experiment did in fact yield this result, with adequate protection against error. It is that some colleague has claimed that it did. The information we get from TV evening news is not that a certain event actually happened in a certain way; it is that some news reporter claimed that it did.

Scientists can reach agreement quickly because we trust our experimental colleagues to have high standards of intellectual honesty and sharp perception to detect possible sources of error. And this belief is justified because, after all, hundreds of new experiments are reported every month, but only about once in a decade is an experiment reported that turns out later to have been wrong. So our prior probability for deception is very low; like trusting children, we believe what experimentalists tell us.

In politics, we have a very different situation. Not only do we doubt a politician’s promises, few people believe that news reporters deal truthfully and objectively with economic, social, or political topics. We are convinced that virtually all news reporting is selective and distorted, designed not to report the facts, but to indoctrinate us in the reporter’s socio-political views. And this belief is justified abundantly by the internal evidence in the reporter’s own product – every choice of words and inflection of voice shifting the bias invariably in the same direction.

Not only in political speeches and news reporting, but wherever we seek for information on political matters, we run up against this same obstacle; we cannot trust anyone to tell us the truth, because we perceive that everyone who wants to talk about it is motivated either by self-interest or by ideology. In political matters, whatever the source of information, our prior probability for deception is always very high. However, it is not obvious whether this alone can prevent us from coming to agreement.

Jaynes, E.T., Probability Theory: The Logic of Science (Vol 1), Cambridge University Press, 2003.
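To make the divergence mechanism concrete, here's a minimal numerical sketch of the kind of update Jaynes describes: the datum is not S itself but a claim that S is true, and each person carries a different prior probability that the claim would be made even if S were false (deception, self-deception, or error). The specific numbers are made up; only the structure follows the chapter.

```python
# The datum is C = "someone asserts that S is true", not S itself.  Each person
# has a prior for S and a prior probability that the claim would be made even
# if S were false.  All numbers are made up for illustration.

def posterior(prior_S, p_claim_if_false, p_claim_if_true=1.0):
    """P(S | claim) by Bayes' theorem."""
    num = p_claim_if_true * prior_S
    return num / (num + p_claim_if_false * (1.0 - prior_S))

people = {
    # name: (prior for S, prior prob. the claim gets made even if S is false)
    "Mr A (trusts the claimant)": (0.40, 0.10),
    "Mr B (suspects deception)":  (0.20, 0.90),
}

posts = {}
for name, (prior_S, p_deceive) in people.items():
    posts[name] = posterior(prior_S, p_deceive)
    print(f"{name}: prior {prior_S:.2f} -> posterior {posts[name]:.2f}")

priors = [p for p, _ in people.values()]
gap_before = abs(priors[0] - priors[1])
gap_after = abs(list(posts.values())[0] - list(posts.values())[1])
print(f"gap before: {gap_before:.2f}, gap after: {gap_after:.2f}")
```

Both people update on exactly the same new information, and both do so by Bayes' theorem; with these numbers Mr A moves sharply toward belief while Mr B barely moves, so the gap between them widens. The divergence comes entirely from their differing priors about deception, which is Jaynes' point.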

Tuesday, November 17, 2009

Cut F-35 Flight Testing?

I just read an interesting article about the F-35. It repeats the standard press-release-based storyline that every major defense contractor / program office offers: "we've figured out what's wrong, we've spent gazillions on simulation (and the great-for-marketing computer graphics that result) so now we know our system so well that we can't afford not to cut flight testing."

I would have just dismissed this article as more of the "same-ole-same-ole", but at the bottom they have a quote I just love:
But scrimping on flight testing isn’t a good idea, said Bill Lawrence of Aledo, a former Marine test pilot turned aviation legal consultant.

"They build the aircraft . . . and go fly it," he said. "Then you come back and fix the things you don’t like."

No amount of computer processing power, Lawrence said, can tell Lockheed and the military what they don’t know about the F-35.

Thank God for testers. This goes to the heart of the problems I have with the model validation (or lack thereof) in climate change science.

From reading some of these posts, you might think I'm some sort of Luddite who doesn't like CFD or other high-fidelity simulations, but you'd be wrong. I love CFD, I've spent a big chunk of my (admittedly short) professional career developing or running or consuming CFD (and similar) analyses. I just didn't fall in love with the colourful fluid dynamics. I understand that bending metal (or buying expensive stuff) based on the results of modeling unconstrained by reality is the height of folly.

That folly is exacerbated by an 'old problem'. In another article on the F-35, I found this gem:
No one should be shocked by the continuing delays and cost increases, said Hans Weber, a prominent aerospace engineering executive and consultant. "The airplane is so ambitious, it was bound to have problems."

The real problem, Weber said, is an old one. Neither defense contractors nor military or civilian leaders in government will admit how difficult a program will be and, when problems do arise, how challenging and costly they will be to fix. "It's really hard," Weber said, "for anyone to be really honest."

Defense acquisition in a nutshell; Pentagon Wars, anyone?

Defense procurement veterans said they fear that the Pentagon will be tempted to cut the flight-testing plan yet again to save money.

"You need to do the testing the engineers originally said needed to be done," Weber said. By cutting tests now, "you kick the can down the road," and someone else has to deal with problems that will inevitably arise later.

Classic.

Thursday, November 5, 2009

North-River Open House Invite


The North River Coffee House is hosting an open house with free coffee, cookies and live Jazz. It's also an opportunity to hear about economic development efforts in the area from the Salem Avenue Business Association Economic Development Committee.

Here's a little geography for those unfamiliar with the area. According to a study by some folks at Wright State, the 'North River Corridor' extends along Salem Ave from the Great Miami River to Catalpa Dr.

View North River Corridor in a larger map

Wednesday, November 4, 2009

Cooler Heads with Dr Richard Lindzen

Embedded below is Dr Lindzen's talk, Deconstructing Global Warming, with interesting quotes transcribed from each part.

Part 1:


Part 2:


Part 3:
However, Hockfield's [President of MIT] response does reveal the characteristic feature of the current presentation of this issue: namely any and every statement is justified by an appeal to authority rather than by scientific argument.



Part 4:
This is the meat of the presentation.
Once people regard authority as a sufficient argument, then you're free to make any claim you wish, especially if you are in high government office, refer to authority, the authority frequently doesn't say what it is claimed to have said, but any advocate in high office can rest assured that some authority will come along to assent.

...most claims of evidence of global warming are guilty of something called the prosecutor's fallacy. In its simple form it runs as follows: for instance, if A shoots B, with near certainty there will be gunpowder evidence on A's hands. On the other hand, in a legal proceeding this is often flipped, so that it is argued that if C has evidence of gunpowder on his hands, then C shot B.

However, with global warming the line of argument is even sillier.

This rhetorical question is my favourite; it is the most damning criticism:
Whoever heard of science where you have a model, and you test it by running it over and over again?



Part 5:
Now it turns out, data is the way you usually check calculations.

This is where Dr Lindzen compares some hypotheses with satellite measurements. He does this by forcing models with observations and watching the response, then comparing this response to the measured response. He also discusses feedback in the climate system.
By the current logic, or illogic, of science, the fact that all models agree is used to imply that the result is 'robust'.

See this paper for a criticism of this type of model 'verification'.

The bottom line Dr Lindzen comes to after analysing the satellite data:
What we see is the very foundation of the issue of global warming is wrong.



Part 6:
In answer to a question about how many scientists "believe" / "don't believe":
How do you describe these people [scientists] as being for or against, it's meaningless, they're opportunistic.



Whenever I'm asked if I'm a climate skeptic, I always answer no. To the extent possible I am a climate denier. That's because skepticism assumes there is a good a priori case, but you have doubts about it. There isn't even a good a priori case.

I think the thing that bothers me most is the lack of rigorous model validation (defined for example in this AIAA standard). When you're providing analysis for decision makers you really should use the results of a validated model and give a quantitative uncertainty analysis. That enables ethical and responsible risk management; anything else is negligent (in the professional sense if not the criminal).