Sunday, November 29, 2009

CRU emails

An interesting thought from an old engineer (emphasis mine):
These guys called climate scientists have not done any more physics or chemistry than I did. A lifetime in engineering gives you a very good antenna. It also cures people of any self-belief that they cannot be wrong. You clear up a lot of messes during a lifetime in engineering. I could be wrong on global warming – I know that – but the guys on the other side don't believe they can ever be wrong.
-- David Holland, FOI requester, electrical engineer

The big clue is hiding the raw data. No engineer or scientist worth his salt should expect anyone to believe his analysis if he is unwilling to share the raw data along with enough detail that someone else could reproduce (or not) his results. Hiding the data betrays astounding arrogance: when you do that, you are basically saying, "I get the last word; no one else could possibly come up with a better analysis method than what I've done."

A couple of climate scientists provide the sort of level-headed response I'd expect from an honest researcher. Note especially Judy Curry's thoughts on the data (emphasis mine):
The HADCRU surface climate dataset needs public documentation that details the time period and location of individual station measurements used in the data set, statistical adjustments to the data, how the data were analyzed to produce the climatology, and what measurements were omitted and why. If these data and metadata are unavailable, I would argue that the data set needs to be reprocessed (presumably the original raw data is available from the original sources). Climate data sets should be regularly reprocessed as new data becomes available and analysis methods improve.

More on data sharing from the emails themselves (emphasis mine):
And the issue of with-holding data is still a hot potato, one that affects both you and Keith (and Mann). Yes, there are reasons -- but many *good* scientists appear to be unsympathetic to these. The trouble here is that with-holding data looks like hiding something, and hiding means (in some eyes) that it is bogus science that is being hidden.
-- Tom Wigley, 1254756944.txt

Color me unsympathetic.

It's too bad those fine fellows didn't listen to words from a wise man back in 1999 (emphasis mine):
I have worked with the UEA group for 20+ years and have great respect for them and for their work. Of course, I don’t agree with everything they write, and we often have long (but cordial) arguments about what they think versus my views, but that is life. Indeed, I know that they have broad disagreements among themselves, so to refer to them as "the UEA group", as though they all march in lock-step seems bizarre.

As for thinking that it is "Better that nothing appear, than something unacceptable to us" ... as though we are the gatekeepers of all that is acceptable in the world of paleoclimatology seems amazingly arrogant. Science moves forward whether we agree with individual articles or not....
-- Raymond S. Bradley, 0924532891.txt

This arrogance is the symptom of a group of folks who have convinced each other that theirs is a righteous cause to advocate rather than an intellectual position to hold with humility in the face of honest uncertainty and new understanding.

Wednesday, November 25, 2009

Converging and Diverging Views

I was brushing up on my maximum entropy and probability theory the other day and came across a great passage in Jaynes' book about convergence and divergence of views. He applies basic Bayesian probability theory to the concept of changing public opinion in the face of new data, especially the effect prior states of knowledge (prior probabilities) can have on the dynamics. The initial portion of section 5.3 is reproduced below.

5.3 Converging and diverging views (pp. 126 – 129)

Suppose that two people, Mr A and Mr B, have differing views (due to their differing prior information) about some issue, say the truth or falsity of some controversial proposition S. Now we give them both a number of new pieces of information or 'data', D_1, D_2, ..., D_n, some favorable to S, some unfavorable. As n increases, the totality of their information comes to be more nearly the same, therefore we might expect that their opinions about S will converge toward a common agreement. Indeed, some authors consider this so obvious that they see no need to demonstrate it explicitly, while Howson and Urbach (1989, p. 290) claim to have demonstrated it.

Nevertheless, let us see for ourselves whether probability theory can reproduce such phenomena. Denote the prior information by I_A, I_B, respectively, and let Mr A be initially a believer, Mr B be a doubter:



P(S|I_A) ≃ 1,    P(S|I_B) ≃ 0    (5.16)

after receiving data D, their posterior probabilities are changed to



P(S|D I_A) = P(S|I_A) P(D|S I_A) / P(D|I_A)    (5.17)

P(S|D I_B) = P(S|I_B) P(D|S I_B) / P(D|I_B)    (5.17)

If D supports S, then since Mr A already considers S almost certainly true, we have P(D|S I_A) ≃ P(D|I_A), and so



P(S|D I_A) ≃ P(S|I_A)    (5.18)

Data D have no appreciable effect on Mr A's opinion. But now one would think that if Mr B reasons soundly, he must recognize that P(D|S I_B) > P(D|I_B), and thus



P(S|D I_B) > P(S|I_B)    (5.19)

Mr B's opinion should be changed in the direction of Mr A's. Likewise, if D had tended to refute S, one would expect that Mr B's opinions are little changed by it, whereas Mr A's will move in the direction of Mr B's. From this we might conjecture that, whatever the new information D, it should tend to bring different people into closer agreement with each other, in the sense that



|P(S|D I_A) − P(S|D I_B)| < |P(S|I_A) − P(S|I_B)|    (5.20)

Although this can be verified in special cases, it is not true in general.

Is there some other measure of 'closeness of agreement', such as log[P(S|D I_A)/P(S|D I_B)], for which this converging of opinions can be proved as a general theorem? Not even this is possible; the failure of probability theory to give this expected result tells us that convergence of views is not a general phenomenon. For robots and humans who reason according to the consistency desiderata of Chapter 1, something more subtle and sophisticated is at work.

Indeed, in practice we find that this convergence of opinions usually happens for small children; for adults it happens sometimes but not always. For example, new experimental evidence does cause scientists to come into closer agreement with each other about the explanation of a phenomenon.

Then it might be thought (and for some it is an article of faith in democracy) that open discussion of public issues would tend to bring about a general consensus on them. On the contrary, we observe repeatedly that when some controversial issue has been discussed vigorously for a few years, society becomes polarized into opposite extreme camps; it is almost impossible to find anyone who retains a moderate view. The Dreyfus affair in France, which tore the nation apart for 20 years, is one of the most thoroughly documented examples of this (Bredin, 1986). Today, such issues as nuclear power, abortion, criminal justice, etc., are following the same course. New information given simultaneously to different people may cause a convergence of views; but it may equally well cause a divergence.

This divergence phenomenon is observed also in relatively well-controlled psychological experiments. Some have concluded that people reason in a basically irrational way; prejudices seem to be strengthened by new information which ought to have the opposite effect. Kahneman and Tversky (1972) draw the opposite conclusion from such psychological tests, and consider them an argument against Bayesian methods.

But now in view of the above ESP example, we wonder whether probability theory might also account for this divergence and indicate that people may be, after all, thinking in a reasonably rational, Bayesian way (i.e. in a way consistent with their prior information and prior beliefs). The key to the ESP example is that our new information was not

S ≡ fully adequate precautions against error or deception were taken, and Mrs Stewart did in fact deliver that phenomenal performance.

It was that some ESP researcher has claimed that S is true. But if our prior probability for S is lower than our prior probability that we are being deceived, hearing this claim has the opposite effect on our state of belief from what the claimant intended.

The same is true in science and politics; the new information a scientist gets is not that an experiment did in fact yield this result, with adequate protection against error. It is that some colleague has claimed that it did. The information we get from TV evening news is not that a certain event actually happened in a certain way; it is that some news reporter claimed that it did.

Scientists can reach agreement quickly because we trust our experimental colleagues to have high standards of intellectual honesty and sharp perception to detect possible sources of error. And this belief is justified because, after all, hundreds of new experiments are reported every month, but only about once in a decade is an experiment reported that turns out later to have been wrong. So our prior probability for deception is very low; like trusting children, we believe what experimentalists tell us.

In politics, we have a very different situation. Not only do we doubt a politician’s promises, few people believe that news reporters deal truthfully and objectively with economic, social, or political topics. We are convinced that virtually all news reporting is selective and distorted, designed not to report the facts, but to indoctrinate us in the reporter’s socio-political views. And this belief is justified abundantly by the internal evidence in the reporter’s own product – every choice of words and inflection of voice shifting the bias invariably in the same direction.

Not only in political speeches and news reporting, but wherever we seek for information on political matters, we run up against this same obstacle; we cannot trust anyone to tell us the truth, because we perceive that everyone who wants to talk about it is motivated either by self-interest or by ideology. In political matters, whatever the source of information, our prior probability for deception is always very high. However, it is not obvious whether this alone can prevent us from coming to agreement.

Jaynes, E.T., Probability Theory: The Logic of Science (Vol 1), Cambridge University Press, 2003.
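Jaynes' divergence mechanism is easy to reproduce numerically. Here's a minimal sketch in Python (the numbers are invented for illustration; only the update rule (5.17) comes from the text). Two people hear the same report, D = "the source asserts S", but their differing prior information about the source gives them different sampling distributions P(D|S I), so the same data pushes their beliefs in opposite directions:

    # Bayes' rule, eq. (5.17): P(S|D I) = P(S|I) P(D|S I) / P(D|I),
    # with the denominator expanded over S and not-S by the sum rule.
    def posterior(prior_s, p_d_given_s, p_d_given_not_s):
        num = prior_s * p_d_given_s
        return num / (num + (1.0 - prior_s) * p_d_given_not_s)

    # Mr A's prior information: the source is mostly honest, so an
    # assertion of S is far more probable when S is actually true.
    a = posterior(prior_s=0.5, p_d_given_s=0.95, p_d_given_not_s=0.10)

    # Mr B's prior information: the source is a propagandist who pushes S
    # hardest precisely when it is false, so the same assertion is
    # evidence against S.
    b = posterior(prior_s=0.5, p_d_given_s=0.20, p_d_given_not_s=0.80)

    print(f"Mr A: 0.50 -> {a:.2f}")  # 0.90: belief in S rises
    print(f"Mr B: 0.50 -> {b:.2f}")  # 0.20: belief in S falls

Neither man has violated probability theory; they disagree about the likelihood P(D|S I) because they disagree about the messenger, which is exactly Jaynes' point about politics and the evening news.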

Tuesday, November 17, 2009

Cut F-35 Flight Testing?

I just read an interesting article about the F-35. It repeats the standard press-release-based storyline that every major defense contractor / program office offers: "we've figured out what's wrong, we've spent gazillions on simulation (and the great-for-marketing computer graphics that result), so now we know our system so well that we can't afford not to cut flight testing."

I would have just dismissed this article as more of the "same-ole-same-ole", but at the bottom they have a quote I just love:
But scrimping on flight testing isn’t a good idea, said Bill Lawrence of Aledo, a former Marine test pilot turned aviation legal consultant.

"They build the aircraft . . . and go fly it," he said. "Then you come back and fix the things you don’t like."

No amount of computer processing power, Lawrence said, can tell Lockheed and the military what they don’t know about the F-35.

Thank God for testers. This goes to the heart of the problems I have with the model validation (or lack thereof) in climate change science.

From reading some of these posts, you might think I'm some sort of Luddite who doesn't like CFD or other high-fidelity simulations, but you'd be wrong. I love CFD; I've spent a big chunk of my (admittedly short) professional career developing, running, or consuming CFD (and similar) analyses. I just didn't fall in love with the colourful fluid dynamics. I understand that bending metal (or buying expensive stuff) based on the results of modeling unconstrained by reality is the height of folly.

That folly is exacerbated by an 'old problem'. In another article on the F-35, I found this gem:
No one should be shocked by the continuing delays and cost increases, said Hans Weber, a prominent aerospace engineering executive and consultant. "The airplane is so ambitious, it was bound to have problems."

The real problem, Weber said, is an old one. Neither defense contractors nor military or civilian leaders in government will admit how difficult a program will be and, when problems do arise, how challenging and costly they will be to fix. "It's really hard," Weber said, "for anyone to be really honest."

Defense acquisition in a nutshell; Pentagon Wars, anyone?

Defense procurement veterans said they fear that the Pentagon will be tempted to cut the flight-testing plan yet again to save money.

"You need to do the testing the engineers originally said needed to be done," Weber said. By cutting tests now, "you kick the can down the road," and someone else has to deal with problems that will inevitably arise later.

Classic.

Thursday, November 5, 2009

North-River Open House Invite


The North River Coffee House is hosting an open house with free coffee, cookies, and live jazz. It's also an opportunity to hear about economic development efforts in the area from the Salem Avenue Business Association Economic Development Committee.

Here's a little geography for those unfamiliar with the area. According to a study by some folks at Wright State, the 'North River Corridor' extends along Salem Ave from the Great Miami River to Catalpa Dr.


Wednesday, November 4, 2009

Cooler Heads with Dr Richard Lindzen

Embedded below is Dr Lindzen's talk, Deconstructing Global Warming, with interesting quotes transcribed from each part.

Part 1:


Part 2:


Part 3:
However, Hockfield's [President of MIT] response does reveal the characteristic feature of the current presentation of this issue: namely, any and every statement is justified by an appeal to authority rather than by scientific argument.



Part 4:
This is the meat of the presentation.
Once people regard authority as a sufficient argument, then you're free to make any claim you wish, especially if you are in high government office, refer to authority; the authority frequently doesn't say what it is claimed to have said, but any advocate in high office can rest assured that some authority will come along to assent.

...most claims of evidence of global warming are guilty of something called the prosecutor's fallacy. In its simple form it runs as follows: for instance, if A shoots B, with near certainty there will be gunpowder evidence on A's hands. On the other hand, in a legal proceeding this is often flipped, so that it is argued that if C has evidence of gunpowder on his hands, then C shot B.

However, with global warming the line of argument is even sillier.
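A toy Bayes calculation makes the flip obvious (all numbers are invented for illustration): even when residue given guilt is near certain, guilt given residue can remain tiny, because innocent sources of residue vastly outnumber actual shooters.

    # Prosecutor's fallacy: P(residue | shot B) = 0.99 does not license
    # the flipped claim P(shot B | residue) = 0.99. Hypothetical numbers.
    p_shooter = 1e-6             # prior: a random person is the shooter
    p_res_given_shooter = 0.99   # residue almost certain given guilt
    p_res_given_innocent = 0.01  # hunters, range shooters, contamination

    num = p_shooter * p_res_given_shooter
    p_guilt = num / (num + (1.0 - p_shooter) * p_res_given_innocent)
    print(f"P(shot B | residue) = {p_guilt:.1e}")  # about 1e-4, not 0.99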

This rhetorical question is my favourite; it is the most damning criticism:
Whoever heard of science where you have a model, and you test it by running it over and over again?



Part 5:
Now it turns out, data is the way you usually check calculations.

This is where Dr Lindzen compares hypotheses with satellite measurements. He does this by forcing models with observations and watching the response, then comparing this response to the measured response. He also discusses feedback in the climate system; a cartoon of that kind of feedback diagnosis is sketched below.
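This is only a cartoon of the approach, with entirely made-up numbers (none of the data here is real): regress the change in outgoing top-of-atmosphere flux against the change in surface temperature, and compare the slope with the no-feedback (Planck) response of roughly 3.3 W/m^2 per K.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical anomalies: sea-surface temperature [K] and outgoing
    # top-of-atmosphere flux [W/m^2], synthesized with a 4.5 W/m^2/K response.
    dT = rng.normal(0.0, 0.3, size=200)
    dFlux = 4.5 * dT + rng.normal(0.0, 0.5, size=200)

    slope = np.polyfit(dT, dFlux, 1)[0]
    print(f"flux response ~ {slope:.1f} W/m^2 per K")
    # A slope steeper than the ~3.3 W/m^2/K Planck response means the system
    # sheds extra energy as it warms: net negative feedback. A shallower
    # slope would mean amplification: net positive feedback.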
By the current logic, or illogic, of science, the fact that all models agree is used to imply that the result is 'robust'.

See this paper for a criticism of this type of model 'verification'.

The bottom line Dr Lindzen comes to after analysing the satellite data:
What we see is the very foundation of the issue of global warming is wrong.



Part 6:
In answer to a question about how many scientists "believe" / "don't believe":
How do you describe these people [scientists] as being for or against? It's meaningless; they're opportunistic.



Whenever I'm asked if I'm a climate skeptic, I always answer no. To the extent possible, I am a climate denier. That's because skepticism assumes there is a good a priori case, but you have doubts about it. There isn't even a good a priori case.

I think the thing that bothers me most is the lack of rigorous model validation (defined, for example, in this AIAA standard). When you're providing analysis for decision makers, you really should use the results of a validated model and give a quantitative uncertainty analysis. That enables ethical and responsible risk management; anything else is negligent (in the professional sense if not the criminal).
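To make that concrete, here is a minimal sketch of the kind of quantitative check such standards formalize (the quantities and numbers are hypothetical, not drawn from the AIAA document): judge the model-versus-test comparison error against the combined numerical, input, and experimental uncertainty, not against zero.

    import math

    # Hypothetical validation comparison for a single response quantity.
    model_prediction = 102.0  # simulation result, e.g. peak load [kN]
    measured_value = 97.5     # test observation [kN]

    u_num = 1.5    # numerical (discretization) uncertainty estimate
    u_input = 2.0  # uncertainty propagated from model inputs
    u_exp = 1.0    # experimental measurement uncertainty

    E = model_prediction - measured_value                # comparison error
    u_val = math.sqrt(u_num**2 + u_input**2 + u_exp**2)  # combined uncertainty

    print(f"E = {E:+.1f} kN, u_val = {u_val:.1f} kN")
    if abs(E) <= u_val:
        print("Agreement is within the combined uncertainty.")
    else:
        print("Model-form error exceeds the uncertainty; investigate.")

If |E| lands outside u_val, no amount of pretty computer graphics changes the fact that the model disagrees with reality by more than the error bars can excuse.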