To go by the news archive, apparently 2008 and 2009 were slightly less bad than we thought, or perhaps 2007 was an abnormally worse-than-we-thought year.
If you are a thoughtful citizen, trying to be well-informed, then you might be forgiven for thinking that we’re all about to stew in our own juices or be run over by category 6 super-storms any minute. The steady worse-than-we-thought rhythm of the popular press would mark the time in your attempts to stay informed and engaged. This is actually a topic covered fairly well over on RealClimate (their stealth advocacy is disturbing at times, but they do talk sense on occasion).
When the reporters behind the news stories in that archive write the ‘worse than we thought’ copy, they are not lying. So what is going on? This is where measurement precision comes in. Each individual piece of research is usually focused on one method or technique for measuring or estimating a quantity (whether it is Arctic sea ice extent, global mean surface temperature, the cost of climate change on cotton in Calcutta, etc.). The precision of any individual technique is usually not that good on its own, so you get pretty broad probability distributions for your estimates of the thing you are trying to measure. This leads to claims like, “new research gives much more credence to huge climate sensitivity to CO2 emissions.” These claims are not mistaken on their own, but the proper way to evaluate them is in the context of all the other ways we have of measuring the same thing. Figure 2 shows a couple of “measurement” distributions with large spread (or entropy) along with the distribution you’d get from combining them (assuming they were independent measurements).
Notice that the combined estimate gives much lower probability to the extremes to which many of the individual estimates assign fairly significant probability. James Annan has done some work on exactly that sort of “evaluation in context” for climate sensitivity to a doubling of CO2. This post over on the Air Vent is similar in spirit, in that it tries to combine satellite and surface station temperature data into a single “picture” of what’s going on.
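You can see the effect with a minimal sketch, assuming each independent estimate is Gaussian (the standard product-of-Gaussians result for independent measurements of the same quantity). The particular numbers and the 6 °C threshold below are made up for illustration, not taken from any of the studies mentioned:

```python
import math

def combine(estimates):
    """Combine independent Gaussian estimates (mu, sigma) of the same
    quantity by precision weighting: the combined precision is the sum
    of the individual precisions, so the combined spread is narrower."""
    precision = sum(1.0 / s**2 for _, s in estimates)
    mu = sum(m / s**2 for m, s in estimates) / precision
    return mu, math.sqrt(1.0 / precision)

def tail_prob(mu, sigma, x):
    """P(X > x) for X ~ N(mu, sigma^2), via the complementary error function."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2.0)))

# Two hypothetical broad estimates of climate sensitivity (deg C per doubling).
a = (3.0, 1.5)
b = (2.5, 1.0)
mu, sigma = combine([a, b])

# Each individual estimate puts non-trivial probability on sensitivity > 6 C;
# the combined estimate puts far less there.
print(tail_prob(*a, 6.0), tail_prob(*b, 6.0), tail_prob(mu, sigma, 6.0))
```

Running it shows the combined distribution is tighter than either input and downweights the extreme tail that each broad estimate, taken alone, leaves open.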
Unfortunately, in the climate policy debate, what many alarmists (some might call them believers) claim as the weight of scientific evidence is actually just their prior inclination toward a certain way of thinking, fed by popular press coverage of the issue, and what many deniers claim is the overwhelming weight of scientific chicanery is much the same thing. Jaynes understood this effect in terms of people being approximately Bayesian in their belief updating. He claimed that the way to combat a divergence of views was to be as open and transparent with methods, process, and data as possible (and he claimed science has this sort of approach built in, hence the ability of scientists to come to consensus more quickly than the public). Jeff Id puts it well: “The correct way to talk to a skeptic is to EXPLAIN YOUR POSITION AND GIVE CLEAR ANSWERS. That’s it.” I think Jaynes, and most reasonable folks, would agree.
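Jaynes's divergence effect can be sketched in a few lines. Two readers see the same report asserting some claim A, and both update by Bayes' rule; they differ only in how reliable they think the source is. The numbers here are invented for illustration:

```python
def posterior(prior_A, p_claim_given_A, p_claim_given_not_A):
    """Bayes' rule for P(A | the source asserts A)."""
    num = p_claim_given_A * prior_A
    return num / (num + p_claim_given_not_A * (1.0 - prior_A))

# Both readers start undecided (prior 0.5) and see the identical report.
# The trusting reader thinks the source mostly asserts A when it's true;
# the distrusting reader thinks the source mostly asserts A when it's false.
trusting = posterior(0.5, 0.9, 0.1)      # belief in A rises to about 0.9
distrusting = posterior(0.5, 0.2, 0.8)   # belief in A falls to about 0.2
print(trusting, distrusting)
```

Same evidence, coherent updating on both sides, and the two readers end up further apart than they started, which is why transparency about methods and data (shrinking the disagreement over source reliability) matters more than piling on more headlines.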