Tuesday, December 22, 2009

Chaos: A Very Short Introduction (Book Review)

I got an early Christmas present from my favourite experimentalist. It's a book called "Chaos: A Very Short Introduction," by Leonard Smith (2007, Oxford University Press, 180pp, paperback, ISBN: 978-0-19-285378-3), and it is a good, quick read. There is a short review in the Journal of Physics A which says, in part,
Anyone who ever tried to give a popular science account of research knows that this is a more challenging task than writing an ordinary research article. Lenny Smith brilliantly succeeds to explain in words, in pictures and by using intuitive models the essence of mathematical dynamical systems theory and time series analysis as it applies to the modern world.

[...]

However, the book will be of interest to anyone who is looking for a very short account on fundamental problems and principles in modern nonlinear science.
The only criticism offered in that review was of the low resolution of some of the figures (which is hard to fix, given the book's pocket-size format).

I'm reproducing most of one of the Amazon reviews here because the very criticisms it says make the book unsuitable as an 'intro to chaos' are the things I enjoyed about it:
This book starts out promising but, as one goes along, it drifts farther and farther from what an introduction to chaos should be.

In particular, the book turns out to be largely a discussion of modeling and forecasting, with some emphasis on the relevant implications of chaos. Moreover, most of the examples and applications relate to weather and climate, which becomes boring after a while (especially considering the abundance of other options). Smith's bio reveals that this is exactly his specialty, so the book appears to be heavily shaped by his background and interests, rather than what's best for a general audience. As a result, many standard and important topics in chaos theory receive little or no mention, and I think the book fails as a proper introduction to chaos.

[...]

Considering all of this, I can recommend the book only to people who are particularly interested in modeling, forecasting, and the relevant implications of chaos, especially as this relates to weather and climate. In this context, Smith's discussion of the differences between mathematical, physical, statistical, and philosophical perspectives is particularly insightful and useful.

Well, since I think the intersection between public policy and computational physics is an interesting one, this book turned out to be right up my alley. It was an entertaining read, and I did not have to work too hard to translate the simple language Smith used to appeal to a wide audience back into familiar technical concepts. That's no mean feat.

I do have one fairly significant criticism of his treatment of the tractability of probabilistic forecasts for chaotic physical systems when we don't know the correct model. If you've read some of my posts on Jaynes' book you can probably guess what I'm going to say. But first, here's what Smith says:
With her perfect model, our 21st-century demon can compute probabilities that are useful as such. Why can't we? There are statisticians who argue we can, including perhaps a reviewer of this book, who form one component of a wider group of statisticians who call themselves Bayesians. Most Bayesians quite reasonably insist on using the concepts of probability correctly [this was Jaynes' main pedagogical point], but there is a small but vocal cult among them that confuse the diversity seen in our models for uncertainty in the real world. Just as it is a mistake to use the concept of probability incorrectly, it is an error to apply them where they do not belong.
There is then some illustration of this 'model inadequacy' problem, which is correct insofar as it notes that the model is not reality, only a possibly useful shadow, but it fails to support the assertion that the 'vocal cult' is misapplying probability theory. Smith continues,
Would it not be a double-sense to proffer probability forecasts one knew were conditioned on an imperfect model as if they reflected the likelihood of future events, regardless of what small print appeared under the forecast?
This is an oblique criticism of the Bayesian approach, which would, of course, give predictive distributions conditional on the model or models used in the analysis. Smith's criticism is that the ensemble of models may not contain the 'correct' model, so the posterior predictive distribution is not a probability in the frequentist sense. Of course, no Bayesian would claim that it is, only that it best captures our current state of knowledge about the future and is the only procedure that enables coherent inference in general. Everything else is ad hockery, as Jaynes would say. Any prediction of the future is conditioned on our present state of knowledge (which includes, among other things, the choice of models) and the data we have. The only question then is, do we explicitly acknowledge that fact or not?
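To make the conditioning explicit (my notation, not Smith's): if D is the data in hand and {M_k} the set of candidate models, the Bayesian posterior predictive distribution for a future observation x is

    p(x \mid D) = \sum_k p(x \mid M_k, D) \, p(M_k \mid D)

which is openly an average over the models we actually have. The conditioning on the model set sits right there in the formula rather than in the small print, and nothing is asserted about models outside the set.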

Another thing that bothered me was the sort of dismissive way he commented on the current state of model adequacy in the physical sciences:
... is the belief in the existence of mathematically precise Laws of Nature, whether deterministic or stochastic, any less wishful thinking than the hope that we will come across any of our various demons offering forecasts in the woods?

In any event, it seems we do not currently know the relevant equations for simple physical systems, or for complicated ones.
There are plenty of practising engineers and applied physicists using non-linear models to make successful predictions who I think would be quite surprised to hear that their models, or the conservation laws on which they are based, do not exist.

Other than those two minor quibbles, it was a very good book and an enjoyable read.

A nice feature of the book (quite suitable for a 'very short intro') is the 'further reading' list at the end; a couple of the entries there looked interesting.

2 comments:

  1. Smith also wrote Chapter 2 (preview on Google Books) of Nonlinear Dynamics and Statistics, the focus of which is the predictability of nonlinear systems.

    Here's a snippet:
    "Our agent achieves an accountable forecast by evolving a perfect ensemble under a perfect model; once imperfect models are in use, no perfect ensemble exists. Accepting this forces us to change the interpretation and goals of forecasts. In fact, it calls into question what is meant by the state of a physical system."

    Ensemble here refers to the set of perturbed initial conditions given to the model to reflect observational uncertainty. Smith's claim is that you only get an 'accountable probabilistic forecast' by evolving such an ensemble under a perfect model.

    The Bayesian rightly observes that no model has a prior probability of exactly 1 (in the perfect case) or exactly 0 (for every other model); every real model has some finite prior. The reason is that our state of knowledge is limited and imperfect: it would be dishonest for us to assign a prior of 1 or 0 to a model, because we can never really be that sure. This fact is exploited in Bayesian climate model averaging.
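
    As a toy sketch of what that averaging looks like (my own illustration, not from the book or the chapter; the logistic-map candidates, parameter values, and noise level are all invented for the example), evolve an ensemble of perturbed initial conditions under two imperfect models and weight each model's forecast by its posterior probability:

        import numpy as np

        rng = np.random.default_rng(0)

        def logistic(x, r):
            return r * x * (1.0 - x)

        # 'Truth' (unknown to the forecaster) and two imperfect candidate models.
        r_true = 3.9
        models = {"M1": 3.85, "M2": 3.95}

        # Noisy observations of the true trajectory.
        sigma = 0.01
        x, obs = 0.3, []
        for _ in range(20):
            x = logistic(x, r_true)
            obs.append(x + rng.normal(0.0, sigma))
        obs = np.array(obs)

        # Posterior model probabilities p(M_k | D): equal priors, Gaussian
        # likelihoods of the one-step-ahead prediction errors.
        log_post = {name: -0.5 * np.sum(((obs[1:] - logistic(obs[:-1], r)) / sigma) ** 2)
                    for name, r in models.items()}
        z = max(log_post.values())
        w = {k: np.exp(v - z) for k, v in log_post.items()}
        total = sum(w.values())
        w = {k: v / total for k, v in w.items()}

        # Ensemble forecast: perturb the last observation to reflect
        # observational uncertainty, then evolve under each candidate model.
        ens = np.clip(obs[-1] + rng.normal(0.0, sigma, size=1000), 1e-6, 1 - 1e-6)
        forecasts = {}
        for name, r in models.items():
            e = ens.copy()
            for _ in range(5):   # five steps ahead
                e = logistic(e, r)
            forecasts[name] = e

        # Model-averaged predictive mean: sum_k p(M_k | D) * E[x | M_k, D].
        bma_mean = sum(w[k] * forecasts[k].mean() for k in models)
        print(w, bma_mean)

    The weights are, of course, conditional on the two-model set; neither candidate is 'perfect', but the averaged forecast is still an honest summary of what the data and that set of models support, which is the Bayesian reply to Smith's objection.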
