*innovation poetry* to put your thoughts along the right lines of effort. This recent post, though, has a huge, interesting technical iceberg riding beneath the surface.

If you run an experiment where you are 100% sure of the outcome, your learning is zero. You already knew how it would go, so there was no need to run the experiment. The least costly experiment is the one you didn’t have to run, so don’t run experiments when you know how they’ll turn out. If you run an experiment where you are 0% sure of the outcome, your learning is also zero. These experiments are like buying a lottery ticket: you learn that the number you chose didn’t win, but nothing about how to choose next week’s number. You’re down a dollar, but no smarter.

The learning ratio is maximized when energy is minimized (the simplest experiment is run) and the probability that the experimental results match your hypothesis (your expectation) is 50%. That way, half of the experiments confirm your hypothesis and the other half tell you why it was off track.

Maximize The Learning Ratio
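Shipulski's zero-learning endpoints and 50% sweet spot fall straight out of the binary entropy function, which measures the expected information from a yes/no experiment. A quick sketch (the sample probabilities are just illustrative):

```python
import math

def binary_entropy(p):
    """Expected information (in bits) from a yes/no experiment whose
    hypothesis is confirmed with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome teaches nothing
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Learning vanishes at the extremes and peaks at p = 0.5 (one full bit):
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"p = {p:.2f} -> {binary_entropy(p):.3f} bits")
```

The curve is symmetric about 0.5, so being 75% sure your hypothesis holds is exactly as informative a test as being 75% sure it fails.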

In the very simple case of accepting or rejecting a hypothesis, Shipulski's advice amounts to maximum entropy sampling: test where you have the greatest uncertainty between the two outcomes.

*Classical Scientific Method*
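As a minimal sketch of that selection rule: among several candidate experiments, run the one whose predicted outcome carries the most entropy, i.e. the one closest to a coin flip. The candidate labels and predictive probabilities below are made up for illustration:

```python
import math

def binary_entropy(p):
    """Entropy in bits of a yes/no outcome with success probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def most_informative(candidates):
    """Maximum entropy sampling: among candidate experiments, run the
    one whose outcome we are currently most uncertain about."""
    return max(candidates, key=lambda name: binary_entropy(candidates[name]))

# Hypothetical predictive probabilities that each experiment would
# confirm the hypothesis:
beliefs = {"A": 0.95, "B": 0.55, "C": 0.10}
print(most_informative(beliefs))  # "B" -- nearest to 50/50
```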

This is an initial step towards Bayesian Experimental Design. Repeatedly applying this method is what the folks who search for exoplanets term "Bayesian Adaptive Exploration."

*Modern Scientific Method*

I think we intuitively do something like Bayesian Adaptive Exploration when we're searching for knowledge, but this provides a mathematical and computational framework for getting really rigorous about our search. This is especially important for wicked problems where our intuition can be unreliable.
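To make the loop concrete, here is a toy version of Bayesian Adaptive Exploration (not Loredo's exoplanet machinery; the threshold-finding setup is invented for illustration): keep a posterior, run the experiment with the most uncertain predicted outcome, update, and repeat.

```python
import random

# Toy Bayesian Adaptive Exploration: locate an unknown threshold t* in
# [0, 1]. Each "experiment" tests a point x and reports whether x >= t*.
# We keep a grid posterior over t* and always test where the predicted
# outcome is closest to 50/50 -- the point of maximum expected learning.

random.seed(1)
t_star = random.random()                 # hidden truth
grid = [i / 1000 for i in range(1001)]   # candidate values of t*
post = [1.0 / len(grid)] * len(grid)     # uniform prior

for step in range(10):
    # Predictive probability that a test at x returns True is P(t* <= x);
    # pick the x where that cumulative probability is nearest 0.5.
    cum, best_x, best_gap = 0.0, grid[0], 1.0
    for t, p in zip(grid, post):
        cum += p
        if abs(cum - 0.5) < best_gap:
            best_gap, best_x = abs(cum - 0.5), t
    outcome = best_x >= t_star           # run the "experiment"
    # Bayes update: zero out values of t* inconsistent with the outcome.
    post = [p if (t <= best_x) == outcome else 0.0
            for t, p in zip(grid, post)]
    z = sum(post)
    post = [p / z for p in post]

estimate = sum(t * p for t, p in zip(grid, post))
print(f"truth {t_star:.3f}, estimate {estimate:.3f}")
```

Ten adaptive yes/no experiments pin the threshold down to roughly a thousandth of the interval, which is the point of the framework: each test is chosen to buy the most information given everything learned so far.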

### Further reading:

[1] Shewry, M. C.; Wynn, H. P. (1987), "Maximum entropy sampling", Journal of Applied Statistics, 14 (2): 165–170, doi:10.1080/02664768700000020

[2] Lindley, D. V. (1956), "On a Measure of the Information Provided by an Experiment", Annals of Mathematical Statistics, 27 (4): 986–1005, doi:10.1214/aoms/1177728069, http://projecteuclid.org/euclid.aoms/1177728069

[3] Chaloner, Kathryn; Verdinelli, Isabella (1995), "Bayesian experimental design: a review" (PDF), Statistical Science, 10 (3): 273–304, doi:10.1214/ss/1177009939

[4] Loredo, T. Optimal Scheduling of Exoplanet Observations via Bayesian Adaptive Exploration. GISS Workshop — 25 Feb 2011.

[5] Loredo, T. Bayesian methods for exoplanet science: Planet detection, orbit estimation, and adaptive observing. ADA VI — 6 May 2010.

[6] Bayesian experimental design, on Wikipedia: maximize the utility (expected information) of your next test point.

[7] Dueling Bayesians. The link to the video is broken, but the subsection on Bayesian design of validation experiments is relevant. Your design strategy has to change if your experimental goal becomes validation rather than discovery.
