Blogs are a great place for throwaway code. You write throwaway code to learn about solving a particular problem. For me, it is usually implementing different sorts of numerical methods. Posting the results to the web just provides a bit of a goad to pick something interesting and work it to completion. Who knows, it may turn out to be useful to someone else too.

Writing throwaway code is also a good opportunity to practice the complete computational physics implementation cycle, which is (according to me at least):

- Define the governing equations in your favorite computer algebra system (CAS)

- Do the symbol manipulation required to apply the chosen discretization or approximation to the governing equations.

- Export appropriate computation-ready expressions to your favorite language (I use Maxima’s f90() function).

- Find an analytical solution or generate forcing functions for a manufactured solution using your CAS.

- Write the auxiliary code to turn your expressions into useful routines and functions (loops and logic statements).

- Compile and verify that the implementation meets the design order of accuracy with a grid convergence study.
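The last step, the grid convergence study, can be sketched in a few lines of Python. This is a minimal illustration of the idea rather than any code from the post: a second-order central difference stands in for the implementation under test, and the observed order of accuracy is computed from errors on successively halved step sizes.

```python
import numpy as np

def central_diff_error(f, dfdx, x0, h):
    # error of a second-order central difference, standing in
    # for the implementation whose order we want to verify
    approx = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
    return abs(approx - dfdx(x0))

# errors on successively halved step sizes
hs = [0.1 / 2**k for k in range(4)]
errs = [central_diff_error(np.sin, np.cos, 1.0, h) for h in hs]

# observed order of accuracy between consecutive grids; for a
# verified implementation this should approach the design order
# (2 for central differences)
orders = [np.log(errs[i] / errs[i + 1]) / np.log(2.0)
          for i in range(len(errs) - 1)]
print(orders)
```

If the observed orders settle near the design order as the grid is refined, the implementation passes; if they plateau somewhere lower, something (a boundary stencil, a sign, a missing term) is wrong.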

Becoming fast at going from governing equations to a formally verified implementation is important. According to Philip Greenspun, fast means proficient; and formal verification means you aren't fooling yourself.

Research has shown that the only way we really learn and improve our skills is through deliberate practice. Deliberate practice has a few defining characteristics:

- activity explicitly intended to improve performance

- reaches for objectives just beyond your level of competence

- provides critical feedback

- involves high levels of repetition

Does the little numerical-methods throwaway-code process described above satisfy these criteria? Learning and implementing a new method on an old equation set, or an old method on a new equation set, is explicitly intended to improve performance. Since it is a new method, it is just beyond your level of competence (you haven't implemented it successfully yet). The critical feedback comes from passing the verification test, which confirms that the implementation meets the design order of accuracy. That leaves the repetition level, which is a bit harder to achieve. For simple governing equations and simple methods (like this example) it takes very little time to go from governing equations to a verified implementation. But part of the progression is extending to more complicated equations (non-linear instead of linear, multi-dimensional instead of one-dimensional, vector instead of scalar), and those can take quite a while to get through the entire cycle because the complexity tends to explode.

Just getting practice at using the available software tools is important too. They are increasingly powerful, which means they come with a significant learning curve. If you want to solve complicated PDEs quickly and correctly, being proficient with a CAS is the only way to get there.

The (open source) toolset I like to use is Maxima + Fortran + F2Py + Python (scipy and matplotlib especially). The symbol manipulation is quick and correct in Maxima, which produces correct low-level Fortran expressions (through f90()). Once these are generated it is straightforward to wrap them in loops and conditionals and compile them into Python modules with F2Py. This gives you custom compiled routines for your problem which were mostly generated rather than being written by hand. They are callable from a high level language so you can use them alongside all of the great high level stuff that comes in Scipy and the rest of Python. Especially convenient is Python’s unittest module, which provides a nice way to organize and automate a test suite for these routines.
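As a sketch of how unittest organizes such a verification suite, here is a self-contained example. The `d2dx2` function is a hypothetical pure-Python stand-in for an F2Py-wrapped Fortran routine (the real thing would be a compiled module); the test case checks that it meets its design order of accuracy on a pair of refined grids.

```python
import unittest
import numpy as np

def d2dx2(u, dx):
    # second-order central difference for the second derivative;
    # a stand-in here for an F2Py-wrapped Fortran routine
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2

class TestSecondDerivative(unittest.TestCase):
    def test_design_order(self):
        # exact solution sin(x) has second derivative -sin(x)
        errs = []
        for n in (33, 65, 129):  # grid spacing halves each time
            x = np.linspace(0.0, np.pi, n)
            dx = x[1] - x[0]
            err = np.max(np.abs(d2dx2(np.sin(x), dx) + np.sin(x)[1:-1]))
            errs.append(err)
        # observed order between the first two grids
        p = np.log(errs[0] / errs[1]) / np.log(2.0)
        self.assertAlmostEqual(p, 2.0, delta=0.1)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Running the file executes the whole suite, so every compiled routine can carry its own automated convergence check.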

The Radau-type Runge-Kutta example I mentioned earlier is a good example for another reason: it is clearly throwaway code. Why? It has no modularity whatsoever. All of the different parts of the calculation (spatial derivatives, forcing functions, the time-step update) are jammed together in one big ugly expression. It is not broadly useful; I can't take it and use it to integrate a different problem, because it only solves the simple harmonic oscillator. I can get away with this monolithic mess because the governing equation is a simple one-dimensional scalar PDE and Maxima doesn't make mistakes in combining and simplifying all of those operations. In real production code those things need to be separated out into their own modular routines. That way different spatial derivative approximations, different forcing functions, and different time-integration methods can be mixed and matched (and independently tested) to efficiently solve a wide variety of problems.
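To make the contrast concrete, here is a minimal Python sketch of the modular alternative (the function names and interfaces are mine, not from the post, and a classical RK4 step stands in for the Radau scheme). The right-hand side, the stepper, and the driver loop are separate pieces, so any of them can be swapped out or tested on its own.

```python
import numpy as np

def sho_rhs(t, y, omega=1.0):
    # governing equations in their own routine:
    # simple harmonic oscillator, y = [position, velocity]
    return np.array([y[1], -omega**2 * y[0]])

def rk4_step(rhs, t, y, dt):
    # classical RK4 update, independent of the governing equations
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(rhs, stepper, y0, t0, t1, n):
    # driver loop that accepts any rhs/stepper combination
    dt = (t1 - t0) / n
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(n):
        y = stepper(rhs, t, y, dt)
        t += dt
    return y

# integrate over one period; the state should return near [1, 0]
y = integrate(sho_rhs, rk4_step, [1.0, 0.0], 0.0, 2 * np.pi, 200)
```

Swapping in a different `rhs` or a different `stepper` requires no changes to the driver, which is exactly the mix-and-match property the monolithic expression lacks.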

Making sure that your throwaway code really is just that means you won’t make the biggest mistake related to throwaway code: not throwing it away.
