Tuesday, October 27, 2009

The Stick Does Not Pogo

With the upcoming test flight of the Ares I-X, there is lots of talk about high vibration levels and design problems with the big solid first stage. One common misconception I've noticed (and pointed out) a couple of times is that folks think Ares I might have pogo oscillations. Pogo is a specific type of propulsion-structure interaction that occurs in liquid rockets, not solids.

Here's a succinct description of what pogo is (Rubin, 1970, emphasis mine):
Many liquid rocket vehicles have experienced longitudinal vibration because of an instability arising from interaction of the vehicle structure with the propulsion system. The vibrations, nicknamed "pogo" after the jumping stick, have occurred principally in the first longitudinal structural mode during operation of the first liquid-propellant stage of a launch vehicle.

From the same report, here's a figure of the accelerations characteristic of pogo.


The dynamic instability being described here is not simply resonant forcing. There is a positive feedback loop.
A block diagram of the positive feedback process which can lead to instability is shown in figure 2. Structural vibratory accelerations induce the propulsion system to generate forces which can then intensify the original vibration. When the system becomes unstable, oscillations will appear spontaneously.


--Rubin (1970), emphasis mine


Rubin gives a further description of the two types of pogo.
Two basic kinds of propulsion-system behavior have produced pogo instability. The common form of pogo, called engine-coupled pogo, has been experienced to a significant degree on certain configurations of the Thor, Titan, and Saturn space vehicles. This form results from the action of the tank-to-engine propellant feed-lines and the engine itself (fig. 3). When the vehicle vibrates longitudinally, the pump and the propellant in the flexible tank undergo oscillatory motions. These two motions produce oscillating flow in the feed-line and in the pump's discharge line. The flow oscillations lead to oscillations in engine thrust and in pressure at the pump inlet, which then act as regenerative forcing functions on the vehicle structure. The feedback loop is thereby closed. Although a pump is included in figure 3, pogo can also occur in pressure-fed systems.


A much less common form of pogo results from the pneumatic behavior of an active pressurization system for the propellant tank ullage. This form is known as ullage-coupled pogo, and has been experienced only on Atlas vehicles immediately after liftoff. It has also been referred to as pneumatic-coupled pogo, or "bloating". A simplified schematic of an active pneumatic system for pressurizing a tank is shown in figure 4. When the vehicle vibrates longitudinally, the tank ullage pressure oscillates because of oscillation of the ullage-volume boundaries. The sense line transmits the pressure oscillation to the regulator. The regulator responds by producing an oscillatory flow of pressurant (i.e., the pressurizing gas) into the supply line which regenerates the ullage-pressure oscillation that acts as an axial forcing function on the vehicle structure, and again the feedback loop is closed.


The big SRB being contemplated for use in Ares I probably does have lots of combustion instabilities which will cause some wicked forcing on the rocket. It may even couple well to the rocket's lower longitudinal modes, but that is not pogo. There is no positive feedback there, just lots of hammering on the structure (and the astronauts).
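Just to make the distinction concrete, here's a throwaway toy model (emphatically not a model of pogo or of Ares I; every number in it is made up). A damped oscillator driven by an external force at resonance rings up to a bounded amplitude set by the damping, while the same oscillator driven by a force fed back from its own velocity grows without bound once the feedback gain exceeds the damping. The second case is a caricature of the closed-loop behavior Rubin is describing.

# Toy illustration only: bounded resonant forcing vs. a positive-feedback
# (self-excited) instability for x'' + 2*zeta*wn*x' + wn^2*x = f
import numpy as np

wn = 2.0 * np.pi   # natural frequency, rad/s (made-up value)
zeta = 0.02        # damping ratio (made-up value)
h = 1.0e-3         # time step, s
nsteps = 20000     # 20 seconds

def peak_response(feedback, gain=0.0):
    """Integrate with semi-implicit Euler and return the peak |x|."""
    x, v, peak = 1.0e-3, 0.0, 0.0
    for i in range(nsteps):
        t = i * h
        f = gain * v if feedback else np.sin(wn * t)
        a = f - 2.0 * zeta * wn * v - wn**2 * x
        v += h * a
        x += h * v
        peak = max(peak, abs(x))
    return peak

# resonant forcing: the response saturates at a level set by the damping
print("forced at resonance, peak |x| = %g" % peak_response(False))
# velocity feedback stronger than the damping: the loop is closed and the
# response grows without bound
print("velocity feedback,   peak |x| = %g" % peak_response(True, gain=10.0 * 2.0 * zeta * wn))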

Friday, October 23, 2009

Numerical Throwaway Code

Blogs are a great place for throwaway code. You write throwaway code to learn about solving a particular problem. For me, it is usually implementing different sorts of numerical methods. Posting the results to the web just provides a bit of a goad to pick something interesting and work it to completion. Who knows, it may turn out to be useful to someone else too.

Writing throwaway code is also a good opportunity to practice the complete computational physics implementation cycle, which is (according to me, at least):


  • Define the governing equations in your favorite computer algebra system (CAS)

  • Do the symbol manipulation required to apply the chosen discretization or approximation to the governing equations.

  • Export appropriate computation-ready expressions to your favorite language (I use Maxima’s f90() function).

  • Find an analytical solution or generate forcing functions for a manufactured solution using your CAS.

  • Write the auxiliary code to turn your expressions into useful routines and functions (loops and logic statements).

  • Compile and verify that the implementation meets the design order of accuracy with a grid convergence study (sketched just below).
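Here's a minimal sketch of what that last step looks like, using a second-order central difference as a stand-in for whatever scheme you've just implemented:

# Sketch of a grid convergence (order-of-accuracy) check: the observed order
# from successive grid refinements should approach the design order, which is
# 2 for this central difference.
import numpy as np

def dfdx_central(f, h):
    """Second order central difference of the samples f with spacing h."""
    return (f[2:] - f[:-2]) / (2.0 * h)

ns = [16, 32, 64, 128]
errors, spacings = [], []
for n in ns:
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    errors.append(np.abs(dfdx_central(np.sin(x), h) - np.cos(x[1:-1])).max())
    spacings.append(h)

for i in range(1, len(ns)):
    p = np.log(errors[i-1] / errors[i]) / np.log(spacings[i-1] / spacings[i])
    print("n = %4d, observed order = %.3f" % (ns[i], p))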

Becoming fast at going from governing equations to a formally verified implementation is important. According to Philip Greenspun, fast means proficient, and formally verified means you aren’t fooling yourself.

Research has shown that the only way we really learn and improve our skills is by deliberate practice. Deliberate practice has a few defining characteristics:


  • activity explicitly intended to improve performance

  • reaches for objectives just beyond your level of competence

  • provides critical feedback

  • involves high levels of repetition

Does the little numerical-methods throwaway code process described above satisfy these criteria? Learning and implementing a new method on an old equation set, or an old method on a new equation set, is explicitly intended to improve performance. Since it is a new method, it is just beyond your level of competence (you haven’t implemented it successfully yet). The critical feedback comes from passing the verification test that confirms the implementation meets the design order of accuracy. That brings us, finally, to repetition. Getting high levels of repetition is a bit harder. For simple governing equations and simple methods (like this example) it takes a very short time to go from governing equations to verified implementation. But part of progressing in competence is extending to more complicated equations (non-linear instead of linear, multi-dimensional instead of one-dimensional, vector instead of scalar), and those can take quite a while to get through the entire cycle because the complexity tends to explode.

Just getting practice at using the available software tools is important too. They are increasingly powerful, which means they come with a significant learning curve. If you want to solve complicated PDEs quickly and correctly, being proficient with a CAS is the only way to get there.

The (open source) toolset I like to use is Maxima + Fortran + F2Py + Python (scipy and matplotlib especially). The symbol manipulation is quick and correct in Maxima, which produces correct low-level Fortran expressions (through f90()). Once these are generated it is straightforward to wrap them in loops and conditionals and compile them into Python modules with F2Py. This gives you custom compiled routines for your problem which were mostly generated rather than being written by hand. They are callable from a high level language so you can use them alongside all of the great high level stuff that comes in Scipy and the rest of Python. Especially convenient is Python’s unittest module, which provides a nice way to organize and automate a test suite for these routines.
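As a sketch of how one of those tests might be organized (the "solver" here is just forward Euler on dy/dt = -y, standing in for a compiled F2Py routine):

# Sketch of a verification test organized with unittest: check that the
# observed order of accuracy matches the design order.
import unittest
import numpy as np

def euler_decay(y0, tend, n):
    """Forward Euler for dy/dt = -y; first order accurate by design."""
    h = tend / float(n)
    y = y0
    for _ in range(n):
        y += h * (-y)
    return y

class TestOrderOfAccuracy(unittest.TestCase):
    design_order = 1.0

    def test_observed_order(self):
        y0, tend = 1.0, 1.0
        errors, steps = [], []
        for n in [64, 128, 256, 512]:
            errors.append(abs(euler_decay(y0, tend, n) - y0 * np.exp(-tend)))
            steps.append(tend / float(n))
        # observed order = least-squares slope of log(error) vs. log(h)
        p = np.polyfit(np.log(steps), np.log(errors), 1)[0]
        self.assertTrue(abs(p - self.design_order) < 0.1)

if __name__ == "__main__":
    unittest.main()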

The Radau-type Runge-Kutta example I mentioned earlier is a good example for another reason: it is clearly throwaway code. Why? It has no modularity whatsoever. All of the different parts of the calculation (spatial derivatives, forcing functions, time-step update) are jammed together in one big ugly expression. It is not broadly useful; I can’t take it and use it to integrate a different problem, because it only solves the simple harmonic oscillator. I can get away with this monolithic mess because the governing equation is a simple one-dimensional scalar ODE and Maxima doesn’t make mistakes in combining and simplifying all of those operations. In real production code those things need to be separated out into their own modular routines. That way different spatial derivative approximations, forcing functions, and time-integration methods can be mixed and matched (and independently tested) to efficiently solve a wide variety of problems.

Making sure that your throwaway code really is just that means you won’t make the biggest mistake related to throwaway code: not throwing it away.

Thursday, October 22, 2009

Response to 'Economic Case for Slashing Carbon Emissions'

I'm reproducing my comment to the opinion piece The Economic Case for Slashing Carbon Emissions here because it apparently didn't make the editorial cut on Yale 360. The article is about a study that supports capping emissions so that atmospheric CO2 levels reach 350 parts per million in this century.

"At first glance, there is a bewildering range of estimates of the costs of climate protection.
...
The outliers ... funded by partisan lobbying groups ... An analysis by journalist Eric Pooley documents the excessive, often uncritical attention given to these studies by the media."

I think an 'uncertainty' analysis that's based on a journalist's survey of reports is pretty poor. The MIT guys have a much better approach to this, i.e. a no-kidding uncertainty analysis using Bayes' Theorem and Monte Carlo sims. It's by no means perfect, but it is an honest analysis rather than an ad hominem attack.

Also, Nordhaus' analysis with his DICE model seems to indicate that just trying to fix CO2 at a certain level could be a pretty poor strategy, and actually end up costing a significant amount more than an optimal approach while still not avoiding many of the supposed costs of environmental damage.

The costs: this is where things get *very* tenuous. The connection between climate change and solid estimates of costs and benefits seems to be on quite poor footing. It's hard even to get the sign of the effect right, not to mention the magnitude. Some studies even show an economic benefit of warming...


I suppose it could have been worded better (all writing is a series of drafts, right?). The point I was trying to make in the first paragraph is that discounting a study because of its funding source isn't science; it's politics. If the study is so fundamentally flawed, it should be easy to point out the errors in its underlying assumptions and methodology (that's science). No need to sling mud, "OMG, Teh Evil Exxon paid for that study!!1!eleventyone!" (that's politics).

The second part of the comment was just pointing out some research that seems to show that trying to control CO2 at a fixed level is not a very economical strategy for avoiding costs due to environmental damage while at the same time avoiding tanking your economy through too much unproductive wealth redistribution (an optimal approach, if you will). Further, if you read that paper about the DICE model, you'll see that the cost function (relating climate change to cost of damage) is very uncertain. So uncertain, in fact, that it can't even be reasonably quantified (and much to his credit, Nordhaus is quite upfront about this). Not only is the magnitude of the effect unquantifiably uncertain, there is question as to the sign of the effect (whether warming is a bad or a good). Which leads me to one of my favourite Lord Kelvin quotes:

In other words: Quantify Your Uncertainty! This is very hard to do when you can't validate your model with properly designed experiments. Here's a graphic taken from a 2006 study about costs of climate change:

Notice anything about the range on the costs of climate change from 'previous studies'? The error bars include zero! We are not even sure of the sign of the effect, much less the magnitude. It may turn out to be good public policy to subsidize carbon dioxide emissions rather than tax them.

It seems that these types of results (DICE model runs) are very sensitive to the chosen discount rate.
The discount rate is therefore the parameter that most influences the 22nd century behaviour of the modelled climate.
-- Schneider (2003)

It is interesting to note that the discount rate in the model runs supporting 'The Economics of 350' conclusions is changed from the default, and some hand-waving justification is given. However, no quantitative sensitivity analysis is presented.
Curiouser and curiouser!
-- Alice, in Lewis Carroll's Alice's Adventures in Wonderland

The discount rate is not actually stated in the narrative; instead, the Stern Review is referenced. The discount rate used there is 1.4%, quite a low rate of return for an investment with a time horizon of a century. In fact, the only way you'll get a majority of folks to invest in something that poor would be either to lie to them or to coerce them with the machinery of state.
A dollar invested in the private sector can provide a stream of consumption at 4 per cent. Social funds are clearly limited. If we cannot invest in every desirable social activity, clearly we should begin by investing in our best social opportunities first. If climate change can earn only a 1.5 per cent return each year, there are many more deserving social activities that we must fund before we get to climate. Although climate impacts are long term, that does not justify using a different price for time.
-- Robert Mendelsohn, Perspective Paper 1.1, 'Global Crises, Global Solutions'

Talk about tweaking the knobs on the model to get the result you're looking for...
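To get a feel for how much that one knob matters over a century-long horizon, here's a quick back-of-the-envelope comparison (just compounding arithmetic, nothing from the DICE model itself):

# Present value today of avoiding one dollar of climate damage 100 years
# from now, at the two discount rates mentioned above.
horizon = 100  # years

for rate in [0.014, 0.04]:
    pv = 1.0 / (1.0 + rate) ** horizon
    print("discount rate %.1f%%: $1 of damage in %d years is worth $%.2f today"
          % (100.0 * rate, horizon, pv))

At 1.4 per cent that future dollar is worth about 25 cents today; at 4 per cent it is worth about 2 cents. That's an order-of-magnitude swing in the benefit side of the ledger from a single assumed parameter.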

Tuesday, October 20, 2009

Antarctic Ice Melt Lowest Ever Measured

That's the sensational headline anyway. Is it part of a significant downward trend though? Here's a graph of ice melt anomaly from the paper: An updated Antarctic melt record through 2009 and its linkages to high-latitude and tropical climate variability

You can use the g3data software to pull data points (that's what I did) if you want to run your own analysis with R. Does a simple statistical analysis support the claim that "there seems to be a downward trend"?

The R code to generate the above graph is shown below.

melt = read.table("melt.dat")
attach(melt)

m.1 = lm(V2 ~ V1)

# Make confidence and prediction intervals
m.1.cinterval = predict(m.1, level=0.95, interval="confidence")
m.1.pinterval = predict(m.1, level=0.95, interval="prediction")

# plot the data and the fits/intervals to a png file
png("melt.png", width=640, height=480)

plot(V1, V2, ylab="Melting Anomaly", xlab="Year")
lines(V1, m.1.cinterval[,1], lty=1)
lines(V1, m.1.cinterval[,2], lty=2)
lines(V1, m.1.cinterval[,3], lty=2)
lines(V1, m.1.pinterval[,2], lty=2)
lines(V1, m.1.pinterval[,3], lty=2)
title("Antarctic Summer Melt Anomaly")
dev.off()

And here's the summary table for the linear model.

Call:
lm(formula = V2 ~ V1)

Residuals:
    Min      1Q  Median      3Q     Max
-1.7507 -0.7252 -0.1028  0.7953  2.2894

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  60.54322   42.88214   1.412    0.169
V1           -0.03035    0.02150  -1.412    0.169

Residual standard error: 0.983 on 28 degrees of freedom
Multiple R-squared: 0.06646, Adjusted R-squared: 0.03312
F-statistic: 1.993 on 1 and 28 DF, p-value: 0.1690

What does that model summary output mean? There really is NOT a statistically significant downward trend of melt anomaly with year (the p-value on the slope is 0.169); the fitted first-order trend is not measurably different from noise. How does this square with the recent reports of runaway melting, though?

Whether the ice is melting too much or too little,
The public is not well served by this constant drumbeat of false alarms fed by computer models manipulated by advocates.
-- Dr. David Wojick, UN IPCC expert reviewer, PhD in Philosophy of Science, and co-founder of the Department of Engineering and Public Policy at Carnegie-Mellon University


These recent posts about climate change stuff were inspired by a post I read about climate change skeptics, which I found because of my Google alerts on things related to 'computational fluid dynamics'. In the post she mentions Freeman Dyson; he's a pretty smart guy.

Concerning the climate models, I know enough of the details to be sure that they are unreliable. They are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data.

It is much easier for a scientist to sit in an air-conditioned building and run computer models, than to put on winter clothes and measure what is really happening outside in the swamps and the clouds. That is why the climate model experts end up believing their own models.
-- Dr. Freeman Dyson, Professor Emeritus of Physics at the Institute for Advanced Study at Princeton, fellow of the American Physical Society, member of the US National Academy of Sciences, and a fellow of the Royal Society of London


My favourite quote on the whole mess comes from a level-headed engineer at MIT, quoted in a short article:
Mort Webster, assistant professor of engineering systems, warned that public discussion over climate change policies gets framed as a debate between the most extreme views on each side. "The world is ending tomorrow, versus it's all a myth," he said. "Neither of those is scientifically correct or socially useful."

Monday, October 19, 2009

JASSM Bootstrap Reliability II

I previously did an analysis of JASSM's reliability using Octave. It's a very simple exercise using the built-in Octave function empirical_rand() and public statements by various folks concerning the flight tests.

As reported by Reuters, JASSM has had success in recent tests. To be quantitative, 15 out of 16 test flights were a success. Certainly good news for a program that's been bedevilled by flight test failures and Nunn-McCurdy breaches.

The previous analysis can be extended to include this new information. This time we'll use Python rather than Octave. There is not (that I know of) a built-in function like empirical_rand() in any of Python's libraries, but it's relatively straightforward to use random integers to index into arrays and accomplish the same thing.

import scipy as sp

def bootstrap(data, nboot):
    """Draw nboot samples from the data array randomly with replacement,
    i.e. a bootstrap sample."""
    bootsample = sp.random.random_integers
    return data[bootsample(0, len(data) - 1, (len(data), nboot))]

Applying this function to our new data vectors to get bootstrap samples is easy.

# reliability data for July 2009, based on Reuters report
d09_jul_tests = 19
d09_jul = sp.ones(d09_jul_tests, dtype=float)
d09_jul[0:3] = 0.0

# generate the bootstrap samples (nboot is the number of bootstrap samples,
# assumed to be set earlier; something like 10000 is typical):
d09_jul_boot = sp.sum(bootstrap(d09_jul, nboot), axis=0) / float(d09_jul_tests)
# find the number of unique reliabilities to set the number of bins for the histogram:
d09_jul_nbins = sp.unique(d09_jul_boot).size



So, is it a 0.9 missile or a 0.8 missile? We should probably be a little more modest in the inferences we wish to draw from such small samples. As reported by Bloomberg the Air Force publicly contemplated cancellation for a missile with reliability distributions like the blue histogram shown below (six of ten failed), while stating that if 13 of 16 were successful (the green histogram) that would be acceptable. The figure below shows these two reliability distributions along with the actual recent performance.

This seems to be a reasonably well-supported inference from these sample sizes: the fixes that resulted in the recent 15 out of 16 successes had a measurable effect when compared to the 4 out of 10 performance.

Not that binary outcomes for reliability are a great measure; they just happen to be easily scrape-able from press releases. The figure below illustrates the problem: there just isn't much info in each 1 or 0, so really cranking up the number of samples only slowly improves the power (reduces the area of overlap between the two distributions).
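Here's a rough sketch of what I mean (not the JASSM data, just hypothetical 0.8 and 0.9 missiles): bootstrap the reliability for each at increasing numbers of pass/fail trials and watch how slowly a crude overlap measure shrinks.

# Rough sketch: overlap between bootstrap reliability distributions for
# hypothetical 80% and 90% reliable missiles, versus the number of trials.
import numpy as np

def bootstrap(data, nboot):
    """Draw nboot bootstrap resamples of data, with replacement."""
    idx = np.random.randint(0, len(data), (len(data), nboot))
    return data[idx]

nboot = 10000
for ntests in [10, 20, 40, 80]:
    a = np.ones(ntests)                  # the "0.8 missile"
    a[:int(round(0.2 * ntests))] = 0.0
    b = np.ones(ntests)                  # the "0.9 missile"
    b[:int(round(0.1 * ntests))] = 0.0
    ra = bootstrap(a, nboot).sum(axis=0) / float(ntests)
    rb = bootstrap(b, nboot).sum(axis=0) / float(ntests)
    # crude overlap proxy: how often the 0.8 missile's bootstrap reliability
    # looks at least as good as the 0.9 missile's
    overlap = np.mean(ra >= rb)
    print("%3d tests each: overlap ~ %.2f" % (ntests, overlap))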

Monday, October 12, 2009

Global Cooling?

While I don't appreciate most of the pseudo-religious global warming hysterics, I can't say the recent BBC piece on 'global cooling' is anything but sensationalism. It's curious that they don't just show their readers a simple graph of the data.

I don't know about you, but the recent years that they are claiming show a cooling trend look well within the expected variability. The solid line on the graph is a simple linear regression, the inner dashed lines are a 0.95-level confidence interval and the outer dashed lines are a 0.95-level prediction interval. The prediction interval gives a range of where you would expect temperature measurements to fall based on the fitted trend and residuals. You can download the NASA temperature data yourself and have a look.
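For reference, for a simple linear fit the two intervals at a point $x_0$ differ only by the extra 1 under the square root (here $s$ is the residual standard error); the prediction interval accounts for the scatter of individual observations, not just the uncertainty in the fitted mean:

$$\hat{y}_0 \pm t_{n-2,\,1-\alpha/2}\; s\sqrt{\frac{1}{n} + \frac{(x_0-\bar{x})^2}{\sum_i (x_i-\bar{x})^2}} \quad \text{(confidence)}, \qquad \hat{y}_0 \pm t_{n-2,\,1-\alpha/2}\; s\sqrt{1 + \frac{1}{n} + \frac{(x_0-\bar{x})^2}{\sum_i (x_i-\bar{x})^2}} \quad \text{(prediction)}$$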

The R code to make this graph is shown below.

temps = read.csv("temps.csv", header=TRUE)
attach(temps)

# fit two models, one for January to December annual averages, and one
# for December to November annual averages
m.1 = lm(JD ~ Year)
m.2 = lm(DN ~ Year)

# Make confidence and prediction intervals
m.1.cinterval = predict(m.1, level=0.95, interval="confidence")
m.2.cinterval = predict(m.2, level=0.95, interval="confidence")
m.1.pinterval = predict(m.1, level=0.95, interval="prediction")
m.2.pinterval = predict(m.2, level=0.95, interval="prediction")

# plot the data and the fits/intervals to a png file
png("temps.png", width=640, height=480)

plot(Year, JD, ylab="Degrees C")
lines(Year, m.1.cinterval[,1], lty=1)
lines(Year, m.1.cinterval[,2], lty=2)
lines(Year, m.1.cinterval[,3], lty=2)
lines(Year, m.1.pinterval[,2], lty=2)
lines(Year, m.1.pinterval[,3], lty=2)
title("NASA GISS Global Average Temperature")

dev.off()

In case you're interested here's the summary of the fitted model shown in the graph. Just looking at the graph shows there's probably something more complicated than just a linear trend going on though.

Call:
lm(formula = JD ~ Year)

Residuals:
      Min        1Q    Median        3Q       Max
-0.317448 -0.093632  0.001046  0.087362  0.298468

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.9770910  0.5788968   5.143 1.01e-06 ***
Year        0.0056581  0.0002977  19.009  < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1244 on 126 degrees of freedom
Multiple R-squared: 0.7414, Adjusted R-squared: 0.7394
F-statistic: 361.3 on 1 and 126 DF, p-value: < 2.2e-16

To be fair, the BBC piece does go into greater detail, explaining that it isn't really global cooling, just that the Pacific Decadal Oscillation might be starting to kick in, so we'll probably see a little 20-year dip like the one between 1940 and 1960. I'd say you need a few more years of data to actually be able to call it, though (kind of like calling an economic recession).

Sunday, October 11, 2009

Radau-type Runge-Kutta

Implicit Runge-Kutta methods are interesting because you can get high-order accuracy with relatively few stages. A method based on Radau quadrature that is third order accurate and L-stable has only two stages. A comparable multi-step method based on a backward difference that requires saving two steps is only second order accurate.

The Butcher tableau for the method is
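(assuming the standard two-stage, third-order Radau IIA scheme, which is the one consistent with the amplification factor checked in Maxima below)

$$\begin{array}{c|cc} 1/3 & 5/12 & -1/12 \\ 1 & 3/4 & 1/4 \\ \hline & 3/4 & 1/4 \end{array}$$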




The method is applied by calculating several slopes at various points and then adding a weighted sum of them to the solution at the start of the step.
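In symbols, for an $s$-stage implicit Runge-Kutta method with Butcher coefficients $a_{ij}$, $b_i$, $c_i$, the stages and the update are

$$k_i = f\Big(t_n + c_i h,\; y_n + h\sum_{j=1}^{s} a_{ij} k_j\Big), \qquad y_{n+1} = y_n + h\sum_{i=1}^{s} b_i k_i$$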

The stability of the method can be understood by taking a look at the amplification factor, which for this Runge-Kutta method is given by

$$R(z) = \frac{2z + 6}{z^{2} - 4z + 6}, \qquad z = \lambda\,\Delta t$$

The amplification factors in the complex plane for two third-order A-stable Radau-based methods are shown below.

The one on the left (which is the one we're implementing) exhibits stiff decay, otherwise known as L-stability. The amplification factor goes to zero as z (the eigenvalue scaled by the time step) goes to minus infinity. We can confirm this with Maxima:

limit((2*z+6)/(z**2-4*z+6), z, minf, plus);

L-stable methods are particularly important for stiff problems. These often occur in chemical kinetics or turbulent flows. Also, Chebyshev pseudospectral methods for PDEs introduce really large eigenvalues which are not very accurately approximated, so damping them is advantageous.

It's easy to apply the method to a linear ODE, such as the second order equation describing the motion of a simple harmonic oscillator,

$$\ddot{x} + \omega_0^2\, x = 0,$$

which has the analytical solution

$$x(t) = x(0)\cos(\omega_0 t) + \frac{\dot{x}(0)}{\omega_0}\sin(\omega_0 t),$$

and can be written as a first order system

$$\frac{d}{dt}\begin{bmatrix} x \\ \dot{x} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\omega_0^2 & 0 \end{bmatrix}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}.$$
We can use Maxima's kronecker_product() function to calculate the system operator for both stages

and similarly the operators for the Runge-Kutta integration


Since the system is linear we can actually invert the operator analytically and solve for the update.
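A sketch of that algebra in general terms, with $M$ the $2\times 2$ system matrix, $\otimes$ the Kronecker product, and the stage slopes stacked into a single vector $K$ (the ordering of the Kronecker products depends on how the stages are stacked):

$$(I - h\,A \otimes M)\,K = (\mathbf{1} \otimes M)\,y_n, \qquad y_{n+1} = y_n + h\,(b^{T} \otimes I)\,K$$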

Which in our case gives the solution at the new time step as

Of course, the reason this type of time integration is not very popular is that often you can't analytically invert the system. For nonlinear problems, the system that must be iteratively solved is (for a two-stage method) twice as large as for the previously mentioned second order backward difference method.

We can dump the update expression to Fortran using Maxima's f90() function, and then compile it to a Python module using F2Py. This makes doing a convergence study in Python pretty easy:

import scipy as sp
import sho   # the F2Py-compiled module containing radau3() and analytical()

f = 1.0 # frequency
w0 = 2.0*sp.pi*f # angular frequency
np = 1.0 # number of periods
nt = [16, 32, 64, 128, 256] # number of time-steps
h = []
x = []
dxdt = []
t = []
xa = []
err = []
for i in xrange(5):
    h.append(np * (1.0/f) / nt[i])
    x.append(sp.ones(nt[i], dtype=float))
    dxdt.append(sp.zeros(nt[i], dtype=float))
    t.append(sp.linspace(0, np-h[i], nt[i]))
    sho.radau3(x[i], dxdt[i], h[i], w0)
    xa.append(sho.analytical(x[i][0], dxdt[i][0], w0, t[i]))
    err.append(sp.sqrt(sum((xa[i] - x[i])**2)/nt[i]))

The numerical solution is compared to the analytical solution for several time-step sizes. The observed order of convergence (2.97) matches the design order (3) pretty closely.
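That observed order can be estimated from the err and h lists built in the loop above, for example as the least-squares slope of log(error) against log(step size):

import numpy

# observed order of accuracy: least-squares slope of log(err) vs. log(h)
# over the five runs above
p = numpy.polyfit(numpy.log(h), numpy.log(err), 1)[0]
print("observed order = %.2f" % p)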

Check out Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations and Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems for some pretty detailed coverage of these and other advanced integration methods. Also see this page with Fortran implementations of several of these types of methods.

Tuesday, October 6, 2009

von Braun: dirty hands

The technical people [must] keep their hands dirty and actively work on in-house projects selected specifically for the purpose of updating their knowledge and increasing their competence.
-- Wernher von Braun, Management

Monday, October 5, 2009

Historical Perspective: UAVs as stepping stones

The ultimate goal and purpose of astronautics is to gain for man himself access to space and then to other worlds. The guided missile does not carry a man. It is a bridge between the space-flight concepts at the beginning and the space-flight reality yet to come.

-- General Dynamics/Astronautics, Centaur Primer: An Introduction to Hydrogen-Powered Space Flight (San Diego, 1962), pp. x-xi.


Missiles and drones are boring; you gotta stick a warm body in that thing to get any sizzle for your steak (rule 153)!

Seriously though, UAVs are great because they offer the opportunity for rapid, (relatively) inexpensive iterations of

concept --> design --> test --> field --> repeat

Probably the least appreciated, but most important, aspect is the knowledge you can only earn by actually operating the hardware "in anger". Making sure that the guys who designed this one will be able to benefit from the insight gained in fielding it when they design the next one is pretty crucial. When you have decades-long development programs, that loop just ain't gonna close.