Saturday, January 30, 2010

Chemistry Drives Convection

There's a short article with a neat set of flow-viz pictures and a movie on the APS site. It's kind of funny how research results get written up in the popular press.
As a demonstration of the chemistry "tail" wagging the fluid dynamics "dog," researchers have...
Although this chemically-induced convection was not a surprise, the convection patterns in the simulations were surprising. In some cases the convective plumes went only upward from the boundary, rather than creating a symmetric pattern that sent fingers both up and down from the mid-line.
It turns out that even without a chemical reaction, convection would be expected if the reactants had different diffusion rates.
The new simulations show that convection makes the reactions progress faster, just as stirring would.
I'm not sure how surprising any of that would be. Turbulent mixing is a well known way of increasing reaction rates, for instance in flame-holders and high-g combustors. It's also kind of funny how everything has to do with climate change:
This could mean that researchers have underestimated reaction and mixing rates in natural phenomena, like the Earth's mantle and supernova explosions, as well as for carbon sequestration, in which carbon dioxide is stored underground. Modelers have so far not considered how convection induced by the interaction of carbon dioxide with ground water may affect the long-term containment of this greenhouse gas...
Now that's the funding tail wagging the science dog.

Friday, January 29, 2010

FFT-based Abel Inversion Tutorial

I was putting together a little data post-processing tutorial for some guys in my lab, and it turned into a tour de force for that scientific computing swiss army knife, the FFT. So I figured I'd share it here.

The explicit recognition of its [the FFT’s] importance is one of the few significant advances in numerical analysis in the past few decades. Much has been learned about various aspects of computation, but no other recent discovery has so profoundly affected so many different fields as has the fast Fourier transform.
–R.W. Hamming [1]

This tutorial will demonstrate Gaussian convolution / deconvolution and Abel inversion of something resembling microwave interferometry data. The tutorial uses Scipy [2], but the concepts (as well as most of the function names and even the underlying FFT libraries) transfer directly to other environments (Matlab, Octave, etc). When in doubt, Google or Hutchinson [3] are your friends.

1 Gaussian Convolution / Deconvolution

The Microwave Interferometer beam is assumed to have a Gaussian distribution of intensity. The equation for a Gaussian distribution is

\frac{e^{-\frac{(x-\mu)^2}{2\sigma^2}}}{\sqrt{2\pi\sigma^2}}

This means that the measurement at each spatial position is a “mixture” of effects from neighboring areas along the beam’s axis. To understand this, we’ll do some convolutions with Gaussians. First off, we need an array of points from a Gaussian, but we need them in a particular order. Here’s a Python function which does what we need:

def periodic_gaussian(x, sigma): 
    nx = len(x) 
    # put the peak in the middle of the array: 
    mu = x[nx/2] 
    g = gaussian(x, mu, sigma) 
    # need to normalize, the discrete approximation will be a bit off: 
    g = g / sum(g) 
    # reorder to split the peak across the boundaries: 
    return(sp.append(g[nx/2:nx], g[0:nx/2]))
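The gaussian helper called by periodic_gaussian isn't shown in this excerpt; a minimal definition consistent with the density given above (written with NumPy here, an assumption on my part rather than the original script's version) might look like:

```python
import numpy as np

def gaussian(x, mu, sigma):
    # Gaussian density with mean mu and standard deviation sigma
    return (np.exp(-(x - mu)**2 / (2.0 * sigma**2))
            / np.sqrt(2.0 * np.pi * sigma**2))

x = np.linspace(-5.0, 5.0, 1001)
g = gaussian(x, 0.0, 1.0)
dx = x[1] - x[0]
# the discrete sum times dx should be very close to the unit
# integral of the density (the tails beyond +/-5 sigma are tiny)
```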

The reason for reordering is that we'll be using FFTs to do the convolutions, and the FFT assumes periodic data. Figure 1 shows the results for several values of the standard deviation.


Figure 1: Reordered Gaussians
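As a quick sanity check on the claim that point-wise multiplication of FFTs is circular convolution (a standalone illustration, not part of the tutorial script):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.5, 0.25, 0.0, 0.25])

# circular convolution computed directly from the definition
direct = np.array([sum(a[m] * b[(n - m) % 4] for m in range(4))
                   for n in range(4)])

# the same thing via the FFT: multiply in the transformed domain
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

# the two agree to round-off; note how the kernel indices wrap
# around the ends of the array, which is why the reordering above
# splits the Gaussian's peak across the array boundaries
```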

Once we have the proper distributions defined, the convolution just requires an FFT of the theoretical curve and of the Gaussians, point-wise multiplication in the transformed domain, and an inverse transform. The Python to do this is shown below:

dx = x[1] - x[0] 
sigma = [25.0, 50.0, 100.0, 150.0] 
g = [] 
cyl_conv = [] 
fcyl = fft(cyl) 
for i,s in enumerate(sigma): 
    g.append(periodic_gaussian(x, s*dx)) 
    cyl_conv.append(ifft(fcyl * fft(g[i])))

Figure 2 shows a theoretical phase shift curve for a calibration cylinder along with convolutions of that same curve with Gaussians of various standard deviations.


Figure 2: Convolutions of Calibration Cylinder Theoretical Phase Shift with Gaussians

Convolution is easy enough, but what we have in the case of an interferometry measurement is the convolved signal, and we'd like the 'true' measurement. We can get this by deconvolution, which is simply division in the frequency space as opposed to multiplication:

# now try a deconvolution: 
sigma_deconv = [94.0, 100.0, 106.0] 
cyl_deconv = [] 
# use one of the previous convolutions as our 'measurement':
fmeas = fft(cyl_conv[2]) 
for i,s in enumerate(sigma_deconv): 
    cyl_deconv.append(ifft(fmeas / fft(periodic_gaussian(x, s*dx))))

Figure 3 shows the results for a couple of test σ's. Unfortunately, deconvolution is numerically ill-posed and very sensitive to noise; it tends to amplify any noise present. Unless we are very lucky, guess the exact σ, and have a noise-free measurement, we end up with 'wiggles'.


Figure 3: Deconvolution of σ = 6 'measurement'
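To see where the wiggles come from, note that deconvolution divides each frequency component by the kernel's transform, which for a Gaussian is vanishingly small at high frequencies; the reciprocal is the amplification applied to whatever noise lives there. A small sketch of this (illustrative, not from the original script):

```python
import numpy as np

nx = 256
x = np.arange(nx, dtype=float)
sigma = 8.0
g = np.exp(-(x - x[nx // 2])**2 / (2.0 * sigma**2))
g /= g.sum()
g = np.append(g[nx // 2:], g[:nx // 2])  # reorder for the FFT

# the kernel is symmetric after reordering, so its transform is real
G = np.fft.fft(g).real
# amplification factor applied to each frequency during deconvolution:
amp = 1.0 / np.abs(G)
# amp is 1 at zero frequency but astronomically large at high
# frequencies, so even round-off-level noise there becomes wiggles
```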

We can fix things by first 'smoothing' with a small Gaussian, and then performing some test deconvolutions.

# now try a deconvolution with some 'regularization': 
sigma_reg = 50.0 # about half our true sigma 
cyl_deconv_smooth = [] 
# use one of the previous convolutions as our 'measurement', smooth it: 
fmeas = fft(cyl_conv[2]) * fft(periodic_gaussian(x, sigma_reg*dx)) 
for i,s in enumerate(sigma_deconv): 
    cyl_deconv_smooth.append(ifft(fmeas / fft(periodic_gaussian(x, s*dx))))


Figure 4: Deconvolution of σ = 6 'measurement', with smoothing

The deconvolution that uses σ > σ_true still displays overshoots even with the smoothing, and it also becomes too 'sharp'. The calibration measurements can be used to find the optimal smoothing and sharpening σ for the particular interferometer / frequencies being used.

2 Abel Inversion

Many of the same tools presented in Section 1 can be extended to the calculation of the Abel inversion. In microwave interferometry we have a 'chord-averaged' measurement, but we would like to understand the radial distribution of the quantity (phase shift / plasma density).

f(r) = -\frac{1}{\pi} \int_r^{\infty} \frac{\partial F}{\partial y} \frac{1}{\sqrt{y^2 - r^2}}\, dy

where F is our chord-averaged measurement, and f(r) is the radial distribution that we would like to recover. To calculate the inverse transform we first need an estimate of the derivative ∂F/∂y. Like deconvolution, numerical differentiation tends to amplify noise (numerical or otherwise). To avoid this we can convolve the function we wish to differentiate with the derivative of a Gaussian (again via the FFT), recovering a 'smoothed' numerical derivative in a single pass. The derivative of the Gaussian is

-\frac{(x - \mu)}{\sigma^2} \frac{e^{-\frac{(x-\mu)^2}{2\sigma^2}}}{\sqrt{2\pi\sigma^2}}

A Python function that gives a properly re-ordered first derivative is

def periodic_gaussian_deriv(x, sigma): 
    nx = len(x) 
    # put the peak in the middle of the array: 
    mu = x[nx/2] 
    g = dgaussiandx(x, mu, sigma) 
    # need to normalize, the discrete approximation will be a bit off: 
    g = g / (-sp.sum(x * g)) 
    # reorder to split the peak across the boundaries: 
    return(sp.append(g[nx/2:nx], g[0:nx/2]))

In the same way that we used the Gaussian for smoothing we can use its derivative for differentiation. Here’s the Python to do that:

# calculate some derivatives: 
sigma_deriv = 0.5 # this can be small because we've already smoothed a lot 
cyl_deriv = [] 
for i,c in enumerate(cyl_deconv_smooth): 
    cyl_deriv.append( 
        ifft(fft(c) * fft(periodic_gaussian_deriv(x, sigma_deriv*dx))).real)

Figure 5 shows the results. The wiggles that we don’t manage to smooth in the initial sharpening stage become amplified by differentiation.


Figure 5: Derivative of De-convolved Functions

Once we have a numerical approximation for the derivative, calculating the integral is straightforward:

def abel(dfdx, x): 
    nx = len(x) 
    # do each half of the signal separately (they should be the same 
    # up to noise) 
    integral = sp.zeros((2,nx/2), dtype=float) 
    for i in xrange(nx/2, nx-1): 
        divisor = sp.sqrt(x[i:nx]**2 - x[i]**2) 
        integrand = dfdx[i:nx] / divisor 
        integrand[0] = integrand[1] # deal with the singularity at x=r 
        integral[0][i-nx/2] = - sp.trapz(integrand, x[i:nx]) / sp.pi 
    for i in xrange(nx/2, 1, -1): 
        divisor = sp.sqrt(x[i:0:-1]**2 - x[i]**2) 
        integrand = dfdx[i:0:-1] / divisor 
        integrand[0] = integrand[1] # deal with the singularity at x=r 
        integral[1][-i+nx/2] = - sp.trapz(integrand, x[i:0:-1]) / sp.pi 
    return(integral)

The one minor 'gotcha' is dealing with the integrable singularity at y = r. We just copy the value of the nearest neighbor over the Inf; the integral will approach the correct value in the limit of small Δx (lots of points).
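The nearest-neighbor trick can be sanity-checked on an integral with a known value, for instance the integral of 1/√x from 0 to 1, which equals 2 (a standalone check, not part of the tutorial script):

```python
import numpy as np

def trapz(y, x):
    # composite trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def singular_integral(n):
    # integrate 1/sqrt(x) over (0, 1] on an n-point grid
    x = np.linspace(0.0, 1.0, n)
    integrand = np.empty_like(x)
    integrand[1:] = 1.0 / np.sqrt(x[1:])
    integrand[0] = integrand[1]  # copy the neighbor over the Inf at x = 0
    return trapz(integrand, x)

coarse = singular_integral(100)
fine = singular_integral(100000)
# the error shrinks as dx -> 0, though slowly (like sqrt(dx))
# because of the square-root singularity at the endpoint
```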


Figure 6: Inverse Abel Transform Results

The analytical inverse Abel transform of the theoretical phase shift curve can be easily calculated (Maxima [4] script shown in section A.2). The theoretical phase shift has the form

P(x) = \mathrm{width}^2 - (x - x_0)^2

which gives the derivative as

\frac{\partial P}{\partial x} = -2\,(x - x_0)

and finally, the inverse Abel transform as

p(r) = -\frac{2\left(\log\left(2\left(\sqrt{a^2 - r^2} + a\right)\right) x_0 - \log\left(2 r\right) x_0 - \sqrt{a^2 - r^2}\right)}{\pi}

2.1 Centering

Often, despite our best efforts, the data are not centered. There is a method to estimate the optimal shift required to center the data using the FFT [5]. Figure 7 shows one of the “measurements” shifted by a small amount.


Figure 7: Shifted Measurement

The method of Jaffe et al. [5] is to maximize

\sum_{n = -N/2}^{N/2 - 1} \left[\operatorname{Re}\left(e^{-2\pi i\, m_k \Delta f n}\, L_n h\right)\right]^2

where L_n h is the Fourier transform of the data, and m_k Δ is a shift. Since the data is real, the real part of its transform is the even part of the signal. The shift which maximizes the magnitude of the even part is the one which makes the data most even (centered). A graphical analysis of the data can provide a range of shifts to try. The Python to perform this maximization is shown below (this relies on being able to index arrays with arrays of integers, which can be done in Matlab as well).

# a shifted measurement 
shift = nx / 10 # a ten percent shift 
cyl_shifted = cyl_conv[2][scipy.append(range(shift,nx), range(0,shift))] 
# find the shift needed to center the measurement (should match shift above): 
indices = [] 
# get range of shifts to try from eye-balling the measurement 
try_shift = range(nx - 2*shift, nx) 
for i in try_shift: 
    indices.append(scipy.append(range(i,nx), range(0,i)).astype(int)) 
even_norm = sp.zeros(len(indices), dtype=float) 
for i, cyclic_perm in enumerate(indices): 
    even_norm[i] = sp.sum(fft(cyl_shifted[cyclic_perm]).real**2) 
max_even_shift = try_shift[even_norm.argmax()]

Figure 8 shows that the centering objective function reaches a local maximum at the shift which was originally applied (shown in Figure 7).


Figure 8: Centering Objective Function

3 Calibration Model Fitting

The techniques presented in section 1 can be combined to fit a calibration model to interferometry measurements of a calibration cylinder (or any other object for which you have the theoretical phase-shift curve). The purpose of a calibration model is to provide estimates of parameters like the standard deviation of the Gaussian beam for use in deconvolution and inversion of subsequent test measurements.

The steps in fitting the model are

  1. Center the data (2.1)
  2. Find an error minimizing set of deconvolution and smoothing σs (these are the parameters of our model).
    1. Define a function of the data and the σs which returns an error norm between the theoretical phase shift curve and the smoothed, deconvolved data.
    2. Apply a minimization algorithm to this function. Choose an initial guess for the deconvolution σ which is likely to be smaller than the true σ.

The Python functions to accomplish this are shown below. One error function uses a sum of squared errors and the other uses a maximum error.

def smooth_deconv(s, x, data): 
    fsmooth = fft(data) * fft(periodic_gaussian(x, s[0])) 
    fdeconv = ifft(fsmooth / fft(periodic_gaussian(x, s[1]))).real 
    return(fdeconv) 
def cal_model_ls_err(s, x, data, theoretical): 
    fdeconv = smooth_deconv(s, x, data) 
    return(sp.sqrt(sp.sum((fdeconv - theoretical)**2))) 
def cal_model_minmax_err(s, x, data, theoretical): 
    fdeconv = smooth_deconv(s, x, data) 
    return((abs(fdeconv - theoretical)).max())
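The tutorial doesn't show the minimization call itself; one plausible choice (an assumption on my part, not confirmed by the original) is the Nelder-Mead simplex via scipy.optimize.fmin. Here is a self-contained toy version of the whole fit on a synthetic blurred 'measurement', starting the deconvolution σ below the true value as recommended above:

```python
import numpy as np
from scipy.optimize import fmin

def periodic_gaussian(x, sigma):
    # normalized Gaussian kernel, reordered so the peak wraps the ends
    nx = len(x)
    g = np.exp(-(x - x[nx // 2])**2 / (2.0 * sigma**2))
    g /= g.sum()
    return np.append(g[nx // 2:], g[:nx // 2])

def smooth_deconv(s, x, data):
    # smooth with a Gaussian of width s[0], deconvolve one of width s[1]
    fsmooth = np.fft.fft(data) * np.fft.fft(periodic_gaussian(x, abs(s[0])))
    return np.fft.ifft(fsmooth / np.fft.fft(periodic_gaussian(x, abs(s[1])))).real

def ls_err(s, x, data, theoretical):
    return np.sqrt(np.sum((smooth_deconv(s, x, data) - theoretical)**2))

nx = 256
x = np.arange(nx, dtype=float)
theoretical = np.exp(-(x - nx / 2.0)**2 / (2.0 * 6.0**2))
# synthetic 'measurement': the theoretical curve blurred by sigma = 3
measured = np.fft.ifft(np.fft.fft(theoretical)
                       * np.fft.fft(periodic_gaussian(x, 3.0))).real

s0 = [1.0, 1.5]  # initial guess: deconvolution sigma below the true 3.0
s_opt = fmin(ls_err, s0, args=(x, measured, theoretical), disp=0)
err0 = ls_err(s0, x, measured, theoretical)
err_opt = ls_err(s_opt, x, measured, theoretical)
```

Note the degeneracy: any pair with s[1]² - s[0]² equal to the true blur variance matches the signal, so the optimizer settles somewhere along that valley, with the noise terms picking the particular point.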

The error in the fitted model following this approach is shown in Figure 9. Both the least squares error and the min-max error methods recover the correct deconvolution σ.


Figure 9: Error in Calibration Fit

4 Further Improvements / Variations

Jaffe et al. [5] also suggest “symmetrizing” before calculating the Abel transform by removing the imaginary component of the transformed data. They also estimate the variance of the inverse Abel transform based on both the portion of the data removed by filtering (smoothing) and symmetrizing.

Rather than calculating the derivative necessary for the inverse Abel transform with a convolution, a pseudo-spectral method could be used. The trade-off being that very accurate derivatives could be attained at the expense of greatly amplifying any noise. A more practical variation, that is not susceptible to noise, would be to calculate the integral with a pseudo-spectral approach rather than the composite trapezoidal rule. This would give a more accurate integral, and bring the numerical and analytical curves in Figure 6 into closer agreement.
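For reference, a pseudo-spectral derivative amounts to multiplying by ik in the transformed domain; a minimal sketch on a band-limited periodic function (illustrative only, not part of the tutorial script):

```python
import numpy as np

nx = 128
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
f = np.sin(3.0 * x)

# integer wavenumbers in FFT ordering for a 2*pi-periodic domain
k = np.fft.fftfreq(nx, d=1.0 / nx)
# pseudo-spectral derivative: multiply by i*k in the transformed domain
dfdx = np.fft.ifft(1j * k * np.fft.fft(f)).real

exact = 3.0 * np.cos(3.0 * x)
# agreement is at machine-precision level for band-limited data, but
# any noise at wavenumber k gets amplified by a factor of k, which is
# exactly the trade-off mentioned above
```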

A Complete Scripts

A.1 Python

The complete Python script to do all of the calculations and generate all of the plots in this tutorial is available.

A.2 Maxima

The Maxima batch file used to calculate the inverse Abel transform of the theoretical phase-shift curve is available.


[1]   Hamming, R.W., “Numerical Methods for Scientists and Engineers,” 2nd. ed., Dover Publications, 1986.

[2]   Jones, E., Oliphant, T., Peterson, P., “Scipy: Open source scientific tools for Python,” 2001–.

[3]   Hutchinson, I.H., “Principles of Plasma Diagnostics,” 2nd ed., Cambridge University Press, 2002.

[4]   Schelter, W., et al., “Maxima: a computer algebra system.”

[5]   Jaffe, S.M., Larjo, J., Henberg, R., “Abel Inversion Using the Fast Fourier Transform,” 10th International Symposium on Plasma Chemistry, Bochum, (1991).

Thursday, January 28, 2010

Unnamed Official Confirms Constellation Program Will Be Canceled.

Courtesy of the AIAA news-roundup.

The Orlando Sentinel (1/28, Block) reports, "Administration officials and a former astronaut on Wednesday called President Obama's plans for NASA 'exciting' and 'bold,' saying he was replacing a failed moon program with a new $6 billion project to develop commercial rockets capable of taking astronauts into orbit." The plan, described by an unnamed White House and NASA official and Augustine Commission member Sally Ride, would see an annual increase to NASA's budget over the next five years. At the teleconference, the "NASA official stressed that just because the Constellation program to return humans to the moon and its Ares I and Ares V rockets were going to be canceled did not mean that the Obama administration was abandoning exploration and human spaceflight." However, when questioned, "officials repeatedly dodged the question of what plans the administration had for a heavy-lift rocket."
        However, Florida Today (1/28, Halvorson) notes "Obama's space plan will be a hard sell in Congress. Even ardent Obama supporters and some key space advisers are taken aback." Sen. Bill Nelson warned the plan would "decimate the space program," while Rep. Bill Posey called the plan a "slow death" for NASA's manned spaceflight program. In contrast, the unnamed NASA official said it was a "serious, serious effort" to reduce the manned spaceflight gap after the shuttle program ends. The St. Petersburg Times (1/28, Leary) notes in an article about the reaction to Obama's State of the Union address, reports Nelson said, "On the downside, we're going to have to get the president to do more for NASA. America's global leadership in science and technology is at stake if we don't maintain a more robust space exploration program."
        Meanwhile, FOX News (1/27, Clark) reports, "Marty Hauser, vice president of Washington operations for the Space Foundation, an advocacy group, said that while the proposal would hurt in the short term, it does have the potential to create jobs in the long term if the objective is to privatize space flight." However, the article notes "Republican lawmakers wasted no time in blasting the president." Industry leaders also reportedly "expressed dismay" over a potential budget freeze for NASA, according to some previous reports. Louis Friedman, executive director of the Planetary Society, said, "I think NASA's value as an economic engine for the country is long understood in theory, long underplayed in Congress."
        Bolden Discusses NASA's Future In Israel. In an article titled "'An Israeli Astronaut? There May be No More Astronauts At All!,'" the Israel National News (1/28, Kempinski) posted a video of NASA Administrator Charles Bolden speaking to the press at the Ilan Ramon International Space Conference. "Bolden related to a number of exciting topics in the field of space and beyond. In the following video the NASA Chief discusses the option of sending another Israeli astronaut into space, the mission of saving the planet from asteroids, and commercial flights to the moon."
        End Of Constellation Raises New Questions. Popular Mechanics (1/27, Pappalardo) gives "a breakdown of the some questions to ask during the aftermath of the apparent collapse of the United States' human space flight program." These include questions like who would benefit from the shift in NASA budget priorities, whether the Defense Department will be the "heir" to NASA, and if astronauts are "going extinct" among others. Furthermore, "Without the appeal of a human flight program, will fewer aspiring scientists and engineers be lured into the agency and towards military and private space? Will the research end of NASA suffer from this lack of inspirational purpose? What are the geopolitical ramifications, if any, of this waning of American power?"

Wednesday, January 27, 2010

FFT-based Convolutions for Oil Extraction Model

The Mobjectivist blog has developed a simple oil production model based on a series of convolutions. Each phase of oil exploitation after discovery (fallow, construction, maturation, and extraction) is modeled by a first-order rate that depends on the amount of material at the preceding stage.

\frac{\partial p}{\partial t} = -r\, p(t)

the solution to which is just

p(t) = p(0)\, e^{-rt}

So, for example, the rate of construction depends on the amount of discovered oil that's been through the fallow stage, and the amount of final extraction depends on how much has gotten through each of the preceding stages. A really convenient way to calculate convolutions is with the FFT, because convolutions are just multiplications in the transformed space. The advantage of an FFT-based approach is the O(n log n) scaling of the computational cost (rather than the O(n^2) scaling of a loop-and-multiply implementation). The drawback of an FFT-based convolution is that the FFT assumes the data is periodic, so you need plenty of zero-padding to prevent weird wrap-around effects. The Python to do the required convolution and calculate the simple model is shown below.

import scipy as sp 
import scipy.fftpack 
fft = scipy.fftpack.fft 
ifft = scipy.fftpack.ifft 
from scipy.stats.distributions import norm 
from matplotlib import rc 
rc('text', usetex=True) 
from matplotlib import pylab as p 
def exponential_convolution(x, y, r): 
    expo = sp.exp(-r*x) 
    expo = expo / sp.sum(expo) # normalize 
    return(ifft(fft(expo) * fft(y)).real) 
nx = 1024 # number of points on the curve 
# go out far enough so there's plenty of entries that are roughly zero 
# at the end, since the FFT assumes periodic data 
# years in the model: 
x = sp.linspace(0.0, 400.0, nx) 
# a simple oil extraction model where everything is a first order rate 
# the discovery distribution: 
y = norm.pdf(x, loc=50, scale=10) 
# 'fallow': 
y_conv_1 = exponential_convolution(x, y, 0.04) 
# 'construction': 
y_conv_2 = exponential_convolution(x, y_conv_1, 0.04) 
# 'maturation': 
y_conv_3 = exponential_convolution(x, y_conv_2, 0.04) 
# 'extraction': 
y_conv_4 = exponential_convolution(x, y_conv_3, 0.04) 
p.plot(x, y, label="discovery") 
p.plot(x, y_conv_1, label="fallow") 
p.plot(x, y_conv_2, label="construction") 
p.plot(x, y_conv_3, label="maturation") 
p.plot(x, y_conv_4, label="extraction") 
p.legend()

Figure 1 shows the results.

Figure 1: Simple Convolution-based Oil Extraction Model
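The wrap-around effect that the zero-padding guards against is easy to demonstrate: if the signal hasn't decayed to zero by the end of the array, the circular convolution leaks its tail back into the beginning (a standalone illustration, not part of the model script):

```python
import numpy as np

nx = 128
x = np.arange(nx, dtype=float)
# a normalized exponential kernel and a pulse placed near the END
# of the array, so its response runs off the edge
r = 0.05
expo = np.exp(-r * x)
expo /= expo.sum()
pulse = np.zeros(nx)
pulse[nx - 10] = 1.0

out = np.fft.ifft(np.fft.fft(expo) * np.fft.fft(pulse)).real
# the response should be zero before the pulse, but the tail that
# runs off the end of the array wraps around to the early entries
```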

Tuesday, January 26, 2010

Stealth Pan-handler Relocation Stymied

The Dayton Daily News is carrying an article about the ill-conceived stealth pan-handler relocation program in Grafton Hill. This was a program 'kick-started' by Federal economic stimulus funds where some homeless folks who were bothering businesses in another part of the city were gathered up and put in an old, nearly unoccupied apartment building. Ms Patterson describes the motivation for the secrecy:
The Other Place spokeswoman Tina Patterson said the decision to integrate them into the neighborhood is not a new concept but was done quietly this time because of the stereotypes associated with the chronically homeless.
What stereotypes could those be?
Olds, Inman and Richard Miniard have been problems for the city the last nine years, police reports show. They are regarded by police and business owners near Brown and East Stewart streets as nuisances and drunks who panhandle illegally for money. Each have been convicted numerous times of being drunk in public. Another resident in their complex, John C. Hibbits has been arrested more than 50 times, police records show. All the men said they abandoned shelters because of the rules, opting instead for dirty tents erected in Veterans Park.
Your tax dollars paying their rent now. I guess when the public drunkenness and urination is college-age buffoons it's ok, but when it's homeless old guys it's bad for business. That's alright, relocating them into an unlicensed group-home provides a nice, no-risk cash flow for the new property owner, and roofs over the heads of some homeless folks (who have been unfairly stereotyped). Everyone's happy right? Everyone but the people who now have to live with these pan-handlers (who couldn't be bothered to follow the rules so that they could stay in a shelter). I must say that not all of them were bad, I remember one in particular that made it a point not to ask for money when you passed him on the sidewalk, he was trying to make the best of his new opportunity. Unfortunately, his fellows did not emulate his good attitude.
“It was never our intention to have them thrown out of our neighborhood,” said Cheryl Bates, president of Grafton Hill Neighborhood Association. “We were never made aware of their arrival until we read it in the newspaper. Then we started seeing panhandling, public urination and fights involving people living in that building.”
I really feel for the poor guy who happened to be living in the same building that they dumped a dozen reprobates on:
Gary Ewing, who was living in the apartment before the formerly homeless tenants arrived, said he noticed a lot of “bizarre” things once his new neighbors arrived. “I kept calling police because they’d bust out all of the windows and would be drunk and harassing me,” Ewing said. “Now they are not renewing my lease because I called police about all of this nonsense.”
The DDN article also has a quote from Gary Zaremba, who purchased the 16 unit apartment building on 2 Jun 2009 for $106k, and then had 10 state-funded tenants delivered into his lap a month later (talk about a great real-estate speculation, what a talented fellow). [Update: a little birdie told me that he charges ~200% of the market rate for his subsidized tenants, and he is not renewing the market-rate tenants; $550/month * 10 = $5.5k/month in state-guaranteed income for this guy, without ever having to advertise to fill the vacancies in his newly bought building.] I find it interesting that the guy who bought this old, nearly empty apartment building (which was on a 'wouldn't it be nice to demolish this nuisance' list) just shortly before the relocation would blame this on the race of people in Grafton Hill:
“I think the big issue here is a lot of the (Grafton Hill) complaining came from whites with better incomes,” said New York-based property owner, Gary Zaremba. “The tenants living in the building weren’t violent felons. (The neighbors) just didn’t want my tenants living in their backyard.”
Hello! Lots of the pan-handlers you relocated are white! And the poor guy you dumped them on is black! GHA has all sorts of members (with widely varying amounts of melanin in their skin). You would have known that if you had come to a meeting when you bought property in our neighborhood. Now I understand why it is so easy for demagogues to rile people up about out of town speculators and carpetbaggers. If that's what economic stimulus brings to Dayton, you can keep it.

I think the worst thing about fly-by-night programs like this is when the stimulus funds run out, these old guys will be right back out on the street looking to scratch together enough petty cash for their next fix, and their patrons will have moved on to using some new issue or group as a 'project' to solidify political capital. If we were serious about helping these guys, we wouldn't have dumped them in an unsupervised mess with no structured 'program' to help them 'reintegrate'.

VV&UQ for Nuclear Energy

I found this interesting slide from a DOE pitch about simulation as it supports nuclear energy development.

Their "top priority recommendation" is about Verification, Validation and Uncertainty Quantification:
• Our top priority recommendation is that a V&V and
  UQ program for nuclear systems’ simulation take a
  two-pronged approach.
     • First, focus on research into the critical issues
       and challenges
     • Second, a concurrent study using the V&V and
       UQ process to analyze a number of critical,
       integrated physics applications would provide a
       problem focus and address the issues of
       coupled multi-scale physics and UQ.

If you care about having useful models, then you care about VV&UQ. Otherwise, a model just provides the colorful marketing material you use to garner more funding. Larzelere goes on to say,
It is Important to Note That Advanced Modeling and Simulation Does Not Replace the Need for Theory or Experiments
As an aside, I thought this graphic comparing the manpower and flops requirements of the two different simulation domains was interesting too.