Saturday, October 1, 2011

Discussion of V and V

A few days after I put up my little post on a bit of V&V history, Judith Curry had a post about VV&UQ which generated a lot of discussion (she has a high-traffic site, so this is not unusual) [1].

A significant part of the discussion was about a post by Steve Easterbrook on the (in)appropriateness of IV&V (“I” stands for independent) or commercial / industrial flavored V&V for climate models. As George Crews points out in discussion on Climate Etc., there’s a subtle rhetorical sleight of hand that Easterbrook uses (which works for winning web-points with the uncritical because different communities use the V-words differently; see the introduction to this post of mine). He claims it would be inappropriate to uncritically apply the IV&V processes and formal methods he’s familiar with from developing flight control software for NASA to climate model development. Of course, he’s probably right (though we should always be on the look-out to steal good tricks from wherever we can find them). This basically correct argument gets turned into “Easterbrook says V&V is inappropriate for climate models” (by all sorts of folks with various motivations; see the discussion thread on Climate Etc.). The obvious question for anyone who’s familiar with even my little Separation of V&V post is, “which definition of V or V?”

What Easterbrook is now on record as agreeing is appropriate is the sort of V&V done by the computational physics community. This is good. Easterbrook seemed to be arguing for a definition of “valid” that meant “implements the theory faithfully” [2]. This is what I’d call “verification” (are you solving the governing equations correctly?). The problem with the argument built on that definition is the conflation I pointed out at the end of this comment, which is kind of similar to the rhetorical leap mentioned in the previous paragraph and displayed on the thread at Climate Etc.

Now, his disagreement with Dan Hughes (who recommends an approach I find makes a great deal of sense, and that I've used in anger to commit arithmurgical damage on various and sundry PDEs) is that Dan thinks we should have independent V&V, and Steve thinks not. If all you care about is posing complex hypotheses, then IV&V seems a waste. If you care about decision support, then it would probably be wise to invest in it (and in our networked age much of the effort could probably be crowd-sourced, so the investment could conceivably be quite small). This actually has some parallels with the decision support dynamics I highlighted in No Fluid Dynamicist Kings in Flight Test.

One of the other themes in the discussion is voiced by Nick Stokes. He argues that all this V&V stuff doesn’t result in successful software, or that people calling for developing IV&V’d code should shut up and do it themselves. One of the funny things is that if the code is general enough that the user can specify arbitrary boundary and interior forcings, then any user can apply the method of manufactured solutions (MMS) to verify the code. What makes Nick’s comments look even sillier is that nearly all available commercial CFD codes are capable of being verified in this way, thanks in part to the efforts of folks like Patrick Roache [3], and it is now a significant marketing point, as Dan Hughes and Steven Mosher point out. Nick goes on to say that he’d be glad to hear of someone trying all this crazy V&V stuff. The fellows doing ice sheet modeling that I pointed out on the thread at Easterbrook’s are doing exactly that. This is because the activities referred to by the term “verification” are a useful set of tools for the scientific simulation developer in addition to being a credibility building exercise (as I mentioned in the previous post; see the report by Salari and Knupp [4]).

Of course, doing calculation verification is the responsibility of the person doing the analysis (in sane fields of science and engineering anyway). So the “shut up and do it yourself” response only serves to undermine the credibility of people presenting results to decision makers bereft of such analysis. On the other hand, the analyst properly has little to say on the validation question. That part of the process is owned by the decision maker.

References

[1]   Curry, J., Verification, Validation and Uncertainty Quantification in Scientific Computing, http://judithcurry.com/2011/09/25/verification-validation-and-uncertainty-quantification-in-scientific-computing/, Climate Etc., Sunday 25th September, 2011.

[2]   Easterbrook, S., Validating Climate Models, http://www.easterbrook.ca/steve/?p=2032, Serendipity, Tuesday 30th November, 2010.

[3]   Roache, P., Building PDE Codes to be Verifiable and Validatable, Computing in Science and Engineering, IEEE, Sept-Oct, 2004.

[4]   Knupp, P. and Salari, K., Code Verification by the Method of Manufactured Solutions, Tech. Rep. SAND2000-1444, Sandia National Labs, June 2000.

Thursday, September 15, 2011

The Separation of V and V

The material in this post started out as background for a magazine article, and turned into the introduction to an appendix for my dissertation [1]. Since I started learning about verification and validation, I’ve always been curious about the reasons for the sharp lines we draw between the two activities. It turns out you can trace the historical precedent for separating the tasks of verification and validation all the way back to Lewis Fry Richardson, and George Boole spent some space writing on what we’d call verification. Though neither he nor Richardson uses the modern terms, the influence of their thought is evident in the distinct technical meanings we’ve chosen to give the two common-usage synonyms “verification” and “validation.”

The term verification is given slightly different definitions by different groups of practitioners [2]. In software engineering, the IEEE defines verification as

The process of evaluating the products of a software development phase to provide assurance that they meet the requirements defined for them by the previous phase.

while the definition now commonly accepted in the computational physics community is

The process of determining that a model implementation accurately represents the developer’s conceptual description of the model and the solution to the model. [3]

The numerical weather prediction community speaks of forecast verification, which someone using the above quoted AIAA definition would probably consider a form of validation, and a statistician might use the term validation in a way the computational physicist would probably consider verification [4]. Arguing over the definitions of words [5] which, in common use, are synonyms is contrary to progress under a pragmatic, “wrong, but useful” conception of the modeling endeavor [6]. Rather, we should be clear on our meaning in a specific context, and thus avoid talking past colleagues in related disciplines. Throughout this work, the term verification is used consistent with currently accepted definitions in the aerospace, defense and computational physics communities [3, 7, 8, 9].

In the present diagnostic effort, the forward model code seeks an approximate solution to a discretized partial differential equation. This partial differential equation (PDE) is derived from Maxwell’s equations augmented by conservation equations derived from the general hyperbolic conservation laws through analytical simplification guided by problem-specific assumptions. The purpose of formal verification procedures such as the method of manufactured solutions (MMS) is to demonstrate that the simulation code solves these chosen equations correctly. This is done by showing ordered convergence of the simulated solution in a discretization parameter (such as mesh size or time-step size).
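To make the convergence check concrete, here is a minimal sketch in Python (the grid spacings, error values, and the function name observed_order are illustrative, not taken from the dissertation code) of how ordered convergence is demonstrated from discrete error norms computed against a manufactured solution on a sequence of refined grids:

import numpy as np 
 
def observed_order(h, err): 
    # observed order of accuracy from errors on successively refined grids: 
    # p = log(e_coarse/e_fine) / log(h_coarse/h_fine) for each consecutive pair 
    h, err = np.asarray(h, dtype=float), np.asarray(err, dtype=float) 
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:]) 
 
# illustrative numbers only: discrete L2 errors against a manufactured solution 
h   = [0.1, 0.05, 0.025, 0.0125] 
err = [2.3e-3, 5.9e-4, 1.5e-4, 3.7e-5] 
print(observed_order(h, err))  # should approach the formal order (here ~2) as h -> 0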

Understanding the impact of truncation error on the quality of numerical solutions has been a significant concern over the entire developmental history of such methods. Although modern codes tend to use finite element, finite volume or pseudo-spectral methods as opposed to finite differences, George Boole’s concern for establishing the credibility of numerical results is generally applicable to all these methods. In his treatise, written in 1860, Boole stated

...we shall very often need to use the method of Finite Differences for the purpose of shortening numerical calculation, and here the mere knowledge that the series obtained are convergent will not suffice; we must also know the degree of approximation.

To render our results trustworthy and useful we must find the limits of the error produced by taking a given number of terms of the expansion instead of calculating the exact value of the function that gave rise thereto. [10]

In a related vein, Lewis Fry Richardson, under the heading Standards of Neglect in the 1927 paper which introduced the extrapolation method which now bears his name, stated in his characteristically colorful language

An error of form which would be negligible in a haystack would be disastrous in a lens. Thus negligibility involves both mathematics and purpose. In this paper we discuss mathematics, leaving the purposes to be discussed when they are known. [11]

This appendix follows Richardson’s advice and confines discussion to the correctness of mathematics, leaving the purposes and sufficiency of the proposed methods for comparison with other diagnostic techniques in the context of intended applications.

While the development of methods for establishing the correctness and fitness of numerical approximations is certainly of historical interest, Roache describes why this effort in code verification is more urgently important than ever before (and will only increase in importance as simulation capabilities, and our reliance on them, grow).

In an age of spreading pseudoscience and anti-rationalism, it behooves those of us who believe in the good of science and engineering to be above reproach whenever possible. Public confidence is further eroded with every error we make. Although many of society’s problems can be solved with a simple change of values, major issues such as radioactive waste disposal and environmental modeling require technological solutions that necessarily involve computational physics. As Robert Laughlin noted in this magazine, “there is a serious danger of this power [of simulations] being misused, either by accident or through deliberate deception.” Our intellectual and moral traditions will be served well by conscientious attention to verification of codes, verification of calculations, and validation, including the attention given to building new codes or modifying existing codes with specific features that enable these activities. [6]

There is a moral imperative underlying correctness checking simply because we want to tell the truth, but this imperative moves closer "to first sight" because of the uses to which our results will be put.

The language Roache uses reflects a consensus in the computational physics community that has given the name verification to the activity of demonstrating the impact of truncation error (usually involving grid convergence studies), and the name validation to the activity of determining if a code has sufficient predictive capability for its intended use [3]. Boole’s idea of “trustworthy results” clearly underlies the efforts of various journals and professional societies [3, 7, 9, 8] to promote rigorous verification of computational results. Richardson’s separation of the questions of correct math and fitness for purpose is reflected in those policies as well. In addition, the extrapolation method developed by Richardson has been generalized to support uniform reporting of verification results [12].

Two types of verification have been distinguished [13]: code verification and calculation verification. Code verification is done once for a particular code version; it demonstrates that a specific implementation solves the chosen governing equations correctly. This process can be performed on a series of grids of any size (as long as they are within the asymptotic range) with an arbitrarily chosen solution (no need for physical realism). Calculation verification, on the other hand, is an activity specific to a given scientific investigation or decision support activity. The solution in this case will be on physically meaningful grids with physically meaningful initial conditions (ICs) and boundary conditions (BCs), and therefore with no a priori-known solution. Rather than monitoring the convergence of an error metric, solution functionals relevant to the scientific or engineering question at hand are tracked to ensure they demonstrate convergence (and ideally, grid/time-step independence).
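As a sketch of what calculation verification might look like in practice (the functional values and refinement ratio below are made up for illustration, and the grid convergence index follows Roache [12, 13]), Richardson extrapolation of a solution functional computed on three physically meaningful grids gives an observed order, an estimate of the grid-converged value, and an error band:

import numpy as np 
 
# illustrative values of an engineering functional (e.g., a drag coefficient) 
# computed on coarse, medium and fine grids with a constant refinement ratio r 
f_coarse, f_medium, f_fine = 1.0820, 1.0735, 1.0712 
r = 2.0 
 
# observed order of convergence from the three solutions 
p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r) 
 
# Richardson-extrapolated estimate of the grid-converged value 
f_estimate = f_fine + (f_fine - f_medium) / (r**p - 1.0) 
 
# grid convergence index on the fine grid with a safety factor of 1.25 
gci_fine = 1.25 * abs((f_fine - f_medium) / f_fine) / (r**p - 1.0) 
 
print(p, f_estimate, gci_fine)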

The approach taken in this work to achieving verification is based on heavy use of a computer algebra system (CAS) [14]. The pioneering work in using computer algebra for supporting the development of computational physics codes was performed by Wirth in 1980 [15]. This was quickly followed by other code generation efforts [16, 17, 18, 19], demonstrations of the use of symbolic math programs to support stability analysis [20], and correctness verification for symbolically generated codes solving governing equations in general curvilinear body-fitted coordinate systems [21].

The CAS handles much of the tedious and error-prone manipulation required to implement a numerical PDE solver. It also makes creating the forcing terms necessary for testing against manufactured solutions straightforward for even very complex governing equations. The MMS is a powerful tool for correctness checking and debugging. The parameters of the manufactured solution allow the magnitude of each term’s contribution to the error to be controlled. If a code fails to converge for a solution with all parameters O(1) (the recommended approach, since the hugely different parameter values which might obtain in a physically realistic solution can mask bugs), the parameter sizes can then be varied in a systematic way to locate the source of the non-convergence (as convincingly demonstrated by Salari and Knupp with a blind test protocol [22]). This gives the code developer a diagnostic capability for the code itself. The error analysis can be viewed as a sort of group test [23] where the “dilution” of each term’s (member’s) contribution to the total (group) error (response) is governed by the relative sizes of the chosen parameters. Though we fit a parametric model (the error ansatz) to determine the rate of convergence, the response really is expected to be a binary one as in the classic group test: the ordered convergence rate is maintained down to the round-off plateau or it is not. The dilution only governs how high the resolution must rise (and the error must fall) for this behavior to be confirmed. Terms with small parameters will require convergence to very high levels to ensure that an ordered error is not lurking below.
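As an illustration of the CAS-assisted workflow, here is a minimal SymPy sketch (SymPy standing in for the Macsyma-family tools cited above; the governing equation and manufactured solution are simple stand-ins chosen for brevity). The forcing term is obtained by pushing the chosen manufactured solution through the governing operator, and the result can be emitted as compilable code for the solver:

from sympy import symbols, sin, exp, diff, simplify, ccode 
 
x, t, a, c, nu = symbols('x t a c nu') 
 
# manufactured solution with an O(1) parameter `a` that can be scaled to isolate bugs 
u_m = a * sin(x) * exp(-t) 
 
# governing operator for a 1-D advection-diffusion equation: u_t + c u_x - nu u_xx 
def L(u): 
    return diff(u, t) + c*diff(u, x) - nu*diff(u, x, 2) 
 
# MMS forcing term: add this source to the solver so that u_m is its exact solution 
source = simplify(L(u_m)) 
print(source) 
print(ccode(source))  # dump the symbolic source as compilable code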

References

[1]   Stults, J., Nonintrusive Microwave Diagnostics of Collisional Plasmas in Hall Thrusters and Dielectric Barrier Discharges, Ph.D. thesis, Air Force Institute of Technology, September 2011.

[2]   Oberkampf, W. L. and Trucano, T. G., “Verification and Validation Benchmarks,” Tech. Rep. SAND2002-0529, Sandia National Laboratories, March 2002.

[3]   “AIAA Guide for the Verification and Validation of Computational Fluid Dynamics Simulations,” Tech. rep., American Institute of Aeronautics and Astronautics, Reston, VA, 1998.

[4]   Cook, S. R., Gelman, A., and Rubin, D. B., “Validation of Software for Bayesian Models Using Posterior Quantiles,” Journal of Computational and Graphical Statistics, Vol. 15, No. 3, 2006, pp. 675–692.

[5]   Oreskes, N., Shrader-Frechette, K., and Belitz, K., “Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences,” Science, Vol. 263, No. 5147, 1994, pp. 641–646.

[6]   Roache, P. J., “Building PDE Codes to be Verifiable and Validatable,” Computing in Science and Engineering, Vol. 6, No. 5, September 2004.

[7]   “Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer,” Tech. rep., American Society of Mechanical Engineers, 2009.

[8]   AIAA, “Editorial Policy Statement on Numerical and Experimental Uncertainty,” AIAA Journal, Vol. 32, No. 1, 1994, pp. 3.

[9]   Freitas, C., “Editorial Policy Statement on the Control of Numerical Accuracy,” ASME Journal of Fluids Engineering, Vol. 117, No. 1, 1995, pp. 9.

[10]   Boole, G., A Treatise on the Calculus of Finite Differences, Macmillan and Co., London, 3rd ed., 1880.

[11]   Richardson, L. F. and Gaunt, J. A., “The Deferred Approach to the Limit. Part I. Single Lattice. Part II. Interpenetrating Lattices,” Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, Vol. 226, No. 636-646, 1927, pp. 299–361.

[12]   Roache, P. J., “Perspective: A method for uniform reporting of grid refinement studies,” Journal of Fluids Engineering, Vol. 116, No. 3, September 1994.

[13]   Roache, P. J., Verification and Validation in Computational Science and Engineering, Hermosa Publishers, Albuquerque, 1998.

[14]   Fateman, R. J., “A Review of Macsyma,” IEEE Trans. on Knowl. and Data Eng., Vol. 1, March 1989, pp. 133–145.

[15]   Wirth, M. C., On the Automation of Computational Physics, Ph.D. thesis, University of California, Davis, 1980.

[16]   Cook, G. O., Development of a Magnetohydrodynamic Code for Axisymmetric, High-β Plasmas with Complex Magnetic Fields, Ph.D. thesis, Brigham Young University, December 1982.

[17]   Steinberg, S. and Roache, P. J., “Using MACSYMA to Write FORTRAN Subroutines,” Journal of Symbolic Computation, Vol. 2, No. 2, 1986, pp. 213–216.

[18]   Steinberg, S. and Roache, P., “Using VAXIMA to Write FORTRAN Code,” Applications of Computer Algebra, edited by R. Pavelle, Kluwer Academic Publishers, August 1984, pp. 74–94.

[19]   Florence, M., Steinberg, S., and Roache, P., “Generating subroutine codes with MACSYMA,” Mathematical and Computer Modelling, Vol. 11, 1988, pp. 1107 – 1111.

[20]   Wirth, M. C., “Automatic generation of finite difference equations and Fourier stability analyses,” SYMSAC ’81: Proceedings of the Fourth ACM Symposium on Symbolic and Algebraic Computation, ACM, New York, NY, USA, 1981, pp. 73–78.

[21]   Steinberg, S. and Roache, P. J., “Symbolic manipulation and computational fluid dynamics,” Journal of Computational Physics, Vol. 57, No. 2, 1985, pp. 251 – 284.

[22]   Knupp, P. and Salari, K., “Code Verification by the Method of Manufactured Solutions,” Tech. Rep. SAND2000-1444, Sandia National Labs, June 2000.

[23]   Dorfman, R., “The Detection of Defective Members of Large Populations,” The Annals of Mathematical Statistics, Vol. 14, No. 4, 1943, pp. 436–440.

Friday, September 2, 2011

Airframer for Dayton Aerospace Cluster

In my previous post on the Dayton Aerospace Cluster, I mentioned that one of the big missing pieces is an air-framer. The DDN recently reported that a Fortune 100 defense contractor that currently makes UAV components, but does not yet make full airframes, is interested in locating a manufacturing / test facility in the Dayton region.

Here's some of the defense contractors on the Fortune 100:
See the full Fortune 500 list. The most interesting one on the list is GD, since two of the three joint ventures listed are with Israeli companies, and Dayton region representatives signed a trade deal in 2009 with representatives from Haifa.

Thursday, September 1, 2011

Fun with Filip

The Filippelli (Filip) dataset [1] from the NIST Statistical Reference Datasets came up in discussions about ill-conditioned problems over on the Air Vent. Fitting the certified polynomial is example problem 5.10 in Moler’s MATLAB book [3]. Part of the problem is to compare five different methods of fitting the 10th-order polynomial:

  1. MATLAB’s polyfit (which does QR on the Vandermonde matrix)
  2. MATLAB’s backslash operator (which will do QR with pivoting)
  3. Pseudoinverse
β = X†y   (1)

  4. Normal equations
β = (XᵀX)⁻¹Xᵀy   (2)

  5. Centering by the mean and normalizing by the standard deviation

Carrick already demonstrated the centering approach in the comments. I was surprised how well the simple centering approach worked (it’s been quite a while since I’ve done this homework problem). Carrick mentioned that Gram-Schmidt might be another way to go, and I mentioned that QR using rotations or reflections is usually preferred for numerical stability reasons. One of the trade-offs is that Gram-Schmidt gives you the basis one vector at a time, but the other methods give you Q all at once at the end. Depending on the problem, one may be more useful than the other.
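To make that trade-off concrete, here is a small modified Gram-Schmidt sketch (illustrative only; the function name mgs_qr and the test matrix are mine, and for production work the library Householder routines are still the better choice). The orthonormal basis vectors come out one column per outer iteration, whereas a Householder-based routine like scipy.linalg.qr hands you Q only at the end:

import numpy as np 
 
def mgs_qr(A): 
    # modified Gram-Schmidt QR: one orthonormal column per outer iteration 
    A = np.array(A, dtype=float) 
    m, n = A.shape 
    Q = np.zeros((m, n)) 
    R = np.zeros((n, n)) 
    for k in range(n): 
        v = A[:, k].copy() 
        for j in range(k):  # subtract components along the already-computed basis vectors 
            R[j, k] = np.dot(Q[:, j], v) 
            v -= R[j, k] * Q[:, j] 
        R[k, k] = np.linalg.norm(v) 
        Q[:, k] = v / R[k, k]  # the k-th basis vector is available immediately 
    return Q, R 
 
# quick sanity check on a random matrix 
A = np.random.rand(8, 4) 
Q, R = mgs_qr(A) 
print(np.allclose(np.dot(Q, R), A), np.allclose(np.dot(Q.T, Q), np.eye(4)))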

The LAPACK routine DGEQRF uses the Householder reflection method to calculate the QR decomposition [2]. This routine is wrapped in scipy.linalg.qr(), which has options to return only the economic factors and to perform pivoting. The SciPy version of polyfit() uses the singular value decomposition (from which you get the pseudoinverse).

So, the two main ways of getting robust numerical algorithms for solving least-squares problems are various flavors of QR and the pseudoinverse. Naive application of some statistical software that doesn’t use these methods by default leads to problems (e.g. SAS and R). Since I don’t like doing other people’s homework I won’t go through Moler’s list, but I did want to demonstrate another method to tackle the problem using the dreaded normal equations (which are terribly conditioned in the case of the Vandermonde matrix).
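A quick way to see why the normal equations are dreaded here (a sketch; a generic ill-conditioned Vandermonde matrix stands in for the one built from the actual Filip x data): forming XᵀX squares the condition number of the design matrix, so roughly twice as many digits are lost before you even start solving.

import numpy as np 
 
# stand-in for the Filip x data: 82 points on a short interval away from zero 
x = np.linspace(-9.0, -3.0, 82) 
X = np.vander(x, 11)  # Vandermonde design matrix for a 10th-order polynomial fit 
 
kX = np.linalg.cond(X) 
kXtX = np.linalg.cond(np.dot(X.T, X))  # the normal-equations matrix 
 
print("cond(X)     = %.3e" % kX) 
print("cond(X^T X) = %.3e" % kXtX)  # roughly cond(X)**2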



The basic idea behind both numerical approaches is to build a “good” basis to describe X, so that when we write X in terms of this basis, and perform operations using the basis, we don’t lose lots of precision. We can take a more symbolic approach to the problem, and define an orthogonal basis analytically. So instead of using the monomials as in the Vandermonde matrix, we can use an orthogonal set like the Legendre polynomials. It’s easiest to use this basis if you map the x data to the interval [-1, 1].



If you read that thread from the R mailing list, you’ll recognize this approach as the one that was recommended (use the R poly function to define the model matrix with an orthogonal basis rather than the monomials which gives the Vandermonde matrix).

Here’s the Python to do the fit on the raw data just using the built-in polyfit() function.

 
# x, y hold the NIST Filip data; beta_NIST holds the NIST certified coefficients 
import scipy as sp 
# polyfit uses pseudoinverse 
nx = 3000 
p = sp.polyfit(x,y,10) 
px = sp.linspace(x.min(),x.max(),nx) 
# map x to [-1,1] 
x_map = (x.min() + x.max() - 2.0*x)/(x.min() - x.max()) 
px_map = sp.linspace(-1,1,nx) 
py = sp.polyval(p, px) 
pNISTy = sp.polyval(beta_NIST, px) 
NISTy = sp.polyval(beta_NIST, x)

Here’s the Python to do the fit using some other matrix factorizations.

 
# compare polyfit to QR, this version uses Householder reflections 
# X is the Vandermonde design matrix built from the raw x data 
from scipy.linalg import qr, solve 
Q,R = qr(X, mode='qr', econ=True)  # newer scipy spells this mode='economic' 
b = sp.dot(sp.transpose(Q),y) 
beta = solve(R,b) 
beta = sp.flipud(beta) # polyval expects the coefficients in the other order 
qry = sp.polyval(beta, px)

Finally, here is the solution using the Legendre basis to start, which avoids the necessity of generating the basis numerically using QR or the singular value decomposition (scipy.special.legendre).

 
# compare to pseudoinverse 
from scipy.linalg import pinv, solve, lu_factor, lu_solve 
from scipy.special import legendre 
Xdag = pinv(X) 
beta_pinv = sp.flipud(sp.dot(Xdag, y)) 
pinvy = sp.polyval(beta_pinv, px) 
Xmapdag = pinv(X_map)  # X_map: design matrix built from the mapped ([-1,1]) x data 
pinvy_map = sp.polyval(sp.flipud(sp.dot(Xmapdag,y)), px_map) 
 
# design matrices in the Legendre basis 
X1 = sp.zeros((x_map.shape[0],11),dtype=float) # for fitting 
X2 = sp.zeros((px_map.shape[0],11),dtype=float) # for plotting 
X1[:,0] = 1.0 
X2[:,0] = 1.0 
portho = [] 
for i in xrange(11): 
    portho.append(legendre(i)) 
for i in xrange(11): 
    X1[:,i] = portho[i](x_map) 
    X2[:,i] = portho[i](px_map) 
X1dag = pinv(X1) 
beta1_pinv = sp.dot(X1dag, y) 
pinvy_ortho = sp.dot(X2, beta1_pinv) 
# do it the wrong way (normal equations) with the well-conditioned X1 
b_legendre = sp.dot(sp.transpose(X1),y) 
beta_legendre = solve(sp.dot(sp.transpose(X1),X1), b_legendre) 
py_legendre = sp.dot(X2, beta_legendre) 
LU,piv = lu_factor(sp.dot(sp.transpose(X1),X1)) 
beta_legendre_lu = lu_solve((LU,piv),b_legendre) 
py_legendre_lu = sp.dot(X2, beta_legendre_lu)

The error norm for all of the “good” methods is roughly the same order of magnitude. Even a bad algorithm, inverting the normal equations, gives a good answer if you start out with a design matrix that is orthogonal to begin with. The QR method is much less analyst work (and works even without mapping or normalizing the data): since you just build the basis numerically it requires a bit more CPU work (which is usually cheap), and it is already coded up and well tested in freely available libraries.

References

[1]   Filippelli, A., Filip dataset, NIST Statistical Reference Datasets.

[2]   DGEQRF, LAPACK subroutine, version 3.3.1, http://www.netlib.org/lapack/double/dgeqrf.f

[3]   Moler, C. B., Numerical Computing with MATLAB, Chapter 5: Least Squares, SIAM, 2004.

Saturday, August 20, 2011

SGI Acquires OpenFoam

OpenFoam is probably the most mature open source CFD tool available. There are plenty of niche CFD projects on SourceForge, but none that have really been able to generate the user/developer community that OpenFoam has. OpenFoam recently made the news because SGI "purchased" it for an undisclosed amount.
"Computational fluid dynamics (CFD) is one of the most important application areas for technical computing," said SGI CEO Mark J. Barrenechea. "With the acquisition of OpenCFD Ltd., SGI will be able to provide our customers the market's first fully integrated CFD solution, where all the hardware and software work together."
Basically, they hired all of the developers:
The entire OpenCFD team, led by Henry Weller, has joined SGI as full-time employees and will be based out of SGI's EMEA Headquarters located in the UK. SGI Acquires Maker of Open Source CFD Software
They also created the OpenFoam Foundation to ensure the continued development of OpenFoam under a GPL license. This is a very good thing for the development of OpenFoam to really take off. The foundation means that a true commons can develop around the software. SGI is clearly using OpenFoam as widget frosting, and they are selling support. This means their incentives line up with making the open version as good as it can be rather than crippling it in deference to a paid "pro" version. Good news for open source computational fluid dynamics.

Sunday, August 14, 2011

Second HTV-2 Flight Test

The second HTV-2 test flight experienced a loss of telemetry about 9 minutes into the flight, very similar to the first test flight. The post-flight analysis from the first flight indicated that inertia coupling caused the vehicle to depart stable flight (which led the autopilot to terminate the flight). Here’s a relevant snippet from Bifurcation Analysis for the Inertial Coupling Problem of a Reentry Vehicle,

Inertial coupling has been known since around 1948 [1]. It is essentially a gyroscopic effect, occurring in high roll-rate maneuvers of modern high-speed airplanes including spinning missiles designed in such a way that most of their masses are concentrated in the fuselage. For such an airplane, a slight deviation of its control surface angle from the steady-state angle may lead to a drastic change in roll-rate, causing damage on its empennage; known as the jump phenomenon. Nonlinear analyses to elucidate this problem have been reported in [2, 3], for example. However, the airplanes treated in these works are stable around the equilibrium point of level flight. On the other hand, an intrinsically unstable reentry vehicle, if combined with a malfunction of its AFCSs, may result in a catastrophe once it falls into a high roll-rate motion.

This does sound an awful lot like what happened to both of DARPA’s HTV-2 vehicles. Departure from controlled flight due to inertia coupling was the cause of a loss of crew in the early X-2 testing [4].

Simple explanations are usually not the reason for flight test accidents or mishaps. Experience has shown that there is usually a chain of events that leads to the mishap. Day gives a great summary of the combination of contributing factors that led to the fatal X-2 test [4],

  • Optimum energy boost trajectory.
  • The rocket burn was longer than predicted by 15 sec and positioned the pilot further from Muroc Dry Lake than expected.
  • Decrease in directional stability as speed increased.
  • Decrease in directional stability with increased lift, leading to a reduced critical roll rate for inertial coupling.
  • Adverse aileron control (control reversal).
  • High positive effective dihedral.
  • Rudder locked supersonically.
  • Mass properties in window of susceptibility for inertial roll coupling.

Day describes the result of hitting an unstable region in the control parameter space:

At critical roll velocity, violent uncontrollable motions characteristic of inertial roll coupling occurred about all three axes.

At low-speeds this might be recoverable, but it is simply intractable to design a hypersonic aircraft to handle the loads this kind of rapid motion imposes on the vehicle structure. Eventually something gives, and this “jump” leads rapidly to catastrophic failure.

Day summarizes the reasoning for not modifying the X-2 for manned hypersonic flight,

The X-2 was unfortunately in the twilight zone of progress where rocket power, structural integrity, and thermodynamics were sufficiently advanced to push the aircraft to supersonic speeds. However, hydraulic controls and computerized control augmentation systems were not yet developed enough to contend with the instabilities.

And a relevant snippet from the more recent bifurcation analysis,

If it has unstable dynamics in the neighborhood of a trim point, a slight external disturbance or a slight deviation of control surface angles from their precise values at the trim point cannot allow the vehicle to stay at the trim point [5].

The flight regime that HTV-2 is operating in is unknown enough that it is difficult for designers to know before flight if the flight profile will put the vehicle into an unstable region of the control space. I thought the language that the program manager (a fellow AFIT alum btw) used to describe the post-flight analysis was interesting,

“Assumptions about Mach 20 hypersonic flight were made from physics-based computational models and simulations, wind tunnel testing, and data collected from HTV-2’s first test flight, the first real data available in this flight regime at Mach 20,” said Air Force Maj. Chris Schulz, HTV-2 program manager, who holds a doctorate in aerospace engineering. “It’s time to conduct another flight test to validate our assumptions and gain further insight into extremely high Mach regimes that we cannot fully replicate on the ground.” DARPA Press Release

I've noticed physics-based is a common meme for climate policy alarmists trying to shore up the credibility of simulation predictions. There’s that same tone of The Science Says..., which leads to unwarranted confidence in a situation of profound ignorance (unquantifiable uncertainty). I’ve also observed that program managers seem to take great comfort in the aura of a model that is “physics-based” as a talisman against criticism (“just look at these colorful fluid dynamics, you must be really stupid to question Physics…”). Of course, “physics-based” can be said of all sorts of models of varying fidelity. It is a vague enough moniker to be of great political or rhetorical use. Not that I think Schulz is one of these shady operatives, since he goes on to say,

“we wouldn't know exactly what to expect based solely on the snapshots provided in ground testing. Only flight testing reveals the harsh and uncertain reality.”

As DARPA has been well informed by recent experience, ignorance becomes apparent only when confronted with the reality of interest. “Physics-based” is no guarantee of predictive capability.

References

[1]   Abzug, M.J. and Larrabee, E.E., Airplane Stability and Control. A History of the Technologies that Made Aviation Possible, Cambridge University Press, Cambridge, U.K., Chap. 8, 1997.

[2]   Schy, A.A. and Hannah, M.E., “Prediction of jump phenomena in rolling coupled maneuvers of airplanes,” Journal of Aircraft, Vol. 14, pp. 375–382, April 1977.

[3]   Carroll, J.V. and Mehra, R.D., “Bifurcation analysis of nonlinear aircraft dynamics,” Journal of Guidance, Control and Dynamics, Vol. 5, No. 5, pp. 529–536, 1982.

[4]   Day, R.E., Coupling Dynamics in Aircraft: A Historical Perspective, NASA Special Publication 532, 1997.

[5]   Goto, N. and Kawakita, T., Advances in Dynamics and Control, CRC Press, Chap. 4, 2004.

Saturday, August 13, 2011

Latex for Blogger: Pandoc

The previous post showed some new math rendering tools for supporting Latex for Blogger. That still relied on htlatex (in the tex4ht package) for converting Latex into HTML. I just found another tool called Pandoc that converts between lots of different markup languages. Here’s the first paragraph from the manpage.

Pandoc is a Haskell library for converting from one markup format to another, and a command-line tool that uses this library. It can read markdown and (subsets of) Textile, reStructuredText, HTML, and LaTeX; and it can write plain text, markdown, reStructuredText, HTML, LaTeX, ConTeXt, RTF, DocBook XML, OpenDocument XML, ODT, GNU Texinfo, MediaWiki markup, EPUB, Textile, groff man pages, Emacs Org-Mode, and Slidy or S5 HTML slide shows.

Sounds promising. So, this post is written in Latex, and then I converted it to HTML with this command:

  [jstults@grafton pandoc]$ pandoc -f latex -t html -o pandoc.html pandoc.tex 

Here’s some test equations:
f(x) = x²

$\frac{\partial U}{\partial t} = \nabla U $

\( \Phi = \phi - \int_{t_0}^{t} f(\tau)\, d\tau \)

Friday, August 5, 2011

Computational Explosive Astrophysics

I got an update with an interesting set of slides on adaptive mesh refinement recently. I've always been more of a moving mesh kind of guy, but lots of people find the AMR approach useful. The slides are for a summer course on Computational Explosive Astrophysics. They have an index for the lecture notes where those slides came from. Last year's course was Galaxy Simulations.

Saturday, July 30, 2011

Latex for Blogger

A couple of years ago, I posted that I was using htlatex to do math-heavy posts. This worked in a pinch, the equations are all nicely rendered as *.png’s and they have all the normal Latex auto-numbering and cross-referencing. This only works for posts though, no equation support in comments. I learned about MathJax, which is a much better option than non-scaling pngs for math, and it supports equations in comments. All it requires is adding a script to the blogger template file.

  <script src='http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML' type='text/javascript'> 
  </script>

Then run htlatex with the mathml option.

  [jstults@grafton]$ htlatex foo.tex "xhtml,mathml"

This turns foo.tex into foo.html, which has the equations in MathML markup. MathJax renders the MathML so everything looks pretty.

MathJax also renders any Latex it finds on the page (including in comments!). The normal single dollar sign for inline math does not work, since this would cause trouble if people were trying to discuss money. Here's what does work though:

  % inline math in comments use: 
  $$ x + y = z $$ 
  % equations in comments use: 
  \[ x + y = z \] 
  % or (for numbered equations): 
  \begin{equation} x + y = z \end{equation}
 inline in comments: 
 \( x + y = z \)
A couple of numbered equations in the post: \begin{equation} \frac{\partial x}{\partial t} = A x \end{equation} foo \begin{equation}\frac{\partial y}{\partial t} = g(y) + f \end{equation}