Sunday, December 18, 2011

Fedora 16 Install Notes

Notes on using the pre-upgrade method to go from Fedora 14 to 16 (yes, it's risky to skip a version). The reason for my upgrade is that Fedora 14 is now at end-of-life, I'd like to get FEniCS working, and some of the new spins look pretty cool.

One of the things I had to manually fix after reboot was the TexLive development repo (only 60% complete, estimated for Fedora 17). I had this activated to get the IJ4UQ styles to work.
yum remove texlive-release
rpm -i http://jnovy.fedorapeople.org/texlive/packages.f16/texlive-release.noarch.rpm
Manually remove some old conflicting packages (some from Fedora 12, yikes! I'm a lazy sys-admin).
List the repositories to make sure everything is pointing at the correct release version.
yum repolist
Sync things up.
yum distro-sync
I got this error: "grubby fatal error: unable to find a suitable template". Following the forum advice, I did
mv /boot/grub/grub.conf /boot/grub/grub.conf.old
and then,
yum reinstall kernel
yum clean all && yum install texlive

The other problem I ran into was with the hwloc package. I had to downgrade it to version 1.2.1; the 1.3 version seems to be missing the shared library needed by mpich and scotch.

Friday, December 16, 2011

McCain's Hangar Queen

Senator McCain's colorful statements about the F-22 are getting lots of coverage.

Another example of how flawed the Pentagon’s weapons procurement process is can be found in the F-22 RAPTOR program. When the Pentagon and the defense industry originally conceived of the F-22 in the mid-1980s, they intended it to serve as a revolutionary solution to the Air Force’s need to maintain air superiority in the face of the Soviet threat during the Cold War. The F-22 obtained ‘full operational capability’ twenty years later -- well after the Soviet Union dissolved. When it finally emerged from its extended testing and development phase, the F-22 was recognized as a very capable tactical fighter, probably the best in the world for some time to come. But, plagued with developmental and technical issues that caused the cost of buying to go through the roof, not only was the F-22 twenty years in the making, but the process has proved so costly that the Pentagon could ultimately afford only 187 of the planes -- rather than the 750 it originally planned to buy.

“Unfortunately, the F-22 also ended up being effectively too expensive to operate compared to the legacy aircraft it was designed to replace. It also ended up largely irrelevant to the most predominant current threats to national security -- terrorists, insurgencies, and other non-state actors. In fact, if one were to set aside the F-22’s occasional appearances in recent big-budget Hollywood movies where it has been featured fighting aliens and giant robots, the F-22 has to this day not flown a single combat sortie -- despite that we have been at war for 10 years as of this September and recently supported a no-fly zone in Libya.

“Politically engineered to draw in over 1,000 suppliers from 44 states represented by key Members of Congress and, by the estimates of prime contractor Lockheed Martin, directly or indirectly supporting 95,000 jobs, there can be little doubt that the program kept being extended far longer than it should have been -- ultimately to the detriment of the taxpayer and the warfighter. As such, it remains an excellent example of how much our defense procurement process has been in need of reform. We may fight a near-peer military competitor with a fifth-generation fighter capability someday, but we have been at war for 10 years and until a few months ago had been helping NATO with a no-fly zone in Libya. And, this enormously expensive aircraft sat out both campaigns.

“Moreover, as a result of problems with its OBOG (On-Board Oxygen Generating) system, which has caused pilots to get dizzy or, in some cases, lose consciousness from lack of oxygen, on May 3, 2011, the Air Force grounded its entire fleet of F-22s. While this grounding was lifted earlier this year, exactly why F-22 pilots have been experiencing hypoxia remains unknown -- but similar unexplained incidents continue.

“And then, there is the issue of the sky-rocketing maintenance costs to the Air Force in trying to sustain a barely adequate ‘mission capable rate’ for the F-22. It seems that the ‘plug and play’ component maintenance features that were supposed to reduce costs for the Air Force over the life cycle of the aircraft doesn’t really play well. And, each time a panel is opened for maintenance, the costs to repair the ‘low-observable’ surface in order to maintain its stealthiness have made this critical feature of the aircraft cost-prohibitive to sustain over the long-run. Finally, it seems that the engineers and technicians designing the F-22 forgot a basic law of physics during some point of the development phase -- that dissimilar metals in contact with each other have a tendency to corrode. The Air Force is now faced with a huge maintenance headache costing over hundreds of millions of dollars -- and growing -- to keep all 168 F-22s sitting on the ramp from corroding from the inside out.

One thing is clear: because of a problem directly attributable to how aggressively the F-22 was acquired -- procuring significant quantities of aircraft without having conducted careful developmental testing and reliably estimating how much they will cost to own and operate -- the 168 F-22s, costing over $200 million each, may very well become the most expensive corroding hangar queens ever in the history of modern military aviation.

REMARKS BY SENATOR JOHN McCAIN ON THE “MILITARY-INDUSTRIAL-CONGRESSIONAL” COMPLEX

What bothers me most about the F-22 is not the maintenance problems, operational difficulties, or danger to air-crews. These require mere engineering fixes. What really bothers me is the damage these huge programs cause to rational risk assessment and national security.

"The reality is we are fighting two wars, in Iraq and Afghanistan, and the F-22 has not performed a single mission in either theater," Gates told a Senate committee last week.

Carlson ["We think that [187 planes] is the wrong number"], however, told a group of reporters earlier in the week that the Air Force was "committed to funding 380" of the fighters, regardless of the Bush administration's decision.

According to an Air Force official briefed on the Thursday rebuke, Gates telephoned Air Force Secretary Michael W. Wynne, who was on vacation at the time, to express his displeasure with Carlson.

The senior defense official said Carlson's remarks, reported Thursday by the trade publication Aerospace Daily, angered the Pentagon's top leadership, adding that they were "completely unacceptable and out of line."

Fighter Dispute Hits Stratosphere

That's not something that more money or more technology can fix.

Thursday, November 24, 2011

OpenFOAM and FEniCS Fedora Install Notes

Notes on installing OpenFOAM and FEniCS from source on Fedora 14.

Both projects implement ideas similar to ALPAL, A Livermore Physics Applications Language (see also NALPAL and why ALPAL): take high-level descriptions of partial differential equations and automatically generate code to solve them with numerical approximations based on finite-volume (OpenFOAM) or finite-element (FEniCS) methods.
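To give a flavor of the FEniCS side, here's a minimal DOLFIN Python sketch for a Poisson problem; it's hedged because the Python API has been shifting in the run-up to the 1.0 release (names like UnitSquare and the solve(a == L, ...) form may differ in your version). The weak form is written essentially as you would on paper, and FFC generates and JIT-compiles the low-level assembly code.

from dolfin import *

# unit square mesh, piecewise-linear Lagrange elements
mesh = UnitSquare(32, 32)
V = FunctionSpace(mesh, "CG", 1)

# weak form of -div(grad(u)) = f with homogeneous Dirichlet boundary conditions
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)
a = inner(grad(u), grad(v))*dx
L = f*v*dx
bc = DirichletBC(V, Constant(0.0), DomainBoundary())

uh = Function(V)
solve(a == L, uh, bc)       # in older DOLFIN releases: uh = VariationalProblem(a, L, bc).solve()
File("poisson.pvd") << uh   # view the result in paraview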

Steps for OpenFOAM

  1. Install Fedora packages for paraview, cmake, flex, qt, zlib.
    [root@deeds foo]$ yum -y install paraview cmake flex qt-devel libXt-devel zlib-devel zlib-static scotch scotch-devel openmpi openmpi-devel
    You probably already have gnuplot and readline installed, if not, install those too.
  2. Unpack the OpenFOAM tarball:
    [jstults@deeds OpenFOAM]$ tar -zxvf OpenFOAM-2.0.1.gtgz
  3. Add the environment variable definitions to your ~/.bashrc by appending the following line, then re-source it:
    source $HOME/OpenFOAM/OpenFOAM-2.0.1/etc/bashrc
    [jstults@deeds OpenFOAM]$ source ~/.bashrc
  4. Run the OpenFOAM-X.y.z/bin/foamSystemCheck script; you should get output that says "System check: PASS" and "Continue OpenFOAM installation."
  5. Go to the top-level directory; since you defined the proper environment variables, that is something like
    [jstults@deeds OpenFOAM]$ cd $WM_PROJECT_DIR
  6. Make it
    [jstults@deeds OpenFOAM-X.y.z]$ ./Allwmake
  7. Various Consequences ensue...

Steps for FEniCS

Required components: Python packages FFC, FIAT, Instant, Viper and UFL, and Python/C++ packages Dolfin and UFC

  1. Install things packaged for Fedora
    [root@deeds FEniCS]$ yum -y install python-ferari python-instant python-fiat
  2. Download the other components from launchpad, and install using
    [jstults@deeds FEniCS]$ python setup.py install --prefix=~/FEniCS
    for the Python packages, and
    [jstults@deeds FEniCS]$ cmake -DCMAKE_INSTALL_PREFIX=$HOME/FEniCS ./src
    [jstults@deeds FEniCS]$ make
    [jstults@deeds FEniCS]$ make install
    for the C++/Python packages (DOLFIN and UFC).

OR, do it automatically if you have root and internet access (I still haven't gotten this to work: a bit of buffoonery on my part, plus problems finding the Boost libraries).

  1. Download Dorsal
  2. Run the script
    [jstults@deeds FEniCS]$ ./dorsal.sh
  3. Execute the yum command in the output from the script in another terminal.
    [root@deeds FEniCS]$ yum install -y redhat-lsb bzr bzrtools subversion libxml2-devel gcc gcc-c++ openmpi-devel openmpi numpy swig wget boost-devel vtk-python atlas-devel suitesparse-devel blas-devel lapack-devel cln-devel ginac-devel python-devel cmake ScientificPython mpfr-devel armadillo-devel gmp-devel CGAL-devel cppunit-devel flex bison valgrind-devel
    I also installed the boost-openmpi-devel and boost-mpich2-devel packages. MPI on Fedora is kind of a mess. I added these two lines to my bashrc:
    module unload mpich2-i386
    module load openmpi-i386
  4. When the packages are installed, go back to the terminal running dorsal.sh and hit ENTER.

I ended up downloading the development version of dorsal to get the latest third-party software built, because it's so easy:
[jstults@deeds dorsal]$ bzr branch lp:dorsal
Pretty darn slick, Yea for distributed version control systems! Boo for dependency hell!

FEniCS is currently under heavy development towards a 1.0 release, so I expect to be able to build it on Fedora 14 Real Soon Now ; - )

Update: Johannes and friends got me straightened out.

Thursday, November 17, 2011

Qu8k Accelerometer Data

There was a pretty cool amateur rocket shot recently that was an attempt to win the Carmack micro-prize. The rocket is called Qu8k, designed and built by Derek Deville and friends. One of the stipulations of the prize is collection of a GPS location by the onboard avionics at an altitude above 100kft (as long as the velocity is low, this should theoretically not require an unrestricted GPS).

Of course, since the other stipulation of the prize is a detailed report about the shot and the data collected, this gives us number crunching nerds a neat data set to play with. Derek posted a spreadsheet of the accelerometer data, along with a simple first order integration (twice) to get velocity and altitude. I tried an FFT-based method to compare against the first order approach in the spread-sheet, and a second-order trapezoidal rule integration. The python script to do the integration and make the two plots below is on github.
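The trapezoidal version is only a couple of lines of scipy; here is a minimal sketch (the two-column text file, its name, and its units are my assumptions, the real script on github works from the spreadsheet export):

import numpy as np
from scipy.integrate import cumtrapz

# time and axial acceleration (gravity already removed) from the data dump;
# the file name and layout are placeholders
t, a = np.loadtxt("qu8k_accel.txt", unpack=True)

# second-order trapezoidal rule, integrated twice
v = np.concatenate(([0.0], cumtrapz(a, t)))   # velocity
h = np.concatenate(([0.0], cumtrapz(v, t)))   # altitude above the pad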

The errors in the numerical integration are not terrible (the plots of the trajectories using the different approaches are indistinguishable in the eye-ball norm).

One of the cool things about Python is the array masking capability. That makes implementing the temperature ratio component of the 1976 US Standard atmosphere (to estimate Mach number) 7 lines of code.
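As a hedged illustration of the idea (not the exact seven lines from the script on github), the temperature ratio for the lower layers of the 1976 standard atmosphere can be written with boolean masks instead of loops or conditionals:

import numpy as np

def theta_std_atm(h_km):
    # temperature ratio T/T0 for the 1976 US Standard Atmosphere,
    # geopotential altitude in km, valid up to about 47 km
    h = np.asarray(h_km, dtype=float)
    T = np.zeros_like(h)
    m = h <= 11.0
    T[m] = 288.15 - 6.5*h[m]              # troposphere
    m = (h > 11.0) & (h <= 20.0)
    T[m] = 216.65                         # isothermal layer
    m = (h > 20.0) & (h <= 32.0)
    T[m] = 216.65 + 1.0*(h[m] - 20.0)
    m = h > 32.0
    T[m] = 228.65 + 2.8*(h[m] - 32.0)
    return T/288.15

# Mach number from velocity v in m/s: M = v/(340.29*np.sqrt(theta_std_atm(h_km)))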

Tuesday, October 11, 2011

Notre Dame V&V Workshop

The folks at Notre Dame are putting on a Verification and Validation Workshop. The preliminary agenda is up on the workshop site.
The purpose of the workshop is to bring together a diverse group of computational scientists working in fields in which reliability of predictive computational models is important. Via formal presentations, structured discussions, and informal conversations, we seek to heighten awareness of the importance of reliable computations, which are becoming ever more critical in our world.

The intended audience is computational scientists and decision makers in fields as diverse as earth/atmospheric sciences, computational biology, engineering science, applied mechanics, applied mathematics, astrophysics, and computational chemistry.
It looks very interesting. Unfortunately, I don't think I'll be able to attend. Hopefully they will post posters and papers. Update: I was able to attend. They will be posting the presenters' slides. I will put up some of my notes when they get the slides up.

Monday, October 3, 2011

J2X Time Series

I thought the intermittent splashing of the cooling water/steam in the close-up portion of this J2-X engine test video looked interesting.

So, I used mplayer to dump frames from the video (30 fps) to jpg files (about 1800 images), and the Python Image Library to crop to a rectangle focused on the splashes.
When a splash occurs the pixels in this region become much whiter, so the whiteness of the region should give an indication of the "splashiness". I then converted the images to black and white, and averaged the pixel values to get a scalar time-series. The whole time series is shown in the plot below.
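A minimal sketch of that processing (the frame file names and the crop box are placeholders, not the exact values I used):

import glob
import numpy as np
from PIL import Image

box = (400, 300, 560, 420)   # (left, upper, right, lower) around the splash region; placeholder values
whiteness = []
for fname in sorted(glob.glob("frames/*.jpg")):
    im = Image.open(fname).crop(box).convert("L")   # crop, then 8-bit grayscale
    whiteness.append(np.asarray(im, dtype=float).mean())

t = np.arange(len(whiteness))/30.0   # 30 fps
np.savetxt("j2x_whiteness.txt", np.column_stack((t, whiteness)))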
Also, here's a text file if you want to play with the data.
Here's a PSD and autocorrelation for the section of the data excluding the start-up and shut-down transients.
Here's a recurrence plot of that section of the data.
This is a pretty short data set, but you can see that there are little "bursts" of periodic response in the recurrence plot (compare to some of the recurrence plots for Lorenz63 trajectories). I'm pretty sure this is not significant to engine development in any way, but I thought it was a neat source of time series data.
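If you want to reproduce something like the recurrence plot from the text file above, here is a minimal sketch (the frame range and the threshold are arbitrary choices of mine):

import numpy as np
import matplotlib.pyplot as plt

t, w = np.loadtxt("j2x_whiteness.txt", unpack=True)   # file name as in the sketch above
x = w[1000:1500]   # a section away from the start-up and shut-down transients (placeholder indices)

# recurrence plot: mark pairs (i, j) where |x_i - x_j| falls below a threshold
D = np.abs(x[:, None] - x[None, :])
eps = 0.1*D.max()   # arbitrary threshold
plt.imshow(D < eps, origin="lower", cmap="binary")
plt.xlabel("frame")
plt.ylabel("frame")
plt.show()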

Saturday, October 1, 2011

Discussion of V and V

A few days after I put up my little post on a bit of V&V history, Judith Curry had a post about VV&UQ which generated a lot of discussion (she has a high-traffic site, so this is not unusual) [1].

A significant part of the discussion was about a post by Steve Easterbrook on the (in)appropriateness of IV&V (“I” stands for independent) or commercial / industrial flavored V&V for climate models. As George Crews points out in discussion on Climate Etc., there’s a subtle rhetorical sleight of hand that Easterbrook uses (which works for winning web-points with the uncritical because different communities use the V-words differently, see the introduction to this post of mine). He claims it would be inappropriate to uncritically apply the IV&V processes and formal methods he’s familiar with from developing flight control software for NASA to climate model development. Of course, he’s probably right (though we should always be on the look-out to steal good tricks from wherever we can find them). This basically correct argument gets turned into “Easterbrook says V&V is inappropriate for climate models” (by all sorts of folks with various motivations, see the discussion thread on Climate Etc.). The obvious question for anyone who’s familiar with even my little Separation of V&V post is, “which definition of V or V?”

What Easterbrook is now on record as agreeing is appropriate is the sort of V&V done by the computational physics community. This is good. Easterbrook seemed to be arguing for a definition of “valid” that meant “implements the theory faithfully” [2]. This is what I’d call “verification” (are you solving the governing equations correctly?). The problem with the argument built on that definition is the conflation I pointed out at the end of this comment, which is kind of similar to the rhetorical leap mentioned in the previous paragraph and displayed on the thread at Climate Etc.

Now, his disagreement with Dan Hughes (who recommends an approach I find makes a great deal of sense, and that I've used in anger to commit arithmurgical damage on various and sundry PDEs) is that Dan thinks we should have independent V&V, and Steve thinks not. If all you care about is posing complex hypotheses then IV&V seems a waste. If you care about decision support, then it would probably be wise to invest in it (and in our networked-age much of the effort could probably be crowd-sourced, so the investment can conceivably be quite small). This actually has some parallels with the decision support dynamics I highlighted in No Fluid Dynamicist Kings in Flight Test.

One of the other themes in the discussion is voiced by Nick Stokes. He argues that all this V&V stuff doesn’t result in successful software, or that people calling for developing IV&V’d code should shut up and do it themselves. One of the funny things is that if the code is general enough that the user can specify arbitrary boundary and interior forcings, then any user can apply the method of manufactured solutions (MMS) to verify the code. What makes Nick’s comments look even more silly is that nearly all available commercial CFD codes are capable of being verified in this way. This is thanks in part to the efforts of folks like Patrick Roache for instance [3], and now it is a significant marketing point as Dan Hughes and Steven Mosher point out. Nick goes on to say that he’d be glad to hear of someone trying all this crazy V&V stuff. The fellows doing ice sheet modeling that I pointed out on the thread on Easterbrook’s are doing exactly that. This is because the activities referred to by the term “verification” are a useful set of tools for the scientific simulation developer in addition to being a credibility building exercise (as I mentioned in the previous post, see the report by Salari and Knupp [4]).

Of course doing calculation verification is the responsibility of the person doing the analysis (in sane fields of science and engineering anyway). So the “shut-up and do it yourself” response only serves to undermine the credibility of people presenting results to decision makers bereft of such analysis. On the other hand, the analyst properly has little to say on the validation question. That part of the process is owned by the decision maker.

References

[1]   Curry, J., Verification, Validation and Uncertainty Quantification in Scientific Computing, http://judithcurry.com/2011/09/25/verification-validation-and-uncertainty-quantification-in-scientific-computing/, Climate Etc., Sunday 25th September, 2011.

[2]   Easterbrook, S., Validating Climate Models, http://www.easterbrook.ca/steve/?p=2032, Serendipity, Tuesday 30th November, 2010.

[3]   Roache, P., Building PDE Codes to be Verifiable and Validatable, Computing in Science and Engineering, IEEE, Sept-Oct, 2004.

[4]   Knupp, P. and Salari, K., Code Verification by the Method of Manufactured Solutions, Tech. Rep. SAND2000-1444, Sandia National Labs, June 2000.

Thursday, September 15, 2011

The Separation of V and V

The material in this post started out as background for a magazine article, and turned into the introduction to an appendix for my dissertation [1]. Since I started learning about verification and validation, I’ve always been curious about the reasons for the sharp lines we draw between the two activities. Turns out, you can trace the historical precedent for separating the tasks of verification and validation all the way back to Lewis Fry Richardson, and George Boole spent some space writing on what we’d call verification. Though neither he nor Richardson uses the modern terms, the influence of their thought is evident in the distinct technical meanings we’ve chosen to give the two common-usage synonyms “verification” and “validation.”

The term verification is given slightly different definitions by different groups of practitioners [2]. In software engineering, the IEEE defines verification as

The process of evaluating the products of a software development phase to provide assurance that they meet the requirements defined for them by the previous phase.

while the definition now commonly accepted in the computational physics community is

The process of determining that a model implementation accurately represents the developer’s conceptual description of the model and the solution to the model. [3]

The numerical weather prediction community speaks of forecast verification, which someone using the above quoted AIAA definition would probably consider a form of validation, and a statistician might use the term validation in a way the computational physicist would probably consider verification [4]. Arguing over the definitions of words [5] which, in common use, are synonyms is contrary to progress under a pragmatic, “wrong, but useful” conception of the modeling endeavor [6]. Rather, we should be clear on our meaning in a specific context, and thus avoid talking past colleagues in related disciplines. Throughout this work, the term verification is used consistent with currently accepted definitions in the aerospace, defense and computational physics communities [3, 7, 8, 9].

In the present diagnostic effort, the forward model code seeks an approximate solution to a discretized partial differential equation. This partial differential equation (PDE) is derived from Maxwell’s equations augmented by conservation equations derived from the general hyperbolic conservation laws through analytical simplification guided by problem-specific assumptions. The purpose of formal verification procedures such as the method of manufactured solutions (MMS) is to demonstrate that the simulation code solves these chosen equations correctly. This is done by showing ordered convergence of the simulated solution in a discretization parameter (such as mesh size or time-step size).
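As a concrete (and hedged) illustration of what that means, here is a short Python sketch with made-up numbers: compute a discrete error norm against the manufactured solution on a sequence of refined grids and estimate the observed order from the slope of log(error) versus log(h).

import numpy as np

# grid spacings and MMS error norms (illustrative values, not from any particular code)
h = np.array([0.2, 0.1, 0.05, 0.025])
err = np.array([4.1e-3, 1.0e-3, 2.6e-4, 6.4e-5])

# observed order between successive grid pairs
p_pair = np.log(err[:-1]/err[1:])/np.log(h[:-1]/h[1:])

# least-squares slope of log(err) vs log(h) over all the grids
p_fit = np.polyfit(np.log(h), np.log(err), 1)[0]

print(p_pair, p_fit)   # both should hover near the formal order (about 2 for these numbers)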

Understanding the impact of truncation error on the quality of numerical solutions has been a significant concern over the entire developmental history of such methods. Although modern codes tend to use finite element, finite volume or pseudo-spectral methods as opposed to finite differences, George Boole’s concern for establishing the credibility of numerical results is generally applicable to all these methods. In his treatise, written in 1860, Boole stated

...we shall very often need to use the method of Finite Differences for the purpose of shortening numerical calculation, and here the mere knowledge that the series obtained are convergent will not suffice; we must also know the degree of approximation.

To render our results trustworthy and useful we must find the limits of the error produced by taking a given number of terms of the expansion instead of calculating the exact value of the function that gave rise thereto. [10]

In a related vein, Lewis Fry Richardson, under the heading Standards of Neglect in the 1927 paper which introduced the extrapolation method which now bears his name, stated in his characteristically colorful language

An error of form which would be negligible in a haystack would be disastrous in a lens. Thus negligibility involves both mathematics and purpose. In this paper we discuss mathematics, leaving the purposes to be discussed when they are known. [11]

This appendix follows Richardson’s advice and confines discussion to the correctness of mathematics, leaving the purposes and sufficiency of the proposed methods for comparison with other diagnostic techniques in the context of intended applications.

While the development of methods for establishing the correctness and fitness of numerical approximations is certainly of historical interest, Roache describes why this effort in code verification is more urgently important than ever before (and will only increase in importance as simulation capabilities, and our reliance on them, grow).

In an age of spreading pseudoscience and anti-rationalism, it behooves those of us who believe in the good of science and engineering to be above reproach whenever possible. Public confidence is further eroded with every error we make. Although many of society’s problems can be solved with a simple change of values, major issues such as radioactive waste disposal and environmental modeling require technological solutions that necessarily involve computational physics. As Robert Laughlin noted in this magazine, “there is a serious danger of this power [of simulations] being misused, either by accident or through deliberate deception.” Our intellectual and moral traditions will be served well by conscientious attention to verification of codes, verification of calculations, and validation, including the attention given to building new codes or modifying existing codes with specific features that enable these activities. [6]

There is a moral imperative underlying correctness checking simply because we want to tell the truth, but this imperative moves closer "to first sight" because of the uses to which our results will be put.

The language Roache uses reflects a consensus in the computational physics community that has given the name verification to the activity of demonstrating the impact of truncation error (usually involving grid convergence studies), and the name validation to the activity of determining if a code has sufficient predictive capability for its intended use [3]. Boole’s idea of “trustworthy results” clearly underlies the efforts of various journals and professional societies [3, 7, 9, 8] to promote rigorous verification of computational results. Richardson’s separation of the questions of correct math and fitness for purpose is reflected in those policies as well. In addition, the extrapolation method developed by Richardson has been generalized to support uniform reporting of verification results [12].

Two types of verification have been distinguished [13]: code verification and calculation verification. Code verification is done once for a particular code version; it demonstrates that a specific implementation solves the chosen governing equations correctly. This process can be performed on a series of grids of any size (as long as they are within the asymptotic range) with an arbitrarily chosen solution (no need for physical realism). Calculation verification, on the other hand, is an activity specific to a given scientific investigation or decision support activity. The solution in this case will be on physically meaningful grids with physically meaningful initial conditions (ICs) and boundary conditions (BCs) (therefore no a priori known solution). Rather than monitoring the convergence of an error metric, solution functionals relevant to the scientific or engineering question at hand are tracked to ensure they demonstrate convergence (and ideally, grid/time-step independence).
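To make the calculation-verification bookkeeping concrete, here is a hedged Python sketch (the functional values and refinement ratio are made-up illustrations): Richardson extrapolation of a solution functional computed on three systematically refined grids gives an observed order, an estimate of the grid-converged value, and a grid convergence index in the style of the uniform reporting advocated in [12].

import numpy as np

# a solution functional (say, a drag coefficient) on coarse, medium, and fine grids
# with constant refinement ratio r; the numbers are illustrative only
f_coarse, f_medium, f_fine = 1.032, 1.008, 1.002
r = 2.0

# observed order of convergence from the three grid levels
p = np.log((f_coarse - f_medium)/(f_medium - f_fine))/np.log(r)

# Richardson-extrapolated estimate of the grid-converged value
f_exact = f_fine + (f_fine - f_medium)/(r**p - 1.0)

# grid convergence index on the fine grid (factor of safety 1.25)
gci_fine = 1.25*abs((f_medium - f_fine)/f_fine)/(r**p - 1.0)

print(p, f_exact, gci_fine)   # p comes out near 2 for these numbers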

The approach taken in this work to achieving verification is based on heavy use of a computer algebra system (CAS) [14]. The pioneering work in using computer algebra for supporting the development of computational physics codes was performed by Wirth in 1980 [15]. This was quickly followed by other code generation efforts [16, 17, 18, 19], demonstrations of the use of symbolic math programs to support stability analysis [20], and correctness verification for symbolically generated codes solving governing equations in general curvilinear body-fitted coordinate systems [21].

The CAS handles much of the tedious and error-prone manipulation required to implement a numerical PDE solver. It also makes creating the forcing terms necessary for testing against manufactured solutions straightforward for even very complex governing equations. The MMS is a powerful tool for correctness checking and debugging. The parameters of the manufactured solution allow the magnitude of each term’s contribution to the error to be controlled. If a code fails to converge for a solution with all parameters O(1) (the recommended approach, since the hugely different parameter values which might obtain in a physically realistic solution can mask bugs), the parameter sizes can be varied in a systematic way to locate the source of the non-convergence (as convincingly demonstrated by Salari and Knupp with a blind test protocol [22]). This gives the code developer a diagnostic capability for the code itself. The error analysis can be viewed as a sort of group test [23] where the “dilution” of each term’s (member’s) contribution to the total (group) error (response) is governed by the relative sizes of the chosen parameters. Though we fit a parametric model (the error ansatz) to determine the rate of convergence, the response really is expected to be binary, as in the classic group test: either the ordered convergence rate is maintained down to the round-off plateau or it is not. The dilution only governs how high the resolution must rise (and how far the error must fall) for this behavior to be confirmed. Terms with small parameters will require convergence to very high resolution to ensure that an ordered error is not lurking below.
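The CAS cited above is Macsyma [14]; as a stand-in, here is a minimal sympy sketch of the MMS bookkeeping (the viscous Burgers equation and the particular manufactured solution are illustrations of mine, not the dissertation's governing equations): choose a solution with O(1) parameters, push it through the governing operator, and the residual becomes the source term added to the code's right-hand side.

import sympy as sym

x, t = sym.symbols('x t')
a, b, c, nu = sym.symbols('a b c nu', positive=True)

# manufactured solution with O(1) parameters a, b, c
u_m = a + b*sym.sin(c*(x - t))

# governing operator: viscous Burgers, u_t + u*u_x - nu*u_xx = 0 (illustrative only)
residual = sym.diff(u_m, t) + u_m*sym.diff(u_m, x) - nu*sym.diff(u_m, x, 2)

# the MMS source term to add to the discrete right-hand side
f = sym.simplify(residual)
print(f)
print(sym.ccode(f))   # generate C for the forcing function; sym.fcode gives Fortran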

References

[1]   Stults, J., Nonintrusive Microwave Diagnostics of Collisional Plasmas in Hall Thrusters and Dielectric Barrier Discharges, Ph.D. thesis, Air Force Institute of Technology, September 2011.

[2]   Oberkampf, W. L. and Trucano, T. G., “Verification and Validation Benchmarks,” Tech. Rep. SAND2002-0529, Sandia National Laboratories, March 2002.

[3]   “AIAA Guide for the Verification and Validation of Computational Fluid Dynamics Simulations,” Tech. rep., American Institute of Aeronautics and Astronautics, Reston, VA, 1998.

[4]   Cook, S. R., Gelman, A., and Rubin, D. B., “Validation of Software for Bayesian Models Using Posterior Quantiles,” Journal of Computational and Graphical Statistics, Vol. 15, No. 3, 2006, pp. 675–692.

[5]   Oreskes, N., Shrader-Frechette, K., and Belitz, K., “Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences,” Science, Vol. 263, No. 5147, 1994, pp. 641–646.

[6]   Roache, P. J., “Building PDE Codes to be Verifiable and Validatable,” Computing in Science and Engineering, Vol. 6, No. 5, September 2004.

[7]   “Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer,” Tech. rep., American Society of Mechanical Engineers, 2009.

[8]   AIAA, “Editorial Policy Statement on Numerical and Experimental Uncertainty,” AIAA Journal, Vol. 32, No. 1, 1994, pp. 3.

[9]   Freitas, C., “Editorial Policy Statement on the Control of Numerical Accuracy,” ASME Journal of Fluids Engineering, Vol. 117, No. 1, 1995, pp. 9.

[10]   Boole, G., A Treatise on the Calculus of Finite Differences, Macmillian and co., London, 3rd ed., 1880.

[11]   Richardson, L. F. and Gaunt, J. A., “The Deferred Approach to the Limit. Part I. Single Lattice. Part II. Interpenetrating Lattices,” Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, Vol. 226, No. 636-646, 1927, pp. 299–361.

[12]   Roache, P. J., “Perspective: A method for uniform reporting of grid refinement studies,” Journal of Fluids Engineering, Vol. 116, No. 3, September 1994.

[13]   Roache, P. J., Verification and Validation in Computational Science and Engineering, Hermosa Publishers, Albuquerque, 1998.

[14]   Fateman, R. J., “A Review of Macsyma,” IEEE Trans. on Knowl. and Data Eng., Vol. 1, March 1989, pp. 133–145.

[15]   Wirth, M. C., On the Automation of Computational Physics, Ph.D. thesis, University of California, Davis, 1980.

[16]   Cook, G. O., Development of a Magnetohydrodynamic Code for Axisymmetric, High-β Plasmas with Complex Magnetic Fields, Ph.D. thesis, Brigham Young University, December 1982.

[17]   Steinberg, S. and Roache, P. J., “Using MACSYMA to Write FORTRAN Subroutines,” Journal of Symbolic Computation, Vol. 2, No. 2, 1986, pp. 213–216.

[18]   Steinberg, S. and Roache, P., “Using VAXIMA to Write FORTRAN Code,” Applications of Computer Algebra, edited by R. Pavelle, Kluwer Academic Publishers, August 1984, pp. 74–94.

[19]   Florence, M., Steinberg, S., and Roache, P., “Generating subroutine codes with MACSYMA,” Mathematical and Computer Modelling, Vol. 11, 1988, pp. 1107 – 1111.

[20]   Wirth, M. C., “Automatic generation of finite difference equations and fourier stability analyses,” SYMSAC ’81: Proceedings of the fourth ACM symposium on Symbolic and algebraic computation, ACM, New York, NY, USA, 1981, pp. 73–78.

[21]   Steinberg, S. and Roache, P. J., “Symbolic manipulation and computational fluid dynamics,” Journal of Computational Physics, Vol. 57, No. 2, 1985, pp. 251 – 284.

[22]   Knupp, P. and Salari, K., “Code Verification by the Method of Manufactured Solutions,” Tech. Rep. SAND2000-1444, Sandia National Labs, June 2000.

[23]   Dorfman, R., “The Detection of Defective Members of Large Populations,” The Annals of Mathematical Statistics, Vol. 14, No. 4, 1943, pp. 436–440.

Friday, September 2, 2011

Airframer for Dayton Aerospace Cluster

In my previous post on the Dayton Aerospace Cluster, I mentioned that one of the big missing pieces is an air-framer. The DDN recently reported that a Fortune 100 defense contractor that currently makes UAV components, but does not yet build full airframes, is interested in locating a manufacturing / test facility in the Dayton region.

Here's some of the defense contractors on the Fortune 100:
See the full Fortune 500 list. The most interesting one on the list is GD, since two of the three joint ventures listed are with Israeli companies, and Dayton region representatives signed a trade deal in 2009 with representatives from Haifa.

Thursday, September 1, 2011

Fun with Filip

The Filippelli dataset [1] from the NIST Statistical Reference Datasets came up in discussions about ill-conditioned problems over on the Air Vent. Fitting the certified polynomial is example problem 5.10 in Moler’s MATLAB book [3]. Part of the problem is to compare five different methods of fitting the 10th-order polynomial

  1. MATLAB’s polyfit (which does QR on the Vandermonde matrix)
  2. MATLAB’s backslash operator (which will do QR with pivoting)
  3. Pseudoinverse
    β = X⁺y    (1)

  4. Normal equations
    β = (XᵀX)⁻¹Xᵀy    (2)

  5. Centering by the mean and normalizing by the standard deviation

Carrick already demonstrated the centering approach in the comments. I was surprised how well the simple centering approach worked (it’s been quite a while since I’ve done this homework problem). Carrick mentioned that Gram-Schmidt might be another way to go, and I mentioned that QR using rotations or reflections is usually preferred for numerical stability reasons. One of the trade-offs is that Gram-Schmidt gives you the basis one vector at a time, but the other methods give you Q all at once at the end. Depending on the problem, one may be more useful than the other.
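For completeness, here's a short sketch of the centering/normalizing approach (method 5 above), which is essentially what Carrick showed; x and y are the Filip data as loaded for the snippets below:

import scipy as sp

# center by the mean and scale by the standard deviation, then fit and
# evaluate in the transformed variable
z = (x - x.mean())/x.std()
p_c = sp.polyfit(z, y, 10)
yhat = sp.polyval(p_c, z)
print(sp.sqrt(((yhat - y)**2).mean()))   # RMS residual at the data points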

The LAPACK routine DGEQRF uses the Householder reflection method to calculate the QR decomposition [2]. This routine is wrapped in scipy.linalg.qr(), and it has options to return only the economic factors and to perform pivoting. The Scipy version of polyfit() uses the singular value decomposition (from which you get the pseudoinverse).

So, the two main ways of getting robust numerical algorithms for solving least-squares problems are various flavors of QR and the pseudoinverse. Naive application of some statistical software that doesn’t use these methods by default leads to problems (e.g. SAS and R). Since I don’t like doing other people’s homework I won’t go through Moler’s list, but I did want to demonstrate another method to tackle the problem using the dreaded normal equations (which are terribly conditioned in the case of the Vandermonde matrix).


The basic idea behind both numerical approaches is to build a “good” basis to describe X, so that when we write X in terms of this basis, and perform operations using the basis, we don’t lose lots of precision. We can take a more symbolic approach to the problem, and define an orthogonal basis analytically. So instead of using the monomials as in the Vandermonde matrix, we can use an orthogonal set like the Legendre polynomials. It’s easiest to use this basis if you map the x data to the interval [-1, 1].



If you read that thread from the R mailing list, you’ll recognize this approach as the one that was recommended (use the R poly function to define the model matrix with an orthogonal basis rather than the monomials which gives the Vandermonde matrix).

Here’s the Python to do the fit on the raw data just using the built-in polyfit() function.

 
import scipy as sp
# x, y hold the Filippelli data; beta_NIST holds the NIST-certified coefficients
# polyfit uses the pseudoinverse (SVD)
nx = 3000
p = sp.polyfit(x, y, 10)
px = sp.linspace(x.min(), x.max(), nx)
# map x to [-1, 1]
x_map = (x.min() + x.max() - 2.0*x)/(x.min() - x.max())
px_map = sp.linspace(-1, 1, nx)
py = sp.polyval(p, px)
pNISTy = sp.polyval(beta_NIST, px)
NISTy = sp.polyval(beta_NIST, x)

Here’s the Python to do the fit using some other matrix factorizations.

 
from scipy.linalg import qr, solve
# compare polyfit to QR; scipy.linalg.qr uses Householder reflections (DGEQRF)
# X is the design matrix with columns x**0 ... x**10
Q, R = qr(X, mode='economic')
b = sp.dot(sp.transpose(Q), y)
beta = solve(R, b)
beta = sp.flipud(beta)  # polyval expects the coefficients highest power first
qry = sp.polyval(beta, px)

Finally, here is the solution using the Legendre basis to start, which avoids the necessity of generating the basis numerically using QR or the singular value decomposition (scipy.special.legendre).

 
from scipy.linalg import pinv, lu_factor, lu_solve
from scipy.special import legendre

# compare to the pseudoinverse (on the raw and the mapped monomial design matrices)
Xdag = pinv(X)
beta_pinv = sp.flipud(sp.dot(Xdag, y))
pinvy = sp.polyval(beta_pinv, px)
Xmapdag = pinv(X_map)   # X_map: monomial design matrix built from x_map
pinvy_map = sp.polyval(sp.flipud(sp.dot(Xmapdag, y)), px_map)

# build design matrices in the Legendre basis on the mapped data
X1 = sp.zeros((x_map.shape[0], 11), dtype=float)   # for fitting
X2 = sp.zeros((px_map.shape[0], 11), dtype=float)  # for plotting
X1[:, 0] = 1.0
X2[:, 0] = 1.0
portho = []
for i in xrange(11):
    portho.append(legendre(i))
for i in xrange(11):
    X1[:, i] = portho[i](x_map)
    X2[:, i] = portho[i](px_map)

# pseudoinverse with the orthogonal basis
X1dag = pinv(X1)
beta1_pinv = sp.dot(X1dag, y)
pinvy_ortho = sp.dot(X2, beta1_pinv)

# do it the "wrong" way with X1: form and solve the normal equations
b_legendre = sp.dot(sp.transpose(X1), y)
beta_legendre = solve(sp.dot(sp.transpose(X1), X1), b_legendre)
py_legendre = sp.dot(X2, beta_legendre)
LU, piv = lu_factor(sp.dot(sp.transpose(X1), X1))
beta_legendre_lu = lu_solve((LU, piv), b_legendre)
py_legendre_lu = sp.dot(X2, beta_legendre_lu)

The error norm for all of the “good” methods is roughly the same order of magnitude. Even using a bad algorithm, inverting the normal equations, gives a good answer if you start out with a design matrix that is orthogonal to begin with. The QR method is much less analyst-work (and works even without mapping or normalizing the data). Since you just build the basis numerically it requires a bit more cpu-work (which is usually cheap), and it is already coded up and well tested in freely available libraries.

References

[1]   Filippelli, A., “Filip” dataset, NIST Statistical Reference Datasets.

[2]   DGEQRF, LAPACK subroutine, version 3.3.1, http://www.netlib.org/lapack/double/dgeqrf.f

[3]   Moler, C. B., Numerical Computing with MATLAB, Chapter 5: Least Squares, SIAM, 2004.

Saturday, August 20, 2011

SGI Acquires OpenFoam

OpenFoam is probably the most mature open source CFD tool available. There are plenty of niche CFD projects on SourceForge, but none that have really been able to generate the user/developer community that OpenFoam has. OpenFoam recently made the news because SGI "purchased" it for an undisclosed amount.
"Computational fluid dynamics (CFD) is one of the most important application areas for technical computing," said SGI CEO Mark J. Barrenechea. "With the acquisition of OpenCFD Ltd., SGI will be able to provide our customers the market's first fully integrated CFD solution, where all the hardware and software work together."
Basically, they hired all of the developers:
The entire OpenCFD team, led by Henry Weller, has joined SGI as full-time employees and will be based out of SGI's EMEA Headquarters located in the UK. SGI Acquires Maker of Open Source CFD Software
They also created the OpenFoam Foundation to ensure the continued development of OpenFoam under a GPL license. This is a very good thing for the development of OpenFoam to really take off. The foundation means that a true commons can develop around the software. SGI is clearly using OpenFoam as widget frosting, and they are selling support. This means their incentives line up with making the open version as good as it can be rather than crippling it in deference to a paid "pro" version. Good news for open source computational fluid dynamics.

Sunday, August 14, 2011

Second HTV-2 Flight Test

The second HTV-2 test flight experienced a loss of telemetry about 9 minutes into the flight, very similar to the first test flight. The post-flight analysis from the first flight indicated that inertia coupling caused the vehicle to depart stable flight (which led the autopilot to terminate the flight). Here’s a relevant snippet from Bifurcation Analysis for the Inertial Coupling Problem of a Reentry Vehicle,

Inertial coupling has been known since around 1948 [1]. It is essentially a gyroscopic effect, occurring in high roll-rate maneuvers of modern high-speed airplanes including spinning missiles designed in such a way that most of their masses are concentrated in the fuselage. For such an airplane, a slight deviation of its control surface angle from the steady-state angle may lead to a drastic change in roll-rate, causing damage on its empennage; known as the jump phenomenon. Nonlinear analyses to elucidate this problem have been reported in [2, 3], for example. However, the airplanes treated in these works are stable around the equilibrium point of level flight. On the other hand, an intrinsically unstable reentry vehicle, if combined with a malfunction of its AFCSs, may result in a catastrophe once it falls into a high roll-rate motion.

This does sound an awful lot like what happened to both of DARPA’s HTV-2 vehicles. Departure from controlled flight due to inertia coupling was the cause of a loss of crew in the early X-2 testing [4].

Simple explanations are usually not the reason for flight test accidents or mishaps. Experience has shown that there is usually a chain of events that leads to the mishap. Day gives a great summary of the combination of contributing factors that led to the fatal X-2 test [4],

  • Optimum energy boost trajectory.
  • The rocket burn was longer than predicted by 15 sec and positioned the pilot further from Muroc Dry Lake than expected.
  • Decrease in directional stability as speed increased.
  • Decrease in directional stability with increased lift, leading to a reduced critical roll rate for inertial coupling.
  • Adverse aileron control (control reversal).
  • High positive effective dihedral.
  • Rudder locked supersonically.
  • Mass properties in window of susceptibility for inertial roll coupling.

Day describes the result of hitting an unstable region in the control parameter space:

At critical roll velocity, violent uncontrollable motions characteristic of inertial roll coupling occurred about all three axes.

At low-speeds this might be recoverable, but it is simply intractable to design a hypersonic aircraft to handle the loads this kind of rapid motion imposes on the vehicle structure. Eventually something gives, and this “jump” leads rapidly to catastrophic failure.

Day summarizes the reasoning for not modifying the X-2 for manned hypersonic flight,

The X-2 was unfortunately in the twilight zone of progress where rocket power, structural integrity, and thermodynamics were sufficiently advanced to push the aircraft to supersonic speeds. However, hydraulic controls and computerized control augmentation systems were not yet developed enough to contend with the instabilities.

And a relevant snippet from the more recent bifurcation analysis,

If it has unstable dynamics in the neighborhood of a trim point, a slight external disturbance or a slight deviation of control surface angles from their precise values at the trim point cannot allow the vehicle to stay at the trim point [5].

The flight regime that HTV-2 is operating in is unknown enough that it is difficult for designers to know before flight if the flight profile will put the vehicle into an unstable region of the control space. I thought the language that the program manager (a fellow AFIT alum btw) used to describe the post-flight analysis was interesting,

“Assumptions about Mach 20 hypersonic flight were made from physics-based computational models and simulations, wind tunnel testing, and data collected from HTV-2’s first test flight -- the first real data available in this flight regime at Mach 20,” said Air Force Maj. Chris Schulz, HTV-2 program manager, who holds a doctorate in aerospace engineering. “It’s time to conduct another flight test to validate our assumptions and gain further insight into extremely high Mach regimes that we cannot fully replicate on the ground.” DARPA Press Release

I’ve noticed “physics-based” is a common meme for climate policy alarmists trying to shore up the credibility of simulation predictions. There’s that same tone of, The Science Says..., which leads to unwarranted confidence in a situation of profound ignorance (unquantifiable uncertainty). I’ve also observed that program managers seem to take great comfort in the aura of a model that is “physics-based” as a talisman against criticism (“just look at these colorful fluid dynamics, you must be really stupid to question Physics…”). Of course, “physics-based” can be said of all sorts of models of varying fidelity. It is a vague enough moniker to be of great political or rhetorical use. Not that I think Schulz is one of these shady operatives since he goes on to say,

“we wouldn't know exactly what to expect based solely on the snapshots provided in ground testing. Only flight testing reveals the harsh and uncertain reality.”

As DARPA has been well informed by recent experience, ignorance becomes apparent only when confronted with the reality of interest. “Physics-based” is no guarantee of predictive capability.

References

[1]   Abzug, M.J. and Larrabee, E.E., Airplane Stability and Control. A History of the Technologies that Made Aviation Possible, Cambridge University Press, Cambridge, U.K., Chap. 8, 1997.

[2]   Schy, A.A. and Hannah, M.E., “Prediction of jump phenomena in rolling coupled maneuvers of airplanes,” Journal of Aircraft, Vol. 14, pp. 375–382, April 1977.

[3]   Carroll, J.V. and Mehra, R.D., “Bifurcation analysis of nonlinear aircraft dynamics,” Journal of Guidance, Control and Dynamics, Vol. 5, No. 5, pp. 529–536, 1982.

[4]   Day, R.E., Coupling Dynamics in Aircraft: A Historical Perspective, NASA Special Publication 532, 1997.

[5]   Goto, N. and Kawakita, T., Advances in Dynamics and Control, CRC Press, Chap. 4, 2004.