Tuesday, November 17, 2009

Cut F-35 Flight Testing?

I just read an interesting article about the F-35. It repeats the standard press-release-based storyline that every major defense contractor / program office offers: "we've figured out what's wrong, we've spent gazillions on simulation (and the great-for-marketing computer graphics that result) so now we know our system so well that we can't afford not to cut flight testing."

I would have just dismissed this article as more of the "same-ole-same-ole", but at the bottom they have a quote I just love:
But scrimping on flight testing isn’t a good idea, said Bill Lawrence of Aledo, a former Marine test pilot turned aviation legal consultant.

"They build the aircraft . . . and go fly it," he said. "Then you come back and fix the things you don’t like."

No amount of computer processing power, Lawrence said, can tell Lockheed and the military what they don’t know about the F-35.

Thank God for testers. This goes to the heart of the problems I have with model validation (or the lack thereof) in climate change science.

From reading some of these posts, you might think I'm some sort of Luddite who doesn't like CFD or other high-fidelity simulations, but you'd be wrong. I love CFD; I've spent a big chunk of my (admittedly short) professional career developing, running, or consuming CFD (and similar) analyses. I just didn't fall in love with the colourful fluid dynamics. I understand that bending metal (or buying expensive stuff) based on the results of modeling unconstrained by reality is the height of folly.

That folly is exacerbated by an 'old problem'. In another article on the F-35, I found this gem:
No one should be shocked by the continuing delays and cost increases, said Hans Weber, a prominent aerospace engineering executive and consultant. "The airplane is so ambitious, it was bound to have problems."

The real problem, Weber said, is an old one. Neither defense contractors nor military or civilian leaders in government will admit how difficult a program will be and, when problems do arise, how challenging and costly they will be to fix. "It's really hard," Weber said, "for anyone to be really honest."

Defense acquisition in a nutshell. Pentagon Wars, anyone?

Defense procurement veterans said they fear that the Pentagon will be tempted to cut the flight-testing plan yet again to save money.

"You need to do the testing the engineers originally said needed to be done," Weber said. By cutting tests now, "you kick the can down the road," and someone else has to deal with problems that will inevitably arise later.



  1. Officials at the Pentagon appear poised to take a more conservative approach to the $300-billion Joint Strike Fighter program after design changes, parts shortages and out-of-sequence work severely delayed completion of development aircraft.
    --Pentagon eyes more cautious JSF test plan

  2. "...Sixteen of 168 planned flights were completed in fiscal 2009, the second year of flight testing, according to Michael Gilmore, the Pentagon’s director of weapons testing. The program calls for 5,000 sorties to prove the aircraft’s flying capabilities, electronics and software.

    The testing backlog is one reason Defense Secretary Robert Gates has delayed the program, cutting planned purchases of the plane by 122 in fiscal years 2011 through 2015."
    -- Lockheed Martin F-35 Flew 10% of Planned 2009 Tests

  3. Snippets from F-35 Faces A Troubled 2010:

    The Pentagon’s Selected Acquisition Reports (SARs) will be released and are likely to show a critical breach of Nunn-McCurdy cost-escalation limits, leading to a mandatory program review

    The Pentagon also has to provide Congress with the costs of an alternative solution to the requirement...

    Now why would Congress care about the cost of alternatives? Because that's the only thing that matters in controlling costs. Everything else is PR spin.

    DOTE reported that the completion of testing on all software blocks had been delayed by a year, along with the availability of the first F-35 with mission systems installed (BF-4).

    Modeling and simulation, which the JSF team has touted as the secret weapon that will allow the project to be completed on time and on schedule...

    That sounds reasonable, maybe they can reap schedule and test efficiency with smart use of modeling and simulation.

    But that flight testing is a long way off. F-35 high-alpha testing will not start before the fourth quarter of 2011—3.5 years after BF-1’s first flight and less than a year before the Marines are supposed to declare initial operating capability (IOC). (In contrast, F-22 high-alpha testing started in August 1999, less than two years after first flight, six years ahead of IOC and before production aircraft had been ordered).

    By early 2012, 185 jets are planned to be on order and advance procurement contracts should be signed for 156 more. And the predicted date is still two years out, from a program that hasn’t made a successful two-month prediction since 2008.

    The only way modeling and simulation would save them is if it allowed more testing sooner. That would move "discovery" to the right, and allow for adjustments and retest. As it stands the testing schedule is already spread out and slipping (and it will continue to slip).

    Simulation can only make explicit the knowledge implicit in your model choices. It can not tell you what you don't already know. Only testing can tell you when your model choices were wrong.

  4. Sorry, usually you want to move discovery to the left, not the right (if the time-axis on your schedule increases left to right).

  5. "Simulation can only make explicit the knowledge implicit in your model choices. It can not tell you what you don't already know. Only testing can tell you when your model choices were wrong."

    Well put!

    Unfortunately, and somewhat amazingly, many people fail to even understand, much less appreciate, this fact. For example, over at the Serendipity blog, the view is that a climate simulation should be treated like a scientific instrument.

    That example illustrates how deep the confusion goes for one expert in climate software engineering. It is further claimed that no climate scientist actually believes the results of simulations anyway; they are merely "tools to test our understanding" !?

    And I do not think this climate expert is atypical.

    Maybe if I had put things as you did in your quote, and extrapolated from that point of view, I would have been able to make a contribution to that blog. As it was, I was only "tilting at windmills."

  6. gmcrews, thanks for the link; nice Jaynes reference, btw.

    His work provides such a good intro to the framework (probability theory / Bayes' theorem) for understanding this stuff. The thing I find interesting is that folks who are serious about verification, validation and uncertainty quantification (and who are actively publishing in those fields) are continually trying to get closer and closer to a fully Bayesian approach without any ad hockeries. Our biggest hold-ups are limited computing power and the need for smarter quadrature methods (Jaynes was pretty critical of naive random sampling too); we still have to be pragmatic sometimes because we want answers in a reasonable amount of time.
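    Jaynes's complaint about naive random sampling is easy to demonstrate: for a smooth, low-dimensional integrand, a deterministic quadrature rule beats Monte Carlo by many orders of magnitude at the same number of function evaluations. A toy sketch (the function, sample size, and node count here are made up for illustration, not taken from the discussion above):

```python
import numpy as np

# Toy comparison of naive random sampling vs. a smarter quadrature rule.
# Goal: E[f(X)] for X ~ N(0, 1) with f(x) = exp(x); the exact answer is e**0.5.
f = np.exp
exact = np.exp(0.5)

# Naive Monte Carlo: the error shrinks only like 1/sqrt(N).
rng = np.random.default_rng(0)
mc = f(rng.standard_normal(1000)).mean()

# 10-point Gauss-Hermite quadrature: nodes/weights for the weight exp(-x**2);
# substituting x -> sqrt(2)*x turns that weight into the standard normal pdf.
nodes, weights = np.polynomial.hermite.hermgauss(10)
gh = (weights * f(np.sqrt(2.0) * nodes)).sum() / np.sqrt(np.pi)

print(f"MC error: {abs(mc - exact):.2e}, quadrature error: {abs(gh - exact):.2e}")
```

    With ten carefully placed nodes the quadrature answer is accurate to near machine precision, while a thousand random samples still carry noticeable noise. The catch, and the reason sampling survives, is that tensor-product quadrature scales badly as the number of uncertain inputs grows.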

  7. On the subject of the testing / simulation tension:
    Red Bull's Newey, regarded as arguably the best car designer in F1, admitted that CFD-only design has "pitfalls" when compared with combining the tool with wind tunnels.

    "How well it (the Virgin car) turns out, we shall see," said the Briton.

    "It is a different route, and my personal belief is that you still need to combine the two at the moment. But maybe their car will go very well and I will have to revise my opinion," added Newey.

    Newey Doubts Virgin's No Wind-Tunnel Route

  8. Hi Joshua,

    I found a gentle introduction to the complementary nature of CFD and wind tunnels on Rich Smith's Blog, along with comments on the new VR F1 team, which is using CFD only.

    It will be interesting to see how competitive the new F1 team is. The team evidently has extensive knowledge of CFD as applied to racing cars, although, IMHO, I would rather have extensive knowledge of wind-tunnel test results as applied to racing cars.


  9. Thanks for the link; good site.

    It hasn't always been this way:
    It is fair to say that computationalists and experimentalists in the field of fluid dynamics have been pioneers in the development of methodology and procedures in validation. However, it is also fair to say that the field of CFD has, in general, proceeded along a path that is largely independent of validation. There are diverse reasons why the CFD community has not perceived a strong need for code V&V, especially validation. A competitive and frequently adversarial relationship (at least in the U.S.) has often existed between computationalists (code users and code writers) and experimentalists, which has led to a lack of cooperation between the two groups. We, on the other hand, view computational simulation and experimental investigations as complementary and synergistic. To those who might say, “Isn’t that obvious?” We would answer, “It should be, but they have not always been viewed as complementary.” The “line in the sand” was formally drawn in 1975 with the publication of the article “Computers versus Wind Tunnels” [63]. We call attention to this article only to demonstrate, for those who claim it never existed, that a competitive and adversarial relationship has indeed existed in the past, particularly in the U.S. This relationship was, of course, not caused by the quoted article; the article simply brought the competition and conflict to the foreground. In retrospect, the relationship between computationalists and experimentalists is probably understandable because it represents the classic case of a new technology (computational simulation) that is rapidly growing and attracting a great deal of visibility and funding support that had been the domain of the older technology (experimentation).

    Verification and Validation in Computational Fluid Dynamics

    I think they'll probably do well. Once you have a good database of wind-tunnel and track (flight) test results, and experience 'tuning' your CFD (mainly turbulence models) to particular flows, wind-tunnel testing for new (but related) designs becomes less necessary.

    If it gets them to the track more quickly / cheaply then it's good: they used simulation to accelerate discovery (because track performance is really what we care about). Unfortunately, it's pretty easy for over-reliance on simulation to delay discovery until the errors caused by the details we assumed away become really expensive to fix.

    In my perfect world, we would use simulation as a way to let us test more, smarter, faster and sooner. In the real world I'd be lucky to get any of those, since most folks are interested in less, less, less and later.
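    The 'tuning' workflow above, calibrating a model coefficient against test data and then trusting the model for nearby designs, can be sketched in a few lines. This is a deliberately toy stand-in (quadratic drag with one free coefficient and made-up numbers), not an actual turbulence-model calibration:

```python
import random

# Toy stand-in for "tuning" a simulation against test data. The free
# coefficient c plays the role of a turbulence-model constant; the model,
# data, and numbers are all invented for illustration.
random.seed(1)

def model_drag(v, c):
    """Simple quadratic drag model: drag = c * v^2."""
    return c * v * v

# Synthetic "wind-tunnel" points generated with a true coefficient of 0.30
# plus measurement noise.
speeds = [10.0, 20.0, 30.0, 40.0]
measured = [model_drag(v, 0.30) + random.gauss(0.0, 1.0) for v in speeds]

# Least-squares calibration of c; for this one-parameter model the
# minimizer of sum((c*v^2 - d)^2) has the closed form below.
c_hat = (sum(d * v * v for v, d in zip(speeds, measured))
         / sum(v ** 4 for v in speeds))

# Use the calibrated model at a condition we never tested.
prediction = model_drag(50.0, c_hat)
print(f"calibrated c = {c_hat:.4f}, predicted drag at 50 = {prediction:.1f}")
```

    The pattern scales up: the more (and more varied) the test data behind the calibration, the further from the tested conditions you can reasonably extrapolate, which is exactly why a deep database of wind-tunnel and track results makes CFD-only development of a related design plausible.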

  10. I recently said to a friend who is a very experienced FEA practitioner (mostly non-linear stress) that the experimental literature seems to lack computational references and the computational literature seemed to lack experimental references.

    I claimed therefore, somewhat tongue-in-cheek, that there is little scientific "pressure" to believe the pretty pictures generated by the computationalists, and little insight/understanding being provided by the experimentalists.

    His reply included the following:

    The automotive companies have had such success with modeling car crashes that they only need to crash the cars necessary to prove to the government that the results are correct.

    I’m convinced that the right FEA [software] in the right hands is a very good predictive tool.

    I have seen people who mis-apply FEA — I suspect there are a lot. But that’s also true of experimenters who draw the wrong conclusions.

  11. Looks like Virgin's problems at the track have more to do with being a new team with a new car than with CFD-only design:
    ...our attention is focused on Virgin. They were first out of the blocks with their VR-01, which was developed using only Computational Fluid Dynamics (CFD). Their testing in Jerez, however, could be described as a disaster. On the first day, Timo Glock got only 5 laps in, after the heavy rain caught them out. Then, things got even worse on Thursday, when a front wing failure stopped Glock’s running after only 11 laps, which was on a heavy fuel load. His fastest lap was 10 seconds off Kobayashi’s that day. The team worked into the next day trying to repair the cause of the problem, the front wing mounting. When they finally got the car out on Friday, Lucas di Grassi could only set 8 laps, all in the wet, because of unspecified reasons. Since these 8 laps were in the wet, he was 17 seconds off the pace that day. Into Saturday, the car was much better, with di Grassi managing 63 laps all day. They were running a very heavy fuel load, so it was expected that he only ended up 8th at the end of the day, 2.5 seconds behind the leaders. If their initial pace was their actual performance, then they would count themselves lucky that the 107% rule was dropped years ago. However, I think that the Virgin has promise (no jokes!) and could be one of the better new teams this year.
    -- What We Can Learn From Jerez Test 1

  12. I guess we have Secretary Gates' answer to the question, "Cut F-35 Flight Testing?":
    On Feb. 1, Defense Secretary Robert Gates announced a dramatic reorganization of the JSF program that included extending the jet’s test schedule until 2015, shifting billions into F-35 testing, cutting procurement funds for the plane, withholding more than $600 million from JSF-maker Lockheed Martin and firing the Pentagon's F-35 program manager, Marine Corps Maj. Gen. David Heinz.
    F-35 could breach Nunn-McCurdy limits

    When the boss holds people accountable, organizational pride improves (really, beatings do improve morale).

  13. Tecplot has a little marketing piece on coupled computational and wind-tunnel analysis in car development.