Wednesday, January 15, 2014

SU2 v3 Released

The folks at Stanford Aerospace Design Lab have released a new major version of Stanford University Unstructured (SU2). Here's the announcement:
Dear Colleague,

Since its introduction in January 2012, SU2, The Open-Source CFD Code, has been downloaded thousands of times by users and developers in academia, government, and industry, including many leading companies and universities. As an open-source project, the growth of active user and developer communities is a crucial goal for SU2. Given the incredibly positive response, we are pleased to announce a new version of the code with major improvements and an entirely new package for educational purposes.

This release marks the third major version of the SU2 open-source code. SU2 is a collection of C++ software tools for performing Partial Differential Equation (PDE) analysis and for solving PDE-constrained optimization problems, with special emphasis on Computational Fluid Dynamics (CFD) and aerodynamic shape design.

We'd like to ask you to please distribute this announcement with the attached flyer to any colleagues and students in your department who might be interested.

Version 3.0 has a number of major additional capabilities:

• Adjoint-based RANS shape optimization.
• New unsteady analysis and design optimization capability.
• Upgrades to the underlying parallelization and file I/O.
• Significant improvements to the accuracy, performance, and robustness of the software suite.

Alongside Version 3.0 of SU2, we are introducing SU2 Educational (SU2_EDU): a new, educational version of the Euler/Navier-Stokes/RANS solver from the SU2 suite. The simplified structure of SU2_EDU makes it suitable for students and beginners in CFD. By focusing on a handful of key numerical methods and capabilities, SU2_EDU is ideal for use in CFD courses, for independent studies, or just to learn about a new field!

SU2_EDU is also geared toward anyone interested in high-fidelity airfoil analysis. The initial version of SU2_EDU is an intuitive, easy-to-use tool that requires only the airfoil coordinates to compute airfoil performance in inviscid, laminar, or turbulent flow, including nonlinear effects in the transonic regime.
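For a sense of what a high-fidelity airfoil tool improves on, the classical baseline is thin airfoil theory, which predicts the lift coefficient from the angle of attack alone. This little sketch (my own illustration, not part of SU2_EDU) shows the textbook formula, valid only for small angles in incompressible, inviscid flow — exactly the regime where a RANS solver is overkill:

```python
import math

def thin_airfoil_cl(alpha_deg, alpha_zero_lift_deg=0.0):
    """Lift coefficient from classical thin airfoil theory:
    cl = 2*pi*(alpha - alpha_L0), with angles in radians.

    Valid only for thin airfoils at small angles of attack in
    incompressible, inviscid flow; transonic nonlinearities (the
    regime SU2_EDU targets) are far outside this model."""
    alpha = math.radians(alpha_deg - alpha_zero_lift_deg)
    return 2.0 * math.pi * alpha

# A symmetric airfoil at 5 degrees angle of attack:
print(round(thin_airfoil_cl(5.0), 3))  # ~0.548
```

Everything beyond this one-liner — viscosity, turbulence, shocks — is what the full nonlinear solver is for.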

Finally, we would like to thank the open-source community for their interest, help, and support.

The SU2 team

One of the most interesting parts to me is the new SU2_EDU version. I've downloaded the code, but haven't had a chance to browse it or run any examples yet. I think this is a neat idea that will hopefully lower the barriers to entry that George pointed out previously.

Tuesday, January 14, 2014

CFD Vision 2030: Discretizations, Solvers, and Numerics

There are lots of interesting parts to the study that Phil Roe mentioned in his Colorful Fluid Dynamics lecture. Continuing the theme that algorithm improvements are just as important as hardware improvements, here are some of the areas concerning discretizations, solvers, and numerics (p. 24) that the report claims will lower the need for high levels of human expertise and intervention in running and understanding CFD analysis:
  1. Incomplete or inconsistent convergence behavior: "There are many possible reasons for failure, ranging from poor grid quality to the inability of a single algorithm to handle singularities such as strong shocks, under-resolved features, or stiff chemically reacting terms. What is required is an automated capability that delivers hands-off solid convergence under all reasonable anticipated flow conditions with a high tolerance to mesh irregularities and small scale unsteadiness."
  2. Algorithm efficiency and suitability for emerging HPC: "In order to improve simulation capability and to effectively leverage new HPC hardware, foundational mathematical research will be required in highly scalable linear and non-linear solvers not only for commonly used discretizations but also for alternative discretizations, such as higher-order techniques. Beyond potential advantages in improved accuracy per degree of freedom, higher-order methods may more effectively utilize new HPC hardware through increased levels of computation per degree of freedom."
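The basic building block behind the "hands-off convergence" the report asks for is residual monitoring: iterate, measure how far the discrete equations are from satisfied, and stop when the residual drops below tolerance. Here is a minimal sketch of that loop (my own toy example, not from the report), using a Jacobi sweep on the 1D Poisson problem -u'' = 1 with u(0) = u(1) = 0:

```python
def jacobi_poisson(n=50, tol=1e-6, max_iter=100000):
    """Solve the discrete 1D Poisson problem A u = h^2 f with a
    residual-monitored Jacobi iteration, returning (u, iterations)."""
    h = 1.0 / (n + 1)
    u = [0.0] * n
    f = [1.0] * n
    for it in range(1, max_iter + 1):
        # Jacobi update: u_i <- (u_{i-1} + u_{i+1} + h^2 f_i) / 2
        new = [((u[i - 1] if i > 0 else 0.0)
                + (u[i + 1] if i < n - 1 else 0.0)
                + h * h * f[i]) / 2.0 for i in range(n)]
        # Infinity-norm residual of the discrete system
        res = max(abs(2 * new[i]
                      - (new[i - 1] if i > 0 else 0.0)
                      - (new[i + 1] if i < n - 1 else 0.0)
                      - h * h * f[i]) for i in range(n))
        u = new
        if res < tol:
            return u, it
    return u, max_iter

u, iters = jacobi_poisson()
print(f"converged in {iters} Jacobi iterations")
```

Even on this trivial linear problem, Jacobi needs thousands of sweeps, which is a small taste of why the report calls for foundational research in scalable solvers; for a real nonlinear RANS system with shocks and stiff source terms, robust automated convergence is far harder.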

Monday, January 13, 2014

Flight Demo Program Lessons Learned

In the BAA for the DARPA XS-1 program there is a presentation by Jess Sponable about lessons learned from previous flight demonstration programs. It takes a certain level of audacity to quote Machiavelli in a presentation on program management, but the quote is pretty applicable to any new system development (though I agree with Strauss: it must be remembered that Machiavelli teaches wickedness).
It must be remembered that there is nothing more difficult to plan, more doubtful of success, nor more dangerous to manage than creation of a new system. For the initiator has the enmity of all who would profit by the preservation of the old institutions, and merely lukewarm defenders in those who would gain by the new ones.
The Prince, Machiavelli, 1513

Here are the rules compiled based on previous flight demonstration program experience:
  1. Agree to clearly defined program objectives in advance
  2. Single manager under one agency
  3. Small government and contractor program offices
  4. Build competitive hardware, not paper
  5. Focus on key demonstrations, not everything
  6. Streamlined documentation and reviews
  7. Contractor integrates and tests prototype
  8. Develop minimum realistic funding profiles
  9. Track cost/schedule in near real time
  10. Mutual trust essential
The two that jump out at me are 'single manager under one agency' and 'contractor integrates and tests prototype' (which is really about constraining the size and cost of the test program). Programs like the National Aerospace Plane or Project Timberwind come to mind as falling prey to violating these two rules. Both programs expended a great deal of effort coordinating and reconciling the often conflicting interests of multiple federal agencies. Even in the happy event that the interests of the cooperating agencies perfectly align, multi-agency participation almost always ensures more bureaucracy. Those programs also spent, or planned to spend, enormous resources on ground test and specialized supporting infrastructure. In fact, the ballooning cost of the ground test facility for Timberwind was a significant contributing factor in its cancellation.

Friday, January 10, 2014

RAND: no life-cycle cost savings from joint aircraft

RAND has a recent report out examining the historical performance of joint (i.e. multi-service) aircraft development programs. Their key findings are:
  • Joint aircraft programs have not historically saved overall life cycle cost. On average, such programs experienced substantially higher cost growth in acquisition (research, development, test, evaluation, and procurement) than single-service programs. The potential savings in joint aircraft acquisition and operations and support compared with equivalent single-service programs is too small to offset the additional average cost growth that joint aircraft programs experience in the acquisition phase.

  • The difficulty of reconciling diverse service requirements in a common design is a major factor in joint cost outcomes. Diverse service requirements and operating environments work against commonality, which is the source of potential cost savings, and are a major contributor to the joint acquisition cost-growth premium identified in the cost analysis.

  • Historical analysis suggests joint programs are associated with contraction of the industrial base and a decline in potential future industry competition, as well as increased strategic and operational risk due to dependency across the services on a single type of weapon system which may experience unanticipated safety, maintenance, or performance issues with no alternative readily available.
Here's the Abstract:
In the past 50 years, the U.S. Department of Defense has pursued numerous joint aircraft programs, the largest and most recent of which is the F-35 Joint Strike Fighter (JSF). Joint aircraft programs are thought to reduce Life Cycle Cost (LCC) by eliminating duplicate research, development, test, and evaluation efforts and by realizing economies of scale in procurement, operations, and support. But the need to accommodate different service requirements in a single design or common design family can lead to greater program complexity, increased technical risk, and common functionality or increased weight in excess of that needed for some variants, potentially leading to higher overall cost, despite these efficiencies. To help Air Force leaders (and acquisition decisionmakers in general) select an appropriate acquisition strategy for future combat aircraft, this report analyzes the costs and savings of joint aircraft acquisition programs. The project team examined whether historical joint aircraft programs have saved LCC compared with single-service programs. In addition, the project team assessed whether JSF is on track to achieving the joint savings originally anticipated at the beginning of full-scale development. Also examined were the implications of joint fighter programs for the health of the industrial base and for operational and strategic risk.
JSF is now expected to be more expensive than three F-22-like single-service programs.

Thursday, January 9, 2014

Phil Roe: Colorful Fluid Dynamics

Echoes of Tufte in one of his introductory statements: "It's full of noise, it's full of color, it's spectacular, it's intended to blow your mind away, it's intended to disarm criticism." And further on the dangers of "colorful fluid dynamics":
These days it is common to see a complicated flow field, predicted with all the right general features and displayed in glorious detail that looks like the real thing. Results viewed in this way take on an air of authority out of proportion to their accuracy.
--Doug McLean
This lecture is sponsored by MConneX.

Roe wraps up the lecture by referencing a NASA-sponsored study, CFD Vision 2030, that addresses whether CFD will be able to reliably predict turbulent separated flows by 2030. The conclusion is that advances in hardware capability alone will not be enough, and that significant improvements in numerical algorithms are required.

Wednesday, January 8, 2014

Algorithmic Improvements: just as important as Moore's Law

There were a couple interesting comments on slashdot recently about future computing technologies that might allow us to enjoy the continued price/performance improvements in computing and avoid the end of Moore's Law. Here's one that highlights some promising emerging technologies (my emphasis):
I see many emerging technologies that promise further great progress in computing. Here are some of them. I wish some industry people here could post some updates about their way to the market. They may not literally prolong Moore's Law in regard to the number of transistors, but they promise great performance gains, which is what really matters.

3D chips. As materials science and manufacturing precision advances, we will soon have multi-layered (starting at a few layers that Samsung already has, but up to 1000s) or even fully 3D chips with efficient heat dissipation. This would put the components closer together and streamline the close-range interconnects. Also, this increases "computation per rack unit volume", simplifying some space-related aspects of scaling.

Memristors. HP is ready to produce the first memristor chips but delays that for business reasons (how sad is that!). Others are also preparing products. Memristor technology enables a new approach to computing, combining memory and computation in one place. They are also quite fast (competitive with current RAM) and energy-efficient, which means easier cooling and a possible 3D layout.

Photonics. Optical buses are finding their way into computers, and network hardware manufacturers are looking for ways to perform some basic switching directly with light. Some day these two trends may converge to produce an optical computer chip that would be free from the limitations of electrical resistance/heat and EM interference, and could thus operate at a higher clock speed. It would be more energy efficient, too.

Spintronics. Probably further in the future, but potentially very high-density and low-power technology actively developed by IBM, Hynix and a bunch of others. This one would push our computation density and power efficiency limits to another level, as it allows performing some computation using magnetic fields, without electrons actually moving in electrical current (excuse me for my layman understanding).

Quantum computing. This could qualitatively speed up whole classes of tasks, potentially bringing AI and simulation applications to new levels of performance. The only commercial offer so far is Dwave, and it's not a classical QC, but so many labs are working on that, the results are bound to come soon.

I think Moore's Law is a steamroller. But, like the genomics sequencing technology highlighted in that post on Nuit Blanche, there are improvement curves just as fast as, or faster than, Moore's Law. The improvements from better algorithms can yield exponential speed-ups too. Here's a graph (from this report) depicting the orders-of-magnitude improvement in linear solver performance:
Couple these software improvements with continually improving hardware and things get pretty exciting. I'm happy to live in these interesting times!