Tuesday, December 30, 2014

Starscraper Sounding Rocket Kickstarter

The Boston University Rocket Propulsion Group (BURPG) is developing a sounding rocket designed to top 150 km and is funding it partially through a Kickstarter campaign. They plan on launching from Black Rock next year, like Qu8k did, but this is a more ambitious and complex effort.

The rocket will be controlled using fluid injection thrust vectoring. The thrust of their hybrid motor is comparable to Qu8k's, but the vehicle is significantly larger (30 vs. 14 ft long, 12 vs. 8 in diameter) and heavier (1,100 vs. 320 lb), and it aims higher (150 km vs. 120 kft). It's hard to tell, but it also seems to be roughly an order of magnitude more expensive.

The advantage the BURPG folks claim for their concept over traditional solid fuel sounding rockets is a gentler ride for payloads on the longer, smoother burning hybrid.

Saturday, December 20, 2014

Gaussian Processes for Machine Learning

What a great resource for learning about Gaussian Processes: The Gaussian Processes Web Site.

Saturday, December 6, 2014

Fully Scripted Open Source Topology Optimization

Helical Extruder Gear for Printrbot with Optimized Topology

I've used a couple of different methods for stringing together open source tools to do topology optimization, but they have all required some interactive user input (see my previous posts demonstrating those manual methods). Those approaches are fine if you've got time to fiddle with interactive software, but I wanted to do some parametric studies, so I needed an automated approach that would scale to lots and lots of optimizations.

Thursday, November 20, 2014

119 Open Source Aeronautical Engineering Tools

*Permanent page with updates: Open Source Aeronautical Engineering Tools*
I posted a list of 33 open source aeronautical engineering tools on LinkedIn a couple of days ago. One of the comments was a question about how open they all really were, so I added a column to the list for the license and any non-free dependencies (e.g. Matlab). I went ahead and made an entry for each of the pieces of software from Ralph Carmichael's PDAS collection, which added 84 public domain pieces of software. In addition, there are 23 with various flavors of GNU licenses, 4 BSD-style, and 3 NASA Open Source Agreement (NOSA) codes. See the whole list below the fold. Please suggest adds/changes/deletes in the comments.

Monday, August 18, 2014

Validation & Verification in Physics of Plasmas

Physics of Plasmas is making a collection of 20 papers on verification and validation available for free download for a limited time.
Theoretical models, both analytical and numerical, are playing an increasingly important role in predicting complex plasma behavior, and providing a scientific understanding of the underlying physical processes.

Since the ability of a theoretical model to predict plasma behavior is a key measure of the model’s accuracy and its ability to advance scientific understanding, it is Physics of Plasmas’ Editorial Policy to encourage the submission of manuscripts whose primary focus is the verification and/or validation of codes and analytical models aimed at predicting plasma behavior.

Saturday, July 19, 2014

FreeFem++ Topology Optimization Scripts

There are lots of open source topology optimization options out there (e.g. the 99 line code, ToPy) that I've written about before. One that I haven't posted about yet is a collection of FreeFem++ scripts by Allaire et al. that illustrate a variety of topology optimization approaches and problems. FreeFem++ is a partial differential equation solver based on the finite element method. FreeFem++ problems are defined in scripts written in a high-level language; FreeFem++ itself is written in C++.

Tuesday, July 1, 2014

UAVs for Film and Profit

Yesterday's AIAA Daily Launch had a great round-up of some recent UAV news:
  • Wall Street Journal (6/27, Nicas, Subscription Publication) reported on the ongoing fight over U.S. unmanned aircraft rules, which is pitting high-tech entrepreneurs against major aerospace and defense companies.
  • Washington Post (6/28, Whitlock) reported that a majority of U.S. military UAV accidents occur abroad, but “at least 49 large drones have crashed during test or training flights near domestic bases since 2001, according to a yearlong Washington Post investigation.”
  • AP (6/28, Jelinek) reported that the Pentagon announced armed UAVs are “flying over Baghdad to protect U.S. troops that recently arrived to assess Iraq’s deteriorating security.”
  • South Florida Sun Sentinel (6/29, Anthony) reported that Boynton Beach is dropping plans to ban drones in order to boost its “fledgling image as a technological hot spot — a place that welcomes engineers and innovation.”
  • South Bend (IN) Tribune (6/29, Sheckler) reported that as UAVs become cheaper and more available to the public, and their popularity grows among hobbyists and entrepreneurs, “they will increasingly raise questions about how to best regulate them, and how to balance concerns about safety and privacy.”
  • Hollywood Reporter (6/27, Giardina) reported that Hollywood movie studios are interested in using UAVs in filming “because they hold the promise of new creative options, real cost savings and possibly even safer sets.” Federal law prohibits the commercial use of UAVs, so filmmakers choose to shoot in countries with lax UAV laws to get the shots needed for their films.
Most interesting are the petitions of the seven aerial photography companies for exemptions for commercial filming operations. As the FAA press release says, the seven Section 333 Exemption Applications for Commercial Operations of Unmanned Aircraft are available on regulations.gov. I think it is interesting that these companies are taking this approach, because some filings by Pirker specifically call out the historical use of remote control aircraft for movies and TV (one of four broad categories of commercial use they cite). The applications list 18 "limitations and conditions" under which commercial operations would take place. They also make this interesting claim: "These limitations provide for at least an equivalent or even higher level of safety to operations under the current regulatory structure because the proposed operations represent a safety enhancement to the already safe movie and television filming operations conducted with conventional aircraft."
To drive that point home they show a couple pictures of a manned helicopter filming as it currently occurs.

Sunday, June 29, 2014

3-in Tack Strip Bracket TopOpt

I found a useful bracket on thingiverse for mounting things on a 3-in tack strip. Of course I thought this was a perfect opportunity for a bit of topology optimization. All of the design files and the stl (rendered above) are available on GitHub. The part is also on thingiverse.

Here's a video showing the progress of the optimization:

Rendered with a wave texture in Cycles to give the layered look it would have from an FDM machine, not quite right, but pretty close:

Thursday, June 26, 2014

HiFiLES v0.1 Release

The folks at the Stanford Aerospace Computing Lab have recently released version 0.1 of HiFiLES. "HiFiLES is a high-order Flux Reconstruction solver for the Euler and Navier Stokes equations, capable of simulating high Reynolds number turbulent flows and transonic/supersonic regimes on unstructured grids."

From the release notes:

High-order numerical methods for flow simulations capture complex phenomena like vortices and separation regions using fewer degrees of freedom than their low-order counterparts. The High Fidelity (HiFi) provided by the schemes, combined with turbulence models for small scales and wall interactions, gives rise to a powerful Large Eddy Simulation (LES) software package. HiFiLES is an open-source, high-order, compressible flow solver for unstructured grids built from the ground up to take full advantage of parallel computing architectures. It is specially well-suited for Graphical Processing Unit (GPU) architectures. HiFiLES is written in C++. The code uses the MPI protocol to run on multiple processors, and CUDA to harness GPU performance.

The main reference for the code right now is this V&V paper [1]. The code uses an Energy Stable Flux Reconstruction (ESFR) scheme; here are a couple of papers on that approach [2, 3].


[1]   López-Morales, M. R., Bull, J., Crabill, J., Economon, T. D., Manosalvas, D., Romero, J., Sheshadri, A., Watkins II, J. E., Williams, D., Palacios, F., et al., “Verification and Validation of HiFiLES: a High-Order LES unstructured solver on multi-GPU platforms.”
[2]   Vincent, P. E., Castonguay, P., and Jameson, A., “A new class of high-order energy stable flux reconstruction schemes,” Journal of Scientific Computing, Vol. 47, No. 1, 2011, pp. 50–72.
[3]   Castonguay, P., Vincent, P. E., and Jameson, A., “A new class of high-order energy stable flux reconstruction schemes for triangular elements,” Journal of Scientific Computing, Vol. 51, No. 1, 2012, pp. 224–256.

Thursday, June 5, 2014

Emerging and Readily Available Technologies and National Security

Here's the description of this report from the NAP site:
Emerging and Readily Available Technologies and National Security is a study on the ethical, legal, and societal issues relating to the research on, development of, and use of rapidly changing technologies with low barriers of entry that have potential military application, such as information technologies, synthetic biology, and nanotechnology. The report also considers the ethical issues associated with robotics and autonomous systems, prosthetics and human enhancement, and cyber weapons. These technologies are characterized by readily available knowledge access, technological advancements that can take place in months instead of years, the blurring of lines between basic research and applied research, and a high uncertainty about how the future trajectories of these technologies will evolve and what applications will be possible.

Wednesday, May 28, 2014

SpaceX SuperDraco Made with DMLS

SpaceX completes SuperDraco qualification testing
From the press release:
The SuperDraco engine chamber is manufactured using state-of-the-art direct metal laser sintering (DMLS), otherwise known as 3D printing. The chamber is regeneratively cooled and printed in Inconel, a high-performance superalloy that offers both high strength and toughness for increased reliability.

“Through 3D printing, robust and high-performing engine parts can be created at a fraction of the cost and time of traditional manufacturing methods,” said Elon Musk, Chief Designer and CEO. “SpaceX is pushing the boundaries of what additive manufacturing can do in the 21st century, ultimately making our vehicles more efficient, reliable and robust than ever before.”

Wednesday, March 5, 2014

SAC-D Hearing on National Security Space Launch Programs

The Senate Appropriations Committee, Defense Subcommittee (SAC-D) held a hearing on 5 March concerning National Security Space Launch Programs. The written testimony and webcast are available from the Senate website.
Chairman Durbin's opening statement emphasized that this hearing had some features that were a bit unusual:
It's been the general practice of the appropriations committee to direct questions about acquisitions programs to the government officials responsible for the use of tax-payer money. Today, we're taking a different approach by going into the details of the EELV program with the two companies most involved in the upcoming competition, as well as two distinguished experts in space acquisitions.

Friday, February 21, 2014

SU2 Community Verification Studies

I think there is quite a bit of excitement and community involvement building around the SU2 code. Other than all of the updates and improvements in the recently released version 3 and SU2_EDU release, I am excited to see the wider community start to do some serious verification studies. The advecting vortex case linked in that discussion thread would be a good one to add to the Test Case collection.

The core SU2 devs have a recent AIAA paper on verification/validation cases that they have successfully run with SU2 and compared favorably to other codes. One thing that is conspicuously absent is grid convergence studies to verify order of accuracy. This is an ideal place for the community to contribute, because you don't need deep knowledge of the source code base to run a grid convergence study or contribute a tutorial or test case (though you do have to be a fairly competent user). Much to their credit, the SU2 team is soliciting just this kind of contribution (my emphasis):
Expanded tutorials: we would like additional tutorials that complement the existing set found in the web-based documentation. The tutorials can either detail and explain the functionality of SU2 (shape optimization, parallel computing, mesh deformation, etc.) or demonstrate general CFD knowledge (highlighting good meshes vs. bad meshes, the importance of CFL number, etc.). Tutorials are intended to be tools for teaching and learning, and they should follow the same style as the existing tutorials. They must provide any mesh or config files that are necessary for their completion. New or unique verification and validation cases would be of particular interest here.

Exciting times in open source CFD!
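For anyone tempted to contribute such a case, the heart of a grid convergence study is a one-line calculation of the observed order of accuracy from results on three systematically refined grids. Here's a minimal sketch; the drag-coefficient values are made up purely for illustration, not from any SU2 run:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p from a scalar result computed on three
    grids with a constant refinement ratio r (coarse -> medium -> fine)."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine), r)

# Hypothetical drag coefficients from three grids with refinement ratio 2;
# a second-order scheme in its asymptotic range should give p near 2.
p = observed_order(0.0288, 0.0252, 0.0243, 2.0)
```

With p in hand, Richardson extrapolation and a grid convergence index follow directly, and the whole procedure can be wrapped around any solver you can drive from a script.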

Thursday, February 6, 2014

TPS Sizing with Complex Step Method

TPS Sizing Optimization Using Complex Variable Differentiation Sensitivity
I stumbled upon an interesting old presentation that shows a neat application of the complex step method of calculating numerical derivatives for use in optimizing thermal protection system (TPS) thickness. The great thing about the method is that it is minimally intrusive.
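The method itself is almost trivially simple to implement, which is why it is so minimally intrusive: you just call the existing analysis code with a complex perturbation. Here's a sketch in Python using a standard test function from the complex-step literature (not anything from the TPS presentation):

```python
import numpy as np

def complex_step_derivative(f, x, h=1.0e-30):
    """df/dx at x via the complex-step method: Im[f(x + ih)] / h.

    Unlike finite differences there is no subtraction of nearly equal
    numbers, so h can be made absurdly small with no cancellation error.
    """
    return f(x + 1j * h).imag / h

# Standard test function: f(x) = exp(x) / sqrt(sin(x)^3 + cos(x)^3).
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
d = complex_step_derivative(f, 1.5)
```

The only requirement on the analysis code is that it propagate complex arithmetic cleanly, which is exactly what made the method attractive for retrofitting onto an existing TPS sizing tool.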

Wednesday, January 15, 2014

SU2 v3 Released

The folks at Stanford Aerospace Design Lab have released a new major version of Stanford University Unstructured (SU2). Here's the announcement:
Dear Colleague,

Since its introduction in January 2012, SU2, The Open-Source CFD Code, has been downloaded thousands of times by users and developers in academia, government, and industry, including many leading companies and universities. As an open-source project, the growth of active user and developer communities is a crucial goal for SU2. Given the incredibly positive response, we are pleased to announce a new version of the code with major improvements and an entirely new package for educational purposes.

This release marks the third major version of the SU2 open-source code (su2.stanford.edu). SU2 is a collection of C++ software tools for performing Partial Differential Equation (PDE) analysis and for solving PDE-constrained optimization problems, with special emphasis on Computational Fluid Dynamics (CFD) and aerodynamic shape design.

We'd like to ask you to please distribute this announcement with the attached flyer to any colleagues and students in your department that might be interested.

Version 3.0 has a number of major additional capabilities:

• Adjoint-based RANS shape optimization.
• New unsteady analysis and design optimization capability.
• Upgrades to the underlying parallelization and file I/O.
• Significant improvements to the accuracy, performance, and robustness of the software suite.

Alongside Version 3.0 of SU2, we are introducing SU2 Educational (SU2_EDU): a new, educational version of the Euler/Navier-Stokes/RANS solver from the SU2 suite. The simplified structure of SU2_EDU makes it suitable for students and beginners in CFD. By focusing on a handful of key numerical methods and capabilities, SU2_EDU is ideal for use in CFD courses, for independent studies, or just to learn about a new field!

SU2_EDU is also geared toward anyone interested in high-fidelity airfoil analysis. The initial version of SU2_EDU is an intuitive, easy-to-use tool for computing the performance of airfoils in inviscid, laminar, or turbulent flow, including non-linear effects in the transonic regime, and it requires only the airfoil coordinates.

Finally, we would like to thank the open-source community for their interest, help, and support.

The SU2 team

One of the most interesting parts to me is the new SU2_EDU version. I've downloaded the code, but haven't had a chance to browse it or run any examples yet. I think this is a neat idea that will hopefully lower the barriers to entry that George pointed out previously.

Tuesday, January 14, 2014

CFD Vision 2030: Discretizations, Solvers, and Numerics

There are lots of interesting parts to the study that Phil Roe mentioned in his Colorful Fluid Dynamics lecture. Continuing the theme that algorithm improvements are just as important as hardware improvements, here are some of the areas concerning discretizations, solvers, and numerics (p. 24) that the report claims will lower the need for high levels of human expertise and intervention in running and understanding CFD analysis:
  1. Incomplete or inconsistent convergence behavior: "There are many possible reasons for failure, ranging from poor grid quality to the inability of a single algorithm to handle singularities such as strong shocks, under-resolved features, or stiff chemically reacting terms. What is required is an automated capability that delivers hands-off solid convergence under all reasonable anticipated flow conditions with a high tolerance to mesh irregularities and small scale unsteadiness."
  2. Algorithm efficiency and suitability for emerging HPC: "In order to improve simulation capability and to effectively leverage new HPC hardware, foundational mathematical research will be required in highly scalable linear and non-linear solvers not only for commonly used discretizations but also for alternative discretizations, such as higher-order techniques. Beyond potential advantages in improved accuracy per degree of freedom, higher-order methods may more effectively utilize new HPC hardware through increased levels of computation per degree of freedom."

Monday, January 13, 2014

Flight Demo Program Lessons Learned

In the BAA for the DARPA XS-1 program there is a presentation by Jess Sponable about lessons learned from previous flight demonstration programs. It takes a certain level of audacity to quote Machiavelli in a presentation on program management, but the quote is pretty applicable to any new system development (though I agree with Strauss: it must be remembered that Machiavelli teaches wickedness).
It must be remembered that there is nothing more difficult to plan, more doubtful of success, nor more dangerous to manage than creation of a new system. For the initiator has the enmity of all who would profit by the preservation of the old institutions, and merely lukewarm defenders in those who would gain by the new ones.
The Prince, Machiavelli, 1513

Here are the rules compiled based on previous flight demonstration program experience:
  1. Agree to clearly defined program objectives in advance
  2. Single manager under one agency
  3. Small government and contractor program offices
  4. Build competitive hardware, not paper
  5. Focus on key demonstrations, not everything
  6. Streamlined documentation and reviews
  7. Contractor integrates and tests prototype
  8. Develop minimum realistic funding profiles
  9. Track cost/schedule in near real time
  10. Mutual trust essential
The two that jump out at me are 'single manager under one agency' and 'contractor integrates and tests prototype' (which is really about constraining the size and cost of the test program). Programs like the National Aerospace Plane or Project Timberwind come to mind as falling prey to violating these two rules. Both programs expended a great deal of effort coordinating and reconciling the often conflicting interests of multiple federal agencies. Even in the happy event that the interests of the cooperating agencies perfectly align, multi-agency participation almost always ensures more bureaucracy. Those programs also spent, or planned to spend, enormous resources on ground test and specialized supporting infrastructure. In fact, the ballooning cost of the ground test facility for Timberwind was a significant contributing factor in its cancellation.

Friday, January 10, 2014

RAND: no life-cycle cost savings from joint aircraft

RAND has a recent report out examining the historical performance of joint (i.e. multi-service) aircraft development programs. Their key findings are:
  • Joint aircraft programs have not historically saved overall life cycle cost. On average, such programs experienced substantially higher cost growth in acquisition (research, development, test, evaluation, and procurement) than single-service programs. The potential savings in joint aircraft acquisition and operations and support compared with equivalent single-service programs is too small to offset the additional average cost growth that joint aircraft programs experience in the acquisition phase.

  • The difficulty of reconciling diverse service requirements in a common design is a major factor in joint cost outcomes. Diverse service requirements and operating environments work against commonality, which is the source of potential cost savings, and are a major contributor to the joint acquisition cost-growth premium identified in the cost analysis.

  • Historical analysis suggests joint programs are associated with contraction of the industrial base and a decline in potential future industry competition, as well as increased strategic and operational risk due to dependency across the services on a single type of weapon system which may experience unanticipated safety, maintenance, or performance issues with no alternative readily available.
Here's the Abstract:
In the past 50 years, the U.S. Department of Defense has pursued numerous joint aircraft programs, the largest and most recent of which is the F-35 Joint Strike Fighter (JSF). Joint aircraft programs are thought to reduce Life Cycle Cost (LCC) by eliminating duplicate research, development, test, and evaluation efforts and by realizing economies of scale in procurement, operations, and support. But the need to accommodate different service requirements in a single design or common design family can lead to greater program complexity, increased technical risk, and common functionality or increased weight in excess of that needed for some variants, potentially leading to higher overall cost, despite these efficiencies. To help Air Force leaders (and acquisition decisionmakers in general) select an appropriate acquisition strategy for future combat aircraft, this report analyzes the costs and savings of joint aircraft acquisition programs. The project team examined whether historical joint aircraft programs have saved LCC compared with single-service programs. In addition, the project team assessed whether JSF is on track to achieving the joint savings originally anticipated at the beginning of full-scale development. Also examined were the implications of joint fighter programs for the health of the industrial base and for operational and strategic risk.
JSF is now expected to be more expensive than three F-22-like single-service programs.

Thursday, January 9, 2014

Phil Roe: Colorful Fluid Dynamics

Echoes of Tufte in one of his introductory statements: "It's full of noise, it's full of color, it's spectacular, it's intended to blow your mind away, it's intended to disarm criticism." And further on the dangers of "colorful fluid dynamics":
These days it is common to see a complicated flow field, predicted with all the right general features and displayed in glorious detail that looks like the real thing. Results viewed in this way take on an air of authority out of proportion to their accuracy.
--Doug McLean
This lecture is sponsored by MConneX.

Roe wraps up the lecture by referencing a NASA sponsored study, CFD Vision 2030, that addresses whether CFD will be able to reliably predict turbulent separated flows by 2030. The conclusion is that advances in hardware capability alone will not be enough, but that significant improvements in numerical algorithms are required.

Wednesday, January 8, 2014

Algorithmic Improvements: just as important as Moore's Law

There were a couple interesting comments on slashdot recently about future computing technologies that might allow us to enjoy the continued price/performance improvements in computing and avoid the end of Moore's Law. Here's one that highlights some promising emerging technologies (my emphasis):
I see many emerging technologies that promise further great progress in computing. Here are some of them. I wish some industry people here could post some updates about their way to the market. They may not literally prolong Moore's Law in terms of the number of transistors, but they promise great performance gains, which is what really matters.

3D chips. As materials science and manufacturing precision advances, we will soon have multi-layered (starting at a few layers that Samsung already has, but up to 1000s) or even fully 3D chips with efficient heat dissipation. This would put the components closer together and streamline the close-range interconnects. Also, this increases "computation per rack unit volume", simplifying some space-related aspects of scaling.

Memristors. HP is ready to produce the first memristor chips but delays that for business reasons (how sad is that!) Others are also preparing products. Memristor technology enables a new approach to computing, combining memory and computation in one place. They are also quite fast (competitive with the current RAM) and energy-efficient, which means easier cooling and possible 3D layout.

Photonics. Optical buses are finding their ways into computers, and network hardware manufacturers are looking for ways to perform some basic switching directly with light. Some day these two trends may converge to produce an optical computer chip that would be free from the limitations of electric resistance/heat, EM interference, and could thus operate at a higher clock speed. Would be more energy efficient, too.

Spintronics. Probably further in the future, but potentially very high-density and low-power technology actively developed by IBM, Hynix and a bunch of others. This one would push our computation density and power efficiency limits to another level, as it allows performing some computation using magnetic fields, without electrons actually moving in electrical current (excuse me for my layman understanding).

Quantum computing. This could qualitatively speed up whole classes of tasks, potentially bringing AI and simulation applications to new levels of performance. The only commercial offer so far is Dwave, and it's not a classical QC, but so many labs are working on that, the results are bound to come soon.
3D chips, memristors, photonics, spintronics, QC

I think Moore's Law is a steamroller. But, like the genomics sequencing technology highlighted in that post on Nuit Blanche, there are improvements coming just as fast as, or faster than, Moore's Law. The improvements from better algorithms can yield exponential speed-ups too. Here's a graph (from this report) depicting the orders-of-magnitude improvement in linear solver performance:
Couple these software improvements with continually improving hardware and things get pretty exciting. I'm happy to live in these interesting times!
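As a toy illustration of that kind of algorithmic gain, here's a sketch (my own example, not from the report) comparing iteration counts of plain Jacobi iteration against conjugate gradient on a 1D Poisson problem; the gap only widens as the grid is refined:

```python
import numpy as np

def solve_poisson_1d(n, method, tol=1e-8, max_iter=200000):
    """Solve -u'' = 1 on (0,1) with u(0) = u(1) = 0 on n interior points.
    Returns the number of iterations to reach a relative residual of tol."""
    h = 1.0 / (n + 1)
    b = np.full(n, h * h)  # right-hand side scaled by h^2

    def A(v):  # matrix-free tridiagonal [-1, 2, -1] operator
        out = 2.0 * v
        out[:-1] -= v[1:]
        out[1:] -= v[:-1]
        return out

    x = np.zeros(n)
    if method == "jacobi":
        for k in range(1, max_iter + 1):
            left = np.concatenate(([0.0], x[:-1]))
            right = np.concatenate((x[1:], [0.0]))
            x = (b + left + right) / 2.0  # x_i = (b_i + x_{i-1} + x_{i+1}) / 2
            if np.linalg.norm(b - A(x)) < tol * np.linalg.norm(b):
                return k
        return max_iter
    # conjugate gradient
    r = b - A(x)
    p = r.copy()
    rs = r @ r
    for k in range(1, max_iter + 1):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            return k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return max_iter

jacobi_iters = solve_poisson_1d(100, "jacobi")
cg_iters = solve_poisson_1d(100, "cg")
```

On 100 points Jacobi needs tens of thousands of sweeps while CG finishes in on the order of the grid size; swapping in a multigrid cycle would cut that to a handful of iterations, which is exactly the kind of algorithmic improvement the graph is showing.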