Saturday, March 25, 2017

Innovation, Entropy and Exoplanets

I enjoy Shipulski on Design for its short articles on innovation. They are generally not technical at all. I like to think of most of the posts as innovation poetry that nudges your thinking along the right lines of effort. This recent post, though, has a huge, interesting technical iceberg riding under the surface.
If you run an experiment where you are 100% sure of the outcome, your learning is zero. You already knew how it would go, so there was no need to run the experiment. The least costly experiment is the one you didn’t have to run, so don’t run experiments when you know how they’ll turn out. If you run an experiment where you are 0% sure of the outcome, your learning is zero. These experiments are like buying a lottery ticket – you learn the number you chose didn’t win, but you learned nothing about how to choose next week’s number. You’re down a dollar, but no smarter.

The learning ratio is maximized when energy is minimized (the simplest experiment is run) and the probability that the experimental results match your hypothesis (expectation) is 50%. That way, half of the experiments confirm your hypothesis and the other half tell you why your hypothesis was off track.
Maximize The Learning Ratio
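
One way to make that 50% intuition concrete is Shannon entropy: the expected information from a yes/no experiment is greatest when the two outcomes are equally likely. Here's a little sketch of that idea (my own illustration, not something from the original post):

```python
import numpy as np

# Expected information gained from a yes/no experiment to which you assign
# probability p of confirming your hypothesis: the binary entropy H(p) in bits.
def binary_entropy(p):
    p = np.clip(p, 1e-12, 1.0 - 1e-12)  # avoid log(0) at the endpoints
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"P(hypothesis confirmed) = {p:.2f} -> "
          f"expected learning = {binary_entropy(p):.3f} bits")

# The expected learning peaks at a full 1 bit when p = 0.5 and falls to
# (essentially) zero when you are already certain of the outcome either way.
```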

Tuesday, March 7, 2017

NASA Open Source Software 2017 Catalog


NASA has released its 2017-2018 Software Catalog under its Technology Transfer Program. A PDF version of the catalog is available, or you can browse by category. The NASA open code repository is already on my list of Open Source Aeronautical Engineering tools. Of course, many of the codes included in that list from PDAS are legacy NASA codes that were distributed on various media in the days before the internet.

Saturday, December 17, 2016

Hybrid Parallelism Approaches for CFD

This previous post, Plenty of Room at Exascale, focuses on one specific commercial approach to scaling CFD to large problems on heterogeneous hardware (CPU & GPU) clusters. Here are some more references I found to be interesting reading on this sort of approach.

Strategies

Recent progress and challenges in exploiting graphics processors in computational fluid dynamics is a review of the literature that provides some general strategies for using multiple levels of parallelism across GPUs, CPU cores, and cluster nodes (a minimal sketch of the reduction pattern appears after the list):
  • Global memory should be arranged to coalesce read/write requests, which can improve performance by an order of magnitude (theoretically, up to 32 times: the number of threads in a warp).
  • Shared memory should be used for global reduction operations (e.g., summing up residual values, finding maximum values) such that only one value per block needs to be returned.
  • Use asynchronous memory transfer, as shown by Phillips et al. and DeLeon et al. when parallelizing solvers across multiple GPUs, to limit the idle time of either the CPU or GPU.
  • Minimize slow CPU-GPU communication during a simulation by performing all possible calculations on the GPU.
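
The shared-memory reduction in the second bullet is the pattern I see tripping people up most often, so here's a minimal sketch of it using Numba's CUDA interface. Numba, the kernel name, and the block size are my choices for illustration, not anything taken from the cited review:

```python
import numpy as np
from numba import cuda, float32

THREADS_PER_BLOCK = 256

@cuda.jit
def block_sum_residual(residual, partial_sums):
    # Shared-memory scratch space, one slot per thread in the block.
    scratch = cuda.shared.array(THREADS_PER_BLOCK, float32)
    tid = cuda.threadIdx.x
    gid = cuda.grid(1)

    # Coalesced read: consecutive threads touch consecutive elements.
    scratch[tid] = residual[gid] if gid < residual.size else 0.0
    cuda.syncthreads()

    # Tree reduction within the block: only one value per block is
    # written back to slow global memory (strategy 2 in the list above).
    stride = THREADS_PER_BLOCK // 2
    while stride > 0:
        if tid < stride:
            scratch[tid] += scratch[tid + stride]
        cuda.syncthreads()
        stride //= 2

    if tid == 0:
        partial_sums[cuda.blockIdx.x] = scratch[0]

# Example usage: sum a residual field, finishing the small per-block
# array on the CPU (or with a second kernel launch).
residual = np.random.rand(1_000_000).astype(np.float32)
blocks = (residual.size + THREADS_PER_BLOCK - 1) // THREADS_PER_BLOCK
partial = cuda.device_array(blocks, dtype=np.float32)
block_sum_residual[blocks, THREADS_PER_BLOCK](cuda.to_device(residual), partial)
total = partial.copy_to_host().sum()
```

The same idea carries over directly to CUDA C or OpenCL: each block cooperates through fast shared memory so that only one partial result per block ever touches global memory.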

Wednesday, July 6, 2016

Exciting 3D Print Service Developments

It has never been easier to go from a design in your head to parts in your hand. The barriers to entry are low on the front end: there are all sorts of free and open source drawing, mesh editing, modeling, and CAD applications. On the fabrication end of things, services like Shapeways and i.materialise continue to improve their delivery times, material options, and prices.

One of the recent developments that deserves some attention is in metal 3D printing. i.materialise has offered parts in DMLS titanium for a while, but they've been pretty pricey. They have now significantly reduced the prices on Ti parts and are offering a trial with aluminum. Not to be left out, Shapeways has graduated SLM aluminum from its pilot program.

It's great to see such thriving competition in this space. I'm working on some models specifically for this metal powder-bed fusion technology. What will you print?

Wednesday, December 23, 2015

Hamiltonian Monte Carlo with Stan

Stan is a library for doing Bayesian statistical inference. One of its really cool capabilities is sampling with the Hamiltonian Monte Carlo (HMC) method rather than the more common random-walk Metropolis or Gibbs sampling approaches. There are interfaces for using the library from Python, R, or the command line:
Stan is based on a probabilistic programming language for specifying models in terms of probability distributions. Stan's modeling language is portable across all interfaces (PyStan, RStan, CmdStan).

I found this video from the documentation page a very understandable description of the Hamiltonian Monte Carlo approach used by Stan. It's neat to see how using deterministic dynamics can improve on random walks. I'm reminded of Jaynes: "It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought."
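
For a feel of how little glue code is needed, here's a minimal PyStan sketch for a toy normal model; the model and data are made up, and I'm assuming the PyStan 2.x API (pystan.StanModel, sampling):

```python
import numpy as np
import pystan  # PyStan 2.x; RStan and CmdStan accept the same model code

# Toy model: infer the mean and scale of a normal distribution.
model_code = """
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 10);
  sigma ~ cauchy(0, 5);
  y ~ normal(mu, sigma);
}
"""

y = np.random.normal(loc=1.5, scale=0.7, size=50)   # fake observations
sm = pystan.StanModel(model_code=model_code)          # compile the model
fit = sm.sampling(data={"N": len(y), "y": y},         # sample the posterior
                  iter=2000, chains=4)
print(fit)                                            # summaries for mu, sigma
```

By default Stan samples with NUTS, an adaptive variant of HMC, so you get the benefit of the deterministic dynamics described in the video without hand-tuning step sizes and trajectory lengths.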

Wednesday, September 23, 2015

A One-Equation Local Correlation-Based Transition Model

This article is available for free download until 15 Oct 2015, h/t ANSYS blog.

Here's the Abstract:
A model for the prediction of laminar-turbulent transition processes was formulated. It is based on the LCTM (‘Local Correlation-based Transition Modelling’) concept, where experimental correlations are being integrated into standard convection-diffusion transport equations using local variables. The starting point for the model was the γ-Reθ model already widely used in aerodynamics and turbomachinery CFD applications. Some of the deficiencies of the γ-Reθ model, like the lack of Galilean invariance were removed. Furthermore, the Reθ equation was avoided and the correlations for transition onset prediction have been significantly simplified. The model has been calibrated against a wide range of Falkner-Skan flows and has been applied to a variety of test cases.
Keywords: Laminar-turbulent transition, Correlation, Local variables
Authors: Florian R. Menter, Pavel E. Smirnov, Tao Liu, Ravikanth Avancha

Transition location and subsequent turbulence modeling remain the largest sources of uncertainty for most engineering flows. Even for chemically reacting flows, the source of uncertainty is often less the parameters and reactions for the chemistry and more the uncertainty in the fluid state driven by shortcomings in turbulence and transition modeling.

Sunday, March 22, 2015

Reliability Growth: Enhancing Defense System Reliability


This report (pdf) from the National Academies on reliability growth is interesting. There's a lot of good stuff on design for reliability, physics of failure, highly accelerated life testing, accelerated life testing, and reliability growth modeling. Especially useful is the discussion of the suitability of the assumptions underlying some of the different reliability growth models.

The authors provide a thorough critique of MIL-HDBK-217, Reliability Prediction of Electronic Equipment, in Appendix D, which is probably worth the price of admission by itself. If you're concerned with product reliability, you should read this report (it has lots of good pointers to the literature).
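
To make "reliability growth model" concrete, here's a minimal sketch of the Crow-AMSAA (power-law NHPP) model that much of this literature builds on; the failure times are made up and the implementation is my own, so treat it as an illustration of the assumptions rather than a vetted tool:

```python
import numpy as np

# Crow-AMSAA (power-law NHPP) reliability growth model.
# Assumption baked into the model: failures during development testing follow
# a nonhomogeneous Poisson process with intensity rho(t) = lam*beta*t**(beta-1),
# so the expected cumulative failure count is E[N(t)] = lam * t**beta.
# beta < 1 means the failure intensity is decreasing, i.e. reliability is growing.

def crow_amsaa_mle(failure_times, total_test_time):
    """Maximum-likelihood estimates for time-truncated test data."""
    t = np.asarray(failure_times, dtype=float)
    n = t.size
    beta = n / np.sum(np.log(total_test_time / t))
    lam = n / total_test_time**beta
    return lam, beta

# Made-up cumulative failure times (hours) from a development test program.
failures = [40.0, 95.0, 180.0, 400.0, 750.0, 1400.0]
T = 2000.0  # test ended at 2000 hours (time truncated)

lam, beta = crow_amsaa_mle(failures, T)
mtbf_now = 1.0 / (lam * beta * T**(beta - 1))  # instantaneous MTBF at end of test
print(f"beta = {beta:.2f} (< 1 indicates growth), "
      f"instantaneous MTBF ~ {mtbf_now:.0f} h")
```

The power-law intensity is itself an assumption of the kind the report asks you to scrutinize: if your corrective actions don't actually produce that smooth pattern of improvement, the extrapolated MTBF can be badly misleading.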