Monday, November 13, 2017

Deep Learning to Accelerate Computational Fluid Dynamics

Lat-Net: Compressing Lattice Boltzmann Flow Simulations using Deep Neural Networks
I posted about a surprising application of deep learning to accelerate topology optimization. What I like about that approach is that it's a strategy that could be applied to accelerate many of the different solvers we use to simulate all sorts of continuum mechanics governed by partial differential equations (i.e. computational fluid dynamics, structural mechanics, electrodynamics, etc.). With a bit of help from Google, I found a neat paper and project on GitHub doing exactly that for a Lattice Boltzmann fluid solver.
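The gist, as I understand it: a convolutional autoencoder compresses the lattice state onto a coarser grid, a learned mapping advances the dynamics in that compressed space, and a decoder recovers the full flow field only when it's needed. Here's a toy sketch of that structure; this is my own illustration in PyTorch, not the paper's actual (TensorFlow) architecture, and the layer sizes are arbitrary:

    import torch
    import torch.nn as nn

    class LatentLBM(nn.Module):
        """Toy Lat-Net-style model: compress, step in latent space, decode."""
        def __init__(self, q=9, hidden=32):
            super().__init__()
            # q = lattice velocities per cell (D2Q9 for a 2D simulation)
            self.encode = nn.Sequential(
                nn.Conv2d(q, hidden, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU())
            # residual convolution standing in for one compressed time step
            self.step = nn.Conv2d(hidden, hidden, 3, padding=1)
            self.decode = nn.Sequential(
                nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(hidden, q, 4, stride=2, padding=1))

        def forward(self, f, steps=1):
            z = self.encode(f)                    # compress the lattice state
            for _ in range(steps):
                z = z + torch.relu(self.step(z))  # advance in latent space
            return self.decode(z)                 # recover the full field

    # f = torch.randn(1, 9, 64, 64)   # a batch of D2Q9 states on a 64x64 grid
    # f_10 = LatentLBM()(f, steps=10)

The payoff is that many time steps can be taken on the small latent grid before paying for a decode back to full resolution.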

Friday, November 10, 2017

Deep Learning to Accelerate Topology Optimization

Topology Optimization Data Set for CNN Training
Neural networks for topology optimization is an interesting paper I read on arXiv that illustrates how to speed up topology optimization calculations by using a deep convolutional neural network. The data sets for training the network are generated in ToPy, which is an Open Source topology optimization tool.
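The basic idea is to run only the first few iterations of the usual SIMP optimization, then have a convolutional network jump straight to a prediction of the converged layout. Here's a toy sketch of that mapping; this is my own illustration, not the authors' architecture (they use a U-Net-style model), and the two-channel input (the intermediate density field plus its last update) follows my reading of the paper:

    import torch
    import torch.nn as nn

    class TopOptNet(nn.Module):
        """Map an intermediate SIMP density field to the converged 0/1 layout."""
        def __init__(self, hidden=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, 1, 3, padding=1))

        def forward(self, x):
            # x: (batch, 2, H, W) -- density after a few SIMP iterations,
            # stacked with the change from the most recent iteration
            return torch.sigmoid(self.net(x))  # per-element material probability

Trained pixel-wise against fully converged ToPy solutions, a network like this stands in for the bulk of the optimization iterations.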

Saturday, March 25, 2017

Innovation, Entropy and Exoplanets

I enjoy Shipulski on Design for the short articles on innovation. They are generally not technical at all. I like to think of most of the posts as innovation poetry that nudges your thinking along the right lines. This recent post has a huge, interesting technical iceberg riding under the surface though.
If you run an experiment where you are 100% sure of the outcome, your learning is zero. You already knew how it would go, so there was no need to run the experiment. The least costly experiment is the one you didn’t have to run, so don’t run experiments when you know how they’ll turn out. If you run an experiment where you are 0% sure of the outcome, your learning is zero. These experiments are like buying a lottery ticket – you learn the number you chose didn’t win, but you learned nothing about how to choose next week’s number. You’re down a dollar, but no smarter.

The learning ratio is maximized when energy is minimized (the simplest experiment is run) and the probability that the experimental results match your hypothesis (your expectation) is 50%. That way, half of the experiments confirm your hypothesis and the other half tell you why your hypothesis was off track.
Maximize The Learning Ratio
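That 50% sweet spot is the entropy argument from information theory: for a yes/no experiment, the expected information gained is the binary entropy H(p) = -p log2(p) - (1-p) log2(1-p), which peaks at exactly one bit when p = 0.5. A quick check in Python (my own illustration, not from the post):

    import numpy as np

    p = np.linspace(0.01, 0.99, 99)
    H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)  # expected information, bits
    print(p[np.argmax(H)])  # -> 0.5: the coin-flip experiment teaches the most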

Tuesday, March 7, 2017

NASA Open Source Software 2017 Catalog


NASA has released its 2017-2018 Software Catalog under its Technology Transfer Program. A PDF version of the catalog is available, or you can browse by category. The NASA open code repository is already on my list of Open Source Aeronautical Engineering tools. Of course, many of the codes included in that list from PDAS are legacy NASA codes that were distributed on various media in the days before the internet.

Saturday, December 17, 2016

Hybrid Parallelism Approaches for CFD

This previous post, Plenty of Room at Exascale, focuses on one specific commercial approach to scaling CFD to large problems on heterogeneous hardware (CPU & GPU) clusters. Here are some more references I found to be interesting reading on this sort of approach.

Strategies

Recent progress and challenges in exploiting graphics processors in computational fluid dynamics provides some general strategies, based on its review of the literature, for using multiple levels of parallelism across GPUs, CPU cores, and cluster nodes:
  • Arrange global memory to coalesce read/write requests, which can improve performance by an order of magnitude (theoretically, up to 32 times: the number of threads in a warp).
  • Use shared memory for global reduction operations (e.g., summing residual values, finding maximum values) so that only one value per block needs to be returned; a minimal sketch of this pattern follows the list.
  • Use asynchronous memory transfer, as shown by Phillips et al. and DeLeon et al. when parallelizing solvers across multiple GPUs, to limit the idle time of either the CPU or GPU.
  • Minimize slow CPU-GPU communication during a simulation by performing all possible calculations on the GPU.
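To make the shared-memory reduction concrete, here is a minimal sketch using Numba's CUDA Python bindings; this is my own illustration of the pattern, not code from any of the papers above. Each block reduces its slice of the residual array in fast shared memory and writes back a single partial sum, so only one value per block crosses back to global memory:

    import numpy as np
    from numba import cuda, float64

    TPB = 256  # threads per block (a power of two, for the halving loop)

    @cuda.jit
    def block_residual_sum(x, partial):
        # one shared-memory slot per thread in the block
        sdata = cuda.shared.array(TPB, dtype=float64)
        tid = cuda.threadIdx.x
        i = cuda.blockIdx.x * cuda.blockDim.x + tid
        sdata[tid] = x[i] if i < x.size else 0.0
        cuda.syncthreads()
        # tree reduction within the block: halve the active threads each pass
        stride = TPB // 2
        while stride > 0:
            if tid < stride:
                sdata[tid] += sdata[tid + stride]
            cuda.syncthreads()
            stride //= 2
        if tid == 0:
            partial[cuda.blockIdx.x] = sdata[0]  # one value per block

    x = np.random.rand(1_000_000)
    blocks = (x.size + TPB - 1) // TPB
    partial = np.zeros(blocks)
    block_residual_sum[blocks, TPB](x, partial)
    total = partial.sum()  # final (small) reduction on the CPU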

Wednesday, July 6, 2016

Exciting 3D Print Service Developments

It has never been easier to go from a design in your head to parts in your hand. The barriers to entry are low on the front end: there are all sorts of free and open source drawing, mesh editing, modeling, and CAD applications. On the fabrication end of things, services like Shapeways and i.materialise continue to improve their delivery times, material options, and prices.

One of the recent developments that deserves some attention is in metal 3D printing. i.materialise has offered parts in DMLS titanium for a while, but they've been pretty pricey. They have now significantly reduced the prices on titanium parts, and are offering a trial with aluminum. Not to be left out, Shapeways has graduated SLM aluminum from its pilot program.

It's great to see such thriving competition in this space. I'm working on some models specifically for this metal powder-bed fusion technology. What will you print?

Wednesday, December 23, 2015

Hamiltonian Monte Carlo with Stan

Stan is a library for doing Bayesian statistical inference. One of the really cool capabilities it has is the Hamiltonian Monte Carlo (HMC) method, rather than the more common random-walk Metropolis and Gibbs sampling approaches. There are interfaces for using the library from Python, R, or the command line:
Stan is based on a probabilistic programming language for specifying models in terms of probability distributions. Stan’s modeling language is portable across all interfaces (PyStan, RStan, CmdStan).
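As a taste of what that looks like from Python, here is a minimal sketch using the PyStan interface (API as of PyStan 2.x); the beta-Bernoulli model is my own toy example, not one from the Stan documentation:

    import pystan

    model_code = """
    data {
      int<lower=0> N;
      int<lower=0, upper=1> y[N];
    }
    parameters {
      real<lower=0, upper=1> theta;
    }
    model {
      theta ~ beta(1, 1);     // uniform prior on the success probability
      y ~ bernoulli(theta);   // likelihood of the observed 0/1 outcomes
    }
    """

    model = pystan.StanModel(model_code=model_code)  # compiles the model
    fit = model.sampling(data={"N": 10, "y": [0, 1, 1, 0, 1, 1, 1, 0, 1, 1]},
                         iter=2000, chains=4)        # NUTS/HMC by default
    print(fit)  # posterior summary for theta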

I found this video from the documentation page a very understandable description of the Hamiltonian Monte Carlo approach used by Stan. It's neat to see how using deterministic dynamics can improve on random walks. I'm reminded of Jaynes: "It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought."
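The deterministic part is the leapfrog integration of Hamilton's equations; randomness only enters when the momentum is resampled and in the final Metropolis accept/reject. Here is a minimal sketch of one HMC update following Radford Neal's standard formulation; this is my own illustration, not Stan's actual implementation (Stan's default sampler is the adaptive NUTS variant):

    import numpy as np

    rng = np.random.default_rng()

    def hmc_step(q, U, grad_U, eps=0.1, L=20):
        """One HMC update: leapfrog trajectory plus Metropolis accept/reject."""
        p0 = rng.standard_normal(q.shape)   # resample momentum
        q_new, p = q.copy(), p0.copy()
        p -= 0.5 * eps * grad_U(q_new)      # half step for momentum
        for i in range(L):
            q_new += eps * p                # full step for position
            if i < L - 1:
                p -= eps * grad_U(q_new)    # full step for momentum
        p -= 0.5 * eps * grad_U(q_new)      # final half step
        # accept or reject based on the change in total energy H = U + K
        dH = (U(q_new) + 0.5 * p @ p) - (U(q) + 0.5 * p0 @ p0)
        return q_new if np.log(rng.random()) < -dH else q

    # sample a standard 2D normal: U(q) = 0.5*q.q, so grad_U(q) = q
    q = np.zeros(2)
    for _ in range(1000):
        q = hmc_step(q, lambda v: 0.5 * v @ v, lambda v: v)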