## Sunday, August 12, 2018

### Monoprice Mini Delta 3D Printer

I recently bought my first personal 3D printer. I have been involved in DIY and hobbyist 3D printing for many years through the Dayton Diode hackerspace I co-founded, but this Monoprice Mini Delta is the first printer of my very own. The price point is amazing (less than $160!), and things just work right out of the box. What a hugely different experience from building that first Printrbot kit (RIP, Printrbot). The Printrbot story is actually a piece of the Innovator's Dilemma playing out in this market niche: Printrbot disrupted a higher-cost competitor (MakerBot), which retreated up-market towards higher-end machines; Printrbot was then in turn disrupted by foreign suppliers like Monoprice, and retreated unsuccessfully up-market itself towards $1000 machines. Who will disrupt Monoprice? I can't wait for my voice-controlled, artificially intelligent $20 printer... In the meantime, this post is about my experience with this little desktop FDM machine you can buy today.

## Wednesday, February 7, 2018

Some interesting aerodynamics & control details on the re-design required for the Falcon Heavy at 15:20 or so. Great launch!

## Sunday, December 17, 2017

### Topology Optimization with ToPy: Pure Bending

*From The Design of Michell Optimal Structures*
Here is an interesting paper from 1962 on the design of optimal structures: The Design of Michell Optimal Structures. One of the examples is for pure bending as shown in the figure above. I thought this would be a neat load-case to try in ToPy.
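The pure-bending load case amounts to applying a force couple: equal and opposite forces with zero net force but a non-zero net moment. As a sketch of how that load might be set up on a 2D grid (this uses the node numbering convention of the classic 99-line topology optimization code, not ToPy's actual input format, and the grid size is made up):

```python
import numpy as np

# Hypothetical 2D grid: (nelx+1) x (nely+1) nodes, 2 DOFs per node (x, y),
# numbered column by column as in the classic 99-line topology optimization code.
nelx, nely = 60, 20

def dof(ix, iy, direction):
    """Global DOF index of node (ix, iy); direction 0 = x, 1 = y."""
    return 2 * (ix * (nely + 1) + iy) + direction

# Pure bending: a force couple on the right edge -- equal and opposite
# horizontal forces at the top and bottom corners, so the net force is
# zero but the net moment is not.
ndof = 2 * (nelx + 1) * (nely + 1)
F = np.zeros(ndof)
F[dof(nelx, 0, 0)] = +1.0      # top-right corner, +x
F[dof(nelx, nely, 0)] = -1.0   # bottom-right corner, -x

print("net force:", F.sum())   # zero: the forces form a pure couple
```

The left edge would carry the reacting couple (or fixed supports), mirroring the figure from the paper.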

## Wednesday, November 29, 2017

### Topology Optimization for Coupled Thermo-Fluidic Problems

Interesting video of a talk by Ole Sigmund on optimizing topology for fluid mixing or heat transfer.

## Sunday, November 26, 2017

### Installing ToPy in Fedora 26

This post summarizes the steps to install ToPy in Fedora 26.

## Monday, November 20, 2017

### Machine Learning for CFD Turbulence Closures

I wrote a couple of previous posts on some interesting work using deep learning to accelerate topology optimization, and on a couple of neural network methods for accelerating computational fluid dynamics (with source). This post is about a use of machine learning in computational fluid dynamics (CFD) with a slightly different goal: to improve the quality of solutions. Rather than focusing on getting to solutions more quickly, this post covers work focused on getting better solutions, where a better solution is one that has more predictive capability. There is usually a trade-off between predictive capability and how long it takes to get a solution. The most well-known area for improvement in the predictive capability of state-of-the-practice, industrial CFD is in our turbulence and transition modeling. There is a proliferation of approaches to tackling that problem, but the overall strategy that seems to be paying off is for CFD'ers to follow the enormous investment being made by the large tech companies in techniques, open source libraries, and services for machine learning. How can those free / low-cost tools and techniques be applied to our problems?

The authors of Machine Learning Models of Errors in Large Eddy Simulation Predictions of Surface Pressure Fluctuations used machine learning techniques to model the error in their LES solutions. See an illustration of the instantaneous density gradient magnitude of the developing boundary layer from that paper shown to the right. Here's the abstract,
> We investigate a novel application of deep neural networks to modeling of errors in prediction of surface pressure fluctuations beneath a compressible, turbulent flow. In this context, the truth solution is given by Direct Numerical Simulation (DNS) data, while the predictive model is a wall-modeled Large Eddy Simulation (LES). The neural network provides a means to map relevant statistical flow-features within the LES solution to errors in prediction of wall pressure spectra. We simulate a number of flat plate turbulent boundary layers using both DNS and wall-modeled LES to build up a database with which to train the neural network. We then apply machine learning techniques to develop an optimized neural network model for the error in terms of relevant flow features.
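As a cartoon of that workflow (entirely synthetic data, not the paper's features, error metric, or network architecture), a small NumPy MLP regressing an "error" signal onto a few flow features might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows of X are "flow features" extracted from an
# LES (all made up here), and y is a scalar error of the LES prediction
# relative to DNS truth, with an unknown nonlinear relationship.
X = rng.normal(size=(500, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# One-hidden-layer MLP trained with plain full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(3000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = (h @ W2 + b2).ravel()      # predicted error
    resid = pred - y
    # Backpropagation of the mean-squared-error loss.
    g_pred = (2.0 / len(y)) * resid[:, None]
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T * (1.0 - h**2)
    gW1 = X.T @ g_h; gb1 = g_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Final training error, well below the variance of y if the fit worked.
h = np.tanh(X @ W1 + b1)
pred = (h @ W2 + b2).ravel()
mse = float(np.mean((pred - y) ** 2))
print(f"final training MSE: {mse:.4f}")
```

The real work in the paper is in choosing physically meaningful flow features and building the DNS/LES training database; the regression itself is the easy part.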

## Monday, November 13, 2017

### Deep Learning to Accelerate Computational Fluid Dynamics

*Lat-Net: Compressing Lattice Boltzmann Flow Simulations using Deep Neural Networks*
I posted about a surprising application of deep learning to accelerate topology optimization. The thing I like about that approach is that it's a strategy that could be applied to accelerate many of the different solvers we use to simulate all sorts of continuum mechanics based on partial differential equations (e.g. computational fluid dynamics, structural mechanics, electrodynamics, etc.). With a bit of help from Google I found a neat paper and project on GitHub doing exactly that for a Lattice-Boltzmann fluid solver.
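To make concrete what such a solver computes (this is a generic textbook D2Q9 scheme, not Lat-Net's code), here is a minimal stream-and-collide lattice Boltzmann update in NumPy:

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities with the standard weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """BGK equilibrium distributions for density rho and velocity (ux, uy)."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return rho * w[:, None, None] * (1.0 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau=0.6):
    """One stream-and-collide update on a fully periodic domain."""
    # Streaming: each population moves one cell along its lattice velocity.
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # Macroscopic moments.
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward the local equilibrium.
    f += (equilibrium(rho, ux, uy) - f) / tau
    return f, rho, ux, uy

# Tiny demo: a uniform fluid with a small density bump that relaxes away.
nx, ny = 32, 32
rho0 = np.ones((nx, ny))
rho0[16, 16] += 0.1
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
for _ in range(100):
    f, rho, ux, uy = lbm_step(f)
print("mass conserved:", np.isclose(rho.sum(), rho0.sum()))
```

Lat-Net's trick is to learn a compressed latent representation of the distribution field `f` and step it forward with a neural network instead of running this update on the full lattice.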

## Friday, November 10, 2017

### Deep Learning to Accelerate Topology Optimization

*Topology Optimization Data Set for CNN Training*
Neural networks for topology optimization is an interesting paper I read on arXiv that illustrates how to speed up topology optimization calculations using a deep convolutional neural network. The data sets for training the network are generated in ToPy, which is an open source topology optimization tool.

## Saturday, March 25, 2017

### Innovation, Entropy and Exoplanets

I enjoy Shipulski on Design for the short articles on innovation. They are generally not technical at all. I like to think of most of the posts as innovation poetry to put your thoughts along the right lines of effort. This recent post has a huge, interesting technical iceberg riding under the surface though.
> If you run an experiment where you are 100% sure of the outcome, your learning is zero. You already knew how it would go, so there was no need to run the experiment. The least costly experiment is the one you didn’t have to run, so don’t run experiments when you know how they’ll turn out. If you run an experiment where you are 0% sure of the outcome, your learning is zero. These experiments are like buying a lottery ticket – you learn the number you chose didn’t win, but you learned nothing about how to choose next week’s number. You’re down a dollar, but no smarter.

> The learning ratio is maximized when energy is minimized (the simplest experiment is run) and the probability that the experimental results match your hypothesis (expectation) is 50%. In that way, half of the experiments confirm your hypothesis and the other half tell you why your hypothesis was off track.

from *Maximize The Learning Ratio*
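That's the technical iceberg: this is my information-theoretic reading, not Shipulski's framing, but the expected information gained from a yes/no experiment is the Shannon entropy of its outcome, which vanishes at 0% and 100% certainty and peaks at exactly 50%:

```python
import math

def binary_entropy(p):
    """Shannon entropy (in bits) of a yes/no experiment with success probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Entropy is zero at p = 0 and p = 1, and peaks (at 1 bit) at p = 0.5.
for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"p = {p:.1f}  ->  H = {binary_entropy(p):.3f} bits")
```

So "run the experiment you're 50% sure about" is exactly "run the experiment with maximum entropy," one bit of learning per trial.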