tag:blogger.com,1999:blog-58228050282918377382024-03-08T15:27:42.354-05:00Various Consequences<a href="http://www.newyorker.com/archive/1962/06/02/1962_06_02_031_TNY_CARDS_000272048">... against so much unfairness of things, various consequences ensued ...</a>Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.comBlogger330125tag:blogger.com,1999:blog-5822805028291837738.post-8399771531087783352021-05-26T06:18:00.004-04:002021-05-26T06:19:00.161-04:00User Friendly 3D Scans and Photogrammetry (Mobile Apps)<iframe width="560" height="315" src="https://www.youtube.com/embed/qZY6y1IVIfw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<p>Wow, the user friendliness and speed of this app are just amazing. It's a long way in ease and usability from the <a href="http://www.variousconsequences.com/2013/02/dayton-masonic-temple-photogrammetry.html">open source tool chain</a> I've used previously.</p> Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-77704452346222316062021-01-06T08:00:00.004-05:002021-01-06T08:16:59.439-05:00Stanford Center for Turbulence Research 2020 Annual Briefs<div class="separator" style="clear: both;"><a href="https://ctr.stanford.edu/annual-research-briefs-2020" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="546" data-original-width="720" height="485" src="https://1.bp.blogspot.com/-It4fPF51pI4/X_Wz_SfSffI/AAAAAAAADRI/8E-fBRANLf4DsRuWhRUAHz0sVILxz6aCACLcBGAsYHQ/w640-h485/Screenshot%2Bfrom%2B2021-01-06%2B07-58-00.png" width="640" /></a></div>
<a href="https://ctr.stanford.edu/annual-research-briefs-2020">Center for Turbulence Research Annual Briefs</a>
<p><span></span></p><a name='more'></a><p></p><p>I like the velocity-altitude plot in the report linked in the comments. It's a good illustration that "hypersonic" can mean a lot of different physics depending on where in the flight envelope you fly. </p>
<div class="separator" style="clear: both;"><a href="https://1.bp.blogspot.com/-tCiCUqJjAH0/X_W31j2z57I/AAAAAAAADRU/j3xeMb1T7hwqGYKPsXL4yDjE3RT__bgGgCLcBGAsYHQ/s641/Screenshot%2Bfrom%2B2021-01-06%2B08-13-31.png" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="256" data-original-width="641" height="160" src="https://1.bp.blogspot.com/-tCiCUqJjAH0/X_W31j2z57I/AAAAAAAADRU/j3xeMb1T7hwqGYKPsXL4yDjE3RT__bgGgCLcBGAsYHQ/w400-h160/Screenshot%2Bfrom%2B2021-01-06%2B08-13-31.png" width="400" /></a></div>Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com2tag:blogger.com,1999:blog-5822805028291837738.post-90447963636161374682021-01-05T08:38:00.006-05:002021-01-05T08:38:44.236-05:00Machine Learning for Fluid Dynamics: Patterns<iframe width="560" height="315" src="https://www.youtube.com/embed/3fOXIbycAmc" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<p> </p>
The data set he mentions is <a href="http://turbulence.pha.jhu.edu/">Johns Hopkins Turbulence Databases</a>. Many hundreds of Terabytes of direct numerical simulations with <a href="http://turbulence.pha.jhu.edu/datasets.aspx">different governing equations and boundary conditions</a>. Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com3tag:blogger.com,1999:blog-5822805028291837738.post-20620142143966798602020-11-11T08:46:00.002-05:002020-11-11T08:47:10.969-05:00Missiles and Rockets<div class="separator" style="clear: both;"><a href="https://1.bp.blogspot.com/-IAoWlkiAMAs/X6vqb0FNMOI/AAAAAAAADPw/FLJaFOHWP3I__qQ0Ne7I_lIJqao4bydTACLcBGAsYHQ/s1583/Screenshot%2Bfrom%2B2020-11-10%2B09-00-21.png" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="642" data-original-width="1583" height="259" src="https://1.bp.blogspot.com/-IAoWlkiAMAs/X6vqb0FNMOI/AAAAAAAADPw/FLJaFOHWP3I__qQ0Ne7I_lIJqao4bydTACLcBGAsYHQ/w640-h259/Screenshot%2Bfrom%2B2020-11-10%2B09-00-21.png" width="640" /></a></div>
<a href="https://archive.org/details/misslesandrockets?tab=about">Missiles and Rockets</a> was a magazine that ran from the mid-1950s to the mid-1960s at the height of the Space Race. All <a href="https://archive.org/details/misslesandrockets?tab=collection">the issues</a> are available on the Internet Archive. It's neat to see some of the old concepts (like the manned rocket bomber above), and the advertisements in the early issues, placed by companies trying to hire engineers and scientists to cash in on the flood of funding in those gold-rush days, are pretty entertaining. Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-26727427285254229742020-04-21T09:34:00.000-04:002020-04-21T09:34:23.096-04:00Octet Truss Cube: Printability Update<a href="https://www.shapeways.com/product/EDVLC5GKA/octet-truss-cube-3x3x3">This</a> little octet truss example cube moved from "First to Try" to "Successfully Printed" on <a href="https://www.shapeways.com/shops/variousconsequences">my Shapeways store</a>. They estimate a greater than 80% success rate on the print (thanks to whoever ordered the part!). <br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://www.shapeways.com/product/EDVLC5GKA/octet-truss-cube-3x3x3" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFCx_pB8wmCorYSFLBPuWHsSfrsqq_KJf6XUr7T_SWKG4zaK84yifdo0-4qUTnF7Vm5L_TZQ9s47iUBukxI66ZJu76wGiSx_lBpjyPW6VMLHFHJq7E4-y83RCnUIZCOQnuTCgK5mAHa90/s640/710x528_15634483_893592_1473270569.jpg.png" width="640" height="476" data-original-width="710" data-original-height="528" /></a></div>I knew it could be printed since I had <a href="https://www.variousconsequences.com/2012/06/3d-printed-isogrid-and-octet-truss.html">previously printed</a> versions of this design, but Shapeways updated their print-ability guidelines since then. Nice to see it still works! Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-89517208586711483362020-03-26T09:44:00.002-04:002020-03-26T09:44:09.781-04:00NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis<iframe width="560" height="315" src="https://www.youtube.com/embed/JuH79E8rdKc" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><br />
This is really cool. The capture for the specular reflections is great. I'm excited that something like this could be really useful for better photogrammetry. For instance, see this <a href="https://www.variousconsequences.com/2013/03/gbu-8-photogrammetry.html">old GBU I did</a> a long time ago. The reflections off the shiny metal cause artifacts in the point cloud reconstruction. <br />
<br />
There's a <a href="https://arxiv.org/abs/2003.08934">paper</a>, a <a href="http://www.matthewtancik.com/nerf">project summary page</a> with more views, and a <a href="https://github.com/bmild/nerf">github project page for the code</a> as well. Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-35835226622523517582019-04-06T07:38:00.001-04:002019-04-06T07:39:03.657-04:003D Shape Segmentation With Projective Convolutional NetworksThis is an interesting summary of an approach for shape segmentation. I think it's pretty cool how often <a href="https://neurohive.io/en/popular-networks/vgg16/">VGG-16</a> gets used for transfer learning with good results. It's amazing that these models can represent enough knowledge to generate 3-D surfaces from single images. (I also like how many folks use airplanes as examples : - ) <br />
<br />
<iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/H94ASpItkLI" width="560"></iframe><br />
<br />
There's a website for the <a href="https://shapenet.cs.stanford.edu/iccv17/">ShapeNet data</a> set that they used as a benchmark in the video, and <a href="https://arxiv.org/pdf/1710.06104.pdf">this paper</a> describes the initial methods folks developed during the challenge right after the data set was released. That challenge format is a pretty neat approach; it reminds me a bit of the AIAA drag prediction workshops. <br />
<br />
<a name='more'></a><br />
Here is the summary from <a href="https://arxiv.org/pdf/1710.06104.pdf">the paper</a>: <br />
<blockquote>As a summary of all approaches and results, we have the following major observations: <br />
<ol><li> Approaches from all teams on both tasks are deep learning based, which shows the unparallel popularity of deep learning for 3D understanding from big data<br />
</li>
<li> Various 3D representations, including volumetric and point cloud formats, have been tried. In particular, point cloud representation is quite popular and a few novel ideas to exploit point cloud format have been proposed<br />
</li>
<li> The evaluation metric for 3D reconstruction is a topic worth further investigation. Under two standard evaluation metrics (Chamfer distance and IoU), we observe that two different approaches have won the first place. In particular, the coarse-to-fine supervised learning method wins by the IoU metric, while the GAN based method wins by the Chamfer distance metric. <br />
</li>
</ol></blockquote><br />
I like the hierarchical approach because it seems like it would be efficient. They use an octree data structure to allow them to only refine where they have a boundary label in a voxel. This reminds me a lot of Cartesian mesh refinement that some folks use in CFD for adaptive meshing. <br />
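That refine-only-at-the-boundary idea is simple enough to sketch. Here's a toy illustration (my own, not code from the paper) that uses an analytic sphere in place of the network's voxel boundary labels, and subdivides an octree cell only when it straddles the surface:

```python
import math

def inside(p):
    # Example implicit shape: a sphere of radius 0.35 centered in the unit cube
    return math.dist(p, (0.5, 0.5, 0.5)) <= 0.35

def crosses_surface(lo, hi):
    # Crude test: the cell straddles the surface if its corners and center
    # disagree about inside/outside (a real code would use voxel labels)
    mid = tuple((l + h) / 2 for l, h in zip(lo, hi))
    samples = [(x, y, z) for x in (lo[0], hi[0])
                         for y in (lo[1], hi[1])
                         for z in (lo[2], hi[2])] + [mid]
    flags = [inside(s) for s in samples]
    return any(flags) and not all(flags)

def refine(lo, hi, depth, max_depth, leaves):
    if depth == max_depth or not crosses_surface(lo, hi):
        leaves.append((lo, hi))  # uniform cell: stop refining here
        return
    mid = tuple((l + h) / 2 for l, h in zip(lo, hi))
    for octant in range(8):  # recurse into the 8 children
        bits = (octant & 1, (octant >> 1) & 1, (octant >> 2) & 1)
        nlo = tuple(lo[d] if b == 0 else mid[d] for d, b in enumerate(bits))
        nhi = tuple(mid[d] if b == 0 else hi[d] for d, b in enumerate(bits))
        refine(nlo, nhi, depth + 1, max_depth, leaves)

leaves = []
refine((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), 0, 4, leaves)
```

The leaf count stays far below the 16&#179; = 4096 cells a uniform grid at the finest level would need, which is the whole point of the octree approach.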
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-uA8rIw_LpK0/XKiMwdaASYI/AAAAAAAADEA/1xPHu9nB_DABtEGJelNzSsJabOlrefbXACLcBGAs/s1600/Screenshot%2Bfrom%2B2019-04-06%2B07-25-43.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="405" data-original-width="525" height="494" src="https://1.bp.blogspot.com/-uA8rIw_LpK0/XKiMwdaASYI/AAAAAAAADEA/1xPHu9nB_DABtEGJelNzSsJabOlrefbXACLcBGAs/s640/Screenshot%2Bfrom%2B2019-04-06%2B07-25-43.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Hierarchical Surface Generation from Single Image, <a href="https://arxiv.org/pdf/1704.00710.pdf">Hierarchical Surface Prediction, by C. Hane, S.Tulsiani, J. Malik</a></td></tr>
</tbody></table>Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-1394332334245770662019-03-31T17:25:00.000-04:002019-03-31T17:26:22.976-04:00Fun with Machines that BendI really like his 3-D printed titanium part at about the 8 minute mark, and the chainsaw clutch at minute 10 is pretty neat too. <br />
<iframe width="560" height="315" src="https://www.youtube.com/embed/97t7Xj_iBv0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><br />
<br />
The eight "P's" of compliant mechanisms:<br />
<ol><li>Part count: reduced parts count with bending parts instead of hinges and springs</li>
<li>Production processes: lower price through processes like injection molding</li>
<li>Price: lower because of reduced parts count and affordable processes with reduced assembly</li>
<li>Precise motion: no backlash (yea!)</li>
<li>Performance: no need for lubricants, reduced wear </li>
<li>Proportions: can be made at small scale with photolithography </li>
<li>Portable: lightweight </li>
<li>Predictable: the operation of the mechanism can be well-known and reliable </li>
</ol>Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-4234576381126283052019-03-27T06:57:00.000-04:002019-03-27T06:59:57.655-04:00Engineering Sketch Pad<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-0vdiVXRbp00/XJtWy3csv9I/AAAAAAAADDY/-5ivyiDNwdQrWRO9Ze4FIiUKiOJaszgfACLcBGAs/s1600/Screenshot%2Bfrom%2B2019-03-27%2B06-55-51.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="515" data-original-width="815" height="403" src="https://3.bp.blogspot.com/-0vdiVXRbp00/XJtWy3csv9I/AAAAAAAADDY/-5ivyiDNwdQrWRO9Ze4FIiUKiOJaszgfACLcBGAs/s640/Screenshot%2Bfrom%2B2019-03-27%2B06-55-51.png" width="640" /></a></div><br />
I hadn't heard of <a href="https://github.com/OpenMDAO/EngSketchPad">Engineering Sketch Pad</a> (source code as part of <a href="http://openmdao.org/">OpenMDAO</a>, and <a href="https://acdl.mit.edu/ESP/">here</a>) before, but this is yet another NASA-sponsored open source tool that could be useful to you for aircraft conceptual design. I read about it in <a href="https://blog.pointwise.com/2019/03/25/automatic-not-automated-meshing/#comment-42184">a post</a> on <a href="https://blog.pointwise.com/">Another Fine Mesh</a> about some interesting research the folks at Pointwise are doing. It reminds me of, but is different from, <a href="http://openvsp.org/">Open Vehicle Sketch Pad</a>. <br />
<br />
There's a seminar on the software given by one of the developers up on a NASA site: <a href="https://www.nas.nasa.gov/publications/ams/2017/04-27-17.html">The Engineering Sketch Pad (ESP): Supporting Design Through Analysis</a>. (yea, DARPA!)<br />
<br />
It has some neat features that make it useful for supporting high-fidelity analysis. It creates watertight geometry, it can carry attributes with the geometry that could guide mesh resolution, it does "conservative" data transfer for discipline coupling (matching a solver's numerical scheme), and most of its parts are differentiable, which is useful for optimization. <br />
<br />
I added this to my list of <a href="https://www.variousconsequences.com/p/open-source-aeronautical-engineering.html">Open Source Aeronautical Engineering Tools</a>. Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com2tag:blogger.com,1999:blog-5822805028291837738.post-85236109305247298982019-01-24T07:01:00.000-05:002019-01-24T07:02:18.105-05:00OpenLSTO plus InverseCSG<!--l. 11--><br />
<div class="noindent">
I was recently excited to learn about the <a href="https://www.variousconsequences.com/2019/01/openlsto-open-source-topology-optimization-code.html">OpenLSTO</a> and <a href="https://www.variousconsequences.com/2019/01/inversecsg-recovers-cad-from-model.html">InverseCSG</a> projects, and that got me thinking: can we automate topology optimization interpretation for a 3D part with open source tools?</div>
<!--l. 18--><br />
<div class="indent">
Topology optimization results are usually a discrete set of density voxels (as from <a href="https://github.com/williamhunter/topy">ToPy</a>) or a triangulated mesh (as from <a href="http://m2do.ucsd.edu/software/">OpenLSTO</a>). An <span class="cmti-10">interpretation </span>step is often required to take this result and turn it into something that you can fabricate or incorporate into further design activities. In the case of <a href="http://m2do.ucsd.edu/software/">OpenLSTO</a>, if you are 3D printing, you already get what your manufacturing chain needs (an <a href="https://en.wikipedia.org/wiki/STL_(file_format)">stl file</a>).</div>
<!--l. 28--><br />
<div class="indent">
Interpreting the results of a topology optimization can be a time consuming manual process for a designer. While the steps to interpret a 2D topology optimization result can already be automated with a complete open source tool-chain, 3D is harder. I demonstrated in <a href="https://www.variousconsequences.com/2014/12/fully-scripted-open-source-topology-optimization.html">this post</a> how the 2D bitmap output of <a href="https://github.com/williamhunter/topy">ToPy</a> can be traced to generate <a href="https://en.wikipedia.org/wiki/AutoCAD_DXF">dxf files</a> that you can import and manipulate in a <a href="https://en.wikipedia.org/wiki/Computer-aided_design">CAD</a> program. On the other hand, here’s <a href="https://www.variousconsequences.com/2012/12/open-source-topology-optimization-for.html">an example I did</a> that demonstrates the more manual process for a 3D part.</div>
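The core of that 2D tracing step is small enough to sketch. This is a simplified stand-in for the tool-chain in that post: it just emits the exposed edges of cells above a density threshold; a real script would chain the edges into closed loops and write them out (e.g. as DXF LINE entities with something like ezdxf):

```python
def boundary_segments(density, threshold=0.5):
    """Exposed cell edges of a thresholded 2D density field.

    Returns unit-length segments ((x0, y0), (x1, y1)) in grid coordinates;
    cell (j, i) spans x in [i, i+1] and y in [j, j+1].
    """
    ny, nx = len(density), len(density[0])

    def solid(j, i):
        return 0 <= j < ny and 0 <= i < nx and density[j][i] > threshold

    segs = []
    for j in range(ny):
        for i in range(nx):
            if not solid(j, i):
                continue
            if not solid(j - 1, i):   # bottom edge is on the boundary
                segs.append(((i, j), (i + 1, j)))
            if not solid(j + 1, i):   # top edge
                segs.append(((i, j + 1), (i + 1, j + 1)))
            if not solid(j, i - 1):   # left edge
                segs.append(((i, j), (i, j + 1)))
            if not solid(j, i + 1):   # right edge
                segs.append(((i + 1, j), (i + 1, j + 1)))
    return segs
```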
<!--l. 42--><br />
<a name='more'></a><br /><br />
<div class="indent">
There is a fairly accessible literature on the interpretation of topology optimization results to get manufacturable designs. This was written about a decade ago, but I think it’s still a mostly accurate representation of the state of the practice: “The current industry standard for processing TO [topology optimization] results is to have a human designer manually create a parametric CAD model that approximates the [topology optimization output]. This is a tedious and time consuming process which is highly dependent on the individual designer’s interpretation of the TO results.” <span class="cite">[<a href="https://www.blogger.com/blogger.g?blogID=5822805028291837738#XLarsen09">1</a>]</span> For example, see <a href="https://altairuniversity.com/15364-interpretation-of-topology-optimization-result/">this tutorial</a> on how to use Altair’s set of tools to accomplish this task. There are some efforts to partially automate the interpretation of the topology optimization results which seem to be based primarily on curve fitting cross-sections.</div>
<hr class="figure" />
<div class="figure">
<a href="https://www.blogger.com/null" id="x1-21"></a><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-CGoOr5cfFKY/XEmlOWkSsPI/AAAAAAAADBg/4ypGwIT4TaAvFJxrqbPd5NVGW95AMKukgCLcBGAs/s1600/larsen09-process.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="160" data-original-width="753" height="132" src="https://1.bp.blogspot.com/-CGoOr5cfFKY/XEmlOWkSsPI/AAAAAAAADBg/4ypGwIT4TaAvFJxrqbPd5NVGW95AMKukgCLcBGAs/s640/larsen09-process.png" width="640" /></a></div>
<!--l. 58--><br />
<div class="caption">
<span class="id">Figure 1: </span><span class="content">Topology Optimization Interpretation Process <span class="cite">[<a href="https://www.blogger.com/blogger.g?blogID=5822805028291837738#XLarsen09">1</a>]</span></span></div>
<!--tex4ht:label?: x1-21 --><br /></div>
<hr class="endfigure" />
<!--l. 61--><br />
<div class="indent">
Hsu et al. <span class="cite">[<a href="https://www.blogger.com/blogger.g?blogID=5822805028291837738#Xhsu05">2</a>]</span> demonstrated a method of interpretation that relied on approximating 2D cross-sections of the 3D topology optimization result and fitting those with <a href="https://en.wikipedia.org/wiki/Non-uniform_rational_B-spline">NURBS</a>. </div>
<hr class="figure" />
<div class="figure">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWQjxiRuYB_9Fqf-twZarQCly1rjhDXgi57Ty8S6A12LIkAwsnJgLC_A_duNEl6oaJxnkIE8pmcRZDwhgqM2SKigJFFr-ShgF3TjFOYwf-SGCM8cfXCgaxBd1C06wdW41ME9VWBCg8K2U/s1600/hsu-2005-3D-reconstruction.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="222" data-original-width="351" height="252" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWQjxiRuYB_9Fqf-twZarQCly1rjhDXgi57Ty8S6A12LIkAwsnJgLC_A_duNEl6oaJxnkIE8pmcRZDwhgqM2SKigJFFr-ShgF3TjFOYwf-SGCM8cfXCgaxBd1C06wdW41ME9VWBCg8K2U/s400/hsu-2005-3D-reconstruction.png" width="400" /></a></div>
<a href="https://www.blogger.com/null" id="x1-32"></a><br />
<!--l. 66--><br />
<div class="caption">
<span class="id">Figure 2: </span><span class="content">Cross-section-based NURBS model of topology optimization result<br />
<span class="cite">[<a href="https://www.blogger.com/blogger.g?blogID=5822805028291837738#Xhsu05">2</a>]</span></span></div>
<!--tex4ht:label?: x1-32 --><br /></div>
<hr class="endfigure" />
<!--l. 69--><br />
<div class="indent">
They note a significant drawback with this sort of approach: &#8220;the direction of selected representative cross-section will affect the efficiency in reconstructing the three-dimensional CAD model.&#8221; In a similar vein, Bruno et al. <span class="cite">[<a href="https://www.blogger.com/blogger.g?blogID=5822805028291837738#XBruno17">3</a>]</span> propose a method that automatically generates spline-based 2D CAD models that can be meshed and then analyzed to ensure that the interpreted design still meets requirements, since the interpretation step usually degrades the optimized metric (compliance, max stress). Drawbacks noted by Larsen et al. <span class="cite">[<a href="https://www.blogger.com/blogger.g?blogID=5822805028291837738#XLarsen09">1</a>]</span> are that these spline fitting approaches miss opportunities to fit lower order models with fewer parameters than NURBS (like <a href="https://en.wikipedia.org/wiki/Constructive_solid_geometry">CSG</a> primitives), and that there is no feature operation tree at the end of the interpretation process, which would be useful to designers for further design steps. They lay out a nine-step method to interpret topology optimization results:</div>
<ol class="enumerate1">
<li class="enumerate" id="x1-5x1">Generate an IGES file containing the faceted surfaces of the TO results<br />
</li>
<li class="enumerate" id="x1-7x2">Import the design space part and overlay with the faceted surfaces<br />
</li>
<li class="enumerate" id="x1-9x3">Identify feature surfaces among the faceted surfaces of the IGES file<br />
</li>
<li class="enumerate" id="x1-11x4">Identify design space surfaces that the feature intersects<br />
</li>
<li class="enumerate" id="x1-13x5">Signify feature orientation geometry<br />
</li>
<li class="enumerate" id="x1-15x6">Specify the number of cross sections to use<br />
</li>
<li class="enumerate" id="x1-17x7">Sample the model at each cross section location<br />
</li>
<li class="enumerate" id="x1-19x8">Compare samples to polar maps of defined shape templates<br />
</li>
<li class="enumerate" id="x1-21x9">Utilize CAD API to cut geometry away from design space part</li>
</ol>
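Step 8 is where the shape recognition happens. Here's a rough sketch of what comparing a sampled cross-section to a polar map of a shape template might look like (my own illustration of the idea, not code from the paper):

```python
import math

def polar_signature(points, n_bins=36):
    """Radius-vs-angle signature of a cross-section about its centroid."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    total = [0.0] * n_bins
    count = [0] * n_bins
    for x, y in points:
        theta = math.atan2(y - cy, x - cx) % (2 * math.pi)
        k = min(int(theta * n_bins / (2 * math.pi)), n_bins - 1)
        total[k] += math.hypot(x - cx, y - cy)
        count[k] += 1
    return [t / c if c else 0.0 for t, c in zip(total, count)]

def signature_distance(a, b):
    """RMS difference of max-normalized signatures (scale invariant)."""
    sa, sb = max(a) or 1.0, max(b) or 1.0
    return math.sqrt(sum((x / sa - y / sb) ** 2 for x, y in zip(a, b)) / len(a))

# Hypothetical shape templates sampled as boundary point sets
circle = [(math.cos(2 * math.pi * i / 200), math.sin(2 * math.pi * i / 200))
          for i in range(200)]
square = []
for i in range(40):
    t = i / 40
    square += [(t, 0.0), (1.0, t), (1.0 - t, 1.0), (0.0, 1.0 - t)]
```

A flat signature reads as a circle, while the square's signature oscillates between the inradius and the corner radius, so the distance between the two templates is large.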
<!--l. 100--><br />
<div class="noindent">
This process requires a significant amount of interactive user input. I think it would be great if a tool like <a href="https://www.variousconsequences.com/2019/01/inversecsg-recovers-cad-from-model.html">InverseCSG</a> could automate this work-flow or something like it.</div>
<!--l. 104--><br />
<div class="indent">
The <a href="https://www.variousconsequences.com/2019/01/openlsto-open-source-topology-optimization-code.html">OpenLSTO</a> project ships with an example 3D cantilever. It is interesting to try <a href="https://www.variousconsequences.com/2019/01/inversecsg-recovers-cad-from-model.html">InverseCSG</a> on this part. The stl file is up on <a href="https://gist.github.com/jstults/089936a1e25e5dadeb5264e0b422b047">github</a>. <br />
<script src="https://gist.github.com/jstults/089936a1e25e5dadeb5264e0b422b047.js"></script><br />
I ran this shape (converted to <a href="https://en.wikipedia.org/wiki/OFF_(file_format)">off</a>) through the <a href="https://www.variousconsequences.com/2019/01/inversecsg-recovers-cad-from-model.html">InverseCSG</a> process. The resulting CSG part is pretty rough (unusable), but interesting. </div>
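If you want to reproduce the conversion step, an ASCII-STL-to-off converter is only a few lines. This minimal sketch skips binary STL and any mesh cleanup (a tool like MeshLab handles the general case):

```python
def stl_to_off(stl_text):
    """Convert an ASCII STL string to an OFF string.

    Collects 'vertex x y z' lines, deduplicates shared vertices, and
    emits one triangular face per facet. Binary STL is not handled.
    """
    verts, index, faces, face = [], {}, [], []
    for line in stl_text.splitlines():
        parts = line.split()
        if parts[:1] == ["vertex"]:
            v = tuple(float(x) for x in parts[1:4])
            if v not in index:          # deduplicate shared vertices
                index[v] = len(verts)
                verts.append(v)
            face.append(index[v])
        elif parts[:1] == ["endfacet"]:
            faces.append(face)
            face = []
    # OFF header: "OFF" then vertex/face/edge counts
    lines = ["OFF", f"{len(verts)} {len(faces)} 0"]
    lines += [" ".join(map(str, v)) for v in verts]
    lines += ["3 " + " ".join(map(str, f)) for f in faces]
    return "\n".join(lines) + "\n"

# Two triangles sharing an edge: 4 unique vertices, 2 faces
STL_EXAMPLE = """solid demo
facet normal 0 0 1
 outer loop
  vertex 0 0 0
  vertex 1 0 0
  vertex 0 1 0
 endloop
endfacet
facet normal 0 0 1
 outer loop
  vertex 1 0 0
  vertex 1 1 0
  vertex 0 1 0
 endloop
endfacet
endsolid demo
"""
```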
<hr class="figure" />
<div class="figure">
<script src="https://gist.github.com/jstults/e4957fcbe264831406d37e3364ab106b.js"></script><br /></div>
<hr class="endfigure" />
<!--l. 113--><br />
<div class="indent">
There are plenty of parameters I probably need to tweak in the <a href="https://gist.github.com/jstults/8cddf5e18c17d18a74ed6ec538d13f88">ransac.conf file</a> or the <a href="https://gist.github.com/jstults/3e6e1ac5958ad50f213f3dddff2c3ac1">arguments in the InverseCSG call</a> for accuracy and surface or volume sampling to get a better result. If you’ve got ideas in that direction please share them in the comments.</div>
<!--l. 121--><br />
<div class="indent">
One of the aspects of <a href="https://www.variousconsequences.com/2019/01/inversecsg-recovers-cad-from-model.html">InverseCSG</a> that I like is that it ships with a 50 part benchmark dataset. </div>
<hr class="figure" />
<div class="figure">
<a href="https://www.blogger.com/null" id="x1-223"></a><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxiDyCF1Vg1tacMuMVCIkAzXLQQJYDBzZEYTflW1dVhTCxSbQOAGL9ONwfrA0YeV3jpAALxKfFi3CqUNKBWbkpt0kpyf7OMExDAe3JlF7WGOHzqSCwPNAfunbX9-qmzMEwtuMI0VQuGR4/s1600/benchmark.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="800" data-original-width="1600" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxiDyCF1Vg1tacMuMVCIkAzXLQQJYDBzZEYTflW1dVhTCxSbQOAGL9ONwfrA0YeV3jpAALxKfFi3CqUNKBWbkpt0kpyf7OMExDAe3JlF7WGOHzqSCwPNAfunbX9-qmzMEwtuMI0VQuGR4/s640/benchmark.jpg" width="640" /></a></div>
<!--l. 124--><br />
<div class="caption">
<span class="id">Figure 3: </span><span class="content"><a href="http://cfg.mit.edu/content/inversecsg-automatic-conversion-3d-models-csg-trees">InverseCSG</a> benchmark dataset</span></div>
<!--tex4ht:label?: x1-223 --><br /></div>
<hr class="endfigure" />
<!--l. 128--><br />
<div class="indent">
None of these really looks like the sort of shape you get from a topology optimization code, though. I think a really cool expansion to the <a href="https://www.variousconsequences.com/2019/01/inversecsg-recovers-cad-from-model.html">InverseCSG</a> benchmark dataset would be some outputs from a tool like <a href="https://www.variousconsequences.com/2019/01/openlsto-open-source-topology-optimization-code.html">OpenLSTO</a> or <a href="https://github.com/williamhunter/topy">ToPy</a>, perhaps inspired by classic topology optimization cases from the literature like the Michell cantilever or the Messerschmitt&#8211;B&#246;lkow&#8211;Blohm (MBB) beam.</div>
<!--l. 138--><br />
<div class="indent">
Drop a comment and let me know if you have luck trying this work-flow on any of your parts, or if you see any tools out there that tackle the topology optimization interpretation step in a useful way. <br />
<br />
Thanks for reading!</div>
<br />
<h3 class="likesectionHead">
<a href="https://www.blogger.com/null" id="x1-1000"></a>References</h3>
<!--l. 143--><br />
<div class="noindent">
</div>
<div class="thebibliography">
<div class="bibitem">
<span class="biblabel"><br />
[1]<span class="bibsp"> </span></span><a href="https://www.blogger.com/null" id="XLarsen09"></a>Larsen, S., Jensen, C.G., “<a href="https://www.tandfonline.com/doi/abs/10.3722/cadaps.2009.407-418">Converting Topology Optimization Results into Parametric CAD Models</a>,” Computer-Aided Design and Applications, Vol. 6, No. 3, Taylor & Francis, 2009.<br />
</div>
<div class="bibitem">
<span class="biblabel"><br />
[2]<span class="bibsp"> </span></span><a href="https://www.blogger.com/null" id="Xhsu05"></a>Hsu, M.H., Hsu, Y.L., “<a href="https://dl.acm.org/citation.cfm?id=1668662">Interpreting three-dimensional structural topology optimization results</a>,” Journal of Computers and Structures, Vol. 83, No. 4-5, 2005.<br />
</div>
<div class="bibitem">
<span class="biblabel"><br />
[3]<span class="bibsp"> </span></span><a href="https://www.blogger.com/null" id="XBruno17"></a>Bruno, H.B.S., Barros, G., Martha, L., Menezes, I., “<a href="http://webserver2.tecgraf.puc-rio.br/~lfm/papers/HugoBruno-MECSOL2017.pdf">Interpretation of Density-Based Topology Optimization Results by Means of a Topological Data Structure</a>,” VI International Symposium on Solid Mechanics - MecSol 2017, 26-28 April, 2017.<br />
</div>
</div>
Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com4tag:blogger.com,1999:blog-5822805028291837738.post-29114403400012986272019-01-09T17:50:00.002-05:002019-01-09T17:50:45.959-05:00InverseCSG recovers CAD from model<iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/mf7Xd6oxNrM" width="560"></iframe><br />
The MIT Computational Fabrication Group has <a href="http://cfg.mit.edu/content/inversecsg-automatic-conversion-3d-models-csg-trees">a page up</a> with the abstract and links to the paper and video. The InverseCSG folks took a <i>program synthesis</i> approach to enable them to generate CAD boolean operation "programs" from the 3D model "specification." <br />
<br />
<a name='more'></a><br />
The method seems pretty robust. It can handle noise in the input mesh, and it will approximate the input mesh with primitives that it knows even if the mesh was generated by primitives that it does not. <br />
<br />
The <a href="https://github.com/mit-gfx/InverseCSG">github repository for InverseCSG</a> is empty (an email from the author said they are still planning on posting it after some tidying up of the code), but a snapshot of the code is available in the <a href="https://dl.acm.org/citation.cfm?doid=3272127.3275006">source material on the ACM digital library site</a>. I had no trouble getting the code to compile on Fedora 29. You'll also need to install <a href="https://bitbucket.org/gatoatigrado/sketch-frontend/wiki/Home">sketch</a> by <a href="https://bitbucket.org/gatoatigrado/sketch-frontend/wiki/Installation#!dependencies-linux-opensuse-ubuntu-fedora-debian-mandrivia">following the instructions</a>, and make sure <a href="https://maven.apache.org/">Maven</a> and <a href="https://scikit-learn.org/stable/">scikit-learn</a> are installed (likely packages are available for your distro). <br />
<br />
I ran the example as suggested in the README:<br />
<code><br />
python3 run_tests.py ../build/ one_cube<br />
</code><br />
You'll need to comment out line 7 in main.py: <br />
<code><br />
#import genetic_algorithm_pipeline <br />
</code><br />
This little bit of code cleanup should be taken care of in the version on the github site once it goes live. With that small fix the test cases run as expected. <br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPbrn-8rYsuzYOmFqOascryLKhuhlBgCpxWIyRSrCZL8uzi-9t7pMlPhnoRnUVz2HMVkYKvlNfjTsHb3igDlYH_ICOS4KfO_RDzSLRXqnuGzMTbRhApQV3-tivnk53JFQEkQ7alTxS96o/s1600/Screenshot+from+2019-01-09+17-49-02.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="771" data-original-width="1087" height="282" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPbrn-8rYsuzYOmFqOascryLKhuhlBgCpxWIyRSrCZL8uzi-9t7pMlPhnoRnUVz2HMVkYKvlNfjTsHb3igDlYH_ICOS4KfO_RDzSLRXqnuGzMTbRhApQV3-tivnk53JFQEkQ7alTxS96o/s400/Screenshot+from+2019-01-09+17-49-02.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">One Cube Test Case OpenSCAD results</td></tr>
</tbody></table>
<br />
<br />
One of the other interesting things about this work is that they built a benchmark data set of 50 CAD models. It will be cool to see if more researchers take up this benchmark to test new and improved algorithms. As the README says, you can browse through the benchmark database in the example folder, and open up the results in OpenSCAD. <br />
<br />
Other coverage: <br />
<ul>
<li><a href="https://3dprintingindustry.com/news/mit-researchers-automate-reverse-engineering-of-3d-models-146286/">MIT researchers automate reverse engineering of 3D models</a> </li>
</ul>
Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com3tag:blogger.com,1999:blog-5822805028291837738.post-74801410405008945732019-01-04T12:36:00.001-05:002019-01-04T12:36:01.334-05:00OpenLSTO: New Open Source Topology Optimization Code<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-EKVdv-iNYcU/XC-VE8ActgI/AAAAAAAADA4/FMKOv9tEycweGwLVhTE2ifhgrhiEhudrQCLcBGAs/s1600/openlsto-cantilever.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="215" data-original-width="678" height="126" src="https://1.bp.blogspot.com/-EKVdv-iNYcU/XC-VE8ActgI/AAAAAAAADA4/FMKOv9tEycweGwLVhTE2ifhgrhiEhudrQCLcBGAs/s400/openlsto-cantilever.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Optimized 3D Cantilever from <a href="http://m2do.ucsd.edu/static/pdf/OpenLSTO-Tutorial-v1.0.pdf">OpenLSTO Tutorial</a></td></tr>
</tbody></table><span id="goog_930509479"></span><span id="goog_930509480"></span><br />
I was excited to see this short mention of a new open source topology optimization code in the <a href="https://aerospaceamerica.aiaa.org/year-in-review-index/2018/">Aerospace America Year in Review</a>. <br />
<blockquote>In July, University of California, San Diego published open-source level set topology optimization software. This new software routinely runs 10 million element models by adapting and tailoring the level set method, making design for additive manufacturing immediately accessible.<br />
<a href="https://aerospaceamerica.aiaa.org/year-in-review/new-computing-tools-international-collaboration-spell-design-progress/">New computing tools, international collaboration spell design progress</a></blockquote><br />
The <a href="http://m2do.ucsd.edu/software/">software site</a> for UC San Diego's <a href="http://m2do.ucsd.edu/">Multiscale, Multiphysics optimization lab</a> has the basic license information, and links to documentation and downloads. The <a href="https://github.com/M2DOLab/OpenLSTO/blob/master/README.md">source code</a> is up on github as well. <br />
<br />
<a name='more'></a><br />
I like the level-set method OpenLSTO uses, as opposed to the voxel-based approach of codes like <a href="https://github.com/williamhunter/topy">ToPy</a>. At this point ToPy is still more capable, since it can do heat transfer topology optimization as well as compliance minimization, though the OpenLSTO developers list heat transfer capabilities among their planned upgrades. In ToPy there was some post-processing effort required on the user's part to get from the <a href="https://www.variousconsequences.com/2012/12/open-source-topology-optimization-for.html">voxel output to a 3D printable shape</a> in a standard format suitable for fabrication. Getting an <a href="https://en.wikipedia.org/wiki/STL_(file_format)">STL file</a> export directly from the topology optimization code is a great feature that is really enabled by OpenLSTO's use of a level-set approach. <br />
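The difference between the two representations can be sketched in a few lines of NumPy (a toy illustration only, not code from OpenLSTO or ToPy; the grid size and the disc shape are arbitrary):

```python
import numpy as np

# Toy signed-distance (level set) description of a disc on a 64x64 grid.
n = 64
y, x = np.mgrid[0:n, 0:n]
cx, cy, r = 32.0, 32.0, 20.0

# phi < 0 inside the shape, phi > 0 outside; the boundary is phi == 0.
phi = np.hypot(x - cx, y - cy) - r

# A voxel-style density field only records material/void per cell...
voxels = (phi <= 0.0).astype(float)

# ...while the level set lets you interpolate the zero contour to sub-cell
# accuracy, which is what makes exporting a smooth STL surface natural.
near_boundary = np.abs(phi) < 1.0
```

The voxel array is what needs the extra post-processing step to become a printable surface; the zero contour of `phi` already is one.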
<br />
I can't wait to see what folks use this code to design. I am hoping for rapid progress in the new year on this new, and exciting open source topology optimization software! Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com2tag:blogger.com,1999:blog-5822805028291837738.post-74650319753764096552018-08-12T10:02:00.001-04:002018-08-12T10:03:50.026-04:00Monoprice Mini Delta 3D PrinterI recently bought my first personal 3D printer. I have been involved in DIY and hobbyist 3D printing for many years through the <a href="http://www.daytondiode.org/">Dayton Diode</a> hackerspace I co-founded. <iframe frameborder="0" marginheight="0" marginwidth="0" scrolling="no" src="//ws-na.amazon-adsystem.com/widgets/q?ServiceVersion=20070822&OneJS=1&Operation=GetAdHtml&MarketPlace=US&source=ac&ref=tf_til&ad_type=product_link&tracking_id=variousconseq-20&marketplace=amazon&region=US&placement=B07CJQ3D6L&asins=B07CJQ3D6L&linkId=698e18be1de0338fdfd7c53926e57612&show_border=false&link_opens_in_new_window=false&price_color=333333&title_color=0066C0&bg_color=FFFFFF" style="clear: left; float: left; height: 240px; margin-bottom: 1em; margin-right: 1em; width: 120px;"></iframe> This <a href="https://www.amazon.com/gp/product/B07CJQ3D6L/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B07CJQ3D6L&linkCode=as2&tag=variousconseq-20&linkId=f4afd1e2716a9d5f3e051da8c85ee81b">Monoprice Mini Delta</a> is the first printer of my very own. The price point is amazing (less than $160!), and things just work right out of the box. What a hugely different experience than building that <a href="http://www.daytondiode.org/2012/06/printrbot-has-arrived.html">first printrbot kit</a> (<a href="https://hackaday.com/2018/07/19/a-farewell-to-printrbot/">RIP printrbot</a>). The Printrbot story is actually a piece of the <a href="https://amzn.to/2nxAt3e">Innovator's Dilemma</a> playing out in this market niche. 
Printrbot disrupted a higher-cost competitor (Makerbot) who retreated up-market towards higher-end machines, and was then in turn disrupted by foreign suppliers like Monoprice. This caused Printrbot to retreat unsuccessfully up-market themselves towards $1000 machines. Who will disrupt Monoprice? I can't wait for my voice controlled, artificially intelligent, $20 printer... In the meantime, this post is about my experience with this little desktop <a href="https://en.wikipedia.org/wiki/Fused_filament_fabrication">FDM machine</a> you can buy today. <br />
<a name='more'></a><br />
<br />
The printer has a small build volume, and because it is a delta-style printer the volume is roughly cylindrical. <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-UabbCFT-doE/W3Ay_HdM_RI/AAAAAAAAC_Y/XUAysh5_-1g6yZJ8cpHjX4qbUu3LDNszgCLcBGAs/s1600/Screenshot%2Bfrom%2B2018-08-12%2B09-15-27.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="817" data-original-width="1171" height="140" src="https://3.bp.blogspot.com/-UabbCFT-doE/W3Ay_HdM_RI/AAAAAAAAC_Y/XUAysh5_-1g6yZJ8cpHjX4qbUu3LDNszgCLcBGAs/s200/Screenshot%2Bfrom%2B2018-08-12%2B09-15-27.png" width="200" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Mini Delta Profile on <a href="https://ultimaker.com/en/products/ultimaker-cura-software">Cura</a></td></tr>
</tbody></table>The <a href="https://mpminidelta.monoprice.com/">product page lists</a> the build volume as 110 mm in diameter, and 120 mm tall. I have always liked the user-friendly, yet powerful, open source slicing software <a href="https://ultimaker.com/en/products/ultimaker-cura-software">Cura</a>. There are profiles available from <a href="https://www.mpminidelta.com/start">the wiki</a> to load all the machine parameters and settings to get you started slicing your own design files. <br />
<br />
The first thing I printed was the little cat that comes loaded on the micro SD card with the printer. It came out great. My daughter thought it was amazing. Part of the reason I got this little desktop printer was so that she would be exposed to the possibilities these technologies represent at an early age. I also hoped to print some things that would be useful around the house. I thought it would be towel hooks or racks or brackets, but the first useful thing I printed was a little template for my wife so she could slice small pizzas evenly into thirds. A totally unexpected use case, but that's sort of the point of personal desktop fabrication. <br />
<br />
My overall experience with this printer has been great, but I have had some issues. The print bed is heated, which is required for printing ABS, and is helpful for PLA, but the build volume is not enclosed. I have noticed that if I don't protect the printer from drafts I can get warping and poor layer adhesion. The thing that I've found works pretty well is setting up two <a href="https://amzn.to/2MmkYcK">tri-fold foam boards</a> around the printer to create a bit of a tortuous path for air circulation. It's not a full enclosure, but it's a simple expedient that's enough to help improve print quality and repeatability. The other problem I have had to deal with is nozzle clogs. I have found that you can use a small <a href="https://amzn.to/2B2LOC2">guitar string</a> to ream out the nozzle from time to time to clear these clogs out. <br />
<br />
For bed adhesion with PLA I recommend using plenty of raft, and I like the "Touching Buildplate" support option. There are also lots of infill patterns to experiment with in Cura. I have been primarily using the Octet infill pattern with good results. <br />
<br />
Video of the first print: <br />
<iframe allow="autoplay; encrypted-media" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/32mmvqcro4Q" width="560"></iframe><br />
<br />
Close up of the first print in progress: <br />
<iframe allow="autoplay; encrypted-media" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/5PxpDtuJxow" width="560"></iframe><br />
<br />
Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-21311208951343864982018-02-07T06:52:00.002-05:002018-02-07T06:52:57.504-05:00<iframe width="560" height="315" src="https://www.youtube.com/embed/2mCGbguCw2U" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe><br />
Some interesting aerodynamics & control details on the re-design required for the Falcon Heavy at 15:20 or so. Great launch! Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-63180352870337020442017-12-17T07:18:00.003-05:002017-12-17T07:18:57.665-05:00Topology Optimization with ToPy: Pure Bending<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-gc7ULJqLkWw/WjVBKAKOf4I/AAAAAAAAC7U/58r5RkPKHrcx_BgVaSxYyAldvQysm4wfACLcBGAs/s1600/path945.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="748" data-original-width="1080" height="443" src="https://2.bp.blogspot.com/-gc7ULJqLkWw/WjVBKAKOf4I/AAAAAAAAC7U/58r5RkPKHrcx_BgVaSxYyAldvQysm4wfACLcBGAs/s640/path945.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">From <a href="http://naca.central.cranfield.ac.uk/reports/arc/rm/3303.pdf">The Design of Michell Optimal Structures</a></td></tr>
</tbody></table>Here is an interesting paper from 1962 on the design of optimal structures: <a href="http://naca.central.cranfield.ac.uk/reports/arc/rm/3303.pdf">The Design of Michell Optimal Structures</a>. One of the examples is for pure bending as shown in the figure above. I thought this would be a neat load-case to try in ToPy.<br />
<br />
<a name='more'></a><br />
The problem definition is in <a href="https://gist.github.com/jstults/4067f6d8d9050fe8f633966e40966f5c">this gist</a>, and the final optimized topology for a 0.2 volume fraction is shown below. I also put a video up on youtube of the <a href="https://youtu.be/UXkHr4I8DlY">optimization iterations</a> so you can see the progress as ToPy optimizes the structure.<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-cUumf2nU5os/WjZgp4ov5-I/AAAAAAAAC7k/iIoH1Fln-vgnCqUKdZ6xwaclOQG4EO_XwCLcBGAs/s1600/frame_334.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-cUumf2nU5os/WjZgp4ov5-I/AAAAAAAAC7k/iIoH1Fln-vgnCqUKdZ6xwaclOQG4EO_XwCLcBGAs/s640/frame_334.png" width="640" height="360" data-original-width="1280" data-original-height="720" /></a></div><iframe width="560" height="315" src="https://www.youtube.com/embed/UXkHr4I8DlY" frameborder="0" gesture="media" allow="encrypted-media" allowfullscreen></iframe>Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-55253335872444070972017-11-29T06:47:00.000-05:002017-11-29T06:47:27.053-05:00Topology Optimization for Coupled Thermo-Fluidic ProblemsInteresting video of a talk by Ole Sigmund on optimizing topology for fluid mixing or heat transfer. <br />
<iframe width="560" height="315" src="https://www.youtube.com/embed/HH9RBQVzSZg" frameborder="0" allowfullscreen></iframe>Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-33992963463418959872017-11-26T13:00:00.000-05:002017-12-02T16:37:20.487-05:00Installing ToPy in Fedora 26<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZI_-fD2SUXieCBSAvVouFd14XnaR7V_d3y10fScJvu-LvY4EMjWXFUtVosjgNol_zfS0oeOgzUrkT2B0DQ54VSwvldIROVB9Uf6DhKuvpbhih_f9S8ouBriAjIRuVk11ljMO3OSH4LCw/s1600/beam_2d_reci_gsf_fine-0064.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="200" data-original-width="600" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZI_-fD2SUXieCBSAvVouFd14XnaR7V_d3y10fScJvu-LvY4EMjWXFUtVosjgNol_zfS0oeOgzUrkT2B0DQ54VSwvldIROVB9Uf6DhKuvpbhih_f9S8ouBriAjIRuVk11ljMO3OSH4LCw/s1600/beam_2d_reci_gsf_fine-0064.png" /></a></div>This post summarizes the steps to install <a href="https://github.com/williamhunter/topy">ToPy</a> in Fedora 26. <br />
<a name='more'></a><br />
<h3>Dependencies</h3>Get the latest <a href="http://pysparse.sourceforge.net/">pysparse</a> off the sourceforge git repo: <br />
<blockquote>$ git clone git://pysparse.git.sourceforge.net/gitroot/pysparse/pysparse </blockquote><br />
The compile failed for me straight off the repo. This tiny change to an fprintf fixed it for me: <br />
<code><br />
$ git diff<br />
diff --git a/pysparse/direct/superlu/src/util.c b/pysparse/direct/superlu/src/util.c<br />
index 6647ec2..7864cbb 100644<br />
--- a/pysparse/direct/superlu/src/util.c<br />
+++ b/pysparse/direct/superlu/src/util.c<br />
@@ -29,7 +29,7 @@ SuperLUStat_t SuperLUStat;<br />
<br />
void superlu_abort_and_exit(char* msg)<br />
{<br />
<span style="color: red;">- fprintf(stderr, msg);</span> <br />
<span style="color: green;">+ fprintf(stderr, "%s", msg);</span> <br />
exit (-1);<br />
}<br />
</code><br />
With that fix you should be good to go with a <code>python setup.py install</code> for pysparse. Get the latest PyVTK: <br />
<blockquote>git clone https://github.com/pearu/pyvtk.git</blockquote>This should then install with no problems: <code>python setup.py install</code>. <br />
<br />
The other dependencies you can install from the Fedora repository if you don't already have them: <br />
<blockquote>$ dnf install numpy python2-matplotlib sympy </blockquote><br />
<h3>Install</h3>Here are the simple steps from the read-me: <br />
<blockquote>Once you've downloaded the dependencies (see the INSTALL file) all you need to do is the following:<br />
<br />
$ git clone https://github.com/williamhunter/topy.git<br />
$ cd topy/topy<br />
$ python setup.py install<br />
<br />
Alternatively, you can download the latest stable release, but it usually lags a little behind the Master branch (as can be expected).</blockquote><br />
<h3>Try an Example</h3>Check out the install by running one of the examples that ships with ToPy. From the directory where you cloned the repo, <br />
<blockquote>$ cd examples/mbb_beam/<br />
$ python optimise.py beam_2d_reci_gsf_fine.tpd <br />
</blockquote><br />
The first time you run ToPy it will do some symbolic algebra to generate the elements for the FEA, then it will start iterating through the problem defined in <code>beam_2d_reci_gsf_fine.tpd</code> (tpd stands for ToPy Problem Definition, a text file describing the problem and solver parameters). After many iterations the solution should look something like the picture at the top of this post. <br />
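For reference, a <code>.tpd</code> file is just a short key/value text file. The sketch below shows the general shape of a 2D compliance problem; the key names are recalled from the examples that ship with ToPy and the values are arbitrary, so check the repo's <code>examples/</code> directory rather than copying this verbatim.

```
[ToPy Problem Definition File v2007]
PROB_TYPE  : comp
PROB_NAME  : mbb_beam_sketch
ETA        : 0.5
DOF_PN     : 2
VOL_FRAC   : 0.5
FILT_RAD   : 1.5
P_FAC      : 3
ELEM_K     : Q4
NUM_ELEM_X : 60
NUM_ELEM_Y : 20
NUM_ELEM_Z : 0
NUM_ITER   : 50
FXTR_NODE_X: 1|21
FXTR_NODE_Y: 1281
LOAD_NODE_Y: 1
LOAD_VALU_Y: -1
```

Roughly: <code>PROB_TYPE</code> selects the problem class (e.g. compliance), the <code>NUM_ELEM_*</code> keys set the mesh, <code>VOL_FRAC</code> the target volume fraction, <code>P_FAC</code> the SIMP penalty factor, and the node keys pin supports and apply loads.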
<br />
I also prefer to make PNGs of the output with the Image module using the <code>fromarray</code> function. I convert the ToPy design variables from floats to integers using <a href="https://www.scipy.org/">Scipy's</a> <code>uint8</code> function. This will produce nicer pixel-for-pixel output of your topology optimization problem rather than mangling it through matplotlib. Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com2tag:blogger.com,1999:blog-5822805028291837738.post-86853734613563429702017-11-20T00:00:00.000-05:002017-11-20T06:08:33.267-05:00Machine Learning for CFD Turbulence Closures<div class="separator" style="clear: both; text-align: center;"></div><a href="https://3.bp.blogspot.com/-DIErlKpRxcY/WgmBs3mmAGI/AAAAAAAAC5Y/yKn0e8Zsvr4Lr-ZgbleozDrHMJhvO5k6ACLcBGAs/s1600/developing-compressible-flat-plate-boundary-layer.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1600" data-original-width="124" height="640" src="https://3.bp.blogspot.com/-DIErlKpRxcY/WgmBs3mmAGI/AAAAAAAAC5Y/yKn0e8Zsvr4Lr-ZgbleozDrHMJhvO5k6ACLcBGAs/s640/developing-compressible-flat-plate-boundary-layer.png" width="48" /></a>I wrote a couple previous posts on some interesting work using <a href="http://www.variousconsequences.com/2017/11/deep-learning-to-accelerate-topology-optimization.html">deep learning to accelerate topology optimization</a>, and a couple <a href="http://www.variousconsequences.com/2017/11/deep-learning-to-accelerate-computational-fluid-dynamics.html">neural network methods for accelerating computational fluid dynamics</a> (with <a href="http://">source</a>). This post is about a use of machine learning in computational fluid dynamics (CFD) with a slightly different goal: to improve the quality of solutions. Rather than a focus on getting to solutions more <i>quickly</i>, this post covers work focused on getting <i>better</i> solutions. 
A better solution is one that has more predictive capability. There is usually a trade-off between predictive capability and how long it takes to get a solution. The most well-known area for improvement in the predictive capability of state-of-the-practice, industrial CFD is in our turbulence and transition modeling. There is a proliferation of approaches to tackling that problem, but the overall strategy that seems to be paying off is for CFD'ers to follow the enormous investment being made by the large tech companies in techniques, open source libraries, and services for machine learning. How can those free / low-cost tools and techniques be applied to our problems? <br />
<br />
The authors of <a href="https://www.osti.gov/scitech/servlets/purl/1367203">Machine Learning Models of Errors in Large Eddy Simulation Predictions of Surface Pressure Fluctuations</a> used machine learning techniques to model the error in their LES solutions. See an illustration of the instantaneous density gradient magnitude of the developing boundary layer from that paper shown to the right. Here's the abstract,<br />
<blockquote>We investigate a novel application of deep neural networks to modeling of errors in prediction of surface pressure fluctuations beneath a compressible, turbulent flow. In this context, the truth solution is given by Direct Numerical Simulation (DNS) data, while the predictive model is a wall-modeled Large Eddy Simulation (LES). The neural network provides a means to map relevant statistical flow-features within the LES solution to errors in prediction of wall pressure spectra. We simulate a number of flat plate turbulent boundary layers using both DNS and wall-modeled LES to build up a database with which to train the neural network. We then apply machine learning techniques to develop an optimized neural network model for the error in terms of relevant flow features.</blockquote><a name='more'></a><br />
<br />
They implemented three types of neural networks<br />
<ul><li>Multi-layer perceptron (MLP) with 159 input nodes, 3 fully-connected hidden layers, and an output layer of 159 nodes</li>
<li>Convolutional neural network (CNN) with 159 input nodes, a one-dimensional convolutional layer, then a max-pool layer, and then output layers</li>
<li>Convolutional-deconvolutional neural network (CDNN) with convolutional, max-pool, and fully-connected layers, and then finally output layers</li>
</ul>They also tried different types of training and hold-out approaches for their small database of supersonic boundary layers. The data set for training is small since direct numerical simulation (DNS) is so compute intensive. <br />
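To make the first architecture concrete, here is a minimal NumPy sketch of an MLP with the 159-node input and output layers described above. The hidden-layer width, the ReLU activation, and the random weights are all stand-ins for illustration; this does not reproduce the paper's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """He-style random initialization for one fully-connected layer."""
    return rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out)), np.zeros(n_out)

# 159 input nodes -> 3 fully-connected hidden layers -> 159 output nodes,
# matching the MLP variant described above; the hidden width (128) is an
# arbitrary choice for illustration.
sizes = [159, 128, 128, 128, 159]
params = [layer(a, b) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:          # ReLU on hidden layers only
            x = np.maximum(x, 0.0)
    return x                             # predicted error in wall-pressure spectra

features = rng.normal(size=(4, 159))     # a batch of LES flow-feature vectors
prediction = forward(features)
print(prediction.shape)                  # (4, 159)
```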
<br />
<a href="https://doi.org/10.1016/j.jcp.2015.11.012">A paradigm for data-driven predictive modeling using field inversion and machine learning</a> illustrates a related approach to learning turbulence model-form error as opposed to learning optimal model parameter values. Often in practice the nominal values suggested in the literature for a specific turbulence model are used with no learning (optimization, calibration) for a specific flow. Here's the abstract <br />
<blockquote>We propose a modeling paradigm, termed field inversion and machine learning (FIML), that seeks to comprehensively harness data from sources such as high-fidelity simulations and experiments to aid the creation of improved closure models for computational physics applications. In contrast to inferring model parameters, this work uses inverse modeling to obtain corrective, spatially distributed functional terms, offering a route to directly address model-form errors. Once the inference has been performed over a number of problems that are representative of the deficient physics in the closure model, machine learning techniques are used to reconstruct the model corrections in terms of variables that appear in the closure model. These reconstructed functional forms are then used to augment the closure model in a predictive computational setting. As a first demonstrative example, a scalar ordinary differential equation is considered, wherein the model equation has missing and deficient terms. Following this, the methodology is extended to the prediction of turbulent channel flow. In both of these applications, the approach is demonstrated to be able to successfully reconstruct functional corrections and yield accurate predictive solutions while providing a measure of model form uncertainties.</blockquote><br />
In much the same spirit as that paper, here's a set of slides on <a href="http://www.nianet.org/wp-content/uploads/2016/06/Xiao_NASA-2016-08-17.pdf">Physics-Informed Machine Learning for Predictive Turbulence Modeling</a>. The author suggests an important difference in approach for applying machine learning to computational physics, "<b>Assist but respect models</b>: Machine learning should be used to correct/improve existing models, not to replace them. Thus, we learn the <b>model discrepancy</b>, not the model output directly." [emphasis in original, slide 12] <br />
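The "learn the model discrepancy, not the model output" idea can be shown with a toy one-dimensional example. Everything here (the baseline model, the truth data, and the polynomial standing in for the machine learning step) is made up purely for illustration:

```python
import numpy as np

# Toy setup: a cheap baseline "model" and an expensive "truth" it misses.
x = np.linspace(0.0, 1.0, 200)
baseline = x**2                        # stands in for a RANS-style closure
truth = x**2 + 0.3 * np.sin(3 * x)     # stands in for DNS/experimental data

# Learn the discrepancy (truth - baseline), NOT the truth itself;
# here a simple polynomial fit plays the role of the learning algorithm.
discrepancy = truth - baseline
coeffs = np.polyfit(x, discrepancy, deg=5)

# Predictive use: baseline model + learned correction.
corrected = baseline + np.polyval(coeffs, x)

err_baseline = np.abs(truth - baseline).max()
err_corrected = np.abs(truth - corrected).max()
```

The baseline model still does the heavy lifting; the learned term only has to represent what the model gets wrong, which is the point of "assist but respect models."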
<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-W6IrvJ8li_Q/WgwkHmoQ_3I/AAAAAAAAC5s/lDZfJdUzH4MrEa0FB9X4RS4OCFKuMlvZQCLcBGAs/s1600/Screenshot%2Bfrom%2B2017-11-15%2B06-24-52.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="994" data-original-width="1370" height="464" src="https://4.bp.blogspot.com/-W6IrvJ8li_Q/WgwkHmoQ_3I/AAAAAAAAC5s/lDZfJdUzH4MrEa0FB9X4RS4OCFKuMlvZQCLcBGAs/s640/Screenshot%2Bfrom%2B2017-11-15%2B06-24-52.png" width="640" /></a></div>I really like this approach because it allows us to benefit from all of the sunk cost in developing the Reynolds Averaged Navier Stokes (RANS) codes engineers use in anger today. We can add these machine learning approaches as "wrappers" around the existing tool-set to improve our predictive capability. A couple critical questions the author presents for applying these approaches,<br />
<ul><li>Where does the training data come from?</li>
<li>What are the quantities to learn (responses, targets, dependent variables)? Are they universal, at least to some extent?</li>
<li>What are the features (predictors, independent variables)?</li>
<li>What learning algorithm should be used?</li>
</ul>The author presents answers addressing these questions for a RANS example. <br />
<br />
<a href="https://nari.arc.nasa.gov/sites/default/files/DuraisamyLEARNSlides.pdf">Another set of slides</a> based on a Phase I <a href="https://nari.arc.nasa.gov/learn">NASA LEARN Project</a> illustrates an approach to learning turbulence models. They give a proof of concept using a supervised learning approach where the features are the terms in a Spalart-‐Allmaras turbulence model. <br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-9WLbILuZ7FE/WhAgw-qEcNI/AAAAAAAAC6A/XJ9r20Xdy0Ia1-sy3druL1L4UHnXVcKlACLcBGAs/s1600/Screenshot%2Bfrom%2B2017-11-18%2B06-57-46.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="791" data-original-width="1063" height="476" src="https://4.bp.blogspot.com/-9WLbILuZ7FE/WhAgw-qEcNI/AAAAAAAAC6A/XJ9r20Xdy0Ia1-sy3druL1L4UHnXVcKlACLcBGAs/s640/Screenshot%2Bfrom%2B2017-11-18%2B06-57-46.png" width="640" /></a></div>The slides cover some tweaks the authors made to the loss function to make it more applicable to CFD. The slides also summarize <a href="https://web.stanford.edu/group/ctr/Summer/SP14/09_Large-eddy_simulation/07_duraisamy.pdf">a paper on transition modeling</a> where they fit an indeterminacy field with a couple machine learning techniques: Gausian processes and neural networks. <a href="http://highorder.berkeley.edu/proceedings/aiaa-cfd-2017/6.2017-3626">Here's a paper</a> that applies this data-driven, or data augmented, approach to a two-equation RANS model. <br />
<br />
This data-driven machine learning approach to improving CFD has plenty of good ideas left to pursue. You could learn parameters of existing models, learn entirely new models, learn how to blend different models, build specific data sets to learn solutions in peculiar application areas, and probably a ton more I haven't even thought of yet. All of these results and different approaches to improving our CFD predictions are exciting, but there's always a catch. So you want to use machine learning to solve your problem? Now you have another problem: how are you going to get enough data? I like the vision for the future at the end of <a href="https://nari.arc.nasa.gov/sites/default/files/DuraisamyLEARNSlides.pdf">those LEARN Project slides</a>: "A continuously augmented curated database/website of high-fidelity CFD solutions (and experimental data!) that are input to the machine learning process." Something like that would benefit the good ideas we have now, and the ones we haven't got to yet. <br />
<br />
This is a pretty neat area of research. Please drop a comment with links to more! Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com1tag:blogger.com,1999:blog-5822805028291837738.post-64096401310493458342017-11-13T06:30:00.000-05:002017-11-13T06:30:01.169-05:00Deep Learning to Accelerate Computational Fluid Dynamics<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-3FiXGsT_bPI/WgXW15jozJI/AAAAAAAAC48/hSgh32WdxN0WbRTOkdh1lisKlFroLJDPACLcBGAs/s1600/fig_1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="622" data-original-width="1033" height="384" src="https://3.bp.blogspot.com/-3FiXGsT_bPI/WgXW15jozJI/AAAAAAAAC48/hSgh32WdxN0WbRTOkdh1lisKlFroLJDPACLcBGAs/s640/fig_1.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><a href="https://arxiv.org/abs/1705.09036">Lat-Net</a>: Compressing Lattice Boltzmann Flow Simulations using Deep Neural Networks</td></tr>
</tbody></table>I posted about a surprising application of <a href="http://www.variousconsequences.com/2017/11/deep-learning-to-accelerate-topology-optimization.html">deep learning to accelerate topology optimization</a>. The thing I like about that approach is that it's a strategy that could be applied to accelerate many different solvers that we use to simulate all sorts of continuum mechanics based on partial differential equations (e.g. computational fluid dynamics, structural mechanics, electrodynamics, etc.). With a bit of help from Google I found <a href="https://arxiv.org/abs/1705.09036">a neat paper</a> and <a href="https://github.com/loliverhennigh/Phy-Net">project on github</a> doing exactly that for a Lattice-Boltzmann fluid solver. <br />
<a name='more'></a><br />
Here's the abstract,<br />
<blockquote>Computational Fluid Dynamics (CFD) is a hugely important subject with applications in almost every engineering field, however, fluid simulations are extremely computationally and memory demanding. Towards this end, we present Lat-Net, a method for compressing both the computation time and memory usage of Lattice Boltzmann flow simulations using deep neural networks. Lat-Net employs convolutional autoencoders and residual connections in a fully differentiable scheme to compress the state size of a simulation and learn the dynamics on this compressed form. The result is a computationally and memory efficient neural network that can be iterated and queried to reproduce a fluid simulation. We show that once Lat-Net is trained, it can generalize to large grid sizes and complex geometries while maintaining accuracy. We also show that Lat-Net is a general method for compressing other Lattice Boltzmann based simulations such as Electromagnetism.</blockquote><br />
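The compress-then-iterate idea from the abstract can be shown shape-wise in a few lines. The random linear maps below stand in for Lat-Net's trained encoder, latent dynamics, and decoder networks, and all of the sizes are arbitrary; only the data flow is the point.

```python
import numpy as np

rng = np.random.default_rng(2)

state_dim, latent_dim = 4096, 256     # e.g. a flattened 64x64 flow field

# Random linear maps standing in for the trained networks.
encode = rng.normal(0, 0.01, (state_dim, latent_dim))   # compression
dynamics = np.eye(latent_dim) * 0.99                    # learned time step
decode = rng.normal(0, 0.01, (latent_dim, state_dim))   # reconstruction

flow = rng.normal(size=state_dim)     # initial simulation state

# Encode once, then iterate cheaply in the compressed representation,
# decoding only when a full flow field is needed.
z = flow @ encode
for _ in range(10):
    z = z @ dynamics                  # 256-dim updates instead of 4096-dim
reconstruction = z @ decode
```

The memory and compute savings come from stepping the small latent state instead of the full simulation state, at the cost of reconstruction error.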
This is pretty cool. Other similar efforts, <br />
<ul><li><a href="https://autodeskresearch.com/publications/convolutional-neural-networks-steady-flow-approximation">Convolutional Neural Networks for Steady Flow Approximation</a></li>
<li><a href="https://arxiv.org/abs/1607.03597">Accelerating Eulerian Fluid Simulation With Convolutional Networks</a></li>
</ul>have shown orders of magnitude reduction in run-time for the price of an approximation error and the upfront cost of training the network. There's a <a href="https://www.reddit.com/r/CFD/comments/5n91uz/cfd_machine_learning_for_super_fast_simulations/">short discussion on Reddit</a> about this as well. <br />
<br />
Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-39475598451581994492017-11-10T07:11:00.001-05:002017-11-10T11:53:51.141-05:00Deep Learning to Accelerate Topology Optimization<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-0AudlK3o0ZM/WgWQLoAyM4I/AAAAAAAAC4U/rersemOWM7sRdThBNS4poz3NUPSVQjLkwCLcBGAs/s1600/Screenshot%2Bfrom%2B2017-11-10%2B06-31-05.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="762" data-original-width="762" height="400" src="https://3.bp.blogspot.com/-0AudlK3o0ZM/WgWQLoAyM4I/AAAAAAAAC4U/rersemOWM7sRdThBNS4poz3NUPSVQjLkwCLcBGAs/s400/Screenshot%2Bfrom%2B2017-11-10%2B06-31-05.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Topology Optimization Data Set for CNN Training</td></tr>
</tbody></table><a href="https://arxiv.org/abs/1709.09578">Neural networks for topology optimization</a> is an interesting paper I read on arXiv that illustrates how to speed up the <a href="https://en.wikipedia.org/wiki/Topology_optimization">topology optimization</a> calculations by using a deep convolutional neural network. The data sets for training the network are generated in <a href="https://github.com/williamhunter/topy">ToPy</a>, which is an <a href="http://www.variousconsequences.com/2012/12/open-source-topology-optimization-for.html">Open Source topology optimization tool</a>. <br />
<a name='more'></a><br />
The approach the authors take is to run ToPy for some number of iterations to generate a partially converged solution, and then use this partially converged solution and its gradient as the input to the CNN. The CNN is trained on a data set generated from randomly generated ToPy problem definitions that are run to convergence. Here's their abstract,<br />
<blockquote>In this research, we propose a deep learning based approach for speeding up the topology optimization methods. The problem we seek to solve is the layout problem. The main novelty of this work is to state the problem as an image segmentation task. We leverage the power of deep learning methods as the efficient pixel-wise image labeling technique to perform the topology optimization. We introduce convolutional encoder-decoder architecture and the overall approach of solving the above-described problem with high performance. The conducted experiments demonstrate the significant acceleration of the optimization process. The proposed approach has excellent generalization properties. We demonstrate the ability of the application of the proposed model to other problems. The successful results, as well as the drawbacks of the current method, are discussed.</blockquote><br />
The deep learning network architecture from the paper is shown below. Each kernel is 3x3 pixels, and the illustration shows how many kernels are in each layer. <br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-ZzGOdXeijdE/WgWTmG1AYfI/AAAAAAAAC4g/1Kkpj3NmRZowFEAfq3jhDZgWDcNkRfGPACLcBGAs/s1600/Screenshot%2Bfrom%2B2017-11-10%2B06-54-44.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="250" data-original-width="783" height="203" src="https://4.bp.blogspot.com/-ZzGOdXeijdE/WgWTmG1AYfI/AAAAAAAAC4g/1Kkpj3NmRZowFEAfq3jhDZgWDcNkRfGPACLcBGAs/s640/Screenshot%2Bfrom%2B2017-11-10%2B06-54-44.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Architecture (Figure 3) from Neural Networks for Topology Optimization</td></tr>
</tbody></table><br />
The data set that the authors used to train the deep learning network contained 10,000 randomly generated (with certain constraints, <a href="https://arxiv.org/pdf/1709.09578.pdf">see the paper</a>) example problems. Each of those 10k "objects" in the data set included 100 iterations of the ToPy solver, so they are 40x40x100 tensors (40x40 is the domain size). The authors claim a 20x speed-up in particular cases, but the paper is a little light on actually showing and explaining timing results. <br />
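Building training pairs out of those tensors is straightforward. Here's a minimal sketch, with random data standing in for actual ToPy output and a simple iteration difference standing in for the gradient channel the paper uses; only the shapes (100 iterations of a 40x40 domain) come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for one ToPy run: 100 saved iterations of a 40x40 density field.
iterations = rng.random((100, 40, 40))

def make_training_pair(iters, k):
    """Input: partially converged iteration k plus its last update
    (a cheap stand-in for the gradient channel); target: final iteration."""
    x = np.stack([iters[k], iters[k] - iters[k - 1]], axis=0)  # 2x40x40
    y = iters[-1]                                              # 40x40 target
    return x, y

# One partially-converged snapshot becomes one (input, target) pair.
x, y = make_training_pair(iterations, k=5)
print(x.shape, y.shape)
```

Since every run contributes many intermediate iterations, a single converged ToPy solve yields dozens of training pairs, which is how 10k runs become a usefully large training set.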
<br />
The problem for the network to learn is to predict the final iteration from some intermediate state. This seems like it could be a generally applicable approach to speeding up convergence of PDE solves in computational fluid dynamics (CFD) or computational structural mechanics / finite element analysis. I haven't seen this sort of approach to speeding up solvers before. Have you? Please leave a comment if you know of any work applying similar methods to CFD or FEA for speed-up. Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com2tag:blogger.com,1999:blog-5822805028291837738.post-76482880202438328692017-03-25T11:19:00.000-04:002017-03-25T11:25:09.615-04:00Innovation, Entropy and Exoplanets<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-XNrHNwvM1s0/WNaLtRRv5iI/AAAAAAAAC1c/ScWnNthwDBMsr8F9fD_SzuimxQQKicdTwCLcB/s1600/popular-entropy.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="190" src="https://2.bp.blogspot.com/-XNrHNwvM1s0/WNaLtRRv5iI/AAAAAAAAC1c/ScWnNthwDBMsr8F9fD_SzuimxQQKicdTwCLcB/s400/popular-entropy.png" width="400" /></a></div>
I enjoy <a href="http://www.shipulski.com/">Shipulski on Design</a> for the short articles on innovation. They are generally not technical at all. I like to think of most of the posts as <i>innovation poetry</i> to put your thoughts along the right lines of effort. <a href="http://www.shipulski.com/2017/03/22/maximize-the-learning-ratio/">This recent post</a> has a huge, interesting technical iceberg riding under the surface though. <br />
<blockquote>
If you run an experiment where you are 100% sure of the outcome, your learning is zero. You already knew how it would go, so there was no need to run the experiment. The least costly experiment is the one you didn’t have to run, so don’t run experiments when you know how they’ll turn out. If you run an experiment where you are 0% sure of the outcome, your learning is zero. These experiments are like buying a lottery ticket – you learn the number you chose didn’t win, but you learned nothing about how to choose next week’s number. You’re down a dollar, but no smarter.<br />
<br />
The learning ratio is maximized when energy is minimized (the simplest experiment is run) and probability the experimental results match your hypothesis (expectation) is 50%. In that way, half of the experiments confirm your hypothesis and the other half tell you why your hypothesis was off track.<br />
<a href="http://www.shipulski.com/2017/03/22/maximize-the-learning-ratio/">Maximize The Learning Ratio</a></blockquote>
<br />
<a name='more'></a><br />
In the very simple case of accepting or rejecting a hypothesis, Shipulski's advice amounts to <a href="https://scholar.google.com/scholar?q=maximum+entropy+sampling">maximum entropy sampling</a>. That is, testing where you have the greatest uncertainty between the two outcomes. <br />
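A two-line calculation makes the point: the expected information from a yes/no experiment is the binary entropy of its predicted outcome, which peaks at even odds.

```python
import numpy as np

p = np.linspace(0.01, 0.99, 99)
# Expected information (in bits) from a binary experiment whose outcome
# you predict with probability p -- the binary entropy function.
H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

print(round(p[np.argmax(H)], 6))   # the most informative experiment: p = 0.5
```

At 100% or 0% predicted probability the entropy is zero bits, exactly matching Shipulski's two degenerate cases; at 50/50 you stand to learn a full bit.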
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://www.giss.nasa.gov/meetings/cess2011/presentations/loredo.pdf" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="314" src="https://2.bp.blogspot.com/-NahK0zhrpxU/WNaDkvhiw0I/AAAAAAAAC1I/P7Xe53m1MtY7fMwUyUXZ98WEDTtLvCnKACLcB/s400/scientific-method.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Classical Scientific Method</td></tr>
</tbody></table>
<br />
This is an initial step towards <a href="https://en.wikipedia.org/wiki/Bayesian_experimental_design">Bayesian Experimental Design</a>. Repeatedly applying this method is what the <a href="https://exoplanets.nasa.gov/resources/1055/">folks who search for exoplanets</a> term "<a href="https://www.giss.nasa.gov/meetings/cess2011/presentations/loredo.pdf">Bayesian Adaptive Exploration</a>."<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://www.giss.nasa.gov/meetings/cess2011/presentations/loredo.pdf" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="280" src="https://2.bp.blogspot.com/-2sUoqoQmt9I/WNaDpK-tVoI/AAAAAAAAC1M/55rP03dQWUUr7lV1Ha7aDbKWUFJEOS1tACLcB/s400/bayesian-adaptive-exploration.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Modern Scientific Method</td></tr>
</tbody></table>
<br />
I think we intuitively do something like <a href="http://www.astro.cornell.edu/staff/loredo/bayes/bae.pdf">Bayesian Adaptive Exploration</a> when we're searching for knowledge, but this provides a mathematical and computational framework for getting really rigorous about our search. This is especially important for <a href="https://en.wikipedia.org/wiki/Wicked_problem">wicked problems</a> where our intuition can be unreliable. <br />
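To make the design-observe-update loop concrete, here's a toy version in numpy: an unknown integer threshold, a noiseless yes/no test, and a design rule that picks the test with the largest expected information gain. For a noiseless test the expected gain is just the entropy of the predicted outcome, and the strategy that falls out is bisection. Real exoplanet scheduling uses continuous orbit models and noisy likelihoods, so treat this purely as an illustration of the loop's structure.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Toy problem: an unknown integer threshold theta in 0..9; a test at
# level d deterministically reports whether theta <= d.
theta_true = 7
prior = np.full(10, 0.1)           # uniform prior over hypotheses

for step in range(4):
    # Design: expected information gain of each candidate test.  With a
    # noiseless test this is the entropy of the predicted outcome.
    gains = []
    for d in range(10):
        p_yes = prior[: d + 1].sum()
        gains.append(entropy(np.array([p_yes, 1 - p_yes])))
    d = int(np.argmax(gains))      # run the most informative test
    outcome = theta_true <= d      # observe
    # Inference: Bayes update zeroes out inconsistent hypotheses.
    like = np.array([(t <= d) == outcome for t in range(10)], float)
    prior = prior * like
    prior /= prior.sum()

print(int(np.argmax(prior)))       # posterior concentrates on theta_true
```

Four adaptive tests pin down one value out of ten, and the selected levels are exactly the bisection sequence, since each maximally informative test splits the remaining posterior mass as close to 50/50 as possible.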
<br />
<h3>
Further reading:</h3>
[1] Shewry, M.C., H.P. Wynn, <a href="http://dx.doi.org/10.1080/02664768700000020">Maximum entropy sampling</a>. Journal of Applied Statistics. 14 (1987), 2, 165--170. doi: 10.1080/02664768700000020. <br />
[2] Lindley, D. V. <a href="http://projecteuclid.org/download/pdf_1/euclid.aoms/1177728069">On a Measure of the Information Provided by an Experiment</a>. Ann. Math. Statist. 27 (1956), no. 4, 986--1005. doi:10.1214/aoms/1177728069. <a href="http://projecteuclid.org/euclid.aoms/1177728069">http://projecteuclid.org/euclid.aoms/1177728069</a>. <br />
[3] Chaloner, Kathryn; Verdinelli, Isabella (1995), "<a href="http://homepage.divms.uiowa.edu/~gwoodwor/AdvancedDesign/Chaloner%20Verdinelli.pdf">Bayesian experimental design: a review</a>" (PDF), Statistical Science, 10 (3): 273–304, <a href="https://dx.doi.org/10.1214%2Fss%2F1177009939">doi:10.1214/ss/1177009939</a><br />
[4] <a href="http://www.astro.cornell.edu/staff/loredo/">Loredo, T</a>. <a href="https://www.giss.nasa.gov/meetings/cess2011/presentations/loredo.pdf">Optimal Scheduling of Exoplanet Observations via Bayesian Adaptive Exploration</a>. GISS Workshop — 25 Feb 2011. <br />
[5] <a href="http://www.astro.cornell.edu/staff/loredo/">Loredo, T</a>. <a href="http://ada6.cosmostat.org/Presentations/ada10-exoplanets.pdf">Bayesian methods for exoplanet science: Planet detection, orbit estimation, and adaptive observing</a>. ADA VI — 6 May 2010. <br />
[6] <a href="https://en.wikipedia.org/wiki/Bayesian_experimental_design">Bayesian Experimental Design</a>, on the wikipedia. Maximize utility (expected information) of your next test point. <br />
[7] <a href="http://www.variousconsequences.com/2009/12/dueling-bayesians.html">Dueling Bayesians</a>. The link to the video is broken, but the subsection on Bayesian design of validation experiments is relevant. Your design strategy has to change if your experimental goal becomes validation rather than discovery. Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-7041605320606141522017-03-07T07:32:00.001-05:002017-03-07T07:32:11.077-05:00NASA Open Source Software 2017 Catalog<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiT-2APWPC1aNYyhqIFpHC-qQyFvbnTZi_mlh6QmDEBo5aVBeLldtMe37I3pkVqiT2FgLX2jjOedLGQXqUELBzrY6ylYsT_WCNDXAI3VsfBFyehX4lD0UpSg-5HZdBmOVveapeat7mhboQ/s1600/nasa-software.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="140" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiT-2APWPC1aNYyhqIFpHC-qQyFvbnTZi_mlh6QmDEBo5aVBeLldtMe37I3pkVqiT2FgLX2jjOedLGQXqUELBzrY6ylYsT_WCNDXAI3VsfBFyehX4lD0UpSg-5HZdBmOVveapeat7mhboQ/s400/nasa-software.png" width="400" /></a></div>
<br />
NASA has released its <a href="https://www.nasa.gov/press-release/nasa-releases-software-catalog-granting-the-public-free-access-to-technologies-for">2017-2018 Software Catalog</a> under their <a href="http://technology.nasa.gov/">Technology Transfer Program</a>. A <a href="https://technology.nasa.gov/NASA_Software_Catalog_2017-18.pdf">pdf version of the catalog</a> is available, or you can <a href="https://software.nasa.gov/">browse by category</a>. The <a href="https://code.nasa.gov/">NASA open code repository</a> is already on my list of <a href="http://www.variousconsequences.com/p/open-source-aeronautical-engineering.html">Open Source Aeronautical Engineering tools</a>. Of course many of the codes included in that list from <a href="http://www.pdas.com/">PDAS</a> are legacy NASA codes that were distributed on various media in the days before the internet. Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-6077917450003407212016-12-17T10:33:00.000-05:002016-12-17T10:37:03.788-05:00Hybrid Parallelism Approaches for CFD<div class="separator" style="clear: both; text-align: center;"><a href="http://conferences.computer.org/sc/2012/papers/1000a040.pdf" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-FF15tyJJKLE/WFVYMQJIMtI/AAAAAAAACwE/kk5sqXgk6iYG7nMTiNebtwGxJrUFfzBdgCLcB/s320/hybrid-cfd-2d-pressure-wave-ver.png" width="270" height="320" /></a></div>This previous post, <a href="http://www.variousconsequences.com/2016/11/plenty-of-room-at-exascale.html">Plenty of Room at Exascale</a>, focuses on one specific commercial approach to scaling CFD to large problems on heterogeneous hardware (CPU & GPU) clusters. Here's some more references I found interesting reading on this sort of approach. <br />
<br />
<h3>Strategies</h3><a href="https://arxiv.org/pdf/1309.3018v1.pdf">Recent progress and challenges in exploiting graphics processors in computational fluid dynamics</a> provides some general strategies for using multiple levels of parallelism across GPUs, CPU cores and cluster nodes based on that review of the literature: <br />
<ul><li>Global memory should be arranged to coalesce read/write requests, which can improve performance by an order of magnitude (theoretically, up to 32 times: the number of threads in a warp)</li>
<li>Shared memory should be used for global reduction operations (e.g., summing up residual values, finding maximum values) such that only one value per block needs to be returned<br />
</li>
<li>Use asynchronous memory transfer, as shown by <a href="http://www.idav.ucdavis.edu/func/return_pdf?pub_id=1040">Phillips et al.</a> and <a href="http://scholarworks.boisestate.edu/mecheng_facpubs/38/">DeLeon et al.</a> when parallelizing solvers across multiple GPUs, to limit the idle time of either the CPU or GPU.</li>
<li>Minimize slow CPU-GPU communication during a simulation by performing all possible calculations on the GPU.</li>
</ul><br />
<a name='more'></a><br />
<h3>Example Implementations</h3><br />
There are two example implementations on github that were used to illustrate the scaling with grid size for some simple 2D problems: <br />
<ul><li><a href="https://github.com/kyleniemeyer/laplace_gpu">Laplace solver running on GPU using CUDA, with CPU version for comparison </a></li>
<li><a href="https://github.com/kyleniemeyer/lid-driven-cavity_gpu">Solves lid-driven cavity problem using finite difference method on GPU, with equivalent CPU version for comparison. </a></li>
</ul><br />
One of the interesting references from <a href="https://arxiv.org/pdf/1309.3018v1.pdf">the paper</a> mentioned above is <a href="http://conferences.computer.org/sc/2012/papers/1000a040.pdf">Hybridizing S3D into an Exascale Application using OpenACC</a>. They take an approach to use a combination of <a href="http://www.openacc.org/">OpenACC</a> directives for GPU processing, <a href="http://www.openmp.org/">OpenMP</a> directives for multi-core processing, and <a href="https://www.open-mpi.org/">MPI</a> for multi-node processing. Their three-level hybrid approach performs better than any single approach alone, and by making some clever algorithm tweaks they are able to run the same code on a node without a GPU without too much performance hit. Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com1tag:blogger.com,1999:blog-5822805028291837738.post-14850271248054904152016-11-19T13:34:00.001-05:002016-11-19T13:34:23.480-05:00Plenty of Room at Exascale<div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-SSxW1Ga0j1c/WDCTZEVbALI/AAAAAAAACvU/ESk0T-ixV0Ip6C8vw8MpMgz0GL_yyqiRQCLcB/s1600/exascale-cover-crop-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="105" src="https://2.bp.blogspot.com/-SSxW1Ga0j1c/WDCTZEVbALI/AAAAAAAACvU/ESk0T-ixV0Ip6C8vw8MpMgz0GL_yyqiRQCLcB/s400/exascale-cover-crop-2.png" width="400" /></a></div><br />
The folks at <a href="http://envenio.ca/">Envenio</a> have posted an interesting marketing video on <a href="http://envenio.ca/cfdsuite/">their solver</a>. <br />
<br />
<iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/lUm1DglaFGA" width="560"></iframe><br />
<br />
<a name='more'></a><br />
It references several reports on future scaling of <a href="https://en.wikipedia.org/wiki/High-performance_computing">HPC</a> architectures and <a href="https://en.wikipedia.org/wiki/Computational_fluid_dynamics">CFD</a> software solutions towards "<a href="https://en.wikipedia.org/wiki/Exascale_computing">exascale</a>." At the risk of being identified as un-trendy, I'll admit I'm still excited about giga- and tera-scale applications, but I expect the excitement to continue out to 2030. Our decision makers, investors and product developers never seem to lack for want of certainty no matter how many cores we throw at problems. <br />
<br />
The Envenio solver uses some interesting approaches to balancing load across different types of hardware (i.e. CPU & GPU). They allow blocks (in their terminology "Cells", which are collections of control volumes and interfaces) to be single or double precision in the same calculation. This enables efficient use of GPUs. The solver is capable of doing off-line "auto-tuning" to support smart load-balancing choices for specific hardware configurations. They also do a time-domain decomposition using "coarse" and "fine" time integrators in a predictor-corrector style. They claim that using the GPUs gives them a 20x speed-up, and their unique time integration approach gives another 2x. <br />
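Envenio doesn't publish the details of their scheme, but the coarse/fine predictor-corrector idea is essentially the parareal algorithm (linked in the reference list below). Here's a minimal sketch on a scalar decay ODE, with cheap and expensive Euler integrators standing in for the coarse and fine propagators; all the step counts are arbitrary.

```python
import numpy as np

# Parareal-style predictor-corrector for dy/dt = -y on [0, 1].
N = 10                       # time slices (the fine solves parallelize)
dt = 1.0 / N

def coarse(y):               # cheap propagator: one Euler step per slice
    return y * (1 - dt)

def fine(y, m=100):          # "expensive" propagator: m small Euler steps
    for _ in range(m):
        y = y * (1 - dt / m)
    return y

U = np.empty(N + 1)
U[0] = 1.0
for n in range(N):           # serial coarse predictor pass
    U[n + 1] = coarse(U[n])

for k in range(5):           # parareal correction iterations
    F = np.array([fine(U[n]) for n in range(N)])     # parallel in practice
    G_old = np.array([coarse(U[n]) for n in range(N)])
    for n in range(N):       # fast serial correction sweep
        U[n + 1] = coarse(U[n]) + F[n] - G_old[n]

print(abs(U[-1] - np.exp(-1.0)))   # small error vs. the exact solution
```

The serial work per iteration is only the cheap coarse sweep; the expensive fine solves on each slice are independent, which is what exposes the extra concurrency in the time dimension.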
<br />
Clusters of heterogeneous commodity hardware make the software engineering challenge a lot more complex. Commercial solution providers are chipping away at the problem as we march towards exascale. As always, the biggest room is the room for improvement.<br />
<br />
Here's some links to the reports referenced in the video, or relevant background info: <br />
<ul><li><a href="https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20140003093.pdf">CFD Vision 2030: A Path to Revolutionary Computational Aerosciences</a></li>
<li><a href="http://science.energy.gov/~/media/ascr/pdf/research/am/docs/EMWGreport.pdf">Applied Mathematics Research for Exascale Computing</a></li>
<li><br />
<a href="http://science.energy.gov/~/media/ascr/ascac/pdf/reports/Exascale_subcommittee_report.pdf">Opportunities and Challenges of Exascale Computing</a><br />
<div class="separator" style="clear: both; text-align: center;"><a href="http://science.energy.gov/~/media/ascr/ascac/pdf/reports/Exascale_subcommittee_report.pdf" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt=" Opportunities and Challenges of Exascale Computing" border="0" height="184" src="https://2.bp.blogspot.com/-IowyEBIDEhw/WDBrWkfR1lI/AAAAAAAACvE/Sjop7WKpDVov7W2RSUyFgtenwfe1aoMiQCLcB/s200/opportunities_challenges_exascale_cover_cropped.png" width="200" /></a></div></li>
<li><a href="http://www.lanl.gov/conferences/salishan/salishan2014/Mavriplis.pdf">Exascale Opportunities for Aerospace Engineering</a>: a presentation by <a href="http://www.uwyo.edu/mechanical/faculty-staff/dimitri-mavriplis/">Dimitri Mavriplis</a></li>
<li><a href="https://arxiv.org/pdf/1309.3018v1.pdf">Recent progress and challenges in exploiting graphics processors in computational fluid dynamics</a></li>
<li><a href="https://en.wikipedia.org/wiki/Parareal">Parareal algorithm</a></li>
</ul><br />
<br />
Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0tag:blogger.com,1999:blog-5822805028291837738.post-75309869531729668352016-08-03T07:26:00.000-04:002016-08-03T07:26:23.103-04:00Hypersonics Basic and Applied Lectures<iframe width="640" height="360" src="https://www.youtube.com/embed/videoseries?list=PLndEULTswG5aTuZ4gGTCZ-PWx3JUCFjyk" frameborder="0" allowfullscreen></iframe>Joshua Stultshttp://www.blogger.com/profile/03506970399027046387noreply@blogger.com0