Saturday, June 30, 2012

3D Printed Isogrid and Octet Truss

I've had some pretty good luck lately generating simple parts for printing on Shapeways. I've settled into a good workflow using Blender's mesh modifier operations, plus some additional post-processing in MeshLab. The feature sizes on these example parts follow the guidelines for minimum wall thickness and aspect ratio.

Isogrid Cylinder
I used the Array modifier to replicate an isogrid pattern in 2-D, then used the Curve deformation modifier to wrap it into a cylinder. No post-processing in MeshLab on this one.
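The wrap that the Curve deformation modifier performs amounts to treating the flat pattern's x coordinate as arc length around the circumference and y as the cylinder's axial coordinate. A minimal sketch of that mapping in plain Python (not Blender's bpy API; function and variable names are just illustrative):

```python
import math

def wrap_to_cylinder(points_2d, radius):
    """Map flat (x, y) pattern points onto a cylinder: x becomes arc
    length around the circumference, y stays as the axial coordinate."""
    wrapped = []
    for x, y in points_2d:
        theta = x / radius  # arc length -> angle in radians
        wrapped.append((radius * math.cos(theta),
                        radius * math.sin(theta),
                        y))
    return wrapped

# A flat pattern spanning one full circumference (2*pi*r) closes on itself,
# which is how the seam of the wrapped isogrid lines up.
r = 10.0
flat = [(0.0, 0.0), (2 * math.pi * r, 0.0)]
p0, p1 = wrap_to_cylinder(flat, r)
```

The two endpoints land on (nearly) the same point in 3-D, so an array sized to an exact multiple of the circumference gives a clean seam.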

Octet Truss Cylinder
For the octet truss parts I generated 3-D arrays of the truss unit cell and then either wrapped them with the Curve deformation modifier or unioned them with flat panels.
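For anyone who wants to generate the geometry directly: the octet truss unit cell can be described as the face-centered-cubic points of a cube (8 corners plus 6 face centers) with struts joining nearest neighbors at distance a/sqrt(2). A hedged sketch (names are my own, not from any particular library):

```python
import itertools
import math

def octet_unit_cell(a=1.0):
    """Nodes and struts of one octet-truss unit cell: the FCC points
    of a cube of side a, with struts between nearest neighbors."""
    corners = [(x, y, z) for x in (0, a) for y in (0, a) for z in (0, a)]
    h = a / 2
    face_centers = [(h, h, 0), (h, h, a), (h, 0, h),
                    (h, a, h), (0, h, h), (a, h, h)]
    nodes = corners + face_centers
    strut_len = a / math.sqrt(2)  # nearest-neighbor distance in FCC
    struts = [(i, j)
              for i, j in itertools.combinations(range(len(nodes)), 2)
              if abs(math.dist(nodes[i], nodes[j]) - strut_len) < 1e-9]
    return nodes, struts

nodes, struts = octet_unit_cell()
```

Each corner touches its 3 adjacent face centers and each face center touches its 4 neighboring face centers, so a single cell has 14 nodes and 36 struts; sweep the strut segments with a cylinder or I-beam profile to get printable spars.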

Octet Truss with Panels
For this part with panels, I used MeshLab's Ball Pivoting surface reconstruction algorithm to get nice fillets between the spars of the truss and the surface of the panel on each side. This process also adds a little extra "meat" at the joints of the space frame. A few iterations of Gaussian smoothing do away with any ugly faceting from the large triangular faces that the Ball Pivoting algorithm might add.

Update: I tried out the cloud.netfabb.com STL fixing service. It seems to work automagically. They also have a basic version of their software that is free (but not Free).

Update: Some nice isogrid parts, FDM ABS by Fabbr.
Close-up showing the I-beam cross-section.


Saturday, June 23, 2012

Notre Dame V&V Workshop Notes

Last October I had the opportunity to attend a V&V workshop at Notre Dame. In a previous post I said I'd put up my notes once the slides were available. This post contains my notes from the workshop. Most of the presenters' slides are available through links in the program.

There are a couple highlights from the workshop that I'll mention before dumping the chronological notes.

James Kamm gave a really great presentation on a variety of exact solutions for the 1-D Euler equations. He covered the well-known shock tube solutions that you'd find in a good text on Riemann solvers. Plus a whole lot more. Thomas Zang presented work on a NASA standard for verification and validation that grew out of the fatal Columbia mishap. The focus is not so much prescribing what a user of modeling and simulation will do to accomplish V&V, but requiring that what is done is clearly documented. If nothing is done, then the documentation just requires a clear statement that nothing was done for that aspect of verification, validation, or uncertainty quantification. I like this approach because it's impossible for a standards writer to know every problem well enough to prescribe the right approach, but requiring someone to come out and put in writing "nothing was done" often means they'll go do at least something that's appropriate for their particular problem.

I think that in the area of validation I'm philosophically closest to Bob Moser, who seems to be a good Bayesian (slides here). Bill Oberkampf (who, along with Chris Roy, recently wrote a V&V book) did some pretty unconvincing hand-waving to avoid biting the bullet and taking a Bayesian approach to validation, which he (and plenty of other folks at the workshop) views as too subjective. I had a more recent chance to talk with Chris Roy about their proposed area validation metric (which is in some ASME standards), and the ad-hoc, subjective nature of the multiplier for their distribution location shifts seems a lot more treacherous to me than specifying a prior. The fact that they use frequentist distributional arguments to justify a non-distributional fudge factor (which changes based on how the analyst feels about the consequences of the decision; sometimes it's 2, but for really important decisions maybe you should use 3) doesn't help them make the case that they are successfully avoiding "too much subjectivity". Of course, subjectivity is unavoidable in decision making. There are two options. The subjective parts of decision support can be explicitly addressed in a coherent fashion, or they can be pretended away by an expanding multitude of ad-hoceries.
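For reference, as I understand it the core of the area validation metric is just the area between the empirical CDFs of the model output and the experimental data; the safety-factor multiplier I'm complaining about gets applied on top of that. A minimal sketch of the core quantity (my own simplified formulation, not the full ASME procedure):

```python
def ecdf(samples, x):
    """Empirical CDF: fraction of samples <= x."""
    return sum(s <= x for s in samples) / len(samples)

def area_metric(model, data):
    """Area between the two empirical CDFs. Both ECDFs are piecewise
    constant between consecutive breakpoints, so integrate the absolute
    difference interval by interval."""
    pts = sorted(set(model) | set(data))
    area = 0.0
    for left, right in zip(pts, pts[1:]):
        area += abs(ecdf(model, left) - ecdf(data, left)) * (right - left)
    return area

# A model biased high by one unit relative to the data (same shape)
# gives an area metric of exactly 1.0.
d = area_metric([2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
```

The metric itself is a perfectly sensible distance between distributions; my complaint is with the subjective multiplier layered on afterward while claiming to avoid subjectivity.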

I appreciated the way Patrick Roache wrapped up the workshop, “decisions will continue to be made on the basis of expert opinion and circumstantial evidence, but Bill [Oberkampf] and I just don’t think that deserves the dignity of the term validation.” In product development we’ll often be faced with acting to accept risk based on un-validated predictions. In fact, that could be one operational definition of experimentation. Since subjectivity is inescapable, I resort to pragmatism. What is useful? It is not useful to say “validated models are good” or “unvalidated models are bad”. It is more useful to recognize validation activities as signals to the decision maker about how much risk they are accepting when they act on the basis of simulations and precious little else.