Topology Optimization Data Set for CNN Training

The approach the authors take is to run ToPy for some number of iterations to generate a partially converged solution, and then use that partially converged solution and its gradient as the input to the CNN. The CNN is trained on a data set generated from randomly generated ToPy problem definitions that are run to convergence. Here's their abstract:

In this research, we propose a deep learning based approach for speeding up the topology optimization methods. The problem we seek to solve is the layout problem. The main novelty of this work is to state the problem as an image segmentation task. We leverage the power of deep learning methods as the efficient pixel-wise image labeling technique to perform the topology optimization. We introduce convolutional encoder-decoder architecture and the overall approach of solving the above-described problem with high performance. The conducted experiments demonstrate the significant acceleration of the optimization process. The proposed approach has excellent generalization properties. We demonstrate the ability of the application of the proposed model to other problems. The successful results, as well as the drawbacks of the current method, are discussed.

The deep learning network architecture from the paper is shown below. Each kernel is 3x3 pixels, and the illustration shows how many kernels are in each layer.

Architecture (Figure 3) from Neural Networks for Topology Optimization
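The convolutional encoder-decoder idea can be sketched in PyTorch. The layer widths, pooling schedule, and activation choices below are illustrative assumptions, not the paper's exact Figure 3 configuration; only the 3x3 kernels, the 2-channel input (density plus gradient), and the 40x40 domain come from the paper.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Minimal encoder-decoder sketch for pixel-wise density prediction.

    Input:  2 channels (intermediate density field and its gradient).
    Output: 1 channel (predicted final density, per-pixel in [0, 1]).
    Layer widths here are illustrative, not the paper's Figure 3 values.
    """

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 40x40 -> 20x20
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 20x20 -> 10x10
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),  # 10x10 -> 20x20
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2),  # 20x20 -> 40x40
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # squash to per-pixel density in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = EncoderDecoder()
batch = torch.randn(4, 2, 40, 40)  # four example 40x40 domains
out = model(batch)
print(tuple(out.shape))  # (4, 1, 40, 40)
```

Because the output is a per-pixel value in [0, 1], the "image segmentation" framing from the abstract follows naturally: each pixel is labeled material or void.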

The data set that the authors used to train the deep learning network contained 10,000 randomly generated (with certain constraints, see the paper) example problems. Each of those 10k "objects" in the data set includes 100 iterations of the ToPy solver, so they are 40x40x100 tensors (40x40 is the domain size). The authors claim a 20x speed-up in particular cases, but the paper is light on actually showing, exploring, or explaining timing results.
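Building a training pair from one of those stored objects can be sketched as follows. The cut iteration `k`, the finite-difference stand-in for the gradient channel, and the 0.5 threshold on the final layout are my reading of the setup, not the authors' exact recipe.

```python
import numpy as np

def make_training_pair(iterations, k):
    """Build one (input, target) pair from a stored optimization run.

    iterations : array of shape (100, 40, 40) -- the density field at
                 each ToPy iteration for one generated problem.
    k          : intermediate iteration to cut at (k >= 1).

    Input is 2 channels: the density at iteration k, plus the change
    over the last step (a cheap stand-in for the gradient channel).
    Target is the converged final iteration, thresholded to a binary
    material/void mask, matching the segmentation framing.
    """
    x_k = iterations[k]
    delta = iterations[k] - iterations[k - 1]
    inp = np.stack([x_k, delta])                         # (2, 40, 40)
    target = (iterations[-1] > 0.5).astype(np.float32)   # (40, 40)
    return inp, target

# Smoke test on synthetic data shaped like one data-set object.
run = np.random.rand(100, 40, 40).astype(np.float32)
inp, target = make_training_pair(run, k=10)
print(inp.shape, target.shape)  # (2, 40, 40) (40, 40)
```

Varying `k` across pairs would expose the network to a range of intermediate states, which is presumably what lets it be cut in at different points in the solve.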

The problem for the network to learn is to predict the final iteration from some intermediate state. This seems like it could be a generally applicable approach to speeding up convergence of PDE solves in computational fluid dynamics (CFD) or computational structural mechanics / finite element analysis. I haven't seen this sort of approach to speeding up solvers before. Have you? Please leave a comment if you know of any work applying similar methods to CFD or FEA for speed-up.

Machine learning for super fast simulations has an interesting comment: "...While I find this an interesting approach, it seems to me to be really confusing to talk about 'acceleration' and 'speed-up' in the way you are, because you're doing a COMPLETELY different thing from what a standard solver is doing."

That's why it's surprising! Surprise is powerful...

Here's an approach to speeding up a lattice-Boltzmann solver: Deep Learning to Accelerate Computational Fluid Dynamics
