Saturday, November 19, 2016

Plenty of Room at Exascale


The folks at Envenio have posted an interesting marketing video on their solver.

It references several reports on the future scaling of HPC architectures and CFD software towards "exascale." At the risk of being identified as un-trendy, I'll admit I'm still excited about giga- and tera-scale applications, but I expect the excitement to continue out to 2030. Our decision makers, investors and product developers never seem to run short of demands for certainty, no matter how many cores we throw at problems.

The Envenio solver uses some interesting approaches to balancing load across different types of hardware (i.e., CPU and GPU). They allow blocks (in their terminology "Cells", which are collections of control volumes and interfaces) to be single or double precision within the same calculation, which enables efficient use of GPUs. The solver can also perform offline "auto-tuning" to support smart load-balancing choices for a specific hardware configuration. Finally, they do a time-domain decomposition using "coarse" and "fine" time integrators in a predictor-corrector style (sketched below). They claim that using the GPUs gives them a 20x speed-up and that their time integration approach gives another 2x, which would compound to roughly a 40x speed-up over a CPU-only baseline.
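The coarse/fine predictor-corrector idea is essentially a parallel-in-time iteration in the style of Parareal. Envenio hasn't published the details of their scheme, so what follows is only a minimal generic sketch in Python/NumPy: the function names, the exponential-decay test problem, the step counts, and the choice to run the coarse propagator in single precision (as a stand-in for a fast GPU integrator) are all my own illustrative assumptions.

import numpy as np

def f(u):
    """Toy right-hand side: du/dt = -u (exponential decay)."""
    return -u

def coarse(u, t0, t1):
    """Cheap predictor: one big forward-Euler step in single precision
    (a stand-in for a fast, low-accuracy GPU integrator). Assumed, not
    Envenio's actual coarse integrator."""
    u32 = np.float32(u)
    return np.float64(u32 + np.float32(t1 - t0) * f(u32))

def fine(u, t0, t1, substeps=100):
    """Expensive corrector: many small forward-Euler steps in double
    precision (a stand-in for the accurate 'fine' integrator)."""
    dt = (t1 - t0) / substeps
    for _ in range(substeps):
        u = u + dt * f(u)
    return u

def parareal(u0, t_grid, iterations=5):
    """Iterate U[n+1] <- C(U[n], new) + F(U[n], old) - C(U[n], old).
    The fine solves over each time slice are independent of one another,
    so they can run concurrently across the time domain."""
    N = len(t_grid) - 1
    U = np.empty(N + 1)
    U[0] = u0
    # Initial serial prediction with the coarse propagator.
    for n in range(N):
        U[n + 1] = coarse(U[n], t_grid[n], t_grid[n + 1])
    for _ in range(iterations):
        # Fine and coarse solves on each slice, from the previous iterate
        # (this loop is the parallelizable part).
        F_old = [fine(U[n], t_grid[n], t_grid[n + 1]) for n in range(N)]
        C_old = [coarse(U[n], t_grid[n], t_grid[n + 1]) for n in range(N)]
        U_new = np.empty(N + 1)
        U_new[0] = u0
        # Cheap serial correction sweep.
        for n in range(N):
            U_new[n + 1] = (coarse(U_new[n], t_grid[n], t_grid[n + 1])
                            + F_old[n] - C_old[n])
        U = U_new
    return U

t = np.linspace(0.0, 2.0, 11)   # ten time slices
print(parareal(1.0, t))         # should approach the exact solution...
print(np.exp(-t))               # ...exp(-t)

The appeal of this family of methods is that the expensive fine solves are embarrassingly parallel across time slices, while only the cheap coarse sweep stays serial, which is one plausible way a heterogeneous CPU/GPU cluster could earn the extra 2x that Envenio claims.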

Clusters of heterogeneous commodity hardware make the software engineering challenge a lot more complex. Commercial solution providers are chipping away at the problem as we march towards exascale. As always, the biggest room is the room for improvement.

Here are some links to the reports referenced in the video, along with relevant background info:

