Half Time in the Uber-Cloud
Status of the HPC Experiment and its 20 Application Teams
Wolfgang Gentzsch and Burak Yenier
Since its first announcement on June 28 here on HPCwire and its official start on July 20, the Uber-Cloud Experiment has attracted over 160 industry and research organizations and individuals from 22 countries. They all share one goal: to jointly explore the end-to-end process of remotely accessing technical computing resources sitting in HPC centers and in the cloud. The focus of the experiment is on engineering simulations performed by small and medium enterprises that expect a quantum leap in innovation and competitiveness from using HPC.
While the benefits of remote access to HPC are widely recognized, and most of the technology needed to access and run engineering workloads on remote resources is well understood, we still face challenges that have more to do with us, the people: trusting the resource provider; giving away control over applications, data, and resources; security; provider lock-in; software licensing; unfamiliar pay-per-use computing; and a general lack of clarity in distinguishing between hype and reality. To explore these hurdles in detail and to learn more about this end-to-end process, we built 20 teams, each consisting of an end user and his or her application, a software provider, a computational resource provider, and an HPC and/or CAE expert who manages the team process. The following 20 teams have been set up:
Anchor Bolt: Simulating steel-to-concrete fastening capacity of an anchor bolt
Resonance: Electromagnetic simulations of NMR probe heads
Radiofrequency: Simulation of the radiofrequency field distribution inside a heterogeneous human body
Supersonic: Simulation of jet mixing in supersonic flow with shock
Liquid-Gas: Two-phase flow simulation of separation columns
Wing-Flow: Flow around an aerospace wing
Ship-Hull: Simulation of water flow around a ship hull
Cement-Flows: Burner simulation with different solid fuels in the mining industry
Sprinkler: Simulating water flow through an irrigation water sprinkler
Space Capsule: Aerothermodynamics and stability analysis of a space capsule
Car Acoustics: Low-frequency car acoustics
Dosimetry: Numerical EMC and dosimetry with high-resolution models
Weathermen: Large-scale, high-resolution weather and climate prediction
Wind Turbine: CFD simulations of vertical and horizontal wind turbines
Combustion: Simulating combustion in an IC engine
Blood Flow: Simulation of water/blood flow inside rotating micro channels
ChinaCFD: CFD using a homegrown C/C++ application
Gas Bubbles: Simulation of gas bubbles in a liquid mixing vessel
Side Impact: Optimization of side-door intrusion bars under a crash
ColombiaBio: Analysis of biological diversity in a geography using R scripts
In the meantime, all 20 teams are underway: 2 are still defining their end-user project, 15 are in contact with their assigned computing resources and are setting up the project environment, 1 is initiating and monitoring the end-user project execution, 1 is reviewing the results with the end user, and 1 is already documenting the findings of the HPC-as-a-Service process. To illustrate the team process in more detail, we present two of the teams and their current status below.
Simulating new probe design for a medical device
Team Expert: Chris Dagdigian from BioTeam
Our team's end user faces a common problem: a periodic need for large compute capacity to simulate and refine potential product changes and improvements. The periodic nature of the HPC requirements means it is not feasible to keep the desired capacity in-house, as the company finds it difficult to justify capital expenditure for complex assets that may sit idle for long periods. To date the company has invested in a modest amount of internal HPC capacity, sufficient to meet its base requirements. Additional HPC resources would allow the end user to greatly expand the sensitivity of current simulations and may enable new product and design initiatives previously written off as "untestable".
Our HPC software is CST Studio (www.cst.com), a popular commercial application for electromagnetic
simulations of many types. We are currently operating in the Amazon cloud and have successfully
completed a series of architecture refinements and scaling benchmarks. Our hybrid cloud-bursting
architecture allows local HPC resources residing at the end-user site to be utilized along with our
Amazon cloud-based resources. At this point in the project we are still exploring the scaling limits of the Amazon GPU-equipped EC2 instance types, and we are beginning new tests and scaling runs designed to test HPC task distribution via MPI. Using MPI will allow us to leverage different EC2 instance type configurations and to scale beyond some technical limits imposed by the amount of memory on the Nvidia GPU cards. We are at (or very nearly at) the point where we routinely run simulations that would not be technically possible using only our end user's local resources.
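To make the MPI-based task distribution concrete, here is a minimal sketch of the pattern, written with mpi4py. It is an illustration under assumptions, not the team's actual harness: the task list, file names, and the placeholder solver call are hypothetical, and CST Studio is in reality driven through its own job control.

    # Sketch: distribute independent simulation tasks over MPI ranks,
    # e.g., one rank per GPU-equipped EC2 instance. All file names and
    # the solver call are placeholders (hypothetical example).
    import subprocess
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Hypothetical list of parameter variants of the probe design.
    tasks = [f"probe_design_{i:02d}.par" for i in range(32)]

    # Round-robin assignment: rank r handles tasks r, r+size, r+2*size, ...
    for task in tasks[rank::size]:
        # Placeholder for the real solver invocation on this node's GPU.
        subprocess.run(["echo", f"rank {rank} solving {task}"], check=True)

    # Collect per-rank task counts at rank 0 for a simple progress report.
    done = comm.gather(len(tasks[rank::size]), root=0)
    if rank == 0:
        print(f"completed {sum(done)} tasks across {size} ranks")

Run with, for example, mpiexec -n 4 python distribute.py. Because each rank owns a disjoint slice of the task list, no single GPU's memory ever has to hold more than one model at a time.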
We also intend to begin testing the Amazon EC2 Spot Market, in which cloud-based assets can be obtained from an auction-like marketplace at significant cost savings over traditional on-demand hourly prices.
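For readers curious what such a spot request looks like in code, the sketch below uses today's boto3 API (the experiment itself predates boto3); the AMI ID, key pair, bid price, and instance count are placeholders, with cg1.4xlarge being the GPU instance type of that era.

    # Sketch: request GPU spot instances on EC2 with boto3. All concrete
    # values are placeholders, not the team's actual configuration.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.request_spot_instances(
        SpotPrice="0.50",        # maximum bid in USD per hour (placeholder)
        InstanceCount=4,
        LaunchSpecification={
            "ImageId": "ami-12345678",      # placeholder AMI with the solver stack
            "InstanceType": "cg1.4xlarge",  # GPU instance type of that era
            "KeyName": "hpc-experiment",    # placeholder key pair
            "Placement": {"GroupName": "hpc-cluster"},  # placement group for low-latency MPI
        },
    )

    for req in response["SpotInstanceRequests"]:
        print(req["SpotInstanceRequestId"], req["State"])

Spot instances can be reclaimed when the market price rises above the bid, so they suit retryable batch simulations better than long uninterruptible runs.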
Multiphase flows within the cement and mineral industry
Team Expert: Ingo Seipp from science + computing ag
In this project ANSYS CFX is used to simulate a flash dryer, in which hot gas evaporates water from a solid. The team consists of FLSmidth as the end user, Bull as the resource provider with its extreme factory (XF) HPC-on-demand service, ANSYS as the software provider, and science + computing ag as the team expert.
FLSmidth is the leading supplier of complete plants, equipment, and services to the global minerals and cement industries. The end user currently needs about four to five days to complete a simulation run on the local IT infrastructure. He would like to reduce the total throughput time of the project and, in a second step, increase the mesh size to refine the results, without investing in hardware that may not always be fully utilized. For this, the simulation must run on more cores and more memory, spread across more nodes connected by a high-speed network, as sketched below.
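To give an idea of what such a multi-node run involves under the hood, here is a sketch of a distributed-parallel CFX launch wrapped in Python. Host names, core counts, and the definition file are illustrative assumptions, and exact cfx5solve options vary by CFX version; on extreme factory the web portal generates the equivalent job for the user.

    # Sketch: launch a distributed-parallel ANSYS CFX run across several
    # nodes. Hosts, core counts, and file names are placeholders, and the
    # solver flags are assumptions based on common cfx5solve usage.
    import subprocess

    hosts = ["node01", "node02", "node03", "node04"]  # placeholder node list
    cores_per_node = 8

    # CFX takes a comma-separated host*cores list for distributed runs
    # (assumed syntax; check the cfx5solve documentation for your version).
    par_dist = ",".join(f"{h}*{cores_per_node}" for h in hosts)

    subprocess.run(
        [
            "cfx5solve",
            "-def", "flash_dryer.def",  # placeholder solver definition file
            "-par-dist", par_dist,      # distribute partitions over the nodes
        ],
        check=True,
    )

With 32 cores spread over four InfiniBand-connected nodes instead of a single local machine, both the core count and the aggregate memory grow, which is exactly what the refined meshes of the second step will require.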
XF provides 150 Tflops of computing power with InfiniBand, GPUs, and currently about 30 installed applications; others are added on demand. Users can access XF through an easy-to-use web portal or by direct logon.
In this project XF has given the end user access and integrated ANSYS CFX into a web interface for submitting jobs. For the duration of the project, licenses have been granted by ANSYS, and the end user can easily manage his ANSYS licenses through the portal.
The preparations for running the jobs are almost complete, and the first test runs should start shortly.
We’d like to deeply thank our participants from Amazon AWS, BULL extreme factory, ANSYS, BioTeam,
science + computing ag, and our industry end-users working in these two teams.
Announcing Round 2 of the Uber-Cloud Experiment
We consider Round 1 proof of the concept: YES, remote access to HPC resources works, and there is a real need. YES, there are hurdles along the way, but we know how to overcome them.
During the half-time webinar we asked the attendees: would you participate in Round 2 of the Uber-Cloud Experiment? 97% answered "Yes". We therefore decided to start a new round right after the end of the current one, running from mid-November to mid-February.
Round 2 of the experiment will be more professional: the end-to-end process of identifying, accessing, and using remote resources (hardware, software, expertise) will become more structured, standardized, and tools-based; we will handle more teams and more applications beyond CAE; and we will offer a list of additional professional services, e.g., measuring the overall team effort. Existing teams will be encouraged to use other resources, and existing participants can work in new teams.
Please find more information and the registration for Round 2 on the Uber-Cloud Experiment website, www.hpcexperiment.com.