Earlier today, we published a case study and press release about some work we did with HyperXite, a student team from the University of California, Irvine, and their efforts in the Hyperloop competition.
The team leveraged CycleCloud to run ANSYS Fluent™ on Microsoft Azure Big Compute, completing their iterations in 48 hours. That turnaround was fast enough for them to adjust the design and rerun the simulations until they converged on a final solution, all for less than $600 in simulation costs.
This was a case where the cloud enabled them to do something they could not have done any other way.
As a bit of background, Elon Musk’s SpaceX started the Hyperloop project as a way to accelerate development of a fast, safe, low-power, and cheap method of transporting people and freight. HyperXite was one of 27 teams that competed recently.
Nima Mohseni, the team’s simulation lead, used the popular computational fluid dynamics software ANSYS Fluent™ to model the pod. The team’s modeling focused on its braking approach: simulation showed that the pod could brake using magnetic force alone, removing the need for mechanical brakes. This reduced weight, increased efficiency, and improved the overall design, which was recognized with a Pod Technical Excellence award last year.
Using the CycleCloud software suite, the HyperXite team created an Open Grid Scheduler cluster leveraging Azure’s memory-optimized instances in the East US region. Each instance has 16 cores based on the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor with 7 GB of memory per core. Eight total instances were used, yielding a 128-core cluster – twice the size available to them internally. Since they used CycleCloud’s autoscaling feature, the execute instances were only online when needed.
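As a rough sanity check on those numbers, here is a back-of-envelope sketch. The instance and core counts and the 48-hour window come from the story above; the implied hourly rate is just the arithmetic ceiling those figures allow, not an actual Azure price, and autoscaling means the real instance-hours were likely lower.

```python
# Back-of-envelope check on the cluster figures from the post:
# 8 instances x 16 cores, a 48-hour window, and < $600 total spend.

instances = 8
cores_per_instance = 16
window_hours = 48
total_spend_usd = 600

total_cores = instances * cores_per_instance      # 128-core cluster
max_instance_hours = instances * window_hours     # 384 instance-hours if always on

# Upper bound on the blended price per instance-hour implied by the spend.
# Autoscaling means actual instance-hours (and thus the real rate) differ.
implied_rate = total_spend_usd / max_instance_hours

print(total_cores, max_instance_hours, round(implied_rate, 2))  # → 128 384 1.56
```

Even at the always-on ceiling, the spend works out to under $1.60 per 16-core instance-hour, which is what makes the sub-$600 total plausible.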
From a simulation environment perspective, a key part of the platform was a shared NFS filer providing the storage for running jobs. The team used Azure Blob storage (Azure’s cloud-based object store service) for long-term storage of model variations and results. Each model run consumed roughly 15 GB of input and produced roughly 5 GB of output data. Using an NFS filer for active simulation runs and lower-cost object storage for archives allowed the team to efficiently and effectively run multiple jobs.
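To put that storage split in perspective, here is an illustrative footprint sketch. The per-run input and output sizes come from the post; the number of archived runs and the object-storage price per GB-month are assumptions for illustration, not figures from the team.

```python
# Illustrative data-footprint sketch for the NFS-plus-Blob layout above.
# Per-run sizes are from the post; run count and $/GB-month are assumptions.

input_gb_per_run = 15
output_gb_per_run = 5
runs = 20                        # hypothetical count of archived model variations

archived_gb = runs * (input_gb_per_run + output_gb_per_run)

blob_price_per_gb_month = 0.02   # illustrative object-storage rate, not a quote
monthly_cost = archived_gb * blob_price_per_gb_month

print(archived_gb, round(monthly_cost, 2))  # → 400 8.0
```

Even a few hundred gigabytes of archived runs costs only single-digit dollars per month in object storage, while the NFS filer only ever needs to hold the data for the jobs currently running.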
Five months of simulation and research done in 48 hours for $600? That is impressive.