Dell HPC Community event at SC ’16

The Saturday before Supercomputing ’16, I had the pleasure of participating in the Dell HPC Community event. As you might expect from Dell’s partnership with us, the cloud was a big focus this year. What I didn’t expect, however, was just how much it had everyone’s attention.

Jay Boisseau, Chief HPC Technology Strategist at Dell, started the event off by saying interest in cloud technologies dominated the input from customers in the last six months. And Jim Ganthier, Sr. VP of Validated Solutions and HPC, was unequivocal about how Dell intends to respond. “We are going to make Cloud HPC a core Dell offering and competency.”

The event unfolded unlike any prior Dell gathering I’d attended. Instead of customers (or Dell) talking about PowerEdge servers, the latest CPUs and GPUs, or anything around Linpack, the presentations all spoke to organizations trying new approaches to serving scientists and engineers with the help of cloud technologies. A common motivation emerged, as well: urgency to meet the demands of increasingly heterogeneous and data-driven workloads in more nimble and collaborative ways.

Two of the more interesting presentations related to cloud were from Shawn Strande (San Diego Supercomputer Center) and Tommy Minyard (Texas Advanced Computing Center). They talked about their efforts to run various cloud technologies on locally hosted cyberinfrastructure programs funded by the National Science Foundation. Each indicated cloudifying (my fake-word, not theirs) was putting meaningfully better control in the hands of researchers from their state and the nationwide XSEDE network, especially with the rise of NSF programs that see data streaming in from geographically distributed sensor and instrument networks in enormous volumes. But both Strande and Minyard also said “going Cloud” is a journey, and one that currently includes some painful disruption.

Toward the end of the day, I represented Cycle Computing on a panel discussion on Cloud HPC, along with representatives from Bright Computing, Intel, Mellanox, Nvidia, and R-Systems. In this more dialogue-driven setting, the conversation quickly turned to how public Cloud HPC will fit into the broader HPC landscape. To my surprise, both the audience and my fellow panelists were keen to hear what the guy from Cycle Computing thought. I offered up the following thoughts:

Cycle Computing believes that public clouds are a tool in the HPC bag, and not the answer to everything.

Prior to joining Cycle Computing, I spent seven years at the National Center for Supercomputing Applications (NCSA), so I know well the unique value that cutting-edge, at-scale supercomputing resources, tuned and managed by domain and HPC experts, deliver to users. There will always be some workflows that need top-of-the-line hardware resources.

Public cloud HPC is growing, but not necessarily in a way that cannibalizes traditional supercomputing.

What I’ve observed in my time at Cycle Computing is that organizations generally see Cloud as an additive layer. Many of them want to move the large volumes of throughput-based workloads that can dominate an HPC system’s job queue onto public Clouds, so their on-premise infrastructure is free to run the large-scale jobs their clusters were designed for in the first place. In other words, it’s all about a “right tool for the job” mentality.
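To make that “right tool for the job” idea concrete, here is a toy sketch of how a site might route work between an on-premise cluster and a cloud burst pool. This is purely illustrative and not Cycle Computing’s actual software; the Job fields, the node-count threshold, and the pool names are all invented for the example.

```python
# Hypothetical illustration only: a toy "right tool for the job" routing rule.
# Field names and the threshold below are invented, not taken from any real scheduler.

from dataclasses import dataclass


@dataclass
class Job:
    name: str
    nodes: int              # node count requested
    tightly_coupled: bool   # e.g., an MPI job that needs a low-latency interconnect
    walltime_hours: float


def route(job: Job) -> str:
    """Send loosely coupled, throughput-style work to a cloud burst pool and
    reserve the on-premise cluster for large, tightly coupled jobs."""
    if job.tightly_coupled and job.nodes > 16:
        return "on-prem"       # needs the tuned interconnect and parallel filesystem
    return "cloud-burst"       # parameter sweeps, ensembles, post-processing


if __name__ == "__main__":
    jobs = [
        Job("parameter-sweep-member", nodes=1, tightly_coupled=False, walltime_hours=2.0),
        Job("full-system-cfd", nodes=512, tightly_coupled=True, walltime_hours=24.0),
    ]
    for j in jobs:
        print(f"{j.name} -> {route(j)}")
```

The point of the sketch is simply that offloading the high-volume, loosely coupled work keeps the specialized on-premise resources focused on what they were built for.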

Public clouds are affecting the HPC market and landscape in some dramatic ways, and HPC practitioners have opportunities to take advantage of this.

Public Clouds are causing technology vendors to accelerate and augment their technology roadmaps. More innovation is occurring because public Clouds make it worth the additional R&D investment. The Tesla M40 GPUs some of the audience might be enjoying in their internal clusters were a byproduct of a business partnership between Nvidia and Facebook. Similarly, the market explosion for FPGAs (including Intel treating them as first-class citizens in its product portfolio) is due to Microsoft using them to drive Bing searches and its own software-defined networking efforts. In other words, public Cloud companies are making more advanced technology available faster and with less access friction than ever before. This raises all boats.

The partnership between Dell and Cycle Computing demonstrates both organizations’ commitment to putting customers first.

Cycle Computing and Dell recognize that most organizations have a hybrid future ahead of them, one that spans on-premise and public Cloud-based HPC. The partnership we announced at Supercomputing ’16 is just the start of a joint effort to help customers when, where, and how they need it. And the more input we get from HPC practitioners, the more performance, value, and insight we can in turn help them deliver to their user communities.
