Cloud in a broader HPC world

Cloud is bringing “HPC” into a broader world; we just don’t always see it that way. Cloud computing for large workloads continues to evolve, and one of the things we see is this technology enabling people who never think of themselves as “HPC people” to do work they never thought possible. Part of the reason is that the definition of “HPC” has always focused on the hardware aspect of computation. What we see now is a broader use of the techniques and technologies of HPC to help people use computation to predict rather than simply report.

Historically, most analytic computation has focused on validating and reporting what we already know: capturing transactions, reporting on activities, validating designs, checking our math. Classic examples include accounting workloads, human resource systems, inventory management, and the like. With HPC-like environments (large compute, network, and storage capacity) easily accessible to anyone, more and more groups are using data and simulation to predict future events, outcomes, or reactions (be they of people, materials, chemicals, medicines, etc.).

This ability to use computation to predict rather than validate is a fundamental shift in how many people are starting to leverage HPC. The availability of additional compute power lets users include computation in the design process itself, exploring multiple options to develop new approaches rather than just validating completed designs. As an example, consider the process of finding useful small-molecule targets for drug development. Historically, researchers would try to find potential targets by starting with a limited subset of pre-selected materials. This limit stems from a lack of...
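The pattern described above — scoring many candidates in parallel rather than hand-picking a small subset up front — is, at its core, an embarrassingly parallel sweep. A minimal sketch of that pattern follows; the `score_candidate` function is a hypothetical stand-in for whatever simulation or docking score a real screening pipeline would compute, so the numbers are illustrative only.

```python
from concurrent.futures import ProcessPoolExecutor

def score_candidate(candidate_id: int) -> float:
    """Hypothetical stand-in for a per-candidate simulation/docking score.

    A real pipeline would run a chemistry or physics code here; this toy
    version is deterministic so the sketch runs without any such dependency.
    """
    return ((candidate_id * 2654435761) % 1000) / 1000.0

def screen(candidates, top_n: int = 10):
    """Score every candidate in parallel, then return the top_n best."""
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(score_candidate, candidates))
    ranked = sorted(zip(candidates, scores), key=lambda cs: cs[1], reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    # Sweep a large candidate space instead of a pre-selected handful.
    for candidate, score in screen(range(10_000)):
        print(candidate, score)
```

On a laptop the pool spans local cores; on a cloud cluster the same shape of work fans out across as many nodes as the budget allows, which is exactly what makes exploration-first design tractable.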

Expand your realm of possibility with Big Data and HPC – Seminar

I recently had the chance to present at a seminar titled “Expand your realm of possibility with Big Data and HPC,” sponsored by DellEMC. Attendees from around the northeast gathered to discuss their current and future Big Data and HPC needs. The dialogue between the attendees is always the part that gets you thinking the most. It is interesting to see how the real-world definition of “HPC” is getting broader and broader. Everyone was discussing how the users of their environments are no longer just running classic large-core-count, math-based jobs. They all see more and more data analysis users, more and more high-throughput applications, and more and more workflows that mix all of these types to get to the answers they are looking for.

My presentation on Approaches for Cloud HPC addressed this by highlighting a number of the use cases we have seen, talking through a number of the lessons learned, and offering ideas on how to get started. It was a great event with a mix of presenters. Jason Banfelder discussed how HPC advances science at Rockefeller University, Al Rittaco from Becker College highlighted the work going on at the Massachusetts Green High Performance Computing Center (MGHPCC), and the DellEMC team presented on the Big Data and HPC options they provide. There was no doubt that as our ability to generate more and more data grows, our need to actually process that data and find insights is growing even...

Dell EMC partners with Cycle Computing

The ongoing struggle of our industry is the promise of accelerating innovation versus the challenges of long queue times, limited environment options, and trying to deliver a platform that efficiently serves all types of users. This dilemma often forces us to say “NO” to many new, unique user requests. Cloud is a tool to address that conundrum. Dell EMC has embraced the notion that hybrid environments of internal and external resources will be the optimal path for customers. We are very excited to partner with the Dell EMC team to deliver a complete hybrid cloud solution for Big Compute workloads in a way that no one else can today.

Ok, sounds interesting, but what does it really offer to those of us in Big Computing land who are still trying to understand where HPC in the cloud fits? This is exciting because people can start to solve the problems they are having in today’s real-world HPC data centers. For many reasons (physics and the speed of light driving some of them), HPC data centers are not going away any time soon. For all the benefits that cloud offers, there are still plenty of valid reasons for maintaining a “local” HPC data center. The ability to keep large volumes of data connected by very high-speed interconnects, with dedicated, high-performance storage options and unique server configurations, can deliver real competitive value relative to the technologies available at scale in the cloud. That said, there are also plenty of applications currently running in HPC data center environments that don’t really need those specialized technologies...
Leveraging Public Clouds for Financial Services Workloads

Many industries are learning to leverage cloud resources to help solve the challenge of ever-increasing, and uneven, demands for computationally based analysis. With the growth of available data, advanced algorithms, competitive pressures, government regulations, and shrinking deadlines, the analysts and IT organizations within these institutions are struggling to find ways to meet these demands.

This case study highlights one example of a financial workload and how it was moved to a public cloud. Specifically, we describe the CCAR regulatory analysis that motivated the project; we review the technical and organizational challenges associated with migration to the cloud; and we summarize the rewards of leveraging new approaches and using the cloud to resolve these problems. The challenges include security concerns, relationships to existing processes, costs, technical experience, vendor choices, and more. The rewards include delivering faster response to the business, improving overall operating efficiency, and driving improved business practices. Based on this initial success, several other time-sensitive workloads were migrated to a public cloud, enabling the organization to be more responsive to customers and stakeholders. Download and let us know what you think...

Cycle Computing (Booth 3621) and SC16

It is that time of year again. SuperComputing 16 is coming up Nov 13 – 18 in Salt Lake City. It should be a great event this year, with lots of sessions and discussion plus a full exhibitor floor as well. From what we have seen, Cloud will continue to be one of the hot topics at SC16. There are quite a few sessions and a number of companies talking cloud this year. As an industry, we have moved beyond discussions of “should I” and into discussions of things like MPI optimization for the cloud, GPU options, and clusters as a service. All of this shows the growing maturity of HPC in the cloud.

Of course, Cycle will be there again, as we have been for years. We are one of the original companies focused on making our customers successful with Big Compute and Big Data workloads in the cloud, something we have been doing for 10 years now. Ready to learn how you too can be successful in using public cloud for the kinds of workloads that make SC16 such an interesting conference? Reach out to us through our website form or by email and schedule a time when we can sit down with you.

Come see us at Booth 3621. Looking forward to a great week!!! Follow us on Twitter for updates throughout the conference....