Improving ALS research with Google Cloud, Schrödinger, and Cycle Computing

Today we published a case study describing how Google Cloud enabled one professor to do work she never thought possible. May Khanna, Assistant Professor of Pharmacology at the University of Arizona, studies pharmacological treatments for pain; her research uses protein-binding simulation to identify possible treatments. Using our CycleCloud™ software to manage a 5,000-core Google Cloud Preemptible VM cluster running Schrödinger® Glide™ opened up research at a scale she had never thought possible: the cluster ran 20,000 core-hours of docking computations in four hours for $192, thanks to the simple, consistent pricing of GCP's Preemptible VMs. The n1-highcpu-16 instances she used have 16 virtual cores and 14.4 GB of RAM, so they're well suited to this kind of compute-heavy workload.

For this project, Professor Khanna wanted to analyze a protein associated with amyotrophic lateral sclerosis, also known as "ALS" or "Lou Gehrig's disease". ALS has no known cure; it causes pain and eventual death, and some 20,000 people in the United States are living with it at any given time. Protein-binding simulation is compute-intensive, even under the constraints researchers often apply to get results in a reasonable time: proteins are often simulated in isolation, and the binding sites are restricted to a set of known or expected active locations on the protein. With those constraints, Professor Khanna was only able to simulate 50,000 compounds, which yielded a grand total of four possible hits. She was about to give up on the project when she approached Cycle Computing. Using her Google Cloud cluster, she was able to simulate binding of a million compounds in just a...
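The arithmetic behind those headline numbers is easy to sanity-check. Here is a minimal sketch working backward from the quoted totals; the instance count and the implied Preemptible rate are inferred from those totals, not published figures:

```python
import math

# Back-of-envelope check of the numbers quoted above. The instance
# count and the implied hourly rate are inferred from the quoted
# totals; they are not published figures.
core_hours = 20_000      # total docking compute quoted
wall_hours = 4           # elapsed wall-clock time quoted
cores_per_vm = 16        # n1-highcpu-16
total_cost = 192.00      # USD, quoted

cores_needed = core_hours / wall_hours               # 5,000 cores
vms_needed = math.ceil(cores_needed / cores_per_vm)  # 313 instances
implied_rate = total_cost / (vms_needed * wall_hours)

print(f"{cores_needed:,.0f} cores -> {vms_needed} x n1-highcpu-16")
print(f"implied Preemptible rate: ${implied_rate:.3f} per instance-hour")
# 5,000 cores -> 313 x n1-highcpu-16
# implied Preemptible rate: $0.153 per instance-hour
```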

Recognizing HPC Invention: Cycle Computing & Customers Receive 4 HPCwire Readers' Choice Nominations

The arrival of autumn brings cooler air, as well as thoughts of the annual SC conference, high performance computing's (HPC) largest event. SC13 will be held in Denver Nov. 17-22, and is always one of my favorite events of the year. With so much going on, it's sometimes nice to step back and recognize the amazing work that's happened over the past year. And that is exactly what the HPCwire Readers' Choice Awards (http://bit.ly/votecycle2013) are all about. At Cycle Computing, we feel honored to be part of four different Readers' Choice Award nominations in which clients are using Cycle and Amazon Web Services to accelerate invention. These awards are really about our customers and what they've been able to accomplish through technology. Cycle Computing has been nominated for three, and a nomination for one of our customers makes it four. Because the nomination descriptions are somewhat limited, I've put together further background on each, and I encourage people to visit the HPCwire Readers' Choice Awards website and vote. Voting ends Saturday, Oct. 12, 2013.

1. Best use of HPC application in manufacturing
Johnson & Johnson, for its deployment of Cycle Computing SubmitOnce software in complex computer-aided design simulations with Dassault Systèmes Abaqus.
Challenges: Disparate internal clusters, lack of resources for large-problem execution, and the pain of home-grown systems.
Solutions: Cycle Computing SubmitOnce. Jobs are routed to internal and external clusters via configurable intelligence based on cluster load, data size, and estimated runtime, giving users a single entry point without having to understand complex backend systems (an illustrative sketch of this kind of routing follows below).
Video: Presentation at Amazon Web Services re:Invent http://www.youtube.com/watch?v=rhyjPgyG1Vs#t=343...
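To make the routing idea concrete, here is a minimal sketch of that kind of decision logic. To be clear, this is not SubmitOnce's actual implementation; the thresholds, field names, and cluster names are hypothetical, chosen only to illustrate routing on cluster load, data size, and estimated runtime:

```python
from dataclasses import dataclass

@dataclass
class Job:
    est_runtime_hours: float
    data_size_gb: float

@dataclass
class Cluster:
    name: str
    free_cores: int
    is_cloud: bool

def route_job(job: Job, clusters: list[Cluster]) -> Cluster:
    # Prefer an internal cluster with free capacity for small, short
    # jobs; burst large or long-running work to the cloud.
    internal = [c for c in clusters if not c.is_cloud and c.free_cores > 0]
    if internal and job.data_size_gb < 100 and job.est_runtime_hours < 12:
        # Pick the least-loaded internal cluster.
        return max(internal, key=lambda c: c.free_cores)
    cloud = [c for c in clusters if c.is_cloud]
    return cloud[0]

clusters = [
    Cluster("internal-hpc", free_cores=256, is_cloud=False),
    Cluster("aws-burst", free_cores=50_000, is_cloud=True),
]
print(route_job(Job(est_runtime_hours=2, data_size_gb=10), clusters).name)    # internal-hpc
print(route_job(Job(est_runtime_hours=48, data_size_gb=500), clusters).name)  # aws-burst
```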

Enterprise HPC in the Cloud: Fortune 500 Use Cases

Cycle Computing invited some of our customers to AWS re:Invent to talk about the amazing science, analytics, and discovery they are executing at scale with Cycle technology platforms. Our segment was called "Enterprise HPC in the Cloud: Fortune 500 Use Cases". First up, Cycle Computing CEO Jason Stowe gives a wonderful overview of why Utility Supercomputing really matters for each and every one of us! We also thought it would be wonderful to share our customers' activities with you, so you can see what leading players in the field of Utility Supercomputing are doing on a day-to-day basis. There are four amazing talks below, and each does an excellent job of highlighting the challenges and solutions in its domain. The talks describe the use of utility supercomputers to solve real-world problems in Life Science, Finance, Manufacturing, Energy, and Insurance. They cover the latest and greatest in advanced workloads that are helping push these industries and technologies forward through the insights gained from big data and high performance computing, including a sneak peek at what's coming next.

Leading off, Kurt Prenger and Taylor Hamilton of J&J High Performance Computing:
Challenges: Disparate internal clusters, lack of resources for large-problem execution, and the pain of home-grown systems.
Solutions: Cycle Computing SubmitOnce. Jobs are routed to internal and external clusters via configurable intelligence based on cluster load, data size, and estimated runtime, giving users a single entry point without having to understand complex backend systems.

Next up: David Chang, Assistant Vice President of Pacific Life:
Challenges: Increasing complexity in product design with...

Fortune 500s discuss Cloud HPC, Utility Supercomputing @ Cycle’s re:Invent session

As many of you know, at Cycle we believe that giving every researcher, scientist, engineer, and mathematician access to the compute power they need, exactly when they need it, will enable humanity to achieve what it's truly capable of. So we organized for five of our Fortune 500 customers to talk at AWS re:Invent at 1pm Wednesday the 28th about Cloud, HPC, and utility supercomputing. Whether it's building safer medical devices, managing risk, quantifying genomes at scale, protecting hard-earned retirement funds, or finding medicines to cure disease, they'll be talking about how they use the Cloud to do it! At 1pm tomorrow (Wednesday), come to "Enterprise HPC in the Cloud: Fortune 500 use cases" in room 3401A to see: Hartford Life, Johnson & Johnson, Life Technologies, Novartis, and Pacific Life. If you can't make the session, come to Cycle's Booth #220 and we can talk more...

BigData, meet BigCompute: 1 Million Hours, 78 TB of genomic data analysis, in 1 week

It seems like every day at Cycle we get to help people do amazing work, but this week is a little different. This week we wrapped up our involvement in the amazing work of Victor Ruotti of the Morgridge Institute for Research, winner of the inaugural Cycle Computing BigScience Challenge. In the name of improving the indexing of gene expression in differentiated stem cells, Cycle's utility supercomputing software just finished orchestrating the first publicly disclosed 1,000,000+ core-hour HPC analysis on the Cloud. Yes, that's 1 million hours, or over a ComputeCentury™ of work, on a total of 78 TB of genomic data, in a week, for $116/hr! To put this 115 years of computing into context: the word 'computer,' meaning a mechanical calculating device, was first used in 1897. So if you had started this run on a one-core computer when the term was first used, and kept it running through World War I, Jazz, the Roaring '20s, the Great Depression, WWII, Big Bands, the start of Rock'n'Roll, the Cold War, the Space Race, the Vietnam War, Disco, the '80s, grunge, techno, hip hop, reality TV, and up to Gangnam Style, Victor's analysis would be finishing now, sometime in 2012. Now that's a lot of compute. Below, we explain the details of the analysis and how it was executed, but if you're short on time, please skip to why this is important.

Cycle Computing BigScience Challenge Overview
About a year ago we were very excited to announce the Challenge, a contest aimed at breaking the computation limits for any researchers working to answer questions that will help humanity...
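For anyone who wants to check the ComputeCentury™ math, here is a quick sketch using the round numbers from the post; the total-cost figure is an estimate from the quoted hourly rate over one week of wall time, not an exact invoice:

```python
# Sanity-checking the "ComputeCentury" framing with round numbers.
core_hours = 1_000_000           # total compute quoted above
hours_per_year = 24 * 365.25     # calendar hours per year

serial_years = core_hours / hours_per_year
# Just over 114 calendar years, which the post rounds up to 115.
print(f"on a single core: {serial_years:.0f} years")
print(f"1897 + {serial_years:.0f} = {1897 + serial_years:.0f}")  # ~2011

# Rough cost estimate: one week of wall time at the quoted $116/hr.
wall_hours = 7 * 24
print(f"estimated total cost: ~${116 * wall_hours:,.0f}")        # ~$19,488
```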

CycleCloud Achieves Ludicrous Speed! (Utility Supercomputing with 50,000 Cores)

Update: Since publishing this blog entry, our 50,000-core CycleCloud utility supercomputer has gotten great coverage from BusinessWeek, The Register, the NY Times, the Wall Street Journal's CIO Report, Ars Technica, and The Verge, among many others. And it would now run for $750/hr at AWS spot pricing as of 6/22/2012! Click here to contact us for more information…

By now, we've shown that our software is capable of spinning up cloud computing environments that run at massive scale and produce real scientific results. After some of our previous efforts, we realized we were onto something with the CycleCloud Cloud HPC and Utility Supercomputing concept. However, even we underestimated the scales researchers would want to use and the scope of the research this would impact. Among the requests were some from a leader in computational chemistry research, Schrödinger. In collaboration with Nimbus Discovery, they needed to virtually screen 21 million molecule conformations, more than ever before, against one possible cancer target using their leading docking application, Glide. And they wanted to do it using a higher-accuracy mode early on in the process, which hadn't been feasible before because that mode is so compute-intensive! This is exactly what we did with our latest 50,000-core utility supercomputer, code-named Naga, which CycleCloud provisioned on Amazon Web Services. And Schrödinger/Nimbus got useful results they wouldn't have seen without utility supercomputing. We will describe how we accomplished this below and in future articles and blog posts.

From a scale perspective, the most revolutionary concept implemented for Naga was scaling out all the components of an HPC environment. In our previous megaclusters, we performed a great deal of optimization...
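To get a feel for what 50,000 cores means for a screen of this size, here is a back-of-envelope sketch. The per-conformation docking time below is a hypothetical assumption for illustration only; actual Glide runtimes vary widely, especially in the higher-accuracy mode:

```python
# Back-of-envelope throughput for a Naga-scale virtual screen.
conformations = 21_000_000        # quoted screen size
cores = 50_000                    # quoted cluster size
minutes_per_conformation = 1.0    # HYPOTHETICAL average, for illustration

per_core = conformations / cores                       # 420 conformations/core
wall_hours = per_core * minutes_per_conformation / 60  # ~7 hours
print(f"{per_core:,.0f} conformations per core")
print(f"estimated wall time: ~{wall_hours:.0f} hours")
```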