Cycle Computing at Bio-IT World

We’re excited to be back at Bio-IT World Conference & Expo this week. Before the show opens, I wanted to share some details about how you can find us.

First, you can win an awesome prize! Tweet a selfie from our booth or other signage around the conference and include @cyclecomputing and #bioit17 to be entered into the drawing. We’ll have the prize on hand in booth #361. While you’re in our booth, you can also get your picture taken in our photo booth; we’ll have several fun props.

And of course, we’re there to do work, too. We just released the latest version of our CycleCloud™ software suite for providing simple, managed access to big compute and cloud HPC. As we wrote earlier this month, this release includes improved monitoring for GPU-powered instances. GPUs provide a great boost to many scientific workloads; benchmarking done at the University of Illinois showed a many-fold increase in NAMD performance on GPUs. Stop by booth #361 to learn how CycleCloud makes it easy to get the GPU or CPU resources you want with the control you need.

Jason Stowe, our CEO, will be delivering the keynote introduction at 8 am on Wednesday and presenting “How cloud has changed life sciences” in the Cloud Computing track at noon on...

Improving ALS research with Google Cloud, Schrödinger, and Cycle Computing

Today we published a case study describing how the use of Google Cloud enabled one professor to do work she never thought possible. May Khanna, Assistant Professor of Pharmacology at the University of Arizona, studies pharmacological treatments for pain; her specific area of expertise is research that uses protein binding to develop possible treatments.

Using our CycleCloud™ software to manage a 5,000-core Google Cloud Preemptible VM cluster running Schrödinger® Glide™ made that research possible. The cluster ran 20,000 core hours of docking computations in four hours for $192, thanks to the simple, consistent pricing of GCP’s Preemptible VMs. The n1-highcpu-16 instances she used have 16 virtual cores and 14.4 gigabytes of RAM, so they’re well-suited to this kind of compute-heavy workload.

For this project, Professor Khanna wanted to analyze a protein associated with amyotrophic lateral sclerosis, also known as “ALS” or “Lou Gehrig’s disease”. ALS has no known cure; it causes pain and eventual death, and affects some 20,000 people in the United States. Protein binding simulation is compute-intensive, even under the constraints researchers often apply to achieve their results in a reasonable time. For example, proteins are often simulated in isolation, and the binding sites are restricted to a set of known or expected active locations on the protein. With those constraints, Professor Khanna was only able to simulate 50,000 compounds, which yielded a grand total of four possible hits. She was about to give up on the project when she approached Cycle Computing. Using her Google Cloud cluster, she was able to simulate binding of a million compounds in just a...
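As a quick sanity check, the throughput and implied hourly rate follow directly from the figures quoted above. The per-instance-hour rate below is derived from the post's numbers, not taken from GCP's published price list:

```python
# Back-of-the-envelope check on the cluster figures quoted above.
cluster_cores = 5_000        # total vCPUs across the Preemptible VM cluster
cores_per_instance = 16      # n1-highcpu-16
wall_clock_hours = 4
total_cost_usd = 192

core_hours = cluster_cores * wall_clock_hours      # 20,000 core hours of docking
instances = cluster_cores // cores_per_instance    # ~312 instances
instance_hours = instances * wall_clock_hours
implied_rate = total_cost_usd / instance_hours     # $/instance-hour, derived

print(core_hours)              # 20000
print(round(implied_rate, 3))  # 0.154
```

Twenty thousand core hours of computation for under two hundred dollars is the whole point: at that price, the cluster only exists while the science needs it.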

Recognizing HPC Invention: Cycle Computing & Customers Receive 4 HPCWire Readers Choice Nominations

The arrival of autumn brings cooler air as well as thoughts of the annual SC event, high performance computing's (HPC) largest gathering. SC13 will be held in Denver Nov. 17-22, and is always one of my favorite events of the year. With so much going on, it's sometimes nice to step back and recognize the amazing work that's happened over the past year. That is exactly what the HPC Wire Readers Choice Awards (http://bit.ly/votecycle2013) are all about.

At Cycle Computing, we feel honored to be part of four different Readers Choice Award nominations where clients are using Cycle and Amazon Web Services to accelerate invention. These awards are really about our customers, and what they've been able to accomplish through technology. Cycle Computing has been nominated for three, and one of our customers makes it four. Because the nomination descriptions are somewhat limited, I've put together further background on each, and I encourage people to visit the HPC Wire Readers Choice Awards website and vote. Voting ends Saturday, Oct. 12, 2013.

1. Best use of HPC application in manufacturing

Johnson & Johnson, for its deployment of Cycle Computing SubmitOnce software in complex computer-aided design simulations with Dassault Systemes Abaqus

Challenges: Disparate internal clusters, lack of resources for large problem execution, and home-grown systems pain.

Solutions: Cycle Computing SubmitOnce – jobs are routed to internal and external clusters via configurable intelligence based on cluster loads, data size, and estimated runtimes. A single entry point for users, without having to understand complex backend systems.

Video: Presentation at Amazon Web Services Re:Invent http://www.youtube.com/watch?v=rhyjPgyG1Vs#t=343...

Enterprise HPC in the Cloud: Fortune 500 Use Cases

Cycle Computing invited some of our customers to AWS re:Invent to talk about the amazing science, analytics, and discovery they are executing at scale in concert with Cycle technology platforms. Our segment was called “Enterprise HPC in the Cloud: Fortune 500 Use Cases”. First up, Jason Stowe, Cycle Computing CEO, gives a wonderful overview of why Utility Supercomputing really matters for each and every one of us!

We also thought it would be wonderful to share our customers' activities with you, so you can see what leading players in the field of Utility Supercomputing are doing on a day-to-day basis. There are four amazing talks below; each does an excellent job of highlighting the challenges and solutions for its domain. The talks describe the use of utility supercomputers to solve real-world problems in Life Science, Finance, Manufacturing, Energy, and Insurance. They cover the latest and greatest in advanced workloads that are helping push these industries and technologies forward through the insights gained from big data and high performance computing, including a sneak peek into what the speakers see coming next.

First up, Kurt Prenger and Taylor Hamilton of J&J High Performance Computing:

Challenges: Disparate internal clusters, lack of resources for large problem execution, and home-grown systems pain.

Solutions: Cycle Computing SubmitOnce – jobs are routed to internal and external clusters via configurable intelligence based on cluster loads, data size, and estimated runtimes. A single entry point for users, without having to understand complex backend systems.

Next up: David Chang, Assistant Vice President of Pacific Life:

Challenges: Increasing complexity in product design with...
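The load-, size-, and runtime-aware routing described in the SubmitOnce talks above can be sketched as a simple scoring decision. This is a hypothetical illustration only; the field names, thresholds, and policy are our assumptions here, not Cycle's actual implementation:

```python
# Hypothetical sketch of SubmitOnce-style job routing: prefer an internal
# cluster that can start the job now, burst large jobs to the cloud.
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    load: float       # fraction of cores currently busy, 0.0-1.0
    free_cores: int

def route_job(clusters, cores_needed, data_gb, est_runtime_hr,
              cloud_name="cloud-burst", max_cloud_data_gb=1_000,
              min_cloud_core_hours=100):
    """Route to the least-loaded internal cluster that can start the job now;
    burst big jobs with modest data volumes to the external cloud cluster."""
    internal = [c for c in clusters if c.name != cloud_name]
    fits = [c for c in internal if c.free_cores >= cores_needed]
    if fits:
        return min(fits, key=lambda c: c.load).name
    # No internal cluster can start it: burst to the cloud if the job is
    # big enough to justify spin-up and its data is cheap enough to move.
    if (cores_needed * est_runtime_hr >= min_cloud_core_hours
            and data_gb <= max_cloud_data_gb):
        return cloud_name
    # Otherwise queue on the least-loaded internal cluster.
    return min(internal, key=lambda c: c.load).name

clusters = [
    Cluster("chem-hpc", load=0.9, free_cores=64),
    Cluster("eng-hpc", load=0.4, free_cores=256),
    Cluster("cloud-burst", load=0.0, free_cores=100_000),
]
print(route_job(clusters, cores_needed=128, data_gb=50, est_runtime_hr=2))     # eng-hpc
print(route_job(clusters, cores_needed=2048, data_gb=500, est_runtime_hr=8))   # cloud-burst
print(route_job(clusters, cores_needed=2048, data_gb=5000, est_runtime_hr=8))  # eng-hpc
```

The point of the single-entry-point design is visible in the sketch: the user supplies the job's shape once, and the policy, not the user, decides which backend runs it.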

Fortune 500s discuss Cloud HPC, Utility Supercomputing @ Cycle’s re:Invent session

As many of you know, at Cycle we think that giving every researcher, scientist, engineer, and mathematician access to the compute power they need, exactly when they need it, will enable humanity to achieve what it's truly capable of. So we organized five Fortune 500 Cycle customers of ours to talk at AWS re:Invent at 1pm Wednesday the 28th about cloud, HPC, and utility supercomputing. Whether it's building safer medical devices, managing risk, quantifying genomes at scale, protecting hard-earned retirement funds, or finding medicines to cure disease, they'll be talking about how they use the cloud to do it!

At 1pm tomorrow (Wednesday), come to "Enterprise HPC in the Cloud: Fortune 500 use cases" in room 3401A to see:

- Hartford Life
- Johnson & Johnson
- Life Technologies
- Novartis
- Pacific Life

If you can't make the session, come to Cycle's Booth #220 and we can talk more...

BigData, meet BigCompute: 1 Million Hours, 78 TB of genomic data analysis, in 1 week

It seems like every day at Cycle we get to help people do amazing work, but this week is a little different. This week we wrapped up our involvement in the amazing work by Victor Ruotti of the Morgridge Institute for Research, winner of the inaugural Cycle Computing BigScience Challenge. In the name of improving the indexing of gene expression in differentiated stem cells, Cycle's utility supercomputing software just finished orchestrating the first publicly disclosed 1,000,000+ core hour HPC analysis on the cloud. Yes, that's 1 million core hours, or over a ComputeCentury™ of work, on a total of 78 TB of genomic data, in a week, for $116/hr!

To put this 115 years of computing into context, the word 'computer,' meaning a mechanical calculating device, was first used in 1897. So if you had started this run on a one-core computer when the term was first used, and kept it running through World War I, jazz, the Roaring '20s, the Great Depression, WWII, big bands, the start of rock 'n' roll, the Cold War, the Space Race, the Vietnam War, disco, the '80s, grunge, techno, hip hop, reality TV, and up to Gangnam Style, Victor's analysis would be finishing now, sometime in 2012. Now that's a lot of compute.

Below, we're going to explain the details of the analysis and how it was executed, but if you're short on time, please skip to why this is important.

Cycle Computing BigScience Challenge Overview

About a year ago, we were very excited to announce the Challenge, a contest aimed at breaking the computation limits for any researchers working to answer questions that will help humanity....
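The ComputeCentury™ arithmetic above is easy to verify, using only the figures quoted in the post:

```python
# Convert the run's 1,000,000+ core hours into single-core calendar years.
core_hours = 1_000_000
hours_per_year = 24 * 365.25        # average year, including leap days

years_on_one_core = core_hours / hours_per_year
print(round(years_on_one_core, 1))  # just over 114 years on a single core;
                                    # the '+' in 1,000,000+ pushes it toward 115
```

A week of wall-clock time covering more than a century of single-core work is exactly what the "utility" in utility supercomputing means: the parallelism is rented only for the days it is needed.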