Cycle Computing in the News
Be sure to read the Cycle Computing Blog here.
Cycle Computing™, the leader in cloud computing orchestration software for Big Compute and Big Data, today announced that its CEO, Jason Stowe, will deliver the introduction to one of the keynote speeches at the Bio-IT World Conference & Expo, to be held April 5-7 in Boston. Stowe will introduce the panel “Big Data, Big Science, Smart Medicine” and will also speak at two additional sessions…
Cycle Computing™, the leader in cloud computing orchestration software for Big Compute and Big Data, announced that it will be at the HPC for Wall Street 2016 conference, to be held April 4 at the Roosevelt Hotel in New York City. Cycle will be located at booth 105, demonstrating the latest version of its CycleCloud software.
Cycle Computing™, the leader in cloud computing orchestration software for Big Compute and Big Data, announced a collaboration with NASA to determine the biological mass of bushes and trees over the entire arid and semi-arid zone on the south side of the Sahara.
Cycle Computing™, the leader in cloud computing orchestration software for Big Compute and Big Data, unveiled the latest version of its flagship product CycleCloud™ v5.
Cycle Computing™, the leader in cloud computing orchestration software for Big Compute and Big Data, is proud to announce that it was recently awarded a “method and system patent for automatically detecting and resolving infrastructure faults in cloud infrastructure.”
Cycle Computing™, the leader in cloud computing orchestration software for Big Compute and Big Data, announced today that Rob Futrick, CTO of Cycle Computing will be the Keynote speaker at DataCloud 2015, the 6th International Workshop on Data Intensive Computing in the Clouds.
Cycle Computing™, the leader in cloud computing orchestration software for Big Compute and Big Data, announced the addition of Rick Friedman to the management team as Vice President of Solutions.
If Google is serious about offering an alternative to A.W.S., it still has a way to go. Cycle Computing, the company that specializes in cloud supercomputing and set up the project for Broad, has done projects on A.W.S. that use 150,000 cores.
The Broad Institute will use Google Cloud Engine’s pre-emptible VMs, Cycle Computing’s orchestration, and machine learning to parse cancer cell and genetic data.
The Broad Institute used Cycle Computing’s service and preemptible VMs for its genomics research, for example. “Preemptible VMs are a great new offering for big compute, analytics, and batch workloads, and we’re happy to support them as an option for our customers. The Broad Institute’s story showcases a production workload running in 6 hours instead of 6 weeks, thanks to Preemptible VMs”, said Jason Stowe, CEO of Cycle Computing.
One company already taking advantage of this service is high-performance computing services provider Cycle Computing, which used the preemptible VMs to set up a system on behalf of one of its clients, Broad Institute.
Cycle enabled CycleCloud, DataMan and various other aspects of the Cycle toolkit to work on the Google Cloud Platform. Setup took about an hour and a half, Stowe said. “We essentially ran this cluster template at a small scale within 90 minutes, having the executables—the workload management system, the batch computing environment, the shared file system—all of those components were just building blocks that we have lying around.”
These ties with Google Compute Engine are being announced in tandem with two related news items, one from Cycle, the other from Google. First is the fact that Cycle Computing has been working with the Broad Institute to make the move to the Google cloud for a cancer research workload that ate 50,000 cores on GCE.
In this podcast, Jason Stowe from Cycle Computing describes how the Broad Institute is mapping cancer genes with CycleCloud.
Cycle Computing recently ran a 50,000+ core workload for the Broad Institute using CycleCloud with Preemptible VMs, performing three decades of cancer research computations in an afternoon.
“Kubernetes combined with Docker enables the microwave for computation—you no longer have to wait 45 minutes for your casserole.” – Jason Stowe, Cycle Computing CEO.
Most CNCF members are not tied to a specific infrastructure stack. Companies like Box, Cycle Computing and Twitter, want to advance the cause of cloud-native applications, but they don’t necessarily care where those applications run.
Today Cycle Computing announced its participation in the Cloud Native Computing Foundation. Through its membership, Cycle Computing will enable enterprise users with big compute problems to fully leverage the advantages of cloud by running their workloads on cloud native software.
“Since our founding, Cycle has operated off of the simple premise that easier access to computing and storage resources increases innovation by decreasing the time it takes to solve compute challenges. Our participation in the Cloud Native Computing Foundation is a logical step, and we are honored to be a part of it,” said Jason Stowe, Cycle Computing CEO. “Containers and Kubernetes will help increase access to compute for big data, analytics, and batch computing, and we look forward to collaborating with the community to remove the barriers to resources that analysts, engineers and scientists depend on.”
Open-source communities seem to do a better job than standards committees in creating new software that sticks to a common plan.
Innovation in open cloud technology is accelerating at a pace the IT industry has not seen before. Growth of cross-cutting open technology initiatives is welcome news for today’s enterprise developer, but at the same time presents integration and interoperability challenges, especially as developers weave in the latest container and microservices technologies.
No fewer than 22 companies are coming together today to establish the Cloud Native Computing Foundation, a standards body that will provide shared governance for tools that companies can use when deploying applications in Linux containers — an alternative to longstanding virtual-machine technology.
The Linux Foundation today is announcing the official formation of the Cloud Native Computing Foundation in a bid to advance cloud application management and interoperability.
The newly formed open source foundation, a Linux Foundation collaborative project, plans to create and drive adoption of common container technologies at the orchestration level, and integrate hosts and services by defining common APIs and standards.
Kubernetes, the container management solution originally open sourced by Google, is deemed ready for mainstream use as it reaches version 1 today. Alongside this software versioning milestone, Google is joining a host of familiar names in a new Cloud Native Computing Foundation, under the auspices of the Linux Foundation.
The Linux Foundation in collaboration with 18 vendors and IT organizations announced today the formation of a Cloud Native Computing Foundation, which is committed to validating reference architectures for integrating various technologies built on top of Docker containers.
The Linux Foundation has announced the Cloud Native Computing Foundation. ‘This new organization aims to advance the state of the art for building cloud native applications and services, allowing developers to take full advantage of existing and to-be-developed open source technologies. Cloud native refers to applications or services that are container-packaged, dynamically scheduled and microservices-oriented.’
Innovate or die. How many times have we heard this mantra in our careers? The difference is that today’s modeling…
Jason Stowe, CEO of Cycle Computing, has made a name for his company as a developer and provider of software to enable greater access to utility supercomputing, leading to an era of increased invention, and scientific discovery.
The other day I received information about an experiment that Cycle Computing ran to assess just how quickly a massively scaled computational cluster could be created. It’s one of those examples of doing something partly because it is useful, but also partly because it is just insanely cool. The problem that Cycle was trying to solve was the simulated testing of physical drive heads.
Slidecast: Cycle Computing Powers 70,000-core AWS Cluster for HGST
Has Cloud HPC finally made its way to the Missing Middle? In this slidecast, Jason Stowe from Cycle Computing describes how the company enabled HGST to spin up a 70,000-core cluster from AWS and then return it 8 hours later.
… [Novartis] relied on AWS EC2 and Cycle Computing software to create and access a $44 million supercomputer for just $5,000. In only 11 hours, the Novartis researchers had three potential cancer-fighting compounds to run further trials on.
… long-time HPC community member Tim Carroll talks about the future of high performance computing in the cloud. Before joining Cycle Computing, Carroll was at Dell, where he was focused on the HPC market segment.
Celebrating a year of record-growth, Cycle Computing has added HPC executive and startup veteran Tim Carroll as Vice President of Business Development and Ecosystem; and has also promoted Rob Futrick to Chief Technical Officer (CTO). The buildout of the management team will leverage and accelerate rapid growth of revenue, new customers and core hours under management.
In this video from ISC’14, Brad Rutledge from Cycle Computing describes how the company brings greater access to supercomputing resources. By making it easy to spin up large clusters, the company is breaking records of scale using Cloud HPC.
Cycle Computing seems to be on a roll at the moment, as the company is hiring in multiple U.S. locations. And in related news, Cycle Computing announced today that Novartis is using Cycle Computing to discover new cancer-fighting drugs.
Schrödinger, LLC and Cycle Computing, LLC announced today a partnership that will allow customers to run Schrödinger’s Materials Science Suite on the Cloud and elastic resources worldwide using Cycle Computing’s CycleCloud orchestration software.
Scientific Computing: Big Compute: The Collision Where HPC Meets the Challenges of Big Data
There is a shift underway where researchers, engineers, and analysts can change the very way they think about problems. Previously, we have been limited by the computing resources we have — the clusters we have on premises. Today, we can change the very way we ask our questions. Ask the right questions — and use the cloud to create the size of system needed to answer them.
HPCwire: Scaling the Super Cloud
“The number one problem we face as humanity is getting people to think outside of the boxes they bought,” says Cycle Computing CEO, Jason Stowe. His company has made big waves and proven that the combination of Amazon servers and their own innovations can open new infrastructure options for users with HPC applications.
Jason Stowe, CEO of HPC cloud company Cycle Computing, put it best when he told us, “We in HPC pay attention to the fastest systems in the world: the fastest CPUs, interconnects, and benchmarks. From petaflops to petabytes, we [in HPC] publish and analyze these numbers unlike any other industry…While we’ll continue to measure things like LINPACK, utilization, and queue wait times, we’re now looking at things like Dollars per Unit Science, and Dollars per Simulation, which, ironically, are lessons that have been learned from enterprise.”
Cycle Computing announced the addition of three executives, Gavan Corr as chief strategy officer, Robert Petrocchi as vice president of worldwide sales, and Brad Rutledge as vice president of marketing. This team will further help educate enterprises about the benefits and advantages of Utility HPC, grow the Cloud HPC ecosystem, and implement leading technology and processes to quickly and easily onboard Cycle Computing customers.
Amazon has been smart to focus on new markets, whether it is online storage providers such as Dropbox or HPC-in-the-cloud services such as Cycle Computing. They are the ones who are cloud-native and have created infrastructure demand that is many times that of the legacy companies.
The New York Times: IBM to announce more powerful Watson via the Internet
On Tuesday, a company appearing at the Amazon conference said it had run in 18 hours a project on Amazon’s cloud of computer servers that would have taken 264 years on a single server.
“It’s now $90 an hour to rent 10,000 computers,” the equivalent of a giant machine that would cost $4.4 million, said Jason Stowe, the chief executive of Cycle Computing, the company that did the Amazon supercomputing exercise, and whose clients include The Hartford, Novartis, and Johnson & Johnson. “Soon smart people will be renting a conference room to do some supercomputing.”
Cycle Computing, which divvies up workloads to run across AWS regions and zones, has been able to run and manage Schrödinger’s quantum chemistry software on a whopping 156,000 cores across 8 AWS regions.
What runs faster than the majority of the world’s supercomputers, costs less, and was used to research organic solar-power cells? The answer is Megarun, a 1.21-petaflop super that was spun up by Cycle Computing in the Amazon cloud.
For the past few years, HPC software company Cycle Computing has been helping researchers harness the power of Amazon Web Services when they need serious computing power for short bursts of time. The company has completed its biggest Amazon cloud run yet, creating a cluster that ran for 18 hours, hitting 156,314 cores at its largest point and a theoretical peak speed of 1.21 petaflops.
Supercomputing, by definition, is never going to be cheap. But a company called Cycle Computing wants to make it more accessible by matching computing jobs with Amazon’s mammoth computing infrastructure.
Cycle Computing has a software management platform that controls the hundreds of thousands of virtual machines that are needed to run these types of jobs. Life science testing is a perfect fit for this software because of the massive amounts of options that are available to scientists to test a broad range of theories.
Podcast: SiliconANGLE’s theCUBE: http://www.youtube.com/
SiliconANGLE is a place where computer science meets social science. Cycle Computing’s Jason Stowe was interviewed on a live video stream discussing the record-setting MegaRun.
In this week’s Structure Show, we talk through all the AWS Re:Invent news; the looming era of open-source switches, and how Cycle Computing helps a scientist build the solar panels of the future.
The cloud: High performance computing’s best hope?
OCTOBER 11, 2013 | ZDNet
At the recent ISC Cloud ’13 conference, Jason Stowe, CEO of Cycle Computing, presented an interesting assessment of the growing needs many companies have for on-demand high performance computing. Cycle Computing believes that the easy availability of high performance computing — that is, the ability to address the largest and most complicated computing task by harnessing together the power of hundreds, or perhaps thousands, of computers — will improve the capabilities of many companies that previously were not able to use high performance computing. Organizations that wish to use this approach need a large budget for hardware, software, power, networking and storage, as well as high levels of expertise on hand — unless they turn to offerings from cloud service providers.
Read full article »
Bidding strategies? Arbitrage? AWS spot market is where computing and finance meet.
OCTOBER 8, 2013 | GigaOM
Amazon last week launched a contest for companies to show their Spot Instance pricing strategies, with $5,000 in AWS credits going towards the best use cases and $3,000 in credits going to the runner up. But the second year of the contest is as good a time as any to look at the often-mysterious beast that is AWS Spot Instances. While not often used, they are an important element in Amazon’s bag of tricks as well as something that startups are using to save tens of thousands on certain workloads. I’ve spoken with several companies to understand the tips, tricks and strategies involved in playing the AWS spot market.
Read full article »
Flexibility — for HPC, clouds, and the workforce.
OCTOBER 2, 2013 | iSGTW
To build an HPC cluster in house, or to access third-party HPC resources through the cloud: that is the question. While it may not be quite as poetic as Hamlet, this is the conundrum with which many small-to-medium-sized enterprises and research institutes are faced. Organizations interested in conducting computationally expensive data analysis or carrying out complex simulations have to decide whether to build in-house HPC clusters, or take advantage of the availability of such clusters through cloud offerings. Both options have their relative pros and cons, but the message from last week’s ISC Cloud ’13 conference in Heidelberg, Germany, is that there is increasingly a very clear middle way. Termed ‘utility HPC’ by keynote speaker Jason Stowe, CEO of Cycle Computing, this middle way involves organizations owning in-house HPC resources of sufficient performance to cover their typical usage, but also supplementing this on an ad hoc basis with additional cloud-based HPC resources for particularly computationally expensive projects — in other words, HPC clouds to cover the peaks.
Read full article »
Cloud HPC Firm Dares Scientists to Ask Big Questions
AUGUST 20, 2013 | HPCwire
Cloud-based supercomputing is, theoretically, a great idea, but the trend has not taken off as some in the HPC field believed it would. That isn’t stopping the folks at Cycle Computing, who say their Amazon-based supercomputers are not only helping scientists and researchers get real work done, but freeing their brains to ask the really big questions. Scientific creativity is being hamstrung by the finite resources of traditional fixed-size supercomputing infrastructures, Cycle Computing CEO Jason Stowe said in a recent video. While all kinds of advances are being made in the HPC arena — particularly on the software side — all too often, scientists and researchers can’t adequately explore their ideas or ask the big questions due to a sheer lack of HPC capacity.
Read full article »
Still no end in sight for US banks’ parallel run
AUGUST 13, 2013 | Op Risk and Regulation
Since last month’s announcement from the US Federal Reserve on Basel III implementation, US institutions finally have certainty about their future capital requirements. Banks, the Fed said, will be forced to hold common equity Tier I capital equivalent to 7% of their risk-weighted assets, reflecting a minimum ratio of 4.5% and an additional capital conservation buffer of 2.5%. Banks will also face a higher leverage ratio of 4%, rather than the 3% minimum under Basel III, which the Fed will introduce on January 1, 2014. So while US banks now have a much clearer roadmap to execute their Basel III implementation programmes, they remain on parallel run for Basel II compliance. This means that they are still waiting to have their advanced measurement approaches (AMA) to operational risk approved.
Read full article »
The Promise of Utility Supercomputing
AUGUST 9, 2013 | Wired
Imagine this: You’re a computational drug designer at a Big 10 Pharma studying the pathway for a cancer target. According to a GLOBOCAN/World Health Organization statistic from 2008, there are 12.6+ million new cases of cancer globally, and you’re in the trenches on this fight. Now, a cancer target is a protein that, much like a lock, has a pocket where molecules can fit, like keys, to either enhance or inhibit its function. The problem is, rather than the tens of keys on a normal keychain, you have tens of millions of molecules to check. Each one is computationally intensive to simulate, so in this case, you have approximately 340,000 hours of computation, or nearly 40 compute years, ahead of you. Now imagine you need to propose to your management that you run this workload, and to do so in a timely fashion you need about 10,600 servers of infrastructure. Chirag Dekate of IDC says this equates to a 14,400 square foot data center that would take a year to get up and running, at a total cost of $44 million when you factor in space, cooling, power, cabling, and the process of hiring the people with expertise to run it. It is safe to say this science would never happen.
Read full article »
Cycle Computing and the HPC Experiment
JULY 15, 2013 | HPC in the Cloud
With hardware advancing at a relatively stable (if still exponential) rate and datasets increasing at a much higher rate, parallelism is a main tenet of high performance computing today. That parallelism is difficult to attain in a cloud environment, as latencies there are typically higher, thus slowing performance. Three weeks ago, Jason Stowe, CEO of Cycle Computing, spoke with HPC in the Cloud about their work in renting large clusters of Amazon HPC instances for companies looking for a short but powerful burst of that parallelized computing power. The focus was on how they aided Schrodinger in winning a Bio-IT Best Practices award with their intensive yet relatively inexpensive protein calculations.
Read full article »
The Cloud’s the Limit: Rentable Supercomputers for Improving Drug Discovery
JULY 11, 2013 | Bio-IT World
Creating a computer program that accurately tells pharmaceutical companies which candidate drugs they should spend millions of dollars developing may seem like a daunting task, but Schrodinger, a software company that specializes in life science applications, hopes to do just that. “Our mission is to advance computational drug design to the point of becoming a true enabling technology,” said Alessandro Monge, Schrodinger’s VP of Strategic Business. Schrodinger won the Bio-IT World Best Practice Award for IT Infrastructure at the Bio-IT World Expo this past April for a drug discovery project they ran in collaboration with Cycle Computing that harnessed the power of cloud-based computing, a tool that allows companies to rent high performance computing hardware.
Read full article »
Big data spurring HPC, co-processor workloads
JUNE 17, 2013 | Virtualization Review
High performance computing systems are increasingly using co-processor systems, with Intel and Nvidia seen as a key tag team for big data workloads, according to IDC. In a study detailing high performance computing (HPC) sites, IDC looked at 905 systems. In 2011, IDC profiled 488 HPC systems. The two-year jump largely highlights how 67 percent of HPC sites are now focused on big data workloads, said IDC. The study from IDC corresponds with the latest Top 500 supercomputer ranking.
Read full article »
Cycle Computing CEO to Speak on Utility HPC at Cloud Slam and LiveStream Event
JUNE 12, 2013 | HPCwire
June 12 — Cycle Computing, the leader in utility high performance computing (HPC) software, announced today that CEO Jason Stowe will speak at Cloud Slam ’13. On Tuesday, June 18 at 10:45am-11:15am PST, Stowe will present in person at the conference on the benefits of accessible compute power and the implications for science. Stowe’s talk will include a number of HPC case studies in Life Sciences, in such areas as cancer drug research and stem cell indexing. In addition to his presentation on accessible compute, Stowe will give an online talk on Tuesday, June 18 at 4:40pm-5:00pm PST, focused on large-scale HPC workloads on Intel Xeon processors in the cloud. Stowe will discuss case studies across verticals such as Life Sciences, Financial Services and Manufacturing. This presentation is sponsored by Intel Healthcare and will be available live online via LiveStream. Interested attendees can visit the CloudSlam LiveStream channel.
Read full article »
Cycle Computing to Speak on Utility HPC at AWS Summit Tokyo
What: Cycle Computing, the leader in utility high performance computing (HPC) software and an AWS Advanced Technology Partner, announced today that CEO Jason Stowe will speak at the AWS Summit Tokyo 2013. Stowe will be a featured guest speaker during Amazon CTO Werner Vogels’ opening keynote on Wednesday, June 5. During his talk, Stowe will discuss the success Cycle’s customers have had leveraging Cycle software and Amazon’s EC2 to run large scale, complex HPC workloads in such areas as Drug Discovery, Manufacturing and Genomics.
In addition to his talk during Vogels’ keynote, Stowe will give a session during the AWS Partner Briefing on Tuesday, June 4. Stowe will discuss how the ability to orchestrate Utility HPC and data access creates new opportunities for AWS Partners to grow their business in a variety of vertical markets.
The AWS Summit Tokyo features over 63 sessions focused on the cloud. Over two days attendees will hear from over 20 companies with relevant use cases focused on the latest technology trends in cloud computing.
Both of Stowe’s talks will be translated to Japanese.
When: Wednesday June 5, 2013 – Thursday June 6, 2013
Where: Grand Prince Hotel New Takanawa, Tokyo
Who: Cycle Computing CEO Jason Stowe will discuss HPC cloud-based use cases and how using the cloud has made impossible science possible. Stowe will share his thoughts on the future of cloud computing and the democratization of compute power. In addition, Stowe will discuss how Cycle’s customers are leveraging its new data management product, DataManager, to schedule and manage the secure transfer and storage of data sets needed for large-scale computations.
To schedule a briefing with Jason Stowe at the event, contact Shaina Mardinly at 212-255-0080 ext. 15 or firstname.lastname@example.org.
About Cycle Computing
Cycle Computing is the leader in Utility HPC software. As a self-funded, profitable software company, Cycle makes award-winning products that accelerate breakthroughs at any scale. From 50 to 50,000+ cores against up to 100s of TBs of data, the world’s brightest minds rely on Cycle software to tackle their most challenging computational problems in less time, for less cost than ever before possible. Cycle software provides the single pane of glass from which customers and partners easily orchestrate complex workloads and data across a right-sized set of internal and external HPC resources. Cycle helps clients maximize existing infrastructure and speed computations on servers, VMs, and on-demand in the cloud, like the 10,000-core cluster for Genentech, the 30,000+ core cluster for a Top 5 Pharma, and the 50,000-core cluster for Schrödinger covered in Wired, The Register, BusinessWeek, Bio-IT World, and Forbes. Since 2005, starting with three initial Fortune 100 clients, Cycle has grown to deploy proven implementations at Fortune 500s, SMBs and government and academic institutions including JP Morgan Chase, Purdue University, Pfizer and Lockheed Martin.
# # #
Media Contact: Shaina Mardinly, Articulate Communications Inc., email@example.com, 212.255.0080, ext. 15
Schrödinger Named Bio-IT World Best Practices Grand Prize Winner
Big 5 Pharma Leverages Cycle Computing Software to Win Bio-IT World’s IT Infrastructure Grand Prize
New York – May 21, 2013 – Schrödinger, Inc., a scientific leader in chemical simulation for pharmaceutical and biotechnology research, was named the IT infrastructure grand prize winner of Bio-IT World’s Best Practices Awards for a 50,000-core utility supercomputer orchestrated by Cycle Computing, the leader in Utility HPC software. Conducted in the Amazon Web Services (AWS) cloud, the environment was created to accelerate the screening of potential new cancer drugs.
Schrödinger’s researchers used Cycle’s HPC software to orchestrate the cloud computing resources needed to complete more than 4,480 days of work, nearing 12.5 years of computations, in less than three hours. The project cost less than $4,828 per hour at peak and required no upfront capital. Schrödinger had previously been conducting coarser screens due to the constraints of their internal infrastructure. In contrast, access to large scale yet cost effective computing made it possible to conduct much more granular screens on a significantly larger number of compounds. This approach identified many compounds that were good potential drug candidates that would have otherwise not been discovered.
“We’re honored that our project was recognized by the Bio-IT World judges,” said Dr. Alessandro Monge, Schrödinger’s VP of strategic business. “With the level of sophisticated technology that Cycle provided us, we have significantly eliminated false negatives and false positives that delay drug discovery. The same calculation would’ve been cost prohibitive on our own infrastructure.”
“Our work with Schrödinger demonstrates how scientists can take advantage of innovative technology to complete better research faster and for exponentially less cost,” said Jason Stowe, founder and CEO, Cycle Computing. “We’re thrilled to create HPC environments to empower Schrödinger’s drug discovery breakthroughs and are honored by Bio-IT World’s recognition of their efforts.”
“We extend our sincere congratulations to the winners of this year’s Bio-IT World Best Practices Awards competition,” said Kevin Davies, editor of Bio-IT World. “Our select judges enjoyed evaluating the dozens of excellent entries received this year, and believe that the contest has highlighted some truly innovative, game-changing tools and solutions. Our winners should be very proud that they have captured the imagination and respect of such a distinguished jury.”
Schrödinger makes significant investments in R&D, which has led to major advances in the field of computational chemistry; it has achieved breakthroughs in quantum chemistry, molecular modeling, force fields, molecular dynamics, protein structure determination, scoring, and virtual screening. The company’s full product offerings range from general molecular modeling programs to a comprehensive suite of drug design software. Besides the company’s industry-leading drug discovery solutions, Schrödinger is actively developing state-of-the art simulation tools for materials research as well as enterprise software that can be deployed throughout an entire research organization. Schrödinger’s methods development and applications papers have thousands of citations and are often among the most-cited scientific publications. Schrödinger’s science is continually validated internally and by its users worldwide. Founded in 1990, Schrödinger has operations in the United States as well as in Europe, India, and Japan.
About Bio-IT World
Part of the Cambridge Healthtech Institute Media Group, Bio-IT World provides outstanding coverage of cutting-edge trends and technologies that impact the management and analysis of life sciences data, including next-generation sequencing, drug discovery, predictive and systems biology, informatics tools, clinical trials, and personalized medicine. Through a variety of sources including Bio-ITWorld.com, the Weekly Update Newsletter and the Bio-IT World News Bulletins, Bio-IT World is a leading source of news and opinion on technology and strategic innovation in the life sciences, including drug discovery and development.
# # #
|The Chef Feeding Facebook’s Infrastructure|
|MARCH 12, 2013, Datacenter Dynamics. Jesse Robbins talks really fast. It is the pace of someone who is very excited, and when you first meet him it doesn’t take long to realize why. Robbins is a co-founder of Opscode, a company born out of a consultancy building fully automated infrastructure for startups. Robbins came from Amazon.com, where he was responsible for website availability. Adam Jacob, another founder, had been building new infrastructures for startups and had worked as a systems administrator and architect; Barry Steinglass, the third partner behind Opscode, was an early member of the Xbox platform team. The final founder, Nathan Haneysmith, used to be the Linux platform lead for IBM e-Business Web hosting. The company they founded now has 400 paying customers, including Brightcove, DreamHost and Splunk, who use the hosted and private offerings of Chef, Opscode’s infrastructure automation tool. It also has more than 800 “cookbooks,” shared recipes for code released by its open source community, whose users number in the thousands. And just as I talked to Robbins, Opscode had secured one of the biggest technology wins a tech company today could ask for: Facebook. Even more exciting, the social media giant is actually paying for Chef, which found its roots in open source, taking on services and cementing its value above the open source offering. Read full article »|
|Cloud Computing: Where Are We Now?|
|MARCH 6, 2013, Inc’s Productivity@Work Newsletter. As is often the case in technology development, the advantages cloud computing provides to larger organizations are beginning to trickle down to smaller ones, creating a world of new opportunities for small and medium-sized businesses. Cloud-computing solutions offer an inexpensive alternative for SMBs looking to save money (potentially, a very sizable amount) on their IT costs, and the new technology can help them level the playing field with larger competitors. “IT begins and ends with increased efficiency and cost savings,” says Jim Darragh, CEO of Abiquo, a provider of advanced enterprise cloud software solutions. IT departments are working to find solutions to answer user demand, and that is especially true of SMBs, which typically have smaller budgets and must respond appropriately to both employee and market demand in order to survive. “So if SMBs can adapt to the cloud successfully, they are removing manual processes and installing automated, or at least very-easy-to-use processes, and that’s a recipe for increased productivity and cost savings,” he says. Read full article »|
Cycle Computing Introduces DataManager™ to Lower Storage Costs and Ease Big Data Access for HPC Workloads
Utility HPC Software Provider Automates Secure, Large Scale Data Transfers to and from Amazon Glacier On-demand
New York – April 18, 2013 – Cycle Computing, the leader in utility HPC software, today announced the release of DataManager™, a new solution that schedules and manages the secure transfer and storage of data sets needed for large scale computations. DataManager seamlessly automates data archival and retrieval from lower cost cloud storage solutions, such as Amazon Glacier. This enables users to more quickly and cost-effectively conduct a wide range of compute and data intensive workloads in areas such as life sciences, financial services, manufacturing, academia and energy.
HPC workloads from molecular modeling to risk simulation require analysis of increasingly large volumes of data. Access to affordable compute has driven down the cost of generating this data faster than the cost of storing it has fallen. As a result, organizations want to reliably and securely leverage internal and cloud-based storage solutions to ensure their users have local access to the data needed to run these complex workloads.
“We introduced DataManager to meet the growing need for scientists to more efficiently manage large amounts of data to foster new discoveries in their fields,” said Jason Stowe, CEO of Cycle Computing. “Researchers now have the ability to control the transfer of large computation results to and from various internal and cloud-based storage systems. With DataManager, intelligent scheduling and automation can keep data in the right place at the right time for reference and future research.”
Cycle uses DataManager to automatically move data, using a variety of open source and third-party transfer protocols and appliances, to utility HPC environments like the recently announced utility HPC cluster for a Top 10 Pharma. This 10,600-server utility supercomputer was created in 2 hours and ran 39 years of computing on Amazon EC2 in 9 hours, for $4,372, using one Opscode Chef server to automate configuration management. “DataManager does for automating data scheduling at scale what Opscode Chef does for configuration management in utility HPC clusters,” added Mr. Stowe.
Key benefits of DataManager include:
• Data elasticity: Efficiently move data from endpoint to endpoint, such as a local file system to Glacier or a remote file system, based on the specific date and time it is needed
• Data awareness and usage chargeback: Usage statistics enable accurate accounting and chargeback and user awareness of cost savings
• Client-side security and encryption: Locally control encryption and key management for data in transit and at rest in Amazon Glacier
• Lower latency and cost: Make cost-effective cloud archival more readily accessible to free up “hot” storage and avoid the need to buy additional on-premises devices
DataManager is now available for general use after being successfully tested and used in beta by two leading pharmaceutical companies and two of the top five life insurance organizations.
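The release does not describe DataManager’s internals, but the core idea of keeping data “in the right place at the right time” can be sketched in a few lines. The example below is a hypothetical illustration, not Cycle’s actual API: given a compute job’s start time and an assumed retrieval latency for each storage tier (cold archives like Amazon Glacier take hours to restore, local disk is immediate), it computes the latest moment a retrieval must begin so the data is staged locally before the job runs.

```python
from datetime import datetime, timedelta

# Assumed (illustrative) retrieval latencies per storage tier.
TIER_LATENCY = {
    "local": timedelta(0),
    "warm_object_store": timedelta(minutes=30),
    "cold_archive": timedelta(hours=5),  # Glacier-style restores take hours
}

def retrieval_deadline(job_start, tier, safety_margin=timedelta(hours=1)):
    """Latest moment a retrieval from `tier` can begin so the data is
    staged locally before `job_start`, with a safety margin."""
    return job_start - TIER_LATENCY[tier] - safety_margin

def should_trigger(now, job_start, tier):
    """True if the scheduler should kick off the transfer now."""
    return now >= retrieval_deadline(job_start, tier)

job_start = datetime(2013, 4, 19, 9, 0)   # compute job begins at 09:00
now = datetime(2013, 4, 19, 2, 0)         # scheduler wakes at 02:00

# Cold-archive deadline is 09:00 - 5h - 1h = 03:00, so at 02:00 we wait.
print(should_trigger(now, job_start, "cold_archive"))                        # False
print(should_trigger(now + timedelta(hours=2), job_start, "cold_archive"))   # True
```

A production scheduler would layer the pieces the release actually names on top of this decision: secure transfer protocols, client-side encryption, and per-transfer usage accounting for chargeback.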
# # #