Cycle Computing Collaborates with ANSYS on its Enterprise Cloud HPC Offering

CycleCloud to provide orchestration and management for leading engineering simulation cloud offering

New York, NY – (Marketwired – February 2, 2017) – Cycle Computing, the global leader in Big Compute and Cloud HPC orchestration, today announced that ANSYS has chosen its CycleCloud product to provide the orchestration and management behind the ANSYS® Enterprise Cloud™. ANSYS is the global leader in engineering simulation, bringing clarity and insight to its customers’ most complex design challenges.

Many ANSYS customers need to migrate simulation workloads to the cloud, looking to leverage dynamic cloud capacity to accelerate time to result, shorten product development cycles, and reduce costs. ANSYS Enterprise Cloud, an enterprise-level engineering simulation platform delivered on the Amazon Web Services (AWS) global platform using the CycleCloud software platform, enables this migration. It includes secure storage and data management, along with access to resources for interactive and batch execution that scales on demand within a virtual private cloud (VPC) for enterprise simulation.

“Our collaboration with Cycle Computing enables the ANSYS Enterprise Cloud to meet the elastic capacity and security requirements of enterprise customers,” said Ray Milhem, vice president, Enterprise Solutions and Cloud, ANSYS. “CycleCloud has run some of the largest Cloud Big Compute and Cloud HPC projects in the world, and we are excited to bring their proven software capability to our global customers with the ANSYS Enterprise Cloud.”

Cycle Computing’s CycleCloud will optimize the ANSYS Enterprise Cloud by orchestrating cloud HPC clusters running ANSYS software applications, ensuring optimal AWS Spot instance usage, and ensuring that appropriate resources are used for the right amount of time in the ANSYS...

Cycle Computing at IEEE Big Data Conference

We’ve seen how disruptive the cloud has been for compute workflows because of its elasticity and vast scale, but IEEE Big Data 2016 revealed that the cloud has been disruptive for Big Data workloads for other reasons. I attended the conference to present work done by NASA in collaboration with Cycle Computing entitled “Using Cloud Bursting to Count Trees and Shrubs in Sub-Saharan Africa.” The work describes large-scale analysis of satellite imagery using our event-based workflows. The session agenda favored downstream analysis, but the audience showed interest in data provenance and metadata awareness in the primary batch workflow.

George Percivall from the Open Geospatial Consortium detailed how we’re entering the era of data that is “born connected.” Users, services, and applications will derive value from linking various PB-scale data repositories. Whether they hold climate data, geological data, satellite imagery, or surface sensor readings, these data repositories will be created with rich metadata and served in the cloud. Mr. Percivall and his organization advocate for owners of these large datasets to share them in the cloud with consistent standards so that they’re accessible and consumable.

A complementary and very thought-provoking presentation by Brian Wilson from JPL about SciSpark, JPL’s Spark variant, showed us what a companion analysis architecture for these massive linked datasets is likely to look like. There has been consolidation around Spark-based analysis architectures in the last several years. Mr. Wilson described how Spark can be used optimally for just about any kind of analysis of big data with a careful design of the backend filesystem. He points out that for dense, well-structured data, HDFS is...

CycleCloud 6 feature: cluster upgrade button

This post is one of several in a series describing features introduced in CycleCloud 6, which we released on November 8.

When you create a CycleCloud cluster, it gets “pinned” to the version of CycleCloud you’re running. This means the images and automation tools stay within the same release series to prevent unexpected changes in behavior. When you upgrade your CycleCloud installation, clusters continue using the version they were created with until you decide to upgrade them. This allows you to upgrade clusters on your own schedule.

New in CycleCloud 6 is a button to perform the upgrade on terminated clusters. The “Upgrade” button appears for any cluster that is set to a previous version. If the cluster is terminated, you can click the button to upgrade the cloud images and automation suite. Running clusters cannot be upgraded, to prevent a cluster from running mixed versions. Once you click the Upgrade button, CycleCloud updates the internal record to use the new version. You can immediately click Start to launch your upgraded...

CycleCloud 6 feature: Improved HealthCheck framework

This post is one of several in a series describing features introduced in CycleCloud 6, which we released on November 8.

CycleCloud’s HealthCheck system provides a mechanism for detecting and terminating bad instances. The definition of “bad” can vary by use case, so HealthCheck provides a framework for custom scripts rather than prescribing a defined state. HealthCheck regularly runs customer-provided Python and shell scripts (Unix shell or Windows batch, as appropriate) on cloud instances.

New in CycleCloud 6, we’ve made it safer and easier for users to create custom health checks for their clusters. The HealthCheck system requires an explicit return code (254) to terminate an instance. This removes the possibility of an error in the script itself causing CycleCloud to terminate an instance. A return code of 0 still indicates a healthy system, and any other exit code is logged to CycleCloud’s Event Log.

At SC16? Stop by booth #3621 for a...
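To make the exit-code contract concrete, here is a minimal sketch of what a custom Python health check might look like. The contract (0 = healthy, 254 = terminate, anything else logged to the Event Log) is CycleCloud’s; the disk-space condition, threshold, and function name are hypothetical examples of our own.

```python
# Sketch of a custom CycleCloud health check. The exit-code contract
# (0 = healthy, 254 = terminate, other codes logged) is CycleCloud's;
# the disk-space check itself is a hypothetical example.
import os

HEALTHY = 0      # instance is healthy, leave it running
TERMINATE = 254  # explicit code CycleCloud requires before terminating

def check_disk(path="/", min_free_fraction=0.05):
    """Return TERMINATE when free space on `path` drops below the threshold."""
    st = os.statvfs(path)
    free_fraction = st.f_bavail / float(st.f_blocks)
    return TERMINATE if free_fraction < min_free_fraction else HEALTHY

# A real check script would end by exiting with the code, e.g.:
#     import sys; sys.exit(check_disk())
```

Note that a crash in the script exits with a code other than 0 or 254, which is merely logged, so a buggy check can no longer take down a healthy instance.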

CycleCloud 6 feature: Easier SSL certificate handling

This post is one of several in a series describing features introduced in CycleCloud 6, which we released on November 8.

Managing SSL certificates in Java can be a daunting task, but using them is important for secure communication with the server. CycleCloud 6 adds a new feature to help manage these certificates.

To create and install a self-signed certificate and generate a certificate signing request (CSR) to send to a certificate authority (CA) for signing, use the create_request subcommand. For example, to create a CSR for the host cycleserver.example.com:

    ./cycle_server keystore create_request cycleserver.example.com

This creates and installs a self-signed certificate, which CycleCloud begins using immediately with no restart required. The CSR is written to cycle_server.csr, which can be sent to your CA if you do not want to continue using a self-signed certificate. Once you have the signed certificate bundle, save it to a directory (for example, certs/) and run:

    ./cycle_server keystore import certs/domain.key certs/*.crt

Your CycleCloud installation is now using the signed certificate.

Dell EMC partners with Cycle Computing

The ongoing struggle of our industry is the promise of accelerating innovation versus the challenges of long queue times, limited environment options, and trying to deliver a platform that efficiently serves all types of users. This dilemma often forces us to say “no” to many new, unique user requests. Cloud is a tool to address that conundrum.

Dell EMC has embraced the notion that hybrid environments of internal and external resources will be the optimal path for customers. We are very excited to partner with the Dell EMC team to deliver a complete hybrid cloud solution for Big Compute workloads in a way that no one else can today.

OK, sounds interesting, but what does it really offer to those of us in the Big Compute world who are still working out where HPC in the cloud fits? This is exciting because people can start to solve the problems they face in today’s real-world HPC data centers.

For many reasons (physics and the speed of light driving some of them), HPC data centers are not going away any time soon. For all the benefits that cloud offers, there are still plenty of valid reasons for maintaining a “local” HPC data center. The ability to keep large volumes of data connected by very high-speed interconnects, with dedicated high-performance storage options and unique server configurations, can deliver real competitive value relative to technologies available at scale in the cloud. That said, there are also plenty of applications that currently run in HPC data center environments that don’t really need those specialized technologies...