December 20, 2020

Introducing EPC Enterprise Performance Computing™

Written by Tony Gaughan

2021 will be the year of EPC Enterprise Performance Computing™

As we reflect on the monumental changes that 2020 brought to the cloud industry, we’ve also been thinking about what will happen next, in 2021 and beyond. We built, and continue to build, RSTOR to be one step ahead, free of the cloud industry’s legacy approach. It’s been liberating to think about what customers need rather than how to shoehorn those needs into a 20-year-old system design.

2020 has been a year of growth and evolution for our customers and for RSTOR, too. This year we reimagined the software-defined cloud: with predictive data superpositioning, data can roam freely between storage nodes and be ever-present wherever our customers need it.

In 2021 we’re moving our attention to the cloud compute market and thinking through how customers can best execute their jobs. Note that our focus here is not the best way for cloud companies to make money, nor the best way to lock customers into a particular solution, but what is best for them. To us, putting customers’ needs first is just good business.

So as we look forward to cloud compute services, we’d like to introduce a concept for you to consider, which we call RSTOR EPC, for “Enterprise Performance Computing.” You may have heard of High-Performance Computing (HPC), one of those self-defeating acronyms (who doesn’t want their computing to be high performance?), but regardless it’s a phrase used most often to describe the massively parallelized computing systems behind the world’s largest supercomputers.

“One of the best-known types of HPC solutions is the supercomputer. A supercomputer contains thousands of compute nodes that work together to complete one or more tasks. This is called parallel processing. It’s similar to having thousands of PCs networked together, combining compute power to complete tasks faster.”
–NetApp
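
To make that idea concrete, here is a minimal sketch of parallel processing in Python: one job is split into independent tasks, farmed out to separate workers, and the partial results are combined into a single answer. On a real cluster the workers would be networked nodes; here they are just local processes.

# A minimal sketch of the parallel-processing idea described above:
# one job split into independent tasks that run on separate workers,
# then recombined. Real HPC distributes this across networked nodes;
# local processes stand in for them here.
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    """Stand-in for real work done on one node (here: a simple sum)."""
    return sum(x * x for x in chunk)

def run_job(data, workers=4):
    # Split the job into one task per worker.
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(process_chunk, chunks)
    # Combine partial results so the whole thing reports as one job.
    return sum(partials)

if __name__ == "__main__":
    print(run_job(range(1_000_000)))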

We know this space well: our sister company Sylabs’ Singularity was selected as the containerization solution for the world’s fastest supercomputer, RIKEN’s Fugaku. But HPC is a model that is not open to your average SMB or even Fortune 500 enterprise. You can’t book a supercomputer to run the monthly payroll, do some routine data analysis, or accelerate high-intensity workloads like 3D simulations. At RSTOR we ask, why not? If massive parallelization is the most efficient way to run compute, then we should make it available for all workloads, large and small.

You might argue that a small job doesn’t need this sort of supercomputer infrastructure. But let’s take a real-world example and analyze it. A frequent enterprise job aggregates data from multiple global sources, processes it (e.g., AI analysis), and then returns results and optimizations back to regional POPs. Maybe the process takes 24 hours and runs weekly. Why would you not want to speed that up? What business optimizations could a company make if the weekly analysis was done in one hour? Or ten minutes? Or constantly running, in real time? Especially if that compute could be done for a lower cost than the weekly run? How much efficiency could we unlock if we reduced the cost of, and the barriers to, that level of compute?
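
As a back-of-envelope illustration (with assumed, not measured, figures), Amdahl’s law shows how far parallelization can take that 24-hour job: the fraction of work that must stay serial sets the floor, and everything else divides across the workers.

# Back-of-envelope arithmetic for the example above: a 24-hour weekly
# job parallelized across N workers. The 2% serial fraction is an
# illustrative assumption, not a measurement.
def runtime_hours(total_hours, workers, serial_fraction):
    """Amdahl's law: the serial part runs as-is; the rest divides by N."""
    return total_hours * (serial_fraction + (1 - serial_fraction) / workers)

for n in (1, 24, 144, 1000):
    t = runtime_hours(24.0, n, serial_fraction=0.02)
    print(f"{n:>5} workers -> {t * 60:7.1f} minutes")

# With ~2% serial work, 144 workers take the job from 24 hours to
# roughly 39 minutes, and 1,000 workers approach the ~29-minute floor.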

At RSTOR we address the problem in terms of data availability and compute availability, plus the powerful network needed to combine the two. An HPC solution breaks a compute job down into smaller tasks that can be parallelized across many different machines. An Enterprise Performance Computing solution does the same, but takes the leap that those machines do not need to be co-located. What if we could run one job across thousands, or millions, of compute instances distributed at the edge, yet have it perform and report as one job? Then the compute could move closer to the data, avoiding the time wasted shipping data to regional availability zones. RSTOR’s data superpositioning does that today by automatically moving data to the closest compute location. Now you can see why we don’t charge for egress: we want data to move freely. In fact, we positively encourage it.

The selected compute instances could be on any number of compute service providers, including the major CSPs, and the compute selected would be whatever matches the minimum criteria of the job at the best available price. The key to this model is the ability to handle massively distributed data and massively distributed compute jobs seamlessly, so that despite the underlying complexity they are easy to deploy and manage securely. You can see why we launched the software-defined cloud with the scale to run all of these platforms, along with our own multi-point network.
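
To illustrate the selection step, here is a hypothetical sketch of that placement decision in Python. The names and fields are our own illustrative assumptions, not RSTOR’s actual API: filter the offers by the job’s minimum criteria, then rank by price and proximity to the data.

# A hypothetical sketch of the placement decision described above:
# given a pool of compute offers from different providers, pick the
# cheapest instance that meets the job's minimum requirements and is
# closest to the data. All names and fields here are illustrative
# assumptions, not a real RSTOR interface.
from dataclasses import dataclass

@dataclass
class ComputeOffer:
    provider: str          # any CSP, large or small
    region: str
    vcpus: int
    memory_gb: int
    price_per_hour: float
    km_to_data: float      # distance to the nearest data replica

def place_job(offers, min_vcpus, min_memory_gb):
    """Filter by minimum criteria, then rank by price and data proximity."""
    eligible = [o for o in offers
                if o.vcpus >= min_vcpus and o.memory_gb >= min_memory_gb]
    if not eligible:
        raise RuntimeError("no offer meets the job's minimum criteria")
    return min(eligible, key=lambda o: (o.price_per_hour, o.km_to_data))

offers = [
    ComputeOffer("csp-a", "us-east", 16, 64, 0.90, 40.0),
    ComputeOffer("csp-b", "eu-west", 16, 64, 0.75, 300.0),
    ComputeOffer("edge-1", "on-prem", 8, 32, 0.40, 2.0),
]
print(place_job(offers, min_vcpus=16, min_memory_gb=64))  # picks csp-b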

So in 2021, look to hear more about RSTOR EPC Enterprise Performance Computing. We see it as the future of cloud computing: effectively, cloud virtualization. In 2020 we’ve been urging customers to Take Control of their Data, and now you see why. Imagine what you can do with your data when it is superpositioned and empowered to run distributed compute jobs with the power of a supercomputer. It should be a Happy New Year.

All the best from our family to yours.
Tony
