The shift to cloud repatriation: Why organizations are making the change – Part 1


Over the past decade, there has arguably been no IT trend more transformative than the widespread availability of public cloud. With hyperscalers promising near-infinite scalability and flexibility while alleviating the need for organizations to spend on internal infrastructure, tools and personnel, companies have rushed headlong into a new era.

But more recently, as companies’ cloud strategies have matured, there has been a growing realization not only that the expected financial payoff from public cloud investments may prove elusive, but also that organizations risk sacrificing flexibility, security, and control when they go “all in” on public cloud. As a result, a growing number of companies are rethinking their cloud strategies and making more judicious decisions about where their most critical workloads should reside. This reconsideration has led to a gradual migration of workloads back out of the public cloud and into private cloud environments – “repatriation” – and reflects a recognition of an undeniable truth: the public cloud is simply not the optimal choice for every type of workload.

So how should organizations think strategically about the types of workloads that might benefit from repatriation? The decision about which workloads belong where really hinges on a deep understanding of their nature and the organization’s specific needs. Regardless of a company’s specific IT architecture, successful repatriation requires a nuanced approach and an understanding of how you want to access your data, what you need to protect and how much you are willing to spend.

In this first part of a two-part series, we’ll look at two of the four key factors driving the current wave of repatriation: edge computing and data privacy/sovereignty.


‘Living on the edge’ computing: Bringing workloads home

According to research from Virtana, most organizations currently employ some type of hybrid cloud strategy, with over 80% operating in multiple clouds and about 75% utilizing some form of private cloud. More recently, we’ve seen a shift toward edge computing, particularly in sectors such as retail, industrial, transit and healthcare, driven by the need for greater flexibility and control over computing resources. The development of the Internet of Things (IoT) has been critical here, as it has enabled the collection of a vast array of data at the network edge.

When the number of connected IoT devices at the edge was relatively small, it made sense for organizations to send the data those devices generated to the public cloud. But as devices have proliferated, there is considerable efficiency to be gained by collecting and analyzing data at the edge, including near real-time response and greater reliability for critical infrastructure such as point-of-sale systems and assembly lines.
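To make the idea concrete, here is a minimal sketch of edge-side aggregation: raw readings are summarized locally and only a compact result is forwarded upstream. The sensor simulation, the 25°C alert threshold and the forward_to_cloud() stub are illustrative assumptions for this example, not part of any particular platform.

```python
"""Minimal sketch: aggregate IoT readings at the edge, forward only summaries."""
import random
import statistics

def read_sensor_batch(num_sensors: int = 50) -> list[float]:
    # Stand-in for polling real devices; returns simulated temperature readings.
    return [20.0 + random.gauss(0, 2) for _ in range(num_sensors)]

def summarize(readings: list[float]) -> dict:
    # Local analysis: reduce raw readings to a compact summary and flag anomalies.
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": round(max(readings), 2),
        "alerts": sum(1 for r in readings if r > 25.0),  # illustrative threshold
    }

def forward_to_cloud(summary: dict) -> None:
    # Stub: in a real deployment this might post to a central API or message queue.
    print("forwarding summary:", summary)

if __name__ == "__main__":
    forward_to_cloud(summarize(read_sensor_batch()))
```

Only the summary leaves the site, so the edge location keeps working (and alerting) even if the link to the public cloud is slow or down.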

Especially in industries where uninterrupted operations are paramount, minimizing downtime is crucial for maintaining profitability and competitiveness. This shift towards edge computing reflects a strategic reassessment of IT infrastructure deployment, prioritizing localized solutions over traditional public cloud services, and it has led many organizations to pull workloads back from the public cloud.

Data sovereignty and privacy

As businesses grapple with mounting concerns surrounding the privacy and ownership of information, there has been a growing recognition of the need to maintain greater control over sensitive data and establish parameters and policies governing its use.

In industries such as healthcare and financial services, where vast amounts of sensitive critical data are generated and exchanged, maintaining trust and control over this information is of utmost importance. Ensuring that this data resides in highly trusted environments allows organizations to effectively safeguard their assets and mitigate the risk of unauthorized access or breaches.

Moreover, increased scrutiny from key stakeholders such as CIOs, CTOs, and boards has elevated the importance of data sovereignty and privacy, prompting closer examination of third-party cloud solutions. While public clouds may be suitable for workloads that are not subject to data sovereignty laws, a private solution is often required to meet compliance thresholds. Key factors to consider when deciding whether a public or private cloud solution is more appropriate include how much control, oversight, portability, and customization the workload requires.
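As a rough illustration of how those factors might be weighed, the sketch below encodes a first-pass placement check. The attribute names and rules are assumptions made for this example; it is not a compliance tool, and real decisions require legal and architectural review.

```python
"""Minimal sketch: a first-pass public-vs-private placement check for a workload."""
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    subject_to_sovereignty_laws: bool  # e.g. data must stay in-country
    handles_regulated_data: bool       # healthcare, financial services, etc.
    needs_deep_customization: bool     # bespoke hardware, network, or hypervisor control

def suggested_placement(w: Workload) -> str:
    # Err toward private cloud whenever sovereignty or regulatory control dominates.
    if w.subject_to_sovereignty_laws or w.handles_regulated_data:
        return "private cloud"
    if w.needs_deep_customization:
        return "private cloud"
    return "public cloud"

if __name__ == "__main__":
    w = Workload("claims-processing", True, True, False)
    print(w.name, "->", suggested_placement(w))
```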

Trust and privacy are not the only data factors driving repatriation, of course. There are ancillary operational and strategic benefits to be gained by maintaining data within trusted environments, such as greater control over how information is accessed, used, and shared.

In part two of this series, we will look at two other key factors playing a role in repatriation: the rise of Kubernetes and the flexibility of containers.



Bryan Litchford is Vice President of Private Cloud at Rackspace.