Commissioned: This public cloud tale may sound familiar to you seasoned IT leaders.
You began your foray into the public cloud with a few controlled applications. You gained speed and agility through elastic compute, storage and other services, as well as perceived cost advantages by embracing an OPEX model. Years passed and your consumption scaled due to business growth, but your apps also became more expensive to run. Eventually, you came to question the cost benefits of running certain workloads in the public cloud versus on premises. Yet the more apps and data you offloaded to the public cloud, the harder it became to move them, forcing you to make some hard calls…
If this tale sounds familiar (and maybe triggers a cold sweat), you’re likely one of the many CIOs who have suffered the cost overruns associated with fast-growing public cloud consumption.
Repeat it: cloud repatriation is okay
In recent years, such scenarios have paved the way for cloud repatriation, in which organizations remove applications from the public cloud and bring them back in-house or move them elsewhere.
Indeed, in a Dell survey of 189 IT leaders published in September 2022, 96% of the 139 IT decision makers who had repatriated workloads or applications cited cost efficiency as the top benefit of their efforts, 95% said their security posture improved as a result of the move, and 85% cited business agility as a benefit.
In the 451 Research Voice of the Enterprise: Datacenters 2021 survey, published in September 2021, 48% of 600 IT decision makers (ITDMs) reported migrating apps or workloads from hyperscale vendors to another location.
Sometimes repatriation is triggered by a technical misfire, such as when an application moved from an on-premises estate to a public cloud (lift and shift, in industry parlance) without proper reconfiguring doesn’t scale as expected. Mistakes happen.
Cloud repatriation carries a stigma, suggesting that the organization grossly erred in migrating workloads to a hyperscaler. But branding these rollbacks as abject failures runs counter to the fail-fast-and-learn ethos on which so many organizations pride themselves.
Do you also view cloud repatriation as a failure? If that’s your perspective, you might be thinking about it the wrong way.
No, really – cloud repatriation is good for business
Consider an alternative perspective: View cloud repatriation as a tactical lever for workload placement, which is a critical objective for organizations embarking on a multicloud-by-design strategy. This intentional approach to managing IT assets offers a consistent cloud experience for organizations whose IT estates have spread to an unwieldy mixed bag of on-premises and off-premises computing scenarios (also known as multicloud-by-default).
For instance, as organizations pursued digital transformations, they incorporated more applications running in diverse locations across their environments. Workload distribution grew even more sprawling as the COVID-19 pandemic forced more employees to work remotely or split time between home and corporate offices. Infrastructure supporting these applications naturally spreads across on-premises infrastructure, private and public clouds, colocation facilities and edge devices.
Further compounding the challenge is data gravity: as applications generate and accumulate data, that data pulls compute and services toward it to preserve performance and latency. This requires IT to move resources closer to the workloads, resulting in even more distributed estates.
In multicloud-by-design, application workloads are placed intentionally, based on a variety of factors that could include cost efficiency, performance optimization, latency, security, compliance and governance.
The value of workload placement
Anticipating and planning for the impact of these trends on your business can help you better allocate and optimize your IT resources.
Which means everything is on the table, including repatriating some workloads you allocated to the public cloud. Like most IT leaders, you probably put them there because it was the quickest way to access the compute, storage and other resources that got critical applications up and running.
But as these workloads have grown, you may have come to find it more cost-effective to move them to a more optimal location. For example, maybe an analytics app you launched in the public cloud saw retrieval times slow as its data volume grew, and you improved performance by running it in your own datacenter. Or maybe you moved ERP, CRM or other apps to your own datacenters (or a colocation facility) because data governance rules required that data stored in those critical apps stay in certain geographies.
Once upon a time, public cloud was the go-to platform for rapidly getting apps up, running and scaling. But that is no longer necessarily the case for some workloads operating in the emerging multicloud-by-design world.
A new cloud experience
As you plan for this more distributed IT estate, you might consider rebalancing your portfolio. This means evaluating whether assets are best served running on-premises or in public clouds for rapid scalability, colos to satisfy cloud adjacency goals, or the edge to support speedy data processing for apps on endpoint devices.
Which means calculating where applications will run best based on such critical factors as performance, latency, compliance, security and costs. For example, steady-state applications often cost much less to run on-premises because you aren’t paying a premium to access scalable resources on the fly.
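To make the steady-state comparison concrete, here is a minimal back-of-the-envelope sketch in Python. Every dollar figure and rate in it is a hypothetical placeholder chosen for illustration, not real cloud or hardware pricing; the point is the shape of the math: on-demand cloud spend scales with consumption, while on-premises cost is largely a fixed amortized base.

```python
# Hypothetical break-even sketch: steady-state workload, public cloud vs. on-prem.
# All figures are illustrative assumptions, not vendor pricing.

def monthly_cloud_cost(vcpu_hours, rate_per_vcpu_hour=0.05):
    """On-demand cloud spend scales linearly with consumption."""
    return vcpu_hours * rate_per_vcpu_hour

def monthly_onprem_cost(hardware_capex=120_000, amortize_months=36, fixed_opex=1_500):
    """Amortized hardware cost plus fixed operating overhead (power, admin)."""
    return hardware_capex / amortize_months + fixed_opex

# A steady-state app consuming 100,000 vCPU-hours every month:
cloud = monthly_cloud_cost(100_000)   # 5,000.00 per month at these rates
onprem = monthly_onprem_cost()        # ~4,833.33 per month at these rates
print(f"cloud: ${cloud:,.2f}/mo, on-prem: ${onprem:,.2f}/mo")
```

At these made-up rates the on-premises option is already cheaper, and the gap widens as consumption grows, because the cloud line scales with usage while the on-prem line stays flat until you outgrow the hardware. A bursty or rapidly scaling workload would flip the comparison, which is exactly why placement has to be calculated per workload.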
You’ll ask: How can I get consistency and visibility across my applications and all the IT resources supporting them, regardless of where they are running? Can my developers access self-service tools to build, test and run apps?
If I do elect to run workloads in public cloud providers, how much time and money will it cost to train up IT staff to work with those environments? Will I be able to meet SLAs for performance, availability, security and compliance for every location I choose to operate in?
Such are the hallmarks of the cloud experience. But what about costs?
Adopting the cloud experience in on-premises data centers includes paying for IT resources much the way organizations pay for public cloud services. Fifty-four percent of 742 senior IT professionals said they prefer a consumption-based model in data centers over a traditional hardware purchase model, according to ESG’s 2023 Technology Spending Intentions survey.
As it happens, the Dell Technologies APEX portfolio of services supports a cloud experience via a flexible consumption model.
Because in this multicloud-by-design world you need to run workloads how, when and where you want with simplicity, agility and control – without the fear of shaming associated with retrieving assets from the public cloud.
Learn more about our portfolio of cloud experiences delivering simplicity, agility and control as-a-Service: Dell Technologies APEX.
Brought to you by Dell Technologies.