IT leaders viewing cloud optimization through the lens of cost are giving the topic short shrift. Cloud optimization is also about deliberate workload placement.
Cloud optimization is broadly understood as a tactic that organizations use to curb cloud spending. Look no further than recent earnings calls of public cloud providers, whose CFOs cited optimization as a lever for slowing cloud consumption.
No doubt economic headwinds are one reason companies reduce compute instances or migrate to lower-cost storage tiers. With headwinds looming, spot pricing, reserved instances and savings plans look more attractive to cost-conscious customers.
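To make that math concrete, here is a minimal sketch of how a team might compare pricing models for a steady, always-on workload. The hourly rate, discount levels and spot availability below are hypothetical placeholders for illustration, not published prices from any provider.

```python
# Illustrative comparison of pricing models for a steady 24x7 workload.
# All rates and discounts are hypothetical assumptions, not real price sheets.

HOURS_PER_MONTH = 730

on_demand_rate = 0.10      # $/hour, hypothetical on-demand price
reserved_discount = 0.40   # assume a 1-year reserved commitment saves ~40%
spot_discount = 0.70       # assume spot capacity saves ~70% when available
spot_availability = 0.85   # fraction of the month spot capacity is obtainable

on_demand_cost = on_demand_rate * HOURS_PER_MONTH
reserved_cost = on_demand_rate * (1 - reserved_discount) * HOURS_PER_MONTH

# Spot-backed workloads typically fall back to on-demand when capacity is reclaimed.
spot_cost = (on_demand_rate * (1 - spot_discount) * HOURS_PER_MONTH * spot_availability
             + on_demand_rate * HOURS_PER_MONTH * (1 - spot_availability))

print(f"On-demand: ${on_demand_cost:,.2f}/month")
print(f"Reserved:  ${reserved_cost:,.2f}/month")
print(f"Spot mix:  ${spot_cost:,.2f}/month")
```

Even with rough numbers like these, the gap between on-demand and committed or spot pricing explains why cost-conscious customers gravitate toward them when budgets tighten.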
That’s partly the natural ebb and flow of business and partly a correction to exuberant cloud spending. Indeed, 64% of CIOs surveyed by IDC said they were spending more on cloud software than they had budgeted. As the cloud vendors all attest, those corrections are happening through cloud optimization.
Rethinking Cloud Optimization
And yet.
Viewing cloud optimization solely through the lens of financial considerations neglects the broader shift underway. Cloud optimization isn’t just about reducing infrastructure costs in hosted environments; it includes rethinking where and how organizations are placing their application workloads.
Over the years, IT departments have found themselves supporting disparate application workloads and data across public and private clouds, on-premises systems, colocation facilities and even edge networks. Roughly 90% of IT environments today operate both public and private clouds, according to HashiCorp.
Some of this hodgepodge grew out of workload placement decisions made by IT; some of it is due to developers running workloads wherever made sense for them. Much of this workload accrual occurred by happenstance, coalescing into a multicloud-by-default posture that added complexity.
Organizations have grown weary of this dispersed collection of assets. Each environment brings its own infrastructure and APIs, often requiring cloud-native development or refactoring of traditional apps, along with inconsistent, proprietary tooling.
As a result, IT departments are taking a more intentional approach to placing workloads to simplify operations and boost IT agility while reining in costs. In this multicloud-by-design strategy, application and business requirements dictate where workloads reside.
Several factors determine where IT leaders choose to redeploy workloads; a rough sketch of how they might be weighed together appears after the list.
Performance. Are applications performing as desired to deliver the best business outcome? Is that AI application better served running in a public cloud environment or on-prem? For getting a test-and-learn instance up and running, or for an app that requires dynamic scaling or “burstability,” a public cloud may be the optimal solution. For a steady-state app, an on-premises deployment will almost always deliver better speed and lower latency.
Check application dependencies. Modern applications often depend on others to perform their intended functions. Apps that rely on handshakes or handoffs with other apps residing in different environments can suffer latency issues or break outright. Ideally, apps with greater interdependency will run in the same location, or at least adjacent locations, such as on-prem and colo.
Data residency matters. Rapidly evolving regulations mean governance, compliance and security requirements may determine where workloads are best suited to run. Check: Does data locality allow apps to run in a cloud, or does it dictate that apps should run on-prem in data centers or an adjacent colo?
Timing is everything. If an app requires real-time access to data, evaluate whether it is running in the appropriate location. Public clouds and on-prem environments can support both real-time and time-tolerant workloads, and edge environments often work well for certain real-time requirements. It will usually come down to which environment offers the best price/performance ratio, and the choice mustn’t degrade the user experience.
The sustainability quotient. Enterprises are increasingly taking ESG goals into account, and IT must get on board as well. Check whether service and equipment providers meet corporate sustainability requirements around hardware, including plans for end-of-life handling, recycling and hardware refresh.
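As a rough illustration of how these factors can be weighed together, the sketch below scores candidate placements for a single workload against the criteria above. The weights, environment names and scores are hypothetical assumptions made for illustration; they are not a prescribed methodology or a feature of any product.

```python
# Hypothetical placement scoring: weigh each candidate environment against the
# factors discussed above. Weights and 1-5 scores are illustrative assumptions.

WEIGHTS = {
    "performance": 0.30,
    "dependency_proximity": 0.25,
    "data_residency": 0.20,
    "real_time_needs": 0.15,
    "sustainability": 0.10,
}

# Example scores for a hypothetical steady-state app with strict residency rules.
candidates = {
    "public_cloud": {"performance": 3, "dependency_proximity": 2,
                     "data_residency": 2, "real_time_needs": 3, "sustainability": 4},
    "on_prem":      {"performance": 5, "dependency_proximity": 4,
                     "data_residency": 5, "real_time_needs": 4, "sustainability": 3},
    "colo":         {"performance": 4, "dependency_proximity": 4,
                     "data_residency": 4, "real_time_needs": 4, "sustainability": 4},
}

def placement_score(scores: dict) -> float:
    """Weighted sum of factor scores for one candidate environment."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

for env, scores in candidates.items():
    print(f"{env:12s} {placement_score(scores):.2f}")

best = max(candidates, key=lambda env: placement_score(candidates[env]))
print(f"Suggested placement: {best}")
```

The point is not the specific numbers but the discipline: making the criteria and their relative importance explicit is what turns multicloud by default into multicloud by design.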
The Bottom Line
More than a cost-cutting exercise, cloud optimization is a reconsideration of the technical viability of running a workload in a specific cloud. It’s about performing due diligence and right-sizing IT estates to accommodate the new multicloud paradigm.
The optimization calculus will be different for every organization. Some organizations may choose to repatriate workloads, or move them from a public cloud or other home to a new location—even another public cloud.
Organizations implementing greenfield applications have golden opportunities to plan where they place their workloads, ideally in locations that offer the best cost-per-performance ratios.
Our Dell APEX suite of solutions facilitates the multicloud-by-design strategy that can help IT organizations enjoy consistency, agility and control while achieving desired business outcomes—without the constraints associated with siloed ecosystems and proprietary tools.
Taking a multicloud-by-design approach to optimizing workloads can help IT departments craft a consistent experience across their IT estate and brace for economic challenges now and in the future.
Keep Reading: What’s Multicloud by Design and Why Does It Matter?