Relationship-led approach to managing service providers in 2023


Service level agreements (SLAs) are a critical component of any technology vendor contract. Key to establishing responsibility and mutual trust, SLAs specify the level of service expected, outlining the business metrics by which it is measured, as well as the remedies or penalties that apply if service levels are not met.

Yet, in some ways, today's service level agreements between enterprises and service providers have become outdated, developed with yesterday's on-premises ecosystems in mind. Current IT environments are shaped by digital transformation efforts that demand new levels of cloud operationalization, and with that, new approaches to optimization.

The good news is that cloud providers recognize this and are now taking a more collaborative approach to customer relationships. That said, to truly succeed with today's complex infrastructure dynamic, there needs to be a new kind of service-level relationship (SLR): one that is proactively initiated and governed by the enterprise, and empowered by evidence that truly reflects the new enterprise reality.

From find-and-fix, to evidence-and-escalate

This need for a more relationship-based approach to managing the cloud is driven by the changing nature of applications and hosting arrangements.

Previously, applications were built in-house, usually with support from third-party code libraries. IT teams would examine their own codebase when faults occurred to locate the source of the problem. They would then engage with the responsible internal party (or third-party library maintainer) to resolve it.

Modern applications today often rely on multiple public or private clouds, either by purposeful design or as a result of third-party service dependencies sitting in different cloud provider networks. Applications using modular frameworks are API-centric, meaning that API-to-API communications are a typical operation in an application flow.

Consequently, when dealing with infrastructure that now sits beyond the control of IT teams, the incident response and management paradigm has moved away from simply finding and correcting faults. IT teams are now required to pinpoint where along the end-to-end delivery chain an issue is occurring, so they can quickly isolate and address the source of the problem before it impacts the customer or employee experience. To collaborate effectively and reach rapid resolution, evidence is key: it gets both parties on the same page quickly and enables informed conversations with service providers, both reactively during an incident and proactively about long-term performance.

Mike Hicks

Mike Hicks is Principal Solutions Analyst for Cisco ThousandEyes.

Knowing your SLRs

SLRs recognize that internal cloud optimization efforts can only get an organization so far. At some point, performance gains are subject to a law of diminishing returns, requiring a relatively large investment of effort to make relatively small improvements.

This is a natural starting point for the SLR conversation: An organization might say to their cloud provider, “this is the best I can achieve on my own. Is there anything else that you can do on your end to make our service and experience better?”

It's much less about getting buy-in for SLRs from the business, and more about starting an honest discussion based on data-driven insight that can help an enterprise step towards improved digital experiences. However, most IT teams lack the necessary visibility into the cloud provider performance metrics that impact the delivery of their services, and therefore lack the insight needed to activate these performance discussions. True end-to-end cloud intelligence solutions can help bridge that gap by creating a window into an organization's entire IT ecosystem, both internal and external. Paired with procurement best practices, these solutions can help ensure the cost-effective performance of enterprise IT infrastructure, with appropriate service response time measurement for internal and third-party users.

A good starting point is setting service baselines for each SLR. Baselines simplify the review, assessment, and performance reporting of each service provider. Importantly, they should acknowledge that there is no steady state in the cloud: cloud networks are in constant flux as providers scale and expand their infrastructure and add new locations, services, and connectivity options. Knowing that these networks are dynamic and ever-changing helps inform a future-proofed operational strategy. ThousandEyes also recently shared the 2022 edition of its Cloud Performance Report, which examines some of the notable performance and architecture differences between the top cloud providers.
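To make the idea of a baseline concrete, here is a minimal sketch (not from the article, and not any vendor's tooling) of how a team might summarize collected latency samples for one provider endpoint into a simple baseline. The sample figures are hypothetical:

```python
import statistics

def baseline(samples_ms):
    """Summarize latency samples (ms) into a simple per-provider baseline."""
    cuts = statistics.quantiles(samples_ms, n=20)  # 19 cut points at 5% steps
    return {
        "p50_ms": statistics.median(samples_ms),
        "p95_ms": cuts[18],                  # 19th cut point = 95th percentile
        "jitter_ms": statistics.pstdev(samples_ms),
    }

# Hypothetical daily latency samples (ms) collected for one endpoint;
# the outlier (240 ms) pulls p95 well above the median.
samples = [102, 98, 110, 95, 104, 99, 240, 101, 97, 103]
print(baseline(samples))
```

Periodically recomputing a summary like this per provider, region, or service gives the "constant flux" of cloud networks a reference point: a performance conversation can start from how current measurements compare with the agreed baseline, rather than from anecdote.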

What the SLR conversation looks like in reality

Independent data points can provide an important conversation starter, and can benchmark the cross-cloud, cross-region, and cross-availability zone (cross-AZ) performance levels an enterprise should expect to see.

Three core sets of performance data help determine this: end-user measurements, inter-availability zone measurements, and inter-region measurements. So, how does each help?

End-user measurements are crucial because they give customers of infrastructure as a service (IaaS) and platform services insight into how different cloud provider locations connect to the broader internet, and how end-to-end paths perform from different locations. This insight gives enterprises an edge, helping to inform successful application planning and deployment decisions.
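As a rough illustration of what an end-user measurement captures, the sketch below times an HTTPS round trip from a client vantage point to a set of endpoints. The endpoint URLs are placeholders, and real tooling would measure far more (DNS, TCP, TLS, and path hops), but the principle is the same:

```python
import time
import urllib.request

def measure_ms(url, timeout=5):
    """Time one HTTPS round trip to an endpoint; None if unreachable."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read(0)  # headers received; the body isn't needed
    except OSError:
        return None  # DNS failure, timeout, or connection refusal
    return (time.perf_counter() - start) * 1000.0

# Hypothetical per-region endpoints to compare from this vantage point
endpoints = {
    "region-a": "https://example.com/",
    "region-b": "https://example.org/",
}
for region, url in endpoints.items():
    print(region, measure_ms(url))
```

Run regularly from the locations where users actually sit, comparisons like this reveal which provider regions offer the best paths to a given audience, which is exactly the evidence that informs deployment decisions.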

Inter-availability zone measurements, meanwhile, are typically used to assess resiliency. Multi-AZ configurations can improve availability, cost efficiency, and time management. These metrics can also evaluate performance consistency and latency thresholds with regard to the application design.

Finally, inter-region measurements are often distinctive to individual cloud providers and are primarily used to address latency concerns. In other words, deploying applications and content closer to the user improves the end user's experience of the application.

We often talk about overcoming silos in IT. When it comes to realizing the true benefit of cloud at scale, we have to consider new work processes and tools that reflect the new order of working with a multitude of providers, all of which have a hand in delivering customer and employee experiences. SLRs are simply a reflection of that new order, ushering in more informed and effective cooperation between enterprise and cloud provider. That's a good thing, and I'm sure SLRs are just the start.

