The secret to successfully deploying service mesh platforms

Steve Judd, Solutions Architect at Venafi, suggests that despite the complexities of cloud infrastructure, service mesh technology reduces the strain on development teams.

Steve Judd, Solutions Architect at Venafi

The global pandemic brought with it many changes for businesses of all sizes and one of the most striking has been a sharp uptick in the use of cloud platforms and services. Faced with keeping dispersed workforces connected and productive, many businesses shifted their attention away from on-premise infrastructures and increasingly embraced cloud-based alternatives.

This shift also had a big impact on Digital Transformation strategies. Seeking ways to boost performance and agility while lowering costs, large numbers of organizations took advantage of technologies such as Kubernetes. This allowed platform teams to deploy multiple clusters across multi-cloud environments.

According to data released in 2022, a record 96% of organizations are either using or evaluating Kubernetes technology – up from 83% in 2020 and 78% in 2019. At the same time, the way organizations are deploying the technology is also maturing.

Managing rising complexity with service mesh

However, while this shift is already delivering significant benefits, those benefits have come at the cost of increased complexity and a greater need for better governance. This is where service meshes like Istio come into their own.

A service mesh is an increasingly popular option that acts as a separate infrastructure layer sitting on top of Kubernetes clusters, providing a range of network connectivity and security-related features for those clusters.

These features include mutual TLS (mTLS), which uses TLS certificates to transparently encrypt service-to-service traffic. This, in turn, allows workloads to communicate directly with each other securely, without any changes to application code.
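At the transport level, "mutual" TLS simply means both sides of a connection must present a valid certificate, not just the server. In a mesh, the sidecar proxy configures this transparently for every workload; the following minimal Python sketch only illustrates what that setting looks like at the socket layer (it is not how any particular mesh is implemented):

```python
import ssl

# A plain TLS server leaves verify_mode at CERT_NONE and never asks the
# client for a certificate. Requiring one is what makes the handshake
# *mutual* -- exactly the guarantee a service mesh sidecar enforces for
# every pod-to-pod connection.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED  # client must present a certificate
```

In a real deployment the sidecar also loads the mesh's CA bundle and the workload's own short-lived certificate; the application itself never touches any of this.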

Because all traffic flows through the service mesh, it also enables deeper observability, including tracing of pod-to-pod requests and performance insights. End-users benefit from more deployment options, traffic customization and circuit breaking as well.

Although there are a significant number of service mesh vendors active in the market, Istio and Linkerd have become the most widely used. Acting as transparent, language-agnostic frameworks, they deliver all the benefits offered by the service mesh concept in a way that would otherwise require multiple point solutions.

Barriers to effective deployment

While the business benefits of service mesh are clear, unfortunately so are the challenges that go along with its implementation. These challenges can sometimes deter organizations that lack the time, money and in-house skills to support such projects.

One of the key challenges that need to be addressed relates to so-called ‘sidecars’. Each pod has a main application container and a sidecar that contains the service mesh proxy. This is the ‘secret sauce’ that enables an organization to harness all the benefits of service mesh, as it is where network traffic from the pod’s container is directed. 

However, sidecars add network latency and take up processor and memory resources, which can become a significant issue at scale. Even if each sidecar only takes 5MB of memory and 0.1 CPU, multiplied by 100,000 pods, it can become an enormous resource drain.
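The figures quoted above can be multiplied out directly. A quick sketch, using the article's illustrative per-sidecar numbers (5MB of memory and 0.1 CPU):

```python
# Per-sidecar overhead figures from the article (illustrative, not measured)
mem_mb_per_sidecar = 5
cpu_per_sidecar = 0.1
pods = 100_000

total_mem_gb = pods * mem_mb_per_sidecar / 1024  # roughly 488 GB of memory
total_cpus = pods * cpu_per_sidecar              # 10,000 CPU cores

print(f"Sidecar overhead: ~{total_mem_gb:.0f} GB RAM and {total_cpus:.0f} CPUs")
```

Even these deliberately modest per-pod numbers add up to hundreds of gigabytes of memory and thousands of cores at fleet scale, which is why sidecar overhead dominates the cost conversation in large environments.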

Significant advances have been made in making service meshes fundamentally easier to deploy. However, it is still very complex to troubleshoot connectivity issues and configure the architecture, especially in large environments.

mTLS also introduces unintended operational headaches. While the technology works well with HTTP traffic, it becomes more problematic with raw TCP traffic.

Understand what you want to achieve

It needs to be recognized that Kubernetes comes with a steep learning curve. Add a service mesh on top of it and the curve only gets steeper.

For this reason, the first task for an IT team is to create a list of goals that their organization is aiming to achieve from using a service mesh. This will determine exactly which technologies are selected and how they will be deployed.

The area of machine identity management also needs to be addressed, as each service mesh control plane has a component that deals with certificate management. When the control plane is installed, it creates a root certificate authority (CA) for the cluster, which then issues certificates signed by that CA.

However, having a self-signed root CA is far from best practice, especially in highly regulated sectors such as financial services. For organizations with multiple service meshes and self-signed root CAs in operation, the issues multiply. These organizations need to remember that pods have a relatively short shelf-life and each one will need its own certificate. 

Unfortunately, self-signed certificates don't offer the visibility and control that is required by IT security teams. They are not signed by a publicly trusted CA, cannot be revoked and never expire.

These issues are all clear security red flags. To overcome them, IT teams need a cloud-agnostic, automated method of managing identity issuance and lifecycle management.
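In practice, "automated lifecycle management" means machinery that tracks each certificate's expiry and renews it well before the deadline, rather than relying on humans to notice. A toy sketch of that decision logic follows; the renewal window and the 24-hour certificate lifetime are illustrative assumptions, not any vendor's defaults or API:

```python
from datetime import datetime, timedelta

# Renew comfortably before expiry so rotation failures can be retried
# (the 6-hour window is an illustrative assumption).
RENEWAL_WINDOW = timedelta(hours=6)

def needs_renewal(not_after: datetime, now: datetime) -> bool:
    """True once a certificate is inside its renewal window (or expired)."""
    return now >= not_after - RENEWAL_WINDOW

# A short-lived pod certificate issued for 24 hours
issued = datetime(2024, 1, 1, 0, 0)
not_after = issued + timedelta(hours=24)

assert not needs_renewal(not_after, issued + timedelta(hours=12))  # mid-life: leave it
assert needs_renewal(not_after, issued + timedelta(hours=19))      # in window: rotate
```

An automated controller runs a check like this continuously across every workload certificate in every cluster, which is exactly the scale at which manual processes break down.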

While machine identity is clearly not the only challenge when it comes to the successful deployment of service mesh infrastructure, it is certainly a significant one. For this reason, it is vital that teams take the steps required to get their machine identity management infrastructures fully functional.

In this way, the benefits of service mesh can be delivered without an organization facing any potential downside security risks.

Intelligent CIO APAC