Cloud-native applications are often designed as a group of distributed microservices running in containers, which is why they are also known as containerized applications. Increasingly, these containerized applications run on Kubernetes, the de-facto standard for container orchestration. But the rapid growth in the number of microservices makes it challenging to standardize and enforce routing between services, encryption, authentication and authorization, and load balancing within a Kubernetes cluster. A service mesh helps tackle these challenges. Just as containers abstract the operating system away from the application, a service mesh abstracts away how inter-service communication is handled.
What Is a Service Mesh?
A service mesh is a dedicated infrastructure layer for handling service-to-service communication. It enables the reliable delivery of requests through the complex topology of services that constitutes a modern, cloud-native application. In practice, the service mesh is implemented as an array of lightweight network proxies deployed alongside application code; in other words, it consists of sidecar proxies attached to every pod in an application. The emergence of the service mesh as a distinct layer is tied to the rise of the cloud-native application. Service-to-service communication in such applications is not only complex but ubiquitous, and it is a fundamental aspect of runtime behavior, so managing it is crucial to ensuring end-to-end performance and reliability.
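The sidecar model described above can be sketched as a Kubernetes Pod manifest. All names, images, and ports below are illustrative assumptions, and in a real deployment the mesh's control plane (for example, Istio or Linkerd) injects the proxy container into the pod automatically rather than it being written by hand:

```yaml
# Hypothetical Pod manifest illustrating the sidecar-proxy pattern.
# Container names, image tags, and port numbers are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: orders
  labels:
    app: orders
spec:
  containers:
  - name: app                    # the application container
    image: example/orders:1.0    # placeholder application image
    ports:
    - containerPort: 8080
  - name: proxy-sidecar          # the mesh's data-plane proxy
    image: envoyproxy/envoy:v1.30.0   # version chosen for illustration
    ports:
    - containerPort: 15001       # pod traffic is transparently redirected
                                 # through the proxy on a port like this
```

Because every pod carries such a proxy, the mesh can apply routing rules, mutual TLS, retries, and telemetry uniformly, without any changes to the application container itself.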