It’s all about communication

Jehoszafat Zimnowoda
6 min read · Sep 30, 2019


A client connects to an application through the cloud

Have you ever wondered how an HTTP request ends up in a Kubernetes cluster? This story is about packet traversal through the Google Cloud Platform and the Kubernetes cluster.

Introduction

During my experiments with Google Cloud Platform (GCP), I deployed a Kubernetes cluster with one service. Next, I exposed the service by setting up an Ingress resource. Everything went perfectly according to the instructions: I was able to reach my webpage via a public IP address. Then I asked myself: “do you really understand how this works?”. The negative answer forced me to start digging into the topic, and this article is the result of that research.

Prerequisites

You are familiar with some fundamental Google Cloud Platform concepts, such as regions, instance groups, and VM instances. You also have basic knowledge of Kubernetes clusters, Pods, and Docker containers.

From Kubernetes cluster to Ingress

Kubernetes cluster

When a Kubernetes cluster is created, several Virtual Machines (VMs) are started. The VMs are grouped in an instance group, as presented in the picture below:

Kubernetes cluster: Nodes

The instance group is bound to a given region, and traffic is load-balanced between the VMs. Each VM is assigned an external and an internal IP address. In the picture, the internal IP addresses are denoted as <n-ip1>, <n-ip2>, <n-ip3>. The VMs are seen in the Kubernetes cluster as nodes and can be inspected using the kubectl command-line tool:

Example 1: Node configurations
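
For instance, a node listing could look roughly like the sketch below (the node names and address values are illustrative, and several columns are omitted for brevity):

    $ kubectl get nodes -o wide
    NAME                                      STATUS   INTERNAL-IP   EXTERNAL-IP
    gke-cluster-1-default-pool-5f4e8a21-vm1   Ready    <n-ip1>       <e-ip1>
    gke-cluster-1-default-pool-5f4e8a21-vm2   Ready    <n-ip2>       <e-ip2>
    gke-cluster-1-default-pool-5f4e8a21-vm3   Ready    <n-ip3>       <e-ip3>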

Kubernetes service

A service is an abstraction that defines a logical set of Pods. It also load-balances traffic between the Pods. The load-balancing is performed by applying firewall rules on each node in the cluster.

Note: These firewall rules are not visible in the GCP console. They can be inspected with the iptables tool from the command line on a given VM (use: sudo iptables -nvL -t nat).

A service can be exposed outside the cluster by assigning it the NodePort type:

Example 2: Service configuration — before applying it to the cluster
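
A minimal sketch of such a manifest, assuming an illustrative my-service that selects Pods labeled app=my-app:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service        # illustrative name
    spec:
      type: NodePort          # expose the service on every node
      selector:
        app: my-app           # route to Pods labeled app=my-app
      ports:
        - port: 80            # <s-port>: the service port
          targetPort: 8080    # <p-port>: the container port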

After the service is deployed, a clusterIP and a nodePort are assigned:

Example 3: Service configuration — after applying it to the cluster
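
Reading the service back (for example with kubectl get service my-service -o yaml) could show an excerpt like the one below. The assigned values are illustrative, but the nodePort is always allocated from the 30000–32767 range by default:

    spec:
      clusterIP: 10.3.245.17    # <s-ip>: virtual IP picked by Kubernetes
      type: NodePort
      ports:
        - port: 80              # <s-port>
          protocol: TCP
          targetPort: 8080      # <p-port>
          nodePort: 31557       # <n-port>: allocated by Kubernetes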

The nodePort is a TCP port exposed on every IP address of every node. The clusterIP is a virtual IP address that Kubernetes assigns to the service.

The clusterIP is not assigned to any network interface. Similarly, the nodePort is not bound to any listening process in the operating system.

Instead, they exist as a set of firewall rules that expose the service and load-balance traffic between existing Pods. Since we have not defined any Pods yet, we can only speak of the following firewall rules:

  • VM-1: forward traffic from n-ip1:n-port to s-ip:s-port
  • VM-2: forward traffic from n-ip2:n-port to s-ip:s-port
  • VM-3: forward traffic from n-ip3:n-port to s-ip:s-port

They are presented in the picture below:

Kubernetes cluster: Service without Pods

Kubernetes deployment

A Deployment controller provides declarative updates for Pods and ReplicaSets.

A simple deployment configuration can be found below:

Example 4: Deployment configuration — before applying to the cluster
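
A minimal sketch of such a deployment, reusing the illustrative app=my-app label from the service above; Google’s sample hello-app image listens on port 8080, which matches the targetPort:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-deployment     # illustrative name
    spec:
      replicas: 2             # two Pods, as in the picture below
      selector:
        matchLabels:
          app: my-app         # must match the service selector
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: hello-app
              image: gcr.io/google-samples/hello-app:1.0
              ports:
                - containerPort: 8080   # <p-port>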

After applying the deployment configuration to the cluster, Pods are created and the following firewall rule is applied on each node:

  • forward packets from s-ip:s-port to either p-ip1:p-port or p-ip2:p-port

The following picture presents a hypothetical configuration:

Kubernetes cluster: Service with Pods

The service proxies incoming traffic to the Pods by using firewall rules. In this particular example, the rule load-balances traffic between Pod-1 (on VM-1) and Pod-2 (on VM-3). The p-port is the TCP port that a Pod uses to expose the application running in its Docker container.
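
To make this concrete, here is a heavily trimmed sketch of what kube-proxy’s NAT rules could look like on a node, assuming kube-proxy runs in its default iptables mode. The chain hashes, addresses, and ports are illustrative; the comments map them to the placeholders used in this article:

    $ sudo iptables -nvL -t nat
    Chain KUBE-NODEPORTS (1 references)
      # packets arriving at any node IP on <n-port> enter the service chain
      KUBE-SVC-ABC123   tcp dpt:31557
    Chain KUBE-SERVICES (2 references)
      # packets addressed to <s-ip>:<s-port> enter the same chain
      KUBE-SVC-ABC123   -d 10.3.245.17   tcp dpt:80
    Chain KUBE-SVC-ABC123 (2 references)
      # 50/50 load-balancing between the two Pod endpoints
      KUBE-SEP-POD1     statistic mode random probability 0.5
      KUBE-SEP-POD2
    Chain KUBE-SEP-POD1 (1 references)
      DNAT   tcp to:10.0.1.5:8080    # <p-ip1>:<p-port>
    Chain KUBE-SEP-POD2 (1 references)
      DNAT   tcp to:10.0.3.7:8080    # <p-ip2>:<p-port>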

Exposing Kubernetes service outside the cluster

A service can be exposed to the Internet by creating one of the following Kubernetes resources:

  • Load Balancer
  • Ingress

In both cases, GCP updates the configuration of a fully distributed load balancer that resides outside of the Kubernetes cluster. In this story, we take a look at the Ingress resource.

What is Ingress?

Ingress is a Kubernetes resource that describes an HTTP(S) Load Balancer configuration. The HTTP(S) Load Balancer “acts as the entry point for your cluster by dispatching URL patterns and redirecting requests to the services that reside in Kubernetes cluster”.

In other words:

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

Why do we need Ingress?

Ingress provides the HTTP(S) Load Balancer as a service. It abstracts away the load balancer configuration and allows you to define it as a Kubernetes resource. Moreover, Ingress automates the configuration of various GCP resources so that they can communicate with the Kubernetes cluster.

Kubernetes Ingress

So far, we have established that a given service defined in the Kubernetes cluster can be accessed at n-ip1:n-port, n-ip2:n-port, or n-ip3:n-port. Keep this in mind, because it is crucial to understanding the communication between the cluster and other GCP resources.
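
Assuming the illustrative service from the earlier examples, a quick check from another VM in the same network could look like this (the response format is that of the sample hello-app; the Hostname line reveals which Pod answered):

    $ curl http://<n-ip1>:31557/
    Hello, world!
    Version: 1.0.0
    Hostname: my-deployment-5d4f8b7b9d-x2x7q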

The following configuration describes the Ingress resource.

Ingress configuration
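
A minimal sketch of such an Ingress manifest, using the extensions/v1beta1 API that was current when this story was written (newer clusters use networking.k8s.io/v1 with a defaultBackend field); the names are illustrative:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: my-ingress          # illustrative name
    spec:
      backend:                  # the default backend: all requests go here
        serviceName: my-service
        servicePort: 80         # <s-port>, not <n-port>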

The Ingress configuration says that all requests are forwarded to the default backend on service port <s-port> (not <n-port>). After applying the Ingress resource to the cluster:

(..) the GKE ingress controller creates a Google Cloud Platform HTTP(S) load balancer and configures it according to the information in the Ingress and its associated Services.

Now, let's take a look at the picture that presents the applied Ingress configuration:

There are:

  • a globally distributed HTTP(S) load balancer (a.k.a. Google Front End)
  • a managed instance group of VMs that compose the Kubernetes cluster.

Clients communicate with the distributed HTTP(S) load balancer by using its public IP address. The load balancer communicates with the instance group by using the VMs’ internal IP addresses (n-ip1, n-ip2, or n-ip3). The load balancer is also aware of the URL-to-TCP-port (n-port) mapping.

We also know that the Kubernetes service has created firewall rules that forward traffic from n-port to s-port, and then to p-port, where a container inside a Pod is waiting for requests.

That’s it. We have the big picture now! Well… let’s picture it:
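
In text form, the journey of a request could be summarized roughly as:

    client
      → HTTP(S) load balancer (public IP)
      → VM internal IP   <n-ipX>:<n-port>   (instance group)
      → service          <s-ip>:<s-port>    (iptables rules)
      → Pod              <p-ipX>:<p-port>   (container)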

Conclusions

The Ingress controller provides the HTTP(S) load balancer as a service. The cloud operator gets a simple way to configure it as a reverse proxy server.

Ingress is an excellent solution because it relieves engineers from creating many resources manually, which would be error-prone. However, the way the different resources are interconnected remains a bit enigmatic.

