Kubernetes CNI, short explanations

Most of us are familiar with the Kubernetes (k8s) orchestration tool. Kubernetes is one of the most popular orchestration tools currently used to deploy containerised workloads. K8s clusters can scale from a single node to thousands of nodes.

A wide variety of Kubernetes flavours is available, and different people use different kinds of Kubernetes setups based on their requirements. Read more about Linux containers.

In this blog post, I am adding some network-related details that will help you understand the internal flow of Kubernetes inter-node and inter-pod communication.

What is CNI? How does it relate to k8s?

CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted. Because of this focus, CNI has a wide range of support and the specification is simple to implement. You can read more about CNI from the official page.
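To make the specification concrete, here is a minimal CNI network configuration of the kind a plugin consumes. The field names follow the CNI spec (this one describes the reference `bridge` plugin with `host-local` IPAM); the network name, bridge name, and subnet are just illustrative values:

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```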

CNI is not only for Kubernetes. It’s a standard for Linux container networking. Kubernetes adopted it for managing network resources on a cluster. Other orchestration tools and container runtimes, like Apache Mesos, AWS ECS, rkt, OpenShift, etc., also use it to manage networking efficiently.

What does CNI actually do in a Kubernetes cluster?

A CNI plugin (multiple CNI plugins are available for different use cases) is responsible for enabling communication between containers (pods) and hosts (nodes) in a cluster. The CNI plugin inserts a network interface into the container network namespace and makes the necessary changes at the host level as well.

A common example is a veth pair: one end is placed inside the container, and the other end is attached to a bridge on the host.

CNI plugins are responsible for maintaining this networking. The container/pod initially has no network interface. The container runtime calls the CNI plugin with verbs such as ADD, DEL, CHECK, etc.
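Roughly sketched, the runtime executes the plugin binary with the call parameters in `CNI_*` environment variables (the verbs mentioned above go in `CNI_COMMAND`) and pipes the network configuration to the plugin’s stdin. The container ID and paths below are made-up placeholders, not real values:

```python
# Sketch of how a container runtime hands a pod to a CNI plugin.
# Parameters are passed via CNI_* environment variables, per the CNI spec;
# the network config JSON is piped to the plugin's stdin (not shown here).

def build_cni_env(command, container_id, netns, ifname, plugin_dir):
    """Assemble the CNI_* environment variables for one plugin call."""
    return {
        "CNI_COMMAND": command,          # ADD, DEL, CHECK or VERSION
        "CNI_CONTAINERID": container_id,
        "CNI_NETNS": netns,              # e.g. /var/run/netns/<id>
        "CNI_IFNAME": ifname,            # interface name inside the pod
        "CNI_PATH": plugin_dir,          # where the plugin binaries live
    }

env = build_cni_env("ADD", "abc123", "/var/run/netns/abc123",
                    "eth0", "/opt/cni/bin")
print(env["CNI_COMMAND"])  # ADD
```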

The CNI plugin then assigns an IP address to the pod’s interface and sets up routes consistent with the IPAM section of the network configuration, by invoking the appropriate IP Address Management (IPAM) plugin.
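The IPAM step can be pictured as a simple allocator handing out the next free address from the configured subnet and remembering who holds it. This is only a toy model of what a host-local-style IPAM plugin does; the subnet and pod names are arbitrary examples:

```python
import ipaddress

class ToyIPAM:
    """Toy host-local-style allocator: hands out the next free IP
    from a subnet and tracks which container holds it."""
    def __init__(self, subnet):
        self.hosts = ipaddress.ip_network(subnet).hosts()
        self.allocations = {}           # container_id -> ip

    def allocate(self, container_id):
        ip = str(next(self.hosts))
        self.allocations[container_id] = ip
        return ip

    def release(self, container_id):
        # A real plugin returns the address to the pool on DEL.
        self.allocations.pop(container_id, None)

ipam = ToyIPAM("10.244.1.0/24")
print(ipam.allocate("pod-a"))  # 10.244.1.1
print(ipam.allocate("pod-b"))  # 10.244.1.2
```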

CNI plugins work at different OSI network layers to cover different use cases; many work at Layer 2, Layer 3, or Layer 4. CNIs that work at Layer 2 are generally faster, but don’t have as many features as Layer 3 or Layer 4 based CNIs.

Why do we need additional plugins?

It’s a common question that comes to mind: why can’t Kubernetes manage this networking stuff without any additional plugins? Kubernetes does provide networking by default; however, the efficiency and the customisation options are not up to the mark.

Kubernetes, by default, uses the kubenet plugin to handle networking. Kubenet is a very basic plugin that doesn’t have many features.

If someone needs (and of course we do) more features like IP filtering, isolation between namespaces (Network Policies), cross-node networking, etc., Kubernetes cannot achieve this kind of stuff with its default kubenet plugin. It’s a very basic, simple network plugin, available on Linux only. If you are interested, you can see more details in the official documentation.
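For example, the namespace isolation that kubenet cannot enforce is expressed as a standard NetworkPolicy object; it only takes effect when a CNI plugin that implements policies (Calico, Cilium, etc.) is installed. The namespace name here is illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: myapp          # illustrative namespace
spec:
  podSelector: {}           # applies to every pod in the namespace
  ingress:
    - from:
        - podSelector: {}   # only allow traffic from pods in this namespace
```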

Some example plugins

Flannel, Cilium, Calico, Weave Net, etc. are some popular CNI plugins for Kubernetes. You can see complete details about the different CNI plugins in the official documentation.

Network Models

There are two types of networking models commonly used by CNI plugins.

  • Encapsulated Network
  • Unencapsulated Network

What is Encapsulated Network?

  • Data Encapsulation is the process in which some extra information is added to the data item to add some features to it.
  • This information can either be added in the header or the footer of the data.
  • This network model provides a logical Layer 2 (L2) network encapsulated over the existing Layer 3 (L3) network topology that spans the Kubernetes cluster nodes.
  • Encapsulation “takes information from a higher layer and adds a header to it, treating the higher layer information as data”.
  • With this model, we will have an isolated L2 network for containers without needing routing distribution.
  • Encapsulation information is exchanged over UDP between Kubernetes workers, interchanging network control plane information about how MAC addresses can be reached.
  • Common encapsulations used in this kind of network model are VXLAN, Internet Protocol Security (IPSec), and IP-in-IP.
  • CNI plugins that use this method include Flannel, Canal, and Weave.
  • In short, this model creates a network bridge extended between the nodes in a cluster where pods are running.
  • This network model is sensitive to the L3 network latencies between the Kubernetes workers. If the servers are in different data centres and there is network latency between them, this will impact the performance of a cluster that uses the encapsulated network model.
Source: https://rancher.com/
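One practical consequence of encapsulation is header overhead: a VXLAN packet wraps the inner Ethernet frame in VXLAN, UDP, and outer IP headers, which is why VXLAN-based plugins typically lower the pod interface MTU. A quick back-of-the-envelope calculation (assuming an IPv4 outer header and a 1500-byte physical MTU):

```python
# VXLAN overhead carried inside the outer IP packet:
outer_ip, outer_udp, vxlan = 20, 8, 8
inner_ethernet = 14                          # inner frame's Ethernet header
total = outer_ip + outer_udp + vxlan + inner_ethernet   # 50 bytes in total

physical_mtu = 1500
pod_mtu = physical_mtu - total
print(pod_mtu)  # 1450 -- the MTU Flannel uses by default in VXLAN mode
```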

What is Unencapsulated Network?

  • This model doesn’t generate an isolated L2 network.
  • It works at L3 only, routing packets between containers.
  • Instead of using IP headers for encapsulation, this network model uses a routing protocol, such as BGP, between Kubernetes workers to distribute the routing information needed to reach pods.
  • In short, this model generates a kind of network router extended between Kubernetes workers, which provides information about how to reach pods.
  • Example of CNI plugins include Calico and Romana.
Source: https://rancher.com/
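The routing-based model can be pictured as each worker advertising its pod CIDR, so every node ends up installing one plain L3 route per peer. A toy sketch of the routing table this produces (the node IPs and pod CIDRs are made up):

```python
# Each node owns a pod CIDR and advertises it (e.g. via BGP) to its peers.
# Peers install plain L3 routes -- no encapsulation involved.
nodes = {
    "node-1": {"ip": "192.168.0.11", "pod_cidr": "10.244.1.0/24"},
    "node-2": {"ip": "192.168.0.12", "pod_cidr": "10.244.2.0/24"},
    "node-3": {"ip": "192.168.0.13", "pod_cidr": "10.244.3.0/24"},
}

def routes_for(local, nodes):
    """Routes a node installs for every peer's pod CIDR."""
    return [f"{n['pod_cidr']} via {n['ip']}"
            for name, n in nodes.items() if name != local]

for route in routes_for("node-1", nodes):
    print(route)
# 10.244.2.0/24 via 192.168.0.12
# 10.244.3.0/24 via 192.168.0.13
```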

Hope this helped you get into the basics of Kubernetes networking. You can go through the official documentation for more details.


Please add your suggestions as comments.

Arunlal Ashok

DevOps Engineer. Linux lover. Traveller.
Always happy for an open discussion! Write to arun ((@)) crybit ((dot)) com.
