AKS Networking Deep Dive: Kubenet vs Azure-CNI vs Azure-CNI (overlay)

Inder Singh
Dec 25, 2022 · 8 min read


Kubernetes networking lets you configure how workloads communicate within your cluster. When deploying an AKS cluster, there are three networking models to consider:

  • Kubenet networking
  • Azure Container Networking Interface (CNI) networking
  • Azure Container Networking Interface (CNI) Overlay networking

So, which networking option should we choose for our AKS (Azure Kubernetes Service) deployment in production? Let’s find out.

By default, AKS clusters use kubenet, and an Azure virtual network and subnet are created for you.

Whichever network model you use, it can be deployed in one of the following ways:

  • The Azure platform can automatically create and configure the virtual network resources when you create an AKS cluster. The VNet and subnet are created in a node resource group, which the Azure resource provider also creates automatically in your own subscription. By default, AKS names this node resource group MC_resourcegroupname_clustername_location.
  • We can manually create and configure the virtual network resources and attach the cluster to them at creation time, specifying our own VNet and subnet when the AKS cluster is created.

If we let Azure create the networking resources for our cluster at creation time, Azure creates a virtual network in the managed cluster resource group, that is, the resource group that holds the virtual machines and the networking elements associated with the AKS managed cluster.

Below are the details of the default network resources created by the Azure platform in case we don’t specify a VNet (you can verify them with the commands shown after the list).

  • The address range for this virtual network will be 10.224.0.0/12.
  • Within that VNet, a subnet is created for our nodes with the range 10.224.0.0/16.
Default VNet and subnet created by Azure
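
If you want to confirm these defaults on a running cluster, you can look up the node resource group and list its network resources. A minimal sketch, assuming $RG and $CLUSTER_NAME hold your resource group and cluster name:

# Find the managed (MC_*) resource group that AKS created for this cluster
NODE_RG=$(az aks show -g $RG -n $CLUSTER_NAME --query nodeResourceGroup -o tsv)

# List the default virtual network and its subnets to see the address ranges above
az network vnet list -g $NODE_RG -o table
az network vnet subnet list -g $NODE_RG \
  --vnet-name $(az network vnet list -g $NODE_RG --query "[0].name" -o tsv) -o table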

Before we discuss the networking models in detail, let’s first understand a few AKS networking basics.

AKS Networking basics

Node Pools CIDR

The network where our system nodes & user nodes reside. In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into node pools. Nodes from these node pools get IP addresses from the subnet of the VNET. All node pools must reside in the same virtual network.

In case of the default VNet, the node pool CIDR will be 10.224.0.0/16 for all models.

In case of the kubenet model, nodes get an IP address from the Azure virtual network subnet that resides in the node resource group (the ‘MC_*’ group) if we go with the default VNet and subnet created for us automatically. But if we go with our own VNet and subnet, the cluster’s nodes are assigned IP addresses from the specified subnet within that virtual network; the subnet passed via --vnet-subnet-id provides the IP addresses for our cluster nodes.

# Kubenet cluster whose nodes draw IPs from an existing subnet
az aks create \
  -g $RG \
  -n kubenet-cluster \
  --network-plugin kubenet \
  --vnet-subnet-id $KUBENET_SUBNET_ID
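
For reference, $KUBENET_SUBNET_ID is simply the resource ID of a subnet created beforehand. A minimal sketch of producing it (the VNet name and address ranges here are illustrative, not taken from the article):

# Create a VNet and a subnet for the kubenet nodes (example ranges)
az network vnet create -g $RG -n kubenet-vnet \
  --address-prefixes 10.10.0.0/16 \
  --subnet-name kubenet-subnet \
  --subnet-prefixes 10.10.1.0/24

# Capture the subnet's resource ID for --vnet-subnet-id
KUBENET_SUBNET_ID=$(az network vnet subnet show -g $RG \
  --vnet-name kubenet-vnet -n kubenet-subnet --query id -o tsv)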

In the Azure CNI and CNI Overlay models, IP addresses for the cluster’s nodes are likewise assigned from the specified subnet within the virtual network. When adding a node pool, reference the node subnet using --vnet-subnet-id.

az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newnodepool \
  --max-pods 250 \
  --node-count 2 \
  --vnet-subnet-id $nodeSubnetId \
  --pod-subnet-id $podSubnetId

Services CIDR

The Services CIDR is used to assign internal services in the AKS cluster an IP address. This range should not be used by any network element on or connected to this virtual network. However, we can reuse the same service CIDR for multiple AKS clusters. Reference the service CIDR range using --service-cidr.

# Kubenet cluster with explicit pod and service CIDRs
az aks create \
  -g $RG \
  -n kubenet-cluster \
  --network-plugin kubenet \
  --vnet-subnet-id $KUBENET_SUBNET_ID \
  --pod-cidr "10.100.0.0/16" \
  --service-cidr "10.200.0.0/16"

We can use any private address range that satisfies the following requirements:

  • Must not be within the virtual network IP address range of your cluster
  • Must not overlap with any other virtual networks with which the cluster virtual network peers
  • Must not overlap with any on-premises IPs
  • Must not be within the ranges 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24

Pods CIDR

The Pod CIDR is the pool of addresses where the pods get their IPs from and is usually different from the node address pool.

In case of kubenet, pods receive an IP address from a logically different address space than the nodes’ Azure virtual network subnet. Pods can’t communicate directly with each other; instead, user-defined routing (UDR) and IP forwarding are used for connectivity between pods across nodes.

Kubenet: the AKS nodes receive an IP address in the virtual network subnet, but the pods do not.
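
A quick way to see this split on a running kubenet cluster is to compare node and pod IPs; the exact addresses depend on your CIDRs, but node IPs come from the VNet subnet while pod IPs come from the pod CIDR:

# Node IPs are drawn from the VNet subnet (e.g. 10.224.0.0/16 by default)
kubectl get nodes -o wide

# Pod IPs are drawn from the kubenet pod CIDR (10.244.0.0/16 by default), not the VNet
kubectl get pods -A -o wide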

With Azure CNI, each pod receives an IP address from the subnet and can directly communicate with other pods and services. Unlike kubenet, traffic to endpoints in the same virtual network isn’t NAT’d to the node’s primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that’s external to the virtual network still NATs to the node’s primary IP.

Azure CNI

A drawback with the traditional CNI is the exhaustion of pod IP addresses as the AKS cluster grows, resulting in the need to rebuild the entire cluster in a bigger subnet.

The new dynamic IP allocation capability in Azure CNI solves this problem by allocating pod IPs from a subnet separate from the subnet hosting the AKS cluster. --pod-subnet-id is used to specify the subnet whose IP addresses will be dynamically allocated to pods.

az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newnodepool \
  --max-pods 250 \
  --node-count 2 \
  --vnet-subnet-id $nodeSubnetId \
  --pod-subnet-id $podSubnetId
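
Dynamic IP allocation can also be configured when the cluster itself is created rather than per node pool. A minimal sketch, reusing the $nodeSubnetId and $podSubnetId variables from the earlier example (the cluster name is illustrative):

# Azure CNI cluster whose pods draw IPs from a dedicated pod subnet
az aks create -g $resourceGroup -n cni-dynamic-cluster \
  --network-plugin azure \
  --vnet-subnet-id $nodeSubnetId \
  --pod-subnet-id $podSubnetId \
  --node-count 2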

Networking Models

Kubenet

Kubenet is a very basic, simple network plugin, on Linux only. It does not, of itself, implement more advanced features like cross-node networking or network policy.

  1. Nodes receive an IP address from the Azure virtual network subnet.
  2. Pods receive an IP address from a logically different address space than the nodes’ Azure virtual network subnet.
  3. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network, while a route table handles pod-to-pod traffic across nodes (you can inspect it as shown after this list).
  4. The source IP address of the traffic is translated to the node’s primary IP address.
  5. This approach reduces the number of IP addresses you need to reserve in your network space for pods to use.
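
Because kubenet relies on a route table in the node resource group, with one route per node that points that node’s slice of the pod CIDR at the node’s IP, you can inspect it with the CLI. A sketch, assuming the $RG and kubenet-cluster names used earlier:

# Locate the node resource group, then list its route table and routes
NODE_RG=$(az aks show -g $RG -n kubenet-cluster --query nodeResourceGroup -o tsv)
az network route-table list -g $NODE_RG -o table
az network route-table route list -g $NODE_RG \
  --route-table-name $(az network route-table list -g $NODE_RG --query "[0].name" -o tsv) -o table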

Azure CNI

  • IP addresses for the pods and the cluster’s nodes are assigned from the specified subnet within the virtual network.
  • Each node is configured with a primary IP address. By default, 30 additional IP addresses are pre-configured by Azure CNI that are assigned to pods scheduled on the node.
  • Our clusters can be as large as the IP address range you specify.
  • Clusters configured with Azure CNI networking require additional planning. The size of your virtual network and its subnet must accommodate the number of pods you plan to run and the number of nodes for the cluster (see the sizing sketch below).
  • For intercommunication with other Azure services, e.g. VMs, the source address of packets arriving from an AKS cluster backed by Azure CNI is the pod IP address, which adds transparency to the network.
  • We don’t have to manage user defined routes for pod connectivity.

Azure CNI also offers better network performance than kubenet, since pod traffic avoids the extra hop that kubenet requires.
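
As a rough sizing sketch (all numbers here are illustrative): a node pool of 3 nodes with the default 30 pods per node consumes about 3 x (1 + 30) = 93 addresses, plus headroom for upgrades and scale-out, so the node subnet should be at least a /25, and more comfortably a /24. Creating such a cluster might look like this ($CNI_SUBNET_ID is assumed to hold the ID of a suitably sized subnet):

# Azure CNI cluster; the subnet must cover node IPs plus 30 pod IPs per node
az aks create -g $RG -n azurecni-cluster \
  --network-plugin azure \
  --vnet-subnet-id $CNI_SUBNET_ID \
  --max-pods 30 \
  --node-count 3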

Azure CNI Overlay

  • With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes.
  • In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet.
  • A separate routing domain is created in the Azure Networking stack for the pod’s private CIDR space, which creates an overlay network for direct communication between pods.
  • Communication with endpoints outside the cluster, such as on-premises and peered VNets, happens using the node IP through Network Address Translation. Azure CNI translates the source IP (overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination.
  • Endpoints outside the cluster can’t connect to a pod directly. You will have to publish the pod’s application as a Kubernetes Load Balancer service to make it reachable on the VNet.
  • This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes (a creation sketch follows this list).
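
A minimal sketch of creating an overlay cluster (the pod CIDR and cluster name are illustrative, and depending on your CLI version this may require the aks-preview extension):

# Azure CNI Overlay: nodes get VNet IPs, pods get IPs from the overlay pod CIDR
az aks create -g $RG -n overlay-cluster \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --vnet-subnet-id $nodeSubnetId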

Like Azure CNI Overlay, kubenet assigns IP addresses to pods from an address space logically different from the VNet, but it has scaling and other limitations: kubenet adds latency with an extra hop and requires route tables and UDRs for pod networking.

Benefits & Limitations of Kubenet are:

  • Reduced number of IP addresses that are reserved in a network space for pods.
  • Simpler in design, management, etc. than Azure CNI.
  • An additional hop is required in the design of kubenet, which adds minor latency to pod communication.
  • Route tables and user-defined routes are required for using kubenet, which adds complexity to operations.
  • Azure network policies are not supported

Use kubenet when:

  • You have limited IP address space.
  • Most of the pod communication is within the cluster.
  • You don’t need advanced AKS features such as virtual nodes or Azure Network Policy.

Benefits & Limitations of Azure CNI are:

  • It allows control and management of network resources to be separated from the cluster itself.
  • Pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks.
  • Requires more IP address space.
  • Different teams can manage and secure resources, which is good from a security standpoint. Traffic coming from a pod doesn’t go through NAT, so the pod that establishes a connection to another resource can be identified; with kubenet, the source of the packet is always the node IP address.
  • This model requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
  • Virtual nodes (Virtual Kubelet) are available only with Azure CNI.
  • Windows node pools are available only with Azure CNI.

Use Azure CNI when:

  • You have available IP address space.
  • Resources outside the cluster need to reach pods directly.
  • Most of the pod communication is to resources outside of the cluster.
  • You don’t want to manage user defined routes for pod connectivity.
  • You need advanced AKS features such as virtual nodes or Azure Network Policy (see the example after this list).
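
If Azure Network Policy is one of the reasons you choose Azure CNI, it can be enabled at creation time. A minimal sketch (the cluster name is illustrative, and $CNI_SUBNET_ID is assumed as above):

# Azure CNI with the Azure Network Policy plugin enabled
az aks create -g $RG -n azurecni-netpol-cluster \
  --network-plugin azure \
  --vnet-subnet-id $CNI_SUBNET_ID \
  --network-policy azure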

Benefits & Limitations of Azure CNI Overlay are:

  • This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes.
  • An added advantage is that the pod private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.
  • Supports Azure network policies.
  • Doesn’t support advanced AKS features, such as virtual nodes.
  • Overlay can be enabled only for new clusters. Existing (already deployed) clusters can’t be configured to use overlay.

Use overlay networking when:

  • You would like to scale to a large number of pods, but have limited IP address space in your VNet.
  • Most of the pod communication is within the cluster.
  • You don’t need advanced AKS features, such as virtual nodes.

Summary

To sum up: kubenet is the simplest option and conserves VNet IP addresses, but it adds an extra routing hop, needs route tables and UDRs, and lacks advanced features such as virtual nodes and Azure Network Policy. Azure CNI gives pods first-class VNet IP addresses and direct reachability from connected networks, at the cost of more IP planning. Azure CNI Overlay keeps the Azure CNI data path while assigning pod IPs from a separate, reusable CIDR, which makes it attractive when you need large scale with limited VNet address space.


Written by Inder Singh

Enterprise Modernization, Platforms & Cloud | CKA | CKS | 3*AWS | GCP | Vault | Istio | EFK | CICD | https://www.linkedin.com/in/inder-pal-singh-6a203b66/
