Understanding the differences between Azure CNI, CNI Overlay, and Kubenet is crucial for AKS network architecture decisions: the networking option you choose determines how your pods communicate, how IP addresses are allocated, and what capabilities are available to your cluster. All three options enable pod connectivity, but they differ in implementation, performance characteristics, and scalability. This technical overview examines the key differences between Azure CNI (in both its traditional and overlay modes) and Kubenet, focusing on IP address management, network performance, and integration capabilities to help inform your AKS networking strategy.
IP Address Management
In the traditional Azure CNI, each pod receives an IP address directly from the subnet of your Azure Virtual Network. This means pods are first-class network citizens within your VNet, enabling direct communication with other resources in your Azure environment. The pod IPs are routable and can be accessed by any resource that has network connectivity to the subnet.
Azure CNI now also offers an Overlay mode, where only the cluster nodes are deployed into a subnet. Pods are assigned IP addresses from a private CIDR that is logically separate from the VNet hosting the nodes. Pod and node traffic within the cluster uses an overlay network, while Network Address Translation (NAT) through the node’s IP address is used to reach resources outside the cluster. This approach saves a significant number of VNet IP addresses and lets you scale clusters considerably larger.
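To make this concrete, here is a minimal sketch of requesting Overlay mode when creating a cluster programmatically. It assumes the azure-mgmt-containerservice Python SDK (a version whose models expose network_plugin_mode) and uses placeholder names, resource IDs, and CIDR values throughout:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import (
    ContainerServiceNetworkProfile,
    ManagedCluster,
    ManagedClusterAgentPoolProfile,
    ManagedClusterIdentity,
)

# Placeholder values -- substitute your own subscription, resource group, and ranges.
SUBSCRIPTION_ID = "<subscription-id>"
NODE_SUBNET_ID = "<resource-id-of-the-node-subnet>"

client = ContainerServiceClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

cluster = ManagedCluster(
    location="westeurope",
    dns_prefix="aks-overlay-demo",
    identity=ManagedClusterIdentity(type="SystemAssigned"),
    agent_pool_profiles=[
        ManagedClusterAgentPoolProfile(
            name="nodepool1",
            mode="System",
            count=3,
            vm_size="Standard_DS2_v2",
            vnet_subnet_id=NODE_SUBNET_ID,  # only the nodes live in the VNet subnet
        )
    ],
    network_profile=ContainerServiceNetworkProfile(
        network_plugin="azure",
        network_plugin_mode="overlay",  # Azure CNI Overlay
        pod_cidr="192.168.0.0/16",      # private CIDR for pods, outside the VNet
        service_cidr="10.0.0.0/16",
        dns_service_ip="10.0.0.10",
    ),
)

# Long-running operation; .result() blocks until the cluster is provisioned.
poller = client.managed_clusters.begin_create_or_update(
    "my-resource-group", "aks-overlay-demo", cluster
)
print(poller.result().provisioning_state)
```

The same intent can be expressed through the Azure CLI or infrastructure-as-code tooling; the point is that only the node pool references a VNet subnet, while the pod CIDR is a private range of your choosing.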
Kubenet also implements an overlay network. Pods receive IP addresses from a separate address space that isn’t part of your VNet. When pods need to communicate with resources outside the cluster, Kubenet uses NAT to route traffic through the node’s IP address. This creates an additional networking hop but conserves VNet IP addresses since only the nodes need IPs from your subnet.
Subnet Space Requirement
When using traditional Azure CNI, subnet planning requires careful consideration because each potential pod needs a dedicated IP address from your subnet. This means you must allocate a subnet large enough to accommodate all nodes and their maximum potential pods. For example, if each node can run 30 pods and you plan to scale to 10 nodes, you’ll need at least 300 IP addresses for pods plus additional IPs for the nodes themselves. This requirement often forces you to choose larger CIDR ranges and can impact your overall network design.
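To make the arithmetic concrete, a quick calculation with Python’s standard ipaddress module shows how the subnet size is driven by pod density; the node count, pods per node, and address range below are illustrative assumptions:

```python
import ipaddress
import math

nodes = 10               # planned node count (illustrative)
max_pods_per_node = 30   # configured pod density (illustrative)

# Traditional Azure CNI: every pod and every node needs a VNet IP.
ips_needed = nodes * (max_pods_per_node + 1)   # 310 addresses
# Azure reserves 5 addresses in every subnet, so add them before sizing.
usable_needed = ips_needed + 5

# Smallest prefix whose address count covers the requirement.
prefix = 32 - math.ceil(math.log2(usable_needed))
subnet = ipaddress.ip_network(f"10.10.0.0/{prefix}")
print(subnet, "provides", subnet.num_addresses, "addresses")   # 10.10.0.0/23 -> 512
```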
Azure CNI Overlay offers more efficient IP address utilization by assigning a /24 address space to each node from a private CIDR specified during cluster creation. The /24 block is fixed and can support up to 250 pods per node. You can reuse the same pod CIDR space across multiple independent AKS clusters in the same VNet, significantly extending the IP space available for containerized applications.
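A short sketch of how far a single pod CIDR stretches under this /24-per-node model; the 192.168.0.0/16 range is just an example of a private CIDR you might pick at cluster creation:

```python
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/16")   # private CIDR chosen at cluster creation

# Azure CNI Overlay hands each node a fixed /24 slice of the pod CIDR.
node_blocks = list(pod_cidr.subnets(new_prefix=24))
print(len(node_blocks), "nodes can be addressed")   # 256 /24 blocks
print("up to", 250 * len(node_blocks), "pods")      # ~64,000 pods, none drawn from the VNet
```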
Kubenet also offers efficient IP address utilization since it only requires VNet IPs for the nodes themselves. Pods use an internal CIDR range for the overlay network, typically defaulting to 10.244.0.0/16, which doesn’t consume your VNet’s address space. This makes Kubenet a good choice in environments with limited IP availability or when working within strict IP address constraints. However, this efficiency comes with the trade-off of requiring additional routing configuration through User Defined Routes (UDRs).
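The contrast with traditional Azure CNI is easy to see with the same ipaddress module: only the nodes draw addresses from the VNet, and the default pod range stays entirely outside it (the node subnet below is an illustrative choice):

```python
import ipaddress

vnet_subnet = ipaddress.ip_network("10.10.0.0/27")   # 32 addresses, 27 usable after Azure's 5 reserved
pod_cidr = ipaddress.ip_network("10.244.0.0/16")     # kubenet's default overlay range

nodes = 10
print("node IPs needed:", nodes, "of", vnet_subnet.num_addresses - 5)
print("pod range overlaps VNet subnet:", pod_cidr.overlaps(vnet_subnet))  # False
```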
Network Performance
Traditional Azure CNI provides superior network performance due to its direct network integration model. Since pods have direct connectivity to the VNet without an overlay network, there’s minimal overhead in your networking stack. Network packets flow directly between pods and other VNet resources without additional encapsulation or translation layers, resulting in lower latency and higher throughput for network-intensive workloads.
Azure CNI Overlay maintains performance comparable to VMs in a VNet: there’s no need to provision custom routes or rely on encapsulation methods to tunnel traffic between pods, so pod-to-pod connectivity performs on par with VM-to-VM traffic in the VNet.
Kubenet introduces additional networking overhead because it requires NAT and routing through the node’s network interface. Each packet must be processed by the overlay network and translated between the pod’s internal IP address and the node’s external IP address. While this performance impact is minimal for many applications, it can become noticeable in scenarios with high network throughput requirements or latency-sensitive workloads.
Network Security Group (NSG) Control
Azure CNI works with NSGs at the subnet and network interface level, where rules can target pod IPs since they are part of the VNet. While NSGs cannot be attached directly to individual pods, you can create specific NSG rules that target individual pods or groups of pods through their VNet IPs.
With Azure CNI Overlay, pod-to-pod traffic isn’t encapsulated, and subnet NSG rules are applied. Specific rules are required for proper cluster functionality, including traffic between node CIDR ranges and pod CIDR ranges. Network policies are recommended for workload traffic control.
With Kubenet, network security control is limited to the node level since pods don’t have direct VNet IP addresses. NSG rules can only be applied to the node’s network interface, which means all pods on a node share the same network security rules. This limitation makes it more challenging to implement pod-specific network policies and may require alternative solutions like Kubernetes Network Policies for more granular traffic control within the cluster.
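For that kind of granular, pod-level control, a Kubernetes NetworkPolicy is the usual tool regardless of the network plugin. Below is a minimal sketch using the official kubernetes Python client; it assumes a "demo" namespace, app=frontend/app=backend labels, and a network policy engine such as Calico enabled on the cluster:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside the cluster

# Allow ingress to pods labelled app=backend only from pods labelled app=frontend.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-backend", namespace="demo"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
                    )
                ],
                ports=[client.V1NetworkPolicyPort(port=8080, protocol="TCP")],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="demo", body=policy)
```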
VNet Integration
Traditional Azure CNI provides comprehensive VNet integration, allowing pods to directly communicate with any resource in your Azure environment that’s connected to the VNet. This includes Azure services like SQL Managed Instances, Azure Cache for Redis, or resources in peered VNets. Since pods get their IP addresses directly from the VNet subnet, they can establish direct connections without additional networking configuration, making it ideal for scenarios requiring seamless integration with other Azure services.
Azure CNI Overlay provides VNet integration through NAT, where outbound pod traffic uses the node’s IP address. While pods can’t be directly accessed from outside the cluster, you can publish pod applications as Kubernetes Load Balancer services to make them reachable on the VNet.
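As an illustration of that publishing step, the sketch below creates a LoadBalancer Service with the kubernetes Python client; the annotation requests an internal Azure load balancer so the Service receives a private VNet IP (the namespace, labels, and ports are placeholders):

```python
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="backend-lb",
        namespace="demo",
        # Ask AKS for an internal load balancer so the Service IP comes from the VNet.
        annotations={"service.beta.kubernetes.io/azure-load-balancer-internal": "true"},
    ),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "backend"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="demo", body=service)
```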
Kubenet offers more limited VNet integration capabilities. While pods can still communicate with external resources, this communication requires UDRs to properly route traffic between the pod overlay network and the VNet. This additional routing complexity could potentially impact connectivity with certain Azure services and may require extra configuration when working with VNet peering or hybrid network scenarios. The routing requirements can also become more complex as your network architecture grows.
Route Table Management and Setup Complexity
Route table management varies significantly between the options. Traditional Azure CNI simplifies route management because pods communicate directly on the VNet and there’s no need for additional route table entries to handle pod traffic. The routing configuration remains straightforward even as the cluster scales. The trade-off comes in initial setup complexity, where careful IP address planning is required.
Azure CNI Overlay simplifies pod networking by not requiring UDRs for basic connectivity, unlike Kubenet. While it does require specific NSG rules to be configured for proper cluster functionality, the solution automatically handles pod-to-pod routing and maintains performance comparable to traditional VNet communication. UDRs may still be needed for specific scenarios like forced tunneling or custom routing requirements.
Kubenet uses UDRs and IP forwarding for pod connectivity, which are automatically created and maintained by the AKS service by default. While you can optionally bring your own route table for custom management, the standard setup doesn’t require manual route table maintenance. The main limitation is that Azure supports a maximum of 400 routes in a UDR, which effectively limits your cluster to 400 nodes. Initial setup is simpler than traditional Azure CNI since you don’t need extensive IP address planning; you only need to account for node IPs in your subnet. However, the UDR architecture does add an extra network hop that introduces minor latency to pod communication, and you can’t share subnets between multiple Kubenet clusters.
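The 400-route ceiling follows directly from how kubenet routes pod traffic: each node is handed its own slice of the pod CIDR, and the route table gains one entry pointing that slice at the node, so routes grow one-for-one with nodes. A small illustration, assuming the default /24-per-node split:

```python
import ipaddress

pod_cidr = ipaddress.ip_network("10.244.0.0/16")  # kubenet's default pod range
max_udr_routes = 400                              # Azure's per-route-table limit

# One route per node: "this node's /24 pod slice -> this node's VNet IP".
per_node_slices = list(pod_cidr.subnets(new_prefix=24))
print("routes needed at full scale:", len(per_node_slices))        # 256 with the default /16
print("node ceiling:", min(len(per_node_slices), max_udr_routes))  # whichever limit bites first
```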
Additional Features
Each networking option supports different advanced features. Traditional Azure CNI provides the broadest feature support, including Windows containers, Azure Network Policies, and Application Gateway integration. Azure CNI Overlay supports Windows containers and multiple network policy options (Azure, Calico, Cilium) but cannot use Application Gateway as an Ingress Controller. Both CNI modes support dual-stack networking, though Overlay has some limitations. Kubenet is the most restricted: it supports only Linux containers, scales to at most 400 nodes with 250 pods per node, is limited to Calico for network policies, and lacks Windows container support and several advanced Azure networking features.
Which to Choose?
The choice between networking options depends on your specific workload requirements and infrastructure constraints:
Traditional Azure CNI is best suited for enterprise workloads that require direct network integration, high performance, or advanced Azure service integration. Choose this when:
- You have available IP address space
- Most pod communication is to resources outside the cluster
- Resources outside the cluster need to reach pods directly
- You need advanced AKS features like virtual nodes
- Application Gateway integration is required
Azure CNI Overlay is ideal for large-scale deployments with limited IP address space. Choose this when:
- You need to scale to a large number of pods but have limited IP address space
- Most pod communication is within the cluster
- You want a simpler network configuration
- You need Windows container support but don’t need Application Gateway Ingress Controller
- You want to reuse pod CIDR ranges across clusters
Kubenet works well for basic Linux workloads and environments with IP address constraints. Choose this when:
- You have basic Linux-only workloads
- You have IP address constraints but don’t need the scale of CNI Overlay
- You don’t need advanced networking features
- You’re setting up development or test environments
The networking plugin choice for your AKS cluster has long-term implications for scalability, performance, and operational complexity. Understanding the distinct characteristics and limitations of each option is crucial for making an informed decision that aligns with your infrastructure requirements and future growth plans. Remember that switching between networking options typically requires cluster recreation, so this decision should be made early in your AKS planning process.
One additional note: on 31 March 2028, kubenet networking for AKS will be retired. Consider migrating to Azure CNI Overlay as a suitable replacement that addresses many of Kubenet’s limitations while maintaining efficient IP address utilization.
Contact DoiT today to discuss how we can help you maximize the efficiency, cost-effectiveness, and security of your cloud environment. With DoiT, you get access to exclusively-senior cloud expertise for consulting, upskilling, and support, and we’re here whenever you need expert advice, an outside opinion, assistance with adopting new technologies, or help fighting fires in production.
If you are interested in deep-diving into other cloud security and architecture topics, check out our cloud engineering blog posts.