Research — 3 Feb, 2022

A primer on container networking

Introduction

Adoption of containers — stand-alone packages that encapsulate an entire application runtime environment — is well underway among enterprises, with the majority of companies using containers for a portion of their applications. Containers are extremely useful for IT because they make efficient use of IT infrastructure and, when paired with composable application designs, allow IT to easily scale up and down on demand and move applications to new locations seamlessly. The flexibility afforded by containerization is by design: the application environment is largely self-contained, portable and meant to be simple to operate.

Container environments are purely software-defined infrastructure, with the compute, storage and networking virtualized in software. While all the core functionality is built in, there are areas, such as application performance, security and monitoring, that enterprises can improve upon using commercial and open-source software. In this primer on container networking, we will go through the technologies available for container networking. We are going to stick with terminology that is used with Kubernetes, since it is the most widely deployed container management platform, but the concepts will apply to other platforms.

Enterprise use of application microservices is driving the adoption of containers on-premises, in the cloud and at the edge. Enterprise network IT teams are being tasked to take over the operational management and monitoring of container networking to ensure proper configuration, performance and security. This emphasis is relatively new in the industry, along with the technologies to support integrated container network management, but it is an area that will be critical for enterprise success with container strategies. Networking vendors have typically used the Container Network Interface, or CNI, specification, largely to integrate physical networks with containers. However, richer integration is emerging, particularly in application networking and in security products that use sidecars and leverage service meshes.

Context

Container environments are, by design, meant to be automated application environments, self-operating after being given a configuration or application template, and isolated from the rest of the data center or cloud. When the environment is working properly, the details of application behavior are hidden and should be of no interest to IT. Failures in container instances are handled by simply destroying the container and starting a new one. On the one hand, application resiliency is simpler to manage because automation in the container system remediates issues; on the other hand, something has to keep track of session state to ensure connections are not disrupted by container operations.

Container networking has a very clear demarcation point between the network inside the cluster and pod and the network outside. Inside the cluster or pod, the network is intended to allow all containers in a pod to connect to each other and to allow pods in the cluster to talk to each other by default. The reason is that an application is delivered as a series of pods in a cluster, and they should be able to talk to each other as needed. Clients and services from outside of the cluster should only use the cluster IP address and DNS name, and never connect directly to containers. There are configuration options to connect directly to containers, but they should never be used in production.

Splitting the networking into inside and outside means the container management system can do all the networking needed to scale and recover pods and nodes in the cluster, and the clients and other services using it are not impacted. The service IP address and DNS name are all that matter to clients and applications outside of the cluster. This delineation is part of what makes containers and microservices easier to create, manage and deploy across teams within the organization.
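
To illustrate, the minimal sketch below shows a client workload reaching an application only through its stable service DNS name, never a pod address. The service name "orders," the "shop" namespace and the health-check path are hypothetical, and only the Go standard library is used.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical service: "orders" in the "shop" namespace, exposed on port 80.
	// The client addresses the stable service DNS name; Kubernetes resolves it to
	// the cluster IP and forwards traffic to whichever pods currently back the service.
	url := "http://orders.shop.svc.cluster.local/healthz"

	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s body=%s\n", resp.Status, body)
}
```

Because the pods behind the service can be replaced at any time, the client never learns or caches a pod IP address; the service name is the only contract.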

Kubernetes can also perform other networking functions like limiting intra-cluster communication, load balancing traffic among pods in the cluster, and providing a platform for other software to be integrated into the container environment and managed by the management system.
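
As a concrete example of limiting intra-cluster communication, the sketch below builds a Kubernetes NetworkPolicy object with the upstream Go API types and prints the resulting manifest. The namespace, policy name and app labels are hypothetical, and enforcement of the policy is left to whichever CNI plugin runs in the cluster.

```go
package main

import (
	"encoding/json"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical policy: only pods labeled app=frontend may reach pods labeled
	// app=orders in the "shop" namespace; all other intra-cluster traffic to the
	// orders pods is dropped by the CNI plugin that enforces the policy.
	policy := networkingv1.NetworkPolicy{
		TypeMeta:   metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "NetworkPolicy"},
		ObjectMeta: metav1.ObjectMeta{Name: "orders-allow-frontend", Namespace: "shop"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "orders"},
			},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "frontend"},
					},
				}},
			}},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}

	out, _ := json.MarshalIndent(policy, "", "  ")
	fmt.Println(string(out)) // manifest that would be applied via kubectl or the API
}
```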

Technology

Enterprises will have to rethink how they approach container networking because container applications are dynamic and ephemeral, constantly subject to change. This dynamism is a desirable outcome of using containers, but it makes traditional management concepts like the permanence of IP addressing impossible to rely on. In most cases, enterprise IT will want visibility into the container cluster for fault and performance management, and to extend security controls beyond the service address. These are features that aren't usually found in container management systems like Kubernetes.

Networking products are integrated using three methods. The first is integration using the open-source CNI, which defines a set of integration-related application programming interfaces, or APIs, that software developers can use to interoperate with the container management system, such as replacing the cluster manager's IP address management functions or implementing access controls within the clusters and pods. CNI plugins are also used to coordinate the configuration of the external virtual or physical network with the container network.
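
The CNI specification itself is an executable contract: the container runtime invokes a plugin binary, passing the operation and container context in environment variables and the network configuration as JSON on stdin, and the plugin returns the interface and address assignment as JSON on stdout. The sketch below illustrates that contract using only the Go standard library; it is not a production plugin, and the address it returns is a placeholder.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// NetConf mirrors the minimal fields of the network configuration the runtime
// passes on stdin, per the CNI specification.
type NetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

func main() {
	// The runtime signals the operation (ADD, DEL, CHECK, ...) and the container
	// context through environment variables defined by the CNI specification.
	cmd := os.Getenv("CNI_COMMAND")
	containerID := os.Getenv("CNI_CONTAINERID")
	netns := os.Getenv("CNI_NETNS")
	ifName := os.Getenv("CNI_IFNAME")

	var conf NetConf
	stdin, _ := io.ReadAll(os.Stdin)
	_ = json.Unmarshal(stdin, &conf)

	switch cmd {
	case "ADD":
		// A real plugin would create an interface in the target network namespace,
		// assign an address (or delegate to an IPAM plugin) and wire the interface
		// into the node or SDN fabric. Here we only echo a placeholder result.
		result := map[string]interface{}{
			"cniVersion": conf.CNIVersion,
			"interfaces": []map[string]string{{"name": ifName, "sandbox": netns}},
			"ips":        []map[string]string{{"address": "10.244.1.10/24"}}, // placeholder
		}
		_ = json.NewEncoder(os.Stdout).Encode(result)
	case "DEL":
		// A real plugin would tear down the interface created for containerID.
		_ = containerID
	default:
		fmt.Fprintf(os.Stderr, "unsupported CNI_COMMAND %q\n", cmd)
		os.Exit(1)
	}
}
```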

The second method is using a sidecar, which is a container deployed in the same pod alongside the application container, with the container management system directing traffic through it. Containerized application delivery controllers and network firewalls are often deployed as sidecars. The third way is deploying network software in the Linux kernel using eBPF, or extended Berkeley Packet Filter, which allows extremely deep and dynamic integration into the container environment that cannot be achieved via CNI plugins or sidecars.
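
A sidecar is simply a second container declared in (or injected into) the same pod, sharing the pod's network namespace so it can intercept traffic on localhost. The sketch below builds such a pod spec with the upstream Go API types; the image names are hypothetical, and in practice a service mesh or admission webhook usually injects the sidecar and redirects the pod's traffic through it.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod: the application container plus a proxy sidecar. Both
	// containers share the pod's network namespace, so the sidecar can inspect
	// and control traffic without any change to the application itself.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "orders", Namespace: "shop"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:  "app",
					Image: "example.com/orders:1.0", // hypothetical application image
					Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
				},
				{
					Name:  "proxy-sidecar",
					Image: "example.com/l7-proxy:1.0", // hypothetical ADC/firewall proxy image
					Ports: []corev1.ContainerPort{{ContainerPort: 15001}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```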

Switching and routing

Enterprises already have data center networks and extensive management processes around their operations. Container environments will need to be brought under the operational framework if IT has any hope of assuring network and application performance.

Switching and routing is often integrated into the container system via a CNI plugin, so the software-defined network, or SDN, system can manage the networking components of the container system independently of the application clusters and pods. Rather than the container management system allocating networking and creating the necessary networks, the external SDN takes over those functions through the CNI integration. This allows low-level control of the network and seamless interoperation with the container management system.

The integration of the data center network, cloud networks and container environments will be an increasingly important element for network IT as it takes on more operational control of container connectivity. Data center networking vendors like Arista, Cisco and VMware are adding container networking management capabilities to their SDN and management systems in order to provide a seamless experience. In addition, the adoption of network automation will drive enterprises to centralize and standardize on a single management architecture for both manual and automated operations.

Ingress and proxy controllers

Ingress resources and controllers are used in container environments to connect traffic arriving at a service address to the containers that will handle the request. The ingress resources or proxies perform load balancing across the containers in the pods and maintain session state for the application.

In simple application scenarios, the native ingress proxies will suffice for connectivity, but enterprises will often want more advanced application capabilities such as improved load balancing, security services like authentication management, web application firewall functions, and TLS offload. Application delivery controllers, or ADCs, can be used on a per-application cluster basis, allowing different DevOps teams to use their own ADC software.

External or internal ADCs can be integrated within a container environment and automated using native container management constructs. A connector application or an ingress controller such as A10 Networks Inc.'s Thunder Kubernetes Connector, Citrix Systems Inc.'s Ingress Controller, or F5 Inc.'s Container Ingress Services is deployed as a container service and listens for application management events from Kubernetes like adding or removing a container within an application. It then updates the external ADC software with the changes, making the entire process automated.
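
The sketch below shows the general shape of such a connector, not any vendor's actual implementation: a small in-cluster program that uses client-go to watch Endpoints objects and pushes the current backend list to an external ADC whenever pods are added or removed. The "shop" namespace and the pushToADC stand-in for the ADC's management API are hypothetical.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// pushToADC stands in for a call to the external ADC's management API;
// a real connector would use the vendor's own API client here.
func pushToADC(service string, backends []string) {
	fmt.Printf("sync %s -> %v\n", service, backends)
}

func main() {
	// Runs inside the cluster as a connector pod, using its service account.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch Endpoints in the hypothetical "shop" namespace; whenever pods are
	// added to or removed from a service, push the new backend list to the ADC.
	watcher, err := client.CoreV1().Endpoints("shop").Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for event := range watcher.ResultChan() {
		ep, ok := event.Object.(*corev1.Endpoints)
		if !ok {
			continue
		}
		var backends []string
		for _, subset := range ep.Subsets {
			for _, addr := range subset.Addresses {
				for _, port := range subset.Ports {
					backends = append(backends, fmt.Sprintf("%s:%d", addr.IP, port.Port))
				}
			}
		}
		pushToADC(ep.Name, backends)
	}
}
```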

As container usage matures, we expect to see enterprise IT embrace ADCs for container applications over the native ingress proxies bundled with container systems, due to the extensive feature sets that are available to both developers and operations staff. Having a consistent, well-understood container infrastructure reduces much of the operational complexity in application lifecycle management.

Network performance management

One of the most difficult aspects of container management is instrumenting, monitoring and documenting network performance using network data for network performance management, or NPM. For physical workloads and virtual machines, physical and virtual taps can be employed to capture packets and generate network flow data.

Applications on physical and virtual servers are often stationary and clearly defined on the network, so taps and network packet brokers are effective. In container environments, where instances are ephemeral and dynamic, ensuring that the application is properly instrumented at all times and relating traffic to a container instance that may no longer exist is complicated. Equally important, organizations that need to retain traffic for compliance reasons have difficulty doing so in container environments.

NPM vendors that rely on network data offer a variety of ways to instrument container environments, including using CNI plugins to direct traffic to a capture device or network packet broker, using a sidecar proxy to capture traffic within a container pod, or using eBPF to trigger data collection continuously or on demand. Captured traffic must be correlated with data that identifies the container, pod and cluster that generated it. This is usually accomplished by attaching tags from the container management system to the captured data, so traffic from a container that is no longer running can still be managed and viewed in context.
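
A minimal sketch of that tagging step is shown below, under the assumption that the capture pipeline can query the Kubernetes API while the pod still exists: the flow's source IP is resolved to a pod, and the pod's name, namespace and labels are stored with the flow record. The FlowRecord type and the sample values are hypothetical.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// FlowRecord is a hypothetical captured-flow summary produced by a tap,
// packet broker or eBPF probe.
type FlowRecord struct {
	SrcIP, DstIP string
	Bytes        int64
	Tags         map[string]string // container/pod/cluster context attached at capture time
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	flow := FlowRecord{SrcIP: "10.244.1.10", DstIP: "10.244.2.7", Bytes: 4096, Tags: map[string]string{}}

	// Resolve the source IP to a pod while the pod still exists, and store the
	// identifying tags with the flow record; the record stays meaningful even
	// after the pod is destroyed and its IP address is reused.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.podIP=" + flow.SrcIP})
	if err == nil && len(pods.Items) > 0 {
		pod := pods.Items[0]
		flow.Tags["pod"] = pod.Name
		flow.Tags["namespace"] = pod.Namespace
		for k, v := range pod.Labels {
			flow.Tags["label/"+k] = v
		}
	}
	fmt.Printf("%+v\n", flow)
}
```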

Monitoring consistently ranks among the most critical tools for development, operations and cloud-native deployments, according to 451 Research's 2021 Voice of the Enterprise: DevOps, Workloads & Key Projects survey. Organizations say they mostly purchase new tooling from existing vendors (50% of survey respondents) to respond to the monitoring needs of cloud-native applications. However, many organizations have engaged new vendors (44% of respondents). This indicates that cloud-native continues to be an arena where new vendors can differentiate with purpose-built systems designed with cloud-native technologies in mind.

Security

Security requirements drive a lot of IT architecture and purchasing decisions. In the Voice of the Enterprise: DevOps, Workloads & Key Projects survey, 52% of respondents think network-oriented security will fall under network IT's purview. This creeping responsibility makes sense considering that most security products do not have integration with container management systems but can be operationalized as a network service, with policy control under the management of the security team. Similar to ADCs, security functions will typically be integrated as a sidecar or as a service managed via the ingress or egress controller.

This article was published by S&P Global Market Intelligence and not by S&P Global Ratings, which is a separately managed division of S&P Global.
