Networking Containers: Policy Finally Comes of Age

by Mike Cohen

Few question that Linux containers signal a tectonic shift in how applications are built, deployed, and operated, and that their composition into microservices will likely become a best practice in application design.[1]  This shift could mean a lot for those of us in networking.  We'll see a faster migration to 40G and 25/50/100G architectures, a serious focus on scale as up to a million MAC/IP addresses could arrive at a single ToR, and a push for improved visibility and telemetry as the network becomes the unifying measurement point between separate microservices.[2]  But potentially most important will be the shift toward using higher-level, application-oriented policies to capture user intent, a change that could revolutionize automated network configuration.

Concepts around policy have been percolating in the open source community for some time.  They offer a path to vastly improved automation, scale, and security, as well as a means of reconciling the different languages used by development, infrastructure, and security teams.  Open source frameworks such as Group-Based Policy have been introduced to both OpenStack and OpenDaylight.  While they represent a better path forward, they also demand that teams capture their application requirements up front, which requires some operational and organizational changes to fully implement.  With containers, the situation is different.  If you look closely at orchestration frameworks like Docker Compose/Swarm or Kubernetes, you will see that policies are a native part of the interface.  Developers using these tools will, by default, document their application policies, allowing those policies to be distributed to automation tools for networking and storage.  This is great news: it means that the adoption of policy in the container world could be nearly instantaneous and ubiquitous.

Let’s take a quick look at a simple docker-compose.yml example and frame it in Group-Based Policy terminology:

Group-Based Policy Terminology
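To make the mapping concrete, here is a minimal, illustrative Compose file (the service names and images are hypothetical), annotated with the rough Group-Based Policy equivalent of each element:

```yaml
# Each Compose service roughly corresponds to a GBP endpoint group:
# a set of endpoints (containers) that share the same policy.
web:
  image: nginx
  # 'links' captures intent: web is allowed to talk to db.
  # In GBP terms, this is the consumer side of a contract.
  links:
    - db
  # Published ports act like GBP classifiers: they define
  # which traffic the contract permits.
  ports:
    - "80:80"

db:
  image: postgres
  # 'expose' makes a port reachable only by linked services,
  # i.e., the provider side of the web-to-db contract.
  expose:
    - "5432"
```

Nothing here was written with networking in mind, yet the groups, contracts, and classifiers a policy-driven network needs are all present.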


As you can see, Docker Compose (and other tools like Kubernetes) already captures key elements of application policy, making it a very natural fit for policy-based automation in the network.  With application policy handled natively in container platforms, the next step in the evolution toward policy automation rests in capturing infrastructure and operational policies, such as physical network integration, IPAM, security/isolation, performance, and monitoring.  In fact, one open source project is already pursuing this path.  Project Contiv was designed to complement application intent with the ability to specify infrastructure and operational policies for network, storage, and compute.  Today, Contiv offers plugins for Docker networking and storage designed to capture and automate operational policies.
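Contiv's netctl CLI gives a feel for what layering operational policy on top of that application intent can look like. The commands below are an illustrative sketch only: the network, policy, and group names are made up, and exact flags vary across Contiv releases.

```shell
# Infrastructure policy: define a network and its subnet (IPAM).
netctl network create --subnet=10.1.1.0/24 dev-net

# Operational policy: allow only inbound TCP/5432 to the database tier.
netctl policy create db-policy
netctl policy rule-add db-policy 1 --direction=in --protocol=tcp \
    --port=5432 --action=allow

# Bind the policy to an endpoint group on that network; containers
# launched into this group inherit the policy automatically.
netctl group create dev-net db --policy=db-policy
```

The point is the division of labor: developers state application intent in their Compose files, while operators attach network, isolation, and performance policy separately, without either team rewriting the other's artifacts.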

Project Contiv

It’s still very early in what will be a multiyear shift toward containers and microservices, a shift that could have profound implications for the networking world.  While many things remain uncertain, one thing is becoming increasingly clear.  The world is shifting toward using policy, at both the application and operational levels, and if all goes well, it will have a hugely positive effect on the way we deliver cloud-based infrastructure and applications.


[1] Note that “downloads of Docker images having risen from 67 million in December 2014 to 1.2 billion just a year later” in “Docker 2016 Predictions: The Rise of Containers-as-a-Service (CaaS) and the Defining Role of Applications in 2016” by David Messina, VP Marketing, Docker, December 2015.

[2] See “The Impact of Containers and Microservices” in “Why Network Silicon Innovation Is Critical to Next-Generation Datacenters” by Brad Casemore, February 2016.

Author bio


Mike Cohen

Senior Director of Product Management, Cisco Systems

Mike Cohen is Senior Director of Product Management at Cisco Systems, where he leads a team focused on developing open source policy-based solutions. Mike began his career as an early engineer on VMware’s hypervisor team and subsequently worked in infrastructure product management at Google and Big Switch Networks. Mike holds a BSE in Electrical Engineering from Princeton University and an MBA from Harvard Business School.
