ACI & Kubernetes

This post includes a video that shows how to segment an application deployed in Kubernetes with Cisco ACI, with full network visibility and security. Don’t feel like watching the video? Check out the written steps below.


If you’re somewhat familiar with Cisco Application Centric Infrastructure (ACI), you are probably aware of its VMM integration. With this function, whenever you create a new EPG, you can map it to a VMM domain. If you run VMware vCenter, the Cisco Application Policy Infrastructure Controller (APIC) pushes a port group to the DVS through the open vCenter APIs. This allows you to automate port group creation across vSphere hosts, segment VMs, and gain full visibility of virtual machines in the fabric. Nice. But let’s take it a step further and do the same with containers.

Containers are great, really great. You don’t have to deal with application dependencies (which used to be a real pain), they are portable, have no operating system overhead, are lightweight, and speed up application development. But networking and management of Docker containers can be cumbersome. Kubernetes to the rescue!

Kubernetes is an open-source platform for managing containerized workloads and services. You can see it as the manager of the containers. It does not create containers itself, so you still need a container runtime such as Docker, but it does orchestrate container deployments and network services.


Cisco has developed a CNI plugin for Kubernetes with a straightforward name: ACI-CNI. CNI stands for Container Network Interface, and its goal is to provide a generic, plugin-based networking solution for containers in Kubernetes.
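For context: every CNI plugin on a node is driven by a small JSON network configuration. A generic (non-ACI) example for the reference bridge plugin looks roughly like this; all values are illustrative:

```json
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}
```

The ACI-CNI plugin follows the same contract, but instead of a local bridge it wires Pod interfaces into the ACI fabric so that Pods show up as endpoints in EPGs.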

There are two options to implement the ACI-CNI plugin in a Kubernetes cluster:

  • Through a workflow where you create the policies in the APIC, set up or use an existing Kubernetes cluster, and deploy the CNI containers in the K8s cluster. If you follow the steps carefully, it’s not that difficult, but it requires some knowledge.
  • Use Cisco Container Platform (CCP), where all of the above is automated. You just need to provide APIC credentials and some basic Kubernetes/networking parameters, and CCP will take care of setting up the Kubernetes cluster, configuring the APIC, and deploying the ACI-CNI containers. I really love the simplicity of Cisco Container Platform, as it automates the whole procedure so you can focus on building applications and/or creating security and network policies.


I’ve used the Cisco Container Platform to create my Kubernetes clusters with the ACI-CNI plugin, but the integration is exactly the same as if you had installed ACI-CNI manually.

Ok, let’s get started.

Once CCP has provisioned my K8s cluster, I can log in to the APIC and see that a new tenant has been created with the name ‘wouter_k8s’. In this tenant I have a new Application Profile ‘kubernetes’ with three EPGs (‘kube-default’, ‘kube-nodes’ and ‘kube-system’) and two new bridge domains (‘kube-node-bd’ and ‘kube-pod-bd’). Two subnets are available: one for the master and worker nodes, and one for the K8s Pods. A few contracts have also been created to allow the cluster to function properly.

Cisco Container Platform automates ACI-CNI integration in the ACI Fabric

The steps below include either the kubectl command or a Python script (hosted on GitHub).

Let’s deploy the application by first creating a new namespace in the Kubernetes cluster.
kubectl create namespace guestbook

Once you’ve created the new namespace, it’s directly visible in the ACI fabric.

Good, time to deploy the application. I’ve created an Application Profile in CloudCenter Suite that deploys three Kubernetes Deployments (Redis master, Redis slave and an Apache frontend server), based on the Kubernetes Guestbook app.

But you can also use a .yaml file to deploy your application:

kubectl apply -f <manifest.yaml>
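As a sketch, a minimal Deployment manifest for the Redis master tier could look like the one below. The image and labels follow the upstream Guestbook example; treat the exact values as illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  namespace: guestbook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: master
        image: redis
        ports:
        - containerPort: 6379
```

Saved as, say, redis-master.yaml, it would be applied with kubectl apply -f redis-master.yaml.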

Once the Deployments are applied, you will see three Deployments (six Pods), and a load balancer service has been created for the frontend so it’s accessible from the outside.

Ok, the application is deployed in Kubernetes and instantly visible in the ACI fabric.

K8S Pod visibility in ACI

Now we want to apply segmentation to this containerized application. We are going to put the database Pods in one EPG and the front-end Pods in another. You can do that through the ACI UI, use this Python script, Ansible, Terraform, … I’ll do it through the Action Orchestrator in CloudCenter Suite. If you manually create new EPGs:

  • Don’t forget to add the Kubernetes VMM domain.
  • The new EPGs need access to the same K8s services (DNS, health checks, …) as ‘kube-default’. You can either add all the contracts manually to the EPGs or inherit the contracts from ‘kube-default’ via an EPG contract master.
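If you script the EPG creation instead of clicking through the UI, the APIC REST payload is small. The sketch below is hypothetical: the tenant, application profile and bridge domain names match the objects CCP created earlier, while the APIC address, token and VMM domain DN are placeholders you would adapt to your fabric.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of creating the database EPG through the APIC REST API.
TENANT="wouter_k8s"
AP="kubernetes"
EPG="guestbook-database"

# fvAEPg object: attach the EPG to the Pod bridge domain and to the
# Kubernetes VMM domain (so the fabric learns the Pod endpoints).
PAYLOAD=$(cat <<EOF
{
  "fvAEPg": {
    "attributes": { "name": "${EPG}" },
    "children": [
      { "fvRsBd":     { "attributes": { "tnFvBDName": "kube-pod-bd" } } },
      { "fvRsDomAtt": { "attributes": { "tDn": "uni/vmmp-Kubernetes/dom-${TENANT}" } } }
    ]
  }
}
EOF
)
echo "${PAYLOAD}"

# After logging in to obtain an APIC-cookie token, the payload would be
# POSTed like this (left commented out, since it needs a live APIC):
# curl -sk -b "APIC-cookie=${TOKEN}" -X POST \
#   "https://apic.example.com/api/node/mo/uni/tn-${TENANT}/ap-${AP}/epg-${EPG}.json" \
#   -d "${PAYLOAD}"
```

The same payload shape works for the front-end EPG; only the name changes.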

You’ll end up with something like this:

Awesome: the application is deployed, it’s visible in the ACI fabric, and our segmentation constructs are in place. Now we need to map the Kubernetes Deployments to the EPGs. In this example we annotate a Deployment, but you could also annotate a Pod or a namespace.

First, list the deployments in the namespace:
kubectl get deployments -n guestbook
Then, apply the annotation to the deployment.
kubectl --namespace="namespace" annotate deployment "deployment-name" opflex.cisco.com/endpoint-group='{"tenant":"tenant","app-profile":"application profile","name":"EPG"}' --overwrite=true

Example: kubectl --namespace=guestbook annotate deployment test-321-redis-master-322 opflex.cisco.com/endpoint-group='{"tenant":"wouter_k8s","app-profile":"kubernetes","name":"guestbook-database"}' --overwrite=true

Once done, you should see the Pod pop up in the right EPG.

Deployment annotation results in EPG – Deployment mapping

Ok, we just need to configure one more thing. As you know, endpoints in different Endpoint Groups can’t communicate with each other unless a contract is specified. So we need to create a contract, subject, filter, and filter entry (with the Redis port) and assign it to the EPGs. Once done, we are finished, and our guestbook Apache web server can communicate with the Redis database, with full visibility in the network and strict control of security rules.
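If you prefer to script this step too, the filter that opens the Redis port can be expressed as an APIC REST payload like the hypothetical sketch below. Object names are illustrative; port 6379 is the default Redis port. A contract (vzBrCP) with a subject (vzSubj) referencing this filter is then provided by the database EPG and consumed by the front-end EPG.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: a vzFilter with one vzEntry matching TCP/6379,
# as it would be POSTed to the APIC under the tenant.
PAYLOAD=$(cat <<EOF
{
  "vzFilter": {
    "attributes": { "name": "redis-filter" },
    "children": [
      { "vzEntry": { "attributes": {
          "name": "redis-6379",
          "etherT": "ip",
          "prot": "tcp",
          "dFromPort": "6379",
          "dToPort": "6379" } } }
    ]
  }
}
EOF
)
echo "${PAYLOAD}"
```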

There are different ways to do this integration. You could use the ACI GUI and kubectl for everything, or you could automate it even further. In the video, I used Cisco Container Platform to provision the Kubernetes cluster with ACI-CNI integration, and CloudCenter Suite for my K8s application deployment and all REST API calls to Cisco ACI and the K8s API server.

Kubernetes with ACI-CNI is Kubernetes networking done right with full visibility and security in place.
