Thursday, 12 May 2016

Openstack Docker Integration with VDX


Openstack Kuryr (Docker) Integration with Brocade VDX (AMPP)

OpenStack Kuryr integrates Docker networking with Neutron. Kuryr provides a remote driver that implements the Docker Container Networking Model (CNM), translating libnetwork callbacks into the appropriate Neutron API calls.
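Conceptually, each libnetwork lifecycle callback maps to a Neutron API operation. A rough sketch of that mapping (an illustration for explanation, not Kuryr's actual internals — the callback names are the real libnetwork remote-driver RPC names, the right-hand side is simplified):

```python
# Illustrative mapping of Docker libnetwork (CNM) remote-driver callbacks
# to the Neutron operations Kuryr issues on their behalf. Simplified for
# explanation; not a copy of Kuryr's code.
CNM_TO_NEUTRON = {
    "NetworkDriver.CreateNetwork": "POST /v2.0/networks",
    "NetworkDriver.DeleteNetwork": "DELETE /v2.0/networks/{id}",
    "NetworkDriver.CreateEndpoint": "POST /v2.0/ports",
    "NetworkDriver.DeleteEndpoint": "DELETE /v2.0/ports/{id}",
    "NetworkDriver.Join": "bind the Neutron port to the host interface",
    "NetworkDriver.Leave": "unbind the Neutron port",
}

def neutron_call_for(cnm_callback):
    """Return the Neutron operation a given CNM callback translates to."""
    return CNM_TO_NEUTRON[cnm_callback]

print(neutron_call_for("NetworkDriver.CreateNetwork"))  # POST /v2.0/networks
```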

Here we showcase the integration of Kuryr with a Brocade VDX device.

[Topology diagram: controller and compute hosts connected to the Brocade VDX fabric]

There are two hosts, controller and compute, which are part of the Docker swarm. These hosts also function as OpenStack nodes.
They are connected to the VDX fabric on interfaces Te 135/0/10 and Te 136/0/10 respectively.

Docker Swarm

The Docker swarm is set up with two nodes, controller and compute, as seen in the docker_swarm info details.

root@controller:~# docker_swarm info
Nodes: 2
  └ Status: Healthy
  └ Containers: 3
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 12.31 GiB
  └ Labels: executiondriver=, kernelversion=4.2.0-27-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-05-12T11:37:31Z
  └ ServerVersion: 1.12.0-dev
  └ Status: Healthy
  └ Containers: 4
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 16.44 GiB
  └ Labels: executiondriver=, kernelversion=4.2.0-27-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-05-12T11:37:23Z
  └ ServerVersion: 1.12.0-dev

Setup of Openstack Plugin


The Brocade plugins require a specific version of ncclient (a NETCONF client library). It can be obtained from the following GitHub location.

git clone
cd ncclient
sudo python setup.py install

Install Brocade Plugin

git clone --branch=<stable/branch_name>
cd networking-brocade
sudo python setup.py install

Note: the branch argument is optional; omit it to get the latest files (master branch) from the repository.

Upgrade the Database

Upgrade the database so that the Brocade-specific table entries are created in the Neutron database:

 neutron-db-manage  --config-file /etc/neutron/neutron.conf  
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

Openstack Controller Configurations (L2 AMPP Setup)

The following configuration lines need to be present in ‘/etc/neutron/plugins/ml2/ml2_conf.ini’ to enable the Brocade VDX mechanism driver (brocade_vdx_ampp).

tenant_network_types = vlan
type_drivers = vlan
mechanism_drivers = openvswitch,brocade_vdx_ampp
network_vlan_ranges = physnet1:2:500
bridge_mappings = physnet1:br1


  • The mechanism driver needs to be set to ‘brocade_vdx_ampp’ along with openvswitch.
  • ‘br1’ is the Open vSwitch bridge.
  • ‘2:500’ is the VLAN range used.
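The `network_vlan_ranges` value reads as `<physical_network>:<vlan_min>:<vlan_max>`. A minimal sketch of how such an entry parses (an illustrative helper, not the ML2 plugin's code):

```python
def parse_vlan_range(entry):
    """Parse an ML2-style 'physnet:min:max' VLAN range entry.

    Illustrative helper showing how 'physnet1:2:500' is interpreted;
    not taken from the Neutron ML2 source.
    """
    physnet, vlan_min, vlan_max = entry.split(":")
    vlan_min, vlan_max = int(vlan_min), int(vlan_max)
    if not (1 <= vlan_min <= vlan_max <= 4094):
        raise ValueError("VLAN IDs must lie within 1-4094")
    return physnet, vlan_min, vlan_max

# 'physnet1:2:500' -> tenant networks are allocated VLANs 2..500 on physnet1
print(parse_vlan_range("physnet1:2:500"))  # ('physnet1', 2, 500)
```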

The following configuration lines for the VDX fabric need to be added to either ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’ or ‘/etc/neutron/plugins/ml2/ml2_conf.ini’.

If added to ‘/etc/neutron/plugins/ml2/ml2_conf_brocade.ini’, that file must be passed as a config parameter during neutron-server startup.

username = admin 
password = password 
address  =
ostype   = NOS 
physical_networks = physnet1 
initialize_vcs = True
nretries = 5
ndelay = 10
nbackoff = 2

[ml2_brocade] entries:

  • address - the VCS virtual IP (IP for the L2 fabric).
  • osversion - NOS version running on the L2 fabric.
  • nretries - number of times a failed netconf request to the switch is retried.
  • ndelay - delay in seconds between successive netconf retries.
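Assuming `nbackoff` is an exponential-backoff multiplier applied to `ndelay` on each successive retry (a plausible reading of these three settings, not confirmed by plugin documentation here), the retry schedule for the values above would look like:

```python
def retry_delays(nretries, ndelay, nbackoff):
    """Delays (in seconds) before each netconf retry, assuming ndelay is
    the initial delay and nbackoff multiplies it on every retry.
    Illustrative sketch of the presumed semantics, not the plugin's code.
    """
    delays = []
    delay = ndelay
    for _ in range(nretries):
        delays.append(delay)
        delay *= nbackoff
    return delays

# nretries=5, ndelay=10, nbackoff=2 as in the config above
print(retry_delays(5, 10, 2))  # [10, 20, 40, 80, 160]
```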

Openstack Compute Configurations (L2 AMPP Setup)

The following configuration lines need to be present in one of the configuration files used by the Open vSwitch agent,
e.g. /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

bridge_mappings = physnet1:br1
network_vlan_ranges = 2:500
tenant_network_type = vlan


  • ‘br1’ is the Open vSwitch bridge.
  • ‘2:500’ is the VLAN range used.

VDX Configurations

Put all the interfaces connected to the host nodes into port-profile mode. This is a one-time configuration. (Te 135/0/10 and Te 136/0/10 in the topology above.)

sw0(config)#  interface TenGigabitEthernet 135/0/10
sw0(conf-if-te-135/0/10)# port-profile-port
sw0(config)#  interface TenGigabitEthernet 136/0/10
sw0(conf-if-te-136/0/10)# port-profile-port

Setup of Kuryr

Install the Kuryr project on both the compute and controller nodes (each of the host nodes):

sudo pip install
sudo pip install -r requirements.txt
sudo ./scripts/

Update ‘/etc/kuryr/kuryr.conf’ to contain the following lines: the Kuryr driver runs in the global scope, and neutron_uri points to the Neutron server. In this case that is the IP address of the controller node.

capability_scope = global

# Neutron URL for accessing the network service. (string value)
neutron_uri =

Restart both the remote driver (stop the one started in the step above) and the Docker service:

sudo ./scripts/
sudo service docker restart

Docker CLI Commands

Create Network

Create a Docker network called “black_network” on the Docker swarm, specifying its subnet and gateway:

root@controller:~# docker_swarm network create --driver kuryr --subnet= --gateway=   black_network

This creates a Neutron network with segmentation ID 43 (VLAN 43):

root@controller:~# neutron net-show kuryr-net-2e36e5ac
| Field                     | Value                                              |
| admin_state_up            | True                                               |
| availability_zone_hints   |                                                    |
| availability_zones        | nova                                               |
| created_at                | 2016-05-12T11:16:55                                |
| description               |                                                    |
| id                        | 23beebb7-c4ec-41be-a12a-96f897b1dace               |
| ipv4_address_scope        |                                                    |
| ipv6_address_scope        |                                                    |
| mtu                       | 1500                                               |
| name                      | kuryr-net-2e36e5ac                                 |
| port_security_enabled     | True                                               |
| provider:network_type     | vlan                                               |
| provider:physical_network | physnet1                                           |
| provider:segmentation_id  | 43                                                 |
| router:external           | False                                              |
| shared                    | False                                              |
| status                    | ACTIVE                                             |
| subnets                   | 5072db88-54be-4be0-a39b-f52b60a674ef               |
| tags                      |                                                    |
| tenant_id                 | 1035ac77d5904b0184af843e58c37665                   |
| updated_at                | 2016-05-12T11:16:56                                |
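The name kuryr-net-2e36e5ac in the output above appears to follow Kuryr's convention of prefixing a truncated form of the Docker network ID with "kuryr-net-". A sketch of that derivation (the prefix length and the sample Docker network ID are assumptions for illustration):

```python
def kuryr_net_name(docker_network_id):
    """Derive the Neutron network name Kuryr appears to use:
    'kuryr-net-' plus a truncated Docker network ID.
    Assumption: an 8-character truncation, inferred from the
    'kuryr-net-2e36e5ac' name seen in the neutron net-show output.
    """
    return "kuryr-net-" + docker_network_id[:8]

# Hypothetical 64-hex-character Docker network ID starting with 2e36e5ac
print(kuryr_net_name("2e36e5ac" + "0" * 56))  # kuryr-net-2e36e5ac
```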

This also creates a port-profile on the Brocade switch with appropriate parameters.

sw0(config)# do show running-config port-profile openstack-profile-43
port-profile openstack-profile-43
  switchport mode trunk
  switchport trunk allowed vlan add 43
port-profile openstack-profile-43 activate
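The profile name and its contents are derived directly from the Neutron segmentation ID. A sketch of the configuration text the mechanism driver effectively pushes over netconf (a text rendering for illustration, not the driver's actual code):

```python
def ampp_port_profile(vlan_id):
    """Render the AMPP port-profile configuration created for a given
    Neutron segmentation ID. Illustrative text generation matching the
    'show running-config port-profile' output above; not the driver's code.
    """
    name = "openstack-profile-{}".format(vlan_id)
    return "\n".join([
        "port-profile {}".format(name),
        "  switchport mode trunk",
        "  switchport trunk allowed vlan add {}".format(vlan_id),
        "port-profile {} activate".format(name),
    ])

print(ampp_port_profile(43))
```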

Create Docker Containers

Create Docker containers based on the busybox image on both nodes in the Docker swarm:
‘black_1’ on the compute node and ‘black_2’ on the controller node.

root@controller:~# docker_swarm run -itd --name=black_1 --env="constraint:node==compute" --net=black_network busybox
root@controller:~# docker_swarm run -itd --name=black_2 --env="constraint:node==controller" --net=black_network busybox

Creating the Docker containers results in the port-profile (openstack-profile-43) being applied to the interfaces connected to the host servers (Te 135/0/10 and Te 136/0/10 respectively):

sw0(config)# do show port-profile status
Port-Profile           PPID        Activated        Associated MAC        Interface
UpgradedVlanProfile    1           No               None                  None
openstack-profile-43   2           Yes              fa16.3e2b.38b6        Te 135/0/10
                                                    fa16.3ebf.796c        Te 136/0/10
                                                    fa16.3ed6.7f0b        Te 135/0/10

Network connectivity has now been established between the two containers (black_1 and black_2) running on two different hosts in the Docker swarm. Traffic between these two containers transits the Brocade VDX fabric.

Container Trace displays the connectivity as seen from the Brocade VCS fabric, providing details such as the container name, host network name, VLAN ID, and NIC details.

sw0:FID128:root> container_trace 
| Name    | Host Network  | Vlan | Host IP      | Switch Interface | Container IPv4Address | Container MacAddress |
| black_1 | black_network | 43   | | Te 136/0/10      |          | fa:16:3e:bf:79:6c    |
| black_2 | black_network | 43   | | Te 135/0/10      |          | fa:16:3e:d6:7f:0b    |

Ping between Containers

Attach to one of the containers (black_1) and ping the other container (black_2):

root@controller:~# docker_swarm attach black_1
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:BF:79:6C
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::f816:3eff:febf:796c/64 Scope:Link
          RX packets:52 errors:0 dropped:14 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5956 (5.8 KiB)  TX bytes:738 (738.0 B)

lo        Link encap:Local Loopback
          inet addr:  Mask:
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=1.825 ms
64 bytes from seq=1 ttl=64 time=0.819 ms
64 bytes from seq=2 ttl=64 time=0.492 ms
64 bytes from seq=3 ttl=64 time=0.458 ms
64 bytes from seq=4 ttl=64 time=0.489 ms
64 bytes from seq=5 ttl=64 time=0.480 ms
64 bytes from seq=6 ttl=64 time=0.438 ms
64 bytes from seq=7 ttl=64 time=0.501 ms
