OpenDaylight OpenStack Integration with DevStack on Fedora


The following is a walkthrough of the OVSDB project within OpenDaylight for OpenStack integration. There are a couple of bugs, so it is not for the faint of heart; it is intended for those looking to get their development environment up and running. We will have videos and whatnot walking through the installation, along with code reviews of the implementation. The integration was developed by folks from various vendors and users in the community, some at companies that aren't dedicating resources to the project but are doing it in their personal time. Both Madhu and I are maintaining this post to keep it as accurate and up to date as possible with our own individual notes as we move into the next phase of development for Helium. Madhu is going to do a separate Fedora 19 post on a new blog he is setting up, which I am pumped about! Until then, keep an eye out for the Fedora 19 and Fedora 20 specific instructions below.

Recordings of the Installation

The following are some late-night / early-morning recordings from Madhu and me, in case anyone gets stuck and wants to follow along at home. They were all done on our laptops, so resources are a bit tight. Please send questions and issues to the OVSDB listserv so that one of us or other awesome folks in the community can assist; our Q/A bandwidth is focused there to try to build a good collective of knowledge to share. If you have further interest in the project, assisting others on the listserv, documenting, and code contributions are all amazing and much appreciated and respected.

  • OpenStack/OpenDaylight/OVSDB Installation Part 1 – Configuring VirtualBox and VM Fusion
  • OpenStack/OpenDaylight/OVSDB Installation Part 2 – Stacking and Spinning up Multi-Node OpenStack w/the OpenDaylight Controller
  • For more OpenDaylight / OVSDB videos and weekly Hangouts please see our YouTube channel:

OpenDaylight OVSDB Channel →

OVSDB Project Control and Management Logic

No standards were hurt in the making of this recording. The only southbound protocols we used in the OVSDB project OpenStack implementation were OpenFlow v1.3 and OVSDB. We chose not to use any extensions or agents; Open vSwitch supported the OpenFlow v1.3 and OVSDB functionality we required for this architecture. Those of us in the OVSDB project are pretty agnostic about southbound protocols, as long as they have healthy adoption (so as not to waste our time) and are based on open standards such as OpenFlow v1.3, RFC 7047 (the informational OVSDB RFC), and/or de facto drafts like draft-mahalingam-dutt-dcops-vxlan (VXLAN framing). We are keen to see NXM extension functionality upstreamed into the OpenFlow specification. An OVS ARP responder is something we are beginning to prototype now; NXM and OXM extensions merging for ARP and tunnel feature parity would make our design and coding lives easier. The overall architecture looks something like the following. I have hardware TEPs in the diagram; we have cycles to help hardware vendors implement the hardware_vtep database schema (assuming they subscribe to open operating systems):

OVSDB OpenFlow OpenDaylight Overlay

The provider segmentation key used in the encap (GRE key/VNI) is a hash of the network and tenant IDs, since as long as we are subnet bound, networks will always need to support multi-tenant logical networks until we eradicate L2 altogether. The design is flexible and as generic as possible to allow any vendor to add differentiation on top of the base network virtualization. Of course, we have plenty to do between now and stability, so moving right along.

  • For a quick visual of the OVSDB Neutron implementation code flow and how it ties into the controller project and OpenStack, see the excellent diagram Madhu did:
OVSDB OpenDaylight Architecture
Configure the Fedora Images for your Environment

There are two options for images. Fedora 19 and Fedora 20. We tend to recommend F19 due to an issue with MariaDB and hostnames in the F20 VM. More on that when we edit the hostname in the tutorial. For assistance with getting the stack going please ping the OVSDB Listserv and check the archives for answers.

Download the pre-built image we made that contains OpenDaylight, DevStack (installing Icehouse OpenStack), and Open vSwitch, all on Fedora:

Fedora 19 based all-in-one VM:

or if you prefer, you can download Fedora 20 based all-in-one VM:

Clone this virtual machine image into two images: one for the control node (this VM runs both the OpenStack controller and the OpenDaylight controller) and the other for the compute instance. If you use VM Fusion, the vanilla image works as is with no need to change any adapter settings. Use the 'ip addr' output as a reference in the next section. I recommend using SSH to connect to the host rather than using the TTY interface.

VirtualBox NIC Network Caveats
If using VirtualBox, reinitialize the MAC addresses during the cloning process to ensure each clone has a unique address, and use host-only network adapters rather than NAT network adapters. I usually use a different hypervisor for test/dev and managed to blow up a demo at the Summit by using NAT w/Vbox.

Here are two screenshots with VirtualBox network adapter examples. The first shows the two networks you can create. vboxnet0 is there by default; create the second network with the add-adapter button (the '+' with a NIC picture) in the following example. Note: you have to manually fill in the DHCP server settings on the new network. Refer to the existing network if unsure of the values to use. When complete, the host OS should be able to reach the guest OS.

Virtual Box Network DHCP

The second example is what the VirtualBox NIC setup can look like without having to deal with the NAT Network option in VirtualBox. VM Fusion has integrated hooks that resolve the need for host-only adapters and the like. With NAT, the guest can reach your network's default gateway and get to the Internet as needed; with host-only that is not the case, but host-only is plenty to run the stack and the integration.

Virtual Box Host Only NIC Configuration

Boot both guest VMs and write down the four IP addresses from the two NICs on each. You will primarily use only one of them, other than for a gateway or out-of-band SSH connectivity, etc.

If you are using the Fedora 19 VM, then please use the following credentials to login:

If you are using the Fedora 20 VM, then please use:

Oops 🙂

In this example the configuration of the IP addrs are as follows:

Record the IP addresses of both of the hosts:

Controller IP addresses:

Compute IP addresses:

Go to the home directory of the user id odl:

Start the OVS service (DevStack should start this service, but I have seen it fail to do so on occasion, FWIW). You can also enable the OVS startup script so it loads at OS init.
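
A minimal sketch; the service command is confirmed in the comments below, while the chkconfig line for OS init is an assumption based on standard Fedora tooling:

    sudo /sbin/service openvswitch start
    sudo chkconfig openvswitch on     # optional: start OVS at boot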

Configure the /etc/hosts file to reflect your controller and compute hostname mappings. While not strictly required, skipping this can cause issues with Nova output.

Verify the OpenStack controller /etc/hosts file. The only edit is adding the compute IP-to-hostname mapping, e.g. x.x.x.x fedora-odl-2.

On the compute node, you will need to edit /etc/hosts to change fedora-odl-1 to fedora-odl-2, change the compute hostname itself (compute only), and reboot the cloned compute node for the change to take effect. After the host restarts, verify the hostnames. A hedged sketch of the whole sequence follows; the addresses are placeholders for your own:
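
    # On the controller: add the compute IP-to-hostname mapping (IP is a placeholder)
    echo "172.16.86.130 fedora-odl-2" | sudo tee -a /etc/hosts

    # On the compute clone: fix the mapping and the hostname itself
    sudo sed -i 's/fedora-odl-1/fedora-odl-2/' /etc/hosts
    sudo hostnamectl set-hostname fedora-odl-2
    sudo reboot

    # After the restart, verify
    hostname
    hostnamectl status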

Unfortunately, in the Fedora 20 VM, if you comment out the “127.0.0.1 localhost fedora-odl-1” line you will blow up MySQL. (Thanks for digging into that, Vijay!) So avoid any changes to the hostname locally resolving to 127.0.0.1. Mestery has a link to this issue on one of his blogs as well. The net is: leave localhost alone. If you change it and try to revert, it will still get angry.

Start OpenDaylight Controller on the OpenStack Controller Node

Check that the configuration is set for OpenFlow v1.3 by ensuring the line ovsdb.of.version=1.3 is uncommented in the config.ini file, located at /home/odl/opendaylight/configuration/config.ini.
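
A hedged pair of one-liners to verify the setting and, if needed, uncomment it in place:

    grep ovsdb.of.version /home/odl/opendaylight/configuration/config.ini
    sudo sed -i 's/^#[# ]*ovsdb.of.version/ovsdb.of.version/' /home/odl/opendaylight/configuration/config.ini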

Lastly, start the ODL controller w/ the following:
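
(A sketch, assuming the Hydrogen virtualization-edition layout shipped in the VM.)

    cd /home/odl/opendaylight
    ./run.sh -virt ovsdb    # -virt ovsdb selects the OVSDB virtualization profile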

When the controller is finished loading, here are some typical messages in the OSGi console:

You can verify the sockets/ports are bound with the following command. Ports 6633, 6640 and 6653 should all be bound and listening:
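
For example, mirroring the lsof6 alias shared in the comments below:

    sudo lsof -P -iTCP -sTCP:LISTEN | grep 66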

Configure DevStack for the OpenStack Controller

Make sure all bridges are removed (only necessary if you have previously “stacked”).
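
A hedged sketch (br-int and br-tun are the bridges a previous stack would have created):

    sudo ovs-vsctl list-br
    for br in $(sudo ovs-vsctl list-br); do sudo ovs-vsctl del-br $br; done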

Once the OpenDaylight Controller is running, stack the OpenStack Controller:

If you are using the Fedora 19 VM:

For the Fedora 20 VM:

Edit the local.conf you just copied with the appropriate IPs. Replace all bracketed instances with the OpenDaylight SDN controller IP, the OpenStack controller IP, or the OpenStack compute IP (the compute ethX address appears only on the compute node). In this example I am using the addresses for eth1 on the guest VM. Again, ensure you have IP reachability between the controller and compute API services when troubleshooting issues.

In the local.conf you will see four lines that require hardcoding an IP address. You could always replace some of those with variables, but we thought it important for understanding the DevStack configuration and the Neutron REST call to the ODL OpenStack API implemented in org.opendaylight.controller.networkconfig.neutron.implementation.

The following is the OpenStack controller local.conf for this tutorial:
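
(The full file ships in the VM's devstack directory; the fragment below is only an illustrative sketch of where the hardcoded addresses live. The values are placeholders, not the shipped configuration.)

    HOST_IP=172.16.86.129          # this node's eth1 address
    SERVICE_HOST=172.16.86.129     # OpenStack controller IP
    ODL_MGR_IP=172.16.86.129       # OpenDaylight controller IP
    Q_PLUGIN=ml2                   # ML2 with the OpenDaylight mechanism driver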

Verify the local.conf by grepping for the IP prefix you used.
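
For example, assuming the 172.16.86.0/24 addressing that appears in the logs later in this post:

    grep 172.16.86 ~/devstack/local.conf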

Finally execute the stack.sh shell script:
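
(The path assumes the devstack directory shipped in the VM.)

    cd ~/devstack
    ./stack.sh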

You should see activity in your OSGi console as Neutron adds the default private and public networks, like so:

You will see more activity as ODL programs the OVSDB server running on the OpenStack node.

Here is the state of Open vSwitch after the stack completes and prior to booting a VM instance. If you do not see the is_connected: true boolean after Manager (OVSDB) and Controller (OpenFlow), something has gone wrong; check that the controller/manager IPs are reachable and that the ports are bound using the lsof command listed earlier.
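
For example:

    sudo ovs-vsctl show                          # look for is_connected: true under Manager and Controller
    sudo lsof -P -iTCP -sTCP:LISTEN | grep 66    # 6633, 6640 and 6653 should be listening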

Next up is stacking the compute node.

Configure the OpenStack Compute Node

The compute configuration steps are virtually identical to the controller's, other than the configuration values and the fact that the compute node does not run the OpenDaylight controller.

If you are using the Fedora 19 VM:

For the Fedora 20 VM:

Edit the local.conf you just copied with the appropriate IPs in the devstack directory on the compute host like the following example with your controller and compute host IPs:

Or check the conf file quickly by grepping it.
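
For example:

    grep 172.16.86 ~/devstack/local.conf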

And now stack the compute host:
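
(Same as on the controller; the path assumes the VM's devstack directory.)

    cd ~/devstack
    ./stack.sh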

Once you get the stack working, SNAPSHOT the image 🙂 If you break even a fraction of the things that I do when I touch them, snapshots are a handy timesaver. Also leave DevStack at “OFFLINE=True” and “RECLONE=no” except for when you need to pull a patch. Once you get to a functioning stack it is rock solid, and in my experience snapshots are incredibly useful and necessary. I would say we merry few consumed by this project are laser focused on simplification, not dragging along every use case under the sun starting out. Let's do the basics right and well while we have the chance.

The state of OVS after the stack should be the following:

Next up, verify functionality.

Verifying OpenStack is Functioning

Verify the stack with the following on either host. First we will verify that the two KVM hypervisors are registered with Nova. *Note: openrc will populate the proper Keystone credentials for service client commands; these can be viewed using the export command from your shell.
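
A hedged sketch (the openrc arguments may vary with your environment):

    cd ~/devstack
    source openrc admin admin      # load Keystone credentials into the shell
    export | grep OS_              # view the populated OS_* variables
    nova hypervisor-list           # both fedora-odl-1 and fedora-odl-2 should be listed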

Before we can boot VM instances, there is one more minor configuration difference between the Fed19 and Fed20 VM.

If you are using the Fedora 19 VM,

If you are using Fedora 20 VM,

Next let's boot a couple of VMs and verify the network overlay is created by ODL/OVSDB. There are lots of ways to boot a VM, but what we tend to use is a one-liner per instance, booting off the default private network that DevStack set up, then booting a second node the same way. A hedged sketch follows; the flavor and image names are assumptions (DevStack typically registers a CirrOS image):
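
    IMAGE=$(nova image-list | awk '/cirros.*uec /{print $2; exit}')
    NET=$(neutron net-list | awk '/ private /{print $2}')
    nova boot --flavor m1.tiny --image $IMAGE --nic net-id=$NET test1
    nova boot --flavor m1.tiny --image $IMAGE --nic net-id=$NET test2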

You can also force a VM to boot on a particular hypervisor using the following (note: this requires an admin role, which is implicitly granted to the admin user).
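
For example, pinning a third instance to the compute node (a sketch reusing the IMAGE and NET lookups from the previous step):

    nova boot --flavor m1.tiny --image $IMAGE --nic net-id=$NET \
        --availability-zone nova:fedora-odl-2 test3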

View the state of the VMs with the following:
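
For example:

    nova list    # each instance should reach ACTIVE status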

You can also look directly at Libvirt using Virsh. This is handy for determining where a host is located:
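
For example, on each node:

    sudo virsh list    # shows the instances the local hypervisor is running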

From here let's make sure we can ping the endpoints. For this I just grab a qdhcp or qrouter namespace from the shell, which provides an L3 source to ping the VMs. These namespaces will only exist on the controller, or wherever you are running those services in your cloud.
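
A sketch; the namespace UUID and the guest IP are placeholders you read from your own environment:

    sudo ip netns                                      # list the qdhcp-*/qrouter-* namespaces
    sudo ip netns exec qdhcp-<net-uuid> ping 10.0.0.3  # ping a VM from an L3 source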

Verify the OF13 flowmods (short for “flow modifications”; also referred to as “flow rules”, “flow tables”, “forwarding tables”, or whatever you think it should be called).
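
This mirrors the ovls alias shared in the comments below:

    sudo ovs-ofctl -O OpenFlow13 dump-flows br-int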

You can also define new networks with encaps of VXLAN or GRE along with specifying the segmentation ID. In this case GRE:
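
(A hedged sketch using the Neutron provider extensions; names, segmentation IDs, and prefixes are illustrative, and the provider attributes require admin credentials.)

    neutron net-create gre-net1 --provider:network_type gre --provider:segmentation_id 1300
    neutron subnet-create gre-net1 10.100.1.0/24 --name gre-subnet1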

And then boot those instances using those networks:
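
(A sketch assuming the gre-net1 network above and the IMAGE lookup from earlier.)

    NET=$(neutron net-list | awk '/gre-net1/{print $2}')
    nova boot --flavor m1.tiny --image $IMAGE --nic net-id=$NET gre-vm1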

Create Multiple Network Types, GRE and VXLAN

Suppose we want to create some hosts in an overlay using the VXLAN encap with specified segmentation IDs (VNIs):
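
(Again a hedged sketch; names, VNIs, and prefixes are illustrative.)

    neutron net-create vxlan-net1 --provider:network_type vxlan --provider:segmentation_id 1600
    neutron subnet-create vxlan-net1 10.200.1.0/24 --name vxlan-subnet1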

Next take a look at the networks which were just created.
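
For example:

    neutron net-list
    neutron net-show vxlan-net1    # name assumes the sketch above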

Now boot the VMs.
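
A sketch, again with illustrative names:

    NET=$(neutron net-list | awk '/vxlan-net1/{print $2}')
    nova boot --flavor m1.tiny --image $IMAGE --nic net-id=$NET vxlan-vm1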

You can pull up the Horizon UI and take a look at the nodes you have spun up by pointing your web browser at the controller IP (port 80).

GRE/VXLAN Overlays in OpenStack Horizon

Now let's ping one of the hosts we just created to verify it is functional:

Now create three new Neutron networks using the GRE encapsulation. Depending on your VM memory, you always run the risk of blowing up your VM with too many guest VMs.
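
A hedged loop sketch (segmentation IDs and prefixes are illustrative):

    for i in 1 2 3; do
      neutron net-create gre-net$i --provider:network_type gre --provider:segmentation_id 130$i
      neutron subnet-create gre-net$i 10.30.$i.0/24 --name gre-subnet$i
    done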

Here is what the OVS config looks like. Remember, the tunnel ID is set using the OpenFlow OXM logical-port metadata field OFPXMT_OFB_TUNNEL_ID, implemented in OpenFlow v1.3.

Neutron mappings from the Neutron client output:

Next take a look at the Open vSwitch configuration. Worthy of note: the tunnel IPv4 src/dest endpoints are defined using OVSDB, but the tunnel ID is set by the OpenFlow flowmod via key=flow, which tells OVSDB to look for the tunnel ID in the flowmod. There is a similar concept for the IPv4 tunnel source/destination using the Nicira extensions NXM_NX_TUN_IPV4_SRC and NXM_NX_TUN_IPV4_DST, implemented in OVS 2.0. The NXM code points are referenced in the OF v1.3 specification, but it seems unsettled whether the ONF will handle tunnel operations with OF-Config or via flowmods such as the NXM references. The NXM code points are defined in the ODL openflowjava project, which implements the library model for OF v1.3, and would just need to be plumbed through the MD-SAL convertor.

And then the OF v1.3 flowmods:

For more on TEPs, please see a nice document authored by Ben Pfaff, who needs no introduction; it can be found here.

Next take a look at the flowmods. We simply break the pipeline into three tables: a classifier, egress, and ingress. Over the next six months we will be adding services into the pipeline for a much more complete implementation. We are looking for user contributions to the roadmap and, even better, code pushed upstream as the project continues to grow.

Lastly, if you want to force availability zones from, say, the “demo” UID, you can add the admin role to different UIDs using the following Keystone client calls.
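
A sketch using the Keystone v2 client of that era; the user and tenant names are illustrative:

    keystone user-role-add --user demo --role admin --tenant demo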

Unstack and Cleanup DevStack

Getting used to this setup will take trial and error. Use the following to tear down the stack and reset the VM to its pre-stack state.

Running unstack.sh will kill the stack. It doesn't hurt to look at the OVS config and make sure all bridges have been deleted.
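
For example:

    cd ~/devstack && ./unstack.sh
    sudo ovs-vsctl show    # confirm the bridges are gone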

A slightly lame cleanup, but handy nonetheless: run a few commands to ensure the stack was effectively torn down. Paste the following to create a shell script called ./reallyunstack.sh. Madhu and I always laugh about this one, but it can help level-set the platform for debugging:
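
(The original script contents were not preserved in this copy of the post; the following is an illustrative sketch of that style of belt-and-braces cleanup, not the original.)

    #!/usr/bin/env bash
    # reallyunstack.sh: illustrative cleanup after unstack.sh
    sudo ovs-vsctl del-manager                        # drop the OVSDB manager target
    for br in $(sudo ovs-vsctl list-br); do
      sudo ovs-vsctl del-br $br                       # remove any leftover bridges
    done
    for ns in $(sudo ip netns | awk '{print $1}'); do
      sudo ip netns delete $ns                        # remove qdhcp/qrouter namespaces
    done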

That's all for now. I am on my way back from the OpenDaylight Summit and have to catch a redeye. My cycles are a wreck at the moment for keeping up with comments, so if you have any issues that require attention please shoot them to the ovsdb-dev listserv at OpenDaylight or, even better, jump on the IRC channel at #opendaylight-ovsdb on irc.freenode.net. We are driving hard over the next few months for the next release, so please be patient with bugs and our lack of cycles at times. Cloud infrastructures are complex; troubleshooting is the best way to learn the various frameworks. Hopefully more folks will get to hacking and join the community to help grow the project and one another. Concrete is what matters now; plenty of time for talk later.

Thanks much to the relentless work from the key guys on the coding and integration effort:

  • Madhu Venugopal (Red Hat) @MadhuVenugopal
  • Kyle Mestery (Cisco) @mestery
  • Ryan Moats (IBM), if he would ever get on Twitterz, ahem 🙂

Thanks to all the folks who contributed, and certainly give a follow to some of the guys on Twitter who directly or indirectly helped make this happen (out of time to list them all): @kernelcdub (Red Hat), @dave_tucker (HP), @edwarnicke (Cisco), Arvind (Cisco), @FlorianOtel (HP), @ashw7n, @evanzeller, @alagalah (Nexusis), Bachman, Meo, Hague, DMM, Anees, Dixon, phrobb, and of course thanks to the creators of OVS, @mcasado and @Ben_Pfaff, and the rest of the OVS team, without whom we would not be able to do a project such as this.

Thanks for stopping by!

About the Author

Brent Salisbury: I have over 20 years of experience wearing various hats, from network engineer, architect, and ops to software engineer. More at Brent's LinkedIn. View all posts by Brent Salisbury →

  1. Te Yen Liu (02-10-2014)


    Hi Brent,
    Great article. I am curious about your picture as the url link:
    http://networkstatic.net/wp-content/uploads/2014/02/Overlay-OpenDaylight-OVSDB-OpenFlow.png
    You mentioned that Hardware TEP and Hypervisor TEP, and looks like the hardware ( switch ) needs to support OVSDB and offload the tunneling effort from hypervisor. Could you give more info about how does these work? Thanks a lot.

    • Madhu Venugopal (02-15-2014)


      http://openvswitch.org/docs/vtep.5.pdf is the Hardware VTEP schema that is implemented by the supported hardware switches. This schema specifies relations that a VTEP can use to integrate physical ports into logical switches maintained by a network virtualization controller like OpenDaylight. VXLAN Gateway is the primary use-case for this schema.

  2. Scott Richmond (02-10-2014)


    Is there any way to reduce the amount of RAM the Openstack+ODL Controller takes up? 4GB per VM slays most laptops with 8GB RAM. Would be nice to be able to prototype and dev with slightly less (2GB per VM).
    ODL gets killed by the kernel if the controller VM is run with anything less than 4GB, so I assume there is an option there to stop openstack or ODL from utilizing so much?

    • Madhu Venugopal (02-15-2014)


      We decided to wrap both the Devstack controller and ODL controller within the same VM for simplicity. You can instead run the ODL Controller in your native machine. Please refer to http://www.opendaylight.org/software/downloads to download the Virtualization edition and use the installation guide.
      With this, you should be able to run the devstack controller with 2gb memory. But you may have to increase it as you add more compute nodes to your demo setup. Try it out and let us know.

  3. Scott Richmond (02-10-2014)


    This tutorial doesn’t appear to work at all for me. Setting up the controller seems to go fine – I can run the ODL script and devstack stacks fine. I can see the correct ports listening. Both the compute and controller can ping each other on the 172.16.86.0 network
    However when I attempt to run the compute node it fails to talk to the controller DB:
    2014-02-11 10:29:33 Initializing OpenDaylight
    2014-02-11 10:29:33 +++ sudo ovs-vsctl get Open_vSwitch . _uuid
    2014-02-11 10:29:33 2014-02-11T10:29:28Z|00001|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt failed (No such file or directory)
    2014-02-11 10:29:33 ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
    2014-02-11 10:29:33 +++ grep url /etc/neutron/plugins/ml2/ml2_conf.ini
    2014-02-11 10:29:33 +++ sed -e ‘s/.*\/\///g’ -e ‘s/\:.*//g’
    2014-02-11 10:29:33 ++ ODL_MGR_IP=172.16.86.129
    2014-02-11 10:29:33 ++ ‘[‘ ” == ” ‘]’
    2014-02-11 10:29:33 ++ ODL_MGR_PORT=6640
    2014-02-11 10:29:33 ++ sudo ovs-vsctl set-manager tcp:172.16.86.129:6640
    2014-02-11 10:29:33 2014-02-11T10:29:28Z|00002|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt failed (No such file or directory)
    2014-02-11 10:29:33 ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
    2014-02-11 10:29:33 +++ failed
    2014-02-11 10:29:33 +++ local r=1
    2014-02-11 10:29:33 ++++ jobs -p
    2014-02-11 10:29:33 +++ kill
    2014-02-11 10:29:33 +++ set +o xtrace
    2014-02-11 10:29:33 stack.sh failed: full log in stack.sh.log.2014-02-11-102851

    • Brent Salisbury (02-18-2014)


      Hi Scott, it looks like you do not have OVS running. Check the instructions for the part that says we recommend starting Open vSwitch explicitly rather than waiting on DevStack to start it for you.

      The command is:
      sudo /sbin/service openvswitch start

      Later,
      -Brent


  4. In Fedora 20, we need to enable postgresql
    disable_service mysql
    enable_service postgresql

    Also I am trying to replicate similar setup using Packstack and Fedora supports MariaDB.

    Thanks,
    Shankar Ganesh P.J

    • Brent Salisbury (03-16-2014)


      Great to know, Shankar; we will edit the image next time we update it. Ideally we don't need images. I hate having to spin them up, but it seems the float on dependencies is not taken into consideration in QA. I hope to see Packstack address this for the mere mortals just trying to use OpenStack.

      Cheers and thanks for the feedback.
      -Brent

  5. jenny (04-05-2014)


    Hai, my neutron fail to start. OVS and ODL controller have running.

    Waiting for Neutron to start…
    2014-04-05 05:27:10 [Call Trace]
    2014-04-05 05:27:10 ./stack.sh:1108:start_neutron_service_and_check
    2014-04-05 05:27:10 /home/fedora/devstack/lib/neutron:464:die
    2014-04-05 05:27:10 [ERROR] /home/fedora/devstack/lib/neutron:464 Neutron did not start

    when I check this: lsof -iTCP | grep 66
    java 1640 fedora 78u IPv6 23193 0t0 TCP *:6653 (LISTEN)
    java 1640 fedora 96u IPv6 23198 0t0 TCP *:6633 (LISTEN)

    6640 is not there

    thx

    My neutron did not start? I dont know why

    • Brent Salisbury (04-07-2014)


      Hi Jenny,
      Take a look at OSGI and make sure the OVSDB modules are running like so:

      osgi> ss ovs
      “Framework is launched.”

      id State Bundle
      127 ACTIVE org.opendaylight.ovsdb_0.5.1.SNAPSHOT
      233 ACTIVE org.opendaylight.ovsdb.neutron_0.5.1.SNAPSHOT
      250 ACTIVE org.opendaylight.ovsdb.northbound_0.5.1.SNAPSHOT

      Best bet for quick help is the ovsdb-dev listserv or the #opendaylight-ovsdb channel on irc.freenode.net.

      Thanks!
      -B

  6. Sid (04-13-2014)


    Hey Brent and Madhu

    Awesome work on the integration of OpenStack and Opendaylight. I am encountering an error that I am not able to determine on internet. It would be great if you can help me out. I have used the Fedora19 image and performed the installation as per the steps written in blog. Everything is up and running, but when I was playing around with the setup I came across this error/bug that I am not able to determine a solution.

    sudo ovs-ofctl show br-int
    2014-04-13T07:38:01Z|00001|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: version negotiation failed (we support version 0x01, peer supports version 0x04)
    ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

    • Brent Salisbury (04-17-2014)


      Hi Sid, you have probably already solved this by now, but just in case what that means is you are running OpenFlow v1.0 client commands against an OF v1.3 flow rules configuration. If you do a “ovsdb-client dump” and look at the Bridge table you will see a column that is specifying the DPID OF version(s). It can be multiple types but not mixed.

      Here are some random aliases that might help:

      ## Fedora ##
      ### OVS Aliases ###
      alias novh='nova hypervisor-list'
      alias novm='nova-manage service list'
      alias ovstart='sudo /usr/share/openvswitch/scripts/ovs-ctl start'
      alias ovs='sudo ovs-vsctl show'
      alias ovsd='sudo ovsdb-client dump'
      alias ovsp='sudo ovs-dpctl show'
      alias ovsf='sudo ovs-ofctl '
      alias logs="sudo journalctl -n 300 --no-pager"
      alias ologs="tail -n 300 /var/log/openvswitch/ovs-vswitchd.log"
      alias vsh="sudo virsh list"
      alias ovap="sudo ovs-appctl fdb/show "
      alias ovapd="sudo ovs-appctl bridge/dump-flows "
      alias dpfl="sudo ovs-dpctl dump-flows "
      alias ovtun="sudo ovs-ofctl dump-flows br-tun"
      alias ovint="sudo ovs-ofctl dump-flows br-int"
      alias ovap="sudo ovs-appctl fdb/show "
      alias ovapd="sudo ovs-appctl bridge/dump-flows "
      alias ovl="sudo ovs-ofctl dump-flows br-int"
      alias dfl="sudo ovs-ofctl -O OpenFlow13 del-flows "
      alias ovls="sudo ovs-ofctl -O OpenFlow13 dump-flows br-int"
      alias dpfl="sudo ovs-dpctl dump-flows "
      alias ofport="sudo ovs-ofctl -O OpenFlow13 dump-ports br-int"
      alias del="sudo ovs-ofctl -O OpenFlow13 del-flows "
      alias delman="sudo ovs-vsctl del-manager"
      alias addman="sudo ovs-vsctl set-manager tcp:172.16.58.1:6640"
      alias lsof6='lsof -P -iTCP -sTCP:LISTEN | grep 66'
      alias vsh="sudo virsh list"
      alias ns="sudo ip netns exec "

  7. Sreenivas Makam (04-17-2014)


    Hi Brent
    As usual, great writeup on Openstack+ODL integration.

    I went through this blog as well as the recording of Kyle's and Madhu's session at the summit. Very useful information. I was able to try this out on Fedora 20 and it works great. It's nice to see network virtualization with OpenStack+ODL working on a single host :)

    Few questions:
    1. I understood that there is only 1 tunnel established between the nodes or hypervisors(1 each for GRE and VXLAN) and I could see it. I see that segmentation id specified while creating network appears as tunnel id when we dump the openflow table. As I understood, we have a separate segmentation id(tunnel id) per subnet. Is that correct?
    2. I see that same subnet can be reused between GRE and VXLAN tunnel. If there are 2 independent tenants who want to use the same IP and use only VXLAN tunnel, is it possible to do this? I tried doing this and I did not see the tunnel for the second segmentation id populated in openflow table. Is this possible to do?

    Thanks
    Sreenivas

  8. vbr (04-24-2014)


    Hi Brent,
    Great blog post. I have been studying it and the videos attached and have found them immensely useful.

    One question: In order to understand how the flows in the flow table accomplish their goal of creating a tunnel, would it be possible for you to explain what each of the ports is and each of the entries in the table? Perhaps a diagram showing the bridge / ports / vm’s etc. so we can see what connects to what and how the flows flow etc?

    Thanks.