Configuring VXLAN and GRE Tunnels on OpenvSwitch

Update: VXLan Encapsulation
VXLan is now upstreamed into the master build. Worth mentioning that this is the VXLAN framing only, not the multicast control-plane functionality. So basically no need to pull from the fork anymore; pull directly from the Open vSwitch master or the 1.10+ tarball (approximately).

I have done a couple of GRE tunnel how-tos using Open vSwitch (OVS). I had been itching to give VXLan a spin in OVS, so why not ferret out someone's tree on GitHub. I believe VXLan is still scheduled to officially release soon in Open vSwitch. So here are the steps for installing and configuring tunnels on Open vSwitch with both VXLan and GRE encapsulations. At the end we will compare the protocols with different MTU sizes. The results were interesting, I think (for a nerd).

I like seeing collaboration between really smart people from different companies, as displayed in the GPL header below.

/*
 * Copyright (c) 2011 Nicira Networks.
 * Copyright (c) 2012 Cisco Systems Inc.
 * Distributed under the terms of the GNU GPL version 2.
 *
 * Significant portions of this file may be copied from parts of the Linux
 * kernel, by Linus Torvalds and others.
 */
Open vSwitch Tunnels GRE and VXLan

Figure 1. Example of how tunnels can be leveraged as overlays.


By the way, I should probably disclaim now that huge Layer 2 networks do not scale and huge Layer 3 networks do. Host count in a broadcast domain/VLAN/network should be kept to a reasonable three-digit number. Cisco is quick to point out OTV as the solution for extending Layer 2 VLANs over WANs. That said, overlays are flat out required to overcome VLAN number limitations and have lots of potential with programmatic orchestration.


Open vSwitch Datapath Forwarding

Figure 2. OVS punts the first packet of a flow to userspace for the forwarding decision and hands subsequent packets in the flow to the datapath in the kernel: slow path for the first packet, then fast path for the rest.


I did some simple iperf tests at the end of the post using MTU values of 1500 and 9000 bytes. The numbers are kind of fun to look at. For a really nice analysis take a look at Martin's post at Network Heresy comparing STT, Linux bridging and GRE. I am going to spin up some VMs on the VXLan tunnel later this week and measure the speeds a little closer to see how GRE and VXLan stack up to one another from hosts using overlays; for now I just ran some iperfs from the hypervisor itself rather than from VMs. OVS also supports CAPWAP encapsulation, which is rather slick. Wonder why we do not hear much about that. I am going to dig in when I get back home from the road later next week.
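For reference, the iperf runs were nothing exotic; roughly the following, where the target is whatever IP you give the island bridge (the 10.x address here is purely illustrative):

```shell
# On the receiving host: start an iperf server
iperf -s

# On the sending host: point at the peer's island-bridge address
# and run a 10-second test with two parallel streams
iperf -c 10.1.1.11 -t 10 -P 2
```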


Open vSwitch VXLan GRE

Figure 3. The lab setup basically has a fake interface up with br1. In the real world, br1 would have VMs tapped into it. The video uses br1 and br2, but that got to be confusing for people, so I changed it to br0 and br1 to match most people's eth0 = br0 NIC naming in ifconfig.

Quick Video of the fast install below.


If you would prefer to install using packages check out this post. It will install an older version that may or may not support VXLAN framing depending on the timeline. It will support GRE.
OpenvSwitch Configure from Packages and Attaching to a Floodlight OpenFlow Controller →


For those familiar with the build, you can just paste the following into your bash shell as root. To walk through the install, skip the following snippet. The installation is extensively documented in the INSTALL file in the root of the tarball. The current and LTS releases are located here.

Open vSwitch System Preparation

This is on two boxes with 64 cores and Broadcom NetXtreme II BCM57810 10G NICs connected to a 10G ToR switch.
Install dependencies
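The dependency list did not survive the formatting here; on a Debian/Ubuntu box the from-source build typically wants something like the following (package names are assumptions, the INSTALL file in the tarball has the authoritative list):

```shell
# Assumed Debian/Ubuntu build dependencies for a from-source OVS build;
# check the INSTALL file for your distro's exact list.
apt-get update
apt-get install -y build-essential autoconf automake libtool \
    pkg-config libssl-dev linux-headers-$(uname -r)
```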


Download the latest Open vSwitch build
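The exact URL and version below are illustrative only; fetch whatever the current release is from openvswitch.org:

```shell
# Download and unpack a release tarball (1.10.0 is an example version)
wget http://openvswitch.org/releases/openvswitch-1.10.0.tar.gz
tar -xzf openvswitch-1.10.0.tar.gz
cd openvswitch-1.10.0
```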


Compile OpenvSwitch From Source
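A typical source build against the running kernel looks roughly like this (a sketch of the INSTALL file's recipe; paths may differ on your system):

```shell
# Build userspace and the kernel datapath module, then install both
./configure --with-linux=/lib/modules/$(uname -r)/build
make
make install
make modules_install
# Load the datapath module (module name in recent trees)
modprobe openvswitch
```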


Initial Open vSwitch Configuration


Start ovsdb-server; this stores the config in a file that persists even across restarts.
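The command block was lost here; the standard first-run sequence from the INSTALL file is roughly the following (paths assume the default /usr/local prefix, and the exact --remote=db:... form varies slightly between OVS versions):

```shell
# Create the persistent configuration database from the shipped schema
mkdir -p /usr/local/etc/openvswitch
ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
    vswitchd/vswitch.ovsschema

# Start the database server on the default unix socket
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
    --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
    --pidfile --detach

# One-time database initialization, then start the switch daemon
ovs-vsctl --no-wait init
ovs-vswitchd --pidfile --detach
```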


*note* “brcompat” is deprecated since the OVS upstreaming. Output should just show “openvswitch” as a loaded kernel module. If it is not there, try loading again and check your path to the kernel module. You shouldn't see brcompat loaded in the kernel modules unless you are running a very old version (*cough* 1.4 on Citrix). Get to know the functionality of network control with an SDN OpenFlow controller, setting up overlays, and one of the most interesting parts of OVS, the configuration database (OVSDB).

At this point you have a functioning vanilla OVS install. Output should look something like this.
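Since the output snippet did not survive the formatting, a few quick checks confirm things are healthy:

```shell
lsmod | grep openvswitch   # the datapath kernel module should be loaded
ovs-vsctl show             # should print the (empty) database config
ovs-vswitchd --version     # confirms the version you just built
```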

Configure Linux Networking

I have one NIC (eth0) on the same LAN segment/network/vlan.
We are attaching eth0 to br0 and applying an IP to that bridge interface.
We are then attaching an IP to br1. br1 is the island that we are building a tunnel between for hosts to connect on. Without the VXLAN tunnel, the two br1 interfaces should not be able to ping one another. Note: this is being set up on the same subnet, but it is important to keep in mind that the VXLAN framing allows the tunnel to be established over disparate networks, e.g. it can be done over the Internet.


Configuration for Host 1
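The original command block was lost in formatting; a reconstruction consistent with Figure 3 would be roughly the following (all addresses are example values):

```shell
# Host 1: br0 carries the underlay/tunnel traffic, br1 is the island
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
ifconfig eth0 0                                    # strip the IP off the NIC...
ifconfig br0 172.16.1.11 netmask 255.255.255.0 up  # ...and put it on the bridge

ovs-vsctl add-br br1
ifconfig br1 10.1.1.11 netmask 255.255.255.0 up    # island address
```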


Configuration for Host 2
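Host 2 mirrors Host 1 with its own addresses (again, example values):

```shell
# Host 2: same layout, different addresses
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
ifconfig eth0 0
ifconfig br0 172.16.1.12 netmask 255.255.255.0 up

ovs-vsctl add-br br1
ifconfig br1 10.1.1.12 netmask 255.255.255.0 up
```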


If you have issues getting the bridge built you may need to kill the OVS processes and restart them depending on your step order.
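A sketch of that restart, assuming the default socket locations from a source install:

```shell
# Ask each daemon to exit cleanly, then start ovsdb-server and
# ovs-vswitchd again the same way they were started during the install
ovs-appctl -t ovs-vswitchd exit
ovs-appctl -t ovsdb-server exit
```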


Your Linux routing table and ifconfig should now look something like this:


Troubleshooting Open vSwitch Installation

Someone sent me an error that looked like the one below; restarting the OVS processes will clear it up:


Build a GRE Tunnel in Open vSwitch
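With the bridges up on both hosts, the GRE port itself is one command per host, pointing at the peer's underlay (br0) address. Interface names and IPs below are the example values, not required ones:

```shell
# Host 1 -> Host 2
ovs-vsctl add-port br1 gre0 -- set interface gre0 \
    type=gre options:remote_ip=172.16.1.12

# Host 2 -> Host 1
ovs-vsctl add-port br1 gre0 -- set interface gre0 \
    type=gre options:remote_ip=172.16.1.11

# The island (br1) addresses should now ping across the tunnel
ping -c 3 10.1.1.12
```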


Adjust MTUs with % ifconfig <interface e.g. br0, eth0, etc.> mtu 9000
#(Remember your physical switch needs to support an MTU >= your interface MTU.)

openvswitch gre tunnel

Figure 4. Output of GRE Tunnels running with 1500 byte MTUs.

openvswitch gre tunnel jumbo

Figure 5. Output of GRE Tunnels running with 9000 byte MTUs.


Now delete the GRE ports
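On both hosts:

```shell
# Remove the GRE port from the island bridge
ovs-vsctl del-port br1 gre0
```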

Configure the VXLan Tunnel

The difference here is the “type” specifying the tunnel encapsulation.
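Identical to the GRE commands apart from type=vxlan (example addresses as before; a VNI can also be set with options:key=<vni> if you want segmented tunnels):

```shell
# Host 1 -> Host 2
ovs-vsctl add-port br1 vx0 -- set interface vx0 \
    type=vxlan options:remote_ip=172.16.1.12

# Host 2 -> Host 1
ovs-vsctl add-port br1 vx0 -- set interface vx0 \
    type=vxlan options:remote_ip=172.16.1.11
```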


openvswitch VXLan tunnel vxlan mtu 1500

Figure 6. Output of VXLan Tunnels running with 1500 byte MTUs.

openvswitch gre tunnel vxlan

Figure 7. Output of VXLan Tunnels running with 9000 byte MTUs.


Thanks for stopping by.


About the Author

Brent Salisbury. I have over 15 years of experience wearing various hats: network engineer, architect, devops and software engineer. I currently have the pleasure of working at the company that develops my favorite software I have ever used, Docker. My comments here are my personal thoughts and opinions. More at Brent's Bio. View all posts by Brent Salisbury →

  1. Leslie 07-03-2012


    Great job! You are so dedicated and your passion for what you are doing is inspiring!

  2. Dmitri Kalintsev 07-04-2012


    Hi Brent,

    Looks like the config commands under the “Add the VXLan tunnel” section are incorrect (still refer to GRE, when they should be doing vxlan). 🙂

  3. Brent Salisbury 07-04-2012


    Thanks Dmitri !!! I double checked it this time lol.

  4. Nerijus 07-14-2012


    I think there is a mistake in the “Configure Linux Network” part. It must be br2 and eth2 above the “#ifconfig eth2 0” line.

  5. Brent Salisbury 07-15-2012


    Thanks Nerijus. Not sure why I had eth2 in there at all, since the lab only uses one physical interface per host other than the bridges. Br1 carries the traffic to the two TEPs and Br2 is the virtual island where the VMs would reside, which is entirely hypervisor software networking. Thanks for the help and for catching that.
    Cheers!

  6. Evan 07-17-2012


    Thank you so much for these examples. It makes picking up new technologies so much easier!

  7. Brent Salisbury 07-17-2012


    My pleasure Evan! Thanks for the feedback, never sure if it makes any sense or not when I stare at it for a few hours putting it together lol. Let me know if you have any issues.
    Cheers!

  8. ns murthy 07-18-2012


    I have tried testing VXLAN on OVS with my own implementation of VXLAN.

    I could receive the ping request from OVS and could send back the proper ping reply from my side with expected port of 4341.

    But OVS is sending an ICMP destination port unreachable message. Is it required to configure anything on the OVS side for this?
    Please help.

  9. Brent Salisbury 07-19-2012


    Hi NS. Paste in ‘ovs-vsctl show’, ‘ifconfig’ and ‘route -n’ and we can take a look.

    I will try and get a chance to verify the current snapshot on Mestery’s GitHub for you and do a screen cap of it.

    • ns murthy 07-27-2012


      please find the snapshots for “ovs-vsctl show”,”ifconfig”,”route -n”

      As I said earlier, “OVS is sending ICMP destination port unreachable message” in response to my ping reply. OVS is searching a hash table and is not able to find the port it received in the VXLAN message. It seems to me that some configuration is missing for the vxlan tunnel regarding the destination port.

      root@user-desktop:~# ovs-vsctl show
      c354223a-a435-403c-b33b-e045ce0f9c58
      Bridge “br1”
      Port “eth2”
      Interface “eth2”
      Port “br1”
      Interface “br1”
      type: internal
      Bridge “br2”
      Port “vx1”
      Interface “vx1″
      type: vxlan
      options: {out_key=”310″, remote_ip=”192.168.4.1”}
      Port “br2”
      Interface “br2”
      type: internal
      ———————————————-
      root@user-desktop:~# ifconfig
      br1 Link encap:Ethernet HWaddr 00:11:95:fc:8d:62
      inet addr:192.168.4.2 Bcast:192.168.4.255 Mask:255.255.255.0
      inet6 addr: fe80::211:95ff:fefc:8d62/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets:6 errors:0 dropped:0 overruns:0 frame:0
      TX packets:115 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:448 (448.0 B) TX bytes:22940 (22.9 KB)

      br2 Link encap:Ethernet HWaddr 22:28:bc:b6:64:4a
      inet addr:192.168.6.22 Bcast:192.168.6.255 Mask:255.255.255.0
      inet6 addr: fe80::2028:bcff:feb6:644a/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:55 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:0 (0.0 B) TX bytes:9962 (9.9 KB)

      eth1 Link encap:Ethernet HWaddr 00:16:76:4b:91:dc
      inet addr:192.168.90.64 Bcast:192.168.91.255 Mask:255.255.254.0
      inet6 addr: fe80::216:76ff:fe4b:91dc/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets:346769 errors:0 dropped:0 overruns:0 frame:0
      TX packets:28058 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:78130470 (78.1 MB) TX bytes:4896867 (4.8 MB)

      eth2 Link encap:Ethernet HWaddr 00:11:95:fc:8d:62
      inet6 addr: fe80::211:95ff:fefc:8d62/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets:1144 errors:0 dropped:0 overruns:0 frame:0
      TX packets:2959 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:96602 (96.6 KB) TX bytes:3573286 (3.5 MB)
      Interrupt:22 Base address:0xc000

      lo Link encap:Local Loopback
      inet addr:127.0.0.1 Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING MTU:16436 Metric:1
      RX packets:18 errors:0 dropped:0 overruns:0 frame:0
      TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:1352 (1.3 KB) TX bytes:1352 (1.3 KB)

      ———————————————-
      root@user-desktop:~# route -n
      Kernel IP routing table
      Destination Gateway Genmask Flags Metric Ref Use Iface
      0.0.0.0 192.168.91.254 0.0.0.0 UG 100 0 0 eth1
      192.168.90.0 0.0.0.0 255.255.254.0 U 0 0 0 eth1
      192.168.4.0 0.0.0.0 255.255.255.0 U 0 0 0 br1
      192.168.6.0 0.0.0.0 255.255.255.0 U 0 0 0 br2

      • Farrukh Aftab 08-31-2012


        I am actually having the same problem as you. I hooked Wireshark up to my interfaces to check, and it was quite strange that vx1 was receiving and replying to the ARP packet, but on the terminal it kept on showing the same message.

        Would love to know if you've found a solution to this.

      • xuejin 07-11-2013


        I am having the same problem as you. Would love to know if you've found a solution to this.

  10. cole 08-03-2012


    Hi Brent
    have you tried programming the VXLAN pipeline in OVS using Floodlight or any other OpenFlow 1.0 controller?

    Can you shed some light on how these 1.0 rules might look, assuming that the ARPs are statically provisioned in each VM (learning not needed) for the sake of brevity?

    You could consider the same example wherein you have shown the usage of VXLAN using static OVS commands.

    cheers
    cole

  11. Farrukh Aftab 08-30-2012


    So glad that I stumbled upon your blog. It is an excellent effort Brent. I am a newbie to SDN & tunneling. Have been studying for a few weeks only and I must say, your blog is very very helpful

    I wanted to ask if you know what the difference is between the VXLAN patch released by Ben in Oct. 2011 and the one you took from Git. I have been trying to implement that patch by Ben for some time and am facing quite a lot of difficulties

    • Brent Salisbury 08-31-2012


      Hi Farrukh, I am glad that helped, that's great. Gosh, I don't have a clue what the difference between then and now is, other than this effort was headed up by @mestery from Cisco in conjunction with Nicira and, I am guessing, Ben. The listserv might be your best bet.
      Cheers,
      -Brent

      • Farrukh Aftab 08-31-2012


        Thank you for that very quick reply. Much appreciated.

        Mestery has cleaned up the code a little bit, made a few new header files. I am trying to implement his patch now. Thanks for the help. =)

        Regards.

  12. kedar 09-12-2012


    You rock Brent.

    have configured gre tunnel using the steps above, but not seeing any traffic on iperf. can you tell me

  13. xsited 09-12-2012


    Thanks. Very helpful.

    What is the current state of “STT code to play with? “

    • Brent Salisbury 09-13-2012


      Was just thinking that myself. Last time I pestered they said they would know in 4-6 weeks. It's been 6 🙂 I hope VMware doesn't kill it. I think offloading on the NIC is a great idea.

      • Anna 09-24-2012


        Hi Brent,

        Good post! Thanks!
        Do you know if it is possible to run OVS in one of the VMs without using a nested VM architecture?

        • Brent Salisbury 09-25-2012


          Hi Anna, you sure can. Matter of fact, that's how I would build a tunnel today from a public cloud IaaS provider if I needed to tunnel back from there for whatever reason. Check out the OVS install instructions in this post.
          http://networkstatic.net/public-cloud-network-as-a-service/
          It's such a good question and point you make that I should articulate it more. Just add your IP to the br-int interface you make and that will use OVS as its address. From there you can add another bridge that acts as a dumb interface, or really an object you can assign an IP to, and point your default route for another NIC, VIF, loopback, etc. to it. Let me know if you have problems before I get a post about it up later this week and I can elaborate better.
          Thanks for stopping by and the great question!

          • IP 11-27-2012


            Hello Brent, thanks very much for your informative post. I am facing a few issues when setting up the GRE tunnel between two VMs running Open vSwitch 1.7.1 (not the git branch you are referring to here). When I ping the br2 10.1.2.x IP address of the other VM, I see that the ARP messages encapsulated in GRE are reaching the other VM on the br1 interface, but I don't see any responses from the other node with the MAC address of the local br2 interface. Can you suggest what to try next? Thanks, IP

  14. Radhika 10-21-2012


    Hi Brent,

    Thanks for a great post. I was able to get VXLAN tunnels between VMs running. The throughput I am seeing, however, is 6.1 Gbps with 9000 MTU on the OVS bridge (I should be seeing closer to 10 Gbps). The openvswitch version I am using is the same, which makes me wonder if this has something to do with the kernel that I am running. Could you please give me some more details about your environment?

    • Wenfei Wu 11-06-2012


      Yes. I have the same question. I am using 2 Dell T5500 Desktop. 8-core Intel Xeon 2.27GHz CPU. With Ubuntu 10.04, kernel 2.6.38.3. I connect 2 physical servers with a cross wire directly.
      I use ovs-vxlan in https://github.com/mestery/ovs-vxlan
      The throughput is only a bit more than half of the result in all cases in this blog.
      What may be the other things that influence the throughput?
      TCP variant?

  15. Brent Salisbury 11-06-2012


    Hi guys, Radhika, apologies for missing your question. Here is some info. If needed I can compile from the trunk and rerun the benchmarks for you guys. 2.6 is prior to OVS getting upstreamed in v3.3 so that could certainly have something to do with it.
    Thanks,
    -Brent

    root@openstack3:~# uname -a
    Linux openstack3 3.2.0-25-generic #40-Ubuntu SMP Wed May 23 20:30:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
    root@openstack3:~# lscpu
    Architecture: x86_64
    CPU op-mode(s): 32-bit, 64-bit
    Byte Order: Little Endian
    CPU(s): 32
    On-line CPU(s) list: 0-31
    Thread(s) per core: 2
    Core(s) per socket: 8
    Socket(s): 2
    NUMA node(s): 2
    Vendor ID: GenuineIntel
    CPU family: 6
    Model: 45
    Stepping: 7
    CPU MHz: 2593.658
    BogoMIPS: 5184.28
    Virtualization: VT-x
    L1d cache: 32K
    L1i cache: 32K
    L2 cache: 256K
    L3 cache: 20480K
    NUMA node0 CPU(s): 0-7,16-23
    NUMA node1 CPU(s): 8-15,24-31
    root@openstack3:~# lspci | grep “Gig”
    03:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
    03:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
    root@openstack3:~# apt-get install netperf
    root@openstack3:~# netperf -t TCP_STREAM -H 127.0.0.1 -l 10
    MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 127.0.0.1 (127.0.0.1) port 0 AF_INET : demo
    Recv Send Send
    Socket Socket Message Elapsed
    Size Size Size Time Throughput
    bytes bytes bytes secs. 10^6bits/sec

    87380 16384 16384 10.00 21057.26

    • Radhika 11-09-2012


      Hi Brent,

      Thanks much for getting back on this! Yes, if you could share with us the same benchmarks against the latest ovs-vxlan branch, I’d be very grateful.

      Here are my set up details:
      uname -r
      3.5.0-17-generic

      modinfo openvswitch
      filename: /lib/modules/3.5.0-17-generic/kernel/net/openvswitch/openvswitch.ko
      version: 1.9.90
      license: GPL
      description: Open vSwitch switching datapath
      srcversion: B833F5F3A240867927A7A45
      depends:
      vermagic: 3.5.0-17-generic SMP mod_unload modversions

      Open vSwitch bridge info (MTU 9000):
      ifconfig ovsbr
      ovsbr Link encap:Ethernet HWaddr ae:56:8b:51:d6:4a
      inet addr:10.0.113.3 Bcast:10.0.255.255 Mask:255.255.0.0
      inet6 addr: fe80::ac56:8bff:fe51:d64a/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
      RX packets:5199359 errors:0 dropped:0 overruns:0 frame:0
      TX packets:2264022 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:37631493830 (37.6 GB) TX bytes:17375858808 (17.3 GB)

      (other side is similar)

      Physical link info (MTU 9000):
      ifconfig p1p1
      p1p1 Link encap:Ethernet HWaddr 90:e2:ba:26:88:c4
      inet addr:15.0.113.3 Bcast:15.0.255.255 Mask:255.255.0.0
      inet6 addr: fe80::92e2:baff:fe26:88c4/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
      RX packets:4485908 errors:0 dropped:0 overruns:0 frame:0
      TX packets:1993574 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:19248771222 (19.2 GB) TX bytes:5307066196 (5.3 GB)
      (other side is similar)

      Rx throughput (5.07 Gbps):
      netperf -H 10.0.113.3 -f g
      MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.113.3 () port 0 AF_INET
      Recv Send Send
      Socket Socket Message Elapsed
      Size Size Size Time Throughput
      bytes bytes bytes secs. 10^9bits/sec

      87380 65536 65536 10.00 5.07

      Tx throughput (4.11 Gbps):
      netperf -H 10.0.101.3 -f g
      MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.101.3 () port 0 AF_INET
      Recv Send Send
      Socket Socket Message Elapsed
      Size Size Size Time Throughput
      bytes bytes bytes secs. 10^9bits/sec

      87380 65536 65536 10.00 4.11

      Physical link throughput:
      Rx (9.9Gbps):
      netperf -H 15.0.113.3 -f g
      MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 15.0.113.3 () port 0 AF_INET
      Recv Send Send
      Socket Socket Message Elapsed
      Size Size Size Time Throughput
      bytes bytes bytes secs. 10^9bits/sec

      87380 65536 65536 10.01 9.90

      Tx(8.28 Gbps)
      netperf -H 15.0.101.3 -f g
      MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 15.0.101.3 () port 0 AF_INET
      Recv Send Send
      Socket Socket Message Elapsed
      Size Size Size Time Throughput
      bytes bytes bytes secs. 10^9bits/sec

      87380 65536 65536 10.00 8.28

      Since we really cannot play around much with offload settings of Open vSwitch, I do not know where to get back the missing throughput. Your insights would be very helpful. I've also logged an issue at https://github.com/mestery/ovs-vxlan/issues/4 .

      • Radhika 11-09-2012


        Also, I am running Ubuntu 12.10 on my server. Thanks much again!

  16. Ghalib 11-07-2012


    Hi Guys,
    I want to implement GTP tunneling on OVS. I have my own C++ implementation of GTP but don't know how to integrate it into OVS and then test it.

    • Farrukh Aftab 11-08-2012


      You may want to start by looking at tunnel.h first. That would help you define a new protocol in OVS. I do not know much detail about GTP, i.e. whether it is based on TCP, UDP, etc., or uses some other underlying protocol, so I don't know where to point you next. Correct me if I am wrong here, but I do not think I have seen any cpp files in OVS. It is based on C/Python.

  17. Brent Salisbury 11-12-2012


    Radhika and Wenfei, I finally got a chance to lab this up tonight and I am hitting a snag somewhere. Hopefully in the next day or two I will get a chance to troubleshoot it. It's 1am EST and I have meetings in the morning, so I need to hit the sack. I will get the perf tests posted this week though.

    Farrukh, thanks for helping Ghalib. Ghalib, have you taken a look at the GitHub? I was going through the code a couple weeks ago, but do not remember off the top of my head where the protocol was integrated. I will look this week.

    Thanks guys,
    -Brent

  18. Brent Salisbury 11-14-2012


    All numbers are w/ 9000byte MTU.

    No tunnel:
    [ 5] local 172.31.246.6 port 5001 connected with 172.31.246.7 port 55769
    [ 5] 0.0-10.0 sec 11.5 GBytes 9.90 Gbits/sec
    [ 4] local 172.31.246.6 port 5001 connected with 172.31.246.7 port 55770
    [ 4] 0.0-10.0 sec 11.5 GBytes 9.90 Gbits/sec

    GRE
    [ 5] local 10.1.2.11 port 5001 connected with 10.1.2.12 port 35150
    [ 5] 0.0-10.0 sec 1.83 GBytes 1.57 Gbits/sec
    [ 4] local 10.1.2.11 port 5001 connected with 10.1.2.12 port 35151
    [ 4] 0.0-10.0 sec 1.94 GBytes 1.66 Gbits/sec

    VXLan
    [ 4] local 10.1.2.11 port 5001 connected with 10.1.2.12 port 35161
    [ ID] Interval Transfer Bandwidth
    [ 4] 0.0-10.1 sec 1.27 GBytes 1.08 Gbits/sec
    [ 5] local 10.1.2.11 port 5001 connected with 10.1.2.12 port 35163
    [ 5] 0.0-10.0 sec 1.13 GBytes 968 Mbits/sec
    [ 4] local 10.1.2.11 port 5001 connected with 10.1.2.12 port 35164
    [ 4] 0.0-10.1 sec 1.08 GBytes 919 Mbits/sec

    sudo ifconfig eth0 mtu 9000
    sudo ifconfig br2 mtu 9000

    Noticed that even though all of my ports are set with an MTU of 9k, I cannot get a frame >1500 bytes through.

    root@openstack3:~/ovs-vxlan# ifconfig
    br1 Link encap:Ethernet HWaddr e8:39:35:c4:92:20
    inet addr:172.31.246.7 Bcast:172.31.246.255 Mask:255.255.255.0
    inet6 addr: fe80::ea39:35ff:fec4:9220/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
    RX packets:14994643 errors:0 dropped:72 overruns:0 frame:0
    TX packets:30261402 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:2088400199 (2.0 GB) TX bytes:135203437061 (135.2 GB)

    br2 Link encap:Ethernet HWaddr 3e:6c:21:50:78:4d
    inet addr:10.1.2.12 Bcast:10.1.2.255 Mask:255.255.255.0
    inet6 addr: fe80::3c6c:21ff:fe50:784d/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
    RX packets:11451582 errors:0 dropped:0 overruns:0 frame:0
    TX packets:859880 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:756727184 (756.7 MB) TX bytes:32818348274 (32.8 GB)

    eth0 Link encap:Ethernet HWaddr e8:39:35:c4:92:20
    inet6 addr: fe80::ea39:35ff:fec4:9220/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
    RX packets:12165982 errors:0 dropped:0 overruns:0 frame:0
    TX packets:28392717 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:1808756680 (1.8 GB) TX bytes:126984648851 (126.9 GB)
    Interrupt:32 Memory:f6000000-f67fffff

    eth1 Link encap:Ethernet HWaddr e8:39:35:c4:92:24
    inet6 addr: fe80::ea39:35ff:fec4:9224/64 Scope:Link
    UP BROADCAST RUNNING PROMISC MULTICAST MTU:9000 Metric:1
    RX packets:49 errors:0 dropped:160 overruns:0 frame:0
    TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:6222 (6.2 KB) TX bytes:0 (0.0 B)
    Interrupt:36 Memory:f4800000-f4ffffff

    lo Link encap:Local Loopback

    root@openstack3:~/ovs-vxlan# ping -s 8192 -M do 10.1.2.11
    PING 10.1.2.11 (10.1.2.11) 8192(8220) bytes of data.
    From 10.1.2.12 icmp_seq=1 Frag needed and DF set (mtu = 1500)
    From 10.1.2.12 icmp_seq=1 Frag needed and DF set (mtu = 1500)

    I will try and find some time to tshoot this weekend. My week is shot and I need to rack out tonight. Weird that perf is this bad. Did see mestery tweeted the VXLan patch is submitted for upstream, so I may wait until then and check perf.

    Thanks,
    -Brent

    • Radhika 11-15-2012


      Thanks for confirming, Brent! The perf numbers do look pretty bad. Thanks for the heads-up on the tweet. Looking forward to checking it out when it gets in.

      Do let us know if you find some settings that change the perf numbers whenever you get a chance.

      • Kyle Mestery 11-16-2012


        I think I know what the performance problem may be. The patches I submitted upstream do not suffer performance issues compared to GRE. I’ll update my github with the same soon, once you see that commit, can you pull them and try them out?

        Thanks,
        Kyle

  19. Rehan Ahmed 11-15-2012


    Hello Guys,

    When I am using a GRE tunnel in OVS, how can I see those messages in Wireshark? I am running GRE and pinging between two PCs but nothing GRE-related is shown in Wireshark.

    Just the ping request and response are shown there.

  20. Steve 11-15-2012


    Hi Brent,

    Thanks for all these great posts, they've been very useful for setting up some testing environments.
    I tried to set up a scenario with vxlan connecting 3 different hypervisors. As I haven't found any example of how to configure it, I've tried the following implementation, which is not working as expected. Do you have an example configuration that you can share?

    This is the one I’ve been trying, on Host 1:

    Bridge br-int
    –Port br-int
    —-Interface br-int
    ——type:internal
    –Port “eth0”
    —-Interface “eth0”
    Bridge “br2”
    –Port “br2”
    —-Interface “br2″
    ——type:internal
    –Port vxlan
    —-Interface vxlan
    ——type:vxlan
    ——options:{remote_ip=”192.168.1.1″}
    –Port vxlan2
    —-Interface vxlan2
    ——type:vxlan
    ——options:{remote_ip=”192.168.1.2.”}

    I’ve read that vxlan on OVS is not implementing multicast. How does the protocol manage it? Sending multi-unicast? Or do we have to insert flow entries manually on each OVS?

    Thanks!!

    • Kyle Mestery 11-16-2012


      Hi Steve:

      Yes, currently there is no multicast support in the VXLAN patches. The plan is that once the patch I’m working on goes upstream we can implement support for multicast learning in userspace.

      Hope that helps!

      Thanks,
      Kyle

      • Steve 11-19-2012


        Hi Kyle,

        Thanks for your answer, looking forward to testing your patch once available! How much time do you reckon until it is released?
        Regarding my second question, do you know how to implement a multi tunnel configuration between different sites?

        Thanks,

        Steve

  21. Raju 11-16-2012


    HI Brent,
    How do I set up STT in OVS? I need some documentation that talks about STT setup in OVS.

    Thanks,
    Raju

  22. Kyle Mestery 11-16-2012


    Anyone having performance problems, please try the latest version of my patches from my github on the vxlan branch and let me know what you see. All perf issues should be fixed I believe.

    Thanks,
    Kyle

    • Radhika 11-19-2012


      Hi Kyle,

      Thanks for the update. I tried the latest branch, but I don’t see an improvement. Perhaps there is some config parameter that I am missing? I’d love to know insights on what are the knobs I could use to change the perf that I am seeing.

      Here are my testbed details: physical links (both ends have 9000 MTU): 15.0.113.3 and 15.0.101.3
      openvswitch with vxlan(both ends have 9000 MTU): 11.0.113.3 and 11.0.101.3

      RX, Baseline (on physical link): 9.9Gbps
      RX, VXLAN(openvswitch): 5.42 Gbps

      TX, Baseline (on physical link): 8.32Gbps
      TX, VXLAN(openvswitch): 4.57 Gbps

      Output of ovs-ofctl show, ovs-dpctl show, ovs-vsctl show:
      Server 1) ovs-ofctl show ovsbr
      OFPT_FEATURES_REPLY (xid=0x1): dpid:00009e713aab6145
      n_tables:254, n_buffers:256
      capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
      actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
      1(vx1): addr:5a:30:46:d7:73:b1
      config: 0
      state: 0
      speed: 0 Mbps now, 0 Mbps max
      LOCAL(ovsbr): addr:9e:71:3a:ab:61:45
      config: 0
      state: 0
      speed: 0 Mbps now, 0 Mbps max
      OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0

      ovs-dpctl show
      system@ovs-system:
      lookups: hit:3044610 missed:30 lost:0
      flows: 0
      port 0: ovs-system (internal)
      port 1: ovsbr (internal)
      port 2: vx1 (vxlan: remote_ip=15.0.101.3)

      ovs-vsctl show
      b7081fed-9be3-4243-893e-94b4b50211d8
      Bridge ovsbr
      Port ovsbr
      Interface ovsbr
      type: internal
      Port “vx1”
      Interface “vx1″
      type: vxlan
      options: {remote_ip=”15.0.101.3”}

      Server 2) ovs-ofctl show ovsbr
      OFPT_FEATURES_REPLY (xid=0x1): dpid:00001eb955408244
      n_tables:254, n_buffers:256
      capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
      actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
      1(vx1): addr:5a:b1:02:dd:b7:15
      config: 0
      state: 0
      speed: 0 Mbps now, 0 Mbps max
      LOCAL(ovsbr): addr:1e:b9:55:40:82:44
      config: 0
      state: 0
      speed: 0 Mbps now, 0 Mbps max
      OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0

      ovs-dpctl show
      system@ovs-system:
      lookups: hit:2794001 missed:30 lost:0
      flows: 0
      port 0: ovs-system (internal)
      port 1: ovsbr (internal)
      port 2: vx1 (vxlan: remote_ip=15.0.113.3)

      ovs-vsctl show
      6a1f48dd-30d8-4d1b-b52a-4698354e5c26
      Bridge ovsbr
      Port "vx1"
      Interface "vx1"
      type: vxlan
      options: {remote_ip="15.0.113.3"}
      Port ovsbr
      Interface ovsbr
      type: internal

      Here’s the modinfo for the openvswitch module:
      Server 1) modinfo openvswitch
      filename: /lib/modules/3.5.0-030500-generic/kernel/net/openvswitch/openvswitch.ko
      version: 1.9.90
      license: GPL
      description: Open vSwitch switching datapath
      srcversion: 726FDD22BBD59C95CB6769A
      depends:
      vermagic: 3.5.0-030500-generic SMP mod_unload modversions

      Server 2) modinfo openvswitch
      filename: /lib/modules/3.5.0-17-generic/kernel/net/openvswitch/openvswitch.ko
      version: 1.9.90
      license: GPL
      description: Open vSwitch switching datapath
      srcversion: 726FDD22BBD59C95CB6769A
      depends:
      vermagic: 3.5.0-17-generic SMP mod_unload modversions

      Please let me know if there is any other debug info I can provide.

      Thanks
      Radhika

  23. Brent Salisbury 11-19-2012


    Thanks Radhika, I am in between x86 boxes for a couple of weeks but will try to get a loaner or two and test myself also.
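    In the meantime, one knob worth checking when tunneled throughput lags the physical link is the NIC offload state; on some kernels TSO/GSO does not apply to encapsulated traffic, which drives up per-packet CPU cost. A rough sketch, assuming the physical interface is eth0:

    ```shell
    # Show current offload settings (TSO, GSO, GRO, checksumming)
    ethtool -k eth0

    # Toggle segmentation offloads to see if they change the numbers
    ethtool -K eth0 tso on gso on gro on

    # While iperf runs, a single core pegged at 100% usually means
    # the encapsulated path is doing segmentation in software
    top
    ```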
    Thanks,
    -Brent

  24. Ramachandra 11-23-2012


    Hi Brent,

    Thanks a lot for this tutorial (and others on openflow). I think I have the setup fine, but the bridge that has the gre interface is sending out ARP requests for the remote tunnel IP instead of tunneling.
    Can you please help me with the setup here?
    I am trying to get a gre tunnel working by using 2 VMs that have openvswitch and 2 VMs as hosts. Following is my setup:

    ——————- =========
    (interface eth1)Switch 2 (interface eth2)

    Here is my configuration:

    Host 1: The interface connected to the switch-1 has the IP 192.168.5.100

    Switch 1:
    eth1 is part of a bridge br1 and has IP 192.168.1.1
    eth2 is part of another bridge br2 and has IP 192.168.5.1
    There is another bridge br3, which has the IP 10.1.1.1 and has a GRE interface with remote IP 192.168.1.100 (which is the ‘public IP’ of switch-2).

    Host 2: The interface connected to the switch has the IP 192.168.10.100

    Switch 2:
    eth1 is part of a bridge br1 and has IP 192.168.1.100. eth2 is part of another bridge br2 and has IP 192.168.10.1
    There is another bridge br3, which has the IP 11.1.1.1 and has a GRE port with remote IP 192.168.1.1 (‘public’ IP of Switch-1)

    I have added the relevant routes: ‘route add -host 192.168.10.100 dev br2’ on Switch-1 and ‘route add -host 192.168.5.100 dev br2’ on Switch-2.

    Now, when I try to ping from Host 1 to Host 2 using the tunnel, I did a tcpdump on br2 on Switch-1 (the bridge having the gre interface). br2 is sending out an ARP request for 192.168.10.100 instead of tunneling the packet. Can you please tell me what I may be missing here? Do we have to add flows manually?

    Thanks & Regards,
    Ramachandra

  25. Ramachandra 11-23-2012


    Hi Brent,

    I just observed that my setup depiction in the above comment is not clear (a pasting error, probably). This is the setup that I am using:

    ——————- =========
    (interface eth1)Switch 2 (interface eth2)

    Can you please help me with understanding what I am missing here.

    Thanks & Regards,
    Ramachandra Kasyap

  26. Ramachandra 11-23-2012


    Again the same issue. I apologize for multiple posts; the content looks fine in the text box but somehow does not show up properly in the post. Making a final attempt:

    ————————- —————connected to—

    —–connected to————————-

    I am trying to ping from Host 1 to Host 2 over the tunnel.

    Thanks,
    Kasyap

    • Ramachandra 11-23-2012


      I see that we don’t need 3 bridges – only 2 bridges would do:

      Switch -1:
      br1 (which has eth1) and IP 192.168.1.1
      br2 (which has eth2 and gre10). br2 has IP 192.168.5.1. Remote IP of gre10 is 192.168.1.100.
      There is a VM connected to eth2 with the IP 192.168.5.100

      Similarly, Switch-2 has the following configuration:
      br1 (which has eth1) and IP 192.168.1.100
      br2 (which has eth2 and gre10). br2 has IP 192.168.10.1. Remote IP of gre10 is 192.168.1.1.
      There is a VM connected to eth2 with the IP 192.168.10.100

      I am trying ping from VM1 (with IP 192.168.5.100) to VM2 (192.168.10.100). But, I still face the same issue – at Switch 1, an ARP request is sent out on br2 for 192.168.10.100. I am not sure what I am missing.

      Thanks & Regards,
      Ramachandra
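      For reference, the two-bridge layout above can be sketched on Switch-1 roughly as follows (names and addresses taken from the comment; untested):

      ```shell
      # br1 carries the 'public' transport network over eth1
      ovs-vsctl add-br br1
      ovs-vsctl add-port br1 eth1
      ifconfig eth1 0
      ifconfig br1 192.168.1.1 netmask 255.255.255.0

      # br2 faces the VM on eth2 and terminates the GRE tunnel
      ovs-vsctl add-br br2
      ovs-vsctl add-port br2 eth2
      ifconfig br2 192.168.5.1 netmask 255.255.255.0
      ovs-vsctl add-port br2 gre10 -- set interface gre10 type=gre \
          options:remote_ip=192.168.1.100
      ```

      Switch-2 mirrors this with 192.168.1.100 configured locally and remote_ip=192.168.1.1.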

      • Ramachandra 11-24-2012


        Debugging further, I think I know where the issue is (though I am not sure how to fix this).

        In openvswitchd.log, I see the following messages when I add the port for gre:

        ovs-vsctl add-port br2 gre10 …

        WARN|system@br2: failed to add gre10 as port: Address family not supported by protocol.

        I did not check this earlier as the ‘ovs-vsctl add-port ..’ command was not throwing any errors. I got suspicious as the gre port was not shown in ‘ovs-dpctl show’, then enabled logging and retried adding bridges and ports again.
        If anyone faced this issue and fixed it, please let me know.

        Thanks & Regards,
        Ramachandra
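        For anyone hitting the same “Address family not supported by protocol” warning: it usually means the GRE support that the Open vSwitch module depends on is not loaded in the kernel. A quick check, assuming a stock Linux host with loadable modules:

        ```shell
        # See whether gre support is present alongside openvswitch
        lsmod | grep -E 'gre|openvswitch'

        # Load GRE support, then re-add the tunnel port
        modprobe ip_gre
        ovs-vsctl del-port br2 gre10
        ovs-vsctl add-port br2 gre10 -- set interface gre10 type=gre \
            options:remote_ip=192.168.1.100

        # Verify the datapath now shows the tunnel port
        ovs-dpctl show
        ```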

  27. Arunkumar U 03-02-2013


    Hi,

    Please help me; I cannot ping a VM on one host from a VM on the other host via VXLAN.

    I have installed OVS 1.7.3 on 2 hosts; the configuration is below.

    Created 2 VMs in Host 1 and 1 VM in Host 2, added the VM interfaces (vnet0, vnet1) to the bridge "br2" and the physical port to "br1"

    Host 1:
    ——–
    ovs-vsctl add-br br1
    ovs-vsctl add-br br2

    ovs-vsctl add-port br1 p6p1
    ifconfig p6p1 0

    ovs-vsctl add-port br2 vnet0
    ovs-vsctl add-port br2 vnet1

    ifconfig br1 192.168.1.11 netmask 255.255.255.0
    route add default gw 192.168.1.1 br1

    ifconfig br2 10.1.2.11 netmask 255.255.255.0

    ovs-vsctl add-port br2 vx1 -- set interface vx1 type=vxlan options:remote_ip=192.168.1.10

    VM1 IP : 10.1.2.50
    VM2 IP : 10.1.2.51

    Host 2 :
    ——–

    ovs-vsctl add-br br1
    ovs-vsctl add-br br2

    ovs-vsctl add-port br1 p20p1
    ifconfig p20p1 0

    ovs-vsctl add-port br2 vnet0

    ifconfig br1 192.168.1.10 netmask 255.255.255.0
    route add default gw 192.168.1.1 br1

    ifconfig br2 10.1.2.10 netmask 255.255.255.0

    ovs-vsctl add-port br2 vx1 -- set interface vx1 type=vxlan options:remote_ip=192.168.1.11

    VM1 IP : 10.1.2.52

    Any idea?

    • Arunkumar U 03-05-2013


      Hi Brent,

      GRE is working fine in this scenario; I could ping from VM to VM over GRE and it encapsulates with the GRE header.

      But VXLAN is not working.

      br1 ping is successful from both hosts.

      Host 1 :
      ======

      [root@VERYX home]# ping 192.168.1.10
      PING 192.168.1.10 (192.168.1.10) 56(84) bytes of data.
      64 bytes from 192.168.1.10: icmp_req=1 ttl=64 time=0.542 ms
      64 bytes from 192.168.1.10: icmp_req=2 ttl=64 time=0.213 ms

      ovs-vsctl show
      ——————-

      6924d99b-c848-4b24-a7ca-f837647cd9ff
      Bridge “br2”
      Port "vx1"
      Interface "vx1"
      type: vxlan
      options: {remote_ip="192.168.1.10"}
      Port “br2”
      Interface “br2”
      type: internal
      Port “vnet0”
      Interface “vnet0”
      Bridge “br1”
      Port “br1”
      Interface “br1”
      type: internal
      Port “p6p1”
      Interface “p6p1”

      route -n
      ———–

      Kernel IP routing table
      Destination Gateway Genmask Flags Metric Ref Use Iface
      0.0.0.0 192.168.66.129 0.0.0.0 UG 0 0 0 p5p1
      10.1.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br2
      192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br1
      192.168.12.0 192.168.66.129 255.255.255.0 UG 0 0 0 p5p1
      192.168.66.0 0.0.0.0 255.255.255.0 U 0 0 0 p5p1

      br1: flags=4163 mtu 9000
      inet 192.168.1.11 netmask 255.255.255.0 broadcast 192.168.1.255
      inet6 fe80::58d1:f6ff:fed5:2942 prefixlen 64 scopeid 0x20
      ether 00:30:48:ba:3b:85 txqueuelen 0 (Ethernet)
      RX packets 266 bytes 37372 (36.4 KiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 245 bytes 23359 (22.8 KiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

      br2: flags=4163 mtu 1500
      inet 10.1.2.11 netmask 255.255.255.0 broadcast 10.1.2.255
      inet6 fe80::5446:bbff:fe8e:2e4a prefixlen 64 scopeid 0x20
      ether 56:46:bb:8e:2e:4a txqueuelen 0 (Ethernet)
      RX packets 19890 bytes 35230671 (33.5 MiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 22500 bytes 3325783 (3.1 MiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

      lo: flags=73 mtu 16436
      inet 127.0.0.1 netmask 255.0.0.0
      inet6 ::1 prefixlen 128 scopeid 0x10
      loop txqueuelen 0 (Local Loopback)
      RX packets 6144579 bytes 9900599294 (9.2 GiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 6144579 bytes 9900599294 (9.2 GiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

      p5p1: flags=4163 mtu 1500
      inet 192.168.66.130 netmask 255.255.255.0 broadcast 192.168.66.255
      inet6 fe80::230:48ff:feba:3b84 prefixlen 64 scopeid 0x20
      ether 00:30:48:ba:3b:84 txqueuelen 1000 (Ethernet)
      RX packets 12188226 bytes 10304812103 (9.5 GiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 16014534 bytes 16100938795 (14.9 GiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
      device interrupt 16 memory 0xd0200000-d0220000

      p6p1: flags=4419 mtu 9000
      inet6 fe80::230:48ff:feba:3b85 prefixlen 64 scopeid 0x20
      ether 00:30:48:ba:3b:85 txqueuelen 1000 (Ethernet)
      RX packets 216483 bytes 183723740 (175.2 MiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 88504 bytes 6230355 (5.9 MiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
      device interrupt 17 memory 0xd0300000-d0320000

      vnet0: flags=4163 mtu 1500
      inet6 fe80::fc54:ff:fe49:b4da prefixlen 64 scopeid 0x20
      ether fe:54:00:49:b4:da txqueuelen 500 (Ethernet)
      RX packets 119043 bytes 186635778 (177.9 MiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 246158 bytes 185881453 (177.2 MiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

      vnet1: flags=4163 mtu 1500
      inet6 fe80::fc54:ff:fe7b:e96d prefixlen 64 scopeid 0x20
      ether fe:54:00:7b:e9:6d txqueuelen 500 (Ethernet)
      RX packets 7570 bytes 614957 (600.5 KiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 11730 bytes 146176530 (139.4 MiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

      Host 2:
      =====

      [root@VXLAN home]# ping 192.168.1.11
      PING 192.168.1.11 (192.168.1.11) 56(84) bytes of data.
      64 bytes from 192.168.1.11: icmp_req=1 ttl=64 time=0.526 ms
      64 bytes from 192.168.1.11: icmp_req=2 ttl=64 time=0.203 ms

      ovs-vsctl show
      ——————

      573e0df5-1c70-420f-b31a-8ddf3e4ca5da
      Bridge “br1”
      Port “p20p1”
      Interface “p20p1”
      Port “br1”
      Interface “br1”
      type: internal
      Bridge “br2”
      Port “vnet0”
      Interface “vnet0”
      Port "vx1"
      Interface "vx1"
      type: vxlan
      options: {remote_ip="192.168.1.11"}
      Port “br2”
      Interface “br2”
      type: internal

      route -n
      ———-
      Kernel IP routing table
      Destination Gateway Genmask Flags Metric Ref Use Iface
      0.0.0.0 192.168.66.129 0.0.0.0 UG 0 0 0 p21p1
      10.1.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br2
      192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br1
      192.168.66.0 0.0.0.0 255.255.255.0 U 0 0 0 p21p1

      ifconfig -a
      ————-

      br1: flags=4163 mtu 1500
      inet 192.168.1.10 netmask 255.255.255.0 broadcast 192.168.1.255
      inet6 fe80::260:e0ff:fe49:455e prefixlen 64 scopeid 0x20
      ether 00:60:e0:49:45:5e txqueuelen 0 (Ethernet)
      RX packets 48 bytes 6678 (6.5 KiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 56 bytes 8854 (8.6 KiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

      br2: flags=4163 mtu 1500
      inet 10.1.2.10 netmask 255.255.255.0 broadcast 10.1.2.255
      inet6 fe80::1c78:87ff:fe14:9740 prefixlen 64 scopeid 0x20
      ether 1e:78:87:14:97:40 txqueuelen 0 (Ethernet)
      RX packets 34 bytes 4910 (4.7 KiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 35 bytes 7066 (6.9 KiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

      lo: flags=73 mtu 16436
      inet 127.0.0.1 netmask 255.0.0.0
      inet6 ::1 prefixlen 128 scopeid 0x10
      loop txqueuelen 0 (Local Loopback)
      RX packets 104165 bytes 199828052 (190.5 MiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 104165 bytes 199828052 (190.5 MiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

      p16p1: flags=4099 mtu 1500
      ether 00:60:e0:49:45:5a txqueuelen 1000 (Ethernet)
      RX packets 0 bytes 0 (0.0 B)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 0 bytes 0 (0.0 B)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
      device interrupt 16 memory 0xd0100000-d0120000

      p17p1: flags=4099 mtu 1500
      ether 00:60:e0:49:45:5b txqueuelen 1000 (Ethernet)
      RX packets 0 bytes 0 (0.0 B)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 0 bytes 0 (0.0 B)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
      device interrupt 17 memory 0xd0200000-d0220000

      p18p1: flags=4099 mtu 1500
      ether 00:60:e0:49:45:5c txqueuelen 1000 (Ethernet)
      RX packets 0 bytes 0 (0.0 B)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 0 bytes 0 (0.0 B)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
      device interrupt 18 memory 0xd0300000-d0320000

      p19p1: flags=4099 mtu 1500
      ether 00:60:e0:49:45:5d txqueuelen 1000 (Ethernet)
      RX packets 0 bytes 0 (0.0 B)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 0 bytes 0 (0.0 B)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
      device interrupt 19 memory 0xd0400000-d0420000

      p20p1: flags=4163 mtu 1500
      inet6 fe80::260:e0ff:fe49:455e prefixlen 64 scopeid 0x20
      ether 00:60:e0:49:45:5e txqueuelen 1000 (Ethernet)
      RX packets 49 bytes 6934 (6.7 KiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 60 bytes 9696 (9.4 KiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
      device interrupt 16 memory 0xd0500000-d0520000

      p21p1: flags=4163 mtu 1500
      inet 192.168.66.131 netmask 255.255.255.0 broadcast 192.168.66.255
      inet6 fe80::260:e0ff:fe49:455f prefixlen 64 scopeid 0x20
      ether 00:60:e0:49:45:5f txqueuelen 1000 (Ethernet)
      RX packets 93124 bytes 8012610 (7.6 MiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 156267 bytes 194696815 (185.6 MiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
      device interrupt 17 memory 0xd0600000-d0620000

      p33p1: flags=4099 mtu 1500
      ether 00:60:e0:49:45:59 txqueuelen 1000 (Ethernet)
      RX packets 0 bytes 0 (0.0 B)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 0 bytes 0 (0.0 B)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
      device interrupt 16 memory 0xd0000000-d0020000

      vnet0: flags=4163 mtu 1500
      inet6 fe80::fc54:ff:fe9e:cb75 prefixlen 64 scopeid 0x20
      ether fe:54:00:9e:cb:75 txqueuelen 500 (Ethernet)
      RX packets 243 bytes 24945 (24.3 KiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 3611 bytes 192533 (188.0 KiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

      Thanks & Regards,
      Arunkumar U

  28. Brent Salisbury 03-05-2013


    Hey Arunkumar, if you are still having problems, paste in the following for me from the two physical hosts.

    route -n
    ifconfig -a
    ovs-vsctl show
    Ensure the two br1 interfaces can ping one another, e.g., ping 192.168.1.10
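    Also, since GRE encapsulation sometimes works while VXLAN does not, it is worth confirming the VXLAN UDP traffic is actually leaving the host and is not firewalled. Depending on the OVS/kernel version, the destination port may be 8472 (the pre-IANA default) or 4789 (the IANA-assigned port). A rough sketch, assuming the transport NIC is eth0:

    ```shell
    # Watch for encapsulated VXLAN frames on the transport interface
    tcpdump -ni eth0 'udp port 8472 or udp port 4789'

    # If frames arrive but are dropped, open the UDP ports (iptables example)
    iptables -I INPUT -p udp --dport 8472 -j ACCEPT
    iptables -I INPUT -p udp --dport 4789 -j ACCEPT
    ```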

    Thanks,
    -Brent

  29. Arunkumar U 03-11-2013


    Hi Brent,

    VXLAN is still not working in the above scenario; do you have any idea?

    Thanks & Regards,
    Arunkumar U

  30. Roni Tan 04-11-2013


    Is brcompat taken out? After I do ‘make && make install’, the openvswitch module is there but the brcompat module is missing.

    Is the brcompat module still required to run with openvswitch?

    Thanks,
    Roni

  31. Brent Salisbury 04-11-2013


    Hi Roni, it is no longer recommended after the upstream merge in kernel 3.2. I removed it from the post, thanks for the feedback. I will verify the build real quick.

  32. Brent Salisbury 04-12-2013


    I verified the build. I changed br2 -> br1 and br1 -> br0, as I think it confused folks looking for br0 to equal eth0. The 2nd bridge, br1, is just to bring up an interface; that would normally be a VM. You can build the tunnels lots of ways. I use a second bridge since the hosts are both on the same subnet on my laptop in VMware Fusion, so I need separate networks on br1 on the backside that aren’t reachable. Here was the output from one of the hosts with config after OVS was built.

    root@ub64:/home/brent/openvswitch# ovs-vsctl add-br br0
    root@ub64:/home/brent/openvswitch# ovs-vsctl add-br br1
    root@ub64:/home/brent/openvswitch# ovs-vsctl add-port br0 eth0
    root@ub64:/home/brent/openvswitch# ifconfig eth0 0 && ifconfig br0 192.168.1.10 netmask 255.255.255.0
    root@ub64:/home/brent/openvswitch# route add default gw 192.168.1.1 br0
    root@ub64:/home/brent/openvswitch# ifconfig br1 10.1.2.10 netmask 255.255.255.0
    root@ub64:/home/brent/openvswitch# ovs-vsctl add-port br1 gre1 -- set interface gre1 type=gre options:remote_ip=192.168.1.11
    root@ub64:/home/brent/openvswitch#
    root@ub64:/home/brent/openvswitch# ping 10.1.2.11
    PING 10.1.2.11 (10.1.2.11) 56(84) bytes of data.
    64 bytes from 10.1.2.11: icmp_req=1 ttl=64 time=0.625 ms
    64 bytes from 10.1.2.11: icmp_req=2 ttl=64 time=0.313 ms
    ^C
    --- 10.1.2.11 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 999ms
    rtt min/avg/max/mdev = 0.313/0.469/0.625/0.156 ms
    root@ub64:/home/brent/openvswitch# route -n
    Kernel IP routing table
    Destination Gateway Genmask Flags Metric Ref Use Iface
    0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 br0
    10.1.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br1
    192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br0
    root@ub64:/home/brent/openvswitch# ifconfig
    br0 Link encap:Ethernet HWaddr 00:0c:29:65:fd:82
    inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0
    inet6 addr: fe80::285f:3ff:fea2:dafc/64 Scope:Link
    UP BROADCAST RUNNING MTU:1500 Metric:1
    RX packets:622 errors:0 dropped:0 overruns:0 frame:0
    TX packets:54 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:171756 (171.7 KB) TX bytes:3832 (3.8 KB)

    br1 Link encap:Ethernet HWaddr ea:34:7d:49:e7:49
    inet addr:10.1.2.10 Bcast:10.1.2.255 Mask:255.255.255.0
    inet6 addr: fe80::c0a7:aaff:fe90:29dc/64 Scope:Link
    UP BROADCAST RUNNING MTU:1500 Metric:1
    RX packets:12 errors:0 dropped:0 overruns:0 frame:0
    TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:896 (896.0 B) TX bytes:986 (986.0 B)

    eth0 Link encap:Ethernet HWaddr 00:0c:29:65:fd:82
    inet6 addr: fe80::20c:29ff:fe65:fd82/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:1222493 errors:0 dropped:0 overruns:0 frame:0
    TX packets:53824 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:653399936 (653.3 MB) TX bytes:4089412 (4.0 MB)

    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:7088 errors:0 dropped:0 overruns:0 frame:0
    TX packets:7088 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:537609 (537.6 KB) TX bytes:537609 (537.6 KB)

    root@ub64:/home/brent/openvswitch# ps -ea | grep ovs
    27288 ? 00:00:00 ovs_workq
    27293 ? 00:00:00 ovsdb-server
    27296 ? 00:00:00 ovs-vswitchd
    27297 ? 00:00:00 ovs-vswitchd
    root@ub64:/home/brent/openvswitch# ovs-vsctl del-port gre1
    root@ub64:/home/brent/openvswitch# ovs-vsctl add-port br1 vx1 -- set interface vx1 type=vxlan options:remote_ip=192.168.1.11
    root@ub64:/home/brent/openvswitch# ping 10.1.2.11
    PING 10.1.2.11 (10.1.2.11) 56(84) bytes of data.
    64 bytes from 10.1.2.11: icmp_req=1 ttl=64 time=0.810 ms
    ^C
    --- 10.1.2.11 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.810/0.810/0.810/0.000 ms

    Thanks,
    -Brent

  33. masrepus 05-16-2013


    Brent,

    Great info; it is awesome to find clear and informative details.

    Has there been any update on the multicast support and VXLAN?

    Thanks, Mas

    • Brent Salisbury 07-19-2013


      Hey Mas, good question. I have not, but pinging the listserv or #openvswitch on irc.freenode.net would get it straight from Ben Pfaff or Kyle Mestery, who rolled VXLAN into OVS. Kyle is @mestery on Twitter if you want to ping him. I will ask next time I chat with him if I remember (forgetful).

      Regards,
      -Brent

  34. jUN 06-29-2013


    Thanks for your documentation above.. However I have Question.. May I leave my question ont this?

    My Questions are blow.. (dmesg | tail)
    [ 379.578038] openvswitch: Unknown symbol gre_del_protocol (err 0)
    [ 379.578069] openvswitch: Unknown symbol gre_add_protocol (err 0)

    Because of this, I cannot insert the module. I am using Ubuntu 12.04 LTS (Server), downloaded from the website.

    Please let me know.. Help me..

  35. Brent Salisbury 07-19-2013


    Hi Jun, it looks like you probably upgraded your kernel to the latest release. OVS is still unsupported there until patches go in, last I looked. There is probably a thread on ovs-discuss with more details. If you back off to an earlier kernel version you will be able to load the .ko again.

    Cheers,
    -Brent
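
    A quick way to check the kernel/module mismatch Jun is hitting is to compare the running kernel against the openvswitch module actually available for it. This is a sketch; the exact kernel range OVS supports at any given time is documented in the OVS FAQ, and the paths/output here are illustrative.

```shell
# Show the running kernel, then check whether an openvswitch.ko
# built for it is installed (vermagic must match the running kernel).
uname -r
modinfo openvswitch 2>/dev/null | grep -E '^(vermagic|filename)' \
  || echo "openvswitch.ko not found for this kernel"
```

    If the vermagic line does not match `uname -r`, the module was built against a different kernel and will fail to insert, as in Jun's dmesg output.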

  36. Imran 07-25-2013


    Hi Brent,

    I am not able to set up a VXLAN tunnel using the instructions. I created separate tunnel endpoints on the br0 bridge on both hosts and they can ping each other, but the isolated bridge br1 on the two hosts cannot. I am a newbie and not able to figure out how to troubleshoot the tunnel. I am doing this in VirtualBox and the virtual machines are Ubuntu 13.04 images. I guess I am missing some important piece of the puzzle here. Any help would be appreciated.

    Thanks!

  37. JUN 08-04-2013


    I am so confused about Open vSwitch. I cannot understand what the problem is with my testing.

    I am using Ubuntu 12.04. However, whenever I try to build and install Open vSwitch with its kernel module, it does not work.

    With "lsmod", sometimes I see:
    gre .. .. openvswitch (used by)
    openvswitch ....

    However, sometimes I see only:
    openvswitch .....
    (only the openvswitch module is present)

    I think in that case a different openvswitch.ko is being loaded at boot. I do not know what I should do.

    Do you have any idea about this?

  38. Gregory Gee 08-21-2013


    Hopefully this is a simple question. Great article.

    ovs-vsctl add-port s1 s1-gre1 -- set interface s1-gre1 type=gre options:remote_ip=192.168.1.10

    I was wondering how you can set it up so multiple GRE and/or multiple VXLAN tunnels terminate in the same OVS. Googling, I see references to options:key. Is that it, and does it work for both GRE and VXLAN?

    Thanks.

  39. Brent Salisbury 08-21-2013


    Hey Gregory, thanks for the feedback. You are correct. Take a look at “Tunnel Options” in the OVSDB schema Interface table. Doc at http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf

    Cheers,
    -Brent
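
    As a sketch of what Gregory is after: several tunnels can terminate on one bridge as long as each is a distinct port, and options:key disambiguates traffic to the same peer. Port names, IPs, and key values below are illustrative, not from the original how-to.

```shell
# Two GRE and two VXLAN tunnels on the same bridge br1.
# options:key sets the GRE key / VXLAN VNI; tunnels to the same
# remote_ip must use different keys to stay distinct.
ovs-vsctl add-port br1 gre-a -- set interface gre-a type=gre \
    options:remote_ip=192.168.1.11 options:key=100
ovs-vsctl add-port br1 gre-b -- set interface gre-b type=gre \
    options:remote_ip=192.168.1.12 options:key=200
ovs-vsctl add-port br1 vx-a -- set interface vx-a type=vxlan \
    options:remote_ip=192.168.1.11 options:key=5000
ovs-vsctl add-port br1 vx-b -- set interface vx-b type=vxlan \
    options:remote_ip=192.168.1.12 options:key=5001
```

    The authoritative list of per-tunnel options is the "Tunnel Options" section of the Interface table in the ovs-vswitchd.conf.db(5) schema doc linked above.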

  40. Buiosu 09-28-2013


    Hello Brent!

    I find your tutorial very nice; however, either my English or my networking skills are lacking. I wanted to evaluate EoGRE and VXLAN as two of the methods for my MSc thesis. I'm using OVS for the first time, just for this purpose at the moment.

    I have such network topology:
    [A]—[B]===Internet===[C]—[D] //Internet is not really Internet but nvm
    A and D are local LANs to be connected via L2 tunnels; B and C are tunnel endpoints that take all traffic from eth1 (the interface facing A/D) and put it into a tunnel via eth0 (the 'public' interface).

    B and C are VMs on VMware ESXi with Ubuntu Server 13.04 and OVS 1.11.0.

    I tried to configure the OVS instances without br1, as I assumed it only simulates the private LAN. However, that didn't work. I then tried a solution analogous to yours, but it doesn't work either. Could you please help me spot the issue? Or maybe something else is wrong?

    A has IP address 192.168.10.1/24
    D has IP address 192.168.10.2/24
    B has empty eth0 and eth1, however from eth0 public IP address/30 has been moved to br0.
    C has empty eth0 and eth1, however from eth0 public IP address/30 has been moved to br0
    When trying to copy your solution, br1 of B got IP address 192.168.10.10/24, and br1 of C got IP address 192.168.10.20/24.

    All interfaces are up and running.

    Here is ovs-vsctl show result for one of the tunnel enpoints:
    Bridge "br1"
    Port "gre1"
    Interface "gre1"
    type: gre
    options: {remote_ip="a.b.c.230"} //public IP
    Port "br1"
    Interface "br1"
    type: internal
    Bridge "br0"
    Port "eth0"
    Interface "eth0"
    Port "br0"
    Interface "br0"
    type: internal

    And route -n:
    Destination Gateway Genmask Flags Metric Ref Use Iface
    0.0.0.0 a.b.c.225 0.0.0.0 UG 0 0 0 br0
    192.168.10.0 0.0.0.0 255.255.255.0 U 0 0 0 br1
    a.b.c.224 0.0.0.0 255.255.255.252 U 0 0 0 br0

    What is wrong?
    I'm afraid I assumed the GRE tunnel here is L2, while right now it seems to be L3 to me.

    Regards

    • Buiosu 09-28-2013


      Man, I forgot about eth1 in all that process. I did add-port br1 eth1 just now, but it didn't help either.
      Now I tried to do this the same way I did EtherIP or Mikrotik's EoIP Linux implementation: br1 consisting of gre1 and eth1, nothing else, and eth0 as the public interface. It doesn't work; moreover, I now have no connectivity between the tunnel endpoint and even its default gateway.

      Regards

    • Buiosu 09-30-2013


      Something that might be worrying: when running ovs-vswitchd I've noticed:
      dpif_netdev|ERR|gre_system: cannot receive packets on this network device (Resource temporarily unavailable)

      dpif|WARN|netdev@ovs-netdev: failed to add gre1 as port: Operation not supported.

      With vxlan I get the same error (pointing to vxlan_sys_4789) and warning.

      Regards.

  41. Brent Salisbury 10-15-2013


    Hmm, not sure what that error would indicate, Buiosu. Browsing the source code for that error message may be a good place to start.

    By the way, I just built OVS on Ubuntu 13.04. Things looked good; I tweaked a couple of things here and there, but nothing major. Also, installing from packages is failing for me, particularly the module-assistant step, so maybe avoid that and save yourself the headache by using the tarball. I will poke around on the package build, but in the meantime look out for the following command if it is automated anywhere in your install.

    Failed: module-assistant auto-install openvswitch-datapath

    Cheers!
    -Brent

  42. Brent Salisbury 10-21-2013


    I ran into issues building from source on Ubuntu 13.04+ in the past week or so. The specific error is: ### “db:Open_vSwitch,manager_options : invalid syntax” ###

    For now I pulled the git clone step from the how-to and opted for the tarball. If you want to pull from git, use:
    git clone git://git.openvswitch.org/openvswitch

    Cheers,
    -Brent

  43. plus.google.com 12-17-2013


    When I originally commented I clicked the “Notify me when new comments are added” checkbox and now each time
    a comment is added I get four e-mails with the same comment.
    Is there any way you can remove me from that service? Many thanks!

  44. Rajat Gupta 12-25-2013


    Hi Brent,

    VXLAN is not working in the scenario below:
    I am not able to ping the VM on host2 from the VM on host1.

    ovs_version: 1.4.0+build0
    Ubuntu : ubuntu-12.04.3-desktop-amd64

    ###################Host1 information ##########################

    abc@abc-:~$ route -n
    Kernel IP routing table
    Destination Gateway Genmask Flags Metric Ref Use Iface
    0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 br0
    10.1.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br1
    192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br0
    abc@abc-:~$
    abc@abc-:~$ sudo ovs-vsctl show
    6ef6b449-9bc3-4a62-be33-be9c0218614b
    Bridge "br1"
    Port "vnet0"
    Interface "vnet0"
    Port "br1"
    Interface "br1"
    type: internal
    Port "vxlan1"
    Interface "vxlan1"
    type: vxlan
    options: {remote_ip="192.168.1.11"}
    Bridge "br0"
    Port "eth2"
    Interface "eth2"
    Port "br0"
    Interface "br0"
    type: internal
    ovs_version: "1.4.0+build0"
    abc@abc-:~$
    abc@abc-:~$ ifconfig
    br0 Link encap:Ethernet HWaddr a0:36:9f:09:14:27
    inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0
    inet6 addr: fe80::a236:9fff:fe09:1427/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:284 errors:0 dropped:0 overruns:0 frame:0
    TX packets:58 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:17402 (17.4 KB) TX bytes:12585 (12.5 KB)

    br1 Link encap:Ethernet HWaddr aa:bf:fb:3f:25:4e
    inet addr:10.1.2.10 Bcast:10.1.2.255 Mask:255.255.255.0
    inet6 addr: fe80::a8bf:fbff:fe3f:254e/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:82 errors:0 dropped:0 overruns:0 frame:0
    TX packets:69 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:13860 (13.8 KB) TX bytes:12834 (12.8 KB)

    eth2 Link encap:Ethernet HWaddr a0:36:9f:09:14:27
    inet6 addr: fe80::a236:9fff:fe09:1427/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:7407021 errors:0 dropped:0 overruns:0 frame:0
    TX packets:16573594 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:826516246 (826.5 MB) TX bytes:8048295924 (8.0 GB)
    Memory:cfb00000-cfc00000

    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:65536 Metric:1
    RX packets:155707 errors:0 dropped:0 overruns:0 frame:0
    TX packets:155707 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:378958204 (378.9 MB) TX bytes:378958204 (378.9 MB)

    vnet0 Link encap:Ethernet HWaddr fe:54:00:94:70:dc
    inet6 addr: fe80::fc54:ff:fe94:70dc/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:82 errors:0 dropped:0 overruns:0 frame:0
    TX packets:96 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:500
    RX bytes:13860 (13.8 KB) TX bytes:19123 (19.1 KB)
    abc@abc-:~$
    abc@abc-:~$ sudo ovs-dpctl show
    system@br0:
    lookups: hit:259 missed:87 lost:0
    flows: 0
    port 0: br0 (internal)
    port 1: eth2
    system@br1:
    lookups: hit:98 missed:53 lost:0
    flows: 0
    port 0: br1 (internal)
    port 1: vnet0
    abc@abc-:~$

    ################# Host2 Information ########################

    abc@abc:~$ route -n
    Kernel IP routing table
    Destination Gateway Genmask Flags Metric Ref Use Iface
    0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 br0
    10.1.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br1
    192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br0
    abc@abc:~$
    abc@abc:~$ sudo ovs-vsctl show
    06a5e043-aca4-477a-b54f-76010347acfe
    Bridge "br0"
    Port "br0"
    Interface "br0"
    type: internal
    Port "eth1"
    Interface "eth1"
    Bridge "br1"
    Port "vnet0"
    Interface "vnet0"
    Port "br1"
    Interface "br1"
    type: internal
    Port "vxlan1"
    Interface "vxlan1"
    type: vxlan
    options: {remote_ip="192.168.1.10"}
    ovs_version: "1.4.0+build0"
    abc@abc:~$
    abc@abc:~$ ifconfig
    br0 Link encap:Ethernet HWaddr 00:d0:b7:6b:65:a7
    inet addr:192.168.1.11 Bcast:192.168.1.255 Mask:255.255.255.0
    inet6 addr: fe80::2d0:b7ff:fe6b:65a7/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:5 errors:0 dropped:0 overruns:0 frame:0
    TX packets:128 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:441 (441.0 B) TX bytes:5648 (5.6 KB)

    br1 Link encap:Ethernet HWaddr e2:42:30:94:d5:41
    inet addr:10.1.2.11 Bcast:10.1.2.255 Mask:255.255.255.0
    inet6 addr: fe80::e042:30ff:fe94:d541/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:90 errors:0 dropped:0 overruns:0 frame:0
    TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:16881 (16.8 KB) TX bytes:594 (594.0 B)

    eth1 Link encap:Ethernet HWaddr 00:d0:b7:6b:65:a7
    inet6 addr: fe80::2d0:b7ff:fe6b:65a7/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:5 errors:0 dropped:0 overruns:0 frame:0
    TX packets:270 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:441 (441.0 B) TX bytes:12124 (12.1 KB)

    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:65536 Metric:1
    RX packets:1641 errors:0 dropped:0 overruns:0 frame:0
    TX packets:1641 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:6898664 (6.8 MB) TX bytes:6898664 (6.8 MB)

    vnet0 Link encap:Ethernet HWaddr fe:54:00:58:4c:be
    inet6 addr: fe80::fc54:ff:fe58:4cbe/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:90 errors:0 dropped:0 overruns:0 frame:0
    TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:500
    RX bytes:16881 (16.8 KB) TX bytes:1062 (1.0 KB)
    abc@abc:~$
    abc@abc:~$ sudo ovs-dpctl show
    system@br0:
    lookups: hit:127 missed:36 lost:0
    flows: 0
    port 0: br0 (internal)
    port 1: eth1
    system@br1:
    lookups: hit:66 missed:33 lost:0
    flows: 0
    port 0: br1 (internal)
    port 1: vnet0
    abc@abc:~$

    Thanks & Regards,
    Rajat Gupta

  45. Rajat Gupta 12-26-2013


    Hi Brent,

    I have installed OVS v1.10.0, but the "ovs-dpctl show" command is still not showing the VXLAN interface.

    Below command is used to create the vxlan interface.
    ovs-vsctl add-port br1 vxlan1 -- set interface vxlan1 type=vxlan options:remote_ip=192.168.1.11

    One more thing: if I configure a GRE tunnel then everything works well, but VXLAN is not working.

    Thanks & Regards,
    Rajat Gupta

    • Brent Salisbury 12-28-2013


      Take a look at your syslogs and dmesg output and troubleshoot from there. Paste logs if you want me to take a peek. Warning ahead of time: I am pretty hit and miss on timeliness at the moment 🙂
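
      A minimal first troubleshooting pass for a tunnel that won't come up might look like the following. These are standard OVS/Linux diagnostic commands (bridge name br1 matches the how-to; your names may differ), and the output will vary per setup.

```shell
# 1. Is the kernel module loaded, and did it log any errors?
lsmod | grep openvswitch
dmesg | grep -i openvswitch | tail

# 2. Does the datapath actually list the vxlan/gre port?
ovs-dpctl show

# 3. While pinging across the tunnel, are packets matching
#    flows or being dropped?
ovs-ofctl dump-flows br1
```

      If the tunnel port is missing from ovs-dpctl show, the problem is usually at the module/datapath layer rather than in the bridge configuration.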

  46. Li, Chen 12-30-2013


    Hi, thanks for the instructions.
    But after following this, I still can't reach the other node.
    I also observed OVS dropping all my packets. Do you know why this is happening?

    on Node 1:
    1. Run command : ping 10.1.2.11
    2. Run command : ovs-dpctl dump-flows
    in_port(1),eth(src=fe:34:0f:f9:16:48,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=10.1.2.10,tip=10.1.2.11,op=1,sha=fe:34:0f:f9:16:48,tha=00:00:00:00:00:00), packets:5, bytes:210, used:0.647s, actions:drop

    Hope you guys can give me some advice about this.

    Thanks.
    -chen

    Here is my set-up’s status:
    Node 1:
    ifconfig
    br-int Link encap:Ethernet HWaddr FE:34:0F:F9:16:48
    inet addr:10.1.2.10 Bcast:10.1.2.255 Mask:255.255.255.0
    inet6 addr: fe80::1038:7dff:fec0:c4f1/64 Scope:Link
    UP BROADCAST RUNNING MTU:1500 Metric:1
    RX packets:393799 errors:0 dropped:0 overruns:0 frame:0
    TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:75528724 (72.0 MiB) TX bytes:1476 (1.4 KiB)

    eth0 Link encap:Ethernet HWaddr 00:25:90:79:EE:58
    inet addr:192.168.11.102 Bcast:192.168.255.255 Mask:255.255.0.0
    inet6 addr: fe80::225:90ff:fe79:ee58/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:47704498 errors:0 dropped:0 overruns:0 frame:0
    TX packets:7581995 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:8436995106 (7.8 GiB) TX bytes:2025995928 (1.8 GiB)
    Memory:dfd20000-dfd40000

    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:57 errors:0 dropped:0 overruns:0 frame:0
    TX packets:57 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:5022 (4.9 KiB) TX bytes:5022 (4.9 KiB)

    ovs-vsctl show
    4ab902f3-5d19-46b9-bd24-0ad080540613
    Bridge br-int
    Port "gre1"
    Interface "gre1"
    type: gre
    options: {remotr_ip="192.168.11.103"}
    Port br-int
    Interface br-int
    type: internal
    ovs_version: "1.11.0"

    ovs-ofctl dump-flows br-int
    NXST_FLOW reply (xid=0x4):
    cookie=0x0, duration=33.562s, table=0, n_packets=0, n_bytes=0, idle_age=33, priority=0 actions=NORMAL

    Node 2:
    ifconfig
    br-int Link encap:Ethernet HWaddr F6:B2:FB:FA:4F:40
    inet addr:10.1.2.11 Bcast:10.1.2.255 Mask:255.255.255.0
    inet6 addr: fe80::874:14ff:fe1c:6f2d/64 Scope:Link
    UP BROADCAST RUNNING MTU:1500 Metric:1
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:0 (0.0 b) TX bytes:2736 (2.6 KiB)

    eth0 Link encap:Ethernet HWaddr 00:25:90:79:D3:F4
    inet addr:192.168.11.103 Bcast:192.168.255.255 Mask:255.255.0.0
    inet6 addr: fe80::225:90ff:fe79:d3f4/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:50628891 errors:0 dropped:0 overruns:0 frame:0
    TX packets:10316125 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:9826925316 (9.1 GiB) TX bytes:2822135225 (2.6 GiB)
    Memory:dfd20000-dfd40000

    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:1391 errors:0 dropped:0 overruns:0 frame:0
    TX packets:1391 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:154486 (150.8 KiB) TX bytes:154486 (150.8 KiB)

    ovs-vsctl show
    218f9b54-b3e8-47ac-8563-58ff91ca57e9
    Bridge br-int
    Port "gre1"
    Interface "gre1"
    type: gre
    options: {remotr_ip="192.168.11.102"}
    Port br-int
    Interface br-int
    type: internal
    ovs_version: "1.11.0"

    ovs-ofctl dump-flows br-int
    NXST_FLOW reply (xid=0x4):
    cookie=0x0, duration=7014.936s, table=0, n_packets=60, n_bytes=2736, idle_age=6412, priority=0 actions=NORMAL
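
    When dump-flows shows actions:drop like this, one early sanity check is to print the tunnel options as ovs-vsctl actually stored them: OVSDB stores arbitrary option keys verbatim, so a misspelled option name is accepted silently and the intended setting never takes effect. A sketch (interface name gre1 as in the paste above):

```shell
# Print the options recorded on the tunnel interface; compare them
# character-for-character against the documented option names
# (e.g. remote_ip) in the Interface table schema.
ovs-vsctl get interface gre1 options
```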

  47. JZ Zhao 04-26-2014


    Hi Brent,

    I saw you mentioned that OVS can support CAPWAP encapsulation, and I am pretty interested in it. I found vport-capwap.c in the datapath module in OVS, but I would like to know how to make the OVS switch send CAPWAP messages.

    Thanks

    JZ Zhao