How to Build an SDN Lab without Needing OpenFlow Hardware

I no longer use much of what is in this post. For a current lab setup, please see OpenDaylight OpenStack Integration with DevStack on Fedora.

How do you build a networking lab without networking equipment? That is yet another plus in the column of open, software-driven networks. Proofing and prototyping networks today is often done with things like NetFPGA or expensive vendor-manufactured hardware. Now that we are beginning to build primitives, APIs, and abstraction layers, the value of what the software development community has had for two decades is becoming obvious to networking.

Update: For a more updated and scripted install take a peek at this post:

Quite a few of us are working on wrapping our heads around what the current and future state of Software Defined Networking (SDN) will look like. To build or modify modules and applications we need a proper lab to do this in. Something I run into quite often is the idea that we need OpenFlow-enabled hardware to build this lab. The great news is we absolutely do not. Once that dawned on me, I realized a vSwitch is really all I ever need for modeling and prototyping. Once you need performance benchmarks and metrics, transitioning to hardware starts making sense.

Two Easy OpenFlow Lab Options
  1. Run a hypervisor like KVM or Xen with Open vSwitch managing the VM nodes. This requires hardware virtualization support.
  2. Run multiple VM instances, each running Open vSwitch and using multiple bridges to create unique Data Path IDs (DPIDs) that essentially appear as different hosts. This has lower hardware requirements but is a bit less flexible for seeing what the host is doing. That said, you could spin up dozens of vSwitches under the control of a single OF controller or separate OF controllers and instantiate data paths. An example of this option is in this post. All you need to do after that is attach each bridge to a controller, and data paths are then built by the OF controller rather than the OVS slow path. There are quite a few controllers out there. I primarily use POX (Python) and Floodlight (Java), since both have the best maintenance and community from what I have been exposed to, along with easy modules to use, adjust, or spin your own. The documented APIs are coming along as well; Floodlight probably has the slight edge there, but I prefer Python, so it is a wash for me. POX also does not have a monetized product attached to it (Big Switch), which will be attractive to some folks.

OpenFlow Hardware Limitations

With both options you can integrate physical hardware, which is essentially the SDN roadmap topology being proposed in data center solutions today. A development environment is ideal to have in software so you can quickly change components and designs. The other key point is that in software we are not constrained by the shortcomings of today's hardware matches. For example, some HP switches cannot match on layer 2 fields, so EtherType and MAC source and destination are wildcarded. Other vendors can only match on source OR destination IP rather than both. These constraints are due to limited amounts of TCAM and/or software development limitations at this point. Also, who wants to get up and move cables and soak their power bill 🙂

SDN Topology

Figure 1. Using a physical switch for your lab.

SDN Islands

Figure 2. Using a vSwitch for your lab. The topologies are the same: logically similar and programmatically identical. Why is that? We finally have a HAL, an instruction set and primitives, providing abstraction. Yay!

Lab Prerequisites

KVM requires an x86 machine with either Intel VT or AMD-V support. Anything fairly new will have that support in the processor. A few older hardware builds support hardware-assisted virtualization once it is enabled in the BIOS.
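A quick way to check whether the processor advertises hardware virtualization is to look at the CPU flags in /proc/cpuinfo (vmx is Intel VT-x, svm is AMD-V):

```shell
# Count CPU flag lines indicating hardware virtualization support.
# 0 means no support (or it is disabled in the BIOS); anything greater
# than 0 means KVM can use full hardware-assisted virtualization.
grep -E -c '(vmx|svm)' /proc/cpuinfo
```

If this prints 0 on a machine you believe supports it, check the BIOS for a virtualization setting.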

This is done on a fresh install of 64-bit Ubuntu 12.04 (Precise).

Video: Installation Screencast Part I

Uninstall network-manager if running Ubuntu desktop (optional). It is not required, but you will likely have to troubleshoot around it if you don't. Note that removing it will likely cut you off if you are connected remotely.
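If you do decide to remove it, one way is to purge the package with apt. Do this from the local console, not over SSH:

```shell
# Remove NetworkManager so it does not fight with the manual bridge
# configuration done later in this post.
sudo apt-get purge -y network-manager
```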

System Preparation (Optional)

Install OpenvSwitch
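One way to install it, assuming the package names from the Ubuntu 12.04 (Precise) archive (names have changed in later releases):

```shell
# Install Open vSwitch: userspace tools, the DKMS kernel datapath
# module, and the bridge-compatibility layer used later in this post.
sudo apt-get update
sudo apt-get install -y openvswitch-switch openvswitch-datapath-dkms openvswitch-brcompat
```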

Verify install

Processes should look something like this
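A sketch of the verification, looking for the two core OVS daemons, ovsdb-server (the configuration database) and ovs-vswitchd (the switch itself); the exact process list will vary by system:

```shell
# Both core daemons should show up in the process list.
ps -ea | grep -E 'ovs|brcompat'
# Print the OVS version and the current (still empty) configuration.
sudo ovs-vsctl show
```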

Enable bridge compatibility

Open the Open vSwitch defaults file and change brcompat from no to yes, uncommenting the line by removing the #:
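On the Precise packages, the setting lives in /etc/default/openvswitch-switch; the change looks like this:

```shell
# Before (commented out, defaulting to no):
#   # BRCOMPAT=no
# After (uncommented and enabled):
BRCOMPAT=yes
```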
Restart OVS
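One way to restart the service on Ubuntu 12.04 so the brcompat change takes effect:

```shell
# Restart Open vSwitch to pick up the BRCOMPAT setting.
sudo /etc/init.d/openvswitch-switch restart
```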

Add your bridge, think of this as a subnet if you aren’t familiar with the term.
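Creating the bridge (br-int is the bridge name used throughout this post):

```shell
# Create a bridge named br-int. Until ports are added, it behaves
# like an isolated L2 segment.
sudo ovs-vsctl add-br br-int
# Confirm it exists.
sudo ovs-vsctl list-br
```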

Add a physical interface to your virtual bridge for connectivity off the box. If you don't script this part you will probably clip your connection as you zero out eth0 and apply it to br-int. You can pop the commands into a text file and make it executable with chmod +x <filename>.

Zero out your eth0 interface and slap it on the bridge interface (warning will clip you unless you script it)

Change your default route
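The steps above combined into a single script so a remote session survives the cutover. This is a sketch: the interface name (eth0), the IP address, and the gateway are example values you should adjust for your network:

```shell
#!/bin/sh
# Move the physical NIC's address to the OVS bridge in one shot so an
# SSH session is not stranded mid-way through the change.
ifconfig eth0 0.0.0.0                              # zero out the physical interface
ovs-vsctl add-port br-int eth0                     # attach eth0 to the bridge
ifconfig br-int 192.168.1.50 netmask 255.255.255.0 # example bridge address
route add default gw 192.168.1.1 br-int            # example default gateway
```

Save it, chmod +x it, and run it as root in one go.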

SDN Controller Option A: FloodLight (Java)

Install dependencies, apt-get for UB and yum for RH:
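A plausible dependency set for building Floodlight on Ubuntu (package names assumed from the Precise archive; the Red Hat equivalents come from yum):

```shell
# Build tooling for Floodlight: compiler toolchain, a JDK, Ant for the
# build, git for cloning, and Python headers used by the dev scripts.
sudo apt-get install -y build-essential default-jdk ant python-dev git
```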

Clone the GitHub project, build the jar, and start the controller:

cd into the floodlight directory created.

Run ant to build a jar. It will be in the ~/floodlight/target directory.

Run the controller:

By default it binds to port 6633 on all interfaces.
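The full sequence from the steps above, assuming the public floodlight/floodlight GitHub repository:

```shell
# Clone, build, and launch the Floodlight OpenFlow controller.
git clone https://github.com/floodlight/floodlight.git
cd floodlight
ant                                # builds target/floodlight.jar
java -jar target/floodlight.jar    # listens for switches on TCP 6633
```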

SDN OpenFlow Controller Option B: POX (Python)

By default it binds to port 6633 on all interfaces.
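POX can be cloned and started with its stock learning-switch module, which installs flows that make each OVS bridge behave like a learning switch:

```shell
# Clone POX and start it with the l2_learning forwarding component.
git clone https://github.com/noxrepo/pox
cd pox
./pox.py forwarding.l2_learning
```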

Attach OpenvSwitch to the Controller

In the Floodlight console you will see something similar to the output directly below. The DPID is the unique Data Path Identifier of the switch, not the controller.

The output of OVS ‘ovs-vsctl show’ looks something like this:
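Pointing the bridge at the controller (substitute the controller host's IP if it is not running on the same box):

```shell
# Tell br-int to connect to an OpenFlow controller on the standard
# OpenFlow listener port, 6633.
sudo ovs-vsctl set-controller br-int tcp:127.0.0.1:6633
# Verify: a Controller line, and is_connected: true once it attaches.
sudo ovs-vsctl show
```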

Install KVM and Integrate into OVS

These two scripts bring the KVM tap interfaces up into your bridge from the CLI. If you copy and paste from below, make sure the single quote (') does not get reformatted; it should be yellow in nano. In "switch=br-int", br-int is the name of your bridge in OVS.
$nano /etc/ovs-ifup  (open and paste what is below)
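A typical ovs-ifup helper, following the pattern in the Open vSwitch KVM documentation ($1 is the tap device name that KVM passes in):

```shell
#!/bin/sh
# Called by KVM when a VM starts: bring the tap device up and
# attach it to the OVS bridge.
switch='br-int'
/sbin/ifconfig $1 0.0.0.0 up
ovs-vsctl add-port ${switch} $1
```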

Open /etc/ovs-ifdown and paste what is below:
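And the matching ovs-ifdown, same convention:

```shell
#!/bin/sh
# Called by KVM when a VM stops: detach the tap device from the
# bridge and bring it down.
switch='br-int'
/sbin/ifconfig $1 0.0.0.0 down
ovs-vsctl del-port ${switch} $1
```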

Make both files executable
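KVM invokes these scripts directly, so both need the executable bit:

```shell
# Make the tap helper scripts executable.
sudo chmod +x /etc/ovs-ifup /etc/ovs-ifdown
```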

Video: Installation Screencast Part II

Spin up the VMs

Note: If you choose to use the small Linux kernel image, replace the Ubuntu image with linux-0.2.img.bz2. That is recommended for those comfortable configuring Linux networking from the command line. Examples are:

  • Host1

  • Host2

  • Host3
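One way to launch the three hosts with KVM, each with a unique MAC so each tap gets its own port on the bridge via the helper scripts. The ISO filename and MAC addresses here are examples:

```shell
# Host1
sudo kvm -m 512 -cdrom ubuntu-12.04-desktop-amd64.iso \
  -net nic,macaddr=00:11:22:33:44:51 \
  -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown &
# Host2
sudo kvm -m 512 -cdrom ubuntu-12.04-desktop-amd64.iso \
  -net nic,macaddr=00:11:22:33:44:52 \
  -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown &
# Host3
sudo kvm -m 512 -cdrom ubuntu-12.04-desktop-amd64.iso \
  -net nic,macaddr=00:11:22:33:44:53 \
  -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown &
```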

Each one of those will begin loading from the ISO. I just click "Try Ubuntu" when they are booting and run them from disk, since all we really need are nodes that can test connectivity as we push static flows. If it is a more permanent test lab it would make sense to install them to disk. These are the hosts you can test with.

OpenvSwitch Tap

Figure 3. Open vSwitch taps (vNIC/vnet)

Once the VM hosts have booted, assign IP addresses to them by clicking in the top left of the Ubuntu window and typing 'terminal' (no quotes). Then give them IPs with ifconfig if you want to assign them statically.
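For example, inside each guest, with example addresses on a shared subnet:

```shell
# Host1 guest
sudo ifconfig eth0 10.0.0.1 netmask 255.255.255.0
# Host2 guest
sudo ifconfig eth0 10.0.0.2 netmask 255.255.255.0
# Then verify reachability through the OpenFlow data path from Host1:
ping -c 3 10.0.0.2
```

If the bridges are attached to a running controller, the first ping triggers flow setup; watch the controller console to see the packet-ins arrive.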

Now that you have that, here are some example labs with the POX SDN Controller: Lab installation

About the Author

Brent Salisbury. I have over 15 years of experience wearing various hats: network engineer, architect, devops, and software engineer. I currently have the pleasure of working at the company that develops my favorite software I have ever used, Docker. My comments here are my personal thoughts and opinions. More at Brent's Bio. View all posts by Brent Salisbury →

  1. Yitzhak Bar Geva 09-13-2012

    I’ve been grappling with the following challenge, hoping that you can give me the pointers needed:
    I want to bring up OVS inside an LXC container, kinda backwards from what’s usually done, without(!!) BRCOMPAT=yes and without having to compile a kernel module.
    It seems doable, but I’m confused. Advice warmly accepted.
    Thanks so much,

  2. Deepankar 03-12-2013

    I am trying to learn about OpenFlow and Open vSwitch. The setup which I am trying to create:

              / \
             /   \
            /     \
      OVS Br0——OVS Br1
         |         |
         |         |
        VM1       VM2

    The setup is on a single Ubuntu host machine. OVS Br0 and Br1 are connected using a patch cable, and VM1 (Qemu-kvm) and VM2 (Qemu-kvm) are connected to their respective bridges using tap interfaces created with tunctl. VM1 and VM2 are on different subnets. I want to ping from VM1 to VM2. How should I do it using OpenFlow controllers?

    Thanks in Advance

  3. unknown 04-10-2014

    Hey… are we trying to install the controller and the vSwitch on the same Ubuntu machine?

    Will this config work if two VMs are present (one for the controller and the other for the switch)?