Installing OpenStack ML2 Neutron Plugin with DevStack on Fedora

OpenStack networking is tricky, primarily because programmable distributed systems are relatively new territory beyond the rigid L2/L3 control protocols we have used for the past 20 years. What consistently impresses me about OpenStack networking is the innovative network services that systems programmers are developing using APIs and virtual switching. Of course, most vendors have a plugin for their hardware, but service instantiation at the hypervisor endpoint is relatively easy, since these are open systems with compute scale.

My comrades and I who are working on the OVSDB implementation in the OpenDaylight project have needed to get up to speed on the OpenStack Neutron ML2 (Modular Layer 2) plugin to begin integration. OVSDB in OpenDaylight enables us to implement an agent-less/driver-less OpenStack/OpenDaylight integration, using OVSDB for topology provisioning and OpenFlow for forwarding control and service insertion. We are fortunate to have our friend and OVSDB committer Kyle Mestery, who is also on the core Neutron team, to assist on the ML2 integration front.

Brief ML2 Neutron Overview

The ML2 GRE overlay implementation proactively provisions tenant tunnels, with ini files on the compute nodes providing each OVS instance a list of VTEPs. The OVS instances build tunnel endpoints (TEPs) to each compute host, along with endpoint(s) on an external bridge (br-ex) to drain traffic into the native underlay. From a control perspective, proactive flow rules are pushed for each segmentation ID into an OVS table. For more information on Open vSwitch, check the FAQ; Ben Pfaff and team do an awesome job documenting, and the listserv has about every answer you need archived.
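If you want to see this with your own eyes on a compute node, you can inspect OVS directly. The bridge names below assume the default DevStack layout (br-int for the integration bridge, br-tun for the tunnel bridge):

```shell
# List bridges and ports; GRE tunnel ports show their remote_ip option,
# one per peer VTEP that was provisioned
sudo ovs-vsctl show

# Dump the proactively installed flow rules on the tunnel bridge;
# look for tun_id in the match fields -- that is the GRE key
# (segmentation ID) used for tenant isolation
sudo ovs-ofctl dump-flows br-tun
```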

When Nova requests a compute node, Libvirt provisions the port by calling OVS using the ovs-vsctl client, and Neutron requests the provider network and allocates a provider segmentation ID, which is used as a GRE key or VXLAN VNI for tenant isolation. The provider network type is simply the overlay encapsulation in this case (GRE). It also provides the tenant's device_id, network_id, tenant_id, IP addr, and MAC addr, to name a few important fields. Neutron ML2 API calls to a plugin are currently unidirectional, but I think this can improve over time.
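You can see those fields on any provisioned port with the Neutron client; the port UUID below is a placeholder you would pull from the list output:

```shell
# List the ports Neutron knows about
neutron port-list

# Show one port; the output includes device_id, network_id,
# tenant_id, fixed_ips (IP addr), and mac_address
neutron port-show <port-uuid>
```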

Git Devstack

For first-time DevStackers, this is very doable but is not quite easy mode and may require some Googling to troubleshoot (the best way to learn: break and rebuild). Also, this is all subject to change as the projects and their architectures evolve.
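If you are starting from scratch, grab DevStack from git (repository location as of this writing):

```shell
# Install git if the base image does not have it, then clone DevStack
sudo yum -y install git
git clone https://github.com/openstack-dev/devstack.git
cd devstack
```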

  • FYI, there appears to be a packaging bug when installing Fedora using the LiveCD with Open vSwitch 1.11. The symptom involves the GRE kernel module that OVS pulls in:

[brent@fedora1 devstack]$ lsmod | grep openvswitch
openvswitch 66519 0
gre 18216 1 openvswitch

[brent@fedora1 devstack]$ modinfo gre
filename: /lib/modules/3.11.7-200.fc19.x86_64.debug/kernel/net/ipv4/gre.ko

  • The easiest workaround is to install Fedora using the Net-Install Fedora image (317 MB) found here: Download Fedora-19-x86_64-netinst.iso
  • That's it, just make sure you use the net-install disk. I didn't see a bug filed, so I will file one shortly. If you have had success with another image, save someone else the trouble and pop your success in the comments below if you have a moment to share.

There is no warranty with this post, and it is guaranteed to have an exceedingly short half-life. There are far too many moving parts not to require some troubleshooting. I tend to take a snapshot of the vanilla OS due to my ability to ruin a machine from time to time, and it saves the OS rebuild time.

I ran into a dbus services bug at startup.

A symlink works around the bug; you may also want to verify you have the latest NetworkManager patches.
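A sketch of the workaround, assuming the issue is the missing messagebus alias for the dbus systemd unit (paths are from a stock Fedora 19 layout):

```shell
# Pull in the latest NetworkManager fixes first
sudo yum -y update NetworkManager

# Alias messagebus to the dbus unit so services that look up
# messagebus.service resolve correctly
sudo ln -sf /usr/lib/systemd/system/dbus.service \
    /etc/systemd/system/messagebus.service
```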

If you have an existing Open vSwitch installation from a package, you can query for it and then remove it. You could always upgrade instead, but the extra removal step may avoid potential issues with the kernel module.
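The query and removal look roughly like this (package name assumes the stock Fedora openvswitch RPM):

```shell
# Query for an existing packaged Open vSwitch
rpm -qa | grep openvswitch

# Remove the package, then unload the kernel module so the
# DevStack-built OVS does not conflict with it
sudo yum -y remove openvswitch
sudo modprobe -r openvswitch
```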

In the example local.conf configs the hosts are as follows:

  • Controller : 172.16.58.139 [Fedora2] (SERVICE_HOST)
  • Compute: 172.16.58.136 [Fedora1]

Edit the hostnames and make them resolvable to one another with vi /etc/hosts.
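With the hostnames used in this post, both nodes would carry entries like:

```shell
# /etc/hosts on both nodes
172.16.58.139   fedora2
172.16.58.136   fedora1
```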

local.conf files for DevStack on Fedora

This local.conf works on Fedora 19 for the controller host. For Fedora builds, use Qpid rather than RabbitMQ; I left Rabbit in because there was a bug a while back, but I haven't noticed issues disabling it for Fedora. There are a few unnecessary fields in the config files below. I recommend reading the script and getting to know it, especially if you are looking to do development or integrate other projects into OpenStack, as the DevStack script greatly simplifies exchanging development environments among a team.

DevStack local.conf Controller Node (Fedora2 – 172.16.58.139)
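The original config block did not survive intact, so here is a minimal sketch of what a controller local.conf for ML2 with GRE tenant tunnels looked like in this era of DevStack; the variable names are standard DevStack options, but the passwords and tunnel range are placeholders you should change:

```shell
[[local|localrc]]
HOST_IP=172.16.58.139
SERVICE_HOST=172.16.58.139
ADMIN_PASSWORD=devstack
MYSQL_PASSWORD=devstack
SERVICE_PASSWORD=devstack
SERVICE_TOKEN=devstack

# Swap in Neutron for nova-network
disable_service n-net
enable_service q-svc q-agt q-dhcp q-l3 q-meta neutron

# ML2 with GRE tenant tunnels
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=gre
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000

# Qpid instead of Rabbit on Fedora
disable_service rabbit
enable_service qpid
```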


DevStack local.conf Compute Node (Fedora1 – 172.16.58.136)

This local.conf goes on the compute node:
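Again, a hedged sketch rather than the original file: the compute node runs only the compute and L2 agent services and points everything else at the controller. Passwords are placeholders:

```shell
[[local|localrc]]
HOST_IP=172.16.58.136
SERVICE_HOST=172.16.58.139
MYSQL_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
ADMIN_PASSWORD=devstack
MYSQL_PASSWORD=devstack
SERVICE_PASSWORD=devstack
SERVICE_TOKEN=devstack

# Only nova-compute, the Neutron L2 agent, and Qpid on this node
ENABLED_SERVICES=n-cpu,neutron,q-agt,qpid

# Must match the controller's tunnel settings
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=gre
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
```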

Post Installation

The following will get you started booting instances:
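A sketch of the first smoke-test commands after stack.sh completes; the CirrOS image name will vary with whatever DevStack pulled, and the net UUID is a placeholder from net-list:

```shell
cd devstack
. openrc admin admin

# Verify Neutron is answering and grab the private network UUID
neutron net-list

# Boot a test instance on the private network
nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec \
    --nic net-id=<private-net-uuid> testvm
nova list
```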

I will post VXLAN/VNI configurations for vanilla DevStack, and even cooler, ODL support, in the next week or so hopefully (the dependency is getting OF v1.3 at the moment).

Additional Content: OpenStack ML2 Videos from the OpenStack Summit

Check out this great panel with some friends and rockstar leaders from the OpenDaylight project speaking at the ODL mini-summit in Hong Kong. The speakers are Chris Wright (RedHat), Anees Shaikh (IBM), Kyle Mestery (Cisco) and Stephan Baucke (Ericsson).

OpenDaylight: An Open Source SDN for Your OpenStack Cloud

A Deep Dive into the ML2 Plugin

Bob Kukura (RedHat) and Kyle Mestery (Cisco) give a great overview and dig into the critical details.

What Next?

Scott Shenker of UC Berkeley, a co-founder of Nicira, said network virtualization is the “killer application”. If this is the case, the first application of the killer application is supporting OpenStack at scale. Nicira earned this right with the huge investment it made in the SDN reference data-plane implementation, OVS. If you have not watched Shenker’s latest revision of his always impressive SDN vision, be sure to make time for it.

Software-Defined Networking at the Crossroads

I could go on for days, but time is unfortunately not on my side with the day job starting in a couple of hours. Not to mention the all-star Madhu’s Roomba just went off on the Hangout we are on, its 4AM nightly alarm, always hilarious sounding. We will demo and post instructions for the ODL/Neutron plugin in the next few days. We are waiting on OF v1.3 to finish so we can get multi-table support *cough* *hint* *hint* Ed 🙂 OF v1.3 will avoid a full-mesh tunnel overlay per tenant/network. In the meantime, take a look at ML2 using DevStack. I am really impressed at the thought that went into putting it together. It has limitations, obviously, but I think in the near term SDN controllers and hardware with vTEP capabilities will alleviate a lot of the OpenStack networking pain and abstract away the complex overlay state of hacking networks today.

About the Author

Brent Salisbury: I have over 20 years of experience wearing various hats, from network engineer, architect, and ops to software engineer. More at Brent's LinkedIn.

  1. that1guy15 11-26-2013


    You keep doing this Brent!

    Posting great SDN / OpenStack posts that just tempt me to set aside my CCIE and jump in head first.

    Bad Brent!! 🙂

  2. Brent Salisbury 11-26-2013


    Haha!! Fun times!

  3. bmullan 11-27-2013


    Brent

    I’ve got a question for you that so far I’m getting diverse answers to.

    Maybe it hasn’t been asked before, I’m not sure, but I do know that searching the web I’ve not been able to find it posted as a question anywhere .. so here goes.

    Neutron is but one of the “services” in OpenStack. Many vendors/etc are designing SDN related “plugins” for Neutron.

    Could Neutron be used by itself sans OpenStack?

    By this I am asking could Neutron be installed on a RHEL/Centos/Ubuntu server and then be utilized to offer the related technologies SDN access to either VMs or LXC containers running on that server w/out the rest of OpenStack installed (Nova, Keystone et al)?

    I am asking because it would be a great way to enable applications/vm’s/LXC containers on that Linux servers to gain the capabilities of SDN (via Neutron & its plugins) without installing/configuring an entire OpenStack.

    I realize there may be quite of bit of “intertwingling” btwn Neutron, Keystone, Nova code .. but Neutron does have an API and Neutron is open source so my thinking anything like that could possibly be overcome by some smart devs.

    Anyway, I’m not sure anyone has asked the question before and the old saying goes that “the only dumb question is the one you don’t ask”.

  4. Brent Salisbury 12-05-2013


    Hi Bmullan, the core services are all deeply related. Nova is pretty much the glue that binds them together; the API calls for ML2 are all communicated through Nova. That said, anything is possible 🙂 Keystone can use a backend LDAP directory service, but Nova is really what ties it together.

    A controller is mostly used for provisioning topology and forwarding rules; Libvirt actually creates the ports. CloudStack, or something more lightweight like oVirt, might be interesting for you. I have been trying to find time to revisit oVirt as I haven’t used it in a while. Always curious to hear what you turn up, and thanks for the question!
    -Brent