Installing OpenStack ML2 Neutron Plugin with DevStack on Fedora
OpenStack networking is tricky. This is primarily because programmable distributed systems are relatively new compared to the rigid L2/L3 control protocols we have used for the past 20 years. What consistently impresses me about OpenStack networking is the innovative network services that systems programmers are developing using APIs and virtual switching. Most vendors, of course, have a plugin for their hardware, but instantiating services at the hypervisor endpoint is relatively easy since these are open systems with compute scale.
My comrades and I working on the OVSDB implementation in the OpenDaylight project have needed to get up to speed on the OpenStack Neutron ML2 (Modular Layer 2) plugin to begin integration. OVSDB in OpenDaylight enables us to implement an agent-less/driver-less OpenStack/OpenDaylight integration, using OVSDB for topology provisioning and OpenFlow for forwarding control and service insertion. We are fortunate to have our friend and OVSDB committer Kyle Mestery, who is also on the core Neutron team, to assist on the ML2 integration front.
Brief ML2 Neutron Overview
The ML2 GRE overlay implementation proactively provisions tenant tunnels, using ini files on the compute nodes to provide each OVS instance a list of VTEPs. The OVS instances build tunnel endpoints (TEPs) to each compute host, along with endpoint(s) on an external bridge (br-ex) to drain traffic into the native underlay. From a control perspective, proactive flow rules are pushed into an OVS table for each segmentation ID. For more information on Open vSwitch, check the FAQ; Ben Pfaff and team do an awesome job documenting, and the listserv has nearly every answer you need archived.
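To make that concrete, here is a rough sketch of what the provisioned tunnel mesh looks like from a compute node. The bridge layout (br-int for guest ports, br-tun for tunnels) is standard for the ML2/OVS agent, but the port name and addresses below are illustrative, based on this post's two-node topology:

### Illustrative only: inspecting the GRE mesh the ML2/OVS agent builds.
### br-int carries the guest ports; br-tun holds one GRE port per remote VTEP.
sudo ovs-vsctl show
#    Bridge br-tun
#        Port "gre-172.16.58.136"       <- hypothetical port name
#            Interface "gre-172.16.58.136"
#                type: gre
#                options: {in_key=flow, local_ip="172.16.58.139", out_key=flow, remote_ip="172.16.58.136"}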
When Nova schedules a guest to a compute node, libvirt provisions the port by calling OVS using the ovs-vsctl client, and Neutron requests the provider network and allocates a provider segmentation ID, which is used as a GRE key or VXLAN VNI for tenant isolation. The provider network type is simply the overlay encapsulation, in this case GRE. Neutron also provides the tenant's device_id, network_id, tenant_id, IP address, and MAC address, to name a few important fields. Neutron ML2 API calls to a plugin are currently unidirectional, but I think this can improve over time.
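For a feel of the plumbing, this is roughly the ovs-vsctl call that lands a guest port on br-int; the interface name, UUID, and MAC below are invented for illustration, and the exact invocation varies by release:

### A hedged sketch of guest port creation (values here are made up):
sudo ovs-vsctl -- --may-exist add-port br-int vnet0 \
  -- set Interface vnet0 \
     external-ids:iface-id=9f2e1c4a-1111-2222-3333-444455556666 \
     external-ids:attached-mac="fa:16:3e:aa:bb:cc" \
     external-ids:iface-status=active

The external-ids are what let the OVS agent correlate the OVS port back to the Neutron port and the fields listed above.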
Git Devstack
For first-time DevStackers, this is very doable but not quite easy mode, and it may require some Googling to troubleshoot (the best way to learn: break and rebuild). Also, this is all subject to change as the projects and their architectures evolve.
- FYI, there appears to be a packaging bug when installing Fedora from the LiveCD with Open vSwitch 1.11. The symptom shows up in the OVS GRE kernel module (note the debug kernel path in the modinfo output below):
[brent@fedora1 devstack]$ lsmod | grep openvswitch
openvswitch 66519 0
gre 18216 1 openvswitch
[brent@fedora1 devstack]$ modinfo gre
filename: /lib/modules/3.11.7-200.fc19.x86_64.debug/kernel/net/ipv4/gre.ko
- The easiest workaround is to install Fedora using the Net-Install Fedora image (317 MB) found here: Download Fedora-19-x86_64-netinst.iso
- That's it; just make sure you use the net-install disk. I didn't see a bug filed, so I will file one shortly. If you have had success with another image, save someone else the trouble and post your results below if you have a moment to share.
There is no warranty with this post, and it is guaranteed to have an exceedingly short half-life. There are far too many moving parts not to require some troubleshooting. I tend to take a snapshot of the vanilla OS, given my ability to ruin a machine from time to time, and it saves the OS rebuild time.
I ran into a dbus service bug with the following error:
Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Unit dbus-org.freedesktop.nm-dispatcher.service failed to load: No such file or directory. See system logs and 'systemctl status dbus-org.freedesktop.nm-dispatcher.service' for details.
Dec 4 03:33:52 fedora1 NetworkManager[7498]: <warn> Dispatcher failed: (32) Unit dbus-org.freedesktop.nm-dispatcher.service failed to load: No such file or directory. See system logs and 'systemctl status dbus-org.freedesktop.nm-dispatcher.service' for details.
The workaround is to upgrade NetworkManager and re-enable the dispatcher service (systemctl enable recreates the missing symlink), so you may want to verify you have the latest NetworkManager patches:
sudo yum -y upgrade NetworkManager
sudo systemctl enable NetworkManager-dispatcher.service
sudo systemctl status NetworkManager-dispatcher.service
If you have an existing Open vSwitch installation from a package, you can query for it with the following:
rpm -qa | grep openvswitch
Then remove the existing package. You could upgrade in place instead, but the extra step may avoid potential issues removing the kernel module:
sudo rpm -e openvswitch-1.11.0-1.fc19.x86_64
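If the openvswitch kernel module is still loaded after the package is removed, you can unload it by hand (assuming no bridges are still using it):

sudo modprobe -r openvswitch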
In the example local.conf configs, the hosts are as follows:
- Controller : 172.16.58.139 [Fedora2] (SERVICE_HOST)
- Compute: 172.16.58.136 [Fedora1]
Edit the hostnames and make them resolvable to one another via vi /etc/hosts.
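For example, with this post's addressing, both nodes would carry entries like:

172.16.58.139   fedora2
172.16.58.136   fedora1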
git clone https://github.com/openstack-dev/devstack.git
cd devstack
vi local.conf   # add a controller or compute local.conf config from below
./stack.sh
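If a run fails partway, DevStack ships an unstack.sh that tears the environment back down so you can fix local.conf and try again:

./unstack.sh
./stack.sh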
local.conf files for DevStack on Fedora
This local.conf works on Fedora 19 for the controller host. For Fedora builds, use Qpid, not RabbitMQ; I left Rabbit disabled since there was a bug a while back, and I haven't noticed issues from disabling it on Fedora. There are a few unnecessary fields in the config files below. I recommend reading the stack.sh script and getting to know it, especially if you are looking to do development or integrate other projects into OpenStack, since the DevStack script greatly simplifies exchanging development environments among a team.
DevStack local.conf Controller Node (Fedora2 – 172.16.58.139)
[[local|localrc]]
LOGFILE=stack.sh.log
SCREEN_LOGDIR=/opt/stack/data/log
LOG_COLOR=False
#OFFLINE=True
RECLONE=yes
disable_service rabbit
enable_service qpid
enable_service n-cond
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
Q_HOST=$SERVICE_HOST
HOST_IP=172.16.58.139
Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,linuxbridge,l2population
Q_ML2_TENANT_NETWORK_TYPE=gre
Q_USE_SECGROUP=True
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_types=gre)
Q_AGENT_EXTRA_SRV_OPTS=(local_ip=$HOST_IP)
VNCSERVER_PROXYCLIENT_ADDRESS=192.168.64.193
VNCSERVER_LISTEN=0.0.0.0
SERVICE_HOST_NAME=${HOST_NAME}
SERVICE_HOST=172.16.58.139
FLOATING_RANGE=192.168.100.0/24
#PUBLIC_NETWORK_GATEWAY=192.168.75.254
MYSQL_HOST=$SERVICE_HOST
#RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
MYSQL_PASSWORD=mysql
QPID_PASSWORD=qpid
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin
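The Q_AGENT_EXTRA_AGENT_OPTS and Q_AGENT_EXTRA_SRV_OPTS arrays above get rendered into the OVS agent's ini file on disk, roughly like the snippet below; section names and exact keys vary by release, so treat this as a sketch rather than a verbatim file:

[OVS]
enable_tunneling = True
local_ip = 172.16.58.139

[AGENT]
tunnel_types = gre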
DevStack local.conf Compute Node (Fedora1 – 172.16.58.136)
This local.conf goes on the compute node:
[[local|localrc]]
LOGFILE=stack.sh.log
SCREEN_LOGDIR=/opt/stack/data/log
LOG_COLOR=False
#OFFLINE=true
RECLONE=yes
disable_all_services
enable_service n-cpu quantum q-agt n-novnc qpid
HOST_IP=172.16.58.136
SERVICE_HOST_NAME=fedora2
SERVICE_HOST=172.16.58.139
FLOATING_RANGE=192.168.100.0/24
Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,linuxbridge,l2population
Q_ML2_TENANT_NETWORK_TYPE=gre
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_types=gre)
Q_AGENT_EXTRA_SRV_OPTS=(local_ip=$HOST_IP)
Q_HOST=$SERVICE_HOST
MYSQL_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
MYSQL_PASSWORD=mysql
QPID_PASSWORD=qpid
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin
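Once both nodes finish stacking, it is worth confirming that the agents registered with the controller before booting guests; from the controller, something along these lines (output omitted since it varies per install):

### Verify the L2 agents on both hosts checked in with Neutron:
neutron agent-list
### And that both hypervisors registered with Nova:
nova hypervisor-list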
Post Installation
The following will get you started booting guest hosts:
### Set ENV Credentials ###
[brent@fedora1 devstack]$ . ./openrc admin admin

### Verify Nodes ###
[brent@fedora2 devstack]$ nova hypervisor-list
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 1  | fedora1             |
| 2  | fedora2             |
+----+---------------------+

### Get the Image ID for cirros-0.3.1-x86_64-uec ###
[brent@fedora1 devstack]$ nova image-list
+--------------------------------------+---------------------------------+--------+--------+
| ID                                   | Name                            | Status | Server |
+--------------------------------------+---------------------------------+--------+--------+
| 166de3dd-b52e-4d04-9237-92d063b9482a | cirros-0.3.1-x86_64-uec         | ACTIVE |        |
| 8a93dc1d-dcac-4ace-b2ac-9dc0038243d2 | cirros-0.3.1-x86_64-uec-kernel  | ACTIVE |        |
| 41d9c73a-d177-4333-8b47-dbb4209e5999 | cirros-0.3.1-x86_64-uec-ramdisk | ACTIVE |        |
+--------------------------------------+---------------------------------+--------+--------+

### Get the Network ID ###
[brent@fedora1 devstack]$ nova network-list
+--------------------------------------+---------+------+
| ID                                   | Label   | Cidr |
+--------------------------------------+---------+------+
| 8f8370fe-bb5c-4fd7-b541-ec513e63e922 | public  | None |
| fa56cd9f-22b2-4117-899b-85bdff4161a3 | private | None |
+--------------------------------------+---------+------+

### Boot a guest host. To force the VM onto a particular compute node (to verify
### the overlay tunnel), use --availability_zone with the admin credentials. You
### would need to modify the demo user's role using Keystone to do the same as that tenant.
nova boot --flavor m1.nano --image 166de3dd-b52e-4d04-9237-92d063b9482a --nic net-id=fa56cd9f-22b2-4117-899b-85bdff4161a3 turtles1 --availability_zone=nova:fedora1

### View the guest host ###
[brent@fedora1 devstack]$ nova list
+--------------------------------------+----------+--------+------------+-------------+------------------+
| ID                                   | Name     | Status | Task State | Power State | Networks         |
+--------------------------------------+----------+--------+------------+-------------+------------------+
| 62c26e76-1d20-424c-a82d-242d85b45d42 | turtles1 | ACTIVE | None       | Running     | private=10.0.0.8 |
| e3fe70b7-5516-464c-96f3-923062e05173 | r2vm1    | ACTIVE | None       | Running     | private=10.0.0.3 |
+--------------------------------------+----------+--------+------------+-------------+------------------+
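To verify that the guest's traffic actually rides the GRE overlay, you can poke at the tunnel bridge on the compute node; the exact output differs per install, so this is just where to look:

### The GRE port(s) to the other VTEP(s) should show up on br-tun:
sudo ovs-vsctl list-ports br-tun
### Along with the per-segmentation-ID flow rules the agent pushed:
sudo ovs-ofctl dump-flows br-tun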
I will post VXLAN/VNI configurations for vanilla DevStack and, even cooler, ODL support in the next week or so, hopefully (the dependency is getting OF v1.3 at the moment).
Additional Content: OpenStack ML2 Videos from the OpenStack Summit
Check out this great panel with some friends and rockstar leaders from the OpenDaylight project speaking at the ODL mini-summit in Hong Kong. The speakers are Chris Wright (RedHat), Anees Shaikh (IBM), Kyle Mestery (Cisco) and Stephan Baucke (Ericsson).
OpenDaylight: An Open Source SDN for Your OpenStack Cloud
A Deep Dive into the ML2 Plugin
Bob Kukura (RedHat) and Kyle Mestery (Cisco) give a great overview and dig into the critical details.
What Next?
Scott Shenker of UC Berkeley, a co-founder of Nicira, said network virtualization is the “killer application.” If this is the case, the first application of the killer application is supporting OpenStack at scale. Nicira earned this right with the huge investment it made into OVS, the SDN reference data plane implementation. If you have not watched Shenker's latest revision of his always impressive SDN vision, be sure to make time for it.
Software-Defined Networking at the Crossroads
I could go on for days, but time is unfortunately not on my side with the day job starting in a couple of hours. Not to mention that the all-star Madhu's Roomba just went off on the Hangout we are on; it's the 4 AM nightly alarm and always sounds hilarious. We will demo and post instructions for the ODL/Neutron plugin in the next few days. We are waiting on OF v1.3 to finish so we can get multi-table support *cough* *hint* *hint* Ed 🙂 OF v1.3 will avoid a full-mesh tunnel overlay per tenant/network. In the meantime, take a look at ML2 using DevStack. I am really impressed by the thought that went into putting it together. It obviously has limitations, but I think in the near term, SDN controllers and hardware with vTEP capabilities will alleviate a lot of the OpenStack networking pain and abstract away the complex overlay state of hacking networks today.
You keep doing this Brent!
Posting great SDN / OpenStack posts that just tempt me to set aside my CCIE and jump in head first.
Bad Brent!! 🙂
Haha!! Fun times!
Brent
I’ve got a question for you, one to which so far I’m getting diverse answers.
Maybe that’s because it might not have been asked before, I’m not sure, but I do know that searching the web I’ve not been able to find it posted as a question anywhere .. so here goes.
Neutron is but one of the “services” in OpenStack, and many vendors are designing SDN-related “plugins” for Neutron.
Could Neutron be used by itself sans OpenStack?
By this I am asking: could Neutron be installed on a RHEL/CentOS/Ubuntu server and then be utilized to offer SDN access to either VMs or LXC containers running on that server, without the rest of OpenStack installed (Nova, Keystone, et al.)?
I am asking because it would be a great way to give applications/VMs/LXC containers on a Linux server the capabilities of SDN (via Neutron and its plugins) without installing and configuring an entire OpenStack.
I realize there may be quite a bit of “intertwingling” between the Neutron, Keystone, and Nova code .. but Neutron does have an API, and Neutron is open source, so my thinking is anything like that could possibly be overcome by some smart devs.
Anyway, I’m not sure anyone has asked the question before and the old saying goes that “the only dumb question is the one you don’t ask”.
Hi Bmullan, the core services are all deeply related; Nova is pretty much the glue that binds them together, and the ML2 API calls are all communicated through Nova. That said, anything is possible 🙂 Keystone can use a backend LDAP directory service, but Nova is really what ties it all together.
A controller is mostly used for provisioning topology and forwarding rules; libvirt actually creates the ports. CloudStack, or something more lightweight like oVirt, might be interesting for you. I have been trying to find time to revisit oVirt, as I haven't used it in a while. Always curious to hear what you turn up, and thanks for the question!
-Brent