OpenStack Folsom Quantum Devstack Installation Tutorial and Screencast
- Installing OpenStack Grizzly with DevStack: here is an updated Grizzly DevStack tutorial, since Folsom is coming to an end.
This is a quick guide, including a diagram of a working reference architecture, for installing the OpenStack Folsom release with the Quantum networking component using the DevStack installation bash script.
With the OpenStack Essex release, I was writing installers in Python for Linux bridging and the Quantum plugin. So far the OpenStack Folsom install is more complex, mainly due to the added options from the Quantum networking plugin. A big feature of Quantum out of the box with this OpenvSwitch build is layer 2 multi-tenancy for path isolation using GRE tunnels and/or Vlan IDs.
This tutorial will install the packages listed under Folsom below. Keeping the Python installer up to date is too much maintenance for me, so I am switching to DevStack, maintained by people who get paid to do it. While installing by hand is good for initial builds, this release is trickier than Essex in my opinion, mainly due to the complexity of Quantum. I think that's some ironic commentary on SDN for those who track developments and try to separate reality, fear and myth on that front.
Figure 1. OpenStack releases. (Note Quantum is still in development and not stable until 'Grizzly', Q2 2013.)
Figure 2. Vendor contributions and how they have differed from Essex to Folsom.
OpenStack Folsom Installation Pre-Requisites
I am using Ubuntu 12.04. It never hurts to patch and update, but it is not required. Then we will download the DevStack project from GitHub. Since the hooks for OpenvSwitch are coupled with KVM, you will need physical hardware that supports hardware virtualization. Nested support is road-mapped and may already be integrated with Heat in OpenStack for hypervisor hooks into the public cloud.
Check for hardware virtualization on Intel (Intel-VT) with:
```shell
grep --color vmx /proc/cpuinfo
```
Check for hardware virtualization on AMD (AMD-V) with:
```shell
grep --color svm /proc/cpuinfo
```
You should see VMX or SVM highlighted in red from the commands above if HW virtualization is supported.
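The two checks can be rolled into one small script. This is a sketch: it falls back to a hypothetical sample flags line when /proc/cpuinfo is not readable, purely so the logic can be demonstrated off-box.

```shell
#!/bin/sh
# Combined check for Intel VT-x (vmx) and AMD-V (svm) CPU flags.
# Assumption: the fallback "flags" line is a made-up sample, not real output.
cpu_flags() {
  if [ -r /proc/cpuinfo ]; then
    grep -m1 '^flags' /proc/cpuinfo
  else
    echo "flags : fpu vme de pse tsc msr vmx"   # illustrative sample line
  fi
}

if cpu_flags | grep -Eq 'vmx|svm'; then
  echo "hardware virtualization: supported"
else
  echo "hardware virtualization: not supported"
fi
```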
Download and install git. Then clone the devstack repo on github.
```shell
sudo apt-get install git
git clone git://github.com/openstack-dev/devstack.git
cd devstack
```
Here is what my network NIC configurations look like in /etc/network/interfaces. We want the port to be up in promiscuous mode. Think of Snort or OpenVPN interface configuration 🙂
```shell
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
    address 172.31.246.7
    netmask 255.255.254.0
    network 172.31.246.0
    broadcast 172.31.247.255
    gateway 172.31.246.1
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 8.8.8.8

auto eth1
iface eth1 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down
```
Above is the /etc/network/interfaces file that determines the "ifconfig" output. Another handy binary is ethtool ("apt-get install ethtool") for checking whether a NIC has link; e.g. "ethtool eth0" reports a Boolean on "Link detected: yes|no". tcpdump is installed by default; "tcpdump host 172.16.1.100 -i eth0" would promiscuously listen for the specified host on interface eth0.
DevStack Localrc File
Add the following in the devstack directory and name it “localrc”. It is the same directory that contains the “stack.sh” shell script. The localrc file contains all of your build parameters to pass to the script. Finding the right combination took way too many hours but here is what is used in the screencast at the bottom of the post.
Be sure to read the script and understand what other parameters are available. This configuration will create unique Vlan IDs between the host and hypervisor. If, for example, you wanted to use GRE tunnels instead, you would add the line "ENABLE_TENANT_TUNNELS=True" to the configuration. The localrc file is the key to the install. If you have two NICs on the host, this should work for you.
```shell
ENABLED_SERVICES=q-meta,q-lbaas,n-obj,n-cpu,n-sch,n-cauth,horizon,mysql,rabbit,sysstat,cinder,c-api,c-vol,c-sch,n-cond,quantum,q-svc,q-agt,q-dhcp,q-l3,n-novnc,n-xvnc,q-lbaas,g-api,g-reg,key,n-api,n-crt
ADMIN_PASSWORD=openstack
MYSQL_PASSWORD=openstack
RABBIT_PASSWORD=openstack
SERVICE_PASSWORD=openstack
SERVICE_TOKEN=openstack
LOG=True
```
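If you prefer GRE tunnels over Vlans for tenant isolation, a variant of the localrc above might add lines like the following. This is a hedged sketch: ENABLE_TENANT_TUNNELS is mentioned above, but the TENANT_TUNNEL_RANGES range is an illustrative example, so check stack.sh for the authoritative variable names and defaults.

```shell
# Illustrative localrc fragment for GRE tunnel mode instead of Vlans.
# TENANT_TUNNEL_RANGES is an assumed example range; verify against stack.sh.
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
```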
If you downloaded the devstack git source a few days or even hours before you execute it, consider updating the installer. Assuming you let devstack install into the default /opt/stack/ directory, running the following will pull the latest code merge.
```shell
cd /opt/stack/quantum && sudo git pull
cd /opt/stack/nova && sudo git pull
cd /opt/stack/horizon && sudo git pull
cd /opt/stack/glance && sudo git pull
cd /opt/stack/keystone && sudo git pull
```
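The five pulls above can be collapsed into a loop. This is just a sketch: STACK_DIR assumes the default /opt/stack install path, and DRY_RUN=1 (the default here) prints the commands instead of running them so you can review first.

```shell
#!/bin/sh
# Loop refactor of the per-repo "git pull" commands above.
# Assumptions: repos live under /opt/stack; DRY_RUN=1 only prints commands.
STACK_DIR=${STACK_DIR:-/opt/stack}
DRY_RUN=${DRY_RUN:-1}

update_repos() {
  for repo in quantum nova horizon glance keystone; do
    if [ "$DRY_RUN" = 1 ]; then
      echo "cd $STACK_DIR/$repo && sudo git pull"
    else
      (cd "$STACK_DIR/$repo" && sudo git pull) || return 1
    fi
  done
}

update_repos
```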
Run ./stack.sh to Install OpenStack Folsom
With your localrc file in the same directory, run ./stack.sh. Once the installation is done, run the following to pull down and import the Ubuntu Cloud Image for Amazon EC2. I had a terrible time with what the script pulls down, and you will notice at the end I actually get token errors from Glance, but everything still works fine EXCEPT for the Cirros image. You can always test whether your image is working by looking at it in the Dashboard through VNC. Nova will report a host as up even if it is sitting there with a broken image trying to PXE boot.
Your shell environment variables need to have values similar to the following. This is service authentication. Simply pasting it into your bash shell, or adding it to localrc for persistence, will give you CLI permissions.
```shell
export SERVICE_TOKEN=openstack
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://localhost:5000/v2.0/
export SERVICE_ENDPOINT=http://localhost:35357/v2.0
```
Figure 3. Check the VM if you are troubleshooting. Nova still reports it up even if it is in this state.
Glance Errors
The Glance error looks something like this. It's as if the SERVICE_TOKEN variable isn't getting passed even though it is defined in Keystone somewhere. I did not spend any time on it since I want the Ubuntu Precise Cloud Image. You can build Windows images with this tutorial.
```shell
wget https://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
glance add name=Ubuntu-12.04 is_public=true container_format=ovf disk_format=qcow2 < precise-server-cloudimg-amd64-disk1.img
# You can then view the imported image with:
glance index
```
Before you boot a host you need to do one thing that is not automated. You have to add your fixed_range or back-end NIC to the br-int OpenvSwitch bridge. You are taking the physical interface eth1 and binding it to br-int. Think of it as a locally significant Vlan with no tags being inserted (probably the worst explanation ever). That physical host still does not have any addresses on it.
Folsom Quantum L3-agent
The L3-agent takes care of proxy-arping for the hosts. The eth1 interface is up promiscuously. That is why two hosts in the same tenant/bridge/subnet will always show the same ARP entry for neighboring VMs. For example, take VM Host1 at 192.168.1.10/24, VM Host2 at 192.168.1.11/24 and a gateway of 192.168.1.1. Pinging 192.168.1.1 from Host1 would give you the same MAC address in the ARP entry for the gateway as you would see if you pinged Host2. This is because all traffic is funneled through the L3-agent. The datapaths (layer 2 paths) are kept isolated by two mechanisms: Vlan tagging from the VM to the physical host, or GRE tunnels from the VM to the physical host. This was not possible prior to the OpenStack Folsom release; OpenStack Essex used default Linux bridging.
Folsom leverages advanced vSwitch features, in this case OpenvSwitch, in conjunction with IPTables filtering and DNSMASQ to deliver self-provisioning L2/L3/L4 header policy application from flow-based forwarding. It is an elegant solution; more on networking in the next section, with some drawings.
Note that when you spawn a VM from the DevStack install, as shown in the video, I use the "demo" group and select "private" in the networking tab.
```shell
# Look at the current vSwitch and routing table configuration
sudo ovs-vsctl show
route -n
# Add eth1 to br-ex
sudo ovs-vsctl add-port br-ex eth1
route -n
```
Video 1. Quick walk through of post script operations.
OpenStack Quantum Networking
Figure 4. Data path (layer 2) isolation is achieved between the hypervisor and VM host with Vlans or GRE tunnels. Those encapsulations break out at the hypervisor, or could be picked up by emerging SDN/data center orchestration networking.
The concepts are pretty cool. Our future is apparently going to be reliant on DNSMASQ, IPTables or plug-ins like NVP to build state and forwarding policy; see Cisco VSG and Nexus 1k, it's all the same concept, forwarding instantiation. Remember that Quantum won't be released as stable until Grizzly in spring of 2013. There are quite a few pieces that are not baked into Horizon (Dashboard) since Horizon development froze for the Folsom release. Take a look at the blueprint if you are interested in seeing upcoming features. Floating addresses are not supported via Horizon, so you need to add those via the API or CLI.
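Since Horizon can't do it, the CLI path would look roughly like the following. This is a hedged sketch: the floatingip-create and floatingip-associate sub-commands are from the Folsom quantum client, while EXT_NET, PORT_UUID and FLOATINGIP_UUID are placeholders you would look up yourself (e.g. with "quantum net-list" and "quantum port-list"); DRY_RUN=1 makes the helper print the commands instead of executing them.

```shell
#!/bin/sh
# Sketch: allocate a floating IP and bind it to a VM port from the CLI.
# All identifiers below are placeholders, not real values.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi
}

EXT_NET=ext-net             # placeholder external network name
PORT_ID=PORT_UUID           # placeholder VM port UUID

run quantum floatingip-create "$EXT_NET"
run quantum floatingip-associate FLOATINGIP_UUID "$PORT_ID"
```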
Figure 5. Here are where some of the DevStack parameters fit into the topology and how OpenStack networking from Hypervisor down looks.
Figure 7. Quantum Namespaces are the path isolation components. Getting familiar with ‘ip netns exec’ commands will be part of a deep dive.
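As a first step into that deep dive, here is a hedged sketch of poking at the namespaces. The l3-agent and dhcp-agent create one namespace per router and network, named qrouter-&lt;uuid&gt; and qdhcp-&lt;uuid&gt;; the sample below is made-up stand-in output for `ip netns list`, used so the filtering logic can be demonstrated anywhere.

```shell
#!/bin/sh
# Assumption: namespace names follow the qrouter-/qdhcp- convention.
# sample_netns_list stands in for real `ip netns list` output.
sample_netns_list() {
  cat <<'EOF'
qrouter-16f59947-aaaa-bbbb-cccc-0123456789ab
qdhcp-b1177f75-dddd-eeee-ffff-0123456789ab
EOF
}

# On a real host you would then run commands inside a namespace, e.g.:
#   sudo ip netns exec qrouter-<uuid> ip addr
#   sudo ip netns exec qrouter-<uuid> ping 192.168.1.10
routers=$(sample_netns_list | grep -c '^qrouter-')
echo "router namespaces found: $routers"
```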
Uninstall or Reinstall the Devstack OpenStack Environment
Depending on whether I ran into problems, I will often delete installed packages and directories to get as vanilla an install as I can. Since these are physical hosts, the snapshot image isn't available. Something along the following lines will delete some main packages and some directories that will get reinstalled as soon as you run the ./stack.sh script again. Remember, to just unload the DevStack script and processes like quantum, nova, glance etc., run ./unstack.sh. That will not stop dependencies like your hypervisor or vSwitch, which in this case are KVM/libvirt-bin and OpenvSwitch.
```shell
./unstack.sh && sudo apt-get purge libvirt-bin qemu-kvm qemu bridge-utils dkms openvswitch-switch openvswitch-datapath-dkms rabbitmq-server dnsmasq iptables tgt
# Delete some directories with configuration files if hitting problems, to start from scratch.
sudo rm -rf /etc/quantum
sudo rm -rf /etc/nova
sudo rm -rf /etc/glance
sudo rm -rf /var/lib/nova
sudo rm -rf /etc/libvirt/
sudo rm -rf /var/lib/libvirt/
```
Potential DevStack Folsom Errors to Troubleshoot
Horizon error: Template error, Dashboard with slug "nova" is not registered. Just delete /opt/stack/horizon/ and pull it down again and the error will clear up. I didn't troubleshoot Django to see why, since rebuilding the directory fixes it.
```shell
In template /opt/stack/horizon/horizon/templates/horizon/common/_sidebar.html, error at Dashboard with slug "nova" is not registered.
{% load branding horizon i18n %}
```
Thanks much for posting this info! Can you pls point any good docs/videos for understanding openvswitch and quantum(folsom) concepts ?
Hi,
Your setup works perfectly but I am unable to ping outside of the VM. I can only ping from the host to the VM, and I also noticed that in iptables the nat table is not populated with the VM's ip.
Great thanks for this outstanding tutorial.
Just my 2 cents. When you write :
sudo ovs-vsctl add-port br-int eth1
I think that should be instead
sudo ovs-vsctl add-port br-ex eth1
Physical interface should be connected to br-ex instead of br-int. No ?
Hi Sebastien, thanks for pointing that out. I think I was doing it to not have to mess with floating addresses and tying in the back end address and routing it instead. Probably a bad work around lol. I need to post a how to on floating addresses anyways. Thanks for the help.
Hi Brent,
Thanks for the great article. I am trying to play with openstack folsom with quantum.
The problem is I have hosts/VMs with one NIC only (eth0). Is there a way I can give the vlan or gre
networks a spin with hosts having just one NIC ??
Thanks
Vinay
Good question Vinay. You can push traffic towards a bridge and encap that into a tunnel off the box with OVS. I did that here from an HP public cloud node.
http://networkstatic.net/public-cloud-network-as-a-service/
Now, the trick will be to get that to play with the Quantum build.
In Essex you could use virtual interfaces like eth0:1, but I haven't tried that with Quantum/OVS. Might want to just try this DevStack install if it's still valid with Folsom.
Cya!
-Brent
Hi. With my Folsom setup I can ssh from compute(=management) host to VMs via public/external (floating) connected vIface OK. I can telnet any open port from VM guests to host also.
But I can only ping external network objects from guests and vice versa! No other TCP connection is possible, e.g.
root@guest1:~# telnet 21
root@external_host:~# telnet 22
failing all!
but
root@guest1:~# ping
root@external_host:~# ping
root@openstack_host:~# ssh
root@guest1:~# telnet
all OK…
Security groups are OK. iptables chains for filters and NAT at the compute host, and the namespaces, look OK also… I have no idea why I can only PING from/to all but ssh only from host to VMs!
I'm sorry for the repeat, it ate some values 🙂
original was:
root@guest1:~# telnet external_host_ftp 21
root@external_host:~# telnet guest1_floating 22
failing all!
but
root@guest1:~# ping external_host_ftp
root@external_host:~# ping guest1_floating
root@openstack_host:~# ssh guest1_floating
root@guest1:~# telnet openstack_host any_open_port
all OK…
It was a routing misconfig at the management host. I added a new route for the external clients network via br-ex and now all is fine. Thank you for the article.
Hi Bogdan, sorry I missed the comment. Glad you got it going. The networking here is fairly tricky for sure, more features will add more complexity 🙂
Cheers,
-Brent
Hi
I’ve followed your good tutorial, but after stack.sh finished its job, I don’t know how to create VLANs. Do I have to do it using the Horizon web interface? Maybe with some cli (ovs-vsctl command)?
By the way, from Horizon, when I try to create a subnet inside a network, it fails……
Thx in advance Brent!
My localrc:
```shell
HOST_IP=10.172.251.81
ADMIN_PASSWORD=openstack
MYSQL_PASSWORD=openstack
RABBIT_PASSWORD=openstack
SERVICE_PASSWORD=openstack
SERVICE_TOKEN=openstack
MYSQL_PASSWORD=openstack
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service quantum
enable_service g-api
enable_service rabbit
SCHEDULER=nova.scheduler.simple.SimpleScheduler
MULTI_HOST=True
FLOATING_RANGE=10.10.251.128/28
EXT_GW_IP=10.172.251.81
FIXED_RANGE=172.24.17.0/24
NETWORK_GATEWAY=172.24.17.1
SYSLOG=True
SYSLOG_HOST=10.10.251.81
LOG=True
DEBUG=True
ENABLE_TENANT_TUNNELS=False
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=true
TENANT_VLAN_RANGE=1:1000
PHYSICAL_NETWORK=eth1
OVS_PHYSICAL_BRIDGE=br-eth1
OVS_ENABLE_TUNNELING=False
```
Hi J, I need to reread the script as I have been swamped with other stuff and haven't touched it since I did this post. I have been meaning to write out a fixed 2-NIC custom installer. By default I think the devstack script defaults to Vlans for the L2 path isolation between VM and physical host, so you probably don't need to specify either.
If you are stumped paste in your output from ” sudo ovs-appctl fdb/show br-int ”
It should look something like this off the top of my head.
```shell
sudo ovs-appctl fdb/show br-int
 port  VLAN  MAC
    4     0
    5     2
    6     3
 etc
```
Each of those ports is a vNIC attached to a VM, and the MAC address is tagged with a Vlan ID.
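If you want those MAC-to-Vlan mappings in a more readable form, a small awk filter over the fdb table works. This is a sketch: sample_fdb below is made-up stand-in data mirroring the table format above; on a real host you would pipe the actual `sudo ovs-appctl fdb/show br-int` output through the same awk.

```shell
#!/bin/sh
# Turn `ovs-appctl fdb/show`-style output into "MAC -> vlan N" pairs.
# Assumption: the sample table below is illustrative, not from a real host.
sample_fdb() {
  cat <<'EOF'
 port  VLAN  MAC                Age
    4     2  92:a4:57:83:b9:fb   20
    5     3  a2:4e:00:12:34:56    9
EOF
}

# Skip the header row, then print the MAC (column 3) and VLAN (column 2).
sample_fdb | awk 'NR > 1 { print $3, "-> vlan", $2 }'
```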
Thanks,
-Brent
Hi Brent
thanks again for your reply.
I have some ports in each switch, but only port 4 is connected to VLAN 2. Do you know how to define, in the Horizon web interface, that a machine should connect to a specific vlan?
thx again Brent!
```shell
root@ubuntu:~# ovs-appctl fdb/show br-eth1
 port  VLAN  MAC                Age
root@ubuntu:~# ovs-appctl fdb/show br-ex
 port  VLAN  MAC                Age
root@ubuntu:~# ovs-appctl fdb/show br-int
 port  VLAN  MAC                Age
    4     2  92:a4:57:83:b9:fb   20
root@ubuntu:~# ovs-vsctl show
c83b274e-39fa-48e1-9f47-dad554c08883
    Bridge "br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "br-eth1-vlan200"
            tag: 200
            Interface "br-eth1-vlan200"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
        Port "vlan100"
            tag: 100
            Interface "vlan100"
    Bridge br-ex
        Port "eth1"
            Interface "eth1"
                type: internal
        Port "qg-16f59947-be"
            Interface "qg-16f59947-be"
                type: internal
        Port "br-ex-vlan300"
            tag: 300
            Interface "br-ex-vlan300"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port "qr-ed7cdda7-9a"
            tag: 1
            Interface "qr-ed7cdda7-9a"
                type: internal
        Port "tapb1177f75-8c"
            tag: 1
            Interface "tapb1177f75-8c"
                type: internal
        Port "int-br-eth1"
            Interface "int-br-eth1"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo99be0b8d-bb"
            tag: 2
            Interface "qvo99be0b8d-bb"
    ovs_version: "1.4.0+build0"
```
Did you install devstack on a virtual machine or on a physical node?
If you did could you give more feedback on your actual network configuration, such as what kind of devices are used? Bridged, NAT, host-only etc.
Also could you describe your physical network situation? That would help me a lot when understanding how to configure folsom, as it seems dependent on your physical network configuration?
Hi,
I followed your steps and I have some issues. When I try to start an instance I get in the log:
TypeError: can't compare datetime.datetime to NoneType
and also
Instance already created
Deleting the old instances and recreating the db doesn't make things better here, and the error happens at the first step of creation, before networking and so on; this is what makes me believe it's related to the scheduler, NTP or rabbitmq maybe…
If you can share any thought on that it would help me a lot.
Thanx
I have installed OpenStack Essex multinode on Ubuntu 12.04 LTS; the multinode setup is working fine with all smilies 🙂 after restarting services:
service nova-compute restart
1. But the problem here is that we are not able to ping / SSH to the VMs created at [@Node1], error "no route to host 22", whereas the VMs created at [@cnt] are able to ping / ssh. We have tried changing nova.conf and the bridge settings but nothing has resolved the problem.
Network Conf: [Node – Nova Compute]
```shell
auto lo
iface lo inet loopback

auto br100
iface br100 inet static
    bridge_ports eth1
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0

auto eth0
iface eth0 inet static
    address 172.16.2.5
    netmask 255.255.255.0
    gateway 172.16.2.1
    dns-nameservers 14.140.144.66

auto eth1
iface eth1 inet static
    address 192.168.22.1
    netmask 255.255.255.0
```
Can you help me out any changes or modifications in my settings / config;
Thanks,
Sandeep.
Do you have any experience playing with Ubuntu Cloud via Canonical? It is OpenStack Folsom deployed through Juju whatever that is. I have installed DevStack 3 times, each time was a success. I really LOVE using it, especially how it expands into “screen” and rejoining the stack and rejoining the screen after reboots. I even did some manual qemu-kvm to build a Windows 8 instance right in X windows on Ubuntu on my devstack server. But now I am going to build another, and wonder if I should go to the official Ubuntu Cloud “Canonical” version or keep doing it the way I’m comfortable – Ubuntu 12.04LTS + devstack via github.. thoughts???
When installation was done, I checked route -n on the host machine; I didn't see any route for the fixed (172.24.17.0/24) or floating (172.31.246.128/25) networks. For the fixed range that seems OK, but what about L3 (the floating network)? So I could not ping the external or internal network. When I booted an instance from Horizon, it showed a fixed IP assigned, and I could even allocate a floating IP, but when I checked the log file of the VM:
```shell
cloud-setup: failed to read iid from metadata. tried 30
WARN: /etc/rc3.d/S45-cloud-setup failed
Starting dropbear sshd: OK
===== cloud-final: system completely up in 51.81 seconds ====
wget: can't connect to remote host (169.254.169.254): Network is unreachable
wget: can't connect to remote host (169.254.169.254): Network is unreachable
wget: can't connect to remote host (169.254.169.254): Network is unreachable
instance-id:
public-ipv4:
local-ipv4 :
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-userdata: failed to read instance id
WARN: /etc/rc3.d/S99-cloud-userdata failed
```
and when I logged in to the VM, it could not receive an IP address.
Can anyone help me?
Hi,
How to uninstall openstack..?…
regards
vikas rao…
It's a pity you don't have a donate button! I'd without a doubt donate to this superb blog!
I guess for now I'll settle for bookmarking and adding your
RSS feed to my Google account. I look forward to new updates and will talk about this site with my Facebook group.
Talk soon!
Hi brent,
thanks for this great article, it was quite helpful. I was trying to add a new compute node to this architecture (on a new system) but am unable to do it (got stuck with the IP problem… can't connect). It would be great if you could suggest a way. The compute node installation guide I am following is https://openstack-folsom-install-guide.readthedocs.org/en/latest/#id1
thanks
Hey buddy, I am posting an updated config in the next couple of days.