Public Cloud and Network as a Service

The first part of this series covered building the Public Cloud node. This second part is a primitive blueprint for how to build a hybrid private/public network and for what Network as a Service could mean. Without reliable national broadband and backbone networks, both provider and customer will be taking serious risks. I will ask one question and move on: what incentive does a carrier have to improve its infrastructure if it is the incumbent with no competition? It is puzzling that we get so enthralled in the minute details of technology while something as broken as the backbone substrate sits right before our eyes. It isn't waiting on any technological breakthroughs, only vision and leadership.

So WtF is Hybrid Cloud Networking?
It is a fancy name for GRE and VXLan tunnels pinned from a local cloud stack to a public cloud stack. Voila!
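As a concrete sketch, with Open vSwitch the whole trick reduces to a couple of tunnel ports. The bridge name and remote IP below are made up for illustration, and the vxlan interface type assumes a VXLan-capable OVS build like the fork used later in this post:

```shell
# Create a tunnel bridge on the local cloud stack (names/IPs are hypothetical)
ovs-vsctl add-br br-tun

# GRE tunnel pinned to the public cloud instance's address
ovs-vsctl add-port br-tun gre0 -- set interface gre0 \
    type=gre options:remote_ip=203.0.113.10

# VXLan tunnel to the same endpoint (requires a VXLan-capable OVS build)
ovs-vsctl add-port br-tun vx0 -- set interface vx0 \
    type=vxlan options:remote_ip=203.0.113.10
```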

Grandpa's Network: What has Changed?

There has been a growing swell in the networking industry, coined Software Defined Networking (SDN), that has culminated in a full-fledged evolution. One of the primary drivers of change in the networking industry has come from commodity hardware. Researchers at Stanford began working to decouple the network operating system and control plane. As soon as Google bought in, things got serious. As a result, merchant silicon manufacturers have supposedly begun to alter their "off the shelf" silicon to facilitate this exploration. Still today, the topic of decoupling the control and management planes into some form of centralization is enough to start a nerd-rage brawl.

The second driver is the need for much more abstraction and scale in hyper-scale data centers. Public cloud providers need to better align the network with the automation and orchestration ecosystems in the data center. Folks like Amazon have made it clear they expect vendors to pursue this support or risk some fairly large accounts. Data center orchestration is merely a subset of network management and operational efficiency, which has been nothing but a total joke in the networking industry to date. So here is a rough cut at one option, and some research someone could use, if their CEO walked in tomorrow, said he was all in for the public cloud, and you had to deliver something other than laughter or sarcasm. Well, maybe both.

Tunnels in the Data Center

The early productization that has come from the same Stanford folks, like McKeown (Nicira), Shenker (Nicira), Casado (Nicira), Erickson (Beacon, the origin of Floodlight) and Appenzeller (BigSwitch), all relies heavily on tunneling encapsulations/overlays. What there seems to be a demand for is the need to unglue compute and storage from physical networking limitations. Unfortunately this means sprawling Layer 2 adjacencies everywhere to facilitate things like live workload migrations, and of course don't forget poorly written applications that communicate through broadcasts. Spanning tree can't hack it and hasn't been able to for the last decade. I plugged a random vendor switch into a data center the other day and probably triple-checked myself to make sure the two vendors were actually speaking the same language and a bridging loop wasn't going to melt down that data center.

These new ideas are all grounded in traditional networking, which makes adoption much more realistic. Our bread and butter today is overlays and tunnels, vis-a-vis MPLS or less scalable technologies. MPLS is a prime example of trying to solve the problem at the edge, and it is on my top-3 list of the most important technologies in the history of networks. That said, I think we can deliver better virtualization via software abstraction. That debate is for another day.

Tunnels to Cloud Providers

Is someone going to try and do a live workload migration (vMotion) to a cloud provider? They may try, but the weak speeds of our carrier networks are nowhere near being able to replicate memory across the Internet reliably. What I think would be important to many people is to provision or extend their network into a cloud provider's network and VMs. We have been doing this for many years at a huge price tag in leased lines, local and remote networking hardware, and exorbitant colocation prices.

Early adoption will no doubt freak out many people, especially those in the practice of security, so one way to maintain those policy controls will be to extend the particular security zone into the cloud through a tunnel. While there are still security problems with this, it is a significantly lower price tag to get into the cloud, thanks to the cost avoidance of not duplicating services like IDS, firewall, flow analysis, and all the other weird appliances we have come to rely on and break out of networking boxes for purpose-built operations, which are pretty much just x86 processors on a board anymore anyway.

A Dash of Fear

The OpenStack framework is significantly more flexible than the unyielding rigors of the Amazon AWS/EC2 network architecture, but that flexibility can also present security challenges: you are on a broadcast domain with other tenants. I have never been one to get overly concerned with security, since there are hordes of security folks for that already, but there is quite a bit of personal responsibility that comes with freedom and mixed tenancy like that. It will be interesting to see what tricks the providers use to virtualize flat networks until more mature DC ~SDN products become mainstream. VLANs would not even come close to solving that problem due to ID limitations. For all I know they may be doing some of that today, but from what I observed, ICMP is allowed on the flat backend network to other hosts.
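The ID-limitation point is simple arithmetic: the 802.1Q VLAN ID field is 12 bits, while the VXLan VNI field is 24 bits:

```shell
# 802.1Q carries a 12-bit VLAN ID; VXLan carries a 24-bit VNI
echo "VLAN IDs:   $((2**12))"   # 4096 segments
echo "VXLan VNIs: $((2**24))"   # 16777216 segments
```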

Still Waiting for New Silicon

Silicon is developed in foundries. Fab and fabless engineered chips take years. Q1-Q2 of 2013 will bring announcements and more flexibility than what can be done on today's hardware, which is optimized for distribution. We will start seeing a combination of hypervisor tunneling over any unbundled L2/L3 network from the likes of BigSwitch, IBM DOVE, and the one that started it all, VMware (Nicira). By the way, we already do overlays/encaps today. The future is encaps, for simplicity and packet preservation. Tunneled networks depend on the native network; one drawback is the lack of operational visibility into the substrate. I would expect ASIC/chipset manufacturers to begin adding support for these overlay encapsulations in their data center gear in the next few cycles. Usable networking hardware is too far off to even fathom at this point. The vSwitch is the early, and only foreseeable, SDN win to really change networking. Hardware cycles will get there, but I think it will take an Intel to change it. They are the only ones I see really disrupting hardware, other than margin deflation.

It is just the beginning of seeing who can embrace change and who isn't willing to work themselves out of one opportunity and into a greater one. Don't let vendors explain to you why your needs as a customer are not reasonable; they don't do that to Google, Facebook, Amazon, etc. When was the last time you met a field engineer more up to speed on the industry than you? Not often. The feedback loops these product managers get themselves into by convincing everyone their way is the only way are painful to witness. There is way too much complexity in anything related to technology to walk around sporting arrogance and ego. If you are that smart, you should be working on a much longer-term problem.

Network as a Service (NaaS)

Is this Network as a Service? I am not interested in the semantics, but I guess this would be my closest stab at it today, with the primitive tools we have to work with for even basic management. The provider today can provision some, but very little, networking service. We can get some public IP addresses out of a dynamic pool, and we get self-provisioned security policy. The services will mature and become much more robust as the APIs mature and more flexibility is demanded by large-scale customers.

The Testbed in Mom’s Basement

There are three nodes in the Private Cloud and one node in the HP Cloud. The transport between the Private and Public Clouds is the Internet. There are two 1Gb connections and one 10Gb connection out of the private cloud site into the Internet. The AS path to the HP Cloud node was asymmetric, to what appears to be a site in Las Vegas (az-3.region-a.geo-1) sourcing from Level 3. Each node has two tunnels, one VXLan encapsulated and one GRE encapsulated. Each tunnel endpoint terminates only at the node in the center, which acts as a hub with spokes. That is enough to get end-to-end connectivity between all spokes. Compiled on each node is Open vSwitch, using a Cisco fork that supports VXLan encap. Detailed instructions for that build can be found here.

For a quick build and install of Open vSwitch this will work for an Ubuntu 12.04 install.

apt-get install -y git python-simplejson python-qt4 python-twisted-conch automake autoconf gcc uml-utilities libtool build-essential pkg-config linux-headers-$(uname -r)
git clone https://github.com/mestery/ovs-vxlan.git
cd ovs-vxlan
git checkout vxlan
./boot.sh
./configure --with-linux=/lib/modules/$(uname -r)/build
make && make install
insmod datapath/linux/openvswitch.ko
insmod datapath/linux/brcompat.ko
touch /usr/local/etc/ovs-vswitchd.conf
mkdir -p /usr/local/etc/openvswitch
ovsdb-tool create /usr/local/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema
ovsdb-server /usr/local/etc/openvswitch/conf.db \
--remote=punix:/usr/local/var/run/openvswitch/db.sock \
--remote=db:Open_vSwitch,manager_options \
--private-key=db:SSL,private_key \
--certificate=db:SSL,certificate \
--bootstrap-ca-cert=db:SSL,ca_cert --pidfile --detach --log-file
ovs-vsctl --no-wait init
ovs-vswitchd --pidfile --detach
ovs-vsctl show

Baseline Bandwidth

Here is output from the Private Cloud site of the bandwidth pulling an image from Argonne National Labs. The HP pull was much less than 20 MBytes/sec, but I didn't think that a fair comparison since I was picking up Argonne over 10Gb on Internet2.

wget http://mirror.anl.gov/pub/ubuntu-iso/CDs-Ubuntu/12.04/ubuntu-12.04-server-amd64.iso
--2012-08-20 03:21:03--  http://mirror.anl.gov/pub/ubuntu-iso/CDs-Ubuntu/12.04/ubuntu-12.04-server-amd64.iso
Resolving mirror.anl.gov (mirror.anl.gov)... 146.137.96.7, 2620:0:dc0:1800:214:4fff:fe7d:1b9
Connecting to mirror.anl.gov (mirror.anl.gov)|146.137.96.7|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 717533184 (684M) [application/octet-stream]
Saving to: 'ubuntu-12.04-server-amd64.iso'
100%[================================================>] 717,533,184 35.7M/s   in 26s
2012-08-20 03:21:29 (26.5 MB/s) - 'ubuntu-12.04-server-amd64.iso' saved [717533184/717533184]


Figure 1. Three OpenStack local nodes in the Private Cloud build a tunnel through one box to the HP Cloud instance. Isn't it great: we use the cloud to denote the unknown, yet we speculate on a big unknown cloud as the future. I'm OK with that, though. It is much easier to be an optimist and get disappointed than to hate everything.

Figure 2. This can scale up and out if properly orchestrated. 

Figure 3. In the OpenStack Dashboards on both the Private and Public Clouds, open UDP 8472 to allow the VXLan packets to traverse, carrying the encapsulated payload of source/destination MAC/IP/transport headers from one VM to another.
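If the instances also run a host firewall, the same port has to be opened there. A sketch with iptables, assuming 8472 is the UDP port this OVS fork uses for VXLan:

```shell
# Allow inbound VXLan traffic on the host firewall (sketch; adjust to your chain layout)
iptables -A INPUT -p udp --dport 8472 -j ACCEPT
```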

Figure 4. The Hub Open vSwitch configuration. Each tunnel endpoint is associated with a bridge with its own unique Datapath ID (DPID).
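A minimal sketch of what the hub configuration in Figure 4 might look like; the bridge names, DPIDs, and spoke addresses here are invented for illustration:

```shell
# One bridge per tunnel endpoint, each pinned with its own Datapath ID
ovs-vsctl add-br br1
ovs-vsctl set bridge br1 other-config:datapath-id=0000000000000001
ovs-vsctl add-port br1 vx-spoke1 -- set interface vx-spoke1 \
    type=vxlan options:remote_ip=192.0.2.11

ovs-vsctl add-br br2
ovs-vsctl set bridge br2 other-config:datapath-id=0000000000000002
ovs-vsctl add-port br2 gre-spoke2 -- set interface gre-spoke2 \
    type=gre options:remote_ip=192.0.2.12
```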

Figure 5. Baseline Bandwidth from the Private to Public Cloud over the Internet un-encapsulated. Average 29Mbytes/sec.

Figure 6. Iperf output of a spoke to the Public Cloud instance. Average bandwidth of 18Mbytes/sec.
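Measurements like the one in Figure 6 can be reproduced with plain iperf; the tunnel-side address below is hypothetical:

```shell
# On the Public Cloud spoke (server side)
iperf -s

# On a Private Cloud spoke (client side), targeting the cloud VM's
# tunnel-side address, running for 30 seconds with 5-second interval reports
iperf -c 10.0.0.100 -t 30 -i 5
```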

Figure 7. Baseline bandwidth from spoke to spoke un-encapsulated internal to the Private Cloud and traversing a 10Gb top of rack switch w/ an MTU of 9000.

Figure 8. Baseline bandwidth from spoke to spoke with a GRE encapsulated tunnel internal to the Private Cloud and traversing a 10Gb top of rack switch w/ an MTU of 9000.

Figure 9. Baseline bandwidth from spoke to spoke with a VXLan encapsulated tunnel internal to the Private Cloud and traversing a 10Gb top of rack switch w/ an MTU of 9000.

Figure 10. Private Spoke to Cloud Spoke w/GRE encapsulation.

Figure 11. Private Spoke to Cloud Spoke w/VXLan encapsulation.

Figure 12. All Private Spokes to Cloud Spoke simultaneously w/GRE encapsulation.

Figure 13. All Private Spokes to Cloud Spoke simultaneously w/VXLan encapsulation.
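The 9000-byte MTU in the spoke-to-spoke tests matters because each encapsulation adds fixed header overhead that the underlay has to carry. Assuming the usual header sizes (not measured here), the arithmetic looks like this:

```shell
# Per-packet overhead with standard header sizes:
# VXLan: outer Ethernet (14) + outer IP (20) + UDP (8) + VXLan (8) = 50 bytes
# GRE:   outer IP (20) + GRE header (4)                            = 24 bytes
inner_mtu=9000
vxlan=$((14 + 20 + 8 + 8))
gre=$((20 + 4))
echo "Underlay MTU needed for VXLan: $((inner_mtu + vxlan))"
echo "Underlay MTU needed for GRE:   $((inner_mtu + gre))"
```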


That's all for now. I need to proof these posts, but time is lacking and I have day-job work to get done before bed at 4am. The research data is more interesting than my blathering anyway. Thanks for stopping by.

About the Author

Brent Salisbury works as a Network Architect, CCIE #11972. He blogs at NetworkStatic.net with a focus on disruptive technologies and operational efficiencies. Brent can be reached on Twitter @NetworkStatic.

  1. Ivan Pepelnjak 08-21-2012


    There is a "slight" problem with live vMotion into the cloud: the data (at least the virtual disk) has to be moved there as well, not to mention the 10 msec RTT requirement.

  2. Brent Salisbury 08-21-2012


    Thanks for pointing that out, Ivan. The shared storage between sites is a much bigger than slight problem; you were being kind :-) Does vMotion test the RTT value prior to attempting a move?

    I can't think of any "good" reason to extend an L2 domain into a public cloud other than for policy de-dup. Maybe a heartbeat of some type if doing clustering, but you would need some type of geo/global DNS balancing.

    That should be a relief. L3 load balancing of various types, geo DNS, or just downtime seem to be the only solutions that I know of.

    Has anyone ever dug into how Amazon does HA? I saw this the other day and it looks like Amazon is still missing some HA pieces.
    https://forums.aws.amazon.com/thread.jspa?messageID=292839
    There is some nerd rage in there too.
