Software Defined Networking (SDN) – Looking Toward 2013 – Part 2
In the second part of this two-part series, I look at some of the softer sides of networking's future and what I expect to see in the coming year. Click here for Part I.
SDN for HyperScale and Everyone Else
Today Google backhauls its inter-data center traffic using G-Scale, its own implementation of OpenFlow. While the traffic involves a fairly fixed set of flow patterns, it is so far the largest deployment of an SDN-extracted control plane to date.
OpenFlow is the most significant change to the networking game since [Hölzle] first came to Google more than a decade ago. – Urs Hölzle, Senior VP of Technical Infrastructure and Google Fellow, Wired, 13 Nov 2012.
Others, such as large carriers, are very focused on CapEx reduction. Read how the service providers have approached SDN in a white paper from IETF 85, summarized here.
For the rest of the world, which makes up the overwhelming majority of the market share, most are waiting to see what comes out the door as a polished product. There is still much confusion, among consumers and vendors alike, about how SDN deployments would occur in a hybrid fashion.
Economies-of-scale savings are just that: significantly smaller at smaller scale. Outside of hyper-scale, white-box CapEx savings matter less than more traditional needs. Vendor support, performance, and availability will top the list for hospitals, finance, manufacturing, and most service providers. Downed networks and services mean lost revenue, lost production, and lost lives. The incessant desire to reduce cost by any means requires far more discipline and process than most organizations can achieve. Before you can run a hyper-efficient business like Amazon, the aforementioned fundamental core metrics need to be five-star across the board. Walk before you run.
NetOps and DevOps
NetOps is a term being thrown around by marketing and cloudy pundits wanting to wrap the promise of SDN into a sales deck. DevOps is still in incubation, but its roots come from agile software development life cycles being applied to infrastructure provisioning. At worst, this is the combination of buzzwords like IaaS, Cloud, and SDN all being mashed together into one big fail sandwich.
Figure 2. Programmatic APIs providing hooks into primitive network elements will enable agility in service and product provisioning.
The way this life cycle will apply to networking is the blending of networking into storage and compute. The idea that a really big shop can simply fire enough people to automate everything is not the case. Operators will focus more on much-needed policy and orchestration rather than the repetitive insanity of pushing policy state to scattered systems. This same cycle happened with x86 virtualization. Even at its worst failure, SDN will at the very least provide proprietary APIs for more automation. That said, SDN is not only about automation; framing it that way is vendors looking to redefine what is needed and protect market share.
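To make the automation point concrete, here is a minimal sketch of declaring policy once against a controller's northbound API instead of pushing state device by device. The controller URL, endpoint, and payload shape are hypothetical, not any specific vendor's API:

```python
import json

# Hypothetical controller northbound endpoint (assumed, for illustration only).
CONTROLLER_URL = "http://controller.example.net:8080/policies"

def build_flow_rule(match, actions, priority=100):
    """Render an abstract policy into a JSON payload that a controller
    could fan out to every switch it manages."""
    return {"flow": {"match": match, "actions": actions, "priority": priority}}

# Declare the policy once; the controller handles distribution.
rule = build_flow_rule({"ipv4_dst": "10.0.0.0/24"}, [{"output": "port:2"}])
payload = json.dumps(rule)  # this body would be POSTed to CONTROLLER_URL
```

The shape of the workflow is the point: policy is expressed in one place and the controller handles the fan-out, which is exactly the scattered state-pushing described above.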
Developers and operators hand in hand, spreading good will all over the network. We have been hearing about silo tear-downs since the advent of the "converged fabric". The only thing that will truly bring that about is removing the human element, and we still have another few decades until the singularity.
What Does OpenStack Mean to Networking?
Some of us have held the position for a while that the direction and process happening in the OpenStack community is a prelude to what the networking industry will face at the large-scale provider layer. It is open source, shared, and forces differentiation in areas of the business that actually count, like customer service, price, and SLAs, rather than proprietary protocols, extensions, encaps, APIs, or anything else that is not directly serving the business.
What is OpenStack Quantum?
Quantum is an OpenStack project to provide "networking as a service" between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova). – OpenStack Wiki, http://wiki.openstack.org/Quantum
OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution.
OpenStack Quantum, headed by Dan Wendlandt of Nicira/VMware, is a prime example of how networking can blend into the rest of the computing world through software abstractions. Vendors are, and will continue, modeling their orchestration solutions after this blueprint. Just as the applications world has been living in an API-driven environment, so will networking begin to think along these lines. An example would be inter-domain orchestration: BGP extensions will only get us so far. With concepts like ALTO and IRS in the pipeline, AS-to-AS interactions will happen programmatically via APIs on x86 hardware, not from the tired CLI UIs we have used for 20 years.
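As a concrete illustration of that API-driven style, here is a sketch of building a "create network" call against the Quantum v2.0 REST API. The controller hostname and token are placeholders, and the function only constructs the request rather than sending it; the payload shape follows the v2.0 networks resource:

```python
import json

# Assumed endpoint; 9696 is the default Quantum API port.
QUANTUM_ENDPOINT = "http://controller:9696/v2.0"

def create_network_request(name, token):
    """Construct the pieces of an HTTP POST that would create a network
    via the Quantum v2.0 API (returned, not sent, for illustration)."""
    url = QUANTUM_ENDPOINT + "/networks"
    headers = {"Content-Type": "application/json", "X-Auth-Token": token}
    body = json.dumps({"network": {"name": name, "admin_state_up": True}})
    return url, headers, body

url, headers, body = create_network_request("web-tier", "example-token")
```

Compare that to provisioning the same segment by hand on a row of switches: the network becomes just another resource an orchestration layer can create, query, and delete.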
If the world becomes driven by open-source projects rather than standards, the OpenStack model will be the future of networking standards. I am not sure I would bet against this happening, either. As standards bodies become less agile and the end of the monolithic networking hardware era nears, horizontalization will begin taking hold. We are trending a decade behind compute. It may be worth tracking the OpenStack project as a possible prelude to networking.
Closing Thoughts
The path forward has already been paved conceptually by other areas of computing. Networking will continue to merge into computing. This shift in methodologies makes it fairly reasonable to predict where networking will head next. The outcome of the current data center "stack wars" will likely foreshadow what happens with networking.
SDN Trends I Expect In 2013:
- OpenStack becomes the de facto choice for cloud providers, with the project serving as an open-source standards body.
- Cisco UCS goes head to head with VMware. UCS touts the advantage of being hypervisor-agnostic.
- Hyper-scale networks lean toward pure commodity or even white-box switches, much as Facebook builds its own custom servers today. Switch buyers look more toward the Broadcoms of the world, with primitive APIs, rather than the OEM partners. These are organizations that save millions on a one-point delta in a PO.
- The rest of the world continues buying from traditional vendors, leveraging finished SDN products rather than rolling their own applications against exposed APIs.
- Network management and automation are marketed under the guise of SDN.
- Incumbents turn to standards bodies, while disrupters turn to open source.
For more information regarding the Open Networking Foundation (ONF) Forwarding Abstraction Working Group (FAWG), Curt Beckmann, the chair of the working group, recently did an update for the community. I will update this post with a link as soon as it posts.
Epilogue
As I was finishing this post, I was chatting with my friend and fellow blogger Anthony Burke. A mutual friend of ours relayed a bad day he was having.
- Friend: Someone fat-fingered a core router.
- Friend: Then rolled back.
- Friend: Ops rolled back with an old config.
- Friend: Now everything's f*****.
We both empathized with him, because this is the peril of our beloved industry's technology today. While some proclaim that distribution is the only path forward, current mechanisms are quickly being outpaced by growth. Process alone cannot scale and operate networks.
This is a product of human involvement without proper programmatic abstractions, and it has happened to everyone in networking dozens if not hundreds of times. Human error does not discriminate; even the most disciplined in process, like Amazon and Azure, are vulnerable.
We have no central programmatic control over the network today outside of cobbled-together homegrown applications. There is so much money to be made in the application layer that most vendors took the wait-and-see approach in 2010–2011 and began scrambling in 2012 with resources and acquisitions.
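The rollback anecdote above is exactly the kind of failure a small programmatic guard could catch. Here is a hypothetical helper, not from any real tool, sketching a sanity check that refuses a stale or no-op config before it ever hits the box:

```python
import hashlib
import time

def safe_to_apply(candidate, running, max_age_seconds=3600):
    """Guard against the 'rolled back with an old config' failure.
    candidate/running are dicts with a 'text' key; candidate also
    carries 'saved_at', a Unix timestamp of when it was saved."""
    if candidate["saved_at"] < time.time() - max_age_seconds:
        return False, "candidate config is stale"
    same = (hashlib.sha256(candidate["text"].encode()).hexdigest()
            == hashlib.sha256(running["text"].encode()).hexdigest())
    if same:
        return False, "candidate is identical to the running config"
    return True, "ok"
```

Trivial as it is, a check like this is a programmatic abstraction: the decision lives in code that runs the same way every time, instead of in a tired operator at 2 a.m.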
Thanks for stopping by.
At the most recent OpenStack summit, Somik Behera presented how Nicira/VMware used Quantum to reduce their QA test cycle times via automation – a great use case for OpenStack.
Thanks Umair, I will be sure to check the presentation out. Thanks for sharing.
Hey Brent, great piece as usual. Curious what your thoughts are about the Gale and Cloupia acquisitions. I have to say it caught me a bit off guard, but as soon as I heard it, it made immediate sense to me. I just can't believe how fast things are moving; it definitely changes the game … I guess I have always anticipated that we would be really focused on a converged infrastructure set of APIs, but with pretty much all of the North American infrastructure suppliers now having a complete infrastructure virtualization/automation platform, it definitely changes the dynamics around SDN development pretty significantly imho … I think it's definitely a net positive for the industry, but curious about your thoughts.
Hi Art, Thanks for the comment! I think your piece captured the transient nature of 2012 very nicely.
In case anyone missed it I recommend Art’s feed be bumped up to the top of the list here.
The hyper-scale organizations have pioneered the way for the rest of us to begin reaping the benefits of such a competitive environment. The openness and contribution to "community" from vendors will extend only as far as it doesn't compete with their own revenue generation. Obviously pointing out the obvious, but how many developers do you think Microsoft has pushing code upstream into OpenStack? If it's more than one hand's worth, I would be shocked.
The enterprise applications are so broad and diverse. I think, without a doubt, the value-adds will be these rolled, supported orchestration engines. DevOps only exists at scale, and shrinking enterprise budgets are not those environments. Until someone figures out how to deliver IaaS for free (which I personally think will happen at some point), VM farms stay local while SaaS continues to pare down the application portfolio where the ROI is real.
We are humans; we build complexity for the sake of complexity. Abstraction in infrastructure means it's harder to troubleshoot when it breaks 🙂 Couple that with exponential growth and hitting the wall on Moore's Law, and we will all pay the mortgage until robots replace us. Then we get the cleaning-lady robot from the Jetsons to grow our food and feed blueprints into our 3D printer, because gas is too expensive to drive to Wal-Mart. I am not feeling very progressive today, I don't think, lol.
Cya pal!
-Brent
Brent: thank you very much for putting so much information in one place!
One quick question. When you were talking about the motivation for moving to an SDN network, you (of course) mentioned cost savings. However, you skipped over the quicker convergence that you can experience. In the world of finance especially, this is a critical factor. Did you forget about this, or do you take exception with it?
Hi Dr. Anderson,
I tend to gloss over the financials as implicit these days. Off-the-shelf silicon is what got us here, no doubt. I am tracking the service provider market pretty closely to see what comes out of it, as that has been their number-one focus (Stuart Elby of Verizon speaks to this a lot). I think we will have more efficiency, but not necessarily the oversubscription gains we saw from x86 virtualization. Network links must still be physically distributed, so if there is no demand to oversubscribe on the other end, you still have to build it. NSPs oversubscribe anything and everything today. I am guessing the same people that buy x86 servers from unknown Asian companies will likely do similar things with networking down the road.
Horizontalization will get us commodities that force smaller margins, but we still have to get there, and that is a long funnel. Software licensing may offset some of the profit lost on hardware.
Do you see the applications ecosystem growing to the point that 3rd party apps are the norm?
Thanks for kind remarks and feedback,
-Brent