OpenFlow: SDN Hybrid Deployment Strategies

This series of posts has focused on reviewing practical OpenFlow SDN deployment strategies. Early SDN will consist of hybrid networks and niche applications, as operators begin learning how to integrate pockets of SDN, mitigate risk and, most importantly, understand the technology well enough to scale solutions properly. If you try to put 5,000,000 access-lists into a box that can only hold 2,000 rules, does that mean access-lists do not scale? Or does it mean bad designs do not scale? The risk of network instability and the pressure of constant availability have networking trapped in a less than ideal situation. Absorbing the oncoming onslaught of growth on the horizon, along with the uptake in cloud adoption, will be hit and miss at best.

3 Key OpenFlow Hurdles
  1. Performance – Pre-Populating Flows: Pre-populate the flow tables at boot or at policy creation, proactively, rather than reacting to packet-in events. The only thing centralized at that point is the management engine, which defines and pushes policy and topology awareness to the whole network. More on that topic in OpenFlow: Proactive vs. Reactive.
  2. Scale – Flow Rule Allocations: Early OpenFlow-enabled hardware is limited to roughly 750-3,000 wildcard flow rules, depending on the vendor. Some vendors, such as NEC, use external TCAM that supports tens of thousands of rules. Early designs need to take that into account and be judicious in the mix of coarse vs. fine flows chosen to pre-populate the flow tables. Peel off niche applications or selective traffic with granular flows and drain the rest of the traffic with broad flows; this leaves more room for interesting traffic. More on this topic in OpenFlow: Coarse vs. Fine Flows.
  3. Risk – Hybrid Architectures: A ships-in-the-night strategy is a straightforward approach to new networking architectures. Hybrid features in the OpenFlow agent on the switch are critical to maintaining logical path and process isolation between legacy networks and OpenFlow-enabled VLANs, bridges and interfaces. Hybrid path isolation is well understood and documented, and that may be enough risk reduction for many to introduce early SDN applications or pockets of exploratory test/dev. The rest of this article focuses on the hybrid component.
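As a concrete sketch of the proactive, coarse-vs.-fine approach above, here is what pre-populating a flow table might look like with Open vSwitch's `ovs-ofctl`. The bridge name, ports and addresses are all hypothetical, not a prescription:

```shell
# Hypothetical sketch: pre-populate flows on an OVS bridge named br0.
# A few fine-grained (high-priority) rules steer interesting traffic,
# while one coarse low-priority rule drains everything else, keeping
# flow-table consumption predictable instead of relying on packet-in events.

# Fine flow: steer HTTP traffic from a test subnet out port 2 (preferred path).
ovs-ofctl add-flow br0 "priority=200,ip,nw_src=10.0.10.0/24,tcp,tp_dst=80,actions=output:2"

# Fine flow: steer a niche application's traffic out port 3.
ovs-ofctl add-flow br0 "priority=200,ip,nw_dst=10.0.20.5,actions=output:3"

# Coarse default: everything else follows the broad pre-populated path.
ovs-ofctl add-flow br0 "priority=0,actions=output:1"

# Verify what landed in the table.
ovs-ofctl dump-flows br0
```

The point of the priority split is exactly the budget problem described above: the fine rules consume scarce wildcard entries only for interesting traffic, and the single coarse rule carries the bulk.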

Mitigate Risk and Innovate

Hybrid deployments are intrinsic to reducing risk, and hybrid networks are not a new concept. Hybrid techniques should be well understood after years of consolidating traditionally parallel network services into converged architectures: we slowly introduced services like VoIP and wireless, and deployed virtualization overlays with MPLS. New SDN deployments, whether tunneled overlays or logical isolation, will be the important first steps of innovative progress.

Hybrid SDN Networks: Keys to Unlocking Progress

The most important first step toward SDN-enabled networks is helping colleagues, leadership and the community understand that almost all of the risk can be mitigated by engineering hybrid networks that run native forwarding alongside SDN-enabled forwarding. The residual risk lies in the vendor firmware on the SDN-enabled switch; since most agents are ported from Open vSwitch, that should offer some stability. All early OpenFlow-enabled gear shares forwarding hardware between the native and SDN pipelines, so any planning needs to incorporate existing configurations, particularly QoS and ACLs.
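One small, concrete risk-mitigation knob worth knowing about, sketched here with Open vSwitch (bridge name and controller address are hypothetical): the bridge's fail mode controls what happens to forwarding if the controller connection is lost.

```shell
# Hypothetical OVS sketch: choose controller-failure behavior per bridge.
# "standalone" falls back to normal L2 learning-switch behavior if the
# controller is unreachable; "secure" keeps only the installed flows
# and adds nothing on its own (no fallback).
ovs-vsctl set-controller br0 tcp:192.0.2.10:6633
ovs-vsctl set-fail-mode br0 standalone   # fail open: keep forwarding traffic
# ovs-vsctl set-fail-mode br0 secure     # fail closed: drop unmatched traffic
```

For early hybrid pockets, failing open keeps an experiment from taking hosts down with it; failing closed is the right choice once the SDN policy itself is the security boundary.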

8 Hybrid Strategy Considerations

Figure 1. An example of a simple topology from a post last year on hybrid SDN.

Think in modules; they are the building blocks of scale. The following eight points may help develop a modular strategy for integrating SDN into your environment.

  1. Hybrid Physical Isolation: Parallel networks make little sense outside of test/dev, unless they are part of a hardware lifecycle strategy. If you are replacing some distribution switches, it may make sense to do a phased physical deployment; many MPLS migrations followed this path.
  2. Hybrid Logical Isolation: Isolate at the bridge/VLAN. It is as simple as having one VLAN for legacy packet forwarding and another VLAN attached to an OpenFlow controller. Any vendor that does not offer this option should be avoided unless you are well ahead of the power curve; almost all vendors can operate with a legacy VLAN and an OpenFlow VLAN side by side.
  3. Self-Provisioning Multi-Tenancy: The switch can point at multiple controllers for early multi-tenancy. If you have one physical topology and want multiple operators, this can be achieved by simply creating two VLANs or bridges: attach VLAN A to controller A and VLAN B to controller B.
  4. Self-Provisioning Multi-Tenancy (Alt): The alternative multi-tenant option is logical “slicing” within a single controller (rather than two separate ones) to allocate tenancies or “slices”. Think of these as VRFs: the advantages are similar, such as overlapping IPv4 address space, except slices can eventually overlap on any tuple. Being the arbiter of policy and permissions on a single switch is not a trivial task, much less moving up the stack into controller federations. Controller maturity will eventually lead to consolidated self-provisioning, but I don’t expect that to be solved anytime soon. Inter-AS forwarding table exposure would likely be horrific; software APIs make much more sense outside of administrative boundaries.
  5. Native and SDN Network Redistribution: This has been a salient topic over the past couple of years. Today we focus on packet routing, while SDN tends to revolve around application-aware flow forwarding, so how do the two intersect? The answer is: the same way they always have. Controllers will either use static routes that are transformed into flows to drain traffic out of pockets of SDN, or run protocols to peer with the legacy network’s IGP (e.g. OSPF, RIP). Redistribution from SDN islands will merely push the preferred routes from RIBs into forwarding tables, almost identically to how FIBs are produced today.
  6. Protecting the Native Network: For early testing, some vendors are including software and hardware rate-limiting features. These options can protect the native network and act as a governor until a design is fully understood and vetted. Open vSwitch running on hardware also offers QoS mechanisms that can limit traffic on a bridge.
  7. Full SDN Integration Using Proactive Flows: Rather than logically isolating the OpenFlow and native pipelines in a ships-in-the-night manner, put all hosts into an OpenFlow VLAN/bridge. You can then use granular matches for something like steering interesting traffic to a preferred path, and let the rest of the traffic match coarse, pre-populated proactive default flows. I suspect Google did something similar in their OpenFlow deployment for data center backhaul, but I have not been privy to any insight.
  8. Full SDN Integration Using OFPP_NORMAL: The long-term goal may be to fully integrate the SDN and native pipelines. A flexible handoff between pipelines would allow matching in the OpenFlow pipeline and then passing a flow to the non-OpenFlow pipeline for forwarding, or vice versa. The Brocade MLX is a good example of this pipeline processing. The OpenFlow NORMAL and LOCAL reserved ports are described in more detail in the next two sections. These features should be key deliverables for vendors wanting to lead in the OpenFlow space.
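Points 2 and 3 above can be sketched with Open vSwitch: two bridges carrying different VLANs, one left to legacy forwarding and the others each handed to their own controller for simple multi-tenancy. Bridge names, interfaces, VLAN tags and controller addresses are all hypothetical:

```shell
# Hypothetical OVS sketch of ships-in-the-night logical isolation.
# br-legacy carries the existing production VLAN and never sees a controller;
# br-sdn is attached to an OpenFlow controller for exploratory traffic.
ovs-vsctl add-br br-legacy
ovs-vsctl add-port br-legacy eth1 tag=100     # legacy VLAN 100, native forwarding

ovs-vsctl add-br br-sdn
ovs-vsctl add-port br-sdn eth2 tag=200        # SDN VLAN 200
ovs-vsctl set-controller br-sdn tcp:192.0.2.10:6633

# Early multi-tenancy (point 3): give a second tenant bridge its own controller.
ovs-vsctl add-br br-tenant-b
ovs-vsctl add-port br-tenant-b eth3 tag=300
ovs-vsctl set-controller br-tenant-b tcp:192.0.2.11:6633
```

A controller failure or a bad flow push on br-sdn or br-tenant-b leaves br-legacy untouched, which is the whole point of the ships-in-the-night model.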

OpenFlow Normal Actions

These are critical actions. As multi-table pipelines solidify out of the ONF, and particularly the Forwarding Abstraction Working Group (FAWG), in the coming weeks, vendors will have good articulation of how to properly implement OpenFlow pipelines going forward. Any vendor not integrating these components will be limiting customer flexibility. These are key concepts for RFP/RFI consideration around future hardware.

Two optional OpenFlow hybrid capabilities, per spec v1.3, govern the interactions between the OF and non-OF pipelines. These features are critically important to phased migrations and early use cases.

“OFPP_LOCAL: Represents the switch’s local networking stack and its management stack. Can be used as an ingress port or as an output port. The local port enables remote entities to interact with the switch and its network services via the OpenFlow network, rather than via a separate control network. With a suitable set of default flow entries it can be used to implement an in-band controller connection.

“OFPP_NORMAL: Represents the traditional non-OpenFlow pipeline of the switch (see 5.1). Can be used only as an output port and processes the packet using the normal pipeline. If the switch cannot forward packets from the OpenFlow pipeline to the normal pipeline, it must indicate that it does not support this action.” – OpenFlow Spec v1.3

“OpenFlow and Non-OpenFlow Pipelines

OpenFlow-compliant switches come in two types: OpenFlow-only, and OpenFlow-hybrid. OpenFlow-only switches support only OpenFlow operation, in those switches all packets are processed by the OpenFlow pipeline, and can not be processed otherwise. OpenFlow-hybrid switches support both OpenFlow operation and normal Ethernet switching operation, i.e. traditional L2 Ethernet switching, VLAN isolation, L3 routing (IPv4 routing, IPv6 routing…), ACL and QoS processing. Those switches should provide a classification mechanism outside of OpenFlow that routes traffic to either the OpenFlow pipeline or the normal pipeline. For example, a switch may use the VLAN tag or input port of the packet to decide whether to process the packet using one pipeline or the other, or it may direct all packets to the OpenFlow pipeline. This classification mechanism is outside the scope of this specification. An OpenFlow-hybrid switch may also allow a packet to go from the OpenFlow pipeline to the normal pipeline through the NORMAL and FLOOD reserved ports.” – OpenFlow Spec v1.3
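On a hybrid switch, the NORMAL reserved port makes it easy to punt unmatched traffic back to the traditional pipeline. A minimal sketch with `ovs-ofctl` (bridge name, port and subnet are hypothetical):

```shell
# Hypothetical sketch: match interesting traffic in the OpenFlow pipeline,
# and hand everything else to the traditional (non-OpenFlow) pipeline
# via the NORMAL reserved port, per the hybrid model quoted above.
ovs-ofctl add-flow br0 "priority=100,ip,nw_dst=10.0.30.0/24,actions=output:4"
ovs-ofctl add-flow br0 "priority=0,actions=NORMAL"
```

This is the strategy from point 8 earlier: the OF pipeline sees everything first, acts only on the flows you care about, and the catch-all NORMAL rule preserves legacy L2/L3 behavior for the rest.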


Focusing on performance, scale and hybrid integration may be a good place to start for those looking to get early testbeds or use cases going. Approaching your vendor and requesting OpenFlow agents, or in the case of some early leaders simply downloading the OpenFlow agent firmware, is an easy start that doesn’t require any purchases. Very few vendors remain without public or beta OpenFlow code, and controllers are available in alpha and beta programs from networking vendors. Tell your field account teams what you want and hold them accountable for delivering. For those without hardware, I will link below to some software-only SDN labs.

OpenFlow: Design Considerations Series
Part 1 OpenFlow: Proactive vs. Reactive
Part 2 OpenFlow: Coarse vs. Fine Flows
Part 3 OpenFlow: Hybrid Deployments
Part 4 OpenFlow: SDN Looking Ahead (Pending)
Additional Resources

Thanks for stopping by!

About the Author

Brent Salisbury has worked on both the enterprise and vendor sides. In 2014 Brent left Red Hat to co-found Socketplane, a startup focused on reliable, scalable and performant Docker networking. In 2015, Docker Inc. acquired Socketplane. Now at Docker, he is part of an engineering team building community and working to make the Docker networking user experience as satisfying as the rest of an amazing project that is fundamentally changing the infrastructure market as fast as anything the industry has experienced since the microprocessor. View all posts by Brent Salisbury →

  1. Excellent discussion on hybrid networking.

    I would add another alternative that we used to call “controller-centric hybrid networking”, where the legacy control and routing stacks are executed in the controller/application domain.

    In this paper we have described more on this transition path and the ideas and opportunities behind Software Defined IP Routing:

  2. Brent Salisbury 01-27-2013

    Thanks Christian, appreciate the link to the white paper. Keep those coming, they are very important.


  3. Sam Crooks 02-02-2013

    I personally can’t understand why no one is investigating interesting things like:

    – use of a memory hierarchy similar to the cache hierarchy used in CPUs to prioritize flows to be held in fast local flow storage, while holding less frequently used flow rules locally in still fast, but not blazing fast flow storage structures, and with all the benefits of local preloaded proactive flow loading and not having the latency of punting to the controller… what am I missing? TCAM –> SRAM –> DRAM –> flash disk –> network controller?

    – use of interesting switches like the Arista 7124FX with an onboard FPGA + SDN for use as an advanced SDN edge node, to look at more interesting use cases like application load balancing, flow-rules-based encryption, or stateful security/IPS distributed natively into the network.

    • Brent Salisbury 02-03-2013

      Hi Sam, I think you are right on target. I personally think that some of the products out in the next quarter will allow for much more flexibility on the hardware edge. Embedded systems via SoCs or anything else will start debunking some of the mythology around scale for some more aggressive SDN use cases, particularly around selective reactive forwarding with <1ms RTT from DP to CP.

      Like where you are going with it, look forward to hearing more.