OpenFlow: Coarse vs. Fine Flows

OpenFlow: Coarse vs. Fine Flows – This series of posts aims to shed light on some of the illogical arguments surrounding the SDN discussion. I will highlight what I propose will achieve performance and scale when implementing flow-based forwarding designs. Soon these debates will no longer be conceptual or limited to deployments in hyper-scale networks, as more of us begin real-life implementations. The last post, OpenFlow: Proactive vs Reactive Flows, was an attempt to dispel the fallacy that OpenFlow performance is weak compared to today's flood, learn, route, and filter methods. Pre-populating flows proactively via OpenFlow into a switch's flow table eliminates the need to punt the first packet of each new flow to a controller; packet forwarding then takes place at wire speed with L2-L4 policy implemented. The second misconception is that flow-based forwarding does not scale. Bad designs are what do not scale. In this article, we will examine the second factor in finding performance and scale in early OpenFlow SDN: coarse vs. fine flow scaling.

OpenFlow Matching

OpenFlow allows an off-board x86 server to open a TCP connection to the switch and exploit its TCAM, populating flow table rules for line-rate forwarding. Just as today's routers and switches support limited numbers of access lists and QoS policies, the same applies to the number of flow table entries available. This is especially true with current OpenFlow-enabled hardware, since it is being retrofitted to exploit the TCAM traditionally used for QoS and access lists.

The matching of fields in OpenFlow can either be an explicit match or a wildcard match. A wildcard match means the switch does not care what the value is in the specified field. An explicit match is a binary match: it matches or it doesn't.
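To make that semantic concrete, here is a minimal Python sketch of the match logic. It is a toy model, not any controller's API: each field in a flow entry is either an explicit value or None for a wildcard.

```python
# Toy model of OpenFlow matching semantics (illustrative only, not a real
# controller API): None in a flow entry means "don't care" (wildcard),
# anything else must equal the packet's header value exactly.

def matches(flow_entry, packet_fields):
    """True if every non-wildcard field in the entry equals the packet's value."""
    return all(
        expected is None or packet_fields.get(field) == expected
        for field, expected in flow_entry.items()
    )

packet = {"eth_type": 0x0800, "ipv4_dst": "10.1.1.7", "tcp_dst": 80}

explicit = {"eth_type": 0x0800, "ipv4_dst": "10.1.1.7", "tcp_dst": 80}
wildcarded = {"eth_type": 0x0800, "ipv4_dst": None, "tcp_dst": None}

print(matches(explicit, packet))    # True: every field matches exactly
print(matches(wildcarded, packet))  # True: wildcards accept any value
```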

Eye on the Chipset Foundries

Foundry chipset fabrication is a funnel measured in years, not months. 2013 will bring more capable hardware and more agreement on how flow table abstraction should occur in silicon, leading to larger and more flexible flow tables in hardware. Until then, when choosing proactive flow table rules, it is very important to use appropriately sized rules so as not to overrun the limited length and width of TCAM flow tables. If you have a router today that supports x number of rules, you don't plan for x+n when designing the architecture. See the appendix at the bottom of the post for the required OpenFlow match fields in spec v1.3.
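As a back-of-the-envelope illustration of that sizing discipline, the sketch below checks a planned rule count against a table capacity. Both numbers are invented for the example; substitute your hardware's real limits.

```python
# Hypothetical sizing check: the capacity and rule counts below are made up
# for illustration; real limits come from the switch's datasheet.
TCAM_FLOW_TABLE_CAPACITY = 2000   # assumed per-switch rule budget
PLANNED_RULES = 1800              # rules the controller intends to pre-populate
HEADROOM = 0.10                   # reserve 10% for exception/reactive entries

budget = int(TCAM_FLOW_TABLE_CAPACITY * (1 - HEADROOM))
if PLANNED_RULES > budget:
    raise RuntimeError(
        f"{PLANNED_RULES} planned rules exceed the {budget}-rule budget; "
        "summarize into coarser flows before deploying."
    )
print(f"OK: {PLANNED_RULES}/{budget} rules used")
```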

OpenFlow: Fine vs. Coarse Flow Policy

Fine, or granular, flows are just what they sound like: very precise flow table rules that match a very specific set of traffic. In a traditional IPv4 prefix-driven routed network, this would be the equivalent of a /32 host route. Since OpenFlow and SDN networks operate using flow-based forwarding, they allow much more granularity than L3 prefixes. Granularity can mean matching specific values in fields ranging across Ethernet (L2, data link), IP (L3, network), and TCP/UDP ports (L4, transport).


Figure 1. OpenFlow 10-tuple granular flow match. This is a fine match, since it looks for one very specific flow. This is early application awareness.
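In the toy dict model from earlier, a fine entry like Figure 1 pins down nearly every field, so it matches exactly one application flow. All values here are hypothetical.

```python
# A fine-grained (granular) flow entry: every field is an explicit value,
# so this rule matches one specific TCP flow. Values are hypothetical.
fine_flow = {
    "in_port": 1,
    "eth_src": "00:00:00:00:00:0a",
    "eth_dst": "00:00:00:00:00:0b",
    "eth_type": 0x0800,            # IPv4
    "vlan_id": 10,
    "ip_proto": 6,                 # TCP
    "ipv4_src": "10.0.0.10",
    "ipv4_dst": "10.0.0.20",
    "tcp_src": 49152,
    "tcp_dst": 80,                 # one specific HTTP flow
}
print(sum(v is not None for v in fine_flow.values()), "of 10 fields pinned")
```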

OpenFlow: Coarse Flows

Coarse flows are flow rules that match a broad set of flows. This is very similar to the concept of route summarization in Layer 3 routing. Route summarization and CIDR allowed the Internet to scale to what it is today. Search ASICs would not be able to hold the Internet routing table if every route were a /24 prefix. Summarizing Internet routes into coarse prefixes like a /8, /9, or /10 reduces forwarding tables to the roughly 450,000 routes in today's Internet table, rather than a potential 4,294,967,296 host routes.

One of the fundamental transformations that flow-based forwarding presents is the inclusion of Layer 4 transport headers as another point at which to apply match + action forwarding logic, programmatically. This also multiplies the possible combinations of forwarding rules, which is all the more reason to get our arms around the need for broad coarse flows that match the majority of traffic. Programmatic policy is vital to managing the enormous number of combinations that can come from matching 12+ L2-L4 fields.


Figure 2. OpenFlow 10-tuple coarse flow match. This matches a very wide range of flows. These could even be larger networks like a /2 or /3, which act much like today's default routes but require very few flow rules. This is how we will deploy SDN on today's limited hardware using proactive flows.
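To see why one coarse rule goes so far, the standard-library sketch below shows how a /2 destination prefix, with every other field wildcarded, covers a quarter of the IPv4 address space in a single flow entry.

```python
# One coarse rule: match on a /2 destination prefix and wildcard the rest.
# ipaddress is in the Python standard library; the prefix mirrors Figure 2.
import ipaddress

coarse_dst = ipaddress.ip_network("64.0.0.0/2")

print(coarse_dst.num_addresses)                         # 1073741824 hosts, one rule
print(ipaddress.ip_address("100.1.2.3") in coarse_dst)  # True: covered
print(ipaddress.ip_address("10.1.2.3") in coarse_dst)   # False: falls to another rule
```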

Flow-based forwarding through a protocol like OpenFlow conceptually consolidates the functionality of various types of purpose-built hardware into a common mechanism. In theory, the same methods used to push L2-L4 policy can in many cases be simplified into one instantiation in the flow table(s). That is the fundamental disruption to a networking hardware market that enjoys quite high profit margins on networking kit. Go take a look at HP's and Dell's quarterly earnings; spoiler: the only business unit in the black, along with services, is networking gear.


  • A quick review of some L2-L4 components (a rough software analogy follows the list):
  • Layer 2 – Switching (VLAN, MAC, src/dst port). Done in SRAM/DRAM CAM: cheap, fast, exact-match lookups on keys. This will begin to be leveraged by OpenFlow for large L2 tables that do not need wildcard TCAM lookups.
  • Layer 2.5 – Label switching (routers, L3 switches, MPLS, LSP, FEC, LIB, LFIB)
  • Layer 3 – Routing (routers, L3 switches, VRFs, IPv4 and IPv6 routing)
  • Layer 4 – UDP/TCP ports (typically handled with firewalls)
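Here is the rough software analogy promised above, contrasting the exact-match CAM lookup with the priority-ordered wildcard TCAM lookup. The tables and values are invented for the example.

```python
# Exact-match L2 table: (vlan, dst_mac) -> output port. Like a CAM, it is
# a cheap hash lookup on a full key. All entries here are invented.
l2_table = {(10, "00:00:00:00:00:0b"): 2}

# Wildcard table: priority-ordered (match, action) pairs, first hit wins,
# and any field may be omitted (wildcarded). Like a TCAM lookup.
wildcard_table = [
    ({"eth_type": 0x0800, "tcp_dst": 80}, "output:3"),
    ({}, "send_to_controller"),  # match-anything catch-all, like a default route
]

def l2_lookup(vlan, dst_mac):
    return l2_table.get((vlan, dst_mac))

def tcam_lookup(pkt):
    for match, action in wildcard_table:
        if all(pkt.get(k) == v for k, v in match.items()):
            return action

print(l2_lookup(10, "00:00:00:00:00:0b"))                # 2
print(tcam_lookup({"eth_type": 0x0800, "tcp_dst": 80}))  # output:3
```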

Summary

The point of this series of posts is to help folks begin to understand the scale and performance issues that are relevant when talking about software-defined networks, and OpenFlow specifically. When designing SDN architectures in 2013, we are doing so by exploiting limited resources in hardware that was not designed to do flow-based forwarding at scale. Overall, those limitations revolve around the number of flow rules, the number of tuples (header fields, e.g., TCP port, MPLS label), and the combinations of matching and rewriting (e.g., match VLAN 10, rewrite to VLAN 20) that can be done at wire speed.
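That match-plus-rewrite combination looks like this in the same toy dict model used earlier; it is a sketch of the concept, not any switch's actual pipeline.

```python
# Toy match + rewrite: if the packet satisfies the match, apply the field
# rewrites. A sketch of the concept, not a real switch pipeline.
def apply_flow(packet, match, rewrites):
    if all(packet.get(k) == v for k, v in match.items()):
        return {**packet, **rewrites}
    return packet

pkt = {"vlan_id": 10, "eth_type": 0x0800}
out = apply_flow(pkt, match={"vlan_id": 10}, rewrites={"vlan_id": 20})
print(out)  # {'vlan_id': 20, 'eth_type': 2048}
```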

When someone says centralized control planes do not scale, that is anything but an accurate statement. Those reluctant to have open and honest conversations around SDN are either looking to protect revenue or comfortable with the "way we have always done it" mentality. Technology, like anything else, when discussed in absolutes is rarely grounded in facts.

The promise is in the policy abstractions and applications, not how we push flows into hardware. Time would be much better spent integrating big data and analytics to maximize efficient network resource allocation and scale for the looming mobile explosion of 25-50 billion IP-enabled mobile devices by 2020. Expecting radical change overnight is unrealistic and ill-advised in the uptime-at-all-costs networks being operated today. But waiting for widespread infrastructure failures carries just as much, and arguably more, risk.

Whether we use OpenFlow or MPLS as the wire protocol is a religious question. The key is the ability to push flows in a conceptually simple manner; simplicity will lead to uptime and modular scale. Do we wait another five years to figure out how to create the perfect southbound protocol from a policy factory to a forwarding target? I rather hope not; there are too many more important problems to solve before we get to 50 billion.

As Scott Shenker put it in his keynote at Ericsson Research, An Attempt to Motivate and Clarify Software-Defined Networking (SDN):

“Think of OpenFlow as the x86 instruction set. Is the x86 instruction set the right answer? No, it's good enough for what we use it for, so why bother changing it? That's what OpenFlow is. It's the instruction set that we happen to use, but we shouldn't get hung up on getting it exactly right.” – Scott Shenker, UC Berkeley

Additional Resources

The rest of this series will glue these concepts together into a reference architecture.

OpenFlow: Design Considerations Series
Part 1 OpenFlow: Proactive vs. Reactive
Part 2 OpenFlow: Coarse vs. Fine Flows
Part 3 OpenFlow: Hybrid Deployments
Part 4 OpenFlow: SDN Looking Ahead (Pending)


Appendix: Required Match Fields in OpenFlow Spec v1.3
OXM_OF_IN_PORT Ingress port. This may be a physical or switch-defined logical port.
OXM_OF_ETH_DST Ethernet destination address. Can use arbitrary bitmask
OXM_OF_ETH_SRC Ethernet source address. Can use arbitrary bitmask
OXM_OF_ETH_TYPE Ethernet type of the OpenFlow packet payload, after VLAN tags.
OXM_OF_IP_PROTO IPv4 or IPv6 protocol number
OXM_OF_IPV4_SRC IPv4 source address. Can use subnet mask or arbitrary bitmask
OXM_OF_IPV4_DST IPv4 destination address. Can use subnet mask or arbitrary bitmask
OXM_OF_IPV6_SRC IPv6 source address. Can use subnet mask or arbitrary bitmask
OXM_OF_IPV6_DST IPv6 destination address. Can use subnet mask or arbitrary bitmask
OXM_OF_TCP_SRC TCP source port
OXM_OF_TCP_DST TCP destination port
OXM_OF_UDP_SRC UDP source port
OXM_OF_UDP_DST UDP destination port

Thanks for stopping by.


