I remember a conversation with my boss about two years ago in which we speculated on the lifespan of the wired network in an Enterprise campus. We both agreed it would all be driven by the app. Well, thanks to carrier networks being what they are, content, cloud and application providers typically develop to the lowest-common-denominator bandwidth: the cellular mobile networks. Those are networks that struggle to pull off double-digit Mbps performance, yet we (not necessarily the user, well, except for my residential service, but that's another story) are demanding 1Gbps to the desktop. The days of fat client-server apps driving bandwidth are trending down as SaaS and cloud-like application delivery rapidly grow. Is this the death of the wired Enterprise network?
The traditional wired edge does not allow for oversubscription of the network; it is one port, one host. Wireless networking allows for oversubscription, since the air is a shared medium that all clients attach to. Don't let "shared medium" scare you like it used to scare me. 802.11ac has begun to solve the wireless duplexing problem, and the 5GHz spectrum has helped with client density compared to the 2.4GHz of old. Just this week at Wireless Field Day #3 (thanks to Foskett and team) there were demos of 600Mbps client throughput on 802.11ac gear beginning to ship now. That performance will begin to tip the scale against the idea that we must wire every nook and cranny of the enterprise regardless of actual consumption.
Start thinking like a carrier. One of the top ways service providers make money is oversubscription. Forget networks for a second and think about your corporate VM farm. It delivers cost savings by taking a number of standalone physical boxes and collapsing them into one physical box. Let's say that physical server has 64GB of memory and you deploy 50 virtual machines with up to 2GB of RAM each. That works because we can effectively predict that each of those virtual machines will not be using 100% of its allocated memory at all times. Now you can manage your resource pools as one central pile through oversubscription and make sure you are actually using what you buy. Instead of 50 physical servers each using 5% of their resources (memory, compute, storage), you can allocate excess capacity and scale up and out centrally. In the enterprise that is cost avoidance; in a provider business model it is more revenue. The compute and storage worlds have tools that allow for this in an orchestrated fashion; networking, not so much.
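Back-of-the-envelope, the bet looks like this (a minimal sketch using the figures from the VM example above):

```python
# Memory oversubscription arithmetic for the 64GB host / 50 VM example.
host_memory_gb = 64          # physical RAM in the host
vm_count = 50
vm_allocation_gb = 2         # memory allocated (promised) per VM

allocated_gb = vm_count * vm_allocation_gb   # 100GB promised on 64GB of iron
ratio = allocated_gb / host_memory_gb        # ~1.56:1 oversubscription

# The bet: average *actual* use per VM stays under the physical budget.
break_even_gb = host_memory_gb / vm_count    # 1.28GB average per VM
print(f"Allocated {allocated_gb}GB on {host_memory_gb}GB physical "
      f"({ratio:.2f}:1); safe while VMs average under {break_even_gb:.2f}GB")
```

The same arithmetic, with ports and Mbps instead of VMs and GB, is exactly what a carrier runs on a long-haul link.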
Align with Business
Providers allocate and sell more than they have, banking on the odds that customers will not actually use all of what they have purchased and leased. Providers do this fairly well on expensive long-haul links that are pricey to own and operate; in the Enterprise we typically do a horrible job at it. We often build the biggest pipes we can get away with and lifecycle gear because the vendors come out with a new speed or EOL a switch. That's not good enough. We need to start thinking in terms of blending the business in with IT. Avoiding intelligence at the access edge of the network allows for much longer lifecycles. I leave some 3500XLs in production on purpose; 100Mb PoE serves 99% of average workers just fine. The laptop they are plugging in likely has I/O off the board of around 120Mbps. I have gone to war many a time over upgrading a switch with no value just because it was EOL. Internet content, while rapidly increasing, is often more people doing something rather than higher per-flow usage. Five years ago my mom did not go to YouTube; she does now. The other increase is the actual device count, which brings us to the meat.
What is driving speeds and Power over Ethernet (PoE) requirements on the edge? One thing: wireless. 100% year-over-year growth in some places is not even close. Uplinks of 1Gb are required for today's access points, and those uplinks then get oversubscribed. That will cost money, since the endpoint count is growing rapidly, and that cost will need to be offset. The port density in a communications closet should start decreasing, assuming BYOD and mobile clients continue to trend up as the primary work devices.
Figure 1. This is the 1Gb uplink of a typical IDF or access closet. There are 48 x 1Gb ports south of it that average 2Mbps transmit and 7Mbps receive; over 30 days the uplink peaked at 34Mbps received. The oversubscription is 48:1 on 1Gb ports.
Figure 2. This is the 1Gb uplink of a typical BDF (Building Distribution). There are 48 x 1Gb ports south of it that average 10.5Mbps transmit and 37Mbps receive; over 30 days the uplink peaked at 240Mbps transmitted. That's probably 300:1 oversubscription (a mix of 100Mb and 1Gb ports). Sounds crazy, but is it? Traffic is purely north-south, typically Internet bound; ERP and fat-client email would be local.
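A quick sketch of the same math, using the numbers from the figures above (port counts and peaks as reported; nothing else measured):

```python
def oversub(uplink_mbps, port_count, port_speed_mbps):
    """Ratio of total possible southbound bandwidth to uplink capacity."""
    return (port_count * port_speed_mbps) / uplink_mbps

# Figure 1: access closet, 48 x 1Gb ports behind a single 1Gb uplink.
access_ratio = oversub(1000, 48, 1000)   # 48:1

# Figure 2: the BDF's 30-day peak was 240Mbps, i.e. only ~24% of a
# 1Gb uplink despite roughly 300:1 nominal oversubscription.
peak_mbps = 240
headroom = 1 - peak_mbps / 1000          # fraction of the uplink never used
print(f"Access {access_ratio:.0f}:1 oversubscribed; "
      f"BDF peaked at {peak_mbps}Mbps with {headroom:.0%} headroom")
```

That 76% of never-touched uplink capacity is the carrier-style margin the post is arguing we should be willing to spend.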
Part of the SDN Landscape
I fully anticipate that this rapid increase in 802.11 clients, plus the growing demand for more flexible and OPEN networks that is the battle cry of SDN, will begin to consolidate the wireless and wired architectures into a controller-driven model. We need the same operational ease on the wired network that operators have on the wireless network. We need the bump in the wire through some form of centralized control plane, one which can still be deployed in a modular fashion for scale, just as wireless networks are today. Go look for a Gartner Magic Quadrant on wireless; guess what, there isn't one anymore. It has been replaced by the "Magic Quadrant for the Wired and Wireless LAN Access Infrastructure". The end is near for the two different worlds, and hopefully the two different technologies, of the wired and wireless network. Wireless manufacturers are beginning to get into the wired business; that alone should be a pretty good indicator. So as controllers start dropping in as SoCs on our switches, I hope to see the integration of open technologies to encourage interop and offer flexibility. Vendors that get greedy and chase vendor lock-in will be taking a risk. Standardization between controller and switch is the goal.
Distributed Yes, But How Much Makes Sense?
Aerohive and others have begun pushing controller intelligence and the control plane out to the AP. I think that has value when there is limited bandwidth between controller and AP, to avoid hairpins. Just as important, if not more so, in fiber-rich enterprises is the need to apply coherent, ubiquitous policy in management, security and quality, which comes from the control plane. As speeds continue to increase, so will the distribution needed to scale, but that can be done in a modular fashion to avoid the cost of every point in the network being a fully featured device sitting at 5% utilization. Centralization brings a cost benefit, whether better management or less silicon needed to service the applications on the edge. It is a distributed-systems theory problem, not a put-a-man-on-the-moon problem or a collide-photons-and-take-pictures-at-the-same-time problem. Choosing the best widget technologically is only part of the solution; consumers designing architectures that fit both business and technological needs, and vendors developing products flexible enough to provide the building blocks for a lasting and scalable solution, is the path to excellence.
Things to question
Do you really need to go after that 3 or 4dB improvement at a large CapEx to get shiny new Category 6a wiring in for the average office worker in their cube? Category 5 still supports Gigabit Ethernet.
If you are lucky enough to have a hardware refresh budget, should you revisit the port counts in your communications closet? Odds are a good chunk of those ports have been replaced by a laptop with built-in wireless. If and when BYOD and telecommuting become the norm, that will further decrease the number of wired ports.
Let the facts drive your bandwidth increases, not the vendor. Why upgrade for the sake of upgrading? Watch your traffic and trends, and set thresholds. Make Enterprise and cloud applications, VoIP, and emerging wireless like 802.11ac part of your capacity-planning forecasts.
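"Watch your traffic and set thresholds" can be as simple as the sketch below; the sample readings and the 70% threshold are hypothetical, stand-ins for whatever your NMS exports and whatever margin your business tolerates:

```python
def needs_upgrade(samples_mbps, link_mbps, threshold=0.7):
    """Flag an uplink when its observed peak crosses a utilization threshold.

    samples_mbps: e.g. 30 days of 5-minute peak readings from monitoring.
    threshold: fraction of link capacity that triggers the upgrade discussion.
    """
    peak = max(samples_mbps)
    return peak / link_mbps >= threshold, peak

# Hypothetical 30-day peaks (Mbps) on a 1Gb uplink, like Figure 2's BDF.
samples = [110, 95, 240, 180, 60]
upgrade, peak = needs_upgrade(samples, 1000)
print(f"peak={peak}Mbps, upgrade={'yes' if upgrade else 'no'}")
```

With the Figure 2 numbers the answer is "no": a 240Mbps peak is 24% of the link, nowhere near the 70% line, which is exactly the fact-based case against a vendor-driven refresh.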
"Wireless is not reliable enough." Hmm, I would have wholeheartedly agreed a few years back. Today I am more confident, thanks to stable device drivers and improved spectrum. That said, troubleshooting broken microwaves with spectrum analyzers is a mess. I see hospitals deliver drugs and monitor patient vital signs on wireless networks 24x7x365; scary, but it works. When it doesn't, it rarely has anything to do with the lack of wires and more to do with a lack of QA in a vendor's code or poorly written device drivers on the end device.
We will start seeing GigE get saturated between traditional distribution and access on typical 1Gb uplinks to a building or a large floor in a campus. If your budget is tight and fiber is already pulled in a bundle with plenty of capacity, Link Aggregation Groups (LAGs) will double your capacity while you wait for 10Gb costs to continue to come down.
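In the same spirit, a rough runway check for a 2 x 1Gb LAG; the 50% annual growth rate is purely an illustrative assumption, so plug in your own trend line:

```python
def years_of_runway(peak_mbps, members, member_mbps=1000, growth=0.5):
    """Years until a compounding traffic peak saturates a LAG.

    members: number of links in the aggregation group.
    growth: assumed annual traffic growth (50% here, illustrative only).
    """
    capacity = members * member_mbps
    years = 0
    while peak_mbps < capacity:
        peak_mbps *= 1 + growth
        years += 1
    return years

# Figure 2's 240Mbps peak on a 2 x 1Gb LAG at 50% annual growth:
print(years_of_runway(240, members=2))  # → 6 years before saturation
```

Six years of runway out of fiber that is already in the bundle is a hard number to beat with a forklift 10Gb upgrade.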
Be open minded. Not preaching here, btw; I love routine and dislike change as much as anyone, but betting against technology, progress and change is a gamble. Someday, sooner rather than later, the business will say "we want to cut a million dollars in cabling to that building"; operate on facts. Don't worry, we will always have wired networks, since something has to backhaul the wireless traffic, but those wired networks will be much more software defined, vis-à-vis SDN, just as the wireless industry has been doing for a number of years now. Or just ask the carriers that continue to under-provision to cell towers to take profit.
The wired days in the Enterprise are numbered. How long, who knows, but BYOD, mobility and the amazing solving of physics problems in the finite wireless spectrum lead me to believe it is not measured in double-digit years. This is just some musing; I get paid to do what's best for my organization, not build the best network in history. Sometimes good enough is just that, and the reality of budgets forces prioritization. Long live networks.