I remember a conversation with my boss about two years ago, speculating on the lifespan of the wired network in an enterprise campus. We both agreed it would all be driven by the apps. Well, thanks to carrier networks being what they are, content, cloud, and application providers typically develop to the lowest common denominator of bandwidth: the cellular mobile networks. Those are networks that struggle to pull off double-digit Mbps performance, yet we (not necessarily the user; well, except for my residential service, another story) are demanding 1 Gbps to the desktop. The days of fat client-server apps being the bandwidth driver are trending down as SaaS and cloud-like application delivery rapidly grows. Is this the death of the wired enterprise network?
The traditional wired edge does not allow for oversubscription of the network: it is one port, one host. Wireless networking allows for oversubscription, since the air is a shared medium that all clients attach to. Don't let "shared medium" scare you like it used to scare me. 802.11ac has begun to solve the wireless duplexing problem, and the 5 GHz spectrum has helped with the client density issues of the old 2.4 GHz band. Just this week at Wireless Field Day 3, thanks to Foskett and team, there were demos of 600 Mbps client throughput on 802.11ac gear beginning to ship now. That performance will begin to tip the scale on this idea that we must wire every nook and cranny of the enterprise regardless of actual consumption.
Already Happening
Start thinking like a carrier. One of the primary ways service providers make money is oversubscription. Forget networks for a second and think about your corporate VM farm. It shows cost savings by taking a number of standalone physical boxes and collapsing them into one physical box. Let's say that physical server has 64 GB of memory and you deploy 50 virtual machines with up to 2 GB of RAM each. That works because we can effectively predict that each of those virtual machines will not be using 100% of its allocated memory at all times. Now you can effectively manage your resource pools in one central pile through oversubscription and make sure you are actually using what you buy. Instead of 50 physical servers each using 5% of their resources (memory, compute, storage), you can allocate excess capacity and scale up and out centrally. In the enterprise that is cost avoidance; in a provider business model it is more revenue. The compute and storage world has tools that allow for this in an orchestrated fashion; networking, not so much.
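The memory math above is simple enough to sanity-check. Here is a minimal sketch, using only the numbers from the paragraph (the 5% average utilization figure is illustrative, not measured):

```python
# Oversubscription arithmetic from the VM farm example above.
host_ram_gb = 64          # physical memory in the one consolidated host
vms = 50                  # virtual machines deployed on it
vm_ram_gb = 2             # memory allocated to each VM

allocated = vms * vm_ram_gb           # 100 GB promised to guests
ratio = allocated / host_ram_gb       # oversubscription ratio vs physical

# If each VM actually touches only ~5% of its allocation on average,
# real demand sits far below physical capacity.
avg_utilization = 0.05
expected_demand_gb = allocated * avg_utilization

print(f"allocated: {allocated} GB, "
      f"oversubscription: {ratio:.2f}:1, "
      f"expected demand: {expected_demand_gb:.0f} GB")
```

The point is not the exact numbers but the shape of the bet: you sell 100 GB against 64 GB of silicon because the odds say it is never all claimed at once.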
Align with Business
Providers allocate and sell more than they have, banking on the odds that customers will not actually use all of what they have purchased and leased. Providers do this fairly well on expensive long-haul links that are pricey to own and operate; in the enterprise we typically do a horrible job at it. We often build the biggest pipes we can get away with and life-cycle gear just because the vendors come out with a new speed or EOL a switch. That's not good enough. We need to start thinking in terms of blending the business in with IT. Avoiding intelligence in the access edge of the network allows for much longer life cycles. I leave some 3500XLs in production on purpose; 100 Mb PoE serves 99% of average workers just fine. The laptop they are plugging in likely has I/O off the board around 120 Mbps. I have gone to war many a time over upgrading a switch with no value, just because it was an EOL switch. Internet content, while rapidly increasing, is often more people doing something rather than higher per-flow usage. Five years ago my mom did not go to YouTube; she does now. The other increase is the actual device count, which brings us to the meat.
Wireless Growth
What is driving speeds and Power over Ethernet (PoE) requirements on the edge? One thing: wireless. In some places, 100% year-over-year growth is not even close. Uplinks of 1 Gb are required for today's access points, and those uplinks then get oversubscribed. That will cost money, since the endpoint count is growing rapidly, and that cost will need to be offset. The port density in a communications closet should start decreasing, assuming BYOD and mobile clients continue to trend up as the primary work devices.
Figure 1. This is a 1 Gb uplink of a typical IDF or access closet. There are 48 x 1 Gb ports south of it, averaging 2 Mbps transmit and 7 Mbps receive, and over 30 days it peaked at 34 Mbps received. The oversubscription is 48:1 at 1 Gb.
Figure 2. This is a 1 Gb uplink of a typical BDF (building distribution). There are 48 x 1 Gb ports south of it, averaging 10.5 Mbps transmit and 37 Mbps receive, and over 30 days it peaked at 240 Mbps transmitted. That's roughly a 300:1 oversubscription (a mix of 100/1000 Mb ports downstream). Sounds crazy, but is it? Traffic is purely north-south, typically Internet bound; ERP and fat-client email would be local.
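The ratios in those figures fall straight out of the port counts and link speeds. A quick back-of-the-envelope check using the Figure 1 numbers:

```python
# Worst-case oversubscription: total downstream port capacity vs uplink.
def oversub_ratio(uplink_mbps, port_count, port_speed_mbps):
    return port_count * port_speed_mbps / uplink_mbps

# Figure 1: 48 x 1 Gb access ports behind a single 1 Gb uplink.
idf = oversub_ratio(uplink_mbps=1000, port_count=48, port_speed_mbps=1000)

# Figure 1 observed 30-day peak: 34 Mbps on that 1000 Mbps uplink.
peak_util = 34 / 1000

print(f"IDF oversubscription {idf:.0f}:1, 30-day peak {peak_util:.1%} of uplink")
```

A 48:1 paper ratio sounds reckless until you see that the real 30-day peak used about 3% of the uplink. That gap is the argument of this whole section.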
Part of the SDN Landscape
I fully anticipate that this rapid increase in 802.11 clients, plus the growing demand for more flexible and OPEN networks in the battle cry of SDN, will begin to consolidate the wireless and wired architectures into a controller-driven model. We need the same operational ease on the wired network that operators have on the wireless network. We need the bump in the wire through some form of centralized control plane, one that can still be deployed in a modular fashion for scale, just as wireless networks are today. Go look for a Gartner Magic Quadrant on wireless: guess what, there isn't one anymore. It has been replaced by the "Magic Quadrant for the Wired and Wireless LAN Access Infrastructure". The end is near for the two different worlds, and hopefully the two different technologies, of the wired and wireless network. Wireless manufacturers are beginning to get into the wired business; that alone should be a pretty good indicator. So as controllers start dropping in as SoCs on our switches, I hope to see the integration of open technologies, with standardization between controller and switch, to encourage interop and offer flexibility. Vendors that get greedy and chase lock-in would be taking a risk.
Distributed Yes, But How Much Makes Sense?
Aerohive and others have begun pushing controller intelligence and the control plane out to the AP. I think that has value when there is limited bandwidth between controller and AP, to avoid hairpins. Just as important, if not more so, in fiber-rich enterprises is the need to apply coherent, ubiquitous policy in management, security, and quality, which comes from the control plane. As speeds continue to increase, so will distribution for scale, but that can be done in a modular fashion to avoid the cost of every point in the network being a fully featured device sitting at 5% utilization. Centralization brings a cost benefit, whether better management or less silicon needed to service the applications on the edge. It is a distributed-systems problem, not a put-a-man-on-the-moon problem or a collide-photons-and-take-pictures-at-the-same-time problem. Choosing the best widget technologically is only part of the solution; consumers designing architectures that fit both business and technological needs, and vendors developing products flexible enough to provide the building blocks for a lasting and scalable solution, is the path to excellence.
Things to Question
Do you really need to go after that 3 or 4 dB improvement, at a large CapEx, to get shiny new Category 6a wiring in for the average office worker in a cube? Category 5 still supports Gigabit Ethernet.
If you are lucky enough to have a hardware refresh budget, should you revisit the port counts in your communications closet? Odds are a good chunk of those ports have been replaced by a laptop with built-in wireless. When and if BYOD and telecommuting become the norm, that will further decrease the number of wired ports.
Let the facts drive your bandwidth increases, not the vendor. Why upgrade for the sake of upgrading? Watch your traffic and trends, and set thresholds. Make enterprise and cloud applications, VoIP, and emerging wireless like 802.11ac part of your capacity-planning forecasts.
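As a minimal sketch of "set thresholds and watch trends": flag an uplink for upgrade only when observed peaks cross a utilization line, rather than on the vendor's refresh schedule. The function, threshold, and sample peaks below are invented for illustration:

```python
# Upgrade only when the data says so, not when the vendor EOLs a switch.
def needs_upgrade(peak_samples_mbps, link_mbps, threshold=0.7):
    """True if any observed 30-day peak exceeds the threshold fraction
    of the link's capacity."""
    return max(peak_samples_mbps) / link_mbps > threshold

# Hypothetical monthly 30-day peaks (Mbps) on a 1 Gb building uplink,
# in the spirit of the Figure 2 numbers.
monthly_peaks = [180, 210, 240, 260]

print(needs_upgrade(monthly_peaks, link_mbps=1000))  # → False
```

Feed it the peaks your monitoring system already collects; the trend line, not the catalog, decides when the uplink gets touched.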
"Wireless is not reliable enough." Hmm, I would have agreed wholeheartedly a few years back. Today I am more confident, thanks to stable device drivers and improved spectrum. That said, troubleshooting a broken microwave with a spectrum analyzer is a mess. I see hospitals deliver drugs and monitor patient vital signs on wireless networks 24x7x365; scary, but it works. When it doesn't, it rarely has anything to do with the lack of wires and more to do with a lack of QA in a vendor's code or poorly written device drivers on the end device.
We will start seeing GigE get saturated between traditional distribution and access on typical 1 Gb uplinks to a building or large floor in a campus. If your budget is tight and fiber is already pulled in a bundle with capacity to spare, Link Aggregation Groups (LAGs) will double your capacity while you wait for 10 Gb costs to continue to come down. Keep in mind that a LAG hashes traffic per flow, so any single flow is still limited to one member link.
Be open-minded. Not preaching here, by the way; I love routine and dislike change as much as anyone, but betting against technology, progress, and change is a gamble. Someday, sooner rather than later, the business will say it wants to cut a million dollars of cabling to that building: operate on facts. Don't worry, we will always have wired networks; something has to backhaul the wireless traffic. Those wired networks will just be much more software defined, vis-à-vis SDN, just as the wireless industry has been doing for a number of years now. Or just ask the carriers that continue to under-provision links to cell towers to take profit.
The wired days in the enterprise are numbered. How long, who knows, but BYOD, mobility, and the amazing solving of physics problems in the finite wireless spectrum lead me to believe it is not measured in double-digit years. This is just some musing; I get paid to do what's best for my organization, not to build the best network in history. Sometimes good enough is just that, and the reality of budgets forces prioritization. Long live networks.
It is not as rosy as you say. Wireless boundaries are not controllable the way wires are.
Yes, you can have dot1x in both worlds, but if you want it really secure you need EAP-TLS, which requires an internal PKI infrastructure.
You can argue that most enterprises should have one, but in practice not many of them really do. In most cases they simply live with switched and unprotected Ethernet, which is still more controllable (due to physical security, at least) than wireless coverage.
Now let’s get back to wireless. Take a look at the 802.11ac offerings. How many enterprise-class solutions do you see there? None? Hmmm. I saw the same story many years ago, when 802.11n was in countless drafts and only a few enterprise solutions were available.
Nowadays you have 11n AP and controller offerings from all vendors. But have you tried to compare prices? Why, for example, is the Cisco AP541N, which can cluster up to 10 boxes and share wireless settings and associations (so by no means a home-grade Netgear router), cheaper than a dumb, controller-operated AP from HP?
As a result, we end up with many BYODs and corporate laptops on rather outdated wireless infrastructure, or with tons of standalone APs from different vendors, where roaming is performed by better-but-still-not-ideal endpoint device drivers, with unpredictable results.
The physical security of unsecured Ethernet ignores the simple fact that most security events occur due to employees who already have physical access to the network. Wireless installations using WPA2-Enterprise (PEAP) are generally more secure and easier to audit than unsecured wired networks.
On a wireless network with WPA2-Enterprise I can track what username logged in, at what time, where they were located within 50 feet, and what the MAC Address of the connecting device was. Look at Aruba’s Airwave Management Platform for an amazing wireless monitoring tool (even for monitoring Cisco wireless networks).
Compare that to a wired network, where I get a MAC address if I’m even logging them (probably not). Far too many people rely on the physical security of their wired networks.
Also, don’t forget that the Ether (you know, the 19th-century one) is a shared medium, whereas Ethernet has been a dedicated point-to-point one for ~two decades.
The total bandwidth you can provide over wireless networks to a certain area is thus limited. Maybe that’s good enough, maybe it’s not.
I keep hearing people cite this against wireless. I have two points in rebuttal. One, “the total number of human bodies you can put in a certain area has always been limited”. Two, check out MIMO and MU-MIMO. The Ether is a shared medium, but only in a given space at a given time. Wired cable defines space with plastics; wireless technologies are defining spatial properties to increase capacity.
Great post. Thanks a lot. I also see wireless as eventually replacing wired networks.
Today though, who has the guts to put their head on the line and tell the VP that the new corporate headquarters does not need to be wired to the desktop? We’ll just run a few APs on each floor and be done with it. Can anyone honestly say that there will be fewer issues than with wired?
All I can think about are the first three or four days of Cisco Live the past three years, trying to VPN to the home office.
Hi guys, sorry I am late on replying. Yes, the shared medium is a problem for sure. We add more channels, and now we bond those channels to get more bandwidth, so we didn’t really open up more lanes. There is lobbying going on now for more spectrum, so I expect that to add more “lanes”.
Couldn’t agree more about crowded places: sporting events, lecture halls, and, LOL, Cisco Live. Those are painful. A lot of them we couldn’t wire anyway (sporting events, etc.). There is risk with it for sure, but bandwidth consumption is growing more slowly than it should because the lowest common denominator is the cellular network, so apps are written as light as possible to work on those sh**y (at least mine) networks.
There is certainly a time and place for it, and a room of hundreds of people frequently may not be it, but who is hardwiring an auditorium anyway? A lot of university dorms are going fully wireless these days. A friend of mine at U of Louisville told me a great story about handing an Ethernet cable to a freshman, and she looked at him like “wtf is this?”. Kids don’t even know what wired is anymore. When MacBook Pros stop shipping with wired Ethernet, the times are a-changin’.
I am a Kurzweilian optimist at heart, though.
Thanks for the comments, they are all great points and interesting!
http://en.wikipedia.org/wiki/Ray_Kurzweil
Perhaps most significantly, Kurzweil foresaw the explosive growth in worldwide Internet use that began in the 1990s. At the time of the publication of The Age of Intelligent Machines, there were only 2.6 million Internet users in the world,[45] and the medium was unreliable, difficult to use, and deficient in content. He also stated that the Internet would explode not only in the number of users but in content as well, eventually granting users access “to international networks of libraries, data bases, and information services”. Additionally, Kurzweil claims to have correctly foreseen that the preferred mode of Internet access would inevitably be through wireless systems, and he was also correct to estimate that the latter would become practical for widespread use in the early 21st century.
Brent, thank you for your post/blog or whatever the heck this thing is. I enjoyed reading it and it made me think a lot! 🙂
Wireless killing wired? That’s fine with me… as long as people still need someone to setup those networks for them!!! 🙂
Heya, thanks for the comments. Certainly no shortage of networking jobs. If new RF spectrum, or something better than a hack for shared mediums, doesn’t come along, we may need as many wired APs as we have clients today to scale 🙂
Great time to be in networking,
-Brent