Fluidity of Network Capacity, Commoditization, DIY Openness and the Demise of the Network Engineer

Be forewarned, this post is a bit of a rant on a variety of subjects that typically get asked of me at conferences or that I see written by analysts, sycophants and self-decreed intelligentsia.  The four most frequently asked questions or suppositions are:

  • Will network virtualization result in fewer network elements (i.e. switches and routers)?
  • The network is ripe for commoditization, so will this commoditization process result in lower margins for network vendors?
  • If end users are adopting DIY network devices via open source software, will network vendors still be around?
  • Will the network engineer or network administrator still be around in a few years? 

This is my attempt to write down the answers.  I think I have been answering these questions over the past two years on this blog, but perhaps I was somewhat indirect with my answers.  I will try to be direct. 

1. Fluidity of Network Capacity

At the end of the day, what can a network do?  It can connect and disconnect.  This is called connectivity or reachability.  The resource in the network that we want to control, manage and apply is called capacity.  Once we deploy the network (i.e. reachability), the only other function is to control and apply capacity.  If you think I am incorrect, ask yourself what we have done in the past when the network is blamed for IT problems.  The answer is always to deploy more capacity, which probably occurred after a reset of capacity.  The age-old answer to network related issues is to deploy more capacity.  Here is something I wrote in a post a few weeks ago that I will expand on further:

Network switches are not like disks and servers.  Disks and servers are not the same either.  The cheap-and-cheerful [i.e. DIY] disposable hardware model only works: 

…for servers if the workload is fluid
…for disks if the data is fluid
…for switches if the capacity is fluid
 
The word fluid is being used as a condensation/conflation of: replicated, replicable, re-locatable, re-allocatable.  It is interesting that when you think about it this way, virtualization is not a means, but a result of fluidization.  Pointedly, network virtualization does not make capacity fluid; it makes workloads fluid.  If workloads are fluid, it would be helpful to have fluid network capacity to allocate to the demands of those workloads.

I often hear the analogy that servers and switches are the same, and nothing could be further from the truth.  Networks and network elements are built for interoperability.  That is the heart of networking.  That is the characteristic that makes networks function and that is the characteristic that makes them fragile and complex.  We distribute state and run protocols to make it all work, and when it does not work the problems are often complex, requiring knowledgeable people with specialized training to resolve them.

It is the design goal of interoperability that makes networking different from compute.  Networking elements need to work with each other and they must be able to exchange information about state and connectivity.  When the complex design goal of interoperability is expanded to include universality, meaning it should work in every network regardless of the individual nature of each end-user's network, that is what creates a high barrier to entry in the networking market.

The detail about the network that goes unspoken when the idea of building cheaper network switches is discussed is the ability to dynamically control and redistribute capacity.  Capacity in the network is and has been constrained from the beginning.  Capacity is constrained by the physical wires and by the protocols that we run over those wires to allow for interoperability, fault detection and restoration.  The idea of building overlay networks for virtualized machines is an attempt to solve the challenge of static capacity with programming.  If all the compute resources are virtualized, it is possible to build an overlay network that can then move VMs around to pools of capacity in the network.
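
To make the idea of moving workloads to pools of capacity a little more concrete, here is a minimal sketch in Python, assuming a hypothetical inventory of hosts with fixed uplink speeds; it simply places each workload on the host with the most uplink headroom.  It is a toy model of the fluid-workload/static-capacity point above, not any overlay controller's actual placement algorithm.

# Toy model: fluid workloads placed onto static (fixed-capacity) uplinks.
# Host names, speeds and the placement policy are illustrative assumptions,
# not any particular vendor's algorithm.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    uplink_gbps: float           # fixed physical capacity of the host's uplink
    committed_gbps: float = 0.0  # bandwidth already promised to placed workloads

    @property
    def headroom(self) -> float:
        return self.uplink_gbps - self.committed_gbps

def place_workload(hosts: list[Host], demand_gbps: float) -> Host:
    """Place a workload on the host with the most uplink headroom.
    The workload is fluid (it can land anywhere); the capacity is not
    (uplink_gbps never changes), which is the mismatch described above."""
    candidate = max(hosts, key=lambda h: h.headroom)
    if candidate.headroom < demand_gbps:
        raise RuntimeError("No host has enough headroom; the old answer applies: add capacity.")
    candidate.committed_gbps += demand_gbps
    return candidate

hosts = [Host("rack1-h1", 10.0), Host("rack1-h2", 10.0), Host("rack2-h1", 40.0)]
for demand in (4.0, 6.0, 8.0):
    chosen = place_workload(hosts, demand)
    print(f"{demand} Gbps workload -> {chosen.name} ({chosen.headroom:.1f} Gbps headroom left)")

Notice that the only lever the sketch has when the headroom runs out is the age-old one: add more capacity.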

I have been thinking about applying the idea of cognitive fluidity, taken from Steven Mithen’s book The Prehistory of the Mind, to networking.  What we have today is a network that is incapable of combining different ways of processing knowledge to build different network topologies that harness or utilize pools of capacity.  That concept is really the end goal of SDN.  The network today looks like a domain-specific structure with a series of isolated cognitive domains for specific types of network applications and users.

The ideas, experimentation and development around what are called the SDN and network virtualization movements are all attempts to build a new and better network.  In this process there will be successes and failures.  We are very much in the process of inventing the new network, and over the next few years the barriers between network domains will erode; like human cognitive function, the capacity in the network and between domains will become fluid.  This will unlock the resource of capacity in the network, and I am confident that in ten years we will look back on today’s network and see one era as unconscious and the new era as conscious.  The fallacy of building the same network we have been building since the dawn of networking is that we are still not solving the constrained capacity problem and we are not collapsing domains.  We are just replicating the past a little less expensively.

2. Network Commoditization

The idea of network commoditization is certainly receiving a lot of publicity these days.  To me it is a lazy thesis.  If I were to take a poll asking which market is suffering most from commoditization, I think the leading answer would be the storage market.  After all, everyone knows you can buy a 64GB flash drive for $64 online (I checked newegg.com this morning).  That is $1 per GB.  This must be the best example of commoditization.  As such, the traditional storage companies are going to suffer; it is only a matter of time.

I was reading a fairly comprehensive storage report by Credit-Suisse this past week.  There is an interesting chart in the document that I inserted to the left, with all credit to Credit-Suisse research.  The chart shows that DIY (i.e. commoditization) storage is now 48% of the total storage market, up from 17% in 2007.  It would seem to me that legacy companies like EMC and NTAP should be holding going-out-of-business sales if roughly 50% of their total addressable market (TAM) is now not addressable.  Obviously this is not the case.  The rate of data growth is still 50% annually.  Data is growing at such a prodigious rate that a high tide lifts all boats.  DIY, JBOD and traditional storage arrays are all methods being used to store data.  The world produces a lot of data and this is a result of virtualization, clustering applications, cloud, etc.  To be direct, virtualizing, replicating and commoditizing 50% of the storage market does not mean the world is buying less storage.  Remember, when you virtualize a technology (e.g. servers, storage, network) it does not mean you will own less of it; it means you will own more.
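
Here is the back-of-the-envelope math behind the high-tide point, as a small sketch.  The market shares and the 50% annual growth rate come from the figures above; the 100-unit starting market size is an arbitrary assumption just to show the compounding.

# Why a growing DIY share does not shrink the traditional storage business:
# total demand compounds at ~50% per year, so a shrinking share of a much
# larger pie is still a bigger absolute number.
# The 100-unit starting size is an arbitrary assumption for illustration.
total_2007 = 100.0       # total storage demand in 2007, arbitrary units
diy_share_2007 = 0.17    # DIY share in 2007 (from the Credit-Suisse chart)
diy_share_2013 = 0.48    # DIY share in 2013 (from the Credit-Suisse chart)
growth = 1.50            # ~50% annual data growth

total_2013 = total_2007 * growth ** 6   # six years of compounding, 2007 -> 2013
traditional_2007 = total_2007 * (1 - diy_share_2007)
traditional_2013 = total_2013 * (1 - diy_share_2013)

print(f"total demand 2013: {total_2013:.0f} units")              # ~1139 units
print(f"traditional demand 2007: {traditional_2007:.0f} units")  # ~83 units
print(f"traditional demand 2013: {traditional_2013:.0f} units")  # ~592 units

Even after losing share, the traditional segment in this toy model is roughly seven times larger than it was in 2007, which is why nobody is holding going-out-of-business sales.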

For all the people who think that somehow we are going to buy fewer network elements because we commoditized the network, I give you the Facebook chart from their OFC presentation in February 2013.  Go ahead and click on it and internalize the 5th bullet, the one that says there is a 930x increase in network traffic inside the datacenter for each 1kb of external traffic.  Again, this is a result of cluster applications like Hadoop and large memory caching tiers, and the strategy of cheap and cheerful is not going to solve this challenge, which is coming to a data center near you soon.
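
To put a rough number on what that multiplier means, here is a quick sketch.  The 930x ratio is from the Facebook slide; the 1 Gbps of external traffic is an assumed example input, not a Facebook figure.

# Quick arithmetic on the east-west multiplier cited above.
EAST_WEST_MULTIPLIER = 930   # internal traffic per unit of external traffic (Facebook, OFC 2013)

external_gbps = 1.0          # assumed example edge load
internal_gbps = external_gbps * EAST_WEST_MULTIPLIER
print(f"{external_gbps:.0f} Gbps at the edge implies ~{internal_gbps:.0f} Gbps inside the datacenter")
print(f"that is on the order of {internal_gbps / 10:.0f} fully loaded 10G links, before any redundancy")

A modest amount of north-south traffic turns into an enormous amount of east-west traffic, and that east-west traffic is exactly the capacity problem described in the first section.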

3. Openness

I hear and read a lot about openness and people often tell me that networks need to be open.  A few thoughts about open.  Open usually equals hard, and it is often the most proprietary.  For example, Linux is open and it is hard.  If it were so easy, we would have no need for Red Hat and the large groups of people inside IBM and other companies.  Crowd-sourced projects that are described as open are really the opposite of a corporate structure.  The OpenDaylight Project (ODP) is attempting to correct the lack of oversight and accountability found in open projects by forcing the highest tier members to commit full-time employees (FTEs) to the project.

To me, the term open means that the source code is available and customizable.  When users customize source code and decide to use 20% or 34% or 86% of the available code, that is by definition the most proprietary outcome, because the code is being tailored to a specific need.  This is how Google runs their infrastructure.  Everything is customized to fit their needs and their applications, by their people, for their needs.  It is not consumable by any organization other than Google.  Open source communities are complex, and that is why we have companies like Red Hat, who state in their corporate description that “Red Hat is more than a software company. We’re the bridge between the communities that create open source software and the enterprise customers who use it. Red Hat makes the rapid innovation of open source technology consumable in mission-critical, enterprise environments.”  Remember the statement from the first section that networks are built for interoperability.  That is a fundamental design goal, and when networking elements are made open and customizable, their interoperability envelope is reduced.  I am not saying that customization is a bad idea, as it has clearly worked for Google, but that does not mean there is no value in the art of integration and interoperability.  If the Open Compute switch movement is going to make progress in crowd-sourcing networking switches, they are going to need something to play the role of the x86 instruction set for switch silicon, plus free or minimally priced licenses for the SDKs from Broadcom, Intel and Marvell.
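
To illustrate what an “x86 instruction set for switch silicon” could look like, here is a minimal sketch of a common forwarding abstraction with vendor-specific backends behind it.  The interface and both backends are hypothetical; they do not correspond to any real Broadcom, Intel or Marvell SDK APIs, which is exactly the gap being described.

# Hypothetical lowest-common-denominator forwarding abstraction.
# Nothing here maps to a real vendor SDK; it only shows the shape of the
# missing common layer.
from abc import ABC, abstractmethod

class SwitchASIC(ABC):
    """Abstract contract that control software would program against."""

    @abstractmethod
    def add_l2_entry(self, mac: str, port: int) -> None: ...

    @abstractmethod
    def add_l3_route(self, prefix: str, next_hop: str) -> None: ...

class VendorABackend(SwitchASIC):
    """Stand-in for one vendor's SDK; real code would call that SDK here."""
    def add_l2_entry(self, mac: str, port: int) -> None:
        print(f"[vendor A] program MAC {mac} -> port {port}")

    def add_l3_route(self, prefix: str, next_hop: str) -> None:
        print(f"[vendor A] program route {prefix} via {next_hop}")

class VendorBBackend(SwitchASIC):
    """Stand-in for a second vendor's SDK with the same abstract contract."""
    def add_l2_entry(self, mac: str, port: int) -> None:
        print(f"[vendor B] program MAC {mac} -> port {port}")

    def add_l3_route(self, prefix: str, next_hop: str) -> None:
        print(f"[vendor B] program route {prefix} via {next_hop}")

def provision(asic: SwitchASIC) -> None:
    # The control software only ever sees the abstract interface.
    asic.add_l2_entry("00:11:22:33:44:55", port=7)
    asic.add_l3_route("10.0.0.0/24", next_hop="192.168.1.1")

for backend in (VendorABackend(), VendorBBackend()):
    provision(backend)

Until something like this common layer exists and the SDK licensing underneath it is open or cheap, every DIY switch project ends up re-doing that mapping itself.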

4. Demise of the Network Engineer

I hear this a lot and again it could not be further from the truth.  Switches and routers are not servers.  Network elements are built for interoperability, and interoperability by its definition is complex.  Networking is going to require highly skilled people to solve complex problems for a long time.  SDN might change the nature of those problems, and it may even free network engineers and administrators from the drudgery of CLIs and multiple panes of glass, but network engineers are not going away anytime soon.  In my opinion, network engineers will get to design much better networks in the future and will spend their time architecting the fluidity of capacity in the network.  I think that will be pretty cool, and the ones who master that art will make a lot of money.

 As always it is possible that I could be completely wrong and have no idea what I am writing about.  

/wrk

One thought on “Fluidity of Network Capacity, Commoditization, DIY Openness and the Demise of the Network Engineer”

  1. Thanks, great article.

    About Open, yes of course Open Source is all about being able to customize.

    Now when you take Commoditization, Open, Capacity (planning) and compatibility together, I think in the short term you’ll end up with islands of capacity built from customizable parts.

    These islands connect to other islands of capacity and the outside world through existing routing protocols like BGP, and are configured individually with a mix of new and old configuration protocols.
