Post Antediluvian Period of Networking
Somewhere in my basement, in a seldom-opened storage container, is my work notebook from 1991. In that notebook, I am certain I would find an entry describing how I replaced a hard drive in a Novell server. That process required spending several hours one night loading disk after disk (5.25-inch floppy) and then restoring the drive from a set of tapes. If I remember correctly (updated as of 5.11.12, after an old friend who was with me that night emailed me the exact details of the drive purchased), we paid $3,500 for a 1G Seagate WREN STxxx drive overnighted from CDW. Twenty-one years later you can buy a 1G USB stick for $35. It all seems so quaint looking back on those days.
I was on the road visiting clients this week, as well as taking the time to meet with a few industry analysts. In many of those conversations the same meme keeps recurring, but people are still trying to quantify it, describe it, and formulate what it will look like. I describe it as being on the verge of leaving the murky, antediluvian period of networking that was forced upon us by having to solve the client/server problem and then the Internet problem. The good news is that those problems are solved. We now need to solve a new set of problems, and the solutions will have nothing to do with layered protocol stacks, spanning tree, and CLOS fabric debates. I first started writing about this a year ago. I reference these older musings because I think there is value in looking back to see the evolution of thought:
April 29, 2011: It is all about the Network
May 2, 2011: Next Stop
May 7, 2011: Waiting on the Exaflood
May 14, 2011: Exiting the Lost Decade
This year and next year, we will start deploying the new systems and new networks, and in a few years we are going to look back on the past thirty years and laugh at how complicated and rigid we made the network. There are many reasons why this will happen, and I am not the first person to describe it. Here is another person who is thinking the same way. In a year or two we will look back, as I did on my Novell server admin days, and reflect on how complicated technology was for such a long period of time. I think you will hear people say things like…
“Remember when it took us two years for a major code upgrade and it was 20 million lines of code.”
“I remember when we used to order computers and the big question was how much memory and how big a hard drive could you afford. Who cares now?”
“Networking used to be so complicated, now it is just a capacity dial with the end points being off and maximum.”
“In the old days, we used to design the network, now we just connect stuff and design the application.”
“Only three dials = total cost: compute, storage, and network.”
It is clear to me that there are two basic groups of people in the world of networking. Type A is the largest group: rigid network-construct people trying to force-fit everything into the layered protocol stack model. They are the people trying to figure out how to insert OpenFlow into the 20-30 million lines of code they have been writing for the past decade. They are also the people who think the answer to simplifying the network is adding complexity. What this group does not want to tell you is that if we rebooted all the data centers and the Internet at the same time, everything would work when it restarted. So why do we make it so complicated? The answer is that a fragile, distributed protocol stack was a good way of solving the client/server and Internet problems in the era of scarcity, but it is not the way to solve problems in the post-antediluvian period of networking.
Today, we are not solving problems from the 1980s or the 1990s. Ask yourself: if you were given the task of networking a data center or a group of data centers, and you were afforded a clean-sheet opportunity to design a network with various capabilities and no pre-conditions, would you really think that a distributed protocol and a 1952 CLOS design were the solution of choice? Hardly; in fact, a distributed protocol approach is probably the last solution you would choose. Anytime you have to distribute the configuration state and ensure that it is correct for N instances, you are asking for trouble. The distributed protocol stack is the origin of comments such as “don’t touch the network” and “if it is not broken, don’t change it.” It is from these pre-conditions that I have become an advocate of solving problems in the era of PLENTY with new approaches, and these approaches primarily begin with compute. Using a distributed protocol stack to guess your way up to a state answer that may or may not be correct is not a workable strategy for the future; it is a process from the past. The Google keynote at ONS 2012 reached the same conclusion and I wrote about it here.
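To make the contrast concrete, here is a minimal toy sketch in Python. This is my own illustration, not anything from the ONS keynote; the switch names and topology are invented. A controller with a global view of the graph computes every switch’s forwarding table in one deterministic pass and pushes the result, instead of letting N nodes guess their way toward a converged state:

```python
# Toy sketch: centrally computed forwarding state (hypothetical topology).
from collections import deque

# Global topology as known to the controller: adjacency list of switches.
TOPOLOGY = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c"],
}

def controller_compute_tables(topology):
    """One pass over the full graph yields every switch's next-hop table;
    the controller then pushes each table to its switch."""
    tables = {}
    for src in topology:
        # BFS from src gives a shortest-path first hop to every destination.
        next_hop = {}
        visited = {src}
        queue = deque((nbr, nbr) for nbr in topology[src])
        while queue:
            node, first_hop = queue.popleft()
            if node in visited:
                continue
            visited.add(node)
            next_hop[node] = first_hop
            for nbr in topology[node]:
                queue.append((nbr, first_hop))
        tables[src] = next_hop
    return tables  # correct by construction; no convergence phase needed

if __name__ == "__main__":
    for switch, table in controller_compute_tables(TOPOLOGY).items():
        print(switch, "->", table)
```

The point of the sketch is the shape of the computation: correctness is established once, centrally, rather than hoped for across N independently converging instances exchanging updates with their neighbors.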
The Type B group, which is smaller but growing, is comprised of people seeking a better answer to the complicated, murky, overly technical network infrastructure pre-condition set. The Type B person thinks the network should be a setting, like compute and storage, not a towering, teetering stack of code subdivided into seven layers. I met one of these people last week and he told me that capital was not his problem; his problem was the complicated mess that was his network, a network that serves hundreds of thousands of users, spans the globe, and is maintained by a few thousand people distributed around the world who fear having to make a change to the network.
If you want to know how to identify the Type A networking people, ask them what they or their company does. If they give you a list of priorities for their company that includes words like cloud, transformational, big data, little data, globalization, and connected smart communities, I would say you are talking to a Type A person.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **
The question in my mind is not “whether to SDN or not”. The question is “how to scale the centralised control past a few tens of nodes”. I have yet to see a convincing answer to that question.
Perhaps I should write a separate post on your observation, but the evolution of the network is a stochastic process. Graph theory and compute tell you what is possible. It is similar to a black swan event: before the known existence of black swans, all swans were assumed to be white. Then a black swan was found, and all previous knowledge of swans changed.
/wrk
Another thing perhaps worth clarifying further: what is meant by the “network” in the context of this particular discussion? Is it “the Internet”? Is it an individual service provider’s network? Is it an enterprise WAN (type A: a bunch of p2p links; or type B: a 3rd-party “cloud” (VPRN/VPLS))? A campus? A DC?
Pingback: New Network Meme « SIWDT
Pingback: Working Thoughts on SDN #5: Infrastructure as a System « SIWDT