Working Thoughts on SDN #5: Infrastructure as a System
I received a few comments and several emails about my last post, which was a surprise. I am always surprised by which posts receive responses; it is not something I am good at predicting. My last post was a quick one written somewhere over the middle of the country on a VA flight from LAX. I actually posted it to my blog using MarsEdit while drinking a scotch and glancing at the TV. For all the complaining I do about traveling, contemporary travel is far better than my early career years, when I had a choice of a smoking seat. Here is one of the comments from my last post that got me thinking:
V interesting. When you say “enterprise infrastructure” do you mean hardware or software. And is it across networking, storage, layer 4-7, middleware etc etc.
When I use the phrase “enterprise infrastructure” I am thinking about IT broadly, and that includes compute, storage, apps and the network. Perhaps I should start from a frame of reference that I reject: IT Doesn’t Matter. I enjoy reading Nicholas Carr, but he was wrong. We know that now, and when I hear people tell me about buying our IT resources from the great IT Utility, I know they are reading Carr and not speaking to IT practitioners. I can see why Carr could hypothesize (1) that IT would not matter in the future and (2) that a great IT utility in the sky would be created. These hypotheses were plausible because he did not know about the changes occurring at deep, fundamental levels within the IT infrastructure. It was entirely rational to look at the excesses of the bubble and expect that IT would not matter. From 1995 to 2000 ~$300B was spent fixing the Y2K problem. Either it was fixed really well or it did not matter. The hypothesis that IT would not matter really rests on the premise that IT infrastructure was complex, expensive, etc. and that cost efficiencies were achievable at scale, hence the concept of a central IT utility like a power company. I have commented several times in this blog that two weeks do not go by without an IT person telling me how they have exited, or are planning to exit, their IT outsourcing contract.
The theory of rational expectations finds its roots in the development of computational economics that began in the U.S. after the Second World War and accelerated through the work of John Muth in the early 1960s. The core of rational expectations is the belief that people form judgments, best guesses, or expected models of the future using all available information. If the same set of information is available to all people, the conclusions reached by separate people and groups will be the same or similar. Unexpected events, termed shocks by many economists, cannot be anticipated by anyone, and therefore everyone’s conclusion will be equally affected. Robert Lucas advanced the rational expectations hypothesis, and for his work he was awarded the Nobel Prize in Economics in 1995. His contribution was in promoting a theory that became known as the Lucas Critique, which states that predictions of the future built from historical variables and rational expectations will fail if an event alters the relationship between the variables (i.e. actors). Lucas went further and concluded that the market had only one equilibrium point, and it was from this point that participants in the market would begin to form opinions and judgments. This was important because of the rise of adaptive expectations.
The concept of adaptive expectations implies that people form judgments about the future based on what has happened in the past. If a series of shocks occurs, models using historical data will lead people to believe the shocks are the norm, a departure from the long-term past, and thus to anticipate further shocks in their predictions of the future. Hence, the use of adaptive expectations increases the level of endogenous risk in models or opinions of the future if the data used to build conclusions does not remain stationary. The process becomes self-fulfilling because the use of historical (i.e. lagging) indicators to develop models or expected results becomes driven by assumed future changes, events, or shocks. For example, economists and analysts often anticipate interest rate changes from central banks before those changes are actually announced and implemented. Financial models are then fine-tuned based upon the actual information, when it is made available to all people. When the newest data points are added to the historical data, the models produce a revised prediction of the future. Before a central bank changes interest rates, the financial markets are already trading on models built around the presumed equilibrium point of the market after the interest rate change. The changes occurring over the past decade in the modern data center and internet revealed themselves to be endogenous risks to the theory that IT would not matter.
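If you want to see adaptive expectations in a few lines of code, here is a minimal sketch (my own illustration, with arbitrary numbers, not anyone's trading model) of the standard error-correction form: each period's forecast is the prior forecast, adjusted by a fraction of the last forecast error. Feed it a run of shocks and the forecasts start treating the shocks as the norm:

```python
# A minimal sketch of adaptive expectations (illustrative numbers, not data):
# each period's forecast is the previous forecast, corrected by a fraction
# lam of the most recent forecast error.
def adaptive_forecasts(observations, lam=0.5, initial=0.0):
    """Return the forecast series E[t+1] = E[t] + lam * (x[t] - E[t])."""
    expectation = initial
    forecasts = []
    for x in observations:
        expectation += lam * (x - expectation)
        forecasts.append(expectation)
    return forecasts

# A run of shocks (the 5s) drags the forecasts toward treating 5 as the norm:
# the series climbs from under 1 toward 5 as the shocks persist.
print(adaptive_forecasts([1, 1, 1, 5, 5, 5]))
```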
Drawing upon historical data, economists and stock analysts build models that reflect their expectations for the future, based on assumed risks and the potential to create value. As the future is non-quantifiable, these models are based on assumptions. “If people were rational then their rationality would cause them to figure out predictable patterns from the past and adapt, so that past patterns would be completely useless for predicting the future,” [see Fooled by Randomness, Nassim Nicholas Taleb, 2001, page 98]. The assumptions used to model future expectations may or may not have some historical basis as a foundation. If we were building a five year model of General Motors’ ability to sell cars, we would have a massive amount of historical data to draw upon. If we were building a five year model of a new company called X that was just created, we would have no historical data upon which to predict revenues. Both models are highly speculative, with varying degrees of risk. In the early 1990s, as telecom deregulation neared and the internet began to permeate society, investment in the technology sector was driven by market models that predicted a shift in productivity, thus creating new value in new sectors of the economy. The concept of building an information-based economy was a central theme of the Telecom Act of 1996. Thought leadership was injected into society by economists, visionaries, analysts, venture capitalists, consultants and business leaders who created business cases based upon future projections of market development, market sizes, technology life-cycles, and the ability of companies to capture market share (i.e. revenues) and generate earnings. “Practitioners of these methods measure risks, using a tool of past history as an indication of the future. We will just say at this point that the mere possibility of the distributions not being stationary makes the entire concept seem like a costly (perhaps very costly) mistake,” [see Taleb, page 98].
I have been wrong too, and I have pointed out the errors in my thinking. If you are not making errors, you are not trying hard enough. IT and the modern data center have been transformed over the past decade. A data center of today does not look like a data center of ten years ago. The nature of applications has changed too. Servers spend more time talking to servers than they do to users. Servers form clusters, groups, and affinities of servers. I hear people say “cloud” to me a lot, but to me that is like saying good, bad, black, white. There are a lot of gray areas. When I talk to IT leaders, they talk about lowering the operating cost of their IT infrastructure. No one talks to me about transitioning to the cloud or big data. Yes, cloud, big data and BYOD are abstract, overarching constructs in the conversation, but I do not schedule a meeting with a client to talk about transitioning to the cloud and handling big data. I do schedule a meeting with a client to talk about the network, what they are trying to do with their IT infrastructure, and how the network needs to change or evolve to meet their IT goals. That is why the post from last week drew many refreshing comments from IT practitioners. Here is another example.
I had a short introductory meeting with a CTO last week. It was one hour and I did not intend to get into the details of what I was presenting; I expected to go over details in the next meeting. After about twenty minutes we were standing at the white board drawing out the network, which included storage, locations, fiber, servers, IPS, backup, DR, VRF, VPlex, ASRs, N7Ks, N5Ks, N2Ks, 6509s, etc. It was his IT infrastructure, and it was all acquired from a few of the big name vertically integrated IT providers. After a while he turned to me and said “I have a tank. That is my infrastructure. It is a tank. It can take many shots and keep going, but it takes a bunch of people to run it, I have no flexibility, I bought more than I need, it is in the wrong places, and my users and applications are changing and I cannot change my infrastructure fast enough to meet the needs of the users. I have people swiveling between provisioning systems. How can you help me?” He was clearly looking at his infrastructure and wondering about the post-Antediluvian period of networking. At the end of our conversation in his office, he said “look at what I am reading on my desk, it is a deployment and design guide for some of the stuff we bought and are still deploying. I am reading a guide to figure out what to do with it.”
I heard from another IT practitioner last week about the challenges within their network. This person told me about the need to sort their IT organization into those who understand the transformation trend and those who do not. When I look back on two weeks of interactions with IT professionals, the following themes from my prior post keep repeating:
- IT Practitioner: We need to have a focus on applications, as in defining the infrastructure from the needs of the application. Let the applications automate network infrastructure by having the applications tell the network what to build or provision.
- IT Practitioner: Our goals are to automate, normalize and build orchestration and contextual references to data.
- IT Practitioner: Networks were designed from the wires up, and that is bad. We are turning that upside down. We are designing networks that are application aware. We started with a modeling tool that looks at the app and designs the network to meet the needs of the app. We want to deliver the network to the application’s needs (a rough sketch of the idea follows below).
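To make that last theme concrete, here is a minimal, hypothetical sketch of application-down provisioning. None of the names or numbers below come from a real vendor API; they are mine, purely for illustration. The application declares its flows, and a trivial stand-in for a controller derives the segments that would need to be provisioned:

```python
# Hypothetical sketch of "application down" provisioning (illustrative only):
# the application declares its flows; the network is derived from them,
# not hand-configured from the wires up.
app_profile = {
    "app": "order-processing",
    "flows": [
        {"src": "web", "dst": "app", "latency_ms": 5, "mbps": 100},
        {"src": "app", "dst": "db",  "latency_ms": 2, "mbps": 500},
    ],
}

def derive_segments(profile):
    """Turn declared application flows into per-segment provisioning requests."""
    for flow in profile["flows"]:
        yield {
            "segment": f"{flow['src']}->{flow['dst']}",
            "qos": {"max_latency_ms": flow["latency_ms"], "min_mbps": flow["mbps"]},
        }

for request in derive_segments(app_profile):
    print(request)  # in a real system these would go to a controller, not stdout
```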
I do think there is confusion in the market as to what SDN is or will become. I have posted about this before. After ONS 2012, it is clear that everyone who sells service provider infrastructure equipment is now selling SDN. Their definition of SDN is definitely different from what I think of as SDN. I received the following from an analyst, who was using an SDN definition derived from Margaret Chiosi (MEF board member and Executive Director of Global Optical and Ethernet Service Development at AT&T Labs) at a recent SDN conference in Europe:
SDN enables networking applications to request and manipulate services provided by the network, and allows the network to expose network state back to the applications, for example:
- Give video traffic priority over e-mail
- Create rules for traffic coming from or going to a certain destination
- Quarantine traffic from a computer suspected of harboring viruses
- Try out new applications in the mobility market
- Design networks to recover quickly from network outages or congestion conditions
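For concreteness, the first example above is the kind of thing a flow rule can already express today. Here is a purely illustrative sketch (a hypothetical rule format, not any specific controller’s API) of matching video and e-mail traffic and queuing video ahead of e-mail. Notice that it still operates on protocols, ports and queues, that is, on the wires:

```python
# Illustrative flow rules (hypothetical format, not a real controller API):
# classify video and e-mail by transport ports and queue video first.
flow_rules = [
    {"match": {"ip_proto": "tcp", "dst_port": 554},  # e.g. RTSP video sessions
     "action": {"set_queue": 1}},                    # high-priority queue
    {"match": {"ip_proto": "tcp", "dst_port": 25},   # SMTP e-mail
     "action": {"set_queue": 3}},                    # best-effort queue
]
```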
I juxtaposed the quotes from my prior post (above) with the definition of SDN attributed to Chiosi to make a point. In fairness, I did not personally hear her definition; I am going by what I was told. The point I am making is about perspective. I have a hypothesis that a lot of the Service Provider-centric view of SDN was created by the Google SDN WAN presentation at ONS 2012. If you missed that, here is a link to my post after ONS.
The first observation I would make is that networks already do four or five of the examples listed, and I fail to see why we need SDN to do things we already do today. All five examples design the network from the perspective of the wires up. The network needs to be designed from the application down, and SDN enables the network to be freed from fixed configurations. From a previous post, here is what I think SDN is:
- Computation and algorithms; in a word, math. That is what SDN is.
- Derive network topology and orchestration from the application and/or the tenant. This is how SDN is different from switched networks.
- Ensure application/tenant performance and reliability goals are accomplished. This is part of the value proposition.
- Make network orchestration concurrent with application deployment. This is what SDN is for.
- A properly designed controller architecture has the scope, the perspective and the resources to efficiently compute and fit application instances and tenants onto the network. This is how SDN will work (a toy sketch follows below).
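As a toy illustration of that last point, and only that (the topology, numbers and greedy policy below are mine, not how any real controller works), here is a sketch of a controller with a global view fitting application demands onto the links with the most spare capacity:

```python
# Toy placement sketch: a controller with global scope greedily fits
# application demands onto the least-loaded link that can carry them.
# The topology and capacities are invented for illustration.
links = {"A-B": 10_000, "B-C": 4_000, "A-C": 6_000}  # spare capacity in Mbps

def fit(required_mbps):
    """Place a demand on the link with the most spare capacity that fits it."""
    candidates = [link for link, free in links.items() if free >= required_mbps]
    if not candidates:
        return None                        # no link can carry the demand
    best = max(candidates, key=links.get)  # global view enables a global choice
    links[best] -= required_mbps
    return best

print(fit(3_000))  # 'A-B' -- the link with the most headroom
print(fit(8_000))  # None -- after the first fit, nothing has 8,000 Mbps free
```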
If you have managed to read this far and are still coherent, you might have made it through the post I wrote a few weeks back on social networking. Check out this site for dedicated hosting and note the references to (1) Bitcoin, (2) censorship and (3) security. Another perspective on Bitcoin can be found here.
Here is a site that monitors the status of internet services like email, LinkedIn, Google, Twitter, etc. With so many people relying on services in the cloud, we now have sites reporting on sites. In closing, please remember that friends don’t let friends buy a core switch. More on that later.
/wrk