The Mendoza Line for Networking Vendors
Wired posted one of those articles that again makes me ask, what year is it over there? The article describes a content deep strategy employed by a number of web properties. If you have been reading this blog over the past year, this is not news despite the "secret deals." Here are a few excerpts:
July 27, 2011: "The internet is no longer built vertically to a few core peering points. Content is no longer static. State is now distributed across the network, and state requires frequent if not constant updating. Content is no longer little HTML files; it is long form content such as HD video, which other people are calling Big Data. AKAM created amazing solutions for Web 1.0 and into Web 2.0, when the web was architected around the concept of a vertically built oversubscription model. AKAM distributed content around the network and positioned content deeper in the network. That is not the internet of today."
August 8, 2011: "A year ago most service providers (SPs) who thought they had an OTT video problem viewed YT as the biggest problem, but as a problem it was small. NFLX has overtaken YT traffic. From a SP perspective, there are several ways to handle the problem of OTT video or user-requested real time traffic. (i) SPs can ignore it, (ii) SPs can meter bandwidth and charge consumers more for exceeding traffic levels, (iii) SPs can block it or (iv) SPs can deploy variations on content deep networking strategies."
October 20, 2011: “[Cisco] acquired a private company in which they had previously made an investment. This is just further evidence of a content deep networking trend and that CDNs can easily be built by service providers.”
When I wrote the April 29, 2012 post about the “Compute Conundrum for Vendors” I was specifically thinking about being in the position of supplying networking equipment to service providers within the evolving trend of content deep networks.
There are really two types of suppliers of networking equipment: Group A, the suppliers who are in contact with the compute element (this is a small group), and Group B, the suppliers who are not in contact with the compute element (this is a large group). With content pushing deeper into the network (i.e. the internet edge land grab), it will become mandatory to have the ability to direct application workflows from the compute elements that are distributed throughout the network. This is the Mendoza Line for networking vendors. The question is how will this be done? Will the application flow function reside in the transport and switching portion of the traditional service provider network, or in the data center, which the central office (CO) will become over the next decade? I already posted the thesis that data centers will be the last mile of the twenty-tens, and clearly I believe that the vendors who are integrated at the compute/VM level will win and those who are not will lose.
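To make the Group A advantage concrete, here is a minimal sketch of what "directing application workflows from distributed compute elements" could look like: a controller that can see cache/compute sites in the network and steers a subscriber's flow to the nearest one. All of the site names, topology, and hop counts below are invented for illustration; this is not any vendor's actual API.

```python
# Hypothetical topology: edge sites a Group A vendor can see, with
# hop-distance from a subscriber's access point and whether the site
# hosts a content cache / compute element. Values are made up.
EDGE_SITES = {
    "central-office-1": {"hops": 2, "has_cache": True},
    "metro-pop-1":      {"hops": 5, "has_cache": True},
    "core-dc-1":        {"hops": 9, "has_cache": True},
    "transit-peer-1":   {"hops": 12, "has_cache": False},
}

def steer_flow(sites):
    """Pick the cache-bearing site closest to the subscriber.

    A Group B vendor, blind to the compute element, cannot make this
    decision; a Group A vendor can push the flow to the nearest cache.
    """
    candidates = [(s["hops"], name) for name, s in sites.items() if s["has_cache"]]
    return min(candidates)[1]

print(steer_flow(EDGE_SITES))  # nearest cache: central-office-1
```

The point of the sketch is only the decision itself: the flow-direction function needs visibility into where compute lives, which is exactly what separates the two groups.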
I now submit that being in Group B will become increasingly difficult, and that these edge networks described by Wired, which I have called content deep networks, are the exact point where SDN will find its initial foothold in the service provider portion of the network. As supporting evidence for this thesis I will submit a couple of points and let you decide, as I could be incorrect.
1. Google WAN. See this NW article or read my Compute Conundrum post.
2. DIY content…before (March 11, 2012) the Google presentation at ONS, I wrote about what Google said at OFC 2012. That post is here. "I was very interested to listen to Bikash Koley's answer to a question about content and global caching. He referenced the effect of Google's High Speed Global Caching network in Africa. This network built by Google is not without controversy. Here is a link to a presentation in 2008 about the formative roots of this network. My point is I increasingly see service providers and content owners taking a DIY approach to content, and these providers do not have to be the size of Google."
Side note: the Google global cache presentation is from 2008, so the internet edge land grab has been going on for ~5 years, and I would say the trend, and the thinking around the maturity of the solution, is further along than most people realize.
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **