Data Center and Bandwidth Metering Thoughts
I am off to CA for the week, so posting will be infrequent at best. GigaOM had a short piece on server architecture. I call attention to it because there are a number of debates going on in the world of “Cloud Networking-OTT-Data Centers-Bandwidth Metering-Server Virtualization-Net Neutrality,” or as it is frequently called by its initials CNOTTDCBMSWNN, which is pronounced “see-knott-d-see-bm-swin.”
As I have written before, I am in the high-power processor camp with Urs Hölzle. I am much more interested in high-powered core processors, virtualizing the I/O, running 50, 75, 100 VMs, and making the network element interface with the processor element to not only provide a layer 1-3 connection, but also proceed higher in the stack to manage VM flows through the I/O to the compute element. I think the evidence is clear that the network needs to ride the price/performance curves that storage (i.e. drives) and compute (i.e. microprocessors) have set. The network is the lagging element, and throwing more switches and transceivers at the problem is not a solution. If you are missing the context of my prior posts, read the networking section bottom up.
Metered billing and usage caps will not go away. NFLX had an editorial in the WSJ decrying bandwidth metering and usage caps. There is a lot to think about in the 578-word editorial. As a starting point, I understand why NFLX wants to promote unlimited bandwidth and low usage costs to support their business model. If I were on the NFLX leadership team, I would do the same. However, this is not a new debate. A few of my readers remember my old blog from 2006-2007. Free bandwidth for all was the subject of a series of posts on which I received the most email and comments. I even wrote a book in which the drive to unlimited bandwidth and the deconstruction of the regulated telecom industry on a global basis was the primary subject. I will start by repeating something I wrote back in February 2007, which was specific to LVLT:
Extra Fiber Story:
I do not believe in the promotion that extra fiber capacity provides L3 with a distinct competitive advantage. This comment was again propagated by a 24/7 Wall St. post on the breakup value of L3. Here is a quote from the 1999 L3 Annual Report on the company’s strategy to “…Become the Low Cost Provider of Communications Services,” page 4. The challenge with being the low-cost provider is that it usually means you are also the low-profit provider, and unless you have business scale on the level of Wal-Mart, it is hard to generate vast profits from low-cost bids. For those reading and thinking “low cost” and “low bid” are mutually exclusive, I would state that in the world of service provider pipes, low cost means lowest bid and winning on price. Now, L3 does have nice margins and they promote this fact in their presentations, and they have taken out other LH competitors to stabilize market prices, but something is wrong with the business model because they are not profitable. Joseph Nacchio was famous for using AT&T’s balance sheet to destroy LDD pricing in the 80s-90s in a war with MCI. Service provider wars are fought on price. I think L3 knows that competing on price in a bits-per-second market structure is not a strategy for long-term success. When Level(3) tells the world about all their excess wholesale fiber and low-cost network, I think the smart purchasing agents in companies like Google, Microsoft and Wal-Mart are eager to start the next round of contract negotiations, because they view optical circuits as a commodity. The question to be answered is: how do L3’s assets evolve into a profit mechanism that allows them to win and control market share?
Video Driving Bandwidth Story:
Here is the challenge with the whole video-is-consuming-internet-bandwidth story: too many people are promoting this story liberally, and detailed metrics are hard to find. The challenge has always been the last-mile connection to the end-user. I have posted in the past about how Verizon is affecting this market by pulling fiber to the home, with additional data here. Even though we are seeing increased broadband penetration in the US (read this link and this link), we are still missing data points to corroborate assertions that backbones are running out of capacity. One would assume that Verizon and AT&T, who have been very aggressive in pulling fiber or broadband to homes and offering triple-play services, would need to upgrade the capacities of their long-haul backbones to support the video traffic. Has anyone seen an RFP from Verizon or AT&T for a new backbone? I have not. I know they are adding 10G channels to support capacity demands, and I know they are looking at 40G solutions, but they are not pulling fiber and building new backbones. If the two biggest service providers are not upgrading their backbones, why should we believe that video is going to drive a vast amount of traffic (i.e. business) onto the L3 backbone?
Here is a 1999 profile of Level(3) from Network World: “Crowe’s Level 3 is building an end-to-end IP network that will – if everything goes according to plan – carry a big chunk of the converged voice and data traffic of tomorrow. Level 3 will build local networks in major U.S. cities and interconnect them over a fiber backbone, all at a cost of between $8 billion and $10 billion. Crowe’s aim: to undercut entrenched carriers on price and gobble up big corporate users’ rapidly growing data traffic. Crowe figures his advantage is that he’s building Level 3’s network, which won’t be fully operational until at least 2001, from the ground up to support IP data packets. The incumbent telcos, on the other hand, are struggling to retrofit their circuit-switched systems to handle all the data flows. That means higher costs for them and plenty of opportunity for him.”
When I look back on that post (note: some of the links might not work), it seems we are still having the same debates four years later. I am not a believer in the concept that bandwidth is free. It costs money to build networks, and the margin on broadband services is not great. Service providers are trapped in a “bits R us” model, and NFLX wants to use this network to deliver content from a centralized DC model. I do not see OTT as a network problem – it is an economic problem. I am sure if NFLX wanted to work out a deal with a service provider in which there was some sort of monetary exchange for each broadband sub accessing the service, we would not hear about the subject. Given the current network architecture choices, I do not see a service provider saying “here is $20B, build me a network with unlimited bandwidth that others can use, let me charge a user $75 a month, and let others build a far more profitable business over my network without bearing the cost of the $20B build.”
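To make the economics of caps and overage fees concrete, here is a toy model of a metered broadband bill. All figures (the $75 base fee, 250 GB cap, and per-GB overage rate) are hypothetical illustrations, not actual carrier pricing.

```python
# Toy model of metered billing: flat fee up to a usage cap,
# then a per-GB overage charge. All numbers are hypothetical.

def monthly_bill(gb_used: float, base_fee: float = 75.0,
                 cap_gb: float = 250.0, overage_per_gb: float = 1.50) -> float:
    """Return the subscriber's bill for the month."""
    overage_gb = max(0.0, gb_used - cap_gb)
    return base_fee + overage_gb * overage_per_gb

# A light user and a heavy video streamer pay very different amounts,
# which is exactly the behavior caps and metering are designed to produce.
print(monthly_bill(40))    # under the cap -> 75.0
print(monthly_bill(400))   # 150 GB over the cap -> 75 + 150 * 1.50 = 300.0
```

The point of the sketch is that the overage term is the only lever the provider has to recover cost from heavy OTT usage under this model, which is why the debate centers on who pays it: the subscriber or the content provider.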
What I do agree with in the NFLX editorial is that “bandwidth caps with fees piled on top are a lousy way to manage traffic.” That is one of the reasons why I think the network element is out of phase with the compute and storage elements. Bandwidth flow control needs to be in software and configurable by the user – not fixed in a hardware device. That was the conclusion I wrote in one of my very first posts on the network. It is written here and here.
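As an illustration of what software-based, user-configurable flow control can look like, here is a minimal token-bucket shaper. This is a generic sketch of a classic traffic-shaping algorithm, not anything from the posts linked above; the rate and burst parameters are arbitrary examples, and the point is simply that they are software settings a user could change, not properties baked into a box.

```python
# Minimal token-bucket traffic shaper. Tokens accrue at a configured
# sustained rate up to a burst ceiling; a packet is admitted only if
# enough tokens are available. All parameter values are illustrative.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps        # sustained rate in bits/second
        self.capacity = burst_bits  # maximum burst size in bits
        self.tokens = burst_bits    # start with a full bucket
        self.last = 0.0             # timestamp of last refill (seconds)

    def allow(self, packet_bits: float, now: float) -> bool:
        """Refill tokens for elapsed time, then admit or reject the packet."""
        elapsed = now - self.last
        self.last = now
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

bucket = TokenBucket(rate_bps=1_000_000, burst_bits=100_000)  # ~1 Mbps
print(bucket.allow(80_000, now=0.0))  # within the burst allowance -> True
print(bucket.allow(80_000, now=0.0))  # burst exhausted -> False
print(bucket.allow(80_000, now=0.1))  # 0.1 s refills 100,000 tokens -> True
```

Because the rate and burst are just variables, a policy change is a configuration update rather than a hardware swap – which is the contrast with fixed-function devices being drawn above.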
A colleague sent me an email last week in which he was using the Kübler-Ross five-stages-of-dying model (denial, anger, bargaining, depression, acceptance) to explain a technology adoption process. I thought it was entertaining, so I adapted his format to fit this blog post for the service provider, the legacy network infrastructure provider, and the OTT content provider. I remind readers that I view video as an application, and there are many types of OTT content such as gaming, higher-performance compute applications, etc. The OTT debate is not just about video; it is about any type of compute application (i.e. content) that is hosted in the data center and sent to the end-user in which state control is important.
| Stage | Service Provider | Network Infrastructure Provider | OTT Content Provider |
|---|---|---|---|
| Denial | Network is fast enough | Deploy more routers and switches | Bandwidth is free |
| Anger | Usage caps | OTN/Ethernet OAM hate speeches | Consumer access to unlimited bandwidth is good for society. It fosters innovation, drives commerce, and advances political and social discourse. Given that bandwidth is cheap and plentiful and will only grow more so with time, there is no good reason for bandwidth caps and fees to take root. |
| Bargaining | Bandwidth metering | Just do MPLS | Transfer payment? |
| Depression | Buy content / cloud providers | Lower margins | Growth limits / exclusivity deals |
| Acceptance | There is a limit to how much a user will pay | New solutions / new architectures | Digestion of new accord |
Many aspects of this process are playing out before us, but often we do not want to accept the implications of the changes we see evidence of, because we are limited by induction and survivorship biases. Already, in Europe and other parts of the world, wireless service providers are sharing capacity. With sharing comes rationing, and that is what we are seeing in the US with bandwidth caps. The final stage, which will start in the next 18-36 months and last for the better part of a decade, is a sustained process of adding capacity to networks; but this will only start when the chain of commerce for content finishes a recomposition period and network technologies and architectures that support new controls and content are proven into the network.
Miscellaneous Notes from Past Week and Thoughts on Week Ahead
A few more pre-negatives to close the week: XXIA (not a good indicator for service provider CAPEX), PFL.L (supply chain) and LG (mobile devices). More and more it looks like AAPL, Samsung and HTC are your mobile device market winners, with MMI, NOK and RIMM the share losers.
This coming week brings three preliminary technology reports: AMAT on Tuesday, ADTN on Wednesday (numbers out the night before, guidance around 10:50am on Wed) and GOOG on Thursday.
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **