It appears my supposition from 2006 will prove to be correct. There was an article in Forbes recently about Intel’s set top box (STB) initiative. I will save you from reading the article by quoting the key paragraph:
Wired posted one of those articles that again makes me ask what year is it over there? The article describes a content deep strategy by a number of web properties. If you have been reading this blog over the past year this is not news despite the “secret deals.” Here are a few excerpts:
July 27, 2011: “The internet is no longer built vertically to a few core peering points. Content is no longer static. State is now distributed across the network and state requires frequent if not constant updating. Content is no longer little HTML files; it is long form content such as HD video, which other people are calling Big Data. AKAM created amazing solutions for Web 1.0 and into Web 2.0 when the web was architected around the concept of a vertically built oversubscription model. AKAM distributed content around the network and positioned content deeper in the network. That is not the internet of today.”
August 8, 2011: “A year ago most service providers (SPs) who thought they had an OTT video problem viewed YT as the biggest problem, but as a problem it was small. NFLX has overtaken YT traffic. From a SP perspective, there are several ways to handle the problem of OTT video or user requested real time traffic. (i) SPs can ignore it, (ii) SPs can meter bandwidth and charge consumers more for exceeding traffic levels, (iii) SPs can block it or (iv) SPs can deploy variations on content deep networking strategies.”
October 20, 2011: “[Cisco] acquired a private company in which they had previously made an investment. This is just further evidence of a content deep networking trend and that CDNs can easily be built by service providers.”
When I wrote the April 29, 2012 post about the “Compute Conundrum for Vendors” I was specifically thinking about being in the position of supplying networking equipment to service providers within the evolving trend of content deep networks.
There are really two types of suppliers of networking equipment: Group A, the suppliers who are in contact with the compute element (a small group), and Group B, the suppliers who are not (a large group). With content pushing deeper into the network (i.e. the internet edge land grab), it will become mandatory to have the ability to direct application workflows from the compute elements distributed throughout the network. This is the Mendoza Line for networking vendors. The question is how this will be done. Will the application flow function reside in the transport and switching portion of the traditional service provider network, or in the data center, which is what the central office (CO) will become over the next decade? I have already posted the thesis that data centers will be the last mile of the twenty-tens, and clearly I believe that the vendors who are integrated at the compute/VM level will win and those who are not will lose.
I now submit that being in the B group will be increasingly difficult, and that these edge networks described by Wired, which I have called content deep networks, are exactly where SDN will find its initial foothold in the service provider portion of the network. As supporting evidence for this thesis I will submit a couple of points and let you decide, as I could be incorrect.
1. Google WAN. See this NW article or read my Compute Conundrum post.
2. DIY content…before the Google presentation at ONS, I wrote (March 11, 2012) about what Google said at OFC 2012. That post is here. “I was very interested to listen to Bikash Koley’s answer to a question about content and global caching. He referenced the effect of Google’s High Speed Global Caching network in Africa. This network built by Google is not without controversy. Here is a link to a presentation in 2008 about the formative roots of this network. My point is I increasingly see service providers and content owners taking a DIY approach to content, and these providers do not have to be the size of Google.”
Side note: the Google global cache presentation is from 2008, so the internet edge land grab has been going on for ~5 years, and I would say the trend, and the thinking around the maturity of the solution, is further along than most people realize.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **
A few reactions to reports from yesterday, this morning and other news:
RIMM: I got an email last night asking me if I was following the activists who are posturing for changes at RIMM; one of their proposals was to break up the company into three companies: devices, networks and patents. I have no idea how or why you would want to separate networks from devices; this is not NOK or ERIC with large independent business units. RIMM’s devices are directly tied to their NOC operations, and I do not think that separation is even possible. It is beyond me why people with a poor understanding of the company’s technical operations are promoted by various media outlets when all they are doing is talking their book.
RVBD: The numbers look fine. My working theory on RVBD is that if there is a slowing trend of spending in enterprises, RVBD should do better in that environment: they provide a lower cost solution that extends the life of the current network and lets enterprises put off the upgrades that are what is really needed to solve network performance issues. In time, I think RVBD’s WAN acceleration will go the way of the distributed CDN, but that is probably a few years out.
ERIC: Wow…those margins suck, but I would say that mid-30s is going to be the new normal over the LT and some companies will need to adjust to that trend.
ATT: The CAPEX number is out and it is $5.22B for Q3. Waves of relief emails are pouring into my inbox. I have been bearish on ATT CAPEX, as noted in prior posts, and I still think something is amiss based on APKT, PWAV and JNPR commentary. I modeled ATT CAPEX a little ahead of this number, and if ATT still plans to spend ~$20B in CAPEX for the year, then the Q4 number should come in around $5.30-5.34B. In all, that still tells me that multiples are going to finish their correction/contraction process. The CFO will probably make a statement on CAPEX later this morning on their call. I have updated the charts I posted in July and added a simple chart of ATT CAPEX back to 2002. The wireless revenue miss is going to cause all sorts of questions to be asked. Does this mean they have made enough investment in wireless or not enough? I will wait to make a final call on ATT CAPEX until I hear from CSCO in November and CIEN in December.
CSCO: Acquired a private company in which they had previously made an investment. This is just further evidence of the content deep networking trend and that CDNs can easily be built by service providers. I covered all this in prior posts.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **
I have written a fair amount about CDNs here, here as well as a post specific to over the top (OTT) video here. Last week it was with some degree of skepticism that I read this article in Bloomberg about AKAM being an M&A target of IBM or Verizon. I know nothing about that speculation. What I do know is a lot about service provider networks, CDNs, networking technology and how content is moved around the internet.
Start with the “…may finally be cheap enough…” premise. I have objected to this notion in the past as a lazy thesis on equity prices. I do not understand why people think that if a stock price declines, some other company is suddenly going to jump in and buy it because it is now cheap. The stock price declined for a reason. Ask the team at HPQ how well that PALM deal worked out after they waited for the stock price to decline.
The next flaw in the article is the notion that website acceleration and video consumption are one and the same. As I have pointed out, this is a lazy thesis; a detailed understanding of how content is provisioned, distributed and consumed is missing from the quote. This is called context. People often confuse the initial technology solution provided by AKAM, which was an extremely intelligent and clever idea, with the “bandwidth explosion” quoted in the article.
When I hear grandstanding comments such as “bandwidth explosion” without some form of statistical reference, I just ignore them. The initial solution that Akamai provided had more to do with serving distributed (i.e. localized) HTML content and querying for content from uncongested sources via alternative routes than with exploding internet bandwidth usage, video, the internet will collapse, blah, blah. The argument can be made that if the internet worked well enough, meaning service providers provisioned high capacity service to local users and removed oversubscription ratios in the hierarchy of the internet structure (thus flattening the network), then there would be little need for a distributed CDN, because service providers or content providers (e.g. AAPL, GOOG, MSFT, YHOO, AMZN) could easily deploy this capability themselves with little need for third party CDNs. That is one of the reasons why we have seen the rise of centralized CDNs (e.g. Limelight) and software companies like PeerApp. It is also the reason why we see a rise in SSDs, flash and the ability to deploy huge amounts of storage in the network.
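To make that original idea concrete, here is a toy sketch of serve-from-the-closest-uncongested-replica logic. This is not Akamai’s actual mapping system; the server names, latencies and load figures are made up for illustration.

```python
# Toy illustration of the original distributed-CDN idea: serve a request from a
# replica that is both close to the user and uncongested. Names and numbers are
# invented for the example.

def pick_replica(replicas, max_load=0.8):
    """Return the lowest-latency replica whose load is below max_load.
    Fall back to the least-loaded replica if every candidate is congested."""
    healthy = [r for r in replicas if r["load"] < max_load]
    if healthy:
        return min(healthy, key=lambda r: r["rtt_ms"])
    return min(replicas, key=lambda r: r["load"])

replicas = [
    {"name": "edge-bos-1", "rtt_ms": 8,  "load": 0.92},  # closest, but congested
    {"name": "edge-nyc-2", "rtt_ms": 14, "load": 0.55},  # slightly farther, uncongested
    {"name": "origin-sjc", "rtt_ms": 72, "load": 0.30},  # distant origin server
]

print(pick_replica(replicas)["name"])  # -> edge-nyc-2
```

The value was never “more bandwidth”; it was steering each request around the congested parts of a vertically built network.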
Let us all keep grounded in reality. Akamai is a tremendous company and they have extremely valuable intellectual property. The network is changing and that is why I think AKAM would want to be acquiring companies. They need new solutions and technologies that address how the internet is changing as their legacy solutions become less relevant.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **
This is a small add-on to the post I had on CDNs a few weeks ago. GigaOm had a post on Cotendo using Equinix data centers to scale their CDN business. What I find interesting about this strategy is that it provides another method for service providers, and in this example a startup CDN, to leverage network infrastructure to build out and scale up a CDN service. In my prior post I focused on companies using new network appliances to deploy a content deep strategy that mimics the capabilities of larger CDNs. I think that is one of the big differences between the world of Web 3.0 and Web 1.0, and I summarized that difference by stating that “….content deep networking strategies, increased fiber penetration and software based network controls (e.g. ADCs in virtual form, cloud based SLA controllers, etc) that enable the end-user to control the network access point in a world of state aware apps and real time content is not the best environment for companies that provide solutions intended to improve the inefficiencies of the network. In other words, companies that created value by improving or fixing the network may see less applicability if the network requires less fixing and less improving.” I would add that the ability to scale up a service using the vast amount of cloud based infrastructure deployed in the last ~5 years has an adverse effect on the value of that service. Just a thought heading into the weekend; I could easily be incorrect.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **
I had an email question two weeks ago regarding CDNs, where they are going or not going, and who will be the winners or losers. I answered the question privately, but it gave me cause to think about content deep networking, CDNs and what is going on in the network because of the evolution to longer form data, or big data depending on which term you prefer. There is no question that Web 1.0 (~1995-2000), built on small HTML files, is much different than Web 2.0 (~2002-2008) and Web 3.0 (~2009-?), with streaming HD content, state aware apps and access points at the edge that have higher connection speeds and capacities; all that being said, I am still a bit of an OTT skeptic. Here is a chart I produced over a year ago using data from AKAM and CDN pricing trends. The chart is not updated, but I think it shows the conundrum of having to serve longer form data in a market of declining ASPs. I noted on the chart the start of iTunes, which is the poster child for the old content consumption model in which the user had to own the rights to the content locally. In the new content model, which iTunes is using too, the rights are licensed by the content provider (AAPL, NFLX, AMZN, etc) and the end-user rents usage rights, usually for a monthly fee.
When I wrote that I am an OTT skeptic, I meant that I find it hard to quantify the OTT problem, and I find that service providers (SPs) find it hard to quantify the problem as well. There is no shortage of emotion, but I am not sure everyone is talking about the same problem; maybe they are just using the perception of a problem to force a discussion about another subject, which is what I really believe.
To start, let us step back and ask what video/OTT problem service providers and the infrastructure companies are trying to solve. Is it a bandwidth problem (i.e. real capacity constraints), a revenue problem (i.e. SPs want a share of NFLX revenues) or a CAPEX problem (i.e. SPs do not want to spend)? I talk to a lot of people on many sides of the debate; I talk to equipment companies and I read the industry and investment reports. I am skeptical when smart people tell me that it is a well known and understood problem that video is clogging the network. Is it? Can someone show me some stats? When I read puff pieces like this, I struggle to grasp the meaning.
If OTT video is growing 40-50% over the next four years, that is somewhat meaningless to me because network technologies and network capacities are not static. The whole OTT space is a bit of a conundrum. There is a lot of noise around it, and that is good for selling, marketing and thought leadership, but the space seems vastly underinvested if the problem is on the scale it is made out to be. I think data center (compute) scaling into the network (more VMs on a Romley motherboard and the virtualization of I/O) is a much, much bigger market.
What are CDNs really good at? Distributed CDNs like AKAM are really good at distributed content hosting like big file upgrades and regional specific content distribution like day and date. iTunes is hosted by AKAM and they do a good job of ensuring you cannot download content specific to the UK in the US. AKAM also offers really good site acceleration services for web properties that have low to medium traffic demands, but might have a spike in traffic due to an unforeseen event.
Centralized CDNs like LLNW and LVLT do really well at serving up specific content events, and they are much better at hosting content that requires state to be updated; think Gmail, which likes to update state on a regular basis. Before thinking about CDNs, think about NFLX or Youtube.com (YT).
A year ago most service providers (SPs) who thought they had an OTT video problem viewed YT as the biggest problem, but as a problem it was small. NFLX has overtaken YT traffic. From a SP perspective, there are several ways to handle the problem of OTT video or user requested real time traffic. (i) SPs can ignore it, (ii) SPs can meter bandwidth and charge consumers more for exceeding traffic levels, (iii) SPs can block it or (iv) SPs can deploy variations on content deep networking strategies.
Content deep strategies use products from companies like BTI Systems and JNPR (Ankeena acquisition), to mention a couple. These companies deploy a caching CDN product in the network around the 10-50k user stub point. The device replicates popular content that it sees requested from sites like NFLX (it uses a learning algorithm), so the 10-50k user group does not have to traverse the entire network topology to reach popular content from streaming sites.
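To make the mechanism concrete, here is a toy sketch of what such a caching appliance does conceptually: count requests at the stub point and keep local copies of the objects that cross a popularity threshold. The vendors’ actual learning algorithms are certainly more sophisticated; the capacity and threshold numbers here are purely illustrative.

```python
from collections import Counter, OrderedDict

class StubCache:
    """Toy transparent cache for a 10-50k subscriber stub point: serve popular
    objects locally, fetch everything else upstream. Illustrative only."""

    def __init__(self, capacity=1000, popularity_threshold=3):
        self.capacity = capacity
        self.threshold = popularity_threshold
        self.hits = Counter()          # request counts per object
        self.store = OrderedDict()     # object_id -> cached bytes, in LRU order

    def request(self, object_id, fetch_upstream):
        self.hits[object_id] += 1
        if object_id in self.store:              # cache hit: no upstream traffic
            self.store.move_to_end(object_id)
            return self.store[object_id]
        data = fetch_upstream(object_id)         # cache miss: traverse the network
        if self.hits[object_id] >= self.threshold:
            self.store[object_id] = data         # object is popular, keep it local
            if len(self.store) > self.capacity:  # evict the least recently used
                self.store.popitem(last=False)
        return data

# Usage: after a title is requested twice, the third request is served locally.
cache = StubCache(capacity=2, popularity_threshold=2)
for _ in range(3):
    cache.request("title-123", fetch_upstream=lambda oid: b"...video bytes...")
```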
Similar to a cable node-splitting strategy, hosting popular content deeper in the network works well and seems to slow bandwidth consumption growth rates to very manageable levels. CDNs win because they do not have to provision as much capacity, and the SPs win because less money goes to the CDN and there are fewer capacity issues in the network.
The user experience is better too. When you see ATT and LVLT wanting to build a CDN service (GOOG too) it is really about content deep and putting content local to the user. This is something I wrote about in my blog back in April. Recently, there were reports of LVLT and LLNW combining CDNs and this makes sense to me as scale will matter in the business.
In terms of BTI, I listened to a webinar they produced about a month ago, hosted on Dan Rayburn’s site. BTI is claiming 10 content deep networking customers and a trial with a tier 1. Specifically (if I heard the presentation correctly), they said that at the tier 1 SP trial, OTT video traffic had been growing at 3% per month; 311 days after insertion, video traffic was growing at 0% a month, and that was during the rise of NFLX. When BTI started their content deep solution it was all about YT, but that has changed in the last 9 months due to NFLX.
What I really think this entire debate is about is money. I put a chart in the April post that you can view here. It is all about the chain of commerce. Why did we pay $15 for an album in the 1980s and $9.99 for CDs in the 1990s? The answer is that the chain of commerce could support that pricing model. Today, the chain of commerce is shrinking and consumption habits have changed. SPs do not want to be relegated to a “bits r us” business model. They want a piece of the revenue stream that runs from the content creator, to the content owner, to the content distributor, to the CDN, to the SPs and finally to the consumer. I think the real issue is not the network; the network is being used as a facility to broker a bigger discussion about the division of revenues. I could be wrong too and the whole internet could collapse by 1996.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **
I was just down in my basement looking at the area around my FiOS FTTH equipment (pic below). I have been expecting a flood of data any day now. I have been so concerned about the impending exaflood that I have been considering a home defense. What would it cost to build a Petabyte Wall (1,000 TB) to handle the incoming flood of data? There have been plenty of warnings of the impending event.
If I wanted to build my PB Wall out of 32GB flash drives, I would need 31,250 sticks. That would set me back ~$2M, assuming I could get a volume discount. There are some nice 12TB NAS systems, so I thought that 84 of those systems for ~$124k might be a better option. Google is offering 1TB for $256 per year, so that would set me back $256k for my PB Wall, but the problem with this option is that it is Google’s personal storage product and I need a commercial class solution, not a digital locker for files. Here are four options for my PB Wall, assuming I could direct the flood of bytes to these storage options:
Solution | Price Per GB | Price Per TB | PB Wall Total Cost |
32 GB Flash Drives | $2.34 | $2,340.00 | $2.34M |
12TB NAS (Seagate) | $0.12 | $124.91 | $124k |
Google Storage (Developers) | $0.17 | $170 | $2.04M (1 Yr) |
Amazon S3 | $3.72 | $3,726.49 | $3.72M (1 Yr) |
A few items to note in my high level quest to build a PB Wall. Amazon and Google both have upload/download transaction costs. I calculated the cost to fill a TB with the Amazon pricing tool; the Google numbers do not include this cost. Google charges $0.10 per GB for upload and $0.15 per GB for download in the Americas and EMEA; if you are in APAC, make that $0.30 per GB for download. Google and Amazon also charge $0.01 per 1,000 PUT, POST, LIST, GET and HEAD requests. Note that those are transaction costs for compute, which is a recurring theme on this blog. To upload a PB to Google each month would cost me $100k, or an additional $1.2M per year, putting the total cost for my PB Wall from Google at $3.24M. Throw in some more charges and Google and Amazon are pretty close. My total costs do not include power and cooling for my in-home NAS and flash drive solutions, not to mention the time it would take to figure out how to wire 31,250 flash drives together.
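For anyone who wants to check the arithmetic, here is the back-of-envelope math in a few lines of Python, using only the prices quoted above (2011-era figures from this post, not current list prices):

```python
# PB Wall back-of-envelope math, decimal units: 1 PB = 1,000 TB = 1,000,000 GB.
PB_IN_GB = 1_000_000
PB_IN_TB = 1_000

flash   = 2.34 * PB_IN_GB        # $2.34/GB for 32 GB flash sticks (table above)  -> ~$2.34M
nas     = 124.91 * PB_IN_TB      # $124.91/TB for 12 TB NAS units (table above)   -> ~$125k
g_store = 0.17 * PB_IN_GB * 12   # $0.17/GB/month, Google developer storage       -> ~$2.04M/yr
g_xfer  = 0.10 * PB_IN_GB * 12   # $0.10/GB upload, refreshing the full PB monthly -> ~$1.2M/yr

print(f"32 GB flash sticks : ${flash / 1e6:.2f}M")
print(f"12 TB NAS units    : ${nas / 1e3:.0f}k")
print(f"Google storage     : ${g_store / 1e6:.2f}M per year")
print(f"Google + uploads   : ${(g_store + g_xfer) / 1e6:.2f}M per year")
```

Run it and the Google total lands at the $3.24M figure above; request charges and download fees would only push it higher.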
Where is the Exabyte Flood?
I am not going to take the time to critique the various predictions from 2007-2008 about the impending exaflood and the internet breaking. These types of hyperbole always lack humility and neglect to correct for black swans and human adaptation. That is what networks do. Networks adapt because they are managed by humans, and humans adapt. Not a lot of people were talking about broadband usage caps back in 2007-2008. Not many people thought that service providers would throttle bandwidth connections. Verizon FiOS offers storage with their internet service for $0.07 per GB per month, or roughly $0.95 per GB per year. My PB Wall from Verizon using my FiOS service would cost me an additional $950k per year.
If I were to provide a short answer to the complicated question of the exaflood, I would say the answer lies in the Shannon-Hartley theorem, and that this law from 1927 will have more to do with network design and build-out over the next decade than life before or after television. In the past, it was easier to deploy more bandwidth to obtain more capacity: buy another T1, upgrade to a DS3, get me a GbE or more. Today we are approaching the upper end of spectral efficiency, and this is going to force networks to be built differently. As I stated in a prior post, I think transmission distances will decline, more compute (i.e. data centers) will be put into the network, and bandwidth limiting devices like ROADMs and routers/switches that have an OEO conversion will go away, probably on the same timeline as the TDM to packet evolution. This means the adoption rate is slower than first predicted, and just when despair at the slow adoption takes hold, the adoption rate rapidly increases and continues to gain momentum as the new network models are proved in.
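To see why I keep pointing at Shannon-Hartley, here is the one-line calculation. The 50 GHz channel width and the SNR values are illustrative numbers only, not a claim about any specific system:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley: C = B * log2(1 + S/N), the ceiling on error-free
    throughput for a channel of a given bandwidth and signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative only: a 50 GHz channel at a few example SNRs.
for snr_db in (10, 20, 30):
    snr = 10 ** (snr_db / 10)                  # convert dB to a linear ratio
    c = shannon_capacity_bps(50e9, snr)
    print(f"SNR {snr_db:2d} dB -> ~{c / 1e9:.0f} Gb/s in a 50 GHz channel")
```

The point is the diminishing return: each tenfold increase in signal power buys only about 3.3 more bits per second per hertz, which is why the old answer of simply lighting more capacity in the same channel eventually stops scaling.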
The Network Always Adapts
The other assumption missed by the exaflood papers was that the network adapts. It just does. People who run networks put more capacity in the network for less money, because the people who build the network infrastructure are always working to improve their products.
One market effect I know is that when discrete systems in the network become harmonized or balanced, there is a lot of money to be made. Look at the fibre channel market: when adapters, drives and fabrics converge around a performance level like 2G or 4G, the market becomes white hot. The same goes for the 1G and 10G optical transmission markets. Today we are in a maturing 10G market, there is a transition market called 40G, but the real magic is going to happen at 100G. At 100G, huge efficiencies start to occur in the network as they relate to I/O, compute processes, storage, etc. With the building of huge data centers, how much bandwidth is required to service 100k servers? These large data centers are being built for many reasons such as power, cooling and security, but the one reason that is often not quoted is processing and compute. There have been really innovative solutions to the compute problem that I wrote about before, such as RVBD and AKAM. I look at what the Vudu team did for meshed, stub storage of content on a community of systems. Is this a model for the future, wherein numerous smaller data centers look like a community of Vudu devices?
Analytics Matter
Going forward, in a network world of usage caps, distributed storage and parallel processing, I know one element that will need to be solved, and that is service levels. Commercial and consumer end-users want to get what they are paying for, and service providers do not want to be taken advantage of in the services they are offering. Security and defined service level agreements will push down to the individual user, just as search and content are being optimized and directed to the group of one from broader demographic groups. Why are there wireless broadband usage caps? Because there is a spectrum problem, and that same problem and solution set is coming to your FTTH connection sooner than you think. Why do you think Google and Amazon charge for compute transactions? Anyone who used a mainframe in the days when you had to sign up for processing time is having flashbacks and wondering what happened to all the punch cards.
The reason Google and Amazon charge for compute is that transactions cost money. The whole digital content revolution, disintermediation, OTT video, music downloads, YT, blah, blah, whatever you want to call it, goes back to one force and that is transaction economics. The profit margin that can be derived from selling content has declined and continues to decline. Distribution is easier; hence the transaction cost is smaller, resulting in a lower profit margin that supports fewer intermediaries. It does not mean the cost is zero, it just means less. What is the cost to store, compute and transmit content? Answer that, add some profit margin at each step, and you know the future.
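Here is a toy version of that store, compute and transmit calculation. The unit costs are entirely made up (loosely shaped like the cloud prices quoted earlier in this post); the point is the structure of the calculation, not the numbers:

```python
def cost_to_serve_gb(store_gb_month, compute_per_1k_requests, transmit_per_gb,
                     months_stored=1, requests=1_000, margin=0.20):
    """Toy store/compute/transmit (SCT) model: the cost to deliver one GB of
    content, with a profit margin applied at each step. All inputs are assumptions."""
    store = store_gb_month * months_stored              # cost to hold the bytes
    compute = compute_per_1k_requests * (requests / 1_000)  # cost of the transactions
    transmit = transmit_per_gb                           # cost to move the bytes
    return sum(step * (1 + margin) for step in (store, compute, transmit))

# Entirely illustrative unit costs.
price = cost_to_serve_gb(store_gb_month=0.14, compute_per_1k_requests=0.01,
                         transmit_per_gb=0.12)
print(f"~${price:.3f} per GB delivered")
```

Shrink any of the three inputs and the sustainable price of content falls with it, along with the number of intermediaries the chain of commerce can support.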
The companies providing the tools and systems for analytics around the economic transactions of store, compute and transmit (SCT) are going to be big winners.
/wrk
* It is all about the network stupid, because it is all about compute. *