Content Delivery Networks (CDNs) #3

I have written a fair amount about CDNs here and here, as well as in a post specific to over the top (OTT) video here.  Last week it was with some degree of skepticism that I read this article in Bloomberg about AKAM being an M&A target of IBM or Verizon.  I know nothing about that speculation.  What I do know is a lot about service provider networks, CDNs, networking technology and how content is moved around the internet.

Start with the premise that AKAM “…may finally be cheap enough…” to be acquired.  I have objected to this notion in the past as a lazy thesis on equity prices.  I do not understand why people think that if a stock price declines, some other company will suddenly jump in and buy it because it is now cheap.  The stock price declined for a reason.  Ask the team at HPQ how well the PALM deal worked out after they waited for the stock price to decline.

The next flaw in the article is the notion that website acceleration and video consumption are one and the same.  As I have pointed out, this is a lazy thesis, one missing a detailed understanding of how content is provisioned, distributed and consumed.  This is called context.  People often confuse the initial technology solution provided by AKAM, which was an extremely intelligent and clever idea, with the “bandwidth explosion” quoted in the article.

When I hear grandstanding comments such as “bandwidth explosion” without some form of statistical reference, I just ignore them.  The initial solution that Akamai provided had more to do with serving distributed (i.e. localized) HTML content and querying for content from uncongested sources via alternative routes than with exploding internet bandwidth usage, video, the internet will collapse, blah, blah.  The argument can be made that if the internet worked well enough, meaning that service providers provisioned high capacity service to local users and removed oversubscription ratios in the hierarchy of the internet structure, thus flattening the network structure, then there would be little need for a distributed CDN; actual service providers or content providers (e.g. AAPL, GOOG, MSFT, YHOO, AMZN) could easily deploy this capability with little need for third party CDNs.  That is one of the reasons why we have seen the rise of centralized CDNs (e.g. Limelight) and software companies like PeerApp.  It is also the reason why we see a rise in SSDs, flash and the ability to deploy huge amounts of storage in the network.
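The original Akamai idea, serve each request from a nearby replica reached over an uncongested path, can be put in concrete terms.  This is a toy sketch under invented assumptions (the replica names, latencies and load figures are all hypothetical), not Akamai's actual selection algorithm:

```python
# Toy sketch of distributed-CDN replica selection: serve a client from
# the replica with the lowest combined distance/congestion cost.
# All names and numbers here are hypothetical.

replicas = {
    "edge-nyc": {"latency_ms": 12, "load": 0.40},
    "edge-chi": {"latency_ms": 25, "load": 0.10},
    "edge-sfo": {"latency_ms": 70, "load": 0.85},
}

def score(r):
    # Penalize both distance (latency) and congestion (load).
    return r["latency_ms"] * (1.0 + r["load"])

def pick_replica(replicas):
    # Choose the replica with the cheapest score for this client.
    return min(replicas, key=lambda name: score(replicas[name]))

print(pick_replica(replicas))  # the request never touches the origin
```

The point of the sketch is that the win comes from routing around congestion and shortening the path, not from any change in the total bits the user consumes.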

Let us all keep grounded in reality.  Akamai is a tremendous company and they have extremely valuable intellectual property.  The network is changing and that is why I think AKAM would want to be acquiring companies.  They need new solutions and technologies that address how the internet is changing as their legacy solutions become less relevant.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private.  Just hover over the Gravatar image and click for email. ** 

Content Delivery Networks (CDNs) 08.21.11

I had an email question two weeks ago regarding CDNs: where they are going or not going, and who will be the winners or losers.  I answered the question privately, but it gave me cause to think about content deep networking, CDNs and what is going on in the network because of the evolution to longer form data, or big data depending on the term you prefer.  There is no question that Web 1.0 (~1995-2000), built on small HTML files, is much different from Web 2.0 (~2002-2008) and Web 3.0 (~2009-?), with streaming HD content, state aware apps and access points at the edge that have higher connection speeds and capacities; all that being said, I am still a bit of an OTT skeptic.  Here is a chart I produced over a year ago using data from AKAM and CDN pricing trends.  The chart is not updated, but I think it shows the conundrum of having to serve longer form data in a market of declining ASPs.  I noted on the chart the start of iTunes, which is the poster child for the old content consumption model in which the user had to own the rights locally for the content.  In the new content model, which iTunes is now using too, the rights are licensed by the content provider (AAPL, NFLX, AMZN, etc.) and the end-user rents usage rights, usually for a monthly fee.
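The conundrum in that chart, longer form data served at declining ASPs, is easy to illustrate with arithmetic.  The growth and decline rates below are hypothetical round numbers, not figures from the AKAM data:

```python
# Toy illustration of the CDN conundrum: delivered traffic grows fast
# while the price per delivered unit (ASP) falls. Rates are invented.

traffic_tb = 100.0      # delivered traffic, arbitrary starting units
price_per_tb = 500.0    # ASP, arbitrary starting price

for year in range(1, 5):
    traffic_tb *= 1.45      # assume ~45% annual traffic growth
    price_per_tb *= 0.70    # assume ~30% annual ASP decline
    revenue = traffic_tb * price_per_tb
    print(f"year {year}: traffic={traffic_tb:.0f}, "
          f"ASP={price_per_tb:.0f}, revenue={revenue:.0f}")

# Revenue compounds at only 1.45 * 0.70 ~= 1.015 per year, while the
# capacity that must be provisioned more than quadruples over the run.
```

Under these assumed rates the CDN ends a four-year run serving roughly 4.4x the traffic for essentially flat revenue, which is the conundrum the chart was meant to show.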

When I wrote that I was an OTT skeptic, I meant that I find it hard to quantify the OTT problem and I find that service providers (SPs) find it hard to quantify the problem.  I think there is no shortage of emotion, but I am not sure everyone is talking about the same problem or maybe they are just using the perception of a problem to force a discussion about another subject matter, which is what I really believe.

To start, let us step back and ask what video/OTT problem service providers and the infrastructure companies are trying to solve.  Is it a bandwidth problem (i.e. real capacity constraints), a revenue problem (i.e. SPs want a share of NFLX revenues) or a CAPEX problem (i.e. SPs do not want to spend)?  I talk to a lot of people on many sides of the debate; I talk to equipment companies and I read the industry and investment reports.  I am skeptical when smart people tell me that it is a well known and understood problem that video is clogging the network.  Is it?  Can someone show me some stats?  When I read puff pieces like this, I struggle to grasp the meaning.

If OTT video is growing 40-50% over the next four years, that is somewhat meaningless to me because network technologies and network capacities are not static.  The whole OTT space is a bit of a conundrum.  There is a lot of noise around it, and that is good for selling, marketing and thought leadership, but the space seems vastly underinvested if the problem is on the scale it is made out to be.  I think the data center (compute) scaling into the network (more VMs on a Romley MB and the virtualization of the I/O) is a much, much bigger market.

What are CDNs really good at?  Distributed CDNs like AKAM are really good at distributed content hosting, like big file upgrades, and region-specific content distribution, like day-and-date releases.  iTunes is hosted by AKAM, and they do a good job of ensuring you cannot download UK-specific content in the US.  AKAM also offers really good site acceleration services for web properties that have low to medium traffic demands but might see a spike in traffic due to an unforeseen event.

Centralized CDNs like LLNW and LVLT do really well at serving up specific content events, and they are much better at hosting content that requires that state be updated; think of Gmail, which likes to update state on a regular basis.  Before thinking about CDNs, think about NFLX or YouTube (YT).

A year ago most service providers (SPs) who thought they had an OTT video problem viewed YT as the biggest problem, but as a problem it was small.  NFLX has overtaken YT traffic.  From a SP perspective, there are several ways to handle the problem of OTT video or user requested real time traffic.  (i) SPs can ignore it, (ii) SPs can meter bandwidth and charge consumers more for exceeding traffic levels, (iii) SPs can block it or (iv) SPs can deploy variations on content deep networking strategies.

Content deep strategies use products from companies like BTI Systems and JNPR (Ankeena acquisition) to mention a couple.  These companies deploy a caching CDN product in the network around the 10-50k user stub point.  The device replicates popular content that it sees requested from sites like NFLX (it is a learning algorithm) and thus the 10-50k user group does not have to traverse the entire network topology for popular content from streaming sites.
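The learning behavior described above, replicate an object locally once the stub's user group has proven it popular, can be sketched as a simple request-counting cache.  This is a minimal illustration with an invented threshold and eviction policy, not BTI's or Ankeena's actual algorithm:

```python
from collections import Counter

class PopularityCache:
    """Toy content-deep cache for a 10-50k user stub point: replicate an
    object locally once it has been requested `threshold` times."""

    def __init__(self, capacity=3, threshold=2):
        self.capacity = capacity
        self.threshold = threshold
        self.counts = Counter()   # requests seen per object
        self.store = {}           # objects replicated at the stub

    def request(self, key, fetch_upstream):
        self.counts[key] += 1
        if key in self.store:
            return self.store[key], "local"    # served from the stub
        obj = fetch_upstream(key)              # traverse the full topology
        if self.counts[key] >= self.threshold:
            if len(self.store) >= self.capacity:
                # Evict the least-requested replicated object.
                coldest = min(self.store, key=lambda k: self.counts[k])
                del self.store[coldest]
            self.store[key] = obj
        return obj, "upstream"

cache = PopularityCache()
origin = lambda k: f"video:{k}"
cache.request("nflx-123", origin)   # first hit goes upstream
cache.request("nflx-123", origin)   # second hit triggers replication
print(cache.request("nflx-123", origin))  # now served locally
```

Once the popular tail of NFLX or YT titles is pinned at the stub, repeat requests from that user group stop traversing the core, which is the effect the next paragraph describes.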

Similar to a cable node-splitting strategy, hosting popular content deeper in the network works well and seems to slow bandwidth consumption growth rates to very manageable levels.  CDNs win because they do not have to provision as much capacity, and the SPs win because they have less money going to the CDN and fewer capacity issues in the network.

The user experience is better too.  When you see ATT and LVLT wanting to build a CDN service (GOOG too) it is really about content deep and putting content local to the user.  This is something I wrote about in my blog back in April.  Recently, there were reports of LVLT and LLNW combining CDNs and this makes sense to me as scale will matter in the business.

In terms of BTI, I listened to a webinar they produced about a month ago, hosted on Dan Rayburn’s site.  BTI is claiming 10 content deep networking customers and a trial with a tier 1.  Specifically (if I heard the presentation correctly), they said that at the Tier 1 SP trial, OTT video traffic had been growing at 3% per month.  311 days after insertion, video traffic was growing at 0% a month, and that was during the rise of NFLX.  When BTI started their content deep solution it was all about YT, but this has changed in the last 9 months due to NFLX.

What I really think this entire debate is all about is money.  I put a chart in the April post that you can view here.  It is all about the chain of commerce.  Why did we pay $15 for an album in the 1980s and $9.99 for a CD in the 1990s?  The answer is that the chain of commerce could support that pricing model.  Today, the chain of commerce is shrinking and consumption habits have changed.  SPs do not want to be relegated to a “bits r us” business model.  They want a piece of the revenue stream that runs from the content creator, to the content owner, to the content distributor, to the CDN, to the SP and finally to the consumer.  I think the real issue is not the network; the network is being used as a facility to broker a bigger discussion about the division of revenues.  I could be wrong too, and the whole internet could collapse by 1996.

/wrk


July 2011 Tech Earnings 1.5: Network Transitions

Most of the tech investment world is trying to figure out what happened to JNPR in the last few months.  The Fast Money team on CNBC blamed Japan and macro.  I have counted five downgrades so far this morning.  A number of analysts are maintaining buy ratings, but I think they need to go back and do some research.  This is not a Japan problem, this is not a macro problem, and this is not a product transition problem.

I have written extensively that there is a major sea change going on in the world of networking.  I am too lazy to repeat what I have previously written, but I will provide a short reading list.  Read these posts:

Back from the Valley, Things are Going to be Different (June 4, 2011)

Three Dislocations, Will they Meet? (June 7, 2011)

Looking at the Big Picture and Connecting the Dots (June 14, 2011)

Clouds and the Network (June 24, 2011)

Thinking About Moore’s Law (July 4, 2011)

Networking is like the last dark art left in the world of tech.  The people who run networks are like figures out of Harry Potter.  Most of them have no idea what is in the network that they manage; they do not know how the network is connected, and they hope every day that nothing breaks the network.  If the network does break, their first reaction is to undo whatever was done to break the network and hope that fixes the problem.  Router, switch and firewall upgrades break the network all the time.  The mystics of the data center/networking world are the certified internetwork engineers.  These are people with special training, put on the planet to perpetuate and extend the overpaying for networking products because no one else knows how to run the network.

Intended network design has changed little in twenty years.  I look back in my notebooks at the networks I was designing in the early 1990s, and they look like the networks that the big networking companies want you to build today.  If you are selling networking equipment for CSCO, JNPR, BRCD, ALU, CIEN, etc., you go to work every day trying to perpetuate the belief that Moore’s Law rules.  You go to work every day and try to convince customers to extend the base of their networks horizontally to encompass more resources, to build the network up vertically through the core, to buy the most core capacity they can, and to hope the oversubscription model works.  When the core becomes congested or the access points slow down, come back to your vendor to buy more.  When you read the analyst reports that say JNPR is now a 2012 growth story, that is code for “we are hoping customers come back and buy more in 2012.”  Keep the faith.  Keep doing what you are doing.  Cue the music…don’t stop believin’.
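The oversubscription model the vendors are defending is simple arithmetic: sell far more aggregate access bandwidth than the uplink actually provisioned at the aggregation point.  The subscriber counts and speeds below are hypothetical:

```python
# Toy oversubscription calculation: access bandwidth sold versus
# core/uplink capacity actually provisioned. Numbers are invented.

subscribers = 10_000
access_mbps = 25            # per-subscriber access speed sold
uplink_gbps = 10            # provisioned uplink at the aggregation point

sold_gbps = subscribers * access_mbps / 1000
ratio = sold_gbps / uplink_gbps
print(f"sold {sold_gbps:.0f} Gbps against {uplink_gbps} Gbps uplink "
      f"-> {ratio:.0f}:1 oversubscription")

# The model holds only while average utilization stays low; when
# streaming raises the duty cycle, the vendor answer is "buy more core".
```

That last comment is the whole sales motion: as long as networks are built this way, congestion at the core is a recurring revenue event.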

Tonight we are going to get results from AKAM.  I will be curious to see the results and commentary because I think AKAM is another company looking at the evolution of the network and thinking this is not the network of 2000 or 2005.  The internet is no longer built vertically to a few core peering points.  Content is no longer static.  State is now distributed across the network, and state requires frequent if not constant updating.  Content is no longer little HTML files; it is long form content such as HD video, which other people are calling Big Data.  AKAM created amazing solutions for Web 1.0 and into Web 2.0, when the web was architected around the concept of a vertically built oversubscription model.  AKAM distributed content around the network and positioned content deeper in the network.  That is not the internet of today.

As always…thanks for reading.  I am very humbled by the emails I get from people I have never met.

/wrk
