Comcast/Netflix Deal…what does it mean?

I am curious to learn the details of the Comcast/Netflix deal that is being widely reported this afternoon.  Having spent the better part of the past twenty years selling equipment to service providers of all types, on most continents and in a variety of regulatory constructs, I have made net neutrality and OTT prominent subjects in my blogs over the past eight years.  SIWDT is coming up on three years old, and one benefit of that content is that it is searchable, so I can go back and critique my thoughts.  I did a search on “net neutrality” and it came back with four prior posts.  I carved out a relevant quote from each post:


Exaflood Does Not Make Sense

Another nice weekend on the east coast of the US, and there is so much to think about from the past week.   CSCO spent five billion of its offshore pile of cash on a video company.  ADTN announced a terrible quarter to start the year.  These two events are part of the conundrum of the past 10-11 years: when I hear people telling me that the internet is going to break, that the exaflood is coming and that video is a huge problem, it makes little sense to me.  At first I was going to write a big post with all sorts of charts, but the weather is too nice and I am not going to bother with the details.  It might be better to just state my opinion, and if you feel differently, go ahead and put money to work.  Just remember:

Innovative, Alternative Technology Solutions = Velocity

Developed Technology = Spent Capital, Doctrine, Incrementalism and Creativity Fail 

  • Since 2001, mobile phone and now smartphone growth has been great.  We have so many smartphones and now tablets that sales exceed a billion devices a year.  2G went to 3G and now 4G…all of it means more devices using more capacity.
  • Many of the local loop upgrades are done in the US.  DOCSIS 3.0, FTTx, xDSL…yes, we have solved the local loop bottleneck.  It took ten years, but we are on our way to universal broadband.
  • OTT video is everywhere, and soon we will have 15 or more NFLX copycats, because content, especially video, is now a DIY solution.
  • File sharing, Dropbox, Box.net, Megaupload, S3…yes, we have storage in the cloud.
  • State-aware apps like Gmail are the norm.

All of these drivers of bandwidth are ongoing in the ecosystem, and yet service provider CAPEX is lumpy!  CSCO has to buy a software-based STB/codec company, and ADTN, which is in the heart of the local loop upgrade market, has a huge miss because service provider CAPEX is lumpy, spotty, insert word of your choice.  Go look at any 10-year weekly chart of a service provider communication equipment company and you will see that the trend is down.  If the traffic trend is up, why is the equity value chart trend down?  Enough said on my part on the subject.

The NYT reported that CSCO is supporting a spin-in startup called Insieme.  I have a lot of thoughts on this subject, but I am going to take a few days to collect and organize them; it will be the topic of my next post.  I posted some additional thoughts over on the Plexxi blog as to what I am seeing in the DC market.

 

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. **

2007 Thesis on P2P Video, Bandwidth and Broadband

I have been thinking a lot about bandwidth, telecom services and the traditional telecom equipment market of selling to service providers.  I posted a snippet of my thinking in my 11.23.11 Notebook post: “Telecom Equipment Markets: I sent the four charts to the left to a friend the other day.  Both of us had careers in networking in the 1990s.  He came back at me with the following argument: ‘Carrier traffic growth is 40-60% annually, carrier CAPEX growth is ~3% annually and carrier revenue growth is <10% annually.’  The only way to reconcile that construct is to drop the cost per bit.  Who will bear the burden of that cost reduction?  I think the most likely candidate is the equipment providers.”
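To make the squeeze concrete, here is a back-of-envelope sketch of my own (my arithmetic, using the midpoint of the traffic range above, not figures from the original post):

```python
# Sketch: if traffic grows ~50%/yr but CAPEX only grows ~3%/yr, the
# capital cost per bit carried must fall every year just to keep pace.
# Assumed rates are the midpoints of the ranges quoted above.

traffic_growth = 0.50   # midpoint of the 40-60% annual range
capex_growth   = 0.03   # ~3% annual CAPEX growth

cost_per_bit = 1.0      # normalized to 1.0 in year zero
for year in range(1, 6):
    # cost per bit scales with CAPEX divided by traffic carried
    cost_per_bit *= (1 + capex_growth) / (1 + traffic_growth)
    print(f"Year {year}: cost per bit = {cost_per_bit:.2f}x of year zero")
```

Run that out and cost per bit has to land around 0.15x the starting level after five years, roughly a 31% decline per year that someone in the chain has to absorb.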

Content Delivery Networks (CDNs) 08.21.11

I had an email question two weeks ago regarding CDNs: where they are going or not going, and who will be the winners and losers.  I answered the question privately, but it gave me cause to think about content deep networking, CDNs and what is going on in the network because of the evolution to longer form data, or big data, depending on which term you prefer.  There is no question that Web 1.0 (~1995-2000), built on small HTML files, is much different than Web 2.0 (~2002-2008) and Web 3.0 (~2009-?), with streaming HD content, state-aware apps and access points at the edge that have higher connection speeds and capacities; all that being said, I am still a bit of an OTT skeptic.  Here is a chart I produced over a year ago using data from AKAM and CDN pricing trends.  The chart is not updated, but I think it shows the conundrum of having to serve longer form data in a market of declining ASPs.  I noted on the chart the start of iTunes, which is the poster child for the old content consumption model in which the user had to own the rights locally for the content.  In the new content model, which iTunes uses too, the rights are licensed by the content provider (AAPL, NFLX, AMZN, etc.) and the end-user rents usage rights, usually for a monthly fee.

When I wrote that I was an OTT skeptic, I meant that I find it hard to quantify the OTT problem, and I find that service providers (SPs) find it hard to quantify the problem too.  There is no shortage of emotion, but I am not sure everyone is talking about the same problem; maybe they are just using the perception of a problem to force a discussion about another subject, which is what I really believe.

To start, let us step back and ask what video/OTT problem service providers and the infrastructure companies are trying to solve.  Is it a bandwidth problem (i.e. real capacity constraints), a revenue problem (i.e. SPs want a share of NFLX revenues) or a CAPEX problem (i.e. SPs do not want to spend)?  I talk to a lot of people on many sides of the debate; I talk to equipment companies and I read the industry and investment reports.  I am skeptical when smart people tell me that it is a well known and understood problem that video is clogging the network.  Is it?  Can someone show me some stats?  When I read puff pieces like this, I struggle to grasp the meaning.

If OTT video is growing 40-50% over the next four years, that is somewhat meaningless to me because network technologies and network capacities are not static.  The whole OTT space is a bit of a conundrum.  There is a lot of noise around it, which is good for selling, marketing and thought leadership, but the space seems vastly underinvested if the problem is on the scale it is made out to be.  I think the data center (compute) scaling into the network (more VMs on a Romley MB and the virtualization of the I/O) is a much, much bigger market.

What are CDNs really good at?  Distributed CDNs like AKAM are really good at distributed content hosting, like big file upgrades, and region-specific content distribution, like day-and-date releases.  iTunes is hosted by AKAM, and they do a good job of ensuring you cannot download UK-specific content in the US.  AKAM also offers really good site acceleration services for web properties that have low to medium traffic demands but might see a spike in traffic due to an unforeseen event.

Centralized CDNs like LLNW and LVLT do really well at serving up specific content events, and they are much better at hosting content that requires state to be updated; think of Gmail, which likes to update state on a regular basis.  Before thinking about CDNs, though, think about NFLX and Youtube.com (YT).

A year ago, most service providers (SPs) who thought they had an OTT video problem viewed YT as the biggest offender, but as a problem it was small.  NFLX has since overtaken YT traffic.  From a SP perspective, there are several ways to handle the problem of OTT video, or user-requested real-time traffic: (i) SPs can ignore it, (ii) SPs can meter bandwidth and charge consumers more for exceeding traffic levels, (iii) SPs can block it or (iv) SPs can deploy variations on content deep networking strategies.

Content deep strategies use products from companies like BTI Systems and JNPR (Ankeena acquisition), to mention a couple.  These companies deploy a caching CDN product in the network around the 10-50k user stub point.  The device replicates popular content that it sees requested from sites like NFLX (it is a learning algorithm), so the 10-50k user group does not have to traverse the entire network topology to reach popular content from streaming sites.
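For the curious, here is a minimal sketch of how such a learning cache might work, assuming a simple popularity counter per content item; the actual BTI/Ankeena algorithms are proprietary and certainly more sophisticated than this:

```python
# Illustrative stub-point cache: count requests per content item and
# serve the hottest items locally instead of hauling them across the
# core. Not the vendors' algorithm, just the general idea.
from collections import Counter

class StubCache:
    def __init__(self, capacity=100):
        self.capacity = capacity      # items we can hold locally
        self.requests = Counter()     # popularity seen at this stub
        self.store = {}               # locally replicated content

    def fetch(self, content_id, origin_fetch):
        self.requests[content_id] += 1
        if content_id in self.store:
            return self.store[content_id]        # served locally
        data = origin_fetch(content_id)          # traverse the core
        # replicate only if this item is among the most-requested
        hottest = {cid for cid, _ in self.requests.most_common(self.capacity)}
        if content_id in hottest:
            if len(self.store) >= self.capacity:
                coldest = min(self.store, key=lambda c: self.requests[c])
                del self.store[coldest]          # evict least popular
            self.store[content_id] = data
        return data
```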

Similar to a cable node-splitting strategy, hosting popular content deeper in the network works well and seems to slow bandwidth consumption growth rates to very manageable levels.  CDNs win because they do not have to provision as much capacity, and the SPs win because they have less money going to the CDN and fewer capacity issues in the network.

The user experience is better too.  When you see ATT and LVLT (and GOOG) wanting to build a CDN service, it is really about going content deep and putting content local to the user.  This is something I wrote about in my blog back in April.  Recently there were reports of LVLT and LLNW combining CDNs, and this makes sense to me, as scale will matter in this business.

In terms of BTI, I listened to a webinar they produced about a month ago, hosted on Dan Rayburn’s site.  BTI is claiming 10 content deep networking customers and a trial with a Tier 1.  Specifically (if I heard the presentation correctly), they said that at the Tier 1 SP trial, OTT video traffic had been growing at 3% per month; 311 days after insertion, video traffic was growing at 0% a month, and that was during the rise of NFLX.  When BTI started their content deep solution it was all about YT, but that has changed in the last 9 months due to NFLX.
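To put that 3% per month in context (my arithmetic, not a figure from the webinar), compounding matters:

```python
# 3% per month compounds to roughly 43% per year, so flattening it
# to 0% is a meaningful result if the claim holds.
monthly = 0.03
annual = (1 + monthly) ** 12 - 1
print(f"3% per month ~= {annual:.1%} per year")   # ~42.6%
```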

What I really think this entire debate is about is money.  I put a chart in the April post that you can view here.  It is all about the chain of commerce.  Why did we pay $15 for an album in the 1980s and $9.99 for CDs in the 1990s?  The answer is that the chain of commerce could support that pricing model.  Today the chain of commerce is shrinking and consumption habits have changed.  SPs do not want to be relegated to a “bits r us” business model.  They want a piece of the revenue stream that runs from the content creator, to the content owner, to the content distributor, to the CDN, to the SP and finally to the consumer.  I think the real issue is not the network; the network is being used as a facility to broker a bigger discussion about the division of revenues.  I could be wrong too, and the whole internet could collapse by 1996.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private.  Just hover over the Gravatar image and click for email. **

Waiting on the Exaflood

I was just down in my basement looking at the area around my FiOS FTTH equipment (pic below).  I have been expecting a flood of data any day now.  I have been so concerned about the impending exaflood that I have been considering a home defense.  What would it cost to build a Petabyte Wall (1,000 TBs) to handle the incoming flood of data?  There have been plenty of warnings of the impending event.

If I wanted to build my PB Wall out of 32GB flash drives, I would need 31,250 sticks.  That would set me back ~$2M assuming I could get a volume discount.  There are some nice 12TB NAS systems, so I thought that 84 of those systems for $124k might be a better option.  Google is offering 1TB for $256 per year, so that would set me back $256k for my PB Wall, but the problem with this option is that it is Google’s personal storage product and I need a commercial-class solution, not a digital locker for files.  Here are four options for my PB Wall, assuming I could direct the flood of bytes to the storage options:

| Solution | Price Per GB | Price Per TB | PB Wall Total Cost |
|---|---|---|---|
| 32 GB Flash Drives | $2.34 | $2,340.00 | $2.34M |
| 12TB NAS (Seagate) | $0.12 | $124.91 | $124k |
| Google Storage (Developers) | $0.17 | $170 | $2.04M (1 Yr) |
| Amazon S3 | $3.72 | $3,726.49 | $3.72M (1 Yr) |

A few items to note in my high-level quest to build a PB Wall.  Amazon and Google both have upload/download transaction costs.  I calculated the cost to fill a TB with the Amazon pricing tool; the Google numbers do not include this cost.  Google charges $0.10 per GB for upload and $0.15 per GB for download in the Americas and EMEA; if you are in APAC, make that $0.30 per GB for download.   Google and Amazon also charge $0.01 per 1,000 PUT, POST, LIST, GET and HEAD requests.  Note those are transaction costs for compute, which is a recurring theme in my blog.  Uploading a PB to Google each month would cost me $100k, or an additional $1.2M per year, putting the total Google cost for my PB Wall at $3.24M.  Throw in some more charges and Google and Amazon are pretty close.  My total costs do not include power and cooling for my in-home NAS and flash drive solutions, not to mention the time it would take to figure out how to wire 31,250 flash drives together.
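Here is a rough calculator for the cloud option, a sketch using only the list prices quoted above; a real bill would add request charges and tiered discounts:

```python
# Rough PB Wall calculator using the 2011-era prices quoted above:
# $0.17/GB/month storage (Google Storage for Developers) and
# $0.10/GB upload. Verify against current rate cards before trusting.
PB_IN_GB = 1_000_000            # 1 PB = 1,000 TB = 1,000,000 GB

def cloud_wall_cost(storage_per_gb_month, upload_per_gb, months=12):
    """One year of storage plus a single full upload of 1 PB."""
    storage = storage_per_gb_month * PB_IN_GB * months
    upload  = upload_per_gb * PB_IN_GB
    return storage, upload

g_storage, g_upload = cloud_wall_cost(0.17, 0.10)
print(f"Google: ${g_storage/1e6:.2f}M/yr storage + ${g_upload/1e3:.0f}k per full upload")
```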

Where is the Exabyte Flood?

I am not going to take the time to critique the various predictions from 2007-2008 about the impending exaflood and the internet breaking.  These types of hyperbole always lack humility and neglect to correct for black swans and human adaptation.  That is what networks do.  Networks adapt, because they are managed by humans, and humans adapt.  Not a lot of people were talking about broadband usage caps back in 2007-2008.  Not many people thought that service providers would throttle bandwidth connections.  Verizon FiOS offers storage with their internet service for $0.07 per GB per month, or $0.95 per GB per year.  My PB Wall from Verizon using my FiOS service would cost me an additional $950k per year.

If I were to provide a short answer to the complicated question of the exaflood, I would say the answer lies in the Shannon-Hartley theorem, and that this law from 1927 will have more to do with network design and build-out over the next decade than life before or after television.  In the past, it was easy to deploy more bandwidth to obtain more capacity: buy another T1, upgrade to a DS3, get me a GbE or more.  Today we are approaching the upper end of spectral efficiency, and this is going to force networks to be built differently.  As I stated in a prior post, I think transmission distances will decline, more compute (i.e. data centers) will be put into the network and bandwidth-limiting devices like ROADMs and routers/switches that have an OEO conversion will go away, probably on the same timeline as the TDM-to-packet evolution.  This means the adoption rate is slower than first predicted, and just when despair at the slow adoption takes hold, the adoption rate rapidly increases and continues to gain momentum as the new network models are proved in.
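For reference, the theorem says channel capacity is C = B·log2(1 + S/N), and the logarithm is the whole story: more signal power buys less and less capacity.  A small sketch, assuming a 50 GHz channel (a common DWDM grid spacing, my assumption, not a figure from this post):

```python
# Shannon-Hartley: capacity grows linearly with bandwidth but only
# logarithmically with signal-to-noise ratio.
import math

def capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 50e9  # assumed 50 GHz channel
for snr_db in (10, 20, 30):
    snr = 10 ** (snr_db / 10)
    print(f"SNR {snr_db} dB -> {capacity_bps(B, snr)/1e9:.0f} Gb/s ceiling")
```

A hundredfold increase in SNR (10 dB to 30 dB) does not even triple the ceiling, which is why squeezing more out of the same spectrum eventually loses to building more, shorter links.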

The Network Always Adapts

The other assumption missed by the exaflood papers was that the network adapts.  It just does.  People who run networks put more capacity in the network for less money, because the people who build the network infrastructure are always working to improve their products.

One market effect I know is that when discrete systems in the network become harmonized or balanced, there is a lot of money to be made.  Look at the Fibre Channel market: when adapters, drives and fabrics converge around a performance level like 2Gb or 4Gb, the market becomes white hot.  The same goes for the 1G and 10G optical transmission markets.  Today we are in a maturing 10G market, there is a transition market called 40G, but the real magic is going to happen at 100G.  At 100G, huge efficiencies start to occur in the network as it relates to I/O, compute process, storage, etc.  With the building of huge datacenters, how much bandwidth is required to service 100k servers?  These large datacenters are being built for many reasons, such as power, cooling and security, but the one reason that is often not quoted is processing and compute.  There have been really innovative solutions to the compute problem that I wrote about before, such as RVBD and AKAM.  I look at what the Vudu team did for meshed, stub storage of content on a community of systems.  Is this a model for the future, wherein numerous smaller data centers look like a community of Vudu devices?
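As a back-of-envelope answer to my own 100k-server question, assume 10GbE per server and a 4:1 oversubscription ratio at the aggregation layer (both are my assumptions, not vendor figures):

```python
# Rough aggregate bandwidth for a 100k-server datacenter.
servers = 100_000
nic_gbps = 10                 # assumed 10GbE per server
oversubscription = 4          # assumed 4:1 at aggregation

edge_total = servers * nic_gbps              # 1,000,000 Gb/s at the edge
core_total = edge_total / oversubscription   # 250,000 Gb/s toward the core
print(f"Edge: {edge_total/1000:,.0f} Tb/s, core (4:1): {core_total/1000:,.0f} Tb/s")
print(f"100G links needed at the core: {core_total/100:,.0f}")
```

Even heavily oversubscribed, that is thousands of 100G links, which is why the harmonization around 100G matters so much.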

Analytics Matter

Going forward, in a network world of usage caps, distributed storage and parallel processing, I know one element that will need to be solved: service levels.  Commercial and consumer end-users want to get what they are paying for, and service providers do not want to be taken advantage of in the services they are offering.  Security and defined service level agreements will push down to the individual user, just as search and content are being optimized and directed to the group of one from broader demographic groups.  Why are there wireless broadband usage caps?  Because there is a spectrum problem, and that same problem and solution set is coming to your FTTH connection sooner than you think.  Why do you think Google and Amazon are charging for compute transactions?  Anyone who used a mainframe in the days when you had to sign up for processing time is having flashbacks and wondering what happened to all the punch cards.

The reason Google and Amazon charge for compute is that transactions cost money.  The whole digital content revolution, disintermediation, OTT video, music downloads, YT, blah, blah, whatever you want to call it, goes back to one force, and that is transaction economics.  The profit margin that can be derived from selling content has declined and continues to decline.  Distribution is easier; hence the transaction cost is smaller, resulting in a lower profit margin and supporting fewer intermediaries.  It does not mean the cost is zero; it just means less.  What is the cost to store, compute and transmit content?  Answer that, add some profit margin at each step and you know the future.
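A toy model of that last sentence, with placeholder per-GB costs and a flat margin at each step (all numbers are illustrative, not sourced):

```python
# "Answer that, add some profit margin at each step": stack the
# store/compute/transmit costs per GB, marking up at each stage.
def delivered_price(store, compute, transmit, margin=0.20):
    price = 0.0
    for stage_cost in (store, compute, transmit):
        price = (price + stage_cost) * (1 + margin)
    return price

# e.g. $0.03/GB storage, $0.01/GB compute, $0.05/GB transit (placeholders)
print(f"Delivered price: ${delivered_price(0.03, 0.01, 0.05):.3f}/GB")
```

Shrink any stage cost, or cut out a stage, and the delivered price falls with it; that is the disintermediation story in three lines of arithmetic.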

The companies providing the tools and systems for analytics around the economic transactions of store, compute and transmit (SCT) are going to be big winners.

/wrk

** It is all about the network stupid, because it is all about compute. **