An Open Letter to Hobbyists

  • Bandwidth is the Software of the Network
  • Regulating the Single Network Pipe is Driving Forward while looking in the Rearview Mirror

With age and experience comes the ability to clearly spot irony. In 1976, Bill Gates sent an open letter to computer hobbyists expressing his displeasure with software piracy. The letter even has a Wikipedia page.  When I read the FCC proposals regarding net neutrality, I feel like we have been over this ground before. Thirty-nine years ago Bill Gates wrote his letter to hobbyists, and the majority of it is worth reading in the context of the net-neutrality debate: Continue reading

Completely Unscientific Study of Consumer Internet Prices

Bandwidth is deflationary, and I find any arguments to the contrary to be foolish. This is a subject I have written about before here, here, here and here. Over the past few weeks, I have been reminded that it is always easier to solve most networking problems by applying bandwidth. A few weeks ago I found myself reading one of Marc Andreessen's tweet streams of consciousness and I replied (see below). After posting the tweet, I wondered whether I was correct.

[Screenshot: tweet reply to Marc Andreessen, 2014-09-07]
Continue reading

Service Provider CAPEX, Bits R Us and Compute Conundrum

What is wrong with service provider CAPEX?  Specifically I am referring to commentary from CIEN this week and ALU in July.  I think I have been writing about this trend for over a year, but I am going to summarize it (read through prior postings for more details):

– Connected devices (e.g. computers, tablets, smartphones) cause vastly more bandwidth to be consumed inside the data center than over the network

– Wi-Fi offloading relieves RAN congestion

– CDNs position content closer to the consumption point

– Bandwidth is deflationary

– I have coverage at my residence for video (TV) from: Comcast, RCN, Verizon, DirecTV, Dish

– I have coverage at my residence for my mobile devices from: ATT, VZ, Sprint, MetroPCS, Boost, T-Mobile, VirginMobile and I think three others

– Every two months Comcast, Verizon and RCN mail me offers and canvass my neighborhood knocking on doors offering more for less

– ARPU is really capped by disposable income.  Here is a link to a chart at the St. Louis Fed.  You can play with it, but the general uptrend from 1959 was decisively broken in 2008.  We are now just recovering.  I am not sure how much more consumers can pay to service providers and to content companies like Netflix, Apple, Amazon, etc.

– Recent CAPEX trends in terms of expansion and contraction are decisively in favor of content companies like Google, Amazon, Yahoo, Apple and Microsoft.  Traditional service providers still spend a lot of CAPEX, but it is to maintain and expand a transport network.  Content providers are spending their CAPEX on compute and data centers.

This is what I call the compute conundrum.

– The unspoken or ignored trend is the ability of content companies to (1) build their own data centers for compute, (2) store user data and desirable content in these data centers, (3) build their own networks by leasing fiber and (4) offload the access business (which is deflationary) to the service providers.

– Every year we hear the siren’s song for a CAPEX rebound on lazy reasoning like “billions of connected devices, trillions of streaming videos, transformational network projects” and what we get is a lot of spending at lower prices.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. ** 

Service Provider Bandwidth: Does it Matter?

Let us start with a question: does service provider bandwidth matter?  Perhaps a better question would be: is service provider bandwidth a meaningful problem to work on?  I think it does matter, but I am not certain it is meaningful.  This post is not going to be a scientific study and it should not be construed as the start of a working paper.  This post is really a summary of my observations as I try to understand the significance of the messaging in the broader technology ecosystem.  I sometimes call these posts framing exercises.  I am really trying to organize and analyze disparate observations, urban myths and the inductive logical failings of doctrine.

Frame 1: Bandwidth Pricing Trend

There is no debate on this point; the price trend of bandwidth is more for less.  Bandwidth is deflationary until someone shows me a data set that proves it is inflationary.  I agree that bandwidth is not ubiquitous and that it is unevenly distributed, but that falls into the category of: life is not fair; get used to it.  In areas where there is a concentration of higher-wage-earning humans organized into corporations with the objective of being profit centers, there seems to be an abundance of bandwidth and the trend in bandwidth is deflationary.  Here are a few links:

  • Dan Rayburn, Cost To Stream A Movie Today = Five Cents; in 1998 = $270 (see the back-of-envelope sketch after this list): “In 1998 the average price paid by content owners to deliver video on the web was around $0.15 per MB delivered. That’s per bit delivered, not sustained. Back then, nothing was even quoted in GB or TB of delivery as no one was doing that kind of volume when the average video being streamed was 37Kbps. Fast forward to today where guys like Netflix are encoding their content at a bitrate that is 90x what it was in 1998.  To put the rate of pricing decline in terms everyone can understand, today Netflix pays about five cents to stream a movie over the Internet.”
  • GigaOm: See the 2nd Chart.
  • Telegeography: See the chart “MEDIAN GIGE IP TRANSIT PRICES IN MAJOR CITIES, Q2 2005-Q2 2011”
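
Since the Rayburn numbers above do a lot of work, here is a quick back-of-envelope sketch of the deflation he describes. The two-hour movie length and the present-day per-GB delivery price are assumptions on my part, so the output will not match his $270/$0.05 headline exactly; the orders of magnitude are the point.

```python
# Back-of-envelope sketch of the Rayburn comparison above. Only the
# 37 Kbps, 90x and $0.15/MB figures come from the quote; the two-hour
# movie and the ~$0.02/GB present-day price are illustrative assumptions.

def megabytes_delivered(bitrate_kbps, seconds):
    """MB delivered for one stream at a constant bitrate."""
    return bitrate_kbps * 1000 * seconds / 8 / 1e6

movie_seconds = 2 * 60 * 60                                # assume a two-hour title
mb_today = megabytes_delivered(37 * 90, movie_seconds)     # ~3,000 MB

cost_at_1998_prices = mb_today * 0.15                      # $0.15 per MB in 1998
cost_at_todays_prices = mb_today * 0.02 / 1000             # assume ~$0.02 per GB now

print(f"Movie delivered at today's bitrate: ~{mb_today:,.0f} MB")
print(f"At 1998 pricing:      ${cost_at_1998_prices:,.0f}")
print(f"At ~$0.02/GB pricing: ${cost_at_todays_prices:.2f}")
```

Roughly $450 versus roughly six cents under these assumptions, which is the same four-orders-of-magnitude story.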

Frame 2: Verizon Packet/Optical Direction

Here is a presentation by Verizon at the Infinera DTN-X product briefing day.  The theme of the presentation is that the network is exhausted due to 4G LTE, video, FTTx, etc., and that this is driving the need for more bandwidth, including 100G in the metro and 400G and even terabit ethernet in the core.  I have heard these arguments for terabit ethernet before; I am firmly in the minority that it is a network design/traffic engineering problem, not a bandwidth problem to be solved.  It took the world fifteen years to move from 1G to 10G; I wonder how long it will take to get to terabit ethernet.

Frame 3: Are the design assumptions incorrect?

When I look at the network, I think of it as a binary solution set.  It can connect and it can disconnect.  For many decades we have been building networks based on the wrong design assumptions.  I have been posting on these errors in prior posts.  Here is a link to a cloud hosting company.  I know this team and I know their focus has been the highest IOPS in their pod architecture.  We could use any cloud provider to make the point, but I am using Cloud Provider USA because of the simplicity of their pricing page.  All a person has to do is make five choices: DC location, CPU cores, memory, storage and IP addresses.  Insert credit card and you are good to go.  Did you notice what is missing?  Please tell me you noticed what is missing; of course you did.  The sixth choice is not available yet: network bandwidth, the on or off network function.  The missing value is not the fault of the team at Cloud Provider USA; it is the fault of those of us who have been working in the field of networking.  Networking has to be simple: on or off, and at what bandwidth.  I know it is that simple in some places, but my point is that it needs to be as easy to configure and purchase as the DC-CPU-memory-storage-IP options presented on the Cloud Provider USA website.  My observation is that the manner in which we design networks results in a complexity that is prohibitive to ease of use.
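
To make the missing sixth choice concrete, here is a hypothetical sketch of what the order form could look like if bandwidth were sold the same way cores, memory and storage are. The field names are invented for illustration and are not Cloud Provider USA's actual API.

```python
from dataclasses import dataclass

# Hypothetical provisioning request. The first five fields mirror the
# choices the pricing page exposes today; "bandwidth_mbps" is the missing
# sixth choice argued for above. All names are invented for illustration.

@dataclass
class InstanceOrder:
    dc_location: str      # which data center
    cpu_cores: int
    memory_gb: int
    storage_gb: int
    ip_addresses: int
    bandwidth_mbps: int   # the on/off network function, sold as simply as the rest

order = InstanceOrder(
    dc_location="US-East",
    cpu_cores=4,
    memory_gb=16,
    storage_gb=200,
    ip_addresses=2,
    bandwidth_mbps=100,   # pick a committed rate the same way you pick cores
)
print(order)
```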

Frame 4: Cisco Cloud Report

I think most people have read Cisco’s Cloud Report.  Within the report there are all sorts of statistics and charts that go up and to the right.  I want to focus on a couple of points they make in the report:

  • “From 2000 to 2008, peer-to-peer file sharing dominated Internet traffic. As a result, the majority of Internet traffic did not touch a data center, but was communicated directly between Internet users. Since 2008, most Internet traffic has originated or terminated in a data center. Data center traffic will continue to dominate Internet traffic for the foreseeable future, but the nature of data center traffic will undergo a fundamental transformation brought about by cloud applications, services, and infrastructure.”
  • “In 2010, 77 percent of traffic remains within the data center, and this will decline only slightly to 76 percent by 2015.  The fact that the majority of traffic remains within the data center can be attributed to several factors: (i) Functional separation of application servers and storage, which requires all replication and backup traffic to traverse the data center (ii) Functional separation of database and application servers, such that traffic is generated whenever an application reads from or writes to a central database (iii) Parallel processing, which divides tasks into multiple smaller tasks and sends them to multiple servers, contributing to internal data center traffic.”

Here is my question based on the statistics above.  If 77% of traffic stays in the data center, what is the compelling reason to focus on the remaining 23%?

Frame 5: Application Awareness and the Intelligent Packet Optical Conundrum

I observe various transport-oriented equipment companies, as well as service providers (i.e. their customers) and CDN providers (i.e. quasi-service-provider competitors), discussing themes such as application awareness and intelligent packet optical solutions.  I do not really know what is meant by these labels.  They must be marketing terms, because I cannot find the linkage between applications and IP transit, lambdas, optical bandwidth, etc.  To me a pipe is a pipe is a pipe.

The application is in the data center; it is not in the network.  Here is a link to the Verizon presentation at the SDN Conference in October 2011.  The single most important statement in the entire presentation occurs on slide 11: “Central Offices evolve to Data Centers, reaping the cost, scaling and service flexibility benefits provided by cloud computing technologies.”  In reference to my point in Frame 3, networks and the network element really do not require a lot of complexity.  I would argue that the dumber the core, the better the network.  Forget about being aware of my applications; just give me some bandwidth and some connectivity to where I need to go.  Anything more than bandwidth and connectivity and I think you are complicating the process.

Frame 6: MapReduce/Application/Compute Clustering Observation

Here is the conundrum for all the people waiting for the internet to break and bandwidth consumption to force massive network upgrades.  When we type a search term into a Google search box, it generates a few hundred kilobytes of traffic upstream to Google and downstream to our screen, but inside Google’s data architecture a lot more traffic is generated between servers.  That is the result of MapReduce and application clustering and processing technologies.  This is the link back to the 77% statistic in Frame 4.  Servers transmitting data inside the data center really do not need to be aware of the network.  They just need to be aware of the routes, the paths to other servers or devices, and they do not need a route to everywhere, just to where they need to go.
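
A toy scatter/gather calculation makes the asymmetry concrete. The fan-out and message sizes below are invented for illustration and are not Google's real numbers; the point is only that a small WAN transaction can generate orders of magnitude more east-west traffic.

```python
# Toy scatter/gather sketch of the Frame 6 point: one small query on the
# WAN fans out to many servers inside the data center. All figures are
# illustrative assumptions.

def fanout_traffic(leaf_servers, request_kb, shard_reply_kb, user_reply_kb):
    internal_kb = leaf_servers * (request_kb + shard_reply_kb)  # east-west
    external_kb = request_kb + user_reply_kb                    # user-facing
    return internal_kb, external_kb

internal, external = fanout_traffic(
    leaf_servers=1000,   # shards consulted per query (assumed)
    request_kb=2,        # query forwarded to each shard
    shard_reply_kb=20,   # partial results returned to the aggregator
    user_reply_kb=200,   # the page the user actually sees
)
print(f"Inside the data center: ~{internal/1024:.0f} MB per query")
print(f"Over the network to the user: ~{external:.0f} KB per query")
```

Under these assumptions the internal traffic is roughly 100x the traffic that ever crosses the service provider network, which is the 77% story again.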

Frame 7: What we value is different from what we talk about

Take a look at the chart to the left.  I put a handful of public companies on the list and I am not including balance sheets, debt and other financial metrics.  All I am pointing out is that companies focused on the enterprise (i.e. the data center) enjoy higher margins and richer valuations than companies that focus on the service provider market.  Why is that true?  Is it a result of the 77% problem?  Is it a result of the complexity of the market requirements imposed by the service provider customer base?  Is it a result of the R&D requirements to sell to the service provider market?

Frame 8: Do We Need a New Network Architecture?

I have been arguing that we need a new network architecture for some time, but I think the underlying drivers will come from unexpected places.  It was not long ago that we had wiring closets, and the emergence of the first blade servers in the 2001-2002 time period started to change how data centers were built.  When SUN was at its peak, it was because it made the best servers.  It was not long ago that the server deployment philosophy was to buy the most expensive, highest-performance servers from SUN that you could afford.  If you could buy two, that was better than one.  The advent of cheap servers (blades and racks), virtualization and clustering applications changed the design rules.  Forget about buying one or two high-end servers; buy 8 or 10 cheap ones.  If a couple go down on the weekend, worry about it on Tuesday.  I think the same design trend will occur in the network.  It will start in the DC and emerge into the interconnect market.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. ** 

Content Delivery Networks (CDNs) 08.21.11

I had an email question two weeks ago regarding CDNs, where they are going or not going, and who will be the winners and losers.  I answered the question privately, but it gave me cause to think about content deep networking, CDNs and what is going on in the network because of the evolution to longer-form data, or big data, depending on which term you prefer.  There is no question that Web 1.0 (~1995-2000), built on small HTML files, is much different from Web 2.0 (~2002-2008) and Web 3.0 (~2009-?), with streaming HD content, state-aware apps and access points at the edge that have higher connection speeds and capacities; all that being said, I am still a bit of an OTT skeptic.  Here is a chart I produced over a year ago using data from AKAM and CDN pricing trends.  The chart is not updated, but I think it shows the conundrum of having to serve longer-form data in a market of declining ASPs.  I noted on the chart the start of iTunes, which is the poster child for the old content consumption model in which the user had to own the rights to the content locally.  In the new content model, which iTunes is using too, the rights are licensed by the content provider (AAPL, NFLX, AMZN, etc.) and the end user rents usage rights, usually for a monthly fee.

When I wrote that I was an OTT skeptic, I meant that I find it hard to quantify the OTT problem, and I find that service providers (SPs) find it hard to quantify the problem.  I think there is no shortage of emotion, but I am not sure everyone is talking about the same problem; maybe they are just using the perception of a problem to force a discussion about another subject, which is what I really believe.

To start, let us step back and ask what video/OTT problem service providers and infrastructure companies are trying to solve.  Is it a bandwidth problem (i.e. real capacity constraints), a revenue problem (i.e. SPs want a share of NFLX revenues) or a CAPEX problem (i.e. SPs do not want to spend)?  I talk to a lot of people on many sides of the debate; I talk to equipment companies and I read the industry and investment reports.  I am skeptical when smart people tell me that it is a well-known and understood problem that video is clogging the network.  Is it?  Can someone show me some stats?  When I read puff pieces like this, I struggle to grasp the meaning.

If OTT video is growing 40-50% over the next four years, that is somewhat meaningless to me because network technologies and network capacities are not static.  The whole OTT space is a bit of a conundrum.  There is a lot of noise around it, and that is good for selling, marketing and thought leadership, but the space seems vastly underinvested if the problem is on the scale it is made out to be.  I think data center (compute) scaling into the network (more VMs on a Romley motherboard and the virtualization of I/O) is a much, much bigger market.

What are CDNs really good at?  Distributed CDNs like AKAM are really good at distributed content hosting, like big file upgrades, and at region-specific content distribution, like day-and-date releases.  iTunes is hosted by AKAM and they do a good job of ensuring you cannot download content specific to the UK in the US.  AKAM also offers really good site acceleration services for web properties that have low to medium traffic demands but might see a spike in traffic due to an unforeseen event.

Centralized CDNs like LLNW and LVLT do really well at serving up specific content events, and they are much better at hosting content that requires state to be updated; think Gmail, which likes to update state on a regular basis.  Before thinking about CDNs, though, think about NFLX and Youtube.com (YT).

A year ago, most service providers (SPs) who thought they had an OTT video problem viewed YT as the biggest problem, but as a problem it was small.  NFLX has since overtaken YT in traffic.  From an SP perspective, there are several ways to handle the problem of OTT video, or user-requested real-time traffic: (i) SPs can ignore it, (ii) SPs can meter bandwidth and charge consumers more for exceeding traffic levels, (iii) SPs can block it or (iv) SPs can deploy variations on content deep networking strategies.

Content deep strategies use products from companies like BTI Systems and JNPR (the Ankeena acquisition), to mention a couple.  These companies deploy a caching CDN product in the network around the 10-50k user stub point.  The device replicates popular content that it sees requested from sites like NFLX (it is a learning algorithm), so the 10-50k user group does not have to traverse the entire network topology to reach popular content from streaming sites.
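
For the curious, here is a minimal sketch of what such a learning cache does, assuming a simple popularity-ranked admission policy. It is an illustration of the general idea, not how the BTI or Ankeena products are actually implemented.

```python
from collections import Counter

# Minimal sketch of a "content deep" edge cache: watch what the 10-50k
# users behind this stub point request, and keep local copies of the most
# popular objects so repeat requests do not traverse the core network.
# The admission/eviction policy here is a simplifying assumption.

class EdgeCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.popularity = Counter()   # learned demand per object
        self.store = {}               # object id -> cached bytes

    def request(self, object_id, fetch_from_origin):
        self.popularity[object_id] += 1
        if object_id in self.store:
            return self.store[object_id]          # served locally
        data = fetch_from_origin(object_id)       # traverses the network
        top = dict(self.popularity.most_common(self.capacity))
        if object_id in top:                      # popular enough to keep
            self._admit(object_id, data)
        return data

    def _admit(self, object_id, data):
        if len(self.store) >= self.capacity:
            # evict the currently least-popular cached object
            coldest = min(self.store, key=lambda k: self.popularity[k])
            del self.store[coldest]
        self.store[object_id] = data

# Usage sketch: cache = EdgeCache(capacity=500)
#               video = cache.request("nflx:title-123", origin_fetch)
```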

Similar to a cable node-splitting strategy, hosting popular content deeper in the network works well and seems to slow bandwidth consumption growth rates to very manageable levels.  CDNs win because they do not have to provision as much capacity, and the SPs win because they have less money going to the CDN and fewer capacity issues in the network.

The user experience is better too.  When you see ATT and LVLT wanting to build a CDN service (GOOG too), it is really about going content deep and putting content local to the user.  This is something I wrote about in my blog back in April.  Recently, there were reports of LVLT and LLNW combining CDNs, and this makes sense to me as scale will matter in this business.

In terms of BTI, I listened to a webinar they produced about a month ago that was hosted on Dan Rayburn’s site.  BTI is claiming 10 content deep networking customers and trials with a Tier 1.  Specifically (if I heard the presentation correctly), they said that at the Tier 1 SP trial, OTT video traffic had been growing at 3% per month.  311 days after insertion, video traffic was growing at 0% a month, and that was during the rise of NFLX.  When BTI started their content deep solution it was all about YT, but this has changed in the last 9 months due to NFLX.
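
To put those trial numbers in perspective, 3% per month compounds to roughly 43% per year, so holding growth at 0% after insertion is a significant claim (assuming the trial figures were reported accurately).

```python
# Quick compounding check on the BTI trial figures quoted above.
monthly_growth = 0.03
annualized = (1 + monthly_growth) ** 12 - 1
print(f"3% per month compounds to ~{annualized:.0%} per year")  # ~43%
```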

What I really think this entire debate is about is money.  I put a chart in the April post that you can view here.  It is all about the chain of commerce.  Why did we pay $15 for an album in the 1980s and $9.99 for CDs in the 1990s?  The answer is that the chain of commerce could support that pricing model.  Today, the chain of commerce is shrinking and consumption habits have changed.  SPs do not want to be relegated to a “bits r us” business model.  They want a piece of the revenue stream that runs from the content creator to the content owner, to the content distributor, to the CDN, to the SP and finally to the consumer.  I think the real issue is not the network; rather, the network is being used as a facility to broker a bigger discussion about the division of revenues.  I could be wrong too, and the whole internet could collapse by 1996.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private.  Just hover over the Gravatar image and click for email. **

Data Center and Bandwidth Metering Thoughts

I am off to CA for the week; as such, posting will be infrequent at best. GigaOM had a short piece on server architecture.  I call attention to it because there are a number of debates going on in the world of “Cloud Networking-OTT-Data Centers-Bandwidth Metering-Server Virtualization-Net Neutrality,” or as it is frequently called by its initials CNOTTDCBMSWNN, which is pronounced “see-knott-d-see-bm-swin.”
Continue reading

Bandwidth Metering II

A prominent subject for my blog has been bandwidth metering.  The GigaOM site had some interesting discussion on bandwidth and capacity trends.  The capacity versus bandwidth chart was the most interesting chart to me.  The last chart shows that storage capacity is outstripping the bandwidth made available in the network; hence the network has been lagging the advances in compute and storage.

This is not the first time I have pointed this out, and it is why I am not a believer in the exaflood thesis as a growth driver for networking companies.  There is a cruelly humorous aspect to the future: it rarely unfolds the way you think it will, and it is often unevenly distributed.  Maybe that is why there are now weekly notes on price erosion in the ethernet switching market.  The legacy tail of networking infrastructure is not keeping pace with the other dislocations in the tech ecosystem.  More expensive hardware in the GPM range of 60-70 points is not going to stop the network from lagging the growth of the other parts of the tech ecosystem.

This brings us back to bandwidth metering and usage caps. When it costs a lot to deploy a network, you limit the use of the network.  That is how networks were built in the years before the client/server network.  Most people forget, but even Novell had usage-based caps built into their software.  When I built my first Novell server, one of the options in the setup was to create usage caps for users, groups and time blocks.  This was clearly a leftover from the mainframe era, as it was not really necessary on a LAN, but traditions are hard to remove.  Fast forward more than twenty years and service providers are still limiting the use of the network because bandwidth is not free (it is expensive), and they are charging premium users more because the growth of the other parts of the tech ecosystem has been putting pressure on the network.

So how is the future going to unfold for networking companies?  There are only guesses, and some guesses are better than others.  I think expensive network infrastructure means that service providers respond with higher prices (caps and tiers), and within the data center/enterprise market new forces emerge.  I have written extensively about this in the past two months.  I think I succinctly summarized it in my 06.24.11 post: “#2 Infrastructure Vendors Create 4th Leg: Shared cache concept.  I think this is big, but this is the part of the network evolution that I wrote is really two networks: a network for humans and a network for machines to maintain a shared cache.  I believe in point #2, but maybe in a different way.  I really think that more VMs on a blade and I/O virtualization are a big way to achieve statistical gain.  I also think this is going to put pressure on the network element to do something different.  Network vendors that can integrate the network element into the I/O of the compute element are going to be very valuable.  Application delivery controllers (ADCs) become a virtual (i.e. software-controlled) capability that is stretched across the compute/I/O/network element.  This will allow it to scale and achieve maximum statistical gain.  The networking vendor that figures this out with #6 below = big time winners.”

Some Random Notes for the Macro Inclined

I compiled a list of dates and events from my friends at DB and ISI for the European Summer Vacation 2011... actually, one of the notes said “don’t go on vacation.”  Here it is:

  • 27-30 June (unconfirmed): Greek parliament debates and votes on austerity.  There will be two votes.
  • 3 July: EU finance ministers to approve fifth loan tranche.
  • Early July: IMF Board to approve fifth loan tranche.
  • 7 July: ECB meeting.
  • 11 July: EU finance ministers to approve new Greek loan program.
  • 15 July: Greek sovereign liability due.
  • The EU vowed to rescue Greece, but markets closed Friday still little reassured, e.g., Spain 10-year above the firewall at 5.68%, Greece stock market -1.7% w/w, Italy stock market -4.7%, and STOXX Europe Banks index -1.6% d/d.
  • Trichet said risk signals “Red Alert” as the crisis threatens banks.  Mervyn King said the crisis in Europe is a major threat to British banks. Moody’s warned Italian banks and 2 government institutions they were on watch for a ratings downgrade.
  • Wen said China inflation rate will be “firmly under control” and pointed to a moderation in lending, an “oversupply of industrial products, and abundant grain.”  China mfg PMI of input prices declined to 53.1% in June versus its recent peak of 83.2%. China mfg PMI slipped to 50.1% in June with China Daily reporting “China’s mfg sees sizable slowdown.”

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private.  Just hover over the Gravatar image and click for email. **

Bandwidth Metering, Usage Caps and the Optical Upgrade Cycle Talk….

Historically, users have preferred flat rate pricing, which often works in the favor of the service provider.  I have some thoughts on this concept, but I would state up front that my thinking is based on these two papers from Andrew Odlyzko and this one by Peter Fishburn.

The net of the decision for users to prefer flat rate over usage-based billing has to do with (1) the insurance effect, as people like to have capacity available in case of unforeseen circumstances, (2) overestimation of what they typically use and (3) what Nick Szabo termed “mental transaction costs,” but I would tell you that in real life it has more to do with prospect theory.  Prospect theory was developed by Amos Tversky and Daniel Kahneman, pioneers in the study of cognitive science: how humans handle risk and the manner in which cognitive powers can be applied to economics.

In short, prospect theory suggests that people are more motivated by losses than by gains.  Tversky and Kahneman developed a model which showed that the way an investment is framed affects the choice that people will make.  The term “frame” is being used in the economic context, but you would call it “marketing” or “positioning.”  A foundation of behavioral economics is that framing biases affect how investment and lending decisions are made.  Investors are more likely to be motivated by losses than by rational choices; hence, in a networking world, it is better to buy or overbuild than to get caught short.  It turns out that prospect theory holds for buying a service, like a network connection, as well as for making a financial investment.  Humans by a large margin tend to want to overbuy, and in the few cases where people underbuy, it is because they do not have the financial means to act on points 1-2-3 above.
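
Here is a toy illustration of how points 1-2 and the prospect-theory framing stack up for a broadband buyer. Every price and usage figure below is invented purely for illustration.

```python
# Toy flat-rate vs. metered comparison. All prices and usage figures are
# invented for illustration.

flat_rate = 50.00        # $/month, unlimited
metered_rate = 0.20      # $/GB

actual_usage_gb = 180    # what the household really uses
believed_usage_gb = 300  # what they think they use (overestimation)
bad_month_gb = 600       # the feared "kids stream all summer" month

print(f"Metered bill on actual usage:  ${metered_rate * actual_usage_gb:.2f}")
print(f"Metered bill they expect:      ${metered_rate * believed_usage_gb:.2f}")
print(f"Metered bill in the bad month: ${metered_rate * bad_month_gb:.2f}")
print(f"Flat rate, every month:        ${flat_rate:.2f}")

# The flat plan costs more than the household's true metered bill ($36),
# yet it beats the bill they believe they would run up ($60) and caps the
# feared loss ($120); that framing of the avoided loss is what prospect
# theory says dominates the decision.
```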

There is a long list of failures in the world of bandwidth management (companies like Enron Broadband), which does not mean that it is a poor business path, but rather that the few successful business examples played on the three themes from above.  A few examples:

  • Akamai Dynamic Site Accelerator: This is an extremely profitable business for AKAM.  What are they selling?  Peace of mind.
  • Nortel 10G LH: When Nortel commercialized their 10G optical systems, they were betting that the growth of the internet would drive telecom service providers to deploy 10G systems faster than projected, but when they went to sell the 10G system, they did not find any buyers for the more expensive platform.  John Roth (NT CEO) made a deal with Joe Nacchio (Qwest CEO): Nortel would sell 10G systems to Qwest at 2.5G prices, and when traffic on the 10G systems exceeded the capacity of the 2.5G systems Qwest had been considering, Qwest would pay Nortel a premium for use of the capacity above 2.5G.  It was a win-win deal.  If Qwest’s network never needed the capacity that Nortel’s 10G systems provided, then Qwest had not overpaid; but if network usage grew, Qwest had the bandwidth installed in the network to support the unanticipated demand (a toy sketch of the deal economics follows this list).  What was NT selling?  Peace of mind.
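
A small sketch of that deal structure with normalized prices; the premium rate is an assumption for illustration, since the actual contract terms were never public as far as I know.

```python
# Sketch of the Nortel/Qwest arrangement described above: pay 2.5G prices
# up front, pay a premium only on traffic above 2.5G. The premium rate is
# an invented, illustrative figure.

PRICE_2_5G = 1.0        # normalized price of the 2.5G system Qwest planned to buy
PREMIUM_PER_GBPS = 0.1  # assumed surcharge per Gbps used beyond 2.5G

def qwest_cost(peak_utilization_gbps):
    overage = max(0.0, peak_utilization_gbps - 2.5)
    return PRICE_2_5G + PREMIUM_PER_GBPS * overage

for peak in (1.0, 2.5, 6.0, 10.0):
    print(f"peak {peak:>4} Gbps -> cost {qwest_cost(peak):.2f}x the 2.5G price")
```

If the traffic never shows up, Qwest pays the 2.5G price it budgeted; if it does, the extra cost arrives only alongside the demand that justifies it. That is the peace of mind being sold.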

I can come up with a half dozen more examples, but this is really a long introduction into some thoughts about bandwidth metering that I started writing about a few weeks ago.  One of the developments I see going forward is really two networks.  The human network is going to have usage caps and the machine network will not.  Humans are living with caps today in the mobile market.  Just go to any wireless service provider website and try to buy a 3G or 4G device without a data plan.  Not going to happen, unless it is Wi-Fi only.

One of the reasons I am not in the exaflood camp is that I cannot find a reason why service providers (SPs) would let that happen.  The SPs are already in a “bits R us” business, so why would they want to continue to roll down the price curve and let users consume/create more and more bits for less?  It is simplistic to think that OTT services will ruin the network.  The real questions to consider are how OTT services will be monetized and where in the network value will be created.

On a day in which CIEN is down 16% after reporting earnings and a variety of analysts are shouting “only a pause in the optical upgrade cycle,” I think it is prudent to really think about what upgrade cycle they are referring to.  This is not a blog post about CIEN; I think there is a long life to service provider network upgrades to 40G, 100G, SONET/SDH replacement, the evolution to packet, etc.  The question I am exploring is who or what is driving the network upgrade/evolution, and whether we should expect the winners to be the companies before us today, because that is precisely what many Wall Street analysts want you to believe.

I think what is missing is an understanding of how I/O virtualization is going to affect network design.  We continue to move toward a persistently connected device network in which we want to consume more long-form, or big data, content.  If the next evolution of statistical gain is not achieved in the switch fabric, in the metro or in the core of the network, then where will it be found?  Will it be achieved in the I/O and the CPU?  If that assumption is true, then how will the network interface change?  If I/O virtualization is going to enable the CPU to run at higher efficiency rates, then that has the potential to put pressure on the network.

The exaflood debate and how it affects the network is really a two-part debate: (1) machine or (2) human.  Where will the flood occur, if it occurs at all?  How will devices such as routers, L2 switches, optical switches, WDM elements and network appliances interface with the server, which I call a compute point?  Do these devices now need to be I/O and CPU aware?  Something tells me it is all about how the network interfaces with the compute element, and I would submit that the number of companies thinking this way is very small.  Most companies have been focused on the 2009 catch-up spend.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private.  Just hover over the Gravatar image and click for email. **

Do you want to own content? Reviewing thoughts from Sep 2006

I am off to CA for a few days, so blogging will be on hold until the weekend.  With reports today about a COD network, the OTT pompom waving, Apple’s rumored iCloud announcement in the WSJ and the negative NOK pre-announcement, I thought this would be a good opportunity to review a post I wrote on my old blog from September 2006 (full text of that post below for your review). Continue reading

Sandvine and Lightreading are making my point…

When I wrote about waiting on the coming exaflood, one of my conclusions was that fixed broadband connections are going to face usage caps just like wireless broadband connections.  The more you use, the more you will pay.  Apparently Lightreading.com has come to this conclusion as well, courtesy of Sandvine and their internet traffic report.  Here are the two relevant charts, in my opinion: Chart 1 and Chart 2.  The rise of real-time traffic is what I was referring to in Friday’s post about how “…persistent network connections like Gmail, Facebook, applications, non-streaming content, games, etc are changing the rules as to how content is stored and handled.”

/wrk

** It is all about the network stupid, because it is all about compute. **