Bandwidth is deflationary, and I find any arguments to the contrary foolish. This is a subject I have written about before here, here, here and here. Over the past few weeks, I have been reminded that it is almost always easier to solve a networking problem by applying bandwidth. A few weeks ago I found myself reading one of Marc Andreessen's stream-of-consciousness tweet threads and I replied (see below). After sending the tweet, I wondered whether I was correct.
I am curious to learn the details of the Comcast/Netflix deal being widely reported this afternoon. Having spent the better part of the past twenty years selling equipment to service providers of all types, on most continents and in a variety of regulatory constructs, I have made net neutrality and OTT prominent subjects of my blog over the past eight years. SIWDT is coming up on three years old, and one benefit of the content is that it is searchable, so I can go back and critique my thoughts. A search on “net neutrality” came back with four prior posts. I carved out a relevant quote from each post:
The following is from my friend and former investment manager Doug Rudisch. The essay has already been reblogged on ZeroHedge.com. I asked Doug to post it to my blog a few weeks ago when I read an early draft. Even though it is now replicated on a few major financial sites, I thought it was worth posting here for the technology focus.
In a conversation the other day, there was a random supposition thrown out for discussion as to whether the data center (DC) will become the last mile (LM) of the future. Most Americans have a choice among service providers for their LM needs. Typically there is an RBOC or ILEC, usually an MSO (i.e. a cable company) and a wireless broadband carrier. In my home location I have one RBOC (VZ), two MSOs (Comcast and RCN, which is an overbuilder) and seven wireless broadband choices: ATT, VZ, Sprint, MetroPCS, Leap, Clearwire and T-Mobile. I realize that the wireless BB offerings vary from EVDO to 4G. For this post I am going to put aside the wireless BB options, as these services typically come with usage caps, and focus on the three LM options, which for me are really four because VZ offers FTTH (FiOS) or DSL and Comcast and RCN offer DOCSIS options.
As a side note, the current Administration is misguided in its desire to block the T-Mobile acquisition by AT&T. My only conclusion is that American business leaders are not allowed to run their businesses without the approval of the Administration. This all goes back to Brinton and the revolutionary process we are working through. If AT&T wanted to declare Chapter 11, default on its bonds and give its unions 18% of the company in a DIP process, the Administration would probably have approved the transaction in a day; or, if ATT were a solar company, it might get the Administration as an investment partner. T-Mobile will now wither away because its parent company will be unwilling to invest in the US with so much investment required in its home country for FTTH builds and for managing through, or reserving for, European unity issues. I say this as a T-Mo customer since 1998, though I recently decided to switch to an iPhone and ATT. Anyone want to wager on the condition of T-Mobile in 3-5 years? One other point: having weak wireless providers in the market is not a benefit to the consumer.
When the internet connection craze of the 1990s started to move from dialup to always-on broadband connections, that is when the LM began to anchor market share for incumbent service providers. This was made clear to Congress in 1998 when Charles J. McMinn, then CEO of Covad, testified before the U.S. Senate Commerce, Science, and Transportation Subcommittee on Communications and said, “Failing to ensure a competitive environment would condemn the deployment of crucial next-generation digital communication services to the unfettered whims of the ILECs; precisely the opposite of what Congress intended sections of the Telecommunications Act of 1996 to accomplish.”
As I posted earlier here, I see the evolution of the DC and the cloud hype not going quite the way most people expect. The cloud will be a deflationary trend for the market in the same way smartphones and higher-capacity connection speeds were for the mobile market. I have posted before here and here that the broadband market has clearly seen these deflationary pressures. As we move deeper into the twenty-tens, will the DC provide a competitive anchor in the manner in which the LM did for incumbent service providers in the 1990s and 2000s?
I see the DC market evolving in three forms. The mega warehouse-scale DCs that Google, Apple, Amazon, Microsoft and others are building are for the consumer market. This is the market for smartphones, tablets, computers, DVRs, media libraries, applications, games; this is our personal digital footprint. That is a big market. The second market will be the DC for the SMB market, focused on business applications. I call this the commercial tier; it starts at the point at which a company cannot, or does not want to, own its IT infrastructure such as data centers. As I wrote the other day, there are many reasons why a corporation wants to use a private cloud or private DC over the public cloud and a public DC. I think this market is the smallest of the three.
The third market is the F2000 or F5000. I am not really sure, and it might be as small as the F500 or F1000. This is the market that wants to use cloud technologies, utility computing and various web-based technologies internally, within the control of its own IT assets. This is the primary commercial market of the future. Futurists think that within twenty years or so the private DC/cloud market collides with the consumer DC/cloud market. Maybe it does, maybe it does not, but I know they will not collide in the next five years.
My answer to the question I posed at the start is that I think the DC/cloud will be an important component for accessing and anchoring the consumer market. Companies will be forced to build their own DC/cloud assets or outsource to a cloud provider. The example I used in an earlier post, which you can find here, was NFLX using the AWS infrastructure. Over time this strategy could be an issue depending on the deflationary trend of the market. It will be deflationary if it is easy for anyone and everyone to do, as the chain of commerce shrinks. Again, it goes back to the lessons of Braudel. In the SMB market, I think the RBOCs/ILECs roll up this space. It will be the CLEC demise all over again, as pure cloud providers will not be able to support the ecosystem required to sell to the SMB market. In the F500-2500 market, I think companies will want to retain control of their assets for a long time, and this desire is deeply rooted in the IT culture of the ~F2500 market. The cultural roots of owning and retaining IT assets go back to the introduction of the IBM S/360 mainframe in 1965. Behaving in a specific manner for forty-six years is a reinforced habit that is hard to break when corporations are flush with cash and IT is considered a competitive asset when deployed well.
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **
I had an email question two weeks ago regarding CDNs: where they are going or not going, and who will be the winner or loser. I answered the question privately, but it gave me cause to think about content-deep networking, CDNs and what is going on in the network because of the evolution to longer-form data, or big data, depending on which term you prefer. There is no question that Web 1.0 (~1995-2000), built on small HTML files, is much different from Web 2.0 (~2002-2008) and Web 3.0 (~2009-?), with streaming HD content, state-aware apps and access points at the edge that have higher connection speeds and capacities. All that being said, I am still a bit of an OTT skeptic. Here is a chart I produced over a year ago using data from AKAM and CDN pricing trends. The chart is not updated, but I think it shows the conundrum of having to serve longer-form data in a market of declining ASPs. I noted on the chart the start of iTunes, which is the poster child for the old content consumption model, in which the user had to own the rights to the content locally. In the new content model, which iTunes now uses too, the rights are licensed by the content provider (AAPL, NFLX, AMZN, etc.) and the end-user rents usage rights, usually for a monthly fee.
When I wrote that I was an OTT skeptic, I meant that I find it hard to quantify the OTT problem, and I find that service providers (SPs) find it hard to quantify as well. There is no shortage of emotion, but I am not sure everyone is talking about the same problem; maybe they are just using the perception of a problem to force a discussion about another subject, which is what I really believe.
To start, let us step back and ask what video/OTT problem service providers and infrastructure companies are trying to solve. Is it a bandwidth problem (i.e. real capacity constraints), a revenue problem (i.e. SPs want a share of NFLX revenues) or a CAPEX problem (i.e. SPs do not want to spend)? I talk to a lot of people on many sides of the debate; I talk to equipment companies and I read the industry and investment reports. I am a skeptic when smart people tell me that it is a well-known and understood problem that video is clogging the network. Is it? Can someone show me some stats? When I read puff pieces like this, I struggle to grasp the meaning.
If OTT video is growing 40-50% over the next four years, that is somewhat meaningless to me because network technologies and network capacities are not static. The whole OTT space is a bit of a conundrum. There is a lot of noise around it, which is good for selling, marketing and thought leadership, but the space seems vastly underinvested if the problem is on the scale it is made out to be. I think data center (compute) scaling into the network (more VMs on a Romley motherboard and the virtualization of I/O) is a much, much bigger market.
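To illustrate why a 40-50% traffic CAGR is not alarming on its own, here is a toy comparison of compounded traffic growth against a hypothetical decline in the cost per delivered bit. Both rates are assumptions chosen for the sketch, not measured data:

```python
# Toy comparison: compounded traffic growth vs. a falling cost per
# delivered bit. The 45% traffic CAGR and the 30% annual cost decline
# are illustrative assumptions, not measured data.

def compound(base, rate, years):
    """Grow (or shrink, with a negative rate) a value annually."""
    return base * (1 + rate) ** years

traffic = compound(1.0, 0.45, 4)        # ~4.42x traffic after 4 years
cost_per_bit = compound(1.0, -0.30, 4)  # ~0.24x cost per delivered bit
total_cost = traffic * cost_per_bit     # ~1.06x: roughly flat spend

print(f"traffic {traffic:.2f}x, cost/bit {cost_per_bit:.2f}x, "
      f"total {total_cost:.2f}x")
```

Under these assumed rates, a 4.4x traffic increase is nearly cancelled by the declining unit cost, which is the sense in which raw growth figures are "somewhat meaningless" without the technology-cost side of the ledger.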
What are CDNs really good at? Distributed CDNs like AKAM are really good at distributed content hosting, like big-file upgrades, and region-specific content distribution, like day-and-date releases. iTunes is hosted by AKAM, and they do a good job of ensuring you cannot download UK-specific content in the US. AKAM also offers really good site acceleration services for web properties that have low to medium traffic demands but might see a spike in traffic due to an unforeseen event.
Centralized CDNs like LLNW and LVLT do really well at serving up specific content events, and they are much better at hosting content that requires state to be updated; think of Gmail, which likes to update state on a regular basis. Before thinking about CDNs, think about NFLX or YouTube (YT).
A year ago, most service providers (SPs) who thought they had an OTT video problem viewed YT as the biggest offender, but as a problem it was small. NFLX has since overtaken YT traffic. From a SP perspective, there are several ways to handle OTT video, or user-requested real-time traffic: (i) SPs can ignore it, (ii) SPs can meter bandwidth and charge consumers more for exceeding traffic levels, (iii) SPs can block it or (iv) SPs can deploy variations on content-deep networking strategies.
Content-deep strategies use products from companies like BTI Systems and JNPR (via the Ankeena acquisition), to mention a couple. These companies deploy a caching CDN product in the network around the 10-50k user stub point. The device learns which content is being requested from sites like NFLX and replicates the popular items locally, so the 10-50k user group does not have to traverse the entire network topology to reach popular streaming content.
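As a rough illustration of that learning behavior (a toy sketch only; this is not BTI's or Ankeena's actual algorithm, and the class name, threshold and eviction policy are invented), a stub-point cache might count requests and replicate an object once it proves popular:

```python
# Toy sketch of a "learning" stub-point cache: count requests, and
# replicate an object locally once it crosses a popularity threshold.
# Names and thresholds are invented for illustration.
from collections import Counter

class StubCache:
    def __init__(self, capacity, hot_threshold=3):
        self.capacity = capacity            # max objects held locally
        self.hot_threshold = hot_threshold  # requests before replicating
        self.requests = Counter()           # learned popularity
        self.store = set()                  # locally replicated objects

    def fetch(self, object_id):
        """Return 'local' on a cache hit, 'upstream' otherwise."""
        self.requests[object_id] += 1
        if object_id in self.store:
            return "local"
        # Learn: replicate once an object proves popular.
        if self.requests[object_id] >= self.hot_threshold:
            if len(self.store) >= self.capacity:
                # Evict the least-requested cached object.
                coldest = min(self.store, key=lambda o: self.requests[o])
                self.store.discard(coldest)
            self.store.add(object_id)
        return "upstream"

cache = StubCache(capacity=2)
for _ in range(3):
    cache.fetch("nflx:title-42")     # third request triggers replication
print(cache.fetch("nflx:title-42"))  # now served locally
```

The point of the sketch is the economics, not the algorithm: after a few upstream fetches, every subsequent request for a popular title stays inside the stub, which is why growth in upstream traffic flattens.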
Similar to a cable node-splitting strategy, hosting popular content deeper in the network works well and seems to slow bandwidth consumption growth to very manageable rates. CDNs win because they do not have to provision as much capacity, and the SPs win because less money goes to the CDN and there are fewer capacity issues in the network.
The user experience is better too. When you see ATT and LVLT (and GOOG too) wanting to build a CDN service, it is really about content deep: putting content local to the user. This is something I wrote about in my blog back in April. Recently there were reports of LVLT and LLNW combining CDNs, and this makes sense to me, as scale will matter in this business.
In terms of BTI, I listened to a webinar they produced about a month ago, hosted on Dan Rayburn's site. BTI claims 10 content-deep networking customers and a trial with a tier-1 SP. Specifically (if I heard the presentation correctly), they said that at the tier-1 SP trial, OTT video traffic had been growing at 3% per month; 311 days after insertion, video traffic was growing at 0% per month, and that was during the rise of NFLX. When BTI started their content-deep solution it was all about YT, but that has changed in the last 9 months due to NFLX.
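Those quoted rates are worth putting in context. Simple compounding shows what flattening 3% per month to 0% actually means over the cited window:

```python
# Pure arithmetic on the rates quoted above: 3% per month compounds
# to about 43% per year, so flattening it to 0% is a big change in
# trajectory. No vendor data involved.
monthly = 0.03
annualized = (1 + monthly) ** 12 - 1
print(f"3%/month ~= {annualized:.1%}/year")      # ~42.6%/year

# Over roughly 10 months (~311 days), uncurbed growth would have been:
uncurbed = (1 + monthly) ** 10
print(f"~{uncurbed:.2f}x traffic in 10 months")  # ~1.34x
```

In other words, the trial claims to have absorbed what would otherwise have been roughly a third more upstream video traffic in under a year.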
What I think this entire debate is really about is money. I put a chart in the April post that you can view here. It is all about the chain of commerce. Why did we pay $15 for an album in the 1980s and $9.99 for a CD in the 1990s? Because the chain of commerce could support that pricing model. Today the chain of commerce is shrinking and consumption habits have changed. SPs do not want to be relegated to a “bits-r-us” business model. They want a piece of the revenue stream that runs from the content creator to the content owner, to the content distributor, to the CDN, to the SP and finally to the consumer. I think the real issue is not the network; the network is being used as a vehicle to broker a bigger discussion about the division of revenues. I could be wrong too, and the whole internet could collapse by 1996.
I have been writing about bandwidth metering and exafloods for a while. Last week I read this article on Bloomberg; it had been tweeted and shared a few times. The article is about Time Warner Cable considering usage caps on their broadband customer connections. It has an interesting quote from CEO Glenn Britt, who said, “Moving from a flat fee to consumption-based billing will likely allow consumers who use the Internet for just e-mail and basic searches to pay less.”
Here is a link to an article in which News Corp's Chase Carey advocates an additional charge to programmers for iPad streaming: “I think the consumer is willing to pay fair value for a good experience…”
I am not sure there is an over-the-top (OTT) video problem. Blasphemy! I just wrote it, and I am now awaiting the hate mail. I must be a crazy person, because everyone knows there is a huge OTT video problem. It is so obvious. No brain power required. Even a sixth grader using an iPad knows what pixelization is, and that is clearly a symptom of OTT video congestion. Correct? We are all sufferers of the OTT curse on a daily basis. Correct? I am not so sure. If there is such a huge OTT problem, why is the silence deafening when it comes to complaints?
At first I thought I was ignorant, because I have been reading and hearing about the OTT problem for what seems like years. There are all sorts of corporate white papers and research reports on the OTT problem. Sell-side research on Wall Street is full of OTT reports and references as the basis for buying all sorts of video, media and networking stock baskets. If this is such a well-known problem, why is no one complaining? If it is such a threat to everyone's business, where are the complaints? If the internet is breaking due to OTT congestion, where are the complaints?
I used Google and Bing, but choose any search engine you want and run a bunch of searches on “OTT video,” “OTT video complaints,” “over the top video,” or any term you like, and you will find the problem clearly identified and articulated by the analysts, but not so well by the service providers and users. I know, it is strange, it is weird, but that does not preclude the existence of the problem.
I think the problem that OTT video is causing has very little to do with the network, and all the networkers lining up to solve the OTT problem are going to find there is not much of a problem to solve. My hypothesis is that the real problems are (1) access to content and (2) the chain of commerce. I am not yet convinced that a bunch of network infrastructure companies are going to benefit as if this were some sort of DOCSIS 2.0 / xDSL / FTTH broadband build-out redux from the last decade.
There are clearly long-form/big-data content stresses in the network, but I do not think there is much value in treating this as a network problem. I think those stresses are a data center (DC) problem, with the two most important elements being the compute point and the storage point.
When I read all the comments about bandwidth metering and upselling higher connection rates to high-traffic consumers, it tells me the network is not going to break; rather, service providers want to find ways to boost ARPU. When I think about all the content migrating from old-form media to digital, I read storage and compute, not networking. The evolution to digital media is really the shortening of the chain of commerce. I wrote about that a few weeks ago, but if you want to start at the beginning you must start with Braudel. If you do not want to read volume two, The Wheels of Commerce, then here are three quotes that frame what the internet and digital media revolution have produced, except that Braudel wrote them about long-distance sea trade:
“One’s impression then…is that there were always sectors in economic life where high profits could be made but that these sectors varied. Every time one of these shifts occurred, under the pressure of economic developments, capital was quick to seek them out, to move into the new sector and prosper…” [see Braudel, The Wheels of Commerce, page 432].
Long-distance trade provides an interesting base for contrasting the evolution from a push to a pull economic model, or what many call the digital disintermediation revolution. “Long distance trade certainly made super profits [like the music industry in the 1970s and 1980s]: it was after all based on the price difference between two markets very far apart, with supply and demand in complete ignorance of each other and brought into contact only by the activities of middlemen. There could only have been a competitive market if there had been plenty of separate and independent middlemen. If, in the fullness of time competition did appear, if super-profits vanished from one line, it was always possible to find them again on another route with different commodities.” [see Braudel, The Wheels of Commerce, page 405]. Braudel's observation that super profits between supply and demand occurring over a great distance were the product of information ignorance implies that the internet and the emerging pull model will couple geographic markets and thus shorten the information gap. Supply and demand will be closely linked, and large variations of price will be limited as global consumers will have relevant, if not near real-time, market data.
“One’s impression then (since in view of the paucity of evidence, impressions are all we have) is that there were always sectors in economic life where high profits could be made but that these sectors varied. Every time one of these shifts occurred, under the pressure of economic developments, capital was quick to seek them out, to move into the new sector and prosper. Note that as a rule it had not precipitated such shifts. This differential geography of profit is a key to the short-term fluctuations of capitalism, as it veered between the Levant, America, the East Indies, China, the slave trade, etc., or between trade, banking, industry or land.” [see Braudel, The Wheels of Commerce, page 432].
It seems every day there is another news article about how service providers are monetizing the pipe. That hyperlink leads to a blog post about new pricing plans for VZ Wireless, in which there is a surcharge for tethering. I think if there were a real OTT problem, we would be hearing the complaints. The complaints I am reading and hearing are about people wanting to get paid. That is a very different argument from the video-killed-the-network song.
From an access-point perspective, I think the evolution from the STB to some sort of hybrid femtocell/network access point with localized meshed wifi is a far more interesting conversation than a Netflix problem.
I was just down in my basement looking at the area around my FiOS FTTH equipment (pic below). I have been expecting a flood of data any day now. I have been so concerned about the impending exaflood that I have been considering a home defense. What would it cost to build a Petabyte Wall (1,000 TB) to handle the incoming flood of data? There have been plenty of warnings of the impending event.
If I wanted to build my PB Wall out of 32GB flash drives, I would need 31,250 sticks. That would set me back ~$2.34M, assuming I could get a volume discount. There are some nice 12TB NAS systems, so I thought that 84 of those systems for ~$124k might be a better option. Google is offering storage at $256 per TB per year (in 16TB plans), so that would set me back $256k per year for my PB Wall, but the problem with this option is that it is Google's personal storage product and I need a commercial-class solution, not a digital locker for files. Here are four options for my PB Wall, assuming I could direct the flood of bytes to the storage options:
| Solution | Price Per GB | Price Per TB | PB Wall Total Cost |
| --- | --- | --- | --- |
| 32 GB Flash Drives | $2.34 | $2,340.00 | $2.34M |
| 12TB NAS (Seagate) | $0.12 | $124.91 | $124k |
| Google Storage (Developers) | $0.17 | $170 | $2.04M (1 Yr) |
| Amazon S3 | $3.72 | $3,726.49 | $3.72M (1 Yr) |
A few items to note in my high-level quest to build a PB Wall. Amazon and Google both have upload/download transaction costs. I calculated the cost to fill a TB with the Amazon pricing tool; the Google numbers do not include this cost. Google charges $0.10 per GB for upload and $0.15 per GB for download in the Americas and EMEA; in APAC, make that $0.30 per GB for download. Google and Amazon also charge $0.01 per 1,000 PUT, POST, LIST, GET and HEAD requests. Note those are transaction costs for compute, which is a recurring theme in my blog. Uploading a PB to Google each month would cost me $100k, or an additional $1.2M per year, putting the total cost for my PB Wall from Google at $3.24M. Throw in some more charges and Google and Amazon are pretty close. My total costs do not include power and cooling for my in-home NAS and flash-drive solutions, not to mention the time it would take to figure out how to wire 31,250 flash drives together.
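The table's arithmetic can be checked in a few lines. The sketch below reproduces the post's circa-2011 prices (historical figures, not current ones) and assumes decimal units throughout:

```python
# Reproducing the PB Wall arithmetic with the post's circa-2011 prices
# (historical figures, not current ones). Decimal units are assumed:
# 1 PB = 1,000 TB = 1,000,000 GB.
PB_IN_GB = 1_000_000

# 32GB flash drives at $2.34/GB
sticks = PB_IN_GB // 32                 # 31,250 sticks
flash_total = PB_IN_GB * 2.34           # ~$2.34M

# 12TB NAS systems at ~$124.91/TB
nas_units = -(-1_000 // 12)             # 84 systems (ceiling division)
nas_total = 1_000 * 124.91              # ~$124k

# Google Storage (developer product) at $0.17/GB/month, held for a year
google_storage_year = PB_IN_GB * 0.17 * 12   # ~$2.04M

# Filling the wall via Google uploads at $0.10/GB
upload_once = PB_IN_GB * 0.10           # ~$100k per fill

print(sticks, nas_units)
print(round(flash_total), round(nas_total), round(google_storage_year))
```

Note the $0.17/GB figure is a monthly rate, which is why the Google column compounds to $2.04M per year while the flash and NAS columns are one-time hardware costs.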
Where is the Exabyte Flood?
I am not going to take the time to critique the various predictions from 2007-2008 about the impending exaflood and the internet breaking. These types of hyperbole always lack humility and neglect to account for black swans and human adaptation. That is what networks do: networks adapt, because they are managed by humans, and humans adapt. Not a lot of people were talking about broadband usage caps back in 2007-2008. Not many people thought that service providers would throttle bandwidth connections. Verizon FiOS offers storage with their internet service for $0.07 per GB per month, or $0.95 per GB per year. My PB Wall from Verizon using my FiOS service would cost me an additional $950k per year.
If I were to provide a short answer to the complicated question of the exaflood, I would say the answer lies in the Shannon-Hartley theorem, and that this law, with roots in Hartley's 1928 work, will have more to do with network design and build-out over the next decade than life before or after television. In the past, it was easy to deploy more bandwidth to obtain more capacity: buy another T1, upgrade to a DS3, get me a GbE or more. Today we are approaching the upper end of spectral efficiency, and this is going to force networks to be built differently. As I stated in a prior post, I think transmission distances will decline, more compute (i.e. data centers) will be put into the network, and bandwidth-limiting devices like ROADMs and routers/switches that require an OEO conversion will go away, probably on the same timeline as the TDM-to-packet evolution. This means the adoption rate will be slower than first predicted, and just when despair at the slow adoption takes hold, the adoption rate will rapidly increase and continue to gain momentum as the new network models are proved in.
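For the curious, the Shannon-Hartley bound is C = B·log2(1 + SNR). A small sketch with illustrative numbers (the 50 GHz channel at 20 dB SNR is an assumption for the example, not a claim about any deployed system) shows why spectral efficiency, rather than raw bandwidth, becomes the constraint:

```python
# The Shannon-Hartley bound referenced above: C = B * log2(1 + SNR).
# Example numbers (a 50 GHz channel at 20 dB SNR) are illustrative.
import math

def capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (20 / 10)                # 20 dB -> 100x linear SNR
c = capacity_bps(50e9, snr)
print(f"ceiling: {c / 1e9:.0f} Gb/s")                 # ~333 Gb/s
print(f"spectral efficiency: {c / 50e9:.2f} b/s/Hz")  # ~6.66
```

Because capacity grows only logarithmically with SNR, once a channel is operating near its bound the remaining levers are more spectrum, shorter reaches or more parallel paths, which is exactly the shift toward distributed compute described above.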
The Network Always Adapts
The other assumption missed by the exaflood papers was that the network adapts. It just does. People who run networks put more capacity in the network for less money, because the people who build the network infrastructure are always working to improve their products.
One market effect I know of is that when discrete systems in the network become harmonized or balanced, there is a lot of money to be made. Look at the fibre channel market: when adapters, drives and fabrics converge around a performance level like 2G or 4G, the market becomes white hot. The same goes for the 1G and 10G optical transmission markets. Today we are in a maturing 10G market, there is a transition market called 40G, but the real magic is going to happen at 100G. At 100G, huge efficiencies start to occur in the network as it relates to I/O, compute process, storage, etc. With the building of huge datacenters, how much bandwidth is required to service 100k servers? These large datacenters are being built for many reasons, such as power, cooling and security, but the one reason that is often not quoted is processing and compute. There have been really innovative solutions to the compute problem that I wrote about before, such as RVBD and AKAM. I look at what the Vudu team did for meshed, stub storage of content on a community of systems. Is this a model for a future wherein numerous smaller data centers look like a community of Vudu devices?
Going forward, in a network world of usage caps, distributed storage and parallel processing, I know one element that will need to be solved: service levels. Commercial and consumer end-users want to get what they are paying for, and service providers do not want to be taken advantage of in the services they are offering. Security and defined service-level agreements will push down to the individual user, just as search and content are being optimized and directed to the group of one from broader demographic groups. Why are there wireless broadband usage caps? Because there is a spectrum problem, and that same problem and solution set is coming to your FTTH connection sooner than you think. Why do you think Google and Amazon charge for compute transactions? Anyone who used a mainframe in the days when you had to sign up for processing time is having flashbacks and wondering what happened to all the punch cards.
The reason Google and Amazon charge for compute is that transactions cost money. The whole digital content revolution (disintermediation, OTT video, music downloads, YT, blah, blah, whatever you want to call it) goes back to one force, and that is transaction economics. The profit margin that can be derived from selling content has declined and continues to decline. Distribution is easier; hence the transaction cost is smaller, resulting in a lower profit margin and thus supporting fewer intermediaries. It does not mean the cost is zero; it just means less. What is the cost to store, compute and transmit content? Answer that, add some profit margin at each step, and you know the future.
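The shrinking-chain point can be made with a toy markup model (all numbers are invented for illustration): each intermediary applies a markup to what it buys, so removing links lowers the final price or frees margin to redistribute among the survivors:

```python
# Toy model of the "chain of commerce": each intermediary applies a
# markup in sequence, so a shorter chain yields a lower retail price.
# All costs and markup rates below are invented for illustration.
def retail_price(cost, markups):
    """Apply each intermediary's markup in sequence."""
    price = cost
    for m in markups:
        price *= (1 + m)
    return price

wholesale = 5.00  # hypothetical cost to produce/license one unit

# Physical-era chain: distributor, wholesaler, retailer
physical = retail_price(wholesale, [0.40, 0.30, 0.50])
# Digital chain: a single platform
digital = retail_price(wholesale, [0.30])

print(f"Physical chain: ${physical:.2f}")  # $13.65
print(f"Digital chain:  ${digital:.2f}")   # $6.50
```

The same unit that supported a $13.65 retail price through three intermediaries supports only $6.50 through one, which is the arithmetic behind the $15 album becoming the $9.99 CD becoming the rented stream.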
The companies providing the tools and systems for analytics around the economic transaction of store, compute and transmit (SCT) are going to be big winners.
I am on my way to CA this week, and while preparing for the trip I have been thinking about network trends and skating to where the puck is going to be, not where it is today. Blogging will be sparse this week until I have had some time over the weekend to digest thoughts from my trip. I was sorting through some old work notebooks, looking at networks designed in the pre-Internet era (1990-1993), the client/server boom and the Internet boom and crash. It is interesting to see how network topologies cyclically shift between two poles: centralized compute and distributed compute. This ebb and flow is often confused with content, but I will hold off discussing content until a later date.
Here are three slides from the Atlas Internet Observatory 2009 Annual Report. I think these three slides are still relevant today because they foreshadow where I think the network is going, but not how the network will address these trends.
If you review the entire Atlas presentation, there is an interesting trend-summary slide stating that the effects of the new internet are:
- Commoditization of IP and Hosting / CDN
  - Drop price of wholesale transit
  - Drop price of video / CDN
  - Economics and scale drive enterprise to “cloud”
- Bigger get bigger (economies of scale)
  - e.g., Google, Yahoo, MSFT acquisitions
- Success of bundling / Higher Value Services
  - Triple and quad play, etc.
- New economic models
  - Paid content (ESPN 360), paid peering, etc.
  - Difficult to quantify due to NDA / commercial privacy
- Direct interconnection of content and consumer
  - Driven by both cost and increasingly performance
In the Atlas presentation the authors state that the Internet is in a “transition from focus on connectivity to content.” Content is a huge driver in the network, but it is the compute point that is changing in the network. I think most people confuse content with compute. Ask yourself where content is stored and how users access it. The answer is a compute function. When I look at all the mega data centers being built and how users are consuming content, that is a direct result of a transaction-economics trend enabled by technology. Looking at these trends and how people are trying to conceptualize them has forced me to go back and look at what I wrote in 2006. Here is an excerpt with a chart:
“The emergence of the pull economic model is driven from the desire to leverage the internet to manage growing uncertainty in the chain of commerce. The conceptual objectives of the pull economic model are focused on exploiting uncertainty by enabling collaboration between the participants involved in an economic transaction to complete the transaction immediately. The resulting structure of increased controls placed upon companies executing push models has been to constrain resources, dictate process, lengthen decision making, and delay economic transactions. In the push model, the demands of the end-user are analyzed and anticipated by a central decision-making process that by definition is at the furthest point of immediate knowledge from the end-user conducting the economic transaction. This is the process by which old regimes are formed from revolutionary companies. Old regimes are formed when they lose control and knowledge of market share.
Pull economic models place the initiative to complete the economic transaction within the dictates of the end-user through collaboration. By rapidly and collaboratively placing the power to complete a transaction at the point in the market wherein transactions occur, it obsolesces the need to anticipate demand by central planning. Braudel observed that long-distance trade produced enormous profits because the chain of commerce supported a long chain in terms of time and density in the number of people required to deliver the goods to the end-user. Long-distance trade provides an interesting base for contrasting the evolution from a push to a pull economic model. “Long distance trade certainly made super profits: it was after all based on the price difference between two markets very far apart, with supply and demand in complete ignorance of each other and brought into contact only by the activities of middlemen. There could only have been a competitive market if there had been plenty of separate and independent middlemen. If, in the fullness of time competition did appear, if super-profits vanished from one line, it was always possible to find them again on another route with different commodities.” [see Braudel, The Wheels of Commerce, page 405]. Braudel's observation that super profits between supply and demand occurring over a great distance were the product of information ignorance implies that the internet and the emerging pull model will couple geographic markets and thus shorten the information gap. Supply and demand will be closely linked, and large variations of price will be limited as global consumers will have relevant, if not near real-time, market data.
How will this affect the creation and destruction of markets on a global basis? Again we look to an observation made by Braudel. “One’s impression then (since in view of the paucity of evidence, impressions are all we have) is that there were always sectors in economic life where high profits could be made, but that these sectors varied. Every time one of these shifts occurred, under the pressure of economic developments, capital was quick to seek them out, to move into the new sector and prosper. Note that as a rule it had not precipitated such shifts. This differential geography of profit is a key to the short-term fluctuations of capitalism, as it veered between the Levant, America, the East Indies, China, the slave trade, etc., or between trade, banking, industry or land.” [see Braudel, The Wheels of Commerce, page 432]. The shifting and variation of profitable economic sectors is the essence of globalization. Instead of occurring over long stretches of time, these shifts will occur rapidly in a pull economic model. The shifts will not happen in ignorance of the market – rather, they will define the market. Companies building businesses for the emerging pull economic model must have the infrastructure, the real-time market data, and the capability to shift their business with the rapidity of these sectoral economic shifts.
Pull economic models are intended to accelerate the pace of transactions by matching the participants to a closely defined set of transaction criteria. For service providers, this is an evolution from packaged service plans to on-demand service plans. An example of this evolution can be prophesied for both cable and wireline service providers, who are feverishly competing to enter each other’s core markets. Cable providers offer their customers a flat-rate package of selected channels. The more money end-users wish to spend, the more television programming they are offered. The premium channels are offered as part of the high-end package. This is a push economic model of service pricing by a cable company. They determine the market-bearing price for basic service packages and then build higher-priced packages that include premium content. This model works well in a market structure in which demand can be planned for and there is no competition, or competition is minimized through market share control. The evolution to a pull economic model will affect the ability of a cable provider to rely upon programming packages. How many of the 200 available channels do end-users really watch? This is where a pull model affects the market structure. End-users will access the specific channels and content they desire, when they want it, or they will download the content and store it locally for future viewing. End-users will not pay for 200 channels when they only watch 35 of them occasionally.” – Six Years that Shook the World, July 2006, pages 334-335
The disintermediation of media (i.e. content) consumption was something I was thinking about in 2005 and wrote about in 2006, before it even had an impressive term to describe it. Today, we can see the rapid collapse of this long chain of commerce in Borders, Blockbuster, etc. I created this chart, which I think summarizes the destruction of the media chain of commerce.
What I was not thinking about in 2006 was the scale of the evolution from distributed compute to centralized compute and how it would affect the design of the network. I just assumed that it would result in another upgrade of the existing network, with appropriate modifications for the advancement of technology: more routers, more switches, more optics, more storage, more compute and more hype. I was short-sighted and I was wrong. I am not in the Cisco exabyte-flood-will-kill-the-network camp; rather, I am in the camp that believes two networks will be built to handle the problem.
The first network is pretty much the network we have today. It is full of human users, and its traffic ebbs and flows with the sun (the daily life cycle of humans). This is the network that will get more routers, switches and storage. Service providers will upgrade backbones and super metros to handle the growth in traffic from local access (FTTH). Companies like Ciena, Juniper, Cisco and others will sell more equipment, as I think this is a ~3 year cycle.
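The "follows the sun" behavior of the human network can be sketched as a simple diurnal demand curve. This is purely illustrative: the function name, the baseline, amplitude and peak-hour parameters are all made-up assumptions, not measurements of any real network.

```python
import math

def diurnal_traffic(hour, baseline_gbps=100.0, amplitude_gbps=80.0, peak_hour=20):
    """Illustrative model of human-driven traffic: demand peaks in the
    evening and bottoms out overnight. All numbers are assumptions."""
    phase = 2 * math.pi * (hour - peak_hour) / 24.0
    return baseline_gbps + amplitude_gbps * math.cos(phase)

# Traffic ebbs and flows with the sun: evening peak, pre-dawn trough.
evening_peak = diurnal_traffic(20)   # maximum load at the assumed peak hour
overnight_low = diurnal_traffic(8)   # minimum load twelve hours later
```

The point of the sketch is the shape, not the numbers: capacity in the human network has to be provisioned for the peak even though the trough sits far below it, which is exactly why providers keep buying more routers, switches and storage.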
There will be another network built, and it will not follow the sun. This will not be a network for humans; it will be a network for machines. It will be a machine-to-machine compute network built between data centers. It will look like a terrestrial version of an under-sea network, designed to maximize the compute and caching flows between data centers. The other network, the legacy network that includes inputs from unpredictable humans, will pull data from this network. The machine-to-machine network is at the beginning of its evolutionary cycle, because as it is built out, the need will arise to position content out of the mega data centers (i.e. move the compute point again). Content will become local, and the machine-to-machine network will begin to evolve into lower compute tiers. Kind of sounds like how SNA networks evolved to client/server. Just a hypothesis on my part.
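The hypothesis above, that content migrates out of mega data centers into lower, more local compute tiers, can be sketched as a tiered lookup. Everything here is hypothetical: the tier names, the content identifiers and the placement are invented for illustration only.

```python
# Hypothetical content placement across compute tiers, closest first.
# In the hypothesized evolution, popular content is pushed out of the
# mega data center toward the edge, while the long tail stays central.
TIERS = [
    ("edge", {"show-101"}),                              # local to users
    ("region", {"show-101", "show-202"}),                # metro/regional tier
    ("mega-dc", {"show-101", "show-202", "show-303"}),   # origin holds all
]

def locate(content_id):
    """Return the name of the lowest (closest) tier holding the content,
    or None if the content does not exist anywhere."""
    for name, holdings in TIERS:
        if content_id in holdings:
            return name
    return None
```

Under this sketch, a request for popular content resolves at the edge without ever touching the machine-to-machine core, while long-tail requests fall back to the mega data center; the same shape is what SNA-to-client/server looked like a generation earlier.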