Next Stop in the Network: Two Networks
I am on my way to CA this week, and while preparing for the trip I have been thinking about network trends and skating to where the puck is going to be, not where it is today. Blogging will be sparse this week until I have some time to digest thoughts from my trip over the weekend. I was sorting through some old work notebooks, looking at networks designed in the pre-Internet era (1990-1993), the client/server boom, and the Internet boom and crash. It is interesting to see how network topologies cyclically shift between two poles: centralized compute and distributed compute. This ebb and flow is often confused with content, but I will hold off on discussing content until a later date.
Here are three slides from the Atlas Internet Observatory 2009 Annual Report. These three slides are still relevant today because they foreshadow where I think the network is going, though not how the network will address these trends.
If you review the entire Atlas presentation, there is an interesting trend-summary slide stating that the effects of the new internet are:
- Commoditization of IP and Hosting / CDN
- Drop price of wholesale transit
- Drop price of video / CDN
- Economics and scale drive enterprise to “cloud”
- Bigger get bigger (economies of scale)
- e.g., Google, Yahoo, MSFT acquisitions
- Success of bundling / Higher Value Services
- Triple and quad play, etc.
- New economic models
- Paid content (ESPN 360), paid peering, etc.
- Difficult to quantify due to NDA / commercial privacy
- Direct interconnection of content and consumer
- Driven by both cost and increasingly performance
In the Atlas presentation the authors state that the Internet is in “transition from focus on connectivity to content.” Content is a huge driver in the network, but it is the compute point that is changing in the network. I think most people confuse content with compute. Ask yourself where content is stored and how users access it. The answer is a compute function. When I look at all the mega data centers being built and how users are consuming content, I see the direct result of a transactional economic trend enabled by technology. Looking at these trends and how people are trying to conceptualize them has forced me to go back and look at what I wrote in 2006. Here is an excerpt with a chart:
“The emergence of the pull economic model is driven from the desire to leverage the internet to manage growing uncertainty in the chain of commerce. The conceptual objectives of the pull economic model are focused on exploiting uncertainty by enabling collaboration between the participants involved in an economic transaction to complete the transaction immediately. The resulting structure of increased controls placed upon companies executing push models has been to constrain resources, dictate process, lengthen decision making, and delay economic transactions. In the push model, the demands of the end-user are analyzed and anticipated by a central decision-making process that by definition is at the furthest point of immediate knowledge from the end-user conducting the economic transaction. This is the process by which old regimes are formed from revolutionary companies. Old regimes are formed when they lose control and knowledge of market share.
Pull economic models place the initiative to complete the economic transaction within the dictates of the end-user through collaboration. By rapidly and collaboratively placing the power to complete a transaction at the point in the market wherein transactions occur, it obsolesces the need to anticipate demand by central planning. Braudel observed that long-distance trade produced enormous profits because the chain of commerce was long in terms of time and dense in the number of people required to deliver the goods to the end-user. Long-distance trade provides an interesting base for contrasting the evolution from a push to a pull economic model. “Long distance trade certainly made super profits: it was after all based on the price difference between two markets very far apart, with supply and demand in complete ignorance of each other and brought into contact only by the activities of middlemen. There could only have been a competitive market if there had been plenty of separate and independent middlemen. If, in the fullness of time competition did appear, if super-profits vanished from one line, it was always possible to find them again on another route with different commodities.” [see Braudel, The Wheels of Commerce, page 405]. Braudel’s observation that super profits between supply and demand occurring over a great distance were the product of information ignorance implies that the internet and emerging pull model will couple geographic markets and thus shorten the information gap. Supply and demand will be closely linked, and large variations of price will be limited as global consumers will have relevant, if not near real-time, market data.
How will this affect the creation and destruction of markets on a global basis? Again we look to an observation made by Braudel. “One’s impression then (since in view of paucity of evidence, impressions are all we have) is that there were always sectors in economic life where high profits could be made but that these sectors varied. Every time one of these shifts occurred, under the pressure of economic developments, capital was quick to seek them out, to move into the new sector and prosper. Note that as a rule it had not precipitated such shifts. This differential geography of profit is key to short-term fluctuations of capitalism, as it veered between the Levant, America, the East Indies, China, the slave trade, etc., or between trade, banking, industry or land.” [see Braudel, The Wheels of Commerce, page 432]. The shifting and variation of profitable economic sectors is the essence of globalization. Instead of the shifts occurring over time, they will occur rapidly in a pull economic model. The shifts will not be in ignorance of the market – rather, they will define the market. The companies building businesses for the emerging pull economic model must have infrastructure, real-time market data, and the capability to shift their business with the rapidity of the sectorial economic shifts.
Pull economic models are intended to accelerate the pace of transactions by matching the participants to a closely defined set of transaction criteria. For service providers, this is an evolution from packaged service plans to on-demand service plans. An example of this evolution can be prophesied for both cable and wireline service providers, who are feverishly competing to enter each other’s core markets. Cable providers offer their customers a flat-rate package of selected channels. The more money the end-user wishes to spend, the more television programming they are offered. The premium channels are offered as part of the high-end package. This is a push economic model of service pricing by a cable company. They determine the market-bearing price for basic service packages and then build higher-priced packages that include premium content. This model works well in a market structure in which demand can be planned for and there is no competition, or competition is minimized through market-share control. The evolution to a pull economic model will affect the ability of a cable provider to rely upon programming packages. How many of the 200 available channels do end-users really watch? This is where a pull model affects the market structure. End-users will access the specific channels and content they desire, when they want it, or they will download the content and store it locally for future viewing. End-users will not pay for 200 channels, of which they watch perhaps 35 occasionally.” – Six Years that Shook the World, July 2006, pages 334-335
The disintermediation of media (i.e., content) consumption was something I was thinking about in 2005 and wrote about in 2006, before it even had an impressive term to describe it. Today we can see the rapid collapse of this long chain of commerce in Borders, Blockbuster, etc. I created this chart, which I think summarizes the destruction of the media chain of commerce.
What I was not thinking about in 2006 was the scale of the evolution from distributed compute to centralized compute and how this would affect the design of the network. I just assumed that it would result in another upgrade of the existing network with appropriate modifications for the advancement of technology. More routers, more switches, more optics, more storage, more compute and more hype. I was short-sighted and I was wrong. I am not in the Cisco “exabyte flood will kill the network” camp; rather, I am in the camp that believes two networks will be built to handle the problem.
The first network is pretty much the network we have today. It is full of human users, and traffic ebbs and flows with the sun (the daily life cycle of humans). This is the network that will get more routers, switches, and storage. Service providers will upgrade backbones and super metros to handle the growth in traffic from local access (FTTH). Companies like Ciena, Juniper, Cisco, and others will sell more equipment; I think this is roughly a 3-year cycle.
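The follow-the-sun pattern can be sketched numerically. Below is a minimal Python sketch of the contrast between the two networks; the sinusoidal shape, the 20:00 peak hour, and the load levels are my own illustrative assumptions, not measurements:

```python
import math

def human_traffic(hour, base=0.2, peak=1.0):
    """Hypothetical diurnal load: follows the sun, peaking at hour 20 (8pm)."""
    # Cosine centered on the peak hour, scaled into the range [base, peak].
    phase = math.cos(2 * math.pi * (hour - 20) / 24)
    return base + (peak - base) * (phase + 1) / 2

def machine_traffic(hour, level=0.8):
    """Hypothetical machine-to-machine load: flat, independent of the sun."""
    return level

for h in (8, 14, 20):
    print(f"{h:02d}:00  human={human_traffic(h):.2f}  machine={machine_traffic(h):.2f}")
```

The human curve bottoms out twelve hours from its peak, while the machine-to-machine curve stays constant around the clock, which is the whole point of building it separately.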
There will be another network built, and it will not follow the sun. This will not be a network for humans; it will be a network for machines. This machine-to-machine compute network will be built between data centers. It will look like a terrestrial version of an under-sea network, designed to maximize the compute and caching flows between data centers. The other network, the legacy network that includes inputs from unpredictable humans, will pull data from this network. The machine-to-machine network is at the beginning of its evolutionary cycle, because as it is built out, the need will arise to position content out of the mega data centers (i.e., move the compute point again). Content will become local, and the machine-to-machine network will begin to evolve into lower compute tiers. It sounds a bit like how SNA networks evolved to client/server. Just a hypothesis on my part.
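The drift of content into lower compute tiers can be illustrated with a toy cache hierarchy. This is a hypothetical sketch of the idea, not any real CDN's design; the tier names and the pull-and-cache behavior are assumptions of mine:

```python
class Tier:
    """Toy content tier: a local store that falls back to an upstream tier."""

    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream
        self.store = {}

    def get(self, key):
        # Serve from the local store if the content has already migrated here.
        if key in self.store:
            return self.store[key], self.name
        if self.upstream is None:
            raise KeyError(key)
        # Otherwise pull from the upstream tier and cache locally, mirroring
        # content moving out of the mega data center toward the edge.
        value, served_by = self.upstream.get(key)
        self.store[key] = value
        return value, served_by

mega = Tier("mega-dc")
mega.store["video-42"] = b"film bytes"
local = Tier("local-pop", upstream=mega)

_, first = local.get("video-42")   # first request is pulled from the mega data center
_, second = local.get("video-42")  # subsequent requests are served from the local tier
print(first, second)
```

The first request crosses the machine-to-machine network; every later request for the same content is answered locally, which is how the compute point moves again.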