Let us start with a question: does service provider bandwidth matter? Perhaps a better question would be: is service provider bandwidth a meaningful problem to work on? I think it does matter, but I am not certain it is meaningful. This post is not a scientific study and it should not be construed as the start of a working paper. It is really a summary of my observations as I try to understand the significance of the messaging in the broader technology ecosystem. I sometimes call these posts framing exercises; I am trying to organize and analyze disparate observations, urban myths and the inductive logical failings of doctrine.
Frame 1: Bandwidth Pricing Trend
There is no debate on this point: the price trend of bandwidth is more for less. Bandwidth is deflationary until someone shows me a data set that proves it is inflationary. I agree that bandwidth is not ubiquitous and that it is unevenly distributed, but that falls into the category of: life is not fair; get used to it. In areas with a concentration of higher-wage-earning humans organized into corporations with the objective of being profit centers, there seems to be an abundance of bandwidth, and the trend in bandwidth is deflationary. Here are a few links, followed by a rough back-of-envelope calculation:
- Dan Rayburn, “Cost To Stream A Movie Today = Five Cents; in 1998 = $270”: “In 1998 the average price paid by content owners to deliver video on the web was around $0.15 per MB delivered. That’s per bit delivered, not sustained. Back then, nothing was even quoted in GB or TB of delivery as no one was doing that kind of volume when the average video being streamed was 37Kbps. Fast forward to today where guys like Netflix are encoding their content at a bitrate that is 90x what it was in 1998. To put the rate of pricing decline in terms everyone can understand, today Netflix pays about five cents to stream a movie over the Internet.”
- GigaOm: See the 2nd Chart.
- TeleGeography: See the chart “MEDIAN GIGE IP TRANSIT PRICES IN MAJOR CITIES, Q2 2005-Q2 2011”
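To make the scale of the decline concrete, here is a minimal back-of-envelope sketch in Python. The movie length and the arithmetic inputs are illustrative assumptions pulled loosely from the Rayburn quote above, not his exact inputs, so the results land in the same ballpark as his headline figures rather than exactly on them.

```python
# Back-of-envelope sketch of the price decline described above.
# All inputs are illustrative assumptions, not Rayburn's exact figures.

def delivery_cost(bitrate_kbps, duration_min, price_per_mb):
    """Cost to deliver one stream: bitrate x duration, converted to MB, times price per MB."""
    megabytes = (bitrate_kbps / 8.0) * (duration_min * 60) / 1024.0
    return megabytes * price_per_mb

MOVIE_MINUTES = 120                    # assumed movie length
PRICE_1998 = 0.15                      # $ per MB delivered, per the quote
BITRATE_1998 = 37                      # Kbps, per the quote
BITRATE_TODAY = BITRATE_1998 * 90      # ~90x higher encoding today, per the quote

# A movie at today's bitrate, if it were delivered at 1998 per-MB pricing:
at_1998_prices = delivery_cost(BITRATE_TODAY, MOVIE_MINUTES, PRICE_1998)

# The same movie reportedly costs about five cents to stream today,
# which implies a per-MB price thousands of times lower:
megabytes_today = (BITRATE_TODAY / 8.0) * (MOVIE_MINUTES * 60) / 1024.0
implied_price_today = 0.05 / megabytes_today

print(f"Today's movie at 1998 prices: ~${at_1998_prices:,.0f}")
print(f"Implied per-MB price today:   ~${implied_price_today:.6f}")
print(f"Per-MB price decline:         ~{PRICE_1998 / implied_price_today:,.0f}x")
```

Whatever the exact inputs, the decline in the per-MB price works out to three to four orders of magnitude, which is the point of the frame.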
Frame 2: Verizon Packet/Optical Direction
Here is a presentation by Verizon at the Infinera DTN-X product briefing day. The theme of the presentation is that the network is exhausted due to 4G LTE, video, FTTx, etc., and that this is driving the need for more bandwidth, including 100G in the metro and 400G and even terabit Ethernet in the core. I have heard these arguments for terabit Ethernet before; I am firmly in the minority that this is a network design/traffic engineering problem, not a bandwidth problem to be solved. It took the world fifteen years to move from 1G to 10G; I wonder how long it will take to get to terabit Ethernet.
Frame 3: Are the Design Assumptions Incorrect?
When I look at the network, I think of it as a binary solution set: it can connect and it can disconnect. For many decades we have been building networks based on the wrong design assumptions, and I have been posting about these errors in prior posts. Here is a link to a cloud hosting company. I know this team, and I know their focus has been the highest IOPS in their pod architecture. We could use any cloud provider to make the point, but I am using Cloud Provider USA because of the simplicity of their pricing page. All a person has to do is make five choices: DC location, CPU cores, memory, storage and IP addresses. Insert credit card and you are good to go. Did you notice what is missing? Please tell me you noticed what is missing; of course you did. The sixth choice is not available yet: network bandwidth, the on-or-off network function. The missing value is not the fault of the team at Cloud Provider USA; it is the fault of those of us who have been working in the field of networking. Networking has to be simple: on or off, and at what bandwidth. I know it is that simple in some places, but my point is that it needs to be as easily configured and presented in the same manner as the DC-CPU-Memory-Storage-IPs purchase options are presented on the Cloud Provider website, as sketched below. My observation is that the manner in which we design networks results in a complexity of design that is prohibitive to ease of use.
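Here is a minimal sketch of what that sixth choice could look like if network bandwidth were priced and ordered like any other resource. The class, field names and unit prices are all hypothetical, not Cloud Provider USA's actual API or price list; the point is only that "on or off, and at what bandwidth" should be one more line item on the order form.

```python
# Hypothetical provisioning order: the five choices the pricing page exposes
# today, plus the missing sixth one. Illustrative sketch only, not a real API.

from dataclasses import dataclass

@dataclass
class CloudOrder:
    datacenter: str        # choice 1: DC location
    cpu_cores: int         # choice 2: CPU cores
    memory_gb: int         # choice 3: memory
    storage_gb: int        # choice 4: storage
    public_ips: int        # choice 5: IP addresses
    # The missing sixth choice: network on/off and at what bandwidth.
    network_enabled: bool = True
    bandwidth_mbps: int = 100

    def monthly_price(self) -> float:
        """Toy pricing model with made-up unit prices, just to show
        bandwidth priced like any other resource."""
        price = (self.cpu_cores * 10.0 +
                 self.memory_gb * 5.0 +
                 self.storage_gb * 0.10 +
                 self.public_ips * 2.0)
        if self.network_enabled:
            price += self.bandwidth_mbps * 0.50
        return price

order = CloudOrder(datacenter="dallas", cpu_cores=4, memory_gb=8,
                   storage_gb=200, public_ips=1, bandwidth_mbps=250)
print(f"Estimated monthly price: ${order.monthly_price():.2f}")
```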
Frame 4: Cisco Cloud Report
I think most people have read Cisco’s Cloud Report. Within the report there are all sorts of statistics and charts that go up and to the right. I want to focus on a couple of points they make in the report:
- “From 2000 to 2008, peer-to-peer file sharing dominated Internet traffic. As a result, the majority of Internet traffic did not touch a data center, but was communicated directly between Internet users. Since 2008, most Internet traffic has originated or terminated in a data center. Data center traffic will continue to dominate Internet traffic for the foreseeable future, but the nature of data center traffic will undergo a fundamental transformation brought about by cloud applications, services, and infrastructure.”
- “In 2010, 77 percent of traffic remains within the data center, and this will decline only slightly to 76 percent by 2015. The fact that the majority of traffic remains within the data center can be attributed to several factors: (i) Functional separation of application servers and storage, which requires all replication and backup traffic to traverse the data center (ii) Functional separation of database and application servers, such that traffic is generated whenever an application reads from or writes to a central database (iii) Parallel processing, which divides tasks into multiple smaller tasks and sends them to multiple servers, contributing to internal data center traffic.”
Here is my question based on the statistic above: if 77% of traffic stays in the data center, what is the compelling reason to focus on the remaining 23%?
Frame 5: Application Aware and the Intelligent Packet Optical Conundrum
I observe various transport-oriented equipment companies, as well as service providers (i.e. their customers) and CDN providers (i.e. quasi-service-provider competitors), discussing themes such as application-aware and intelligent packet optical solutions. I do not really know what is meant by these labels. They must be marketing terms, because I cannot find the linkage between applications and IP transit, lambdas, optical bandwidth, etc. To me, a pipe is a pipe is a pipe.
The application is in the data center; it is not in the network. Here is a link to the Verizon presentation at the SDN Conference in October 2011. The single most important statement in the entire presentation occurs on slide 11: “Central Offices evolve to Data Centers, reaping the cost, scaling and service flexibility benefits provided by cloud computing technologies.” In reference to my point in Frame 3, networks and the network elements really do not require a lot of complexity. I would argue that the dumber the core, the better the network. Forget about being aware of my applications; just give me some bandwidth and some connectivity to where I need to go. Anything more than bandwidth and connectivity complicates the process.
Frame 6: MapReduce/Application/Compute Clustering Observation
Here is the conundrum for all the people waiting for the internet to break and for bandwidth consumption to force massive network upgrades. When we type a search term into a Google search box, it generates a few hundred kilobytes of traffic upstream to Google and downstream to our screen, but inside Google’s data architecture a lot more traffic is generated between servers. That is the result of MapReduce and application clustering and processing technologies, and it links back to the 77% statistic in Frame 4. Servers transmitting data inside the data center really do not need to be aware of the network. They just need to be aware of the routes, the paths to other servers or devices, and they do not need a route to everywhere, just to where they need to go.
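A minimal sketch of that fan-out, assuming a toy word-count-style MapReduce over in-process “shards”: the external request and response are a handful of bytes, while the intermediate shuffle between the map and reduce stages moves far more data inside the cluster. The shard contents, worker counts and byte estimates are made up for illustration; this is not Google’s implementation.

```python
# Toy illustration of the fan-out behind a single search-style request:
# a small external query triggers much larger server-to-server traffic
# (map -> shuffle -> reduce). All sizes here are illustrative only.

from collections import defaultdict

DOCUMENTS = [  # stand-ins for document shards held by different servers
    "packet optical transport core metro",
    "bandwidth is deflationary in the core",
    "data center traffic stays in the data center",
] * 100  # pretend each "server" holds many documents

def map_phase(doc):
    """Each mapper emits (word, 1) pairs for its shard."""
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    """Reducers sum the counts per word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

query = "core"                                                      # external request: a few bytes
intermediate = [kv for doc in DOCUMENTS for kv in map_phase(doc)]   # internal shuffle traffic
result = reduce_phase(intermediate)

external_bytes = len(query) + len(str(result.get(query, 0)))
internal_bytes = sum(len(word) + 8 for word, _ in intermediate)     # rough size of shuffled pairs

print(f"external request+response: ~{external_bytes} bytes")
print(f"internal shuffle traffic:  ~{internal_bytes:,} bytes")
```

Scale the same pattern up to real index sizes and real cluster fan-out and you get the 77% east-west traffic figure: the heavy lifting stays between servers, behind the data center wall.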
Frame 7: What We Value Is Different from What We Talk About
Take a look at the chart to the left. I put a handful of public companies on the list, and I am not including balance sheets, debt and other financial metrics. All I am pointing out is that companies focused on the enterprise (i.e. the data center) enjoy higher margins and richer valuations than companies that focus on the service provider market. Why is that true? Is it a result of the 77% problem? Is it a result of the complexity of the market requirements imposed by the service provider customer base? Is it a result of the R&D requirements to sell to the service provider market?
Frame 8: Do We Need a New Network Architecture?
I have been arguing that we need a new network architecture for some time, but I think the underlying drivers will come from unexpected places. It was not long ago that we had wiring closets, and the emergence of the first blade servers in the 2001-2002 time period started to change how data centers were built. When SUN was at its peak, it was because it made the best servers. It was not long ago that the server deployment philosophy was to buy the most expensive, highest performance servers from SUN that you could afford. If you could buy two, that was better than one. The advent of cheap servers (blades and racks), virtualization and clustering applications changed the design rules. Forget about buying one or two high-end servers; buy 8 or 10 cheap ones. If a couple go down on the weekend, worry about it on Tuesday. I think the same design trend will occur in the network. It will start in the DC and emerge into the interconnect market.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **