How the Network and an F-4 Phantom II are Alike

I was reading this Cisco blog post the other day about how the network is the architectural foundation of the data center and the key element for cloud computing, application strategy and hosting services.  As I read it, it occurred to me that the network we have today is a lot like the F-4 Phantom II.

The design principle for the F-4 was to find the biggest engine possible, wrap an airframe around it and make the plane support every possible mission requirement by every service branch in the world.  I will be the first to admit that growing up I thought the F-4 was a cool looking plane.  Later in life I learned that it was cool looking because it was not very airworthy.  All the wing modifications, tails, fins, knobs, bulges and doodads were added to make it controllable in flight.  Yes, the engines were the biggest money could buy, but the aerodynamic characteristics of the plane were terrible.

I think the network we have today is pretty much the F-4 Phantom II of networks.  The network we have today has had a very long service life and we have been living the Moore's Law dream for a good thirty years, but that is all ending, as I have posted before.  If the network we have today is the architectural foundation of the future, we are in big trouble.  We have been applying the same design principle to the network that the USAF did to the F-4 Phantom II.  Earlier I posted that this was because of doctrine.  The network we have today is a collection of band-aids, doodads, modifications, knobs and thingamajiggers, all intended to make it fly; I mean work.  We have data compression, load balancers, firewalls, spanning tree, VLANs, data de-duplication, WAN acceleration, TRILL, Virtual Port-Channels (vPC), Overlay Transport Virtualization (OTV), Locator/ID Separation Protocol (LISP), FabricPath, FibreChannel-over-Ethernet (FCoE), Virtual Security Gateway (VSG), OSPF, RIP 1, RIP 2, IGRP, middleware, OpenFlow, etc.

I am ready to throw it all out.  When I look at the network we have today, I see the F-4 Phantom II of networks.  Both have had a long service life and both were continuously modified to meet changing roles, but the network today, like the F-4 in its time, is a performance loser.  The United States addressed the flight envelope limitations of the F-4 by designing the first airplane of the US jet age built solely for the air superiority mission.  That plane became the F-15.

We need a new network, and we need to start with two design principles.  The first is to be guided by the principle of end-to-end arguments in system design and push complexity to the edge; the second is to accept that the network does only one of two actions: connect and disconnect.  All the protocols and techniques I listed in the third paragraph (which is about 1 bps of all the stuff out there) were created because, as networking people, we failed to heed the lessons of the aforementioned principles.  A rough sketch of what these two principles look like follows below.  I have posted about this before here and here, and this post is an extension of those thoughts, because I am continually surprised that people think the network is more important than the application and the compute point, and that the way to fix the network is to add more stuff to make it work better.
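To make the two principles a little more concrete, here is a minimal, purely illustrative sketch in Python.  The names (Fabric, Endpoint and so on) are my own invention for this post, not any real product or API; the point is simply that the core only connects and disconnects, while edge functions such as compression live in the endpoints, per the end-to-end argument.

```python
# Illustrative sketch only: a fabric whose only verbs are connect and
# disconnect, with application-level concerns (compression, integrity,
# retries) pushed to the edge.  All names here are hypothetical.

import zlib


class Fabric:
    """The network core: it only connects and disconnects endpoints."""

    def __init__(self):
        self._links = {}  # (src, dst) -> True while connected

    def connect(self, src, dst):
        self._links[(src, dst)] = True

    def disconnect(self, src, dst):
        self._links.pop((src, dst), None)

    def deliver(self, src, dst, payload):
        # No shaping, balancing or inspection in the core; just forwarding.
        if (src, dst) not in self._links:
            raise ConnectionError(f"{src} is not connected to {dst}")
        return payload


class Endpoint:
    """The edge: per the end-to-end argument, the smart functions live
    here, in the application and compute point, not in the fabric."""

    def __init__(self, name, fabric):
        self.name = name
        self.fabric = fabric

    def send(self, dst, data: bytes):
        compressed = zlib.compress(data)   # an edge concern, not a middlebox
        return self.fabric.deliver(self.name, dst.name, compressed)

    def receive(self, payload: bytes) -> bytes:
        return zlib.decompress(payload)    # the peer undoes the edge function


if __name__ == "__main__":
    fabric = Fabric()
    a, b = Endpoint("app-server", fabric), Endpoint("db-server", fabric)
    fabric.connect("app-server", "db-server")
    wire = a.send(b, b"SELECT * FROM orders")
    print(b.receive(wire))   # b'SELECT * FROM orders'
    fabric.disconnect("app-server", "db-server")
```

In a real network the deliver step is packet forwarding and the edge functions are far richer, but the division of labor is the argument: the core stays dumb and fast, the edge carries the complexity.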

I think this is just crazy talk from people who are buried so deep in the networking pit that they do not realize they are still using GeoCities and are wondering where everyone has gone.  There is a new network rising, and instead of connecting a device to all devices and then using 500 tools, protocols and devices to break, shape, compress, balance and route those connections between devices, we are going to have a network that connects the compute elements as needed.  We are not going to build a network to do all things; we are going to build a network that facilitates the applications at the compute point, thus pushing complexity to the edge.  I think of it as the F-15, not the F-4, and with this new network we will need fewer consultants to explain how it works.

 

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. ** 

Are Data Centers the Last Mile of the Twenty-Tens?

In a conversation the other day, a random supposition was thrown out for discussion as to whether the data center (DC) will become the last mile (LM) of the future.  Most Americans have a choice between service providers for their LM needs.  Typically there is an RBOC or ILEC, usually an MSO (i.e. cable company) and a wireless broadband carrier.  In my home location I have one RBOC (VZ), two MSOs (Comcast and RCN, who is an overbuilder) and seven wireless broadband choices: ATT, VZ, Sprint, MetroPCS, Leap, Clearwire and T-Mobile.  I realize that the wireless BB offerings vary from EVDO to 4G.  For this post I am going to put aside the wireless BB options, as these services typically come with usage caps, and focus on the three LM options, which for me are really four because VZ offers FTTH (FiOS) or DSL and Comcast and RCN offer DOCSIS options.

As a side note, the current Administration is really misguided in its desire to block the T-Mobile acquisition by AT&T.  My only conclusion is that American business leaders are not allowed to run their businesses without the approval of the Administration.  This all goes back to Brinton and the revolutionary process that we are working through.  If AT&T wanted to declare C11, default on their bonds and give their unions 18% of the company in a DIP process, the Administration would have probably approved the transaction in a day; or maybe if ATT were a solar company they would get the Administration as an investment partner.  T-Mobile will now wither away because their parent company will be unwilling to invest in the US with so much investment required in their home country for FTTH builds and for managing through or reserving for European unity issues.  I say this as a T-Mo customer since 1998, although I recently decided to switch to an iPhone and ATT.  Anyone want to wager on the condition of T-Mobile in 3-5 years?  One other point: having some weak wireless providers in the market is not a benefit to the consumer.

When the internet connection craze of the 1990s started to move from dialup to always-on broadband connections, the LM became the anchor of market share for incumbent service providers.  This was made clear to Congress in 1998 when Charles J. McMinn, then CEO of Covad, testified before the U.S. Senate Commerce, Science, and Transportation Subcommittee on Communications and said, “Failing to ensure a competitive environment would condemn the deployment of crucial next-generation digital communication services to the unfettered whims of the ILECs; precisely the opposite of what Congress intended sections of the Telecommunications Act of 1996 to accomplish.”

As I posted earlier here, I see the evolution of the DC and the cloud hype not going quite the way most people expect.  The cloud will be a deflationary trend for the market in the same way smartphones and higher capacity connection speeds were for the mobile market.  I have posted before here and here that the broadband market has clearly seen these deflationary pressures.  As we move deeper into the twenty-tens, will the DC provide a competitive anchor in the manner in which the LM did for incumbent service providers in the 1990s and 2000s?

I see the DC market evolving in three forms.  The mega warehouse-scale DCs that Google, Apple, Amazon, Microsoft and others are building are for the consumer market.  This is the market for smartphones, tablets, computers, DVRs, media libraries, applications and games; this is our personal digital footprint.  That is a big market.  The second market will be the DC for the SMB market, focused on business applications.  I call this the commercial tier, which starts at the point at which a company cannot or does not want to own its IT infrastructure, such as data centers.  As I wrote the other day, there are many reasons why a corporation wants to use a private cloud or private DC over the public cloud and a public DC.  I think this market is the smallest of the three markets.

The third market is the F2000 or F5000.  I am not really sure, and it might be as small as the F500 or F1000.  This is the market that wants to use cloud technologies, utility computing and various web-based technologies internally, within the control of its own IT assets.  This is the primary commercial market of the future.  Futurists think that within twenty years or so the private DC/cloud market will collide with the consumer DC/cloud market.  Maybe it does, maybe it does not, but I know they will not collide in the next five years.

My answer to the question I posed at the start is that I think the DC/cloud will be an important component for accessing and anchoring the consumer market.  Companies will be forced to build their own DC/cloud asset or outsource to a cloud provider.  The example I used in an earlier post was NFLX using the AWS infrastructure, which you can find here.  Over time this strategy could be an issue depending on the deflationary trend of the market.  It will be deflationary if it is easy for anyone and everyone to do, as the chain of commerce shrinks.  Again it goes back to the lessons of Braudel.  In the SMB market, I think the RBOCs/ILECs roll up this space.  It will be the CLEC demise all over again, as pure cloud providers will not be able to support the ecosystem required to sell to the SMB market.  In the F500-2500 market, I think companies will want to retain control of assets for a long time, and this desire is deeply rooted in the IT culture of the ~F2500 market.  The cultural roots of owning and retaining IT assets go back to the introduction of the S/360 mainframe by IBM in 1965.  Behaving in a specific manner for forty-six years is a reinforced habit that is hard to break when corporations are flush with cash and IT is considered a competitive asset when deployed well.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. **