Framing Exercise: What if we Turned the Network Off?

While I was out at VMworld, I was telling my colleague @cloudtoad about how I learned to sell multi-protocol networking to SNA shops.  That conversation started me thinking about the network.  Since then, I have been reading some recently issued RFXs that we need to respond to, and they led me to an interesting framing exercise that I thought I would share on the blog.

Notebook 12.18.12

The last few months have been a blistering pace at Plexxi, and it has impacted my time to write.  Writing is important to me as it is my method of thinking in depth without the interruptions of email, calls, texts and tweets.  Outside my window a Biblical rain is falling and I have Zac Brown playing.  As with past notebook entries, here is a collection of topics I have been reading and thinking about over the past few weeks.

Infrastructure as a Service

Coming out of VMworld and talking to a lot of cloud providers (CPs) over the past few months, I am seeing many more RFXs and requests for presentations around network designs to support higher capacity compute and storage requirements.  I am not referring to CPs selling virtualized servers to the SMB market; I am referring to new cloud providers focused on IaaS.  These companies have customers that require large compute clusters (20k, 40k, 60k, 100k cores) for bandwidth-intensive compute applications.  The network designs are straightforward, pairing high-end compute with elastic storage to target pharma/bio-tech, quantitative analysis and research applications.  Many have dedicated compute and storage resources for compliance requirements.  These CPs are not looking for low latency network switches.  They are looking for networks that can dynamically configure high capacity bisectional bandwidth on demand to cluster compute and then reconfigure the network to support data replication requirements.
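To frame what “high capacity bisectional bandwidth on demand” means in practice, here is a minimal sketch (my own illustration, not drawn from any RFX) that estimates the bisection bandwidth and oversubscription of a conventional two-tier leaf/spine fabric; every topology parameter below is an assumption chosen for the example.

```python
# Hypothetical sketch: bisection bandwidth of a static two-tier leaf/spine fabric.
# All parameters are assumptions chosen for illustration only.

def leaf_spine_bisection(num_leaves, uplinks_per_leaf, uplink_gbps,
                         hosts_per_leaf, host_gbps):
    """Return (bisection bandwidth in Gbps, oversubscription ratio)."""
    # Worst-case cut traffic must cross the leaf-to-spine uplinks.
    bisection_gbps = (num_leaves * uplinks_per_leaf * uplink_gbps) / 2
    # Oversubscription: host-facing capacity vs. uplink capacity per leaf.
    oversub = (hosts_per_leaf * host_gbps) / (uplinks_per_leaf * uplink_gbps)
    return bisection_gbps, oversub

# A 20k-core cluster, assuming 16 cores per server and 40 servers per leaf.
servers = 20_000 // 16
leaves = servers // 40
bw, ratio = leaf_spine_bisection(leaves, uplinks_per_leaf=4, uplink_gbps=40,
                                 hosts_per_leaf=40, host_gbps=10)
print(f"{leaves} leaves, bisection ~{bw:,.0f} Gbps, oversubscription {ratio:.1f}:1")
```

The point of the exercise is that a static fabric fixes these numbers at design time; the CPs described above want the ratio to change on demand as clusters form for a compute job and then shift to replication.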

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. ** 

DEC, OpenVMS, Miles Davis and Being Swayed by the Cool

I joined my first startup in 1989.  I was the fourteenth employee.  Down the street in the Old Mill was the headquarters of Digital Equipment Corporation (DEC).  They had 118,400 employees; that was their peak employment year.  In the late 1980s, DEC was dominating the minicomputer market with their proprietary, closed, custom software.  DEC was a cool company and the fifth company to register a .com address (dec.com), a decade before Netscape would go public.  DEC was one of the first real networking companies outside of the world of SNA.  Radia Perlman, inventor of spanning tree, worked at DEC.  Most recently her continued influence on networking can be seen in the development of TRILL.

The continued rise in microprocessor capabilities would prove to be an insurmountable challenge for DEC.  Over the years, DEC had built a massive, closed operating system with a non-extensible control plane, extended across proprietary hardware.  This architecture would be eclipsed by the workstation and client/server evolutions.

To counter these threats, DEC considered opening their software ecosystem by adding extensibility and programmability, including a standardized interoperability mechanism (API).  The idea of opening the DEC software ecosystem would culminate in 1992 when DEC announced a significant update to their closed, proprietary operating system.  The new software release would be called OpenVMS, or Open Virtual Memory System.  The primary objective of OpenVMS was to allow many of the different technology directions in the market to become one with the DEC ecosystem.  It made a lot of sense.  In 1992, workstations were the hot emerging trend of the market and the Internet was only two years removed from ARPANET control.  Windows 3.0 was two years old, and anyone who used Win 3.0 knows it was a huge improvement over 2.1.  The rack server was a decade away.  Some companies still chose OS/2 over Windows.  The Apple Newton (i.e. the iPhone of its day) was a year away from release.  Here is a summary of DEC’s OpenVMS release:

—————————————————

OpenVMS is a multi-user, multiprocessing, virtual memory-based operating system (OS) designed for use in time sharing, batch processing, real-time (where process priorities can be set higher than OS kernel jobs), and transaction processing. It offers high system availability through clustering, or the ability to distribute the system over multiple physical machines. This allows the system to be “disaster-tolerant” against disasters that may disable individual data-processing facilities. VMS also includes a process priority system that allows for real-time processes to run unhindered, while user processes get temporary priority “boosts” if necessary.

OpenVMS commercialized many features that are now considered standard requirements for any high-end server operating system. These include:

—————————————————

In 1998, six years after announcing OpenVMS, DEC was acquired by Compaq, a personal computer company.

/wrk

Global Cloud Networking Survey

I think the latest Cisco Global Cloud Networking Survey has flown under the radar.  I read a few articles on it, but for the most part I think it was ignored amid all the Interop news.  It is a really interesting report and summarizes a lot of what I see and hear in the world of IT.  I have been posting about the frustrations of IT leaders with the state of the network.  Despite what the skeptics have written about SDN, when you talk to IT leaders and then read a report like Cisco’s Global Cloud Networking Survey, only one conclusion is possible: the network is f’ing broken.  Two additional corollaries: (1) all of us in the networking industry have had a hand in creating the mess and (2) the first people to innovate and fix the network will be the winners.  The networking industry is divided into two groups: those perpetuating the mess and those who are trying to fix it.  Here are some choice quotes from the report:

  • Almost two in five of those surveyed said they would rather get a root canal, dig a ditch, or do their own taxes than address network challenges associated with public or private cloud deployments.
  • More than one quarter said they have more knowledge of how to play Angry Birds, or how to change a spare tire, than of the steps needed to migrate their company’s network and applications to the cloud.
  • Nearly one quarter of IT decision makers said that over the next six months, they are more likely to see a UFO, a unicorn or a ghost before they see their company’s cloud migration starting and finishing.
  • More than half of IT decision makers said they have a better overall application experience at home with their personal networks than they do at work.

Thinking about unicorns and six months, I spent some time listening to a Light Reading webinar on Evolving the Data Center for Critical Cloud Success.  On slide 11 the presenters have “A Facts-based Reality Check for Cloud Delivery,” which includes the following facts about the “Largest Live Test Bed in The Industry:”

  • 6 Months of Planning
  • 8 Weeks of On-Site Testing
  • 25 Test Suites Across DC, Network and Applications
  • $75M Equipment in the Test
  • 80 Engineers Supporting Testing


The more things change, the more they stay the same.  Which group are you in?

/wrk

Are Data Centers the Last Mile of the Twenty-Tens?

In a conversation the other day, there was a random supposition thrown out for discussion as to whether the data center (DC) will become the last mile (LM) of the future.  Most Americans have a choice between service providers for their LM needs.  Typically there is an RBOC or ILEC, usually an MSO (i.e. cable company) and a wireless broadband carrier.  In my home location I have one RBOC (VZ), two MSOs (Comcast and RCN, an overbuilder) and seven wireless broadband choices: ATT, VZ, Sprint, MetroPCS, Leap, Clearwire and T-Mobile.  I realize that the wireless BB offerings vary from EVDO to 4G.  For this post I am going to put aside the wireless BB options, as these services typically come with usage caps, and I am going to focus on the three LM options, which for me are really four because VZ offers FTTH (FiOS) or DSL and Comcast and RCN offer DOCSIS options.

As a side note, the current Administration is really misguided in their desire to block the T-Mobile acquisition by AT&T.  My only conclusion is American business leaders are not allowed to run their businesses without approval of the Administration.  This all goes back to Brinton and the revolutionary process that we are working through.  If AT&T wanted to declare Chapter 11, default on their bonds and give their unions 18% of the company in a DIP process, the Administration would have probably approved the transaction in a day; or maybe if ATT were a solar company they would get the Administration as an investment partner.  T-Mobile will now wither away because their parent company will be unwilling to invest in the US with so much investment required in their home country for FTTH builds and managing through or reserving for European unity issues.  I am saying this as a T-Mo customer from 1998 until recently, when I decided to switch to an iPhone and ATT.  Anyone want to wager on the condition of T-Mobile in 3-5 years?  One other point: having some weak wireless providers in the market is not a benefit to the consumer.

When the internet connection craze of the 1990s started to move from dialup to always-on broadband connections, that is when the LM began to anchor market share for incumbent service providers.  This was made clear to Congress in 1998 when Charles J. McMinn, then CEO of Covad, testified before the U.S. Senate Commerce, Science, and Transportation Subcommittee on Communications and said, “Failing to ensure a competitive environment would condemn the deployment of crucial next-generation digital communication services to the unfettered whims of the ILECs; precisely the opposite of what Congress intended sections of the Telecommunications Act of 1996 to accomplish.”

As I posted earlier here, I see the evolution of the DC and the cloud hype not going quite the way most people expect.  The cloud will be a deflationary trend for the market in the same way smartphones and higher capacity connection speeds were for the mobile market.  I have posted before here and here that the broadband market has clearly seen these deflationary pressures.  As we move deeper into the twenty-tens, will the DC provide a competitive anchor in the manner in which the LM did for incumbent service providers in the 1990s and 2000s?

I see the DC market evolving in three forms.  The mega-warehouse scale DCs that Google, Apple, Amazon, Microsoft and others are building are for the consumer market.  This is the market for smartphones, tablets, computers, DVRs, media libraries, applications, games; this is our personal digital footprint.  That is a big market.  The second market will be the DC for the SMB market, focused on business applications.  I call this the commercial tier that starts at the point at which a company cannot or does not want to own their IT infrastructure, such as data centers.  As I wrote the other day, there are many reasons why a corporation wants to use a private cloud or private DC over the public cloud and a public DC.  I think this market is the smallest of the three markets.

The third market is the F2000 or F5000.  I am not really sure, and it might be as small as the F500 or F1000.  This is the market that wants to use cloud technologies, utility computing and various web based technologies internally, within the control of their own IT assets.  This is the primary commercial market of the future.  Futurists think that within twenty years or so the private DC/cloud market collides with the consumer DC/cloud market.  Maybe it does, maybe it does not, but I know they will not collide in the next five years.

My answer to the question I posed at the start is that I think the DC/cloud will be an important component for accessing and anchoring the consumer market.  Companies will be forced to build their own DC/cloud asset or outsource to a cloud provider.  The example I used in an earlier post was NFLX using the AWS infrastructure, which you can find here.  Over time this strategy could be an issue depending on the deflationary trend of the market.  It will be deflationary if it is easy for anyone and everyone to do, as the chain of commerce shrinks.  Again it goes back to the lessons of Braudel.  In the SMB market, I think the RBOCs/ILECs roll up this space.  It will be the CLEC demise all over again, as pure cloud providers will not be able to support the ecosystem required to sell to the SMB market.  In the F500-2500 market, I think companies will want to retain control of assets for a long time, and this desire is deeply rooted in the IT culture of the ~F2500 market.  The cultural roots of owning and retaining IT assets go back to the introduction of the S/360 mainframe by IBM in 1965.  Behaving in a specific manner for forty-six years is a reinforced habit that is hard to break when corporations are flush with cash and IT is considered a competitive asset when deployed well.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. ** 

Are Cloud Providers the CLECs of the Twenty-Tens?

In a conversation this past week the question was asked: are cloud providers the CLECs of the twenty-tens?  It is an interesting question to ponder, especially if you accept that the “cloud” is the basis for 453,000,546,001 investment ideas on Wall Street.  Moments before the markets opened on Friday December 9, Jim Cramer and Melissa Lee must have said the word cloud ten times on CNBC.  I am clearly on the record as a bit of a cloud bear.

Hypothesizing About the Future of Networking

I am off to CA for the week, so posting will be infrequent, which of late is par for the course.  With earnings season beginning, I would like to offer some thoughts, but most likely they will have to wait till the weekend.  One daily activity that I was consistent about early in my career was keeping a notebook.  I have a storage bin in my basement full of notebooks.  In these notebooks are network designs, customer meeting notes and internal meeting notes across five companies.  I became inconsistent and lost the habit of keeping a notebook in my thirties, but a few years ago I read George Soros’s The Alchemy of Finance.

The book made me remember the value of keeping a notebook: a short list of my day’s activity with reminders to follow up, random ideas to research and a lesson learned or valuable observation to remember.  For the past few months I have been writing this blog somewhat from my daily notebook.  I have been asked many times what SIWDT means.  It was an acronym from my notebook for stuff I would do today.  When I started the blog, I had every intention for it to be an online, public view of stuff I was doing, and I did not care if I made mistakes or got stuff wrong – that is life and all part of learning.  I used to keep a hard copy notebook, and for me there is a personal preference for the tactile aspect of a notebook and pen – but I have adapted to using an online document that allows me to paste links, emails and web content.

I have been spending most of my time looking at and engaging with people who operate service provider networks, enterprise networks and data center networks.  I do not know if my predictions will be correct, but that is why I keep a notebook and that is why I have a blog, which is a distillation of my notebook accessible by all.  My notes help me to remember, to check, to confirm, and to look for supporting evidence of hypothesis A or B, and they help me frame the relevancy of a data point.

When I look at the three groups of networks, (i) service provider, (ii) enterprise and (iii) datacenter, I see the future of the network structured differently than how it is today, and my interactions with the vast majority of technology people lead me to believe they are in a state of denial when it comes to pending changes in network design and what will be important in the future.  This has been a recurring theme in my blog over the past few months.  I have a high level of confidence in my hypothesis because the trends and data points continue to confirm and support it.  I go back to my notebook and check on recurring data points, and when I participate in a series of meetings with disparate entities and I see and hear the same objectives stated differently, I feel confident of direction.  In the past few weeks I have seen network people at an MSO, a PTT, a financial services company, a data center operator and an ILEC all present a desire to reclaim bandwidth in the network.  Primarily, they all want bandwidth they have allocated for network protection to be used actively.  In four of these scenarios the traffic amount was very large.  I remember one meeting at a service provider in which a person diagrammed how they wanted traffic flows to occur across their national network, describing the desire for some flows to stay local and others to move across the backbone.  To me this was a familiar request because I had seen the same from a datacenter person wanting some flows to stay within racks or clusters and not transit backbones.  Same architecture and network flow request – just different geographic challenges.
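As an aside, here is a toy sketch (entirely my own, hypothetical construction) of the placement decision both groups described: classify each flow as local or backbone depending on whether its endpoints share a domain, where a domain is a rack or cluster in a DC and a metro or region in a carrier network.

```python
# Hypothetical sketch: keep flows local when endpoints share a domain,
# send them across the backbone only when they must transit.

from collections import namedtuple

Flow = namedtuple("Flow", "src dst gbps")

def place_flows(flows, domain_of):
    """Split flows into local (stay in one domain) and backbone (transit)."""
    local, backbone = [], []
    for f in flows:
        (local if domain_of[f.src] == domain_of[f.dst] else backbone).append(f)
    return local, backbone

# Toy example: two domains (racks or metros) and three flows.
domain_of = {"a1": "east", "a2": "east", "b1": "west"}
flows = [Flow("a1", "a2", 10), Flow("a1", "b1", 40), Flow("a2", "b1", 5)]
local, backbone = place_flows(flows, domain_of)
print(f"local: {sum(f.gbps for f in local)} Gbps, "
      f"backbone: {sum(f.gbps for f in backbone)} Gbps")
```

The interesting engineering is in the second list; that is the traffic the operators want to schedule onto capacity currently parked for protection rather than leave idle.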

When I look at the technology direction in the datacenter (DC), I see it differently than most incumbent networking companies.  I see a desire to flatten the network; to remove the vertical stacking and leaf/spine network architectures.  VMs need to be software configurable and portable on the network element.  In many aspects, I see the architecture and technical demands emerging from the modern day DC as the new leaders that will influence how networks will be designed in the future.  I view the end of the client/server era as May 2001.  It was in May 2001 that RLX Technologies shipped the first blade server, called the RLX System 324.  That was the event that started the transition from the client/server era to our contemporary datacenter era, in which the large commercial datacenter is possible.  With the rise of concentrated compute, increased storage and virtualization, we have a new technology era – except the network failed to evolve.  We are very much building networks that look like networks from 1995-1999 – the client/server era.  I know this because I can see those designs in my notebooks from the 1980s and 1990s.

With leadership flowing from the DC, that is why we see the rise of applications like Hadoop, which are designed to split up work groups and distribute data-intensive processes across many servers.  I look at this application and think it is only the beginning, and five years from now a variation will be running in service provider networks serving the needs of subscribers for content (e.g. video) and games.  It is not a stretch to equate the original Akamai CDN service to a contemporary Hadoop cluster.  The scale and application were different, but the distribution and division of content was not too far apart in terms of the end result.  As the modern era of the DC emerges, we are now seeing a convergence around common IT metrics, and I dare say we will see a flatter network in the DC and that same evolution will begin in the service provider network in ~5 years.  The DC network leads the enterprise network and ergo it will lead the service provider network.
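As a toy illustration of the split-and-distribute pattern Hadoop embodies, here is a minimal map/shuffle/reduce word count in plain Python; this is a sketch of the programming model only, since real Hadoop distributes each phase across many servers.

```python
# Minimal sketch of the MapReduce pattern: map, shuffle, reduce.
# Hadoop runs these phases across many servers; here each phase is a function.

from collections import defaultdict

def map_phase(chunk):
    # Emit (word, 1) for every word in one chunk of the input.
    return [(word, 1) for word in chunk.split()]

def shuffle_phase(pairs):
    # Group intermediate pairs by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Aggregate each key; each key could be reduced on a different server.
    return {key: sum(values) for key, values in groups.items()}

chunks = ["the network is the computer", "the network failed to evolve"]
pairs = [p for c in chunks for p in map_phase(c)]   # map, chunk by chunk
counts = reduce_phase(shuffle_phase(pairs))         # shuffle, then reduce
print(counts["the"], counts["network"])             # prints: 3 2
```

Swap word counts for video segments or game state and the same divide-and-distribute shape is what I expect to show up in service provider networks.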

The network drivers have changed and continue to change.  Complexity will be pushed out of the network and into the I/O and the CPU.  Moving network complexity to the CPU element is the future because it is the CPU element that is most critical to content and applications.  The I/O is the next point of statistical gain.  The network will become simpler, and the tools of the network operator become the tools that allow applications to be managed and measured and servers and VMs to be clustered.  Scaling software to manage these elements as network capacities grow significantly over the next 2-10 years is where a lot of value will be created.  In all, just some thoughts about the future, and it is possible that I am incorrect.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private.  Just hover over the Gravatar image and click for email. ** 

Networking Hardware in a Virtualized World

Back on April 29, 2011, I wrote one of my first thought pieces on networking; at the bottom of that post I wrote the following:

———————————————————————————————————————————————

What Do Networking Products Look like in the Future?

If we assume the network trends discussed above are true, then networking companies need to be focused on applications and compute to be strong value creators – instead of hardware box suppliers.  As part of this thesis here are five key elements:

Back end analytics: Virtualization and content evolution from short form to long form means that data processing will dominate the future. The network has to enable that function. By the way, that function is called compute.  Enable that function to occur and you are a winner.  This means networking companies need to sell tools in a software form that changes the value proposition from a box seller to compute enabler.  I would be looking to add analytics and processing software tools on the platform.

Modular software

User Definable Platform Tools [Controls]

Ability to Embed Value Add Apps into the Network Element or Call for Virtualized Apps in the Network

Security and Policy Tools

It is all about the network stupid, because it is all about compute.

———————————————————————————————————————————————
Sixty days is a short period of time, but it appears that many elements of the tag line created in that post are increasingly evident and the five themes are becoming stronger.  Network World had an interesting article on what Gartner thought was going on around cloud computing and virtualization.  The article is full of all sorts of scandalous suppositions, such as “The transition to more virtualization-focused software-based security controls, though now filled with uncertainties, is still expected to occur, and though only deployed “in the single digits today” by 2015, Gartner predicts 40% of security controls, such as antivirus, will be virtualized. This will happen, MacDonald added, despite the fact that vendors such as Cisco and Juniper have been dragging their feet because they like to sell “overpriced physical hardware.””

Here is another quote: “Until about two years ago, we were talking about how to do identity management internally,” said Gartner analyst Gregg Kreizman. “Now, it’s about how do we get our arms around the SaaS [software-as-a-service] problem? Or we used to manage the applications but now they’re in the cloud”…so it’s leading to a never-before-asked question, “How about if we have our identities there?”

If I go back and look at the five points I highlighted on April 29, 2011, I suspect that all of these become the really important driving trends for networking companies.  I think it is going to be less about who has the fastest processor and highest throughput.  Platform choices will be made based on virtualization of many of the core functions of the network device.  That is what I mean by modular software.  There is a hardware platform, but it is an appliance, and some appliances are bigger and faster than others.  How the networking software reaches into the virtualized I/O, interacts with the compute element and how it is controlled is what will matter.  Kind of sounds like a smartphone.  The most advanced hardware device did not win the smartphone war.  The device that won the smartphone wars had the best user-defined controls, the best modular software (i.e. app store) and the best analytics, all of which can be found in the iPhone.  If the best hardware device had won, we would all be carrying a Nokia device.

Selling network devices is going to be about selling user-definable controls and modular software (i.e. app support).  That is what end-users and service providers are going to want.  Let the users set application flow controls and SLAs.  Turn up the bandwidth and you get a bigger bill.  Security, policy and backend analytics all fall into this category.  These areas of the network are not going to be solved by selling more hardware, and that is a big problem for legacy vendors.  The compute point will be extended into the network, and traditional networking HW vendors will not like that trend.  For the third post in a row, I will repeat what I wrote last week (a sketch of what user-set flow controls might look like follows the quote):

“#2 Infrastructure Vendors Create 4th Leg: Shared cache concept and I think this is big, but this is the part of the network evolution that I wrote is really two networks: a network for humans and a network for machines to maintain a shared cache.  I believe in point #2, but maybe in a different way.  I really think that more VMs on a blade and I/O virtualization are a big way to achieve statistical gain.  I also think this is going to put pressure on the network element to do something different.  Network vendors that can integrate the network element into the I/O of the compute element are going to be very valuable.  Application delivery controllers (ADCs) become a virtual (i.e. software control) capability that is stretched across the compute/IO/network element.  This will allow it to scale and achieve maximum stat gain.  The networking vendor that figures this out with #6 below = big time winners.”
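To make “let the users set application flow controls and SLAs” concrete, here is a minimal sketch of what such a user-definable control might look like; the class, fields and pricing are entirely hypothetical, invented for illustration and not any vendor’s API.

```python
# Hypothetical sketch: user-definable per-application flow controls with
# usage-based billing.  Not a real vendor API.

from dataclasses import dataclass

@dataclass
class FlowPolicy:
    app: str                      # application the policy applies to
    min_gbps: float               # guaranteed floor (the SLA)
    max_gbps: float               # burst ceiling the user will pay for
    price_per_gbps: float = 2.0   # hypothetical monthly $ per burst Gbps

    def monthly_bill(self, used_gbps: float) -> float:
        # Turn up the bandwidth and you get a bigger bill.
        return min(used_gbps, self.max_gbps) * self.price_per_gbps

policies = [
    FlowPolicy("replication", min_gbps=10, max_gbps=40),
    FlowPolicy("analytics", min_gbps=2, max_gbps=10),
]
for p in policies:
    print(f"{p.app}: floor {p.min_gbps} Gbps, bill ${p.monthly_bill(25):,.2f}")
```

The arithmetic is trivial; the shift is that the control point sits with the user, in software, rather than in a CLI on a box.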

I am traveling the next couple days…no new posts until Friday.

/wrk

* It is all about the network stupid, because it is all about compute. *

 

** Comments are always welcome in the comments section or in private.  Just hover over the Gravatar image and click for email. **

Clouds and the Network

Before we journey into the cloud, there was a report on ISPs throttling P2P connections.  I have been writing about the race to provide all the bandwidth you need via 4G, LTE, FTTx, DOCSIS…it is good to know that this race comes with a governor.  You can have all the bandwidth you want as long as you pay for it.

Network World had an interesting article on the 12 ways the cloud will change the network.  I think there are some interesting thoughts in the article, but if I was asked to frame how large enterprises want to use the web or cloud or whatever term you want to call it, I would start with this December 21, 2006 post from Fred Wilson.

When I talk to enterprise IT leaders, they are saddled with decades of technology.  It is a long tail that the organization supports because it has to in order to maintain service levels.  Most of this technology found its way into the enterprise via the suppliers.  Technology companies innovate and create products based on interaction with customers and then sell them the solution.  The result is an assemblage of architectures, paradigms, etc.  The article starts out with a reference to the mainframe era and the client/server era, but I would tell you that I fully believe in the mainframe and I am not the only one.

The reason I referenced a Fred Wilson blog post from 2006 is to show how easy it had become in a few short years to start a web services company.  This is NOT new news, as you can see from the date of Fred’s blog entry.  When I speak with IT leaders, this is how they want their internal IT service to run.  It should run like a web property.  Work groups or organizations inside the enterprise should be able to request storage, compute, access and apps from IT, and it should all be set up through a portal in a matter of minutes.  It does not take long (half a day at most) to set up a fully functioning web site with commerce capabilities hooked into a number of social media sites.  That is how internal IT services should work.  Customers (i.e. work groups) should be able to request and turn up corporate IT services quickly (point #4 in the NW article).  Unfortunately, those pesky legacy resources get in the way and most organizations are not about to overbuild a new IT structure (point #5 in the NW article).  For simplicity, I am just going to summarize my thoughts on all twelve points, right after a quick sketch of that self-service request.
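This sketch is entirely hypothetical (the catalog, quotas and approval rule are invented for illustration), but it captures the request-through-a-portal experience IT leaders describe wanting.

```python
# Hypothetical sketch of a self-service IT request: a work group asks for
# resources through a portal and gets them in minutes, not weeks.

import uuid

CATALOG = {"storage_tb": 50, "vcpus": 128, "apps": {"crm", "wiki", "commerce"}}

def submit_request(group, storage_tb=0, vcpus=0, apps=()):
    """Validate a work group's request against the catalog and 'provision' it."""
    if storage_tb > CATALOG["storage_tb"] or vcpus > CATALOG["vcpus"]:
        return {"status": "needs_approval", "group": group}
    if not set(apps) <= CATALOG["apps"]:
        return {"status": "rejected", "reason": "unknown app"}
    return {"status": "provisioned", "group": group,
            "ticket": uuid.uuid4().hex[:8],
            "resources": {"storage_tb": storage_tb, "vcpus": vcpus,
                          "apps": list(apps)}}

print(submit_request("marketing", storage_tb=5, vcpus=16, apps=["wiki"]))
```

Everything under quota is turned up automatically; only the exceptions touch a human, which inverts how most enterprise IT tickets work today.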

#1 Cloud as a Third Platform:  I am not in agreement.  I have written in the past that network designs in my work lifetime (SNA era to today) have ebbed between centralized compute and distributed compute.  When WAN bandwidth was expensive, centralized compute worked better.  When LANs proliferated, it was easier to put content and compute local to users (genned many a Novell server in my day).  The evolution of the cloud is just a repeat of that past ebb and flow between centralized and distributed compute, which I expect to continue.

#2 Infrastructure Vendors Create 4th Leg: Shared cache concept and I think this is big, but this is the part of the network evolution that I wrote is really two networks: a network for humans and a network for machines to maintain a shared cache.  I believe in point #2, but maybe in a different way.  I really think that more VMs on a blade and I/O virtualization are a big way to achieve statistical gain.  I also think this is going to put pressure on the network element to do something different.  Network vendors that can integrate the network element into the I/O of the compute element are going to be very valuable.  Application delivery controllers (ADCs) become a virtual (i.e. software control) capability that is stretched across the compute/IO/network element.  This will allow it to scale and achieve maximum stat gain.  The networking vendor that figures this out with #6 below = big time winners.

#3 Bring Your License: I think this is a big issue for a lot of vendors, and not just software companies.  The whole SSL/certificates challenge of who has them, who keeps them current, how devices know who has current certificates and who does not, is all about state and state change.  I do think the network is positioned to solve this problem and that it integrates in some way into the shared cache from #2 above.

#4 IT Organizations Become Internal App Stores: Agree.

#5 Public Clouds More Important than Private Clouds: Disagree.  I think many corporations will still want to retain data and app control in private data centers for compliance and security reasons.  Value will be created in the transition point between private and public clouds.  If I was launching a startup or working at F5, I would look at this transition point.  I have found in my career that being in the middle of the transition point is a higher value creation point than being in the next generation point.

#6 Clouds Source for Big Data: Agree.  I wrote in my CSCO Part II post that “Back end analytics: Virtualization and content evolution from short form to long form means that data processing will dominate the future. The network has to enable that function. By the way, that function is called compute.  Enable that function to occur and you are a winner.”

#7 IT Organizations as Cloud Brokers: I guess so, but it does not seem really significant to me.

#8 Cloud Disrupts IT Work Force: Maybe, but I think these types of prognostications rarely come true. The future is just unevenly divided.

#9 1/3 of Vendors Go Out of Business: Maybe, but then again maybe not.  I think new vendors replace old vendors who cannot adapt R&D to cannibalize their base.  I do not think vendors go out of business because of the cloud.

#10 Vertical Specialization: Agree and that is why I think legacy suppliers get replaced with new suppliers.  Just like the last 50 years.

#11 Personal Cloud: Maybe, maybe not.  The only barrier to me wanting to put all my personal content in the cloud is who has access to it and who owns it.  I think these are unresolved issues, and if you look at the Government seizing domain sites, confiscating servers and issuing subpoenas to track your browsing history, you may think twice about where and how you store your content.  When I read through the NW article, I think it is very plausible that many personal users might have more advanced IT services than their employer if they are using cloud services from AMZN or GOOG.  The question is will they keep them in the cloud in the future?

#12 Cloud Fosters Innovation: Um…yea.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private.  Just hover over the Gravatar image and click for email. **