There was a time in my life when I went to a lot of tech conferences. The years 2010-2016 were pretty busy for me in terms of conferences, booths, speaking, attending, etc. For the most part, I have very little interest in these events; I think they are a huge waste of time outside of the socializing. This week I took a half day on a Friday to attend the New England Peering Forum in Cambridge, MA. It was a small conference with an interesting talk track, and being local, I thought I would try it; the worst-case outcome was that I would leave early and start the weekend.
July 2019 Essay: Naval Lessons From the Lead Up to the First World War
My wife would tell you that for some reason I own far too many books on First World War naval history. Personally, I thoroughly enjoy the history of Europe from the German Wars of Unification (post-1871) through the outbreak of the First World War. Some of my favorite college courses covered Bismarck's treaty system and the debates over who was to blame for the outbreak of the First World War. Over time, I have become familiar with the naval history of the First World War. I think it might have started when I read Robert Massie's book Dreadnought. My fascination with this period of history is both tactical and strategic in nature.
Running to Stand Still
For me, the last several weeks of 2014 had been running to stand still. I made one last sales call before Christmas Eve and then eased into a long break until the New Year. I had some interesting sales calls over the past year. I wrote about the perfect Clayton Christensen, hydraulics versus steam shovels moment here. I learned a lot from that sales call and went back to using a framing meme we had developed a couple of years earlier. I posted that meme on this blog here, seven months ago. In this post I am refreshing that meme and highlighting a few insights I read and thought were meaningful. Most if not all of the mainstream tech media is some technology company's marketing message in disguise; hence it might be entertaining, but it is not informative or thought provoking.
Future Generations Riding on the Highways that We Built
When I was in high school and college, I never thought about a career in networking; it was just something I did because it was better than all the other jobs I could find. I worked at my first networking startup in the late '80s and twenty-five years later, I am still working in networking.
Are You Building a Company or Managing an Engineering Project?
Earlier today I read this post titled "SDN is Not a Technology, It's A Use Case." Shortly after, I found myself in a conversation with one of our lead algorithmic developers. We were discussing recent developments in the deployment of photonics inside the data center and papers we had read from Google researchers. At Plexxi, we have already begun thinking about what our product architecture will look like in 3-5 years. In the conversation with the algorithmic developer, it occurred to me that we sometimes become so immersed in what we are doing on a daily, weekly, quarterly basis that we lose track of whether we are working on a project or building a company.
Fluidity of Network Capacity, Commoditization, DIY Openness and the Demise of the Network Engineer
Be forewarned, this post is a bit of a rant on a variety of subjects that I typically get asked about at conferences or see written up by analysts, sycophants and the self-decreed intelligentsia. The four most frequently asked questions or suppositions are:
- Will network virtualization result in fewer network elements (i.e. switches and routers)?
- The network is ripe for commoditization, so will this commoditization process result in lower margins for network vendors?
- If end users are adopting DIY network devices via open source software, will network vendors still be around?
- Will the network engineer or network administrator still be around in a few years?
This is my attempt to write down the answers. I think I have been answering these questions over the past two years on this blog, but perhaps I was somewhat indirect with my answers. I will try to be direct.
The Bigger Picture – Beyond Incrementalism
I was on a panel (with Chris MacFarland of Masergy and Thomas Isakovich of Nimbus) at the Jefferies technology conference in NYC this past week, when a question from Peter Misek caused me to pause and think about the answer. The question was about the bigger picture of IT change, adoption, the next big thing, etc. I provided an answer to the question and later had time to reflect on it through various airport delays and airplane rides. I think the narrative goes something like this….
SDN, It’s Free just like a Puppy!
I have written both posts and will publish them at the same time because I believe we are conflating many issues when it comes to networking. For example: SDN, ONF, OpenFlow, Overlays, Underlays, Tunneling, VXLAN, STT, White Box, Merchant Silicon, Controllers, Leaf/Spine, Up, Down, Top, Bottom, DIY, Cloud, Hybrid, NetContainers, Service Chaining, DevOps, NoOps, SomeOps, NFV, Daylight, Floodlight, Spotlight to name a few. Both of these posts are intended to be read back to back. I wrote them in two parts to provide an intentional separation.
Dr. Strangelove: Or How I Learned to Stop Worrying and Love SDN
If you have not seen the movie Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, you should. It is an important point of cultural reference in our contemporary history. I have been thinking about all this SDN stuff and various technical and business strategies over the past few weeks. Today, a colleague made reference to the movie Dr. Strangelove in a passing conversation about network design. It occurred to me that there are a lot of humorous parallels between the movie and networking. This is a blog, and I think of it as a place between unfinished thoughts and longer-form content.
Self Similar Nature of Ethernet Traffic
With all the debates around networking at ONS 2013, I found myself reading competitive blog posts and watching competitive presentations from vendors. It was the most entertaining part of ONS and it has certainly invigorated InterOp this past week with a new sense of purpose. Many vendors announced new switches and products ahead of the InterOp show. There has also been a steady discussion post-ONS on the definition of SDN. With all the talk around buffer sizes, queue depths and port densities, I think something has been lost or I missed a memo. I often hear people talk about leaf/spine networks, load-balancing, ECMP and building “spines of spines” in large DC networks.
It is all about Doctrine (I am talking about Networking and that SDN thing)
Last year, I wrote a long post on doctrine. I was reminded of that post three times this week. The first was from a Plexxi sales team who told me about a potential customer who was going to build a traditional switched hierarchical network as a test bed for SDN. When I asked why, they said the customer stated it was the only method (i.e. doctrine) his people had knowledge of and it was just easier to do what they have always done. The second occurrence was in a Twitter dialog with a group of industry colleagues across multiple companies (some competitors!), in which one of the participants referenced doctrine as a means for incumbents to lock competitors out of markets. The third instance occurred at the Credit Suisse Next-Generation Datacenter Conference, when I was asked what will cause people to build networks differently. Here are my thoughts on SDN, networking, doctrine, OPEX and building better networks.
Networking: Time to Change the Dialog
I am off to NYC to present at an SDN gathering hosted by Oktay Technology. I am going to change up my standard pitch deck, so I am curious to see the reaction. I have decided that I have been too nice, and I plan to be more provocative and change the network dialog from speeds, feeds, ports and CLIs to a discussion about the network as a system and orchestrating the network from the applications down, as opposed to the bottom-up wires approach.
Working Thoughts on SDN #5: Infrastructure as a System
I received a few comments and several emails from my last post, which was a surprise. It seems I am always surprised by which posts receive responses; it is not something I am good at predicting. My last post was just a quick post written somewhere over the middle part of the country on a VA flight from LAX. I actually posted it to my blog using MarsEdit while drinking a scotch and glancing at the TV. For all the complaining I do about traveling, contemporary travel is far better than my early career years when I had a choice of a smoking seat. Here is one of the comments from my last post that got me thinking:
Working Thoughts on SDN #3
Yesterday HP announced some SDN products, including a controller. If you had read my SDN Working Thoughts #3 post, then you already knew this data point. I have many questions about this announcement, starting with why they would announce an OpenFlow-based controller when you can get one from Big Switch Networks (BSN). I am sure there is a smart answer, but that is not my point. In addition to HPQ, IBM announced a controller using the NEC controller. My point is there has been and continues to be a lot of controller development and design going on. My hypothesis is that the controller architecture will play a role in where the battle for SDN market share will be won and lost in the coming years, and that simplifying the market into "separating the data plane from the control plane" is not specific enough and does not encompass a broad enough data set. I have written several times before that SDN is more than APIs and reinventing the past thirty years of networking in OpenFlowese.
I think a person's perspective on the controller is directly related to how they see the network evolving and how their company wants to run its business. There is no standalone controller market. If I were to summarize the various views of the controller, I would say that incumbent vendors view a third-party controller as a threat and need to provide a controller as a hedge in their portfolio in case it becomes a strategic point of emphasis. Incumbents really do not know what to do with a controller in terms of their legacy business, which is why they market a controller as some sort of auto-provisioning, stat-collecting NMS on steroids. It will enable you to buy more of their legacy stuff, which for HPQ after today's guidance cut may not be the case. The emerging SDN companies view the controller as a point of contention for network control. All the companies in the market share labeled "other" or "non-Cisco" view the controller as a means to access the locked-in market share of Cisco. In the past, I would have told you that control planes have enormous monetary value if you can commercialize them inside customers. Cisco did this with IGRP, IOS and NX-OS. Ciena did this with the CoreDirector. Sonus failed to do this. Ipsilon failed to do this. Does anyone remember the 56k modem standard battle between US Robotics and the rest of the world who were working on the 56k standard, and who won that market battle? The question over the next year or two is how many controllers become commercialized in the marketplace and what those controllers are doing. I think there is a difference between controllers doing network services and controllers providing network orchestration based on application needs.
The following quote is from Jim Duffy’s article in Network World on HP’s controller announcement:
“HP’s Virtual Application Networks SDN Controller is an x86-based appliance or software that runs on an x86-based server. It supports OpenFlow, and is designed to create a centralized view of the network and automate network configuration of devices by eliminating thousands of manual CLI entries. It also provides APIs to third-party developers to integrate custom enterprise applications. The controller can govern OpenFlow-enabled switches like the nine 3800 series rolled out this week, and the 16 unveiled earlier this year. Its southbound interface relays configuration instructions to switches with OpenFlow agents, while it’s northbound representational state transfer interfaces — developed by HP as the industry mulls standardization of these interfaces — relays state information back to the controller and up to the SDN orchestration systems.”
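To make the southbound/northbound split more concrete, here is a minimal sketch of what a northbound REST exchange with a controller of this type might look like. The endpoint paths, JSON fields and controller address are my own invention for illustration; they are not HP's actual API.

```python
# Hypothetical sketch of a northbound REST exchange with an SDN controller.
# The endpoint paths, JSON fields and controller address are invented for
# illustration and do not correspond to any vendor's actual API.
import requests

CONTROLLER = "https://sdn-controller.example.com:8443"

# Northbound: read the controller's centralized view of the topology.
topology = requests.get(f"{CONTROLLER}/api/v1/topology").json()
print(f"Controller sees {len(topology['switches'])} OpenFlow switches")

# Northbound: hand the controller a policy. Southbound, the controller
# would translate this into OpenFlow flow-mods on the affected switches,
# replacing the thousands of manual CLI entries Duffy mentions.
policy = {
    "name": "steer-finance-app",
    "match": {"vlan": 210},
    "action": {"forward-to": "firewall-pool"},
}
resp = requests.post(f"{CONTROLLER}/api/v1/policies", json=policy)
print("Policy accepted" if resp.ok else f"Policy rejected: {resp.status_code}")
```

The point is simply that applications talk to the controller in terms of policy and topology, and the controller, not the operator, turns that into per-switch OpenFlow state.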
Reading Duffy's description, I think the SDN orchestration system (is that application orchestration?) is more valuable than the controller he describes, but that is a side discussion. I also took the time to read this blog post from HP. Much of this controller architecture discussion has been on my mind, and in my day-to-day work conversations, for the past few months. It seems a day cannot go by without a conversation on this matter. I have no conclusions to offer in this post, so if you are looking for one please stop reading. The point of this post is that controller architecture, controller design and how SDN will evolve are works in process, and I think it is a little early to be declaring the availability of solutions that offer marginal incremental value at best. The evolution of the controller thought process can be summarized at a high level by the following:
- Wired Article from Apr. 2012
- Urs Hölzle's presentation from ONS in 2012
- Google A Software Defined WAN Architecture (81 Slides) from ONS 2012
- Martin’s Blog
From Martin's blog, in the section on General SDN Controllers:
“The platform we’ve been working on over the last couple of years (Onix) is of this latter category. It supports controller clustering (distribution), multiple controller/switch protocols (including OpenFlow) and provides a number of API design concessions to allow it to scale to very large deployments (tens or hundreds of thousands of ports under control). Since Onix is the controller we’re most familiar with, we’ll focus on it. So, what does the Onix API look like? It’s extremely simple. In brief, it presents the network to applications as an eventually consistent graph that is shared among the nodes in the controller cluster. In addition, it provides applications with distributed coordination primitives for coordinating nodes’ access to that graph.”
Regarding ONIX, here's a brief summary of the architecture, but you can read a paper on it here and note who the authors are and where they work (a toy sketch of the NIB idea follows the list):
- Centralized approach. A central controller configures switches using OpenFlow along with some lower-level extensions for more fine-grained control.
- Default topology is computed using legacy protocols (e.g. OSPF, STP, etc.), or static configuration.
- Collects and presents a unified topology picture (they call it a network information base – NIB) to Apps that run on top of it.
- Multiple controllers (residing in Apps) are allowed to modify the NIB by requesting a lock to the data structure in question.
- Scalability and Reliability:
- Cluster + Hierarchy of Onix instances, but NIB is synchronized across all instances (e.g. via a distributed database). For the hierarchical design, there is further discussion on partitioning the scope and responsibilities of each Onix instance.
- Transactional database for configuration (e.g. setting a forwarding table entry), DHT for volatile info (e.g. stats). A lot of focus on database synchronization and design.
- Example of “apps” mentioned in the paper:
- Security policy controller
- Distributed Virtual Switch controller
- Multi-tenant virtualized datacenter (i.e. NVP)
- Scale out BGP router
- Flexible DC architectures like Portland, VL2 and SEATTLE for large DCs
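To make the NIB idea from the bullets above more concrete, here is a toy, single-process sketch of a shared network graph that controller applications read and modify under a coordination primitive. This is my own construction for intuition only; it is not Onix code, and it omits the distribution, DHT and transactional pieces entirely.

```python
# Toy illustration of the "network information base" (NIB) idea: a shared
# graph of network elements that controller applications read and modify
# under coordination primitives. My own sketch for intuition only; it is
# not Onix code and omits distribution, the DHT and the transactional DB.
import threading

class NIB:
    def __init__(self):
        self._lock = threading.Lock()   # stand-in for the coordination primitives
        self.nodes = {}                 # switch-id -> attributes
        self.links = set()              # (switch-id, switch-id) pairs

    def update(self, fn):
        """Apply a mutation while holding the lock, as an app would."""
        with self._lock:
            fn(self)

nib = NIB()

# A topology-discovery "app" populates the graph...
nib.update(lambda n: n.nodes.update({"sw1": {"ports": 48}, "sw2": {"ports": 48}}))
nib.update(lambda n: n.links.add(("sw1", "sw2")))

# ...and a security-policy "app" annotates it.
nib.update(lambda n: n.nodes["sw1"].update({"quarantine": False}))

print(nib.nodes, nib.links)
```

The real system distributes and synchronizes this graph across a cluster of controller instances, which is where most of the hard engineering lives.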
Combining the info from multiple sources, Google uses ONIX for a network OS (see the link to the ONIX paper above). ONIX appears to be Nicira's closed-source version of NOX, and both Nicira and Google use it. NEC has something called Helios that involves OpenFlow, which, as noted above, was OEMed by IBM. I am not sure about HPQ and their recent controller announcement, but I think it serves us well to understand the history of the ONIX architecture.
- ONIX users think that fast failover at the switch level while maintaining application requirements is a hard problem to solve. They think it is better to focus on centralized reconfiguration in response to network failures.
- ONIX synchronizes state only at the ONIX controller
- ONIX wants to use multiple controllers writing to the network information base interface and probably to any table in any switch
Is ONIX a direction for some OpenFlow evolution or a design point? I think one of the early visions for OpenFlow and ONIX was for it to become a cloud OS, which it has yet to become, but others are trying. The evolution of the OF/ONIX vision looks something like this:
- Build a fabric solutions company with software and hardware, which is largely about controlling physical switches with OpenFlow (read the NOX paper here)
- Build a commercial controller (ONIX) and sell it as a platform product to a community of applications developers
- Build a network virtualization (multi-tenancy through overlays…this is the part where Nicira renames ONIX to NVP?) application that happens to embed their controller (formerly ONIX). Control the forwarding table with OpenFlow and every other aspect of overlay implementation using OVSDB protocol talking to OVS (it is largely about controlling virtual switches with only a pinch of OpenFlow).
- Nicira purchased by VMware for their general expertise in SDN and for future applications of the technology assets (VMware today ships a virtualization/overlay solution using VXLAN that does not include any Nicira IP).
It will be interesting over the next year or so to see how the architecture of the controller evolves. I wrote about some of this in the SDN Working Thoughts #3 post. I think we are coming to an understanding that there are variations to just running a controller in band with the data flows. I think we will conclude that having a controller act as a session border control device, translating between the legacy protocol world and the OpenFlow world, is also a non-starter, but this is the current hedge strategy of most incumbent vendors. As the world of SDN evolves, we will look back and see the path to what SDN has become by looking at the failures as proofs along the way. The industry will solve the scaling and large-state questions, but I think the solutions will be shown to exist closer to the hardware (i.e. the network) than most envision in the pure software-only view.
In a prior post I made a reference to an article that was partially inspired by a post by Pascale Vicat-Blanc on the Lyatiss blog. The Lyatiss team has been working on a cloud language for virtual infrastructure modeling. In particular, it generalizes the FlowVisor concept of virtualizing the physical network infrastructure to include application concepts. I am not sure of the extent of their orchestration goals. Do they expect CloudWeaver to spin up the VMs and storage, place them on specific servers, configure the network to satisfy specific traffic engineering constraints, and finally tear down the VMs? I am not sure. With Nicira now part of VMware, what is the future for NOX/ONIX, and will other companies be innovators or implementors?
There is another potential market evolution to consider when we think about the controller. The silicon developers are looking to develop chips that disaggregate servers into individual components. The objective is to make the components of the server, especially the CPU, upgradable. Some people have envisioned this type of compute cluster being controlled by OpenFlow, but I think that is unlikely. Network protocols will be around for a very long time, but putting that aside, the question is what does this type of compute clustering do for the network? How much server-to-server traffic stays in the rack / cluster / pod / DC? I am not sure how much of this evolution will have to do with OpenFlow, but what I do know is that this type of compute evolution will have a lot to do with SDN, if you believe that SDN is about defining network topologies based on the needs of the applications that use the network.
In a true representation of the title, this post is just some working thoughts on SDN with hypotheses to be proven. Comments and insights welcome…
/wrk
SDN Controllers and Military Theory
A long-time reader of my blog will know that I enjoy analogies between technology and military strategy and doctrine. It is with this in mind that a colleague sent me the following link to a post on the F5 blogs about centralized versus decentralized controllers, in which there is a reference to a DoD definition of centralized control and decentralized execution. There is a lot going on in the post, and most of it is code words for "do not change the network," which can be inferred from this quote: "one thing we don’t want to do is replicate erroneous strategies of the past."
My first question is what erroneous strategies would those be? Military or technical? I think I understand what the message is in this post. It is found in this paragraph:
The major issue with the notion of a centralized controller is the same one air combat operations experienced in the latter part of the 20th century: agility, or more appropriately, lack thereof. Imagine a large network adopting fully an SDN as defined today. A single controller is responsible for managing the direction of traffic at L2-3 across the vast expanse of the data center. Imagine a node, behind a Load balancer, deep in the application infrastructure, fails. The controller must respond and instruct both the load balancing service and the core network how to react, but first it must be notified.
Why does a single controller have to be responsible for managing the direction of traffic across the vast expanse of the data center? Is there a rule somewhere that states this or is it an objective? I think there can be many controllers. I think controllers can talk to controllers. Why does the controller have to respond and instruct the load balancing service? That is backwards. I would say this is exactly the model that people who build networks want to move away from. Would it not be easier to direct application flows to a pool of load balancers?
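As a thought experiment, here is a minimal sketch of the alternative I have in mind: application flows are simply spread across a pool of load balancers, and a controller that does not own a destination hands the request to a peer controller. Every name and structure below is invented for illustration, not taken from any product.

```python
# Thought-experiment sketch: no single controller instructs every device.
# Flows are spread across a pool of load balancers, and each controller
# defers to a peer for destinations outside its own domain. All names and
# structures are invented for illustration.
import hashlib

LB_POOL = ["lb-1", "lb-2", "lb-3", "lb-4"]

def pick_load_balancer(flow_id: str) -> str:
    """Deterministically spread application flows across the pool."""
    digest = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    return LB_POOL[digest % len(LB_POOL)]

class DomainController:
    """Responsible for one slice of the network; talks to peer controllers
    rather than managing the 'vast expanse' alone."""
    def __init__(self, domain, peers=None):
        self.domain = domain
        self.peers = peers or {}

    def place_flow(self, flow_id, dst_domain):
        if dst_domain == self.domain:
            return pick_load_balancer(flow_id)
        return self.peers[dst_domain].place_flow(flow_id, dst_domain)

pod_a = DomainController("pod-a")
pod_b = DomainController("pod-b", peers={"pod-a": pod_a})
print(pod_b.place_flow("tenant42/web->db", "pod-a"))   # delegated to the peer controller
```

The point is not this particular scheme; it is that "centralized control" does not have to mean a single choke point.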
In terms of the military doctrine analogy, a complete reading of John Boyd is in order. I have mentioned Boyd in a few prior posts here and here. The most applicable military theory to SDN is Boyd's O-O-D-A loop. Anyone who worked with me at Internet Photonics or Ciena in the prior decade knows that I have used this model in meetings to illustrate technology markets and sales operations. Here is the definition of the O-O-D-A loop from Wikipedia:
The OODA loop (for observe, orient, decide, and act) is a concept originally applied to the combat operations process, often at the strategic level in military operations. It is now also often applied to understand commercial operations and learning processes. The concept was developed by military strategist and USAF Colonel John Boyd.
Readers should note that I have a book on Boyd as well as copies of some of his presentations, and he always used the dashes in the acronym. If you are looking for a military theory to apply to SDN and controller architecture, it can be found in Boyd – not in joint task force working documents. SDN is not about central control; it is about adapting to the dynamic changes occurring in the network based on the needs of the application. We are building systems — not silos.
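For intuition only, here is how I would map the O-O-D-A loop onto a controller's work loop; every function and the `network` object below are placeholders of my own, not any product's API.

```python
# Mapping Boyd's O-O-D-A loop onto a controller's work loop. Illustrative
# skeleton only; every function and the `network` object are placeholders.
import time

def observe(network):
    """Gather current state: topology, link utilization, application demands."""
    return {"links": network.link_stats(), "demands": network.app_demands()}

def orient(state):
    """Interpret the observations against what the applications need."""
    return [link for link, util in state["links"].items() if util > 0.8]

def decide(hot_links):
    """Choose topology or path changes that satisfy the applications."""
    return {link: "reroute" for link in hot_links}

def act(network, plan):
    """Push the computed changes back into the network."""
    for link, action in plan.items():
        network.apply(link, action)

def ooda_loop(network, period_seconds=5.0):
    while True:                          # never "done"; the value is continuous adaptation
        state = observe(network)         # Observe
        hot_links = orient(state)        # Orient
        plan = decide(hot_links)         # Decide
        act(network, plan)               # Act
        time.sleep(period_seconds)
```

The loop never terminates, which is the point: the controller continuously re-observes and re-decides as the applications and the network change.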
/wrk
SDN: What it Means, Max Hype Level Achieved and Code Words
The SDN washing is reaching new heights. Apparently, I missed the memo that all companies were doing SDN two years ago. The WSJ published a piece on SDN. With all the SDN washing and the WSJ article, I think we might have hit the peak (i.e. inflated expectations) of the Gartner hype cycle for SDN. As another example, here is an email that is representative of about five emails I have received over the past month requesting the same. The ironic aspect of this email is that readers of my blog know I have been posting the answers to these questions for over a year.
——————–
Hi William,
I work at XXXX with XXXX, I think we may have actually met [in the past]. I came across Plexxi in my readings on SDNs. I am trying to work the implications across the value chain of SDN, for instance where will it be adopted, what does it do to traditional networking world, what are the implications for chipsets, modules etc. Was keen to hear your views on it and get more familiar with Plexxi; any way we can try a call soon? Thanks.
Regards,
———————–
My view of SDN is different from most. That was the subject of my post prior to Structure. When I look at how SDN is defined today, I call it the neoclassical definition or view of SDN. Neoclassical SDN is concerned with the separation of the control plane from the data path and how there can be APIs that allow a central controller to inject forwarding decisions. I think there is little if any chance of mainstream adoption for neoclassical SDN. What will happen is new networks will be built, and they will be built using some aspects of neoclassical SDN, but the components of the solution and the application of those components will be different from what is generally available at present. My definition of SDN is:
- Computation and algorithms, in a word, math. That is what SDN is.
- Derive network topology and orchestration from the application and/or the tenant. This is how SDN is different from switched networks.
- Ensure application/tenant performance and reliability goals are accomplished. This is part of the value proposition.
- Make Network Orchestration concurrent with application deployment. This is what SDN is for.
- A properly designed controller architecture has the scope, the perspective and the resources to efficiently compute and fit application instances and tenants onto the network. This is how SDN will work (a toy sketch of this fitting computation follows the list).
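To show what I mean by computation, algorithms and math, here is a deliberately naive sketch of fitting application demands onto a two-spine fabric by residual capacity. A real controller would solve a much richer optimization; the data and the greedy approach are purely illustrative.

```python
# A deliberately naive illustration of "SDN is math": fit each application
# demand onto the candidate path with the most residual capacity. A real
# controller solves a far richer optimization; data and method are toys.
link_capacity = {("leaf1", "spine1"): 40, ("leaf1", "spine2"): 40,
                 ("leaf2", "spine1"): 40, ("leaf2", "spine2"): 40}
residual = dict(link_capacity)

# Candidate leaf1 -> leaf2 paths through either spine.
paths = [[("leaf1", "spine1"), ("leaf2", "spine1")],
         [("leaf1", "spine2"), ("leaf2", "spine2")]]

def place(demand_gbps):
    """Greedily pick the candidate path with the most headroom."""
    best = max(paths, key=lambda p: min(residual[l] for l in p))
    if min(residual[l] for l in best) < demand_gbps:
        raise RuntimeError("no path can satisfy this tenant's demand")
    for link in best:
        residual[link] -= demand_gbps
    return best

for app, demand in [("tenant-a/web", 10), ("tenant-b/analytics", 25), ("tenant-c/backup", 10)]:
    print(app, "->", place(demand))
```

Even this trivial version makes the point: the placement decision is a computation over global knowledge of the network, not a box-by-box configuration exercise.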
All of the SDN articles and emails coming out of the research world are really code words for a soft economy. Business is soft, so let's blame and hype SDN. Much of the recent SDN talk is really dancing around the underlying point of SDN. SDN is not about network programmability. SDN is not about APIs. SDN is not really about retrofitting legacy networking equipment with APIs and replicating old technologies in new wrappers. The problem that has been growing over the past 18 years (note this is the era of switched networks) is the problem of IT OPEX. There is no Moore's Law for OPEX in the world of IT. Companies have dealt with it through several strategies, one of them being outsourcing. The thinking was that companies could cap, or at least predict, their IT OPEX by handing their networks over to firms who would manage their network as well as others', thus allowing the outsourcing firms to get to scale. The companies outsourcing get a fixed operating cost line, reduce headcount and create a lean organization. Good in theory, but the result has not been as expected – it has been disappointing. This is clearly my view, validated only by my in-person observations of the market.
When I look at how SDN will develop and I look at companies like Nicira, I understand the implications for the network. The problem being addressed is the problem of OPEX and, in part, CAPEX in terms of customers asking: do I need to upgrade now or can I wait? In terms of SDN, the inverted Moore's Law curve for IT OPEX is the driving force behind the concept of SDN. SDN is one possible tool to reduce OPEX. The interesting part I have found over the past 6-12 months is that IT leaders are exploring SDN as a means to deploy new networks, but differently, and for the first time in more than a decade they are thinking about tackling the OPEX curve; SDN holds promise as a strategy or tool toward this objective. As we close out the era of switched networks (1992-2012) and begin the next era of networking, part of the market will go for IT outsourcing to the cloud provider. I still see this market segment (the outsourcing group) as the broader SMB market served by cloud providers. The high end of the market will build and deploy their own data centers using the post-neoclassical definition of SDN. I think the real SDN market develops in two segments: cloud providers and high-end enterprises deploying a hybrid utility compute DC. My timeline looks like this:
- 2012 is the proof of concept (POC) year for SDN. In 2012 we will see the first SDN proof of concept networks in the deployment range of 250-1000 10G servers.
- 2013 is the year that at scale SDN builds start transitioning from POCs. I think we will see SDN networks that scale up to 10,000 10G servers by mid to late 2013.
- 2014 is the first big bang year. 2014 will be the year that we go to hyperscale and build DCs with 100k 10G servers, minimal hops, and no OSR in a post-neoclassical SDN world with controllers.
As always, this is just a blog and it is very possible that I am incorrect.
/wrk
The Mendoza Line for Networking Vendors
Wired posted one of those articles that again makes me ask what year is it over there? The article describes a content deep strategy by a number of web properties. If you have been reading this blog over the past year this is not news despite the “secret deals.” Here are a few excerpts:
July 27, 2011: “The internet is no longer built vertically to a few core peering points. Content is no longer static. State is now distributed across the network and state requires frequent if not constant updating. Content is no longer little HTML files, it is long form content such as HD video, which other people are calling Big Data. AKAM created amazing solutions for Web 1.0 and into Web 2.0 when the web was architected around the concept of a vertically built over subscription model. AKAM distributed content around the network and positioned content deeper in the network. That is not the internet of today.”
August 8, 2011: “A year ago most service providers (SPs) who thought they had an OTT video problem viewed YT as the biggest problem, but as a problem it was small. NFLX has overtaken YT traffic. From a SP perspective, there are several ways to handle the problem of OTT video or user requested real time traffic. (i) SPs can ignore it, (ii) SPs can meter bandwidth and charge consumers more for exceeding traffic levels, (iii) SPs can block it or (iv) SPs can deploy variations on content deep networking strategies.”
October 20, 2011: “[Cisco] acquired a private company in which they had previously made an investment. This is just further evidence of a content deep networking trend and that CDNs can easily be built by service providers.”
When I wrote the April 29, 2012 post about the “Compute Conundrum for Vendors” I was specifically thinking about being in the position of supplying networking equipment to service providers within the evolving trend of content deep networks.
There are really two types of suppliers of networking equipment: Group A are the suppliers who are in contact with the compute element (this is a small group) and Group B (this is a large group) are the suppliers who are not in contact with the compute element. With content pushing deeper into the network (i.e. the internet edge land grab), it will become mandatory to have the ability to direct application workflows from the compute elements that are distributed throughout the network. This is the Mendoza Line for networking vendors. The question is how will this be done? Will the application flow function reside in the transport and switching portion of the traditional service provider network or in the data center, which the central office (CO) will become over the next decade? I already posted the thesis that data centers will be the last mile of the twenty-tens, and clearly I believe that the vendors who are integrated at the compute/VM level will win and those who are not will lose.
I now submit that being in the B group will be increasingly difficult, and that these edge networks described by Wired, which I called the content deep networks, are the exact point where SDN will find its initial hold in the service provider portion of the network. As supporting evidence of this thesis I will submit a couple of points and let you decide, as I could be incorrect.
1. Google WAN. See this NW article or read my Compute Conundrum post.
2. DIY content… Before the Google presentation at ONS, I wrote (on March 11, 2012) about what Google said at OFC 2012. That post is here. “I was very interested to listen to Bikash Koley’s answer to a question about content and global caching. He referenced the effect of Google’s High Speed Global Caching network in Africa. This network built by Google is not without controversy. Here is a link to a presentation in 2008 about the formative roots of this network. My point is I increasingly see service providers and content owners taking a DIY approach to content, and these providers do not have to be the size of Google.”
Side note: the Google global cache presentation is from 2008, so the internet edge land grab has been going on for ~5 years, and I would say the thinking around and maturity of the solution are further along than most people realize.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **
Global Cloud Networking Survey
I think the latest Cisco Global Cloud Networking Report has flown under the radar. I read a few articles on it, but for the most part I think it was ignored with all the InterOp news. It is a really interesting report and summarizes a lot of what I see and hear in the world of IT. I have been posting about the frustrations of IT leaders with the state of the network. Despite what the skeptics have written about SDN, when you talk to IT leaders and then read a report like Cisco’s Global Cloud Survey, one conclusion is possible: the network is f’ing broken. Two additional corollaries: (1) all of us in the networking industry have had a hand in creating the mess and (2) the first people to innovate and fix the network will be the winners. The networking industry is divided into two groups: those perpetuating the mess and those who are trying to fix it. Here are some choice quotes from the report:
- Almost two in five of those surveyed said they would rather get a root canal, dig a ditch, or do their own taxes than address network challenges associated with public or private cloud deployments.
- More than one quarter said they have more knowledge on how to play Angry Birds—or know how to change a spare tire—than the steps needed to migrate their company’s network and applications to the cloud.
- Nearly one quarter of IT decision makers said that over the next six months, they are more likely to see a UFO, a unicorn or a ghost before they see their company’s cloud migration starting and finishing.
- More than half of IT decision makers said they have a better overall application experience at home with their personal networks than they do at work.
Thinking about unicorns and six months, I spent some time listening to a Lightreading webinar on Evolving the Data Center for Critical Cloud Success. On slide 11 the presenters have “A Facts-based Reality Check for Cloud Delivery,” which includes the following facts about the “Largest Live Test Bed in The Industry:”
- 6 Months of Planning
- 8 Weeks of On-Site Testing
- 25 Test Suites Across DC, Network and Applications
- $75M of Equipment in the Test
- 80 Engineers Supporting Testing
The more things change, the more they stay the same. Which group are you in?
/wrk
Notebook 03.13.12: Datacenter, Equities and HPC for Wall Street
I have been sans posts for the last week. I did write a long post on the evolution of datacenter and broader network architecture. I think it came in around 2.2k words, which is long for a blog. I shared it with a few colleagues and it stirred up some interesting responses; the consensus was to wait and post after integrating reviewer comments. On a rainy Saturday, I thought some bookkeeping was in order:
Tech Field Day Networking Event: I spent part of the past week listening to presentations from a number of companies. I really did not see anything new, just more of the same. I hope ONF 2012 is not the same as ONF 2011. Presenters should be banned if they reuse an old presentation. Back to TFD, I thought the event was interesting and clearly geared towards a technical audience; I am really disappointed in the vendors – not the TFD team. Innovation seems dormant, and no amount of science projects and Crossing the Chasm buttering will take the place of bold innovation. I will take another look around on Monday in NYC, but ODM hardware + OpenFlow does not = innovation.
RIMM: What is there to say that I have not said before? Prior posts are in the RIMM category. Missing a product cycle really sucks. I thought the other day that maybe they should just focus on the keyboard market for smartphones and users who want to use BBM. It looks like it will go the way of Palm.
APKT, BRCM and FNSR: I am recently long all three of these stocks. If there is really an uptick in orders for optical stuff, I think inventories are lean and FNSR should work. I like the product cycle at BRCM. APKT is the biggest risk. If NA CAPEX is better, APKT should work and I think there are a lot of lazy shorts in the stock right now.
Old Trading Instincts: Back and forth with a few buysiders during the past week has led me to trim overall gross exposure to equities. It feels like correlations are breaking down and, after the best Q1 since 1998, I feel better being less exposed to equities because the US economy in Q1 2012 does not feel like the US economy in Q1 1998 or even at the end of Clinton's first term.
I am off tomorrow to NYC for the Flagg 2012 High Performance Computing Linux for Wall Street conference on Monday. Looking forward to seeing many of you at the show; I will be in town for drinks Sunday night.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **
SNA to Client/Server to Switching to a New Era
I have been posting frequently about networking. Perhaps my posts have been too piecemeal. This is an attempt to frame my thoughts over a broader period of time: thirty-two years. When I look back over this period, I see the rise and fall of four great networking waves. I believe we are at the end of the switching era. A new era is already taking hold despite many people missing the memo. In this next wave, there will be the creation of many new companies, market share will shift again and, like the prior periods, this new era will witness the dismantling of the networking scaffolding erected during the prior era, which I call the switching era.
| Wave | Time Period | Foundations |
|---|---|---|
| SNA Era | 1974-1994 | Mainframe, PU_Type 2 Node |
| Client/Server Era | 1981-1995 | IBM PC, Novell, Hubs, Bridges, 10BaseT, Routers |
| Switching Era | 1995-2011 | GigE, ASICs to MS, Cut-Through and Store/Forward |
| New Era | 2012-2027 | 10GE LOMs, VMs, NAS, App Clusters, Optics |
Frame 1: SNA Era
In 1980, the computer industry was dominated by International Business Machines, or IBM. Every major corporation had an IBM mainframe computer worth millions of dollars in its data center. The data center was typically referred to as the glass house, as it was a restricted facility, with special power, cooling, and glass windows that enabled the mere mortals of the corporation to walk around it, but not gain access. In 1974, IBM introduced a technology called Systems Network Architecture, or SNA. SNA was one of the first major initiatives at developing networking technology to connect mainframes and mid-size computers using traditional telephony lines supplied by another giant corporation, American Telephone and Telegraph, or AT&T. SNA enabled corporations with IBM computers to extend the reach of those computers to other users at remote locations around the world through a process called networking. Computers that were networked were able to communicate with other computers as well as allow users to access computers and information at distant locations.
By 1980, IBM was building multi-host (i.e. mainframe) networks with parallel paths and mesh architectures that could support multiple sessions (i.e. users). What had started as a solution to provide simple point-to-point network connections had begun to evolve toward sophisticated network designs that could route around failures and handle increased traffic loads. The simple need to connect two remote computers evolved into what is generally referred to as a network of many computers and many paths. The importance of SNA was summarized by the well-known SNA consultant Anura Guruge (who is a friend and now semi-retired) in January 2004: “SNA’s remarkable twenty year reign over IBM sector networking is now finally and unequivocally at an end. Most IBM mainframe ‘shops’, though still inescapably reliant on mission-critical SNA applications, are in the process of inexorably moving toward TCP/IP centric networks – in particular those that combine TCP/IP and Internet [ie. Web] technology to create what are now referred to as ‘intranets’ and ‘extranets’. Mainframe resident SNA applications will, nonetheless, continue to play a crucial role in successfully sustaining worldwide commerce well into the next millennium, way past the Year 2000.”
The seminal achievement of SNA in the late 1970s to mid 1980s was to make minicomputers viable from an enterprise perspective. Enterprise computer networks were completely dependent on the mainframe computers supplied by IBM or one of the minor mainframe suppliers. SNA was a proprietary solution implemented by IBM, but its specifications were openly published. This enabled the suppliers of minicomputers such as DEC, Wang, Prime, Data General, Apollo, and others to use SNA technology to deploy their systems into the network. Open access meant that competitors as well as providers of non-competitive systems had access to the technical implementation of SNA and thus could use SNA to add their computers to an SNA network. The minicomputer vendors implemented a PU_Type 2 node capability on their computers, which enabled these machines to seamlessly interact with mainframe computers as well as each other. This was the genesis of distributed computing. It was a seminal moment that gave birth to the commercial network within the enterprise market and started the progression toward the client/server network. This occurrence may not have had the dramatic overtones of Gary Kildall flying his plane while IBM waited in his lobby to license CP/M for the personal computer – but it is significant because the networking of computers started with IBM.
Frame 2: Client/Server Era
The client/server architecture is the reason Cisco Systems exists as a company today. The general population may think of Cisco in the context of the internet, but Cisco Systems was built by providing corporate America, and most of the industrialized world, with multi-protocol routers and switches that link client/server local area networks with back office mainframe and mid-range computing platforms. The introduction of the PC created the necessity for the server, which would drive the development of the client/server network. Cisco is the company that provided the routers to connect the computer networks of corporate America, and with this accomplishment it achieved a deeper position of strategic importance. By plan, accident, or both, Cisco's routers became the control plane for the enterprise network.
In the early days of the PC industry, computer hard drives and storage space were expensive. It was technically easier and financially superior to network PCs in the same way mainframe computers and terminals were networked. The server would act as a host (i.e. mainframe) and users (i.e. personal computers) would access applications and data on the server over the network. To create the local area network (i.e. LAN) of PCs and servers, networking technologies such as Ethernet, Token-Ring, and AppleTalk were deployed. Ethernet was developed at Xerox's Palo Alto Research Center (PARC). Token-Ring was developed and introduced by IBM. For much of the late 1980s and into the early 1990s, Token-Ring and Ethernet were the competing technologies for the deployment of LANs. Today, Ethernet is the most widely deployed LAN technology and will soon dominate the Wide Area Network (WAN).
The development of applications to reside on servers by companies such as Novell enabled the desktop computer to access a central computer (i.e. server) that housed data and applications. To create a LAN, companies needed to provide connectivity from each desktop computer to the location of the server. The architecture for networking a client/server network came from the heritage of the SNA network. IBM deployed mainframe computers in a central location (i.e. the glass house) and linked them to remote terminals (i.e. personal computers). The client/server LAN deployed servers in a central location, called a data center or wiring closet, and networked each personal computer to the server using Ethernet or Token-Ring. Suddenly, there was a market beyond the desktop computer, and it began to spawn many new companies. In the early to mid-1980s the first networking companies began to emerge. The first explosive growth market for networking companies was not for routers or bridges – it was for hubs. Hubs were bridging platforms that enabled corporations to rewire their office facilities and provide each desktop computer access to the central locations where the servers resided. The mass introduction of the PC into corporate America began in the early to mid 1980s. At the time, LANs were new. Buildings had to be wired to support the personal computer and products installed to provide connectivity. The two dominant companies to emerge in this market segment were Cabletron and Synoptics. The second tier players were Ungermann-Bass, Chipcom, Bytex, 3Com, DEC, and a host of minor players. The wiring of corporate America was the first great networking market to find its roots directly linked to the PC.
Frame 3: Switching Era
The switching era finds its roots in a company called Kalpana, full-duplex Ethernet and the development of Gigabit Ethernet. In 1990, Kalpana introduced an Ethernet networking device that used cut-through switching technology to quickly parse the header of the Ethernet packet and make a fast forwarding decision. Soon store-and-forward ASICs would equal cut-through switching speed. ASICs would evolve to what we know today as the merchant silicon phase, with networking vendors buying chips from Broadcom or Intel.
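To illustrate the distinction, here is a conceptual sketch of the two forwarding decisions: cut-through acts as soon as the destination address at the front of the frame has arrived, while store-and-forward buffers the whole frame and can verify the frame check sequence before looking up the egress port. It is a teaching toy, not how any ASIC is implemented.

```python
# Conceptual contrast between cut-through and store-and-forward switching.
# A teaching toy only; real switching ASICs do not work like Python code.
import zlib

FORWARDING_TABLE = {"aa:bb:cc:00:00:02": "port3"}    # destination MAC -> egress port

def cut_through(frame: bytes) -> str:
    """Decide as soon as the destination MAC (the first 6 bytes) has arrived;
    the rest of the frame streams through behind the decision."""
    dst = frame[:6].hex(":")
    return FORWARDING_TABLE.get(dst, "flood")

def store_and_forward(frame: bytes):
    """Buffer the entire frame, verify its frame check sequence, then forward."""
    payload, fcs = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != fcs:
        return None                                   # corrupted frames are dropped
    return FORWARDING_TABLE.get(payload[:6].hex(":"), "flood")

frame = bytes.fromhex("aabbcc000002") + bytes(60)     # toy frame: dst MAC + padding
frame += zlib.crc32(frame).to_bytes(4, "big")         # append the FCS
print(cut_through(frame), store_and_forward(frame))   # both print "port3"
```

The trade-off is latency versus error containment: cut-through starts forwarding before it can know the frame is corrupt, while store-and-forward pays the full serialization delay in exchange for checking first.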
The era was dominated by M&A in the early years: Crescendo, Kalpana, Grand Junction, Synernetics, Xedia, Xylan, and Prominet are a few examples of M&A deals. Synoptics and Wellfleet merged and Cabletron broke up. Cisco acquired IBM's networking division in 1999, and thirteen years later I expect IBM to be active in building a new networking division, as they have a start with XIV and BNT.
I inserted a few charts to highlight the switching era. The first is just a collection of network diagrams taken from various presentations, web sites and the internet archive from 1991 to 2011. My conclusion is a slow rate of change. I also inserted three diagrams that show various startups during the switching era by technology. I have not updated them since Dec 2007.
Frame 4: New Era
In my opinion we have reached the end of the 15-20 year cycle called switching. There are many reasons for me to reach this conclusion: 10G LOM, virtualization, Big Data, 100G, NAS, etc. These technology drivers are the engines behind the need to scrap the old approaches. We have reached a point of Moore's Law exhaustion. The new era of networking is going to be built on a different framework of application, compute, storage and network assumptions. As with prior eras, the compute element is a driver. The network must become extensible in the same way compute, applications and storage are evolving. Rigid network structures that scale vertically become passé. The network becomes a dynamic and extensible resource. That is the future; each week I meet more people searching for this type of network, and it is emerging from the data center.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **