How Does Change Occur?

I was having a DM conversation (140 characters at a time) the other day with a network architect.  We were discussing the reluctance of networking people, especially at the CxO or leadership level, to do something different.  Personally, I have heard from ~50 people at the leadership level over the past 18 months who state they want to do something different with their network infrastructure.  The network has not changed in twenty years, and now the time has come to change it.  What is the result of all this pent-up desire to do something different?  More network incrementalism, at least in the near term.  The DM conversation was about getting network people to do something different.  Why do people say they want to make big changes and then fail to seize the day?  That is the subject of this post.
Continue reading

Talking SDN or Just Plain Next Generation Networking…

Tomorrow in SF, I will be talking about SDN, or as I like to call it, next generation networking, at the Credit Suisse Next Generation Data Center Conference.  It will be a panel discussion and each participant has a few minutes to present their company and thoughts on the market adoption of SDN.  Explaining the next twenty years of networking in fifteen minutes is a challenge, but I have been working with a small slide deck that helps make the point.  Here are those slides (link below).  I posted a variation of those slides a few weeks ago when I used them in NYC, but I tailored this deck to the strict time limit of 15 minutes.  I will post more frequently after Plexxi is done at NFD #5 this week and around the time of OFC.

CS Next Gen DC Conference

 

/wrk

Notebook 02.19.13: Merchant Silicon, Insieme, Controllers, Portfolio and stuff…

Before the week is over, I am going to write a blog post entitled “Imagine that SDN Did Not Exist, What Would We Do with All the Free Time?”  That is something I will find the time to crank out either on my current flight to SFO or on the return flight.  Until then, here are a few things I have been collecting in my SIWDT Evernote to write about.
Continue reading

Networking: Time to Change the Dialog

I am off to NYC to present at an SDN gathering hosted by Oktay Technology.  I am going to change up my standard pitch deck, so I am curious to see the reaction.  I have decided that I have been too nice; I plan to be more provocative and change the network dialog from speeds, feeds, ports and CLIs to a discussion about the network as a system, orchestrated from the applications down – as opposed to the bottom-up wires approach.
Continue reading

Ich bin ein SDNer!

I am an admirer of John Kennedy and I think he was a wonderful speaker, especially with gifted writers such as Ted Sorensen.  Kennedy’s administration changed social culture in America.  It ended the era of the fedora.  The White House went from functionary to glamorous.  America transitioned from the antiseptic 50s to the dynamic 60s.  The country embraced big aspirations, from the moon to human rights.  I included a picture of JFK stopping by a newsstand in 1957.  It was taken by the father of a family friend, six years before Kennedy’s famous speech in Berlin.  I saw the picture again a few weeks ago at a show for the photographer in Boston, and it made me wonder how many people were Berliners in 1957.
Continue reading

Notebook 12.18.12

The last few months at Plexxi have gone by at a blistering pace, and it has impacted my time to write.  Writing is important to me, as it is my method of thinking in depth without the interruptions of email, calls, texts and tweets.  Outside my window a Biblical rain is falling and I have Zac Brown playing.  As with past notebook entries, here is a collection of topics I have been reading and thinking about over the past few weeks.
Continue reading

Working Thoughts on SDN #5: Infrastructure as a System

I received a few comments and several emails from my last post, which was a surprise.  It seems I am always surprised as to which posts receive responses; it is not something I am good at predicting.  My last post was just a quick post written somewhere over the middle part of the country on a VA flight from LAX.  I actually posted it to my blog using MarsEdit while drinking a scotch and glancing at the TV.  For all the complaining I do about traveling, contemporary travel is far better than my early career years, when I had a choice of a smoking seat.  Here is one of the comments from my last post that got me thinking: Continue reading

Working Thoughts on SDN #4: Relieving Workload Placement Constraints

I have been on the road making sales calls for Plexxi.  I am still amazed at the effort being put into the discovery of SDN use cases.  I was going to tweet the other day that I had found the perfect use case for SDN – “fixing the network” – but self-restraint won the day.  There is a lot of FUD in the system presently, and it leads to statements such as “the most common use cases for SDN will be in network monitoring and provisioning as an added value.”  If that is SDN, I am going home.  Luckily, I have done 60+ sales calls selling solutions that encompass elements of SDN, and I can say with 100% accuracy that no potential customer has ever taken a few hours out of their day to talk to me about network monitoring and provisioning.  Maybe network monitoring and provisioning are compelling use cases for SDN; I am just stating that I cannot secure a time commitment from a potential client to talk about that subject matter.  I am not critical of people thinking that SDN is about network monitoring, because I know that if enough people say that is what SDN is, and keep repeating it, many people will just assume it is so.  The SDN Central site lists a number of SDN use cases, but these are high-level, thematic applications, and I have found that starting a customer presentation at this level triggers a Gong Show moment.  What customers do take time out of their day to talk about is fixing their network.
Continue reading

Working Thoughts on SDN #2

I was in CA earlier in the week to attend the JP Morgan SDN conference in Palo Alto.  To start, I would like to thank Rod and the JPM team for the invite.  A number of companies presented (Arista, Cisco, HP, Vyatta, Big Switch, Ciena, Embrane, Vello, Cumulus, Cyan and Brocade) and a number of startups were in the audience: Insieme, Plumgrid and Plexxi.  Rather than reprint my conference notes, I thought it might be more useful to describe what I thought were the interesting discussion points, debates and themes from my perspective as an observer.  My observations below paraphrase the various speakers; they are not exact quotes, but thematic summaries of the discussions.

Attendance: Most of the startups or young companies sent their CEOs to speak.  This includes Arista, Vyatta, Big Switch, Embrane and Cumulus.  Juniper was not present.  Cisco sent a Sr. Director-level person, while Brocade and HPQ sent CTO types.  I am not sure if there is anything to read into that, but I noted who was participating.

Openflow: There was a lot of discussion around the next version of OpenFlow.  Dan Pitt (Executive Director of the ONF) and Guru Parulkar (Executive Director and Consulting Professor at Stanford) both spoke on the direction of OpenFlow and the ONF community.

Controller Market Share: Just listening throughout the day, I got the sense that Big Switch (BSN), HPQ, CSCO and JNPR (per a recent press article confirmed by a friend) are all building controllers.  I am not sure if everyone’s controller is based on OpenFlow.  Google also built a controller, which I am sure is for internal use, and I know they contribute to the OpenFlow working group.  I listened to Guido Appenzeller, CEO of BSN, speak, and he made some interesting points about controllers.  He contrasted BSN with Nicira by saying that Nicira was overlay tunnels for VMs/HyperV, while BSN was focused on three elements of SDN: (i) hardware (HW) switches, (ii) an open source controller, which is Floodlight, and (iii) apps.  He commented that Floodlight was ahead of all other controllers and that the market (i.e. end-users) is going to want an open source controller and will not be locked into a single controller from a single vendor.  The BSN controller is being architected for service providers and enterprises, and he believes the SDN market will evolve along the lines of Linux.  BSN is not a hardware company, so they are not focused on switches.  BSN is focused on the controller for the good of the community and plans to build its business around the applications using SDN.  They want to be the App Store of SDN networks.  He commented that the first phase of SDN is overlays on existing brownfield networks, but soon there will be a new hardware/switch layer that enables the second phase of the SDN evolution.  In a later panel, Jayshree of Arista commented that there might be application-specific overlay controllers.

App Stores for Controllers: When I hear people describe this view of SDN, it always involves applications like load balancers, firewalls and routing.  I think of these things as network services; I do not think network services belong in App Stores, and I am not convinced they have anything to do with SDN.

When Does SDN Matter: Everyone agreed that SDN is starting now, that it will accelerate, and that it will reach critical mass in three years, which will mark the start of a sustained period of adoption and growth.  Dave Stevens, Brocade’s CTO, commented that SDN does not reduce the power of incumbency and that the big stacked networks built over the past twenty years are complex and really hard to switch out.  I would comment that this is exactly the problem statement for SDN.  He commented that SDN is about automation and that many of their customers are asking what SDN means to them.  Brocade’s plan is to partner for SDN.

VXLAN: There was lots of chatter on VXLAN.  I understand it; I just do not get the excitement.  Wrapping something in more of something is not really innovation.  If you get excited about VXLAN, you probably think fondly of the following terms: Multibus, IGRP, AGS, RIP2, and you have not missed a standards body meeting in a few decades.
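For anyone who has not looked at what the wrapping actually is, here is a back-of-envelope view of the standard VXLAN encapsulation, using the textbook IPv4 header sizes; the arithmetic is mine, not anything said at the conference:

```python
# VXLAN overhead per frame: wrapping an Ethernet frame in more headers.
outer_ethernet = 14   # outer MAC header (no VLAN tag)
outer_ip       = 20   # outer IPv4 header
outer_udp      = 8    # outer UDP header
vxlan_header   = 8    # flags + 24-bit VNI + reserved fields

overhead = outer_ethernet + outer_ip + outer_udp + vxlan_header  # 50 bytes
inner_frame = 1514    # the original Ethernet frame being wrapped

print(f"{overhead} bytes of wrapper per {inner_frame}-byte frame "
      f"({overhead / inner_frame:.1%} overhead)")
```

Fifty bytes of more-of-something per frame; useful plumbing, but not a new network.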

SDN in WAN: There were three transport-focused companies at the conference: CIEN, Cyan and Vello.  Ever since the Google OpenFlow WAN presentation at the 2012 ONS, there has been a lot of interest in how SDN fits in the WAN.  The VP of Engineering for Vello commented that transport is complex and resources (i.e. fiber?) are scarce, and he cited the Google OF WAN.  They are focused on DC-to-DC interconnects.  The CTO of Ciena (an old colleague from my CIEN days) commented that the problem SPs have is how to get a lot of capacity to many locations cheaply.  He described this decade as the M2M decade, coming off the mobility and internet booms of the prior decades.  He also made some other comments that I would agree with: (i) if SPs can separate the computation of topology from path calculation, that could become very powerful for carriers, and (ii) the SDN proposition for the carrier is the ability to have levers that change the underlying wires of the network more than once every 3-7 years.  Imagine if they could change them many times a day!  That last sentence was a veiled Plexxi comment by me.

SDN for High Capacity Infrastructure: One of the speakers I was most impressed with was J.R. Rivers of Cumulus Networks.  I had never heard him speak before.  He was part of an SDN panel with three other companies, but had time to make some interesting comments.  He said that Cumulus is focused on high capacity infrastructure.  He talked about building DCs in which servers are made up of communities of systems, and about how these communities peer with other communities in the DC.  He cited an example of how a community of ten servers might work together; there are many apps in a DC, and clusters of machines want to work with apps in other clusters.  He thinks SDN is a way to deploy apps in the DC and that enterprises are in the proof-of-concept phase for SDN.  He commented that SDN might be the SLA for the application in the enterprise.  He does not think that SDN obsoletes hardware; rather, SDN allows the network to run different parts of software on different end points.  SDN is not path computation, it is connectivity: SDN enables virtual and physical machines to communicate, and in totality this becomes an enabler for SLAs to move into the overall network system.

Panel on Switching and Networking: There was an interesting fireside chat with Arista, BSN, Brocade and HPQ towards the end of the day.  Jayshree of Arista views SDN as a seminal moment in networking in which the orchestration of the network is improved, and this translates into OPEX savings.  Guido commented that controller architectures will evolve and that we will see the first generation of centralized controllers give way to a distributed controller model.  There might be one logical controller and a distributed set of computational controllers, like a Hadoop cluster.  There was a discussion about the development of controllers of controllers and application-specific controllers for overlay functions.  At the end of that discussion, the consensus was that there is no good solution for the controller of controllers and that this is probably a good PhD thesis waiting to be written.  There was unanimous agreement that the vSwitch is a strategic point in the network.  With all the SDN talk, HPQ pointed out that not everything is about SDN: managing the legacy network is just as important, and customers want a single pane of glass to manage the network.

Cisco Systems Meme: Throughout the conference there were discussions on the future of Cisco and what the effect of SDN on Cisco will be; this is kind of what happens when you run around the world with a presentation entitled “Cisco and the Seven Dwarfs” from eight years ago.  This is not a topic I have been shy about, but there was a neatly packaged thesis that made sense to me from several CEOs of SDN startups who are former Cisco executives.  The thesis is that Cisco has grown up to be IBM.  They develop their own ASICs, they package their ASICs/hardware with their closed software, and they sell it as a system at very high margin.  Think IBM mainframe.  The only way to beat Cisco is to be cheaper, better and faster.  This was clearly the strategy of Arista.  Along comes SDN, and it is going to unleash the value that is contained in the closed system.  I once wrote a book on what happens when you deregulate a market (telecom), and clearly many of the SDN startups are looking to deregulate Cisco’s market share.  The companies that are building SDN solutions see Cisco as the IBM of the 1980s and want to unlock the value in the closed system.  In one way this makes sense to me, as I started my technology career selling multi-protocol bridges and routers into SNA networks.  I was told as a young salesperson that no one gets fired for buying IBM and that to beat IBM you need to be cheaper, better and faster.  I know today that no one gets fired for buying Cisco.  This was reinforced just yesterday when I got an email from the CEO of a cloud provider who told me he was not interested in any alternatives to Cisco because they are a Cisco shop and Cisco now has SDN.  The caveat with this story making sense to me is that I might be falling victim to the power of stories and legends.  😦

/wrk

SDN Controllers and Military Theory

A long-time reader of my blog will know that I enjoy analogies between technology and military strategy and doctrine.  It is with this in mind that a colleague sent me the following link to a post on the F5 blogs about centralized versus decentralized controllers, in which there is a reference to a DoD definition of centralized control and decentralized execution.  There is a lot going on in the post, and most of it is code for “do not change the network,” which can be read in this quote: “one thing we don’t want to do is replicate erroneous strategies of the past.”

My first question is what erroneous strategies would those be?  Military or technical?  I think I understand what the message is in this post.  It is found in this paragraph:

The major issue with the notion of a centralized controller is the same one air combat operations experienced in the latter part of the 20th century: agility, or more appropriately, lack thereof. Imagine a large network adopting fully an SDN as defined today. A single controller is responsible for managing the direction of traffic at L2-3 across the vast expanse of the data center. Imagine a node, behind a Load balancer, deep in the application infrastructure, fails. The controller must respond and instruct both the load balancing service and the core network how to react, but first it must be notified.

Why does a single controller have to be responsible for managing the direction of traffic across the vast expanse of the data center?  Is there a rule somewhere that states this, or is it an objective?  I think there can be many controllers.  I think controllers can talk to controllers.  Why does the controller have to respond and instruct the load balancing service?  That is backwards.  I would say this is exactly the model that people who build networks want to move away from.  Would it not be easier to direct application flows to a pool of load balancers?
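To make the many-controllers idea concrete, here is a toy sketch – hypothetical names throughout, not any shipping controller – in which a local controller absorbs a node failure itself and passes only a summary upward, instead of a single central brain instructing every service:

```python
# Toy sketch: controllers talking to controllers. A local controller reacts
# to a failure in its own pod; the parent only coordinates, it never makes
# packet-level decisions for the whole data center.

class LocalController:
    def __init__(self, name, lb_pool):
        self.name = name
        self.lb_pool = set(lb_pool)   # load balancer pool this pod steers to

    def node_failed(self, node):
        # React locally: stop directing application flows at the failed node.
        self.lb_pool.discard(node)
        return f"{self.name}: steering around {node}, {len(self.lb_pool)} targets left"

class ParentController:
    def __init__(self, children):
        self.children = children      # many controllers, not one

    def handle(self, child, node):
        # The parent hears a summary; the reaction already happened locally.
        print(child.node_failed(node))

parent = ParentController([LocalController("pod-1", ["lb-a", "lb-b", "lb-c"])])
parent.handle(parent.children[0], "lb-b")
```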

In terms of the military doctrine analogy, a complete reading of John Boyd is in order.  I have mentioned Boyd in a few prior posts here and here.  The most applicable military theory to SDN is Boyd’s O-O-D-A loop.  Anyone who worked with me at Internet Photonics or Ciena in the prior decade knows that I have used this model in meetings to illustrate technology markets and sales operations.  Here is the definition of the O-O-D-A loop from Wikipedia:

The OODA loop (for observe, orient, decide, and act) is a concept originally applied to the combat operations process, often at the strategic level in military operations. It is now also often applied to understand commercial operations and learning processes. The concept was developed by military strategist and USAF Colonel John Boyd.

Readers should note that I have a book on Boyd as well as copies of some of his presentations, and he always used the dashes in the acronym.  If you are looking for a military theory to apply to SDN and controller architecture, it can be found in Boyd – not in joint task force working documents.  SDN is not about central control; it is about adapting to the dynamic changes occurring in the network based on the needs of the application.  We are building systems — not silos.
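As a toy illustration only – none of this is a real controller API – the O-O-D-A framing maps onto a network control loop like this, and the point is the speed of the cycle, not the centralization of the decision:

```python
import random

# Toy O-O-D-A loop for a network controller (illustrative, assumed names).
links = {"a-b": 1.0, "b-c": 1.0}           # link -> current load

def observe():
    # Observe: sample the state of the network as it actually is right now.
    return {link: load * random.uniform(0.5, 2.0) for link, load in links.items()}

def orient(state, threshold=1.5):
    # Orient: interpret the observations against what the application needs.
    return [link for link, load in state.items() if load > threshold]

def decide(hot_links):
    # Decide: commit to an action before conditions change again.
    return [("reroute", link) for link in hot_links]

def act(actions):
    # Act: apply the change, then loop straight back to observe.
    for verb, link in actions:
        links[link] *= 0.5                 # stand-in for steering traffic away
        print(f"{verb} around {link}")

for _ in range(3):                         # cycle faster than the environment
    act(decide(orient(observe())))
```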

/wrk

DEC, OpenVMS, Miles Davis and Being Swayed by the Cool

I joined my first startup in 1989.  I was the fourteenth employee.  Down the street in the Old Mill was the headquarters of Digital Equipment Corporation (DEC).  They had 118,400 employees; that was their peak employment year.  In the late 1980s, DEC was dominating the minicomputer market with their proprietary, closed, custom software.  DEC was a cool company and the fifth company to register a .com address (dec.com), a decade before Netscape would go public.  DEC was one of the first real networking companies outside of the world of SNA.  Radia Perlman, inventor of spanning tree, worked at DEC.  Most recently, her continued influence on networking can be seen in the development of TRILL.

The continued rise in microprocessor capabilities would prove to be an insurmountable challenge for DEC.  Over the years, DEC had built a massive, closed software operating system with a non-extensible control plane, extended across proprietary hardware.  This architecture would be eclipsed by the workstation and client/server evolutions.

To counter these threats, DEC considered opening their software ecosystem by adding extensibility and programmability, including a standardized interoperability mechanism (API).  The idea of opening the DEC software ecosystem culminated in 1992, when DEC announced a significant update to their closed, proprietary operating system.  The new software release would be called OpenVMS, or Open Virtual Memory System.  The primary objective of OpenVMS was to allow many of the different technology directions in the market to become one with the DEC ecosystem.  It made a lot of sense.  In 1992, workstations were the hot emerging trend of the market and the Internet was only two years removed from ARPANET control.  Windows 3.0 was two years old, and anyone who used Win 3.0 knows it was a huge improvement over 2.1.  The rack server was a decade away.  Some companies still chose OS/2 over Windows.  The Apple Newton (i.e. iPhone) was a year away from release.  Here is a summary of DEC’s OpenVMS release:

—————————————————

OpenVMS is a multi-user, multiprocessing, virtual memory-based operating system (OS) designed for use in time sharing, batch processing, real-time (where process priorities can be set higher than OS kernel jobs), and transaction processing. It offers high system availability through clustering, or the ability to distribute the system over multiple physical machines. This allows the system to be “disaster-tolerant” against disasters that may disable individual data-processing facilities. VMS also includes a process priority system that allows for real-time processes to run unhindered, while user processes get temporary priority “boosts” if necessary.

OpenVMS commercialized many features that are now considered standard requirements for any high-end server operating system. These include:

—————————————————

In 1998, six years after announcing OpenVMS, DEC was acquired by Compaq, a personal computer company.

/wrk

D’où Venons Nous / Que Sommes Nous / Où Allons Nous

Why do we seek?

When do you feel most alive?

What causes you to be up at 4am and feel invigorated?

When I first thought about writing this post, I was trying to come up with something pithy to write in parallel to Brad’s post. I failed. I do not have something pithy. What I have is a feeling that I have not had in a long time in tech. It is the feeling of being most alive. I have this feeling because I am meeting people who are seeking. Let me describe a scene:

You are an experienced IT/technology executive. You are told that in a few days some company you have never heard of will be in town, and they might have something of interest you should see. You do not really have an accurate description of what you will see; yet you make an appointment to go see it. You are given an address. You find the building. It is difficult to gain access, as you need to be on the list at the desk. Once you have arrived at the correct floor, you find another set of doors and then a reception area where your name is checked again. You state your business and are led down a hallway, past nice offices and well-appointed conference rooms. A few turns later, down a narrowing hallway, you arrive at the smallest, windowless conference room, the one next to the storeroom. Inside the conference room you find a few people and some IT equipment. It is hot. There is a tabletop fan swirling the warm air. You sit down. There are no PowerPoint slides – just the white board, a flat panel, a rack of gear and some people. The conversation begins. You can leave at any time; yet you stay. It is hot; yet you stay. Sleeves are rolled up, your brow fills with sweat, and still you stay. The white board is filled many times over. Time goes by and the conversation goes on. When it is done you leave hot, tired, exhausted and alive.

What would you call that? I call that a meeting of the revolutionaries. “It is not the critic who counts, not the man who points out how the strong man stumbled, or where the doer of deeds could have done better. The credit belongs to the man who is actually in the arena; whose face is marred by the dust and sweat and blood; who strives valiantly; who errs and comes short again and again; who knows the great enthusiasms, the great devotions and spends himself in a worthy cause; who at the best, knows in the end the triumph of high achievement, and who, at worst, if he fails, at least fails while daring greatly; so that his place shall never be with those cold and timid souls who know neither victory nor defeat.”

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. **

Missing the Point on Software Defined Networking (SDN)

A week after attending ONS, I have been reading a lot of analysis of SDN.  My general conclusion is that most people are missing the point.  This is not about separating the control plane from the data plane.  Despite what was said, asking what problem SDN is trying to solve is not the point of SDN.

I generally viewed ONS as a creativity fail.  Why?  Change does not come from the company that controls 75% of the market and the seven dwarfs.  If you listen to almost all of the incumbent vendors who spoke at ONS, they are wrapped around the axle about state, hybrid networks, software defined interfaces, overlays, and decentralized versus centralized control – all issues the legacy suppliers care about.  I saw a number of investment management people at ONS, and nearly all of them said they were there to see if CSCO was going out of business.  In my view, we can stop that discussion now, as CSCO is not going out of business.  They are going to be around for a very long time.  CSCO did very well transforming the legacy infrastructure that IBM laid down from 1974 to ~1994; perhaps the companies that emerge from the starting point of SDN will be the ones that transform the legacy infrastructure laid down by CSCO from 1992 to ~2013.

Software defined networking is not about retooling the legacy of switched networks that the world has been piling up for the past fifteen to twenty years.  SDN is about doing something different.  We need to stop trying to figure out how to fit SDN into the past – let the incumbents figure that out.  SDN is a starting point for the next round of innovators to ask a different set of questions.

What is possible if I designed the network differently?  What is possible if I threw out two decades of design assumptions and principles?  What is possible if I started with the premise that the network can do only one of two functions: connect and disconnect?
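A toy sketch of that last premise, and nothing more: if connect and disconnect are the only verbs the network offers, then reachability becomes something you compute from those verbs rather than something you buy in a box. The names are mine and purely illustrative:

```python
# Toy model: a network whose only operations are connect and disconnect.
links = set()

def connect(a, b):
    links.add(frozenset((a, b)))

def disconnect(a, b):
    links.discard(frozenset((a, b)))

def reachable(a, b, seen=None):
    # Reachability falls out of the two verbs; nothing else is needed.
    seen = seen or {a}
    for link in links:
        if a in link:
            (nxt,) = link - {a}
            if nxt == b:
                return True
            if nxt not in seen:
                seen.add(nxt)
                if reachable(nxt, b, seen):
                    return True
    return False

connect("app", "switch"); connect("switch", "storage")
print(reachable("app", "storage"))   # True
disconnect("switch", "storage")
print(reachable("app", "storage"))   # False
```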

Asking the incumbents to lead this discussion is a waste of time, because they are bound by conditions that lead to the same answer.  Here is an interesting press release from CSCO in which they state that Cisco is “Reinforcing its commitment to the industrialization of the Internet…”  The industrialization of the internet is an interesting phrase.  What are they trying to tell us?  Is the current internet state equivalent to an evolving agrarian society, and are we in the social and economic evolution into an industrial society?  If that is an accurate assumption, then perhaps SDN will be the impetus for the re-composition of the industrial age into a workable form.

I am not going to repeat a list of all the signs of this re-composition age; if you need a refresher, just read through the networking category on my blog.  Even this morning I read some interesting tweets decrying the evolutionary path of the internet from agrarian to industrial.  I removed the twitter handles, but the participants will laugh:

“If the cloudtards keep pushing for utility computing, wait until *that* gets regulated. *giddy with excitement*”

“Like SOPA? PIPA? Extraditing people from other countries? The internet is starting to become heavily politicized.”

In my view, SDN is not a tipping point.  SDN is not obsoleting anyone.  SDN is a starting point for a new network.  It is an opportunity to ask: if I threw all the crap in my network in the trash and started over, what would we build, how would we architect the network and how would it work?  Is there a better way?

All we are seeing today are failures and proofs of varying success on the way to a new network.  Drawing conclusions at this stage of development is pointless.  We should be intrigued that people are asking: is there a better way?  When we start making compromises to fit what is new into the past, that is when it becomes homogenized (i.e. industrialized).  I am not saying that the evolution of the network will be easily done – that is why CSCO is going to be around for a long time – but this is also the reason why I think the architects, the revolutionaries, the entrepreneurs and the leaders of the next twenty years of networking are not working at the incumbents.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. **

Drive the Network Smarter – Not Harder

I spent the better part of the week on the road visiting clients and attending the OFC/NFOEC show in Los Angeles.  The exhibit floor does not hold much interest for me – everything is a letdown when I think back on InterOp+Network World 1994 (Vegas) and Supercomm 1997 (New Orleans) – but I do find the panel discussions have a high chance of being interesting.  I missed the panel on Lighting Up the Data Center on Tuesday, but I did attend the Role of the Network in the Age of Social Media, and I found the presentations thought provoking.

Not Big Enough, Not Cheap Enough, More Means More:

One panelist who had spent nearly his entire career working for service providers delivered a loud and strong message to the audience: optical innovation is not going fast enough; he needs more and more and more, and it needs to be cheap, cheap and cheap.  He then went on to say that networking (i.e. optical) innovation is not thinking big enough, is not cheap enough, and that more means more.  Anyone who has had a conversation with me on this subject should have no question as to where I stand.  I think it is all nonsense, and the equipment companies that want to build more for less are crazy.  That was the point of this post.  In no real order, here are my thoughts on the subject of not big enough, not cheap enough and more is more:

1.  More is not more, but cheaper is definitely cheaper.  I understand why service providers (SPs) want to offload their R&D requirements onto equipment providers.  SPs have billions to spend on their networks, and it is easier to use this capital as leverage to get what they want from equipment companies.  As long as this trend continues, innovation will be dormant and value creation will be nascent.  That was the point of this post.

2. 100M ethernet was introduced around 1996.  1G ethernet was ratified in June 1998, with systems shipping in the 1999 timeframe.  10G ethernet was ratified in June 2002.  40/100G ethernet was ratified in June 2010.  This past week Intel formally shipped the Romley platform with the E5-2600 processor and 10G ethernet LAN on motherboard (LOM).  Ten years after the 10G standard was ratified, it is shipping on a motherboard.  Hmm…let us extrapolate the trend: the 10G server upgrade cycle is kicking off 14 years after the 1G cycle started.  Anyone want to guess when the 100G server cycle is going to kick off?  (See the back-of-envelope sketch after this list.)

3. The last thirty-three years of building networks to the OSI Reference Model are at an end.  We have taken the model as far as we can with Moore’s Law, and the time has come to speak of Moore’s Law Exhaustion.

4. It is time to drive the network smarter – not harder.  That is what I am working on.  Real innovation occurs when people dare to step outside the rigid construct of legacy doctrines enforced by the past.  If you are working hard on innovation with the intention of repeating the past, just providing more for less, I think you missed the point. “The success of a technology company is really about product cycles.  Technology companies become stuck in loops because product cycles become affected by success.  The more success a technology company has in winning customers, the more these customers begin to dominate product cycles.  As product cycles become anchored by the customer base, the plan of record (POR) suffers from feature creep and the ability of the company to invest in products for new customers and new markets declines.  Consider the following:

New Technology = Velocity

Developed Technology = Spent Capital, Doctrine, Incrementalism and Creativity Fail”            

That quote from a prior post was the point I was making in my F-4 Phantom II post.  More on the effect of doctrine in technology companies here.  My summation to the more-is-more-and-cheaper argument is that I am not surprised by it; it is expected from those conditioned over many years to think in a rigid construct.  That is the effect of doctrine, and doctrine must be exterminated from the organization or we are doomed to repeat loops.
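As promised in point 2, here is the back-of-envelope extrapolation.  The dates come from the standards history above; the arithmetic is mine, and it is a bar napkin sketch, not a forecast:

```python
# Extrapolating the server upgrade lag from the ethernet timeline above.
ratified     = {"1G": 1998, "10G": 2002, "100G": 2010}
server_cycle = {"1G": 1999, "10G": 2012}   # year each NIC/LOM cycle kicked off

lag = server_cycle["10G"] - ratified["10G"]        # 10 years, standard to LOM
gap = server_cycle["10G"] - server_cycle["1G"]     # 13 years between cycles

print(f"100G server cycle, by standard-to-LOM lag: ~{ratified['100G'] + lag}")
print(f"100G server cycle, by cycle-to-cycle gap:  ~{server_cycle['10G'] + gap}")
# ~2020 and ~2025: either way, do not hold your breath for more-is-more.
```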

DIY Content:

I was very interested to listen to Bikash Koley’s answer to a question about content and global caching.  He referenced the effect of Google’s High Speed Global Caching network in Africa.  This network built by Google is not without controversy.  Here is a link to a 2008 presentation about the formative roots of this network.  My point is that I increasingly see service providers and content owners taking a DIY approach to content, and these providers do not have to be the size of Google.

Cloud Providers Getting Ready for Big Data:

I was visiting a cloud provider on the west coast with a colleague, and I left with a lot of notes, but as usual, what I hear when speaking with cloud providers and what I read about cloud providers are at odds with each other.  This cloud provider had a growing hosting business with clients in India, South America and the US.  All of their compute elements are physically located in the US.  No one is worried about big data, and the external network (this thing called the internet) is running just fine for their video hosting customers as well as their avatar-hosted gaming clients.  They spend a lot of their time on back-end optimization of their compute and storage networks.  That is the 77% problem versus the 23% problem I posted about here.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. ** 

PE-Buyout, Technology Companies and Vertical Integration

When I read articles in the financial press that contain quotes from market participants about the market, I often wonder whose agenda is being served.  I read this Bloomberg article, which contains some quotes from Charlie Giancarlo of the buyout firm Silver Lake.  The title is “Silver Lake Sees Fresh Round of Telecommunications Takeovers,” and it begins by saying that PE-buyout firms are going to “zero in on makers of telecommunications equipment and mobile devices this year.”  I have no idea if this is what Charlie meant to say, but to any PE firm buying ALU, NSN, RIMM, TLAB, etc., my comment is good luck.  I have immense respect for Charlie and what he did at Cisco, but I question whether his list of buyout candidates includes aging telecom equipment makers and fading handset makers.

I have written before about a PE-buyout of NSN here and of RIMM here, and I have also written about the fallacy of valuations being considered cheap.  There is a difference between unrealized or hidden value and companies with cheap valuations.  In the Bloomberg article there is a quote from an investment advisor saying “The big story to me is the incredible undervaluation of Alcatel, this stock is a screaming buy. Private equity should be all over this.”  Everyone is entitled to their opinion, and for all I know bankers could be putting the final touches on a deal for ALU, but I think we need to be very careful when we discuss technology buyouts; they are not the same as taking RJR/Nabisco, Albertson’s, Hertz or even Alltel private.  Technology companies are different from traditional consumer and industrial markets because of the innovation/product-cycle nature of their markets.  The Oreo will have done well for Nabisco for one hundred years come March 6, 2012, selling over 491 billion cookies according to Wikipedia.  That is a long and successful product cycle with no comparison in technology.  I inserted two charts, a 20-year and a 10-year of ALU, for review with no comment.

If I were to get a call tomorrow from a PE firm asking about buying out any of the public technology companies that have cheap or compelling valuations, I would think about the question in four areas, none of which have to do with the balance sheet or current market valuation:

1.  What is the state of the present day market construct: I just posted on the service provider market last week.  Selling infrastructure equipment to global service providers is hard work, and Huawei does not make it easier.  Why anyone would want to spend billions to own a business that is difficult and not getting easier is beyond my reasoning skills.

2. Can the legacy company fail fast enough to be successful again? Legacy companies often become trapped in product cycles that turn negative, and the result of that development is a lack of creativity.  See my December 2011 post for the details.  I do think that large public companies can innovate, but to see the market beyond their past/present view they need to separate out a team, to prevent innovation from being distracted by the present and blinded by the past.  IBM did that with the PC, and Apple did the same with the iPhone and iPad.

3.  Product Cycle Management is the True Measure of Success: I have spent a lot of time posting about product cycles on my blog.  Those posts are all listed here.  I would ask myself with each buyout candidate: can the product cycle be fixed, can it become a weapon, can the company innovate to take share and differentiate in the market?  Just because a company is cheap or has a large patent portfolio does not mean that the team and talent are present to fix the product cycle.  Technology companies are very much about the talent level of the team.  Hertz may have secured the best physical locations at each airport, and that is hard to compete with, but in tech others can find ways to attack your market share; location is not a barrier to entry.

4.  How is the Market Changing: Another subject that I have spent considerable time on is the changing nature of networks and end-user markets.  All the posts starting in May 2011 are here.  The question I would need to answer is: can these legacy companies be players in the changing structure of the compute-storage-networking market construct?

I know it is possible to take a company private (e.g. Avaya) or use a private company as a platform (e.g. Genband) to put other companies around it to build solution mass and market share.  It is not easy, but it can be done.

There is another market ebb and flow playing out for technology companies, and that is vertical integration versus the world-is-flat crap.  In the 1980s-1990s, companies such as IBM were all about vertical integration.  The competitors who did well against IBM were singularly focused solution companies such as MSFT, Compaq, DELL, CSCO, etc.  Post 2001, the mantra for technology companies was to divest non-core assets and to outsource businesses such as in-house silicon and functions such as supply chain and manufacturing to APAC.  If you look at what is going on today, the large public technology companies are in the midst of reassembling the vertically integrated company from the assets they spent a good 15 years outsourcing.

Take Cisco as an example.  Most people think of Cisco as a networking company, but they sell blade servers (i.e. compute) and they have been acquiring optics and component companies as well.  IBM, which sold its networking business to Cisco in 1999, is now putting it back together.  HPQ purchased 3Com (i.e. networking) as well as 3Par (i.e. storage) to go with its compute (i.e. Compaq) business.  DELL, which started as a PC business and got into servers, now owns EqualLogic (i.e. storage) and recently purchased Force10 (i.e. networking).

My point is that these are large companies all trying to cover the compute-storage-networking market, and Cisco is no longer a singularly focused networking company.  We are on the verge of a new 10-15 year cycle in the network, as I described here, here and here.  Even Verizon is telling you the network is going to change.  My question is: why would any of the companies speculated about as buyout targets be the platform of choice to take share in the new, emerging compute-storage-network market construct?
/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. ** 

Dreadnaught for Change

In 1903, Vittorio Cuniberti published an article entitled “An Ideal Battleship for the British Fleet” in Jane’s publication All the World’s Fighting Ships.  Vittorio was the chief naval architect of the Italian Navy.  His article called for the development of an all-big-gun ship (i.e. twelve 12-inch guns) displacing 17,000 tons.  Thinking in the naval community had already been moving toward an all-big-gun ship, but Vittorio’s article provided thought leadership and helped generate perceived momentum around the concept.  The United States as well as Japan had budgeted for the construction of ships with 12-inch guns, but they were not due for completion until 1907.  In the early part of the twentieth century, naval construction required public and political oversight, as it was the single largest military expense and consumed vast resources.  Ship construction at the time was a significant capital commitment of a nation’s budget and required legislative approval.  Even today, the most powerful navy in the world has a publicly available ship construction plan.  This post is a rehash of a post I wrote in 2007.  I was inspired to use it after reading the Saturday Essay in the WSJ about the US Navy.

On October 1, 1906, H.M.S. Dreadnaught slipped into the English Channel, one year and one day from the laying of her keel.  Over the next few days, she conducted high-speed runs in excess of 21 knots, displaying the power of her steam turbine engines, a first for a warship of her size (18,000 tons).  Her ten 12-inch guns could engage targets at 10,000 yards.  She was fast enough to outrun any ship her size and powerful enough to outgun any other ship of her class.  Jane’s declared she was the equivalent of two or three battleships of the designs that had preceded her.  Dreadnaught was a lethal and brilliant combination of striking power, speed, reliability and armor that could choose to fight when, where and whom she desired.  In 1907, Dreadnaught became the flagship of the English Home Fleet and assumed the responsibility of guardian of Pax Britannica at the very zenith of the Empire, only six years after the death of Queen Victoria.  When details of her design became public, it was apparent that this single ship, hailed as a triumph by her designers, was in fact a colossal blunder in the eyes of many.

It took time for the full impact of Dreadnaught to be realized.  Her combination of speed and striking power rendered every ship in every navy, in every port of the world, obsolete in 31 months.  Critics called Dreadnaught a “piece of wanton and profligate ostentation” that “…morally scrapped and labeled obsolete [the entire British Fleet] at the moment when it was at the zenith of its efficiency and not equal to two, but practically to all the other navies of the world combined.”

David Lloyd George, a future Prime Minister who would succeed Herbert Henry Asquith in 1916, said: “We said, ‘Let there be dreadnaughts.’  What for?  We did not require them.  Nobody was building them and if anyone had started building them, we, with our greater shipbuilding resources, could have built them faster than any country in the world.”  In the end, the critics were wrong and the pioneers were correct.  Reginald Bacon, Dreadnaught’s first captain, said to the critics: “Knowing as we did that Dreadnaught was the best type to build, should we knowingly have built the second-best type of ship?  What would have been the verdict of the country [i.e. England] if Germany…had built the Dreadnaught?  What would have been the verdict of the country if a subsequent inquiry had elicited the fact that those responsible at the Admiralty for the safety of the nation had deliberately recommended the building of second-class ships?”

The history of this era is beautifully composed by Robert Massie in two books: Dreadnaught and Castles of Steel.  I generously borrowed from Massie’s first book, Dreadnaught (pages 468-489), to compose the first three paragraphs.  If you enjoy naval history, The Grand Fleet and Battlecruisers are two cherished books in my library that complement Massie’s.  I began this post with the history of the Dreadnaught because of the many parallels between the history of naval innovation and technology and the history of networking.

Over the course of the past couple of years, the intellectuals and thought leaders have been calling for change.  See Hamilton here, Gourlay here, and even the Pax Britannica of networking states that networks must change.  I draw one conclusion from these observations: we are still waiting on the change that must happen.  We are still waiting on Dreadnaught.  I know some companies think they have built a new Dreadnaught for the data center, but in reality it does not outclass any other solution and it does not hopelessly obsolete any of the existing solutions.  The point of my Tubthumping post was that we are still waiting on someone to build the best type of network for a new era.  Merchant silicon does not equal a new era.  Big fat switches in the core do not equal a new era.  The 10G and 40G and 100G server eras will be enabled by a different network architecture that, when it reveals itself, will be as obvious as Dreadnaught was to the navies of the world.  I will know it when I see it, but I know it is not a white box switch with merchant silicon running some homogenized software stack.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. ** 

2007 Thesis on P2P Video, Bandwidth and Broadband

I have been thinking a lot about bandwidth, telecom services and the traditional telecom equipment market of selling to service providers.  I posted a snippet of my thinking in my 11.23.11 Notebook post: “Telecom Equipment Markets: I sent the four charts to the left to a friend the other day.  Both of us had careers in networking in the 1990s.  He came back at me with the following argument: ‘Carrier traffic growth is 40-60% annually, carrier CAPEX growth is ~3% annually and carrier revenue growth is <10% annually.’  The only way to reconcile that construct is to drop the cost per bit.  Who will bear the burden of that cost reduction?  I think the most likely candidate is the equipment providers.”  Continue reading
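As a rough illustration of why that construct can only be reconciled by dropping the cost per bit, here is the arithmetic with assumed midpoints; the growth rates come from the quote above, the specific numbers below are mine:

```python
# Traffic grows much faster than revenue, so revenue per bit must collapse.
traffic_growth = 0.50   # midpoint of 40-60% annual traffic growth
revenue_growth = 0.08   # "<10%" annual revenue growth, assumed midpoint

per_bit = (1 + revenue_growth) / (1 + traffic_growth)
print(f"revenue per bit after 1 year:  {per_bit:.2f}x")      # ~0.72x
print(f"revenue per bit after 5 years: {per_bit**5:.2f}x")   # ~0.19x
```

Cost per bit has to fall nearly 30% a year just to stand still, and somebody has to fund that reduction.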

Looking back to go forward: what was I thinking in 2005?

I have been spending a lot of time thinking about the future of networking and how to build a solution and a business for the future – call it a 12-48 month window from today – rather than for the present day, as I am fully convinced that there is a high rate of external change in networking.  I am engaged in the process of thinking about technology decision drivers, technology adoption, what is or will be important to people, and anticipating the actions of competitors.  By far the best tool for that process is the O-O-D-A loop developed by John Boyd, but that is best left to a separate post for another day.  After a week of intellectual exercises, starts, failures and restarts, I found myself on a Saturday morning, espresso in hand, looking back on documents and presentations I produced over the last ten years.

I surmise that most people think this exercise is a waste of time, but I have posted on thought anchors and biases before.  I also believe we are all susceptible to diminished breadth in our creativity as we get older.  Diminished creative breadth is a root cause of why history repeats itself, and it is another reason why, when we change companies, we tend to take content and processes from our prior company and port them to our new company.  This is especially true in the technology industry.  We recycle people; hence we recycle ideas, content and value propositions from what worked before.  Why be creative when it is easier to cut and paste?  As a casual observation, it seems to me that most people working in tech have a theta calculation on their creativity.  I believe a strategy to guard against creativity decay is to look back on past work and critique it.  That is how I spent part of my Saturday: looking back, to go forward.

I found a presentation that I had produced and presented in January 2005 – almost seven years ago.  Going through it, I found that many elements of the presentation are relevant today and, to my surprise, represent elements of projects I am working on or thinking about.  I have attached a link to the presentation below (JAN 2005 for SIWDT).  It is mostly unedited, but I did remove one slide specific to my employer at the time and a reference to the same employer on another slide.  Excluding those two edits, the presentation is intact, and readers are welcome to laugh at it with perfect hindsight.  Here are my thoughts; you are welcome to review the presentation before or after reading them.

JAN 2005 for SIWDT

– Slide 1: Everything on this slide was accurate.  Our exit from the rampant waste of the client/server era continues to accelerate as we enter a new era, which I call the datacenter era for IT and which I posted about last week.

– Slide 2: Many elements are still true, but I have seen an acceleration of enterprises wanting to bring external elements of their outsourced network in-house.

– Slides 3-4: Still true today.

– Slides 5-6: Everything proves true today, and I would argue that insourcing is accelerating.

– Slides 7-8: All still true.

– Slide 9: This is the key slide.  Everything I am thinking about today to create in the network is encapsulated on this slide.

– Slide 10: In 2005, I called the cloud “grid computing.”

– Slide 12: At every marketing meeting I attend, I think of Dr. G. Clotaire Rapaille.

– Slides 13-14: Just directed points for breakouts…

I am not a huge Jack Welch fan, but I do appreciate the honesty and succinctness of the quote I used on the cover of the presentation: “When the rate of change outside exceeds the rate of change inside, the end is in sight.” – Jack Welch

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private.  Just hover over the Gravatar image and click for email. ** 

Stream of Consciousness on the Data Center and Networking

I was thinking today that it had been seven days since I took the time to write a blog post.  This is partly because I have been busy working and partly because I have not found anything interesting to write about.  I use writing as a way to frame my thoughts.  Sometimes, when I cannot articulate what I want to say, I go for a long bike ride; I find myself writing the argument in my brain, and when I get home it is just a matter of typing the words.

I was thinking I would post something about RIMM’s earnings, but I think I have said enough about RIMM, although I will reiterate, for those who think stocks can be “oversold” and “will bounce,” that negative product cycles are hard to break.  This past week was analyst week in Silicon Valley, with a variety of public networking companies hosting capital market days or technology briefings.  I feel comfortable with my decision to spend time in SV next week and not this week.  Judging by the number and content of the phone calls I have gotten at dinner time in the Eastern time zone, I think a lot of people are more confused than they were before the week began.

With all the news out of SV this week, I learned that Cisco is not really going to grow 12-17% anymore, Juniper’s QFabric is ready and Infinera has built something really big.  Side note…the JNPR and INFN marketing teams must share marketing visions.  A few days ago Doug Gourlay posted this on his personal blog.  I was also included in a number of email threads sorting out all the news.  Here are my stream-of-consciousness thoughts on the data center debates specific to networking.  This is really an attempt to clarify my previously posted thoughts on the subject while assimilating all the debates I have been part of this week.  I am looking forward to people calling me out and disagreeing.

1. My first reaction to all the data center architecture debates and product announcements is to throw them all in the trash.  Push the delete button.  I have tried to say this in a nice manner before, but maybe I need to be specific as to the problem, the evolution and where the innovation is going to occur.  I am going to use virtualization as the starting point.  I am not going to review virtualization, but if you have not heard the news, VMs are going to be big and the players are VMW, RHT and MSFT.

2. I wrote this on 07.27.2011, but it is worth repeating.  Networking is like the last dark art left in the world of tech.  The people who run networks are like figures out of Harry Potter.  Most of them have no idea what is in the network that they manage; they do not know how the network is connected, and they hope every day that nothing breaks the network.  If the network does break, their first reaction is to undo whatever was done to break the network and hope that fixes the problem.  Router, switch and firewall upgrades break the network all the time.  The mystics of the data center/networking world are the certified internet engineers.  These are people with special training, put on the planet to perpetuate and extend the overpaying for networking products, because no one else knows how to run the network.

Intended network design has changed little in twenty years.  I look back in my notebooks at the networks I was designing in the early 1990s, and they look like the networks that the big networking companies want you to build today.  If you are selling networking equipment for CSCO, JNPR, BRCD, ALU, CIEN, etc., you go to work every day trying to perpetuate the belief that Moore’s Law rules.  You go to work every day and try to convince customers to extend the base of their networks horizontally to encompass more resources, to build the network up vertically through the core, to buy the most core capacity they can, and to hope the oversubscription model works.  When the core becomes congested or the access points slow down, come back to your vendor to buy more.  When you read the analyst reports that say JNPR is now a 2012 growth story, that is code for “we are hoping customers come back and buy more in 2012.”  Keep the faith.  Keep doing what you are doing.  Cue the music…don’t stop believin’.

3. The big inflection point in the DC is where the VM meets the vSwitch.  I wrote the portion above about the network being the last dark art because the people running the network are different from the people running the servers.  These are two different groups; they are on the same team, but they often do not play well together.  Now we have this thing called virtualized I/O that runs on the NIC in the server.  Is the virtualized I/O part of the network or part of the server?  I am not sure this is answered yet, because there are different systems for managing parts of this configuration, some built for the server team and some for the network team.

4.  I have written this before, but it is worth repeating and clarifying.  The number of VMs running on a server is increasing.  It is not stable.  It has not peaked.  It is not in decline.  The outer boundary today for the number of VMs in a rack is around 2k.  With Romley MBs, SMF and 10GE connections, this limit is going higher – maybe closer to 3k sooner than you realize.  At 2.5k VMs in a rack (35 servers per rack) and 40 racks in a cluster, that is 100,000 VMs per cluster.  Multiply 100,000 by the number of clusters in your DC footprint.  To add to the complexity, how many interfaces are going to be defined per VM?  What if your answer is 4?  That creates 10,000 MAC addresses per rack, or 400,000 per cluster.  The Arista 7548S ToR supports 16,384 MAC addresses.  I have already heard of people wanting to push five or more interfaces per VM!  (See the back-of-envelope sketch after this list.)  I really do not care if your numbers are less or more than the above, because my point has yet to come.

5. The problem that I am looking at has nothing to do with the size of your ToR or switch fabric.  Configuring VMs and gathering server utilization statistics is easy.  The problem starts with connecting all the desired VMs to the network.  This is not an easy task.

The problem gets really complicated when you try to figure out what is going on in the network when something fails or performance degrades.  Inside the DC, performance data is relatively simple to obtain.  Configuring the network and diagnosing problems in a distributed network of data centers, in which the compute end points are scaling to more VMs per server, per rack and per cluster – well, that is a big boy problem.  Adding to the complexity is the need to diagnose and solve network connection problems BEFORE humans can react.  That is the network world to which we are evolving.  Data sizes get larger, the cost of storage declines, and the cost of compute declines with the virtualization of the I/O and improved MPUs.  The only architecture not changing and not moving along the same cost curve is the network and the manner in which the network is configured and managed.
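Here is the back-of-envelope sketch referenced in point 4.  The inputs are my round numbers, but the conclusion survives any reasonable tweak:

```python
# VM and MAC address scale math from point 4 (assumed round numbers).
vms_per_rack      = 2_500    # outer boundary heading past 2k with Romley/10GE
racks_per_cluster = 40
ifaces_per_vm     = 4        # some people already want 5+

vms_per_cluster  = vms_per_rack * racks_per_cluster    # 100,000 VMs
macs_per_rack    = vms_per_rack * ifaces_per_vm        # 10,000 MACs
macs_per_cluster = macs_per_rack * racks_per_cluster   # 400,000 MACs

tor_mac_table = 16_384   # e.g. the Arista 7548S MAC table cited above
print(f"one rack consumes {macs_per_rack / tor_mac_table:.0%} of a ToR table")
# ~61% from a single rack; any L2 domain spanning racks blows well past it.
```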

Earlier in the week I was visiting a large service provider that had spent the last five years unifying and collapsing its networks.  One of their forward-looking initiatives is to figure out how to better engineer traffic flows.  As part of the process they want to keep various traffic flows local, and they want a topology-aware network so they do not have to provision so much redundancy.  In my view, they want a connection-aware (my terminology) network that identifies and reacts to network failures, which matters in a world in which service outages and SLA misses now trigger rebates.  This last point has increased sensitivity because connection costs are on a declining cost curve – not an upward-sloping curve.  They also described how multiple NOCs deal with problems in different parts of the network.  The entire conversation was a repeat of what I hear from DC operators; the difference was the geographic scale of the network.

/wrk

* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private.  Just hover over the Gravatar image and click for email. **


Looking at the Past to See the Future

In early May 2011, I wrote a blog post in which I dubbed the last ten years the “lost decade” for networking and venture-backed networking startups.  The last ten years were not all bad.  Google went public and there were a number of networking IPOs, starting with RVBD in 2006 and a handful recently such as BSFT, but none of these networking companies is going to be the next CSCO, JNPR, CIEN, ASND, etc.  My supposition is that none of them is solving a big enough problem and changing the rules by which the incumbents play.  I think the world of APKT, but I am not sure they are going to force companies out of business.  I think it is more likely that they grow for a long time and eventually get bought by a large incumbent who capitulates, realizing it will never catch APKT.  I followed my Lost Decade post with a post on the end of the era of large-scale, venture-capital-backed innovation.
Continue reading