Part II Cisco: It is all about the network, stupid
This section is a modified version of an email exchange I had with the CTO of a networking startup. I have removed any specific references to the company and focused on what I think are the emerging trends in networking, which is where Cisco needs to be focusing. I am going to start with how value has been created in the networking segment post-2000.
When I look at a handful of networking IPOs in recent years (RVBD, INFN, DDUP, BBND, STAR, MITL, BSFT, APKT, etc.) the parallels to the 1990s are clear. I looked at all of these companies while on the buy side, and I became very familiar with the management teams, the operating characteristics of the companies, and what really worked and what did not.
– RVBD: What did they do? Put a drive in a network element to cache content and improve the compute function at the edge of the network, because the CIFS protocol was too chatty and the network did not have enough bandwidth/capacity to overcome the software coding limitations. So the content was moved deeper into the network. In the case of RVBD, this was not really innovative. I was on an engineering team that coded various run-length compression algos into bridges and routers in the late 1980s and put PBX functions in an ATM-based CPE box in the 1990s. What was innovative about RVBD was the mechanism used to identify the content frequently called by remote users, move that content locally to improve the local compute cycles, and then sync the data between the storage points. The key point to focus on is compute.
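The RVBD mechanism described above can be sketched in a few lines: track how often remote users call for an object, and once it is hot, keep a copy at the edge instead of crossing the WAN for every request. This is a minimal toy sketch, not Riverbed's actual design; the class name, threshold, and promotion policy are my own illustrative assumptions.

```python
from collections import Counter

class EdgeCache:
    """Toy sketch of edge content caching: count request frequency,
    keep hot objects locally at the branch, and fetch cold ones over
    the slow WAN. Illustrative only; not RVBD's implementation."""

    def __init__(self, hot_threshold=3):
        self.hits = Counter()        # request frequency per object key
        self.local = {}              # content cached at the edge
        self.hot_threshold = hot_threshold

    def fetch(self, key, wan_fetch):
        self.hits[key] += 1
        if key in self.local:
            return self.local[key], "local"   # served at the edge, no WAN trip
        data = wan_fetch(key)                 # chatty round trip over the WAN
        if self.hits[key] >= self.hot_threshold:
            self.local[key] = data            # promote frequently called content
        return data, "wan"
```

A real system would also have to sync writes back to the origin storage, which the sketch omits.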
– FFIV: In my view, the game changer for FFIV came after the 2001 crash. They opened their platform (Big-IP) up to 3rd-party apps with a developer's kit, API, simulators, tools, etc., and then let the community of users share these apps. When you talk to big banks (I spent a lot of time with CIOs and developers of the top ~10 banks over the past few years) you find out they have tens of thousands of apps. FFIV organically created a community of users who became very loyal because of that community, and the community also became compelling to new users. The lesson learned: opening the Big-IP platform to the market created a community of application users. BTW…applications are about compute. Note…JNPR is trying to do this with JUNOS, but it is not quite the same. JNPR is more partner-OS related than end-user related.
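The open-platform idea above boils down to a simple pattern: the vendor ships an extension point, and the user community writes and shares the apps that hook into it. Here is a minimal sketch of that pattern; the event names and handler are invented for illustration and are not F5's actual API.

```python
# Toy sketch of an open traffic-processing platform: third parties
# register handlers against traffic events, loosely in the spirit of
# community-shared Big-IP apps. Event names here are made up.

_handlers = {}

def on(event):
    """Decorator: register a user-supplied app for a traffic event."""
    def register(fn):
        _handlers.setdefault(event, []).append(fn)
        return fn
    return register

def dispatch(event, payload):
    """Run every community-contributed handler for this event."""
    return [fn(payload) for fn in _handlers.get(event, [])]

@on("http_request")
def add_header(req):
    # An example community app: tag requests with an edge identifier.
    req["headers"]["X-Edge"] = "branch-1"   # illustrative value
    return req
```

The platform vendor only ships `on` and `dispatch`; the loyal community supplies everything registered on top of it, which is where the lock-in comes from.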
– INFN: I am going to say very little about INFN. I will just summarize by saying that the commoditization of the optical component supply chain happened far faster than they estimated, so when they brought a system-level product to market the price disruption was far less than expected. Hence…a lesson…processors, memory, and capacity tend to be commoditized on a horrible curve of buy more for less. When processing, capacity, and memory are sliding down that curve, software will just consume the compute cycles. If you doubt me, there is a reason Apple developed their own processors.
– DDUP: I think these guys wrote the book on how to build an innovative, disruptive business in a market structure with large dominant companies. The lesson here is that operational software that removes the need to spend huge amounts of CAPEX on storage arrays, and increases the effective utilization of sunk cost, is a winner. Why would a customer want to effectively move data from tier 1 storage to other tiers? Answer: to allow compute cycles and new apps to use the top tier.
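The core DDUP idea, deduplication, is what lets software stretch sunk storage CAPEX: store each unique chunk of data once and keep only a recipe of hashes for everything else. A minimal sketch with fixed-size chunks follows; Data Domain's actual products used variable-size chunking and much more engineering, so treat the function names and chunk size as illustrative assumptions.

```python
import hashlib

def dedup_store(data: bytes, chunk_size: int = 4):
    """Toy fixed-size-chunk deduplication: keep each unique chunk
    once, plus a recipe of hashes to rebuild the original stream."""
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)   # duplicate chunks cost nothing extra
        recipe.append(h)
    return store, recipe

def rebuild(store, recipe):
    """Reassemble the original data from the chunk store and recipe."""
    return b"".join(store[h] for h in recipe)
```

With backup data, where most of each night's run repeats the previous night's, the store grows far more slowly than the raw data, which is the CAPEX saving in the bullet above.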
– STAR: They made a really smart bet that EVDO would have a much longer theta than the incumbents were willing to assume. Hence, they were the only real innovator of software on a HW platform. If you have ever seen an ST-40 up close, it takes up a whole rack in a CO and draws serious power. This thing has more silicon in it than a beach and runs hot, hot, hot – yet it has software margins. How? When you looked at what STAR sold, it was a software MCP with all the management tools, built on a custom hardware platform.
– APKT: I wrote a blog post on APKT the other day, so that will stand as my placeholder.
Network Trends I Believe:
Processors, memory, storage and bandwidth are elements that you can buy more of for less over time. The companies that create extraordinary value find ways to arbitrage the gap when one or more of these elements become dislocated. Look at RVBD: when drives became cheap enough and large enough, it made economic sense to fix the transaction cost of CIFS for SQL servers by caching the content. The ultimate need is compute, but the problem was an inefficient, poorly designed protocol and a network that was not cheap or capacious enough to fix the protocol problem in the network itself. DDUP is another example.
Akamai, what did they do? Built a better way to call for and process content. They cached content which means they moved the compute function deeper in the network to overcome network bandwidth/capacity limitations with their routing algorithms. What problem did AKAM really address? The answer is compute. They made compute (consumption of web content) easier because the network could not handle the traffic load so they distributed the compute function by moving the content deeper into the network, replicating it and finding more efficient routing paths to call for the content.
Virtualization and all the commodity cloud networking trends are forcing applications and data to be partitioned and replicated across multiple data centers. Data sizes will continue to grow as we move from click streams, to scientific data, to user audio, photo, and video collections. Processing or the compute function of these trends will really run in parallel on thousands of machines. All of this is leading to the enablement of scale out networking (to use the Google term) which is…
- Scalable interconnection bandwidth
- Aggregate bandwidth = # hosts × host NIC capacity
- Economies of scale: price/port stays constant as hosts are added; must leverage commodity merchant silicon
- Anything anywhere: Don’t let the network limit benefits of virtualization
- Network Management: Modular design that avoids actively managing 100s to 1000s of network elements
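The aggregate-bandwidth bullet above is simple arithmetic, but the numbers are worth seeing. A quick worked example, with host counts and NIC speeds chosen by me for illustration (they are not from the original exchange):

```python
def aggregate_bandwidth_gbps(num_hosts: int, nic_gbps: float) -> float:
    """Scale-out target from the bullet above:
    aggregate bandwidth = # hosts x host NIC capacity."""
    return num_hosts * nic_gbps

# Illustrative data-center sizes, assumed for the example:
small = aggregate_bandwidth_gbps(1_000, 1.0)     # 1k hosts, 1 Gb/s NICs
large = aggregate_bandwidth_gbps(10_000, 10.0)   # 10k hosts, 10 Gb/s NICs
print(small)   # 1000.0 Gb/s  = 1 Tb/s
print(large)   # 100000.0 Gb/s = 100 Tb/s
```

A two-order-of-magnitude jump in the fabric target from one generation to the next is exactly why price/port has to stay flat and why merchant silicon wins.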
If you believe in this compute trend, and that optics/bandwidth will follow the curves of processing and memory, then the network is out of sync with the other elements. The need becomes TB and PB metro and access networks that evolve to EB networks in ~10 years (a guess on my part). Virtualization should really be a positive driver for network equipment suppliers because of the replication and accessibility of data for compute. CSCO had the correct thesis, that it is all about the network; the problem is they got too big and lost focus on the network. That created opportunities for companies like RVBD, FFIV, APKT, and DDUP to really hurt them in market silos that CSCO cannot seem to get the A team to focus on.
What Do Networking Products Look like in the Future?
If we assume the network trends discussed above are true, then networking companies need to be focused on applications and compute to be strong value creators, instead of hardware box suppliers. As part of this thesis, here are five key elements:
- Back-end analytics: Virtualization and the content evolution from short form to long form mean that data processing will dominate the future. The network has to enable that function. By the way, that function is called compute. Enable that function to occur and you are a winner. This means networking companies need to sell tools in software form that change the value proposition from box seller to compute enabler. I would be looking to add analytics and processing software tools on the platform.
- Modular software
- User Definable Platform Tools
- Ability to Embed Value Add Apps into the Network Element or Call for Virtualized Apps in the Network
- Security and Policy Tools
It is all about the network, stupid, because it is all about compute.