This post is not intended to be a rant and it is quite plausible that I am just missing where the action is, but 2020 feels to me like 2006/2007 and that is kind of boring. Perhaps I should clarify that statement.
Category Archives: Product Cycle Theory
It is Hard to Grow Large Cap Tech Companies
I am often asked what my opinion is of Cisco. Is it going out of business because of white box? Should they buy Arista? Should they buy NetApp or EMC or Citrix or RedHat? The news today that HP is going to break into two companies tells me that we have reached a point where it is difficult to grow large cap tech companies that have multiple business units. No CEO of a large cap tech company wants to be the AOL/Time-Warner of this market era. A few thoughts on the subject of large cap tech companies. Continue reading
Charles VIII and the Emerging Modern IT Force
I am careful with the use of the term revolution. This discipline comes from my academic studies and too many years studying actual revolutions and revolutionaries. We can debate the impact of technological advances on the field of battle, but these advances would be limited if they were not organized, trained and led with purpose. I understand the impact of the percussion cap and rifled barrel, but it is the adoption and use of the technology that is important — not the invention of the technology in isolation. Continue reading
Quick Equity Thoughts
I received a number of emails, tweets and so forth regarding my last networking posts. I am traveling to SFO this week and that means I will have some time to write a post or two. I was in a meeting this past week with a company that was presenting some technology to Plexxi regarding multi-flow commodity graph theory modeling of networks at scale. They did some really interesting work on the efficiencies of networks at scale and I plan to write up some of their work in a post for all the people who sent me emails about the virtues of Valiant load balancing.
Continue reading
SDN, It’s Free just like a Puppy!
I have written both posts and will publish them at the same time because I believe we are conflating many issues when it comes to networking. For example: SDN, ONF, OpenFlow, Overlays, Underlays, Tunneling, VXLAN, STT, White Box, Merchant Silicon, Controllers, Leaf/Spine, Up, Down, Top, Bottom, DIY, Cloud, Hybrid, NetContainers, Service Chaining, DevOps, NoOps, SomeOps, NFV, Daylight, Floodlight, Spotlight to name a few. Both of these posts are intended to be read back to back. I wrote them in two parts to provide an intentional separation.
Continue reading
It is all about Doctrine (I am talking about Networking and that SDN thing)
Last year, I wrote a long post on doctrine. I was reminded of that post three times this week. The first was from a Plexxi sales team who was telling me about a potential customer who was going to build a traditional switched hierarchical network as a test bed for SDN. When I asked why they were going to do that, they said the customer stated it was the only method (i.e. doctrine) his people had knowledge of and it was just easier to do what they have always done. The second occurrence was in a Twitter dialog with a group of industry colleagues across multiple companies (some competitors!) and one of the participants referenced doctrine as a means for incumbents to lock out competitors from markets. The third instance occurred at the Credit Suisse Next-Generation Datacenter Conference when I was asked what will cause people to build networks differently. Here are my thoughts on SDN, networking, doctrine, OPEX and building better networks.
Continue reading
DEC, OpenVMS, Miles Davis and Being Swayed by the Cool
I joined my first startup in 1989. I was the fourteenth employee. Down the street in the Old Mill was the headquarters of Digital Equipment Corporation (DEC). They had 118,400 employees; it was their peak employment year. In the late 1980s, DEC was dominating the minicomputer market with their proprietary, closed, custom software. DEC was a cool company and the fifth company to register a .com address (dec.com); a decade before Netscape would go public. DEC was one of the first real networking companies outside of the world of SNA. Radia Perlman, inventor of spanning tree, worked at DEC. Most recently her continued influence on networking can be seen in the development of TRILL.
The continued rise in microprocessor capabilities would prove to be an insurmountable challenge for DEC. DEC had built over the years a massive, closed software operating system with a non-extensible control plane that was extended across proprietary hardware. This architecture would be eclipsed by the workstation and client/server evolutions.
To counter these threats, DEC considered opening their software ecosystem by adding extensibility and programmability including a standardized interoperability mechanism (API). The idea of opening the DEC software ecosystem would culminate in 1992 when DEC announced a significant update to their closed, proprietary operating system. The new software release would be called OpenVMS or Open Virtual Memory System. The primary objective of OpenVMS was to allow for many of the different technology directions in the market to become one with the DEC ecosystem. It made a lot of sense. In 1992, workstations were the hot emerging trend of the market, the Internet was only two years removed from ARPANET control. Windows 3.0 was two years old and anyone who used Win 3.0 knows it was a huge improvement over 2.1. The rack server was a decade away. Some companies still chose OS/2 over Windows. The Apple Newton (i.e. iPhone) was a year away from release. Here is a summary of DEC’s OpenVMS release:
—————————————————
OpenVMS is a multi-user, multiprocessing virtual memory-based operating system (OS) designed for use in time sharing, batch processing, real-time (where process priorities can be set higher than OS kernel jobs), and transaction processing. It offers high system availability through clustering, or the ability to distribute the system over multiple physical machines. This allows the system to be “disaster-tolerant”[10] against disasters that may disable individual data-processing facilities. VMS also includes a process priority system that allows for real-time processes to run unhindered, while user processes get temporary priority “boosts” if necessary.[11][12][13]
OpenVMS commercialized many features that are now considered standard requirements for any high-end server operating system. These include:
- Integrated computer networking (originally DECnet and later, TCP/IP)[14]
- Symmetrical, asymmetrical, and NUMA multiprocessing, including clustering[15]
- A distributed file system (Files-11)[16]
- Integrated database features such as RMS[17] and layered databases including Rdb[18]
- Support for multiple computer programming languages[19][20]
- A standardized interoperability mechanism for calls between different programming languages[21]
- An extensible shell command language (DIGITAL Command Language)[22][23]
- Hardware partitioning of multiprocessors[24]
- High level of security[25][26][27][28]
—————————————————
In 1998, six years after announcing OpenVMS, DEC was acquired by Compaq, a personal computer company.
/wrk
Compute Conundrum for Vendors
It is Saturday April 28, 2012 and I have been involved in an ongoing discussion with a number of friends on what was the significance of Urs Hölzle’s presentation at ONS 2012. I do not care that Google has internally built servers and network switches. Google is a unique, single market. The following conclusion to me is incorrect: Google built internal servers and switches and they are using OpenFlow and therefore the companies that build servers and switches will be under enormous pressure from the network DIY movement and most likely will go out of business.
I think the following conclusion is correct: Google built a network that is adaptable to the compute requirements of their business.
Stepping back for a moment, you can find coverage of Google’s presentation here and here. I wish Google was open enough to post their presentations as I think this would be a benefit to the technology community at large, but for a company that wants to “organize the world’s information and make it universally accessible and useful” this apparently does not include their own information. I was in the audience for Urs’s presentation and there are many points that could be discussed, but I want to focus on these, which I am paraphrasing:
- Cost per bit does not naturally decrease with size
- WAN bandwidth should be managed as a resource pool, throw all applications on it and manage the pool of resources based on the requirements of the applications
- Do not manage the network elements, as it takes too long to re-compute a working topology when a network element fails; the process is not deterministic and sometimes succeeds in finding a new network state and sometimes does not
- The advantage of centralized traffic engineering utilizing SDN is that it allows for dynamic flow-based control that is deterministic in behavior. The need to overprovision the network is mitigated.
- Managing the network as a resource pool for the needs of the applications allows for the pre-compute and continuous compute of optimal network topologies
- New server platforms (e.g. Romley) have a lot of capacity and should view the network as a resource pool for their applications
What this Means for Vendors:
When I wrote a few months ago that the network was like the F-4 Phantom II, I made a reference to pushing complexity to the edge of the network, and I think a portion of that post is worth repeating again after the Google presentation at ONS. “We need a new network and we need to start with two design principles. The first is to be guided by the principle of end-to-end arguments in system design and push complexity to the edge and the second is to accept that the network does only one of two actions: connect and disconnect. All the protocols and techniques I listed in the third paragraph (which is about 1 bps of all the stuff out there) were created because as networking people we failed to heed the lessons of the aforementioned principles. I have been posting about this before here and here, and this post is an extension of those thoughts because I am continually surprised that people think that the network is more important than the application and the compute point and that the way to fix the network is to add more stuff to make it work better.
I think this is just crazy talk from people who are buried so deep in the networking pit that they do not realize that they are still using Geocities and they are wondering where everyone has gone. There is a new network rising and instead of connecting a device to all devices and then using 500 tools, protocols and devices to break, shape, compress, balance and route those connections between devices, we are going to have a network that connects the compute elements as needed. We are not going to build a network to do all things; we are going to build a network that facilitates the applications at the compute point, thus pushing complexity to the edge. I think of it as the F-15, not the F-4, and with this new network, we will need fewer consultants to explain how it works.”
Go forward a few months and the compute conundrum is becoming visible to vendors. Like an iceberg on a dark, still night a hundred years ago, the question we can answer in the future is: which vendors avoided the collision? Companies do not go out of business overnight, but technology shifts and missed product cycles hasten their decline.
The shift and conundrum I am writing about is the one laid out in the presentation Google gave at ONS. For the people who manage networks, what they heard from Google is that it is possible, and one can argue advantageous, to manage flows across the network based on the requirements of the compute element. I am not describing a variation of traffic engineering in the network element using protocols, QoS, compression, priority queuing; those are tools that are in the network and not aware of the compute state. What I am describing is what Urs described in his presentation: the advantage of centralized traffic engineering utilizing SDN is dynamic flow-based control that is deterministic in behavior, and the process flows from the compute element. Therein lies the conundrum for many vendors, and an OpenFlow spigot inserted as an aftermarket add-on is not a solution to the conundrum.
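To make the idea concrete, here is a minimal sketch, in Python, of what centralized, flow-based traffic engineering looks like when a controller with a global view places application flows onto a shared pool of WAN paths. This is my own illustration of the concept, not Google’s implementation; the topology, capacities, demands and the greedy placement policy are all hypothetical.

```python
# Minimal sketch of centralized, flow-based traffic engineering.
# Topology, capacities and demands are hypothetical and for illustration only.
from collections import defaultdict

# Candidate paths between two sites; each path is a tuple of link names.
PATHS = [("a-b",), ("a-c", "c-b")]
CAPACITY = {"a-b": 100.0, "a-c": 80.0, "c-b": 80.0}  # Gb/s, hypothetical


def place_flows(flows):
    """Assign each (app, demand_gbps) flow to the candidate path with the most
    spare capacity. The controller has a global view of demand, so the result
    is deterministic for a given input."""
    used = defaultdict(float)
    placement = {}

    def headroom(path):
        return min(CAPACITY[link] - used[link] for link in path)

    for app, demand in sorted(flows, key=lambda f: -f[1]):  # largest flows first
        best = max(PATHS, key=headroom)
        if headroom(best) < demand:
            placement[app] = None  # admission control instead of congestion
            continue
        for link in best:
            used[link] += demand
        placement[app] = best
    return placement, dict(used)


if __name__ == "__main__":
    flows = [("backup", 60.0), ("index-copy", 50.0), ("web", 20.0)]
    print(place_flows(flows))
```

The toy policy is beside the point; what matters is that path selection is computed once, from a global view of application demand, instead of being re-derived hop by hop inside each network element when something fails.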
The evolution of the network will result in two vendor groups. Group 1, which will be the larger group, has aging, legacy control planes of its own development that fail to actively participate in the dynamic centralized traffic engineering function. Group 1 will slowly be relegated to a passive position in terms of route calculation in the network. Group 2 will be a smaller group, but in the valuable position of participating in the calculation of the optimal network configuration because this group has knowledge of the compute element. As always, my assumptions and hypotheses could be incorrect and it is possible that I have no idea what I am writing about.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **
Drive the Network Smarter – Not Harder
I spent the better part of a week on the road visiting clients and attending the OFC/NFOEC show in Los Angeles. The exhibit show floor does not really provide much interest to me, as show floors are all a letdown when I think back on InterOp+Network World 1994 (Vegas) and Supercomm 1997 (New Orleans), but I do find the panel discussions to have a high chance of being interesting. I missed the panel on Lighting Up the Data Center on Tuesday, but I did attend the Role of the Network in the Age of Social Media and I found the presentations thought provoking.
Not Big Enough, Not Cheap Enough, More Means More:
One panelist who had spent nearly his entire career working for service providers presented a loud and strong message to the audience that optical innovation is not going fast enough, he needs more and more and more and it needs to be cheap, cheap and cheap. He then went on to say that networking (i.e. optical) innovation is not thinking big enough, not cheap enough and more means more. Anyone who has had a conversation with me on this subject should have no question as to where I stand on this topic. I think it is all nonsense and the equipment companies that want to build more for less are crazy. That was the point of this post. In no real order here are my thoughts on the subject of not big enough, not cheap enough and more is more:
1. More is not more, but cheaper is definitely cheaper. I understand why service providers (SPs) want to offload their R&D requirements on equipment providers. SPs have billions to spend on their network and it is easier to use this capital as leverage to get what they want from equipment companies. As long as this trend continues, innovation will be dormant and value creation will be nascent. That was the point of this post.
2. 100M Ethernet was introduced around 1996. 1G Ethernet was ratified in June 1998 with shipping systems in the 1999 timeframe. 10G Ethernet was ratified in June 2002. 40/100G Ethernet was ratified in June 2010. This past week Intel formally shipped the Romley platform (with the Xeon E5-2600 processor) with 10G Ethernet LAN on motherboard (LOM). Ten years after the 10G standard was ratified it is shipping on a motherboard. Hmm…let us extrapolate the trend here: the 10G server upgrade cycle is kicking off 14 years after the 1G cycle started. Anyone want to guess when that 100G server cycle is going to kick off? A back-of-the-envelope extrapolation is sketched at the end of this section.
3. The last thirty-three years of building networks to the OSI Reference Model is at an end. We have taken the model as far as we can with Moore’s Law, but the time has come to speak of Moore’s Law Exhaustion.
4. It is time to drive the network smarter – not harder. That is what I am working on. Real innovation occurs when people dare to step outside the rigid construct of legacy doctrines that have been enforced by the past. If you are working hard on innovation with the intention of repeating the past, but providing more for less, I think you missed the point. “The success of a technology company is really about product cycles. Technology companies become stuck in loops because product cycles become affected by success. The more success a technology company has in winning customers, the more these customers begin to dominate product cycles. As product cycles become anchored by the customer base, the plan of record (POR) suffers from feature creep and the ability of the company to invest in products for new customers and new markets declines. Consider the following:
New Technology = Velocity
Developed Technology = Spent Capital, Doctrine, Incrementalism and Creativity Fail”
That quote from a prior post was the point I was making in my F-4 Phantom II post. More on the effect of doctrine in technology companies here. My summation to the more is more and cheaper argument is that I am not surprised by the argument; it is expected from those conditioned over many years to think in a rigid construct. That is the effect of doctrine, and doctrine must be exterminated from the organization or we are doomed to repeating loops.
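As a back-of-the-envelope illustration of the extrapolation question in point 2 above, the arithmetic looks like this. The years are the approximate ones cited earlier, and the straight-line extrapolation is obviously naive:

```python
# Naive extrapolation of Ethernet server upgrade cycles from the approximate
# dates cited above. Illustrative arithmetic only.

gig_e_server_cycle = 1999     # 1G Ethernet servers, ~a year after ratification
ten_gig_server_cycle = 2013   # per the "14 years after the 1G cycle" figure above

cycle_gap = ten_gig_server_cycle - gig_e_server_cycle
print(f"Gap between the 1G and 10G server cycles: {cycle_gap} years")
print(f"If that gap simply repeats, a 100G server cycle lands around "
      f"{ten_gig_server_cycle + cycle_gap}")
```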
DIY Content:
I was very interested to listen to Bikash Koley’s answer to a question about content and global caching. He referenced the effect of Google’s High Speed Global Caching network in Africa. This network built by Google is not without controversy. Here is a link to a presentation in 2008 about the formative roots of this network. My point is I increasingly see service providers and content owners taking a DIY approach to content, and these providers do not have to be the size of Google.
Cloud Providers Getting Ready for Big Data:
I was visiting a cloud provider on the west coast with a colleague and I left with a lot of notes, but as usual what I hear when speaking with cloud providers and what I read about cloud providers are at odds with each other. This cloud provider had a growing hosting business with clients in India, South America as well as the US. All of their compute elements are physically located in the US. No one is worried about big data and the external network (this thing called the internet) is running just fine for their video hosting customers as well as avatar hosted gaming clients. They spend a lot of their time on back end optimization of their compute and storage networks. That is the 77% problem versus the 23% problem I posted about here.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **
Notebook 03.06.12: RIMM, Product Cycles and Content
I am off to OFC/NFOEC tomorrow and as Andrew Schmitt of Infonetics tweeted (@aschmitt) earlier, I am looking forward to the “Drink every time someone uses the word “bandwidth” and “explosion” in the same sentence. #OFCNFOEC drinking game.” This post is a collection of things I am thinking about, therefore I am writing them down in my notebook to validate or dismiss.
RIMM: With the news last week of more layoffs within the WebOS group at HPQ, it shows the missed opportunity that RIMM had with PALM. If RIMM had acquired PALM instead of QNX, they would have had a legit, complete OS to put across their devices. The mobile device market would have four OS contenders – not three. Unfortunately, HPQ acquired PALM and has pretty much killed WebOS. I am not saying that acquiring PALM would have saved RIMM, but they would have had a chance. I have spent a fair amount of time posting about RIMM; I read the other day that RIMM was up that day on takeover rumors. I still wonder why someone would want to own the RIMM business. Seems crazy to me and as for the new CEO, I agree with him that the only way out is hard work, and I hope they made some correct decisions.
Service Providers: I read an interesting sell side note from Deutsche Bank today in which the analyst (Brian Modoff) wrote that he had spoken to several US carriers who were disappointed in one of their large infrastructure suppliers because this company was not designing products how they wanted the products designed. Well…that pretty much sums up my posts here and here from the last few weeks. Nothing kills creativity faster in the organization than becoming the outsourced engineering arm of your customer. I want to solve my customers’ challenges, I just want to do it on terms that are best for my business – not their business.
Content: I have posted three times on content. For some reason, I glanced at a TV today in a hotel lobby that was tuned to CNBC. There was some bizarre conversation about Apple and mobile devices and Dow 13k and it triggered a thought. One of the trends I have been tracking is what I call DIY content. That was the point I was making about Verizon not buying NFLX, but hiring the team from NFLX. My view is that it is increasingly easier for content owners to distribute their content and this will increasingly pressure content distributors and content aggregators. The middle ground between the consumer and the content creator will NOT be a good place to be unless you can own the distribution ecosystem (i.e. devices) like AAPL.
Off to LA tomorrow for OFC and looking forward to meeting Pollyanna and hearing about how the internet is about to break under the weight of all those videos.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **
PE-Buyout, Technology Companies and Vertical Integration
When I read articles in the financial press that contain quotes from market participants about the market, I often wonder whose agenda is being served. I read this Bloomberg article that contains some quotes from Charlie Giancarlo at buyout firm Silver Lake. The title is “Silver Lake Sees Fresh Round of Telecommunications Takeovers,” and it begins by saying that PE-buyout firms are going to “zero in on makers of telecommunications equipment and mobile devices this year.” I have no idea if this is what Charlie meant to say, but to any PE firm buying ALU, NSN, RIMM, TLAB, etc., my comment is good luck. I have immense respect for Charlie and what he did at Cisco, but I question if his list of buyout candidates includes aging telecom equipment makers and fading handset makers.
I have written before about a PE-buyout of NSN here and RIMM here and I have also written about the fallacy of valuations being considered cheap. There is a difference between unrealized or hidden value and companies with cheap valuations. In the Bloomberg article there is a quote from an investment advisor saying “The big story to me is the incredible undervaluation of Alcatel, this stock is a screaming buy. Private equity should be all over this.” Everyone is entitled to their opinion and for all I know bankers could be putting the final touches on a deal for ALU, but I think we need to be very careful when we discuss technology buyouts; they are not the same as taking RJR/Nabisco, Albertson’s, Hertz and even Alltel private. Technology companies are different from traditional consumer and industrial markets because of the innovation/product cycle nature of their markets. The Oreo has done well for Nabisco for one hundred years come March 6, 2012, selling over 491 billion cookies according to Wikipedia. That is a long and successful product cycle with no comparison in technology. I inserted two charts, a 20 year and a 10 year of ALU, for review with no comment.
If I were to get a call tomorrow from a PE firm and they were to ask me about buying out any of the public technology companies that have cheap or compelling valuations, I would think about their question in four areas, and none of them has to do with the balance sheet and current market valuations:
1. What is the state of the present day market construct: I just posted on the service provider market last week. Selling infrastructure equipment to global service providers is hard work and Huawei does not make it easier. Why anyone would want to spend billions to own a business that is difficult and not getting easier is beyond my reasoning skills.
2. Can the legacy company fail fast enough to be successful again? Legacy companies often lack creativity, and this is manifested in product cycles that turn negative. See my December 2011 post for the details. I do think that large public companies can innovate, but to see the market beyond their past/present view they need to separate out a team to prevent innovation from being distracted by the present and blinded by the past. IBM did that with the PC and Apple did the same with the iPhone and iPad.
3. Product Cycle Management is the True Measure of Success: I have spent a lot of time posting about product cycles on my blog. They are all listed here. I would ask myself with each buyout candidate: can the product cycle be fixed, can it become a weapon, can the company innovate to take share and differentiate in the market? Just because a company is cheap or has a large patent portfolio does not mean that the team and talent is present to fix the product cycle. Technology companies are very much about the talent level of the team. Hertz may have secured the best physical locations at each airport and that is hard to compete with, but in tech others can find ways to attack your market share and location is not a barrier to entry.
4. How is the Market Changing: Another subject that I have spent considerable time on is the changing nature of networks and the end-user markets. All the posts starting in May 2011 are here. The question I would need to answer is can these legacy companies be players in the changing structure of compute-storage-networking market construct?
I know it is possible to take a company private (e.g. Avaya) or use a private company as a platform (e.g. Genband) to put other companies around it to build solution mass and market share. It is not easy, but it can be done.
There is another market ebb and flow for technology companies that is playing out and that is vertical integration and the “world is flat” crap. In the 1980s-1990s companies such as IBM were all about vertical integration. The competitors who did well against IBM were singularly focused solution companies such as MSFT, Compaq, DELL, CSCO, etc. Post 2001 the mantra for technology companies was to divest non-core assets, outsource businesses such as in-house silicon, and move functions such as their supply chain and manufacturing to APAC. If you look at what is going on today, the large public technology companies are in the midst of assembling a vertically integrated company from the assets they spent a good 15 years outsourcing.
Take Cisco for an example. Most people think of Cisco as a networking company, but they sell blade servers (i.e. compute) and they have been acquiring optics and component companies as well. IBM, which sold its networking business to Cisco in 1999, is now putting it back together. HPQ purchased 3Com (i.e. networking) as well as 3Par (i.e. storage) to go with their compute (i.e. Compaq) business. DELL, which started as a PC business and got into servers, now owns EqualLogic (i.e. storage) and recently purchased Force10 (i.e. networking).
My point is these are large companies all trying to cover the compute-storage-networking market and Cisco is no longer a singularly focused networking company. We are on the verge of a new 10-15 year cycle in the network as I described here, here and here. Even Verizon is telling you the network is going to change. My question is why would any of the companies speculated about as buyout targets be the platform of choice to take share in the new emerging compute-storage-network market construct?
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **
Sometimes Winning Requires Failure
This post is the product of a conversation I had with a colleague at breakfast the other day while on the road making customer calls. The conversation we were having was about technology companies and specifically leadership teams within technology companies. The genesis of this post is the thesis that companies get trapped in loops. I have posted on the subject of creativity theta before and I have not been dissuaded from my thesis that many people in tech are repeating the same processes, the same week to week actions they have been doing for years. I call it creativity fail; most CxOs call it hiring someone with experience.
I had this creativity fail loop concept perfectly framed for me by a former colleague who works at a late stage startup that recently hired a new VP of Marketing. This former colleague told me the new VP was busy hiring their old team including a director to oversee trade shows. Trade shows…wow…back to the 90s on that one. Can anyone tell me who was president the year that the last customer with buying power attended a technology trade show?
This is a microcosm of the technology industry and what I term creativity fail. People are building the same products they built in previous careers, for the same customers they had in previous careers, and the result is that the process becomes stale. Unfortunately, many people are expecting the same result and are perplexed why they cannot repeat the same level of success from twenty years ago using the same strategy and tactics.
The success of a technology company is really about product cycles. Technology companies become stuck in loops because product cycles become affected by success. The more success a technology company has in winning customers, the more these customers begin to dominate product cycles. As product cycles become anchored by the customer base, the plan of record (POR) suffers from feature creep and the ability of the company to invest in products for new customers and new markets declines. Consider the following:
New Technology = Velocity
Developed Technology = Spent Capital, Doctrine, Incrementalism and Creativity Fail
With the passing of Steve Jobs and the success that Apple has achieved over the past decade, there are many books and blogs and papers to read on why Apple succeeded. I would say that a significant reason for Apple’s success post the return of Steve Jobs (1997) was that it had failed and it was willing to acknowledge failures and abandon technology anchors that did not provide velocity in business. Steve Jobs was not born with this ability, he had to learn it. He learned it because Lisa failed, Newton failed and the PowerPC loyalty was a failure. To win in the future an existing technology company must fail. Apple’s failures allowed them to break from the past and build anew.
Considering my pseudo-equations from above: having worked in mature technology companies, I have learned that a difficult trap for leaders is breaking from the present, which is anchored in the past. Executives, especially in public companies, look at their customer base and this base equals a large percentage of present and future revenues. To support this base a high percentage of R&D dollars (i.e. incrementalism) are allocated to supporting the base. Armies of people and processes (i.e. doctrine) are created to support existing markets. For many people within a company this is a comfortable place to be. Their daily processes (i.e. reports, procedures and actions) and week to week employment requirements are well defined and understood. That is the path to incrementalism. The trap becomes self-fulfilling because corporate leaders find it hard to break from the past. It is comfortable for corporate leaders to build to their base. It is immensely difficult for corporate leaders to have the courage to say no to the past created from developed technology and build towards future markets.
I have some additional ideas for what I think are interesting entries that I will post over the weekend or next week.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. **
Thoughts on Juniper Networks
If you read research reports on Juniper (JNPR) and you did not have an extensive background in networking technologies, it must be difficult to figure out all the different perspectives. Here is how I would think about JNPR, but I am writing this from the notes I took reading through the last ~6 months of reports.
1. Prior Posts: I think most of my past posts on JNPR can be found here.
2. Product Cycles: Long time readers know that I believe technology markets are cyclical and that positive product cycles are fundamental to positive equity performance. This not only includes revenue growth through market share gains, but also sustaining and expanding gross margins. The fastest way to kill a tech stock is to shrink gross margins. With that in mind there are numerous reports and a recent upgrade based on JNPR entering into a positive product cycle for 2012 with (i) T4000, (ii) PTX and (iii) QFabric. Last month John Chambers said that “Juniper is the most vulnerable I’ve ever seen.” Which one is it: positive product cycle, vulnerable company or somewhere in the middle?
3. PTX: Conceptually PTX is an interesting product and with all the OTN offload, router bypass, hollow core, transport router talk over the past couple of years, I can see why JNPR built the platform. On the other hand, the market for the additional capabilities is orthogonal to JNPR’s core business gross margin (GM) level. JNPR and CSCO are companies that enjoy a premium gross margin advantage. Traditional transport companies (e.g. CIEN, TLAB, ALU, ERIC, ADTN, INFN) regularly report GMs about 2000-2500 bps lower than CSCO and JNPR. What I do not know, but look forward to learning, is how the PTX will be priced and purchased in the market. Can JNPR maintain their traditional GM level in the mid 60s for the PTX platform? Will service providers grind them down? That is the conundrum I am curious to learn about (a rough blended-margin sketch follows at the end of this post). A company like CIEN would enjoy adding some packet capabilities, but not a full router, to their product and receiving 500-1000 bps more in GMs. I am not sure JNPR would like selling their PTX for 1000 bps lower than their traditional GM level. I have no idea which way it will go, but I would say it is easier for the transport companies to add some packet and live in the market at the 40-55 GM level than it would be for the router companies to add some optical transport and live in a market of 50-55 GMs.
4. QFabric: The CSCO team is quick to point out three failings with QFabric: (i) it is proprietary, (ii) failure domains are too large and (iii) complex software. Of these I am willing to believe one and a half. The intended nature of the fabric design and evolution from merchant silicon to an ASIC is to run cells between the ToR and the fabric. CSCO says this is proprietary, yet they are running TRILL for FabricPath and it looks like you can only connect their ToR to the Nexus 7k and you need a software upgrade. What is the difference? TRILL is quasi standards based and JNPR cells are proprietary? This is splitting hairs and is really meaningless. Both vendors are locking out other ToR suppliers from connecting to their fabric architectures. As for the failure domain being too large, I am not buying this. As for the software being complex, I think that is entirely possible. Will this solution be a big seller? I am not so sure – but it is a new approach. I think JNPR has some deliverable issues to overcome and the solution becomes more complete in 2012 with version 2.0, therefore I think the assumption of a large revenue ramp in 2012 is overly optimistic.
5. Trading Range and Chart: In September I told a PM in NYC to buy JNPR. The stock was about $18 and had yet to bottom. He asked me why, knowing I had been negative on JNPR. I told him it was just a trading call at this low level, as a bunch of tech trading desks would get positive on it and we would get goofy notes from analysts who are positive on JNPR saying the selling had gone too far. That is exactly what happened. I updated my JNPR weekly chart that I posted a few months ago. I circled the high and low. Before looking at the chart, do you know where JNPR is trading on a chart over 123 months? The answer is pretty much the middle. The median price point over that time is $24.58 and the stock is $24.92 today. If you look at the chart, you will see it spends time around the $24.56 level, which is the 50% Fibonacci retracement level from the 2002 low to the 2011 high.
6. Upgrades and Downgrades: The following chart is just a list of ratings changes I copied off Dow Jones.
Date | Research Firm | Action | From | To
31-Oct-11 | RBC Capital Mkts | Upgrade | Sector Perform | Outperform
14-Oct-11 | ISI Group | Initiated | Buy |
27-Jul-11 | Oppenheimer | Downgrade | Outperform | Perform
27-Jul-11 | MKM Partners | Downgrade | Buy | Neutral
14-Jun-11 | RBC Capital Mkts | Downgrade | Outperform | Sector Perform
20-Apr-11 | Ticonderoga | Downgrade | Buy | Neutral
21-Jan-11 | MKM Partners | Upgrade | Neutral | Buy
18-Jan-11 | MKM Partners | Initiated | Neutral |
20-Oct-10 | Oppenheimer | Upgrade | Perform | Outperform
15-Jul-10 | Oppenheimer | Downgrade | Outperform | Perform
5-Apr-10 | Wells Fargo | Upgrade | Market Perform | Outperform
7-Dec-09 | Auriga U.S.A | Downgrade | Hold | Sell
13-Nov-09 | Oppenheimer | Upgrade | Perform | Outperform
2-Nov-09 | Stifel Nicolaus | Upgrade | Hold | Buy
23-Oct-09 | Piper Jaffray | Upgrade | Underweight | Neutral
13-Oct-09 | Jefferies & Co | Upgrade | Hold | Buy
9-Oct-09 | Auriga U.S.A | Initiated | Hold |
23-Sep-09 | Robert W. Baird | Downgrade | Outperform | Neutral
31-Jul-09 | BMO Capital Markets | Initiated | Market Perform |
21-Jul-09 | Soleil | Initiated | Hold |
15-Jul-09 | Citigroup | Initiated | Buy |
15-Jul-09 | BWS Financial | Upgrade | Sell | Hold
2-Jul-09 | Deutsche Securities | Initiated | Hold |
2-Jun-09 | UBS | Downgrade | Buy | Neutral
1-Jun-09 | AmTech Research | Downgrade | Buy | Hold
21-May-09 | Barclays Capital | Upgrade | Equal Weight | Overweight
20-Apr-09 | Oppenheimer | Downgrade | Outperform | Perform
17-Apr-09 | Stifel Nicolaus | Downgrade | Buy | Hold
8-Apr-09 | Brean Murray | Initiated | Buy |
8-Apr-09 | AmTech Research | Upgrade | Neutral | Buy
30-Jan-09 | Piper Jaffray | Downgrade | Buy | Neutral
With all that being written, it is possible I am mistaken and got it all wrong. 🙂
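As promised in the PTX discussion above, here is the blended gross margin arithmetic behind that conundrum. All figures are hypothetical, chosen only to line up with the bps ranges discussed earlier; this is a sketch, not a model of JNPR’s actual financials.

```python
# Hypothetical blended gross margin arithmetic for a premium-GM router vendor
# adding a lower-GM, transport-class product line. Revenue and margin figures
# are illustrative only.

core_revenue, core_gm = 1000.0, 0.65   # $M and mid-60s GM for the core business
ptx_revenue, ptx_gm = 150.0, 0.55      # $M for a product priced ~1000 bps lower

blended_gm = (core_revenue * core_gm + ptx_revenue * ptx_gm) / (core_revenue + ptx_revenue)
dilution_bps = (core_gm - blended_gm) * 10_000

print(f"Blended GM: {blended_gm:.1%}, about {dilution_bps:.0f} bps of dilution")
```

Run the same arithmetic from the other direction and the asymmetry is clear: a transport vendor reporting GMs in the 40s picks up accretion by adding some packet capability, while the premium-GM router vendor only sees dilution.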
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **
How Arista Networks affects Venture Capitalists
Back in May I wrote this post on the deficient state of venture capital funding for large scale networking projects. At the time, I was not thinking about green tech and companies like Solyndra. I was focused on the network, how it is changing and networking technologies – not social networking or web marketing firms like Groupon; my prior GRPN comments are in this post.
At the time I was writing my thoughts on how the network is changing, what I was not thinking about was how Arista Networks is affecting the ecosystem in Silicon Valley. If you are looking for an analysis of Arista Networks, this is not it. I am thinking about Arista, but this is not intended as a technology or solution overview; maybe I will do that another day.
To understand what I am thinking about, we need to go back to Nuova Systems. Cisco acquired the company in April 2008. From this company emerged the Nexus 5000 class set of products. What was unique about the company was the funding and the impact it has had on others in the Cisco ecosystem. Think about where Cisco stood in 2008. For FY2008, Cisco sold $13.46B in switching products. (Note that Cisco’s fiscal year is August-July). FY2008 was the peak year in Cisco switching revenue before the onset of the global credit crisis in September 2008. I have detailed the 2009 catch-up spend event here and here, but consider the world before the credit crisis induced spending collapse and subsequent catch-up phase. Cisco was the dominant market share owner in switching. There was no one close and they had spent their acquisition capital from the start on this market segment. This is just a partial list of Cisco’s switching centric deals, starting with the most important, Crescendo in 1993. This list alone totals $6.2B:
- 24-Sep-93 Crescendo Communications LAN switching $94,500,000
- 24-Oct-94 Kalpana LAN switching $204,000,000
- 6-Nov-95 Grand Junction Networks LAN switching $220,000,000
- 6-Aug-96 Nashoba Networks LAN switching $100,000,000
- 3-Sep-96 Granite Systems LAN switching $220,000,000
- 31-Aug-99 IBM Networking HW Div Computer networking $2,000,000,000
- 11-Apr-00 PentaCom LAN switching $118,000,000
- 20-Aug-02 Andiamo Systems Datacenter Switching $2,500,000,000
- 29-Jun-04 Actona Technologies Data storage $82,000,000
- 8-Apr-08 Nuova Systems, Inc. Datacenter Switching $678,000,000
Total $6,216,500,000
Nuova was started in mid-2005. It was self-funded by the Cisco executives who left to form the company. This list included: Mario Mazzola, Luca Cafiero, Prem Jain, Soni Jiandani as well as outsiders Ed Bugnion and Tom Lyon. In August 2006, Cisco increased their ownership to 80%. In April 2007, Nuova had ~200 employees and Cisco agreed to cap the potential buyout of the remaining 20% at $678M. A year later, in April 2008, Cisco closed the deal to acquire the remaining 20% for $678M. No venture capital firms were involved to my knowledge. Jayshree Ullal left Cisco in May 2008.
Back to the list of Cisco acquisitions from the 1990s; Granite Systems was founded by Andy Bechtolsheim and David Cheriton. In 2008, Andy Bechtolsheim became Chairman and CDO at Arista Networks, which he had founded a few years earlier with David Cheriton. Jayshree became CEO in October 2008.
In October 2008, Sequoia Capital produced a presentation for their portfolio companies entitled “RIP Good Times.” It should be noted that Sequoia was the first and I think only VC to invest in Cisco before the IPO in 1990. In the month of October 2008 these events are occurring:
1. Cisco is having their best switching quarter in their history with revenues closing at $3.54B ending the fiscal year at $10.4B.
2. Sequoia (original Cisco VC) is telling their portfolio companies to cut burn rates, conserve cash, reduce expectations and go into the bunker.
3. Arista names Jayshree Ullal CEO. They are just a few months away from their first sale, focused on low latency, high performance compute clusters against InfiniBand.
4. Juniper is a few months into the development phase for QFabric.
Did I mention that like Nuova, Arista does not have any venture capital firms as investors? At least to my knowledge there are no VC investors. My prediction is we will look back on October 2008 as a pivotal month in the history of networking. The few venture capitalists focused on networking dug in for a long winter’s nap. Their thesis was why be invested in a market in which Cisco is the dominant company? It is also a market that requires a high level of capital to be successful and is perceived to be a mature market – all of which describes a poor set of conditions for entrepreneurial companies. It is much easier to invest in mobile apps and web properties than fight incumbents that have scale and capital advantages.
A small group found that the darkest of times was not the time to sleep. It was the time to plot a revolution and start a new company. While the powerful were distracted – others saw opportunity. That is what was happening in October 2008. As I have written in the past, we are on the verge of a significant shift in the structure of the network. It is not clear who the winners will be. The manner in which Arista is affecting the venture capital community is in terms of lost opportunity. Arista is aggressively marketing the company to the investment community and I suspect they will have a large and successful IPO in 2012 – an IPO in which no VCs will participate and no LPs will receive shares. I have seen only one other early stage startup in the datacenter space. There are two others that I know of marketing an A round, but the game is afoot and the clock is ticking. This is not some OpenFlow appliance game; I am referring to a big league game for real estate in the datacenter – the skyscrapers of the future. If this were 1992 or 1998 there would be 6-10 players in the space – but it is 2011 and it is hard to find players for the big game.
I wrote this in early 2006 and without being around any of the events in October 2008, I would hypothesize that this is what was happening. “The victors in any revolution are the companies that are led by great leaders who understand the cycle of change and strategize not for victory on the day of revolution, but for victory in [end]. When Microsoft becomes an old regime and the conditions required to support a new revolution materialize, revolutionaries will come forth. Even today there are revolutionaries who operate in fear of Microsoft. Similar to the formative years that shaped a young Bill Gates; the new revolutionaries might be students at Harvard, MIT, and Stanford or located in far away places such as Moscow, India, and China. They might even be employees of Microsoft. At night they clandestinely plot the downfall of Microsoft in darkened basements of their homes, fearing that Microsoft will not discover their treacherous plots for they know it will cost them their jobs. This is where revolutions begin. They begin with the revolutionary who is motivated by the quest for change, the desire for power, and the belief in a higher ideal, a better plane of existence that in the end will provide some form of social justice and personal satisfaction. Revolutions start when people create a solution to a complex problem. The people working inside Google are clearly revolutionaries who are planning to attack Microsoft from outside the computer operating system. Google’s strategy looks very much like the new generation of service providers who want to offer a service without owning a network. Google is looking to create applications that reside in the web and do not require a specific operating system. If Google can find a way to bypass Microsoft’s core advantage in their dominance of the personal computer operating system and productivity application markets, it will mean that Microsoft will have a major revolution to endure. The United States is born from revolution. We have adopted an economic structure that is not perfect, but it fosters change. The United States changes its government every two to four years. We have a predicable timeline of points when change might occur, called elections. We have an economic system that rewards success and punishes failure. We have a private equity mechanism in place that encourages risk, supports revolutions, and rewards success. “…in America you can drop out of college and start Microsoft, Oracle or Dell. You can get a “C” grade at the Yale School of Management and still launch FedEx. You can dropkick your Ph.D. pursuit and start Google,” [see Rich Karlgaard, Why we Need Startups, Forbes, July 21, 2003]. The result is a nation-state that is far from perfect, but has an infrastructure in place that supports change and revolutions. The roots of this system can be traced back to the Sons of Liberty in Boston during the years leading up to the American Revolution. The people that confronted the government in London found their financial backing from private funds within the American colonies. Privately raised funds were not being used to start a new biotech startup on the banks of the Charles River in 1775, but they were being used to fund the active opposition to General Gage, his military forces in Boston, and the English Monarchy of King George the III. This is how wealth, revolution, and ideas are linked.”
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **
Looking back to go forward: what was I thinking in 2005?
I have been spending a lot of time thinking about the future of networking and how to build a solution and business in the future, call it a 12-48 month window from today – not in the present day, as I am fully convinced that there is a high rate of external change in networking. I am engaged in the process of thinking about technology decision drivers, technology adoption, what is or will be important to people and anticipating the actions of competitors. By far the best tool for that process is the O-O-D-A loop developed by John Boyd, but that is best left to a separate post for another day. After a week of intellectual exercises, starts, failures and restarts, I found myself on a Saturday morning, espresso in hand, looking back on documents and presentations I did over the last ten years.
I surmise that most people think this exercise is a waste of time; but I have posted on thought anchors and biases before. I also believe we are all susceptible to diminished breadth in our creativity as we get older. Diminished breadth in our creativity is the root cause of why history repeats itself and another reason why when we change companies we tend to take content and processes from our prior company and port them to our new company. This is especially true in the technology industry. We recycle people; hence we recycle ideas, content and value propositions from what worked before. Why be creative when it is easier to cut and paste? As a casual observation it seems to me that most people working in tech have a theta calculation as to their creativity. I believe a strategy to guard against creativity decay is to look back on the past and critique the work. That is how I spent part of my Saturday. I was looking back, to go forward.
I found a presentation that I had produced and presented in January 2005 – almost seven years ago. I started going through the presentation and I found many elements of the presentation are relevant today and to my surprise represent elements of projects I am working on or thinking about. I have attached a link to the presentation below (JAN 2005 for SIWDT) and it is mostly unedited, but I did remove one slide specific to my employer at the time and removed a reference to the same employer on another slide. Excluding those two edits, the presentation is intact and readers are welcome to laugh at my presentation with perfect hindsight. Here are my thoughts and you are welcome to review the presentation before or after reading my thoughts.
– Slide 1: Everything on this slide was accurate. Our exit from the rampant waste of the client/server era continues to accelerate as we enter a new era, which I call the datacenter era for IT and which I posted about last week.
– Slide 2: Many elements are still true, but I have seen an acceleration of enterprises wanting to bring external elements of their outsourced network in-house.
– Slide 3-4: Still true today.
– Slide 5-6: Everything proves true today and I would argue that in-sourcing is accelerating.
– Slides 7-8: All still true.
– Slide 9: This is the key slide. Everything I am thinking about today to create in the network is encapsulated on this slide.
– Slide 10: In 2005 I called the cloud, grid computing.
– Slide 12: At every marketing meeting I attend I think of Dr. G. Clotaire Rapaille.
– Slide 13-14: Just directed points for breakouts…
I am not a huge Jack Welch fan, but I do appreciate the honesty and succinctness of this quote, which I used on the cover of the presentation: “When the rate of change outside exceeds the rate of change inside, the end is in sight.” – Jack Welch
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **
Framing Exercise: Web 3.0 and the Network
I firmly believe that we are at the end of the Web 2.0 era and we are now in the early period of what I will call the Web 3.0 era. My definition of the Web 3.0 era is going to be different from what web centric people will call Web 3.0. I look at the network from the perspective of (1) how the network is built, (2) what are the driving forces in the network and (3) what are the behavioral changes that will occur within the user community that deploys networks.
I have had a thesis on the evolution of the network that I have been building for some time. It starts with the collapse of Web 1.0 in 2001. Web 1.0 was the static content era based on HTML in which the network was not very efficient. The inability of the network to support dynamic content is why companies such as AKAM were very valuable. AKAM provided a fix to the network inefficiency problem and made the user experience better. It was also an era in which a lot of CAPEX was spent on building long haul (LH) networks. The problem was that not a lot of CAPEX was invested in the access portion of the network, and bandwidth swaps could only mask the lack of revenue bearing traffic from the access portion of the network for so long.
Frame 1: Web 2.0
Web 2.0 started to take root in 2003. The foundations for the network as a platform took hold and for the next seven years the network changed, culminating with virtualization building a firm foundation, but there were many important network milestones along the way. Social networking sites were started: Friendster.com in 2002, Linkedin.com in 2002, Myspace.com in 2003, and Facebook.com in 2004. State aware applications such as Gmail (live on 04.01.2004) emerged; user generated content sites like YouTube.com arrived in November 2005 and AMZN launched their S3 storage service in March 2006. Web 2.0 destroyed a large amount of equity value in the mobile device space because the legacy mobile device companies like RIMM, MOT and NOK were slow to respond with devices that supported the Web 2.0 world. AAPL was really the first company to build a mobile computing platform for Web 2.0 and they were at the forefront of the network evolution that Web 2.0 was driving with a property called iTunes that leveraged the network as a platform. When iTunes and Web 2.0 were combined with a mobile computing platform called the iPhone in January 2007, magic happened.
I have included a series of 10-year equity charts for HPQ, INTC, AKAM, CSCO, NOK, RIMM and JNPR. I have purposely excluded AAPL, AMZN, NFLX and DELL. The charts I chose reflect the market collapse in 2008 and rebound in 2009-2010. For discussion purposes I am going to ignore the market melt down induced by the global credit crisis and collapse of LEH in 2008. I am already on record describing what I termed as the catch-up spend post the LEH collapse here and here. As investors and technologists we should not perceive the catch-up spend as a new market cycle or validation of a technology or market strategy. The catch-up spend was a reaction to a market spending contraction – not a new world. I think the catch-up spend might have fooled a few companies into thinking it was the rebirth of their traditional market, when the catch-up spend really had nothing to do with Web 2.0. Let me illustrate this point with a few long term equity charts. Long time readers will know I like weekly charts on a ~10 year time span because I think they filter out the daily noise and help understand the direction of companies when overlaid with important events. On each chart I have inserted an orange scribble line which is my attempt to insert a proxy trend line assuming that the macro credit crisis and catch-up spend events had not occurred and the consumption of technology was allowed to evolve independent of global events. I am trying to separate technology trends (i.e. product trends) and technology market evolution (i.e. technology adoption cycles) from outside influences, which is the equity component of my Web 2.0 to Web 3.0 thesis.
HPQ: This is an interesting chart when you consider that it is framed by $25B for Compaq in September 2001 and the iPad shipping in January 2010. Assume for a moment that the credit crisis did not happen and the chart is down a bit in 2008 and then peaked in 2010. HPQ acquired EDS in 2008. They did this because they believed in the ongoing propagation of Moore’s Law and that enterprises would consume more network infrastructure, and having the ability to influence or control the decision making process in regard to that consumption was a good position. Kind of interesting that the peak in HPQ equity coincided with the launch of the iPad, which marks an important transition point for the PC market in the same manner that the iPhone marked a transition point for the mobile device market.
CSCO: A few observations from the CSCO chart, as this is a company I have written about extensively in the past. They bought Pure Digital (Flip) at the bottom of the equity cycle. The thinking was that video was going to be a huge driver of the next evolution of the network. In June 2008, CSCO was starting to market and provide thought leadership around the concept of a Zettabyte Era. They have continued to update the models and provide copious amounts of data on the subject. The problem with all this data is that I fail to see the traffic trends following through into accelerating equipment demand. I think the network is going to look different, but I think many incumbent vendors think that the network will look like the past. After all, history does repeat itself (that was a joke). In July, I wrote "intended network design has changed little in twenty years. I look back in my notebooks at the networks I was designing in the early 1990s and they look like the networks that the big networking companies want you to build today. If you are selling networking equipment for CSCO, JNPR, BRCD, ALU, CIEN, etc, you go to work every day trying to perpetuate the belief that Moore's Law rules. You go to work every day and try to convince customers to extend the base of their networks horizontally to encompass more resources, vertically build the network up through the core and buy the most core capacity you can and hope the over subscription model works. When the core becomes congested or the access points slow down, come back to your vendor to buy more." Looking at the chart of CSCO, the average person would assume that the NFLX streaming event would have been a big driver of CSCO revenues. Hmm…why did that not happen, or will it?
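To make the "hope the over subscription model works" part of that quote concrete, here is a minimal sketch of the arithmetic; the port counts and link speeds are illustrative assumptions, not figures from this post or from any vendor's design guide.

```python
# Minimal sketch of the oversubscription arithmetic behind the "buy more core" model.
# Port counts and speeds are illustrative assumptions, not figures from this post.

def oversubscription_ratio(downstream_gbps: float, upstream_gbps: float) -> float:
    """Ratio of bandwidth offered to hosts versus bandwidth available toward the core."""
    return downstream_gbps / upstream_gbps

# A hypothetical access switch: 48 x 1G ports facing servers, 4 x 10G uplinks to aggregation.
access = oversubscription_ratio(48 * 1, 4 * 10)        # 1.2:1 at the access layer

# A hypothetical aggregation switch: 16 x 10G down to access, 2 x 40G up to the core.
aggregation = oversubscription_ratio(16 * 10, 2 * 40)  # 2:1 at the aggregation layer

# Oversubscription compounds tier over tier; hosts see the product of the ratios,
# which is why congestion in the core sends customers back to the vendor for more capacity.
end_to_end = access * aggregation
print(f"access {access:.1f}:1, aggregation {aggregation:.1f}:1, end to end {end_to_end:.1f}:1")
```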
JNPR: JNPR is very much a product cycle company and I think the ten-year chart illustrates that point. The question is how a hardware-centric company will fare in the software/virtualized world of Web 3.0.
AKAM: I have already written a longer post on CDNs, thus I am not going to add much to that post. What I will say is that content-deep networking strategies, increased fiber penetration, and software-based network controls (e.g. ADCs in virtual form, cloud-based SLA controllers, etc.) that enable the end-user to control the network access point in a world of state-aware apps and real-time content do not make for the best environment for companies that provide solutions intended to fix the inefficiencies of the network. In other words, companies that created value by improving or fixing the network may see less applicability if the network requires less fixing and less improving.
INTC: The era of Web 2.0 was not overly kind to INTC. They never made it big in the mobile device market, the PC market eroded over time and the glory of the 1990s never returned. I am working on a thesis that Web 3.0 might be a better era for INTC.
RIMM and NOK: I have written extensively in the past about the mobile device market, but here are a few thoughts on RIMM and NOK. Both NOK and RIMM missed the Web 2.0 evolution and how it would impact adoption of their products. RIMM is another example of a company that fixed or improved the operation of the network, and that function has little value going forward into the world of Web 3.0. That is a direct reference to the RIMM NOC. So much has been written about NOK I doubt I can add anything insightful. What I will say is that their equity charts are twins. Both charts peaked between the iPhone 1 and the iPhone 3G, and the launch of the iPad was just another kick when they were down.
Frame 2: Web 3.0
I will be upfront and say that my definition of Web 3.0 is very much a work in progress. I know it when I see it. I am sure my definition will change and evolve. Currently I am using the descriptor of: metadata personalization, real-time content, big data, and big personal pipes with user-defined controls. With that sounding like a research project, I am more inclined to just identify the trends and examples of Web 3.0 and let the definition take its final form in the future. Here are some signposts along the way that point to Web 3.0:
- HP to sell or spin off Compaq ten years after they paid $25B to buy Compaq. What screams “end of the Web 2.0 era” more than this decision?
- Commodity HW and appliances with value add software will define the new network. This implies that HW based network infrastructure will suffer margin erosion as legacy HW solutions reduce ASPs to match SW based solutions.
- The network aids of the past become less useful (e.g. HW-based ADCs, WAN acceleration, CDNs, etc.) and new devices that aid local content replication become far more important as networks make the transition from 10G to 100G.
- User (consumer or enterprise) defined and controlled network pipes. Power to the people to define and control their network experience (i.e. bandwidth); see the sketch after this list for a sense of what that could look like.
- Decline of managed services and the rise of the enterprise customer as their own service provider using managed fiber with peering relationships with traditional service providers.
- More FTTH builds around the world, with governments helping pay for part of the infrastructure build.
- Power, heat, cooling and all the physical requirements of big data deployments will continue to drive the adoption of optics, push content deeper into the network, and accelerate an evolution away from OEO.
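As a hypothetical illustration of the user-defined pipe idea above, here is a minimal sketch; the controller, endpoint and request schema are assumptions for illustration only, not a real product or API.

```python
# Hypothetical sketch of a user-defined network pipe: the endpoint, schema and
# controller below are illustrative assumptions, not a real product or API.
import requests

def request_pipe(controller_url: str, src: str, dst: str, gbps: float, hours: int) -> dict:
    """Ask a (hypothetical) network controller for a bandwidth-guaranteed pipe."""
    payload = {
        "source": src,            # e.g. an enterprise data center
        "destination": dst,       # e.g. a cloud region or peering point
        "bandwidth_gbps": gbps,   # the size of the pipe the user wants
        "duration_hours": hours,  # how long the guarantee should hold
    }
    response = requests.post(f"{controller_url}/pipes", json=payload, timeout=10)
    response.raise_for_status()
    return response.json()        # controller returns the provisioned pipe, or an error

# Usage: the enterprise, not the service provider, decides the size and duration.
# pipe = request_pipe("https://controller.example.net", "dc-east", "cloud-west", gbps=10, hours=4)
```

The point of the sketch is who makes the call: the consumer of the bandwidth asks for the experience they want, rather than accepting whatever the provider provisioned.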
As always this post is just a rambling of my thoughts about networking. None of it should be taken too seriously as I could easily be wrong.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **
Friday Musings, August 5th 2011
I had a few other titles for this post:
“Don Quixote and the 1G to 10G, Romley, Big Data Super-cycle”
“Brocade’s Warning”
“Inflection Points are a Time to Pause and Consider the Path Forward”
Before we get into a heavy technology post, I will say that one good jobs report from the BLS does not solve the public sector credit crisis and counterparty risk concerns. Again, it just reminds me of 2008, when few people wanted to talk about the real issues; they just wanted to tell stories of how their daddy bought stocks after the Great Depression or after the Portfolio Insurance crash of 1987 and it all worked out.
BRCD pre-announced results short of expectations this morning, citing weakness in storage and Ethernet switching. Looking back on the JNPR results a couple of weeks ago and looking forward to CSCO and HPQ, this clearly tells me there is a pause in spending. I do believe there is a significant inflection point in 2012 driven by the 1G to 10G upgrades initiated by Intel Romley motherboards and the big data play, but I differ from many analysts who think that the incumbent players are going to get the business from this super-cycle.
I think this pending super-cycle is causing a pause in spending and it is opening a window for enterprises to stop and think about architectures and CAPEX. There is no rush to add legacy equipment to a data center or a network today. I think the macro drama is another reason why the big consumers of technology see no rush to spend and the pending super-cycle reinforces a prudent pause to ask: what is the best technology and architecture path going forward?
Let me provide an example of how macro events create opportunities for new technologies to emerge and take hold. Here is what I wrote about APKT from April 2011. “I define luck as: where planning meets opportunity. The APKT team made the assumption that the proliferation of billions of IP enabled devices will use thousands of networks to communicate with each other and these networks will need devices on the borders to translate the languages between the networks. This is called session border control and it is a software function that resides on an appliance. The technology assumption was correct; all the company needed was an event to make the market. Luckily for APKT a large number of investment banks (IB) massively over inflated the residential as well as the commercial real estate markets using leverage, derivatives and synthetic financial products. I am pretty confident that the global financial crisis was not in the APKT business plan in the years leading up to 2009.
When the global credit crisis hit in 2008, it changed the technology buying process. APKT became a winner and CSCO not so much a winner. What really happened was that the buying decision process and the decision makers for technology provided by APKT changed. CFOs started showing up in the decision-making process and they wanted to buy technology that had a longer shelf life. They wanted to break from the past and acquire technology that had a longer deployable life span in the network. You can see this in the revenue line in mid-2009. Quarterly revenues start to move significantly higher after ten quarters of a relatively tight range. This is when the market began to broadly adopt their technology and eight years of work began to pay off.”
My hypothesis for the next 12-18 months in the enterprise is that the architectures for the data center and campus interconnect networks are up for grabs. It is wide open, and incumbents are not guaranteed to be the winners when the decisions are made. In the service provider networks, the same is true. There will be a lot fewer routers and switches, and the routers that are deployed may look very different from the routers you know and love today.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **
Eight Days till Cisco Earnings
In eight days we are going to hear Cisco’s full year FY11 and Q4 FY11 report. Prior to their Q3 FY11 results I wrote about how important Cisco’s report is for the technology industry. All the signs that CSCO had lost their way came true in the Q3 report. What should we expect next week? In the August report, CSCO is going to be the first technology company to tell us about July, and they should guide FY12. If they do not guide for the full year FY12, that is a red flag. If they talk about visibility being limited and a hard-to-define macro, and at this time only guide Q1 FY12, then red flags all around. Public sector is a big piece of Cisco’s business, so we want to see this number and hear about trends.
A few other data points to look at: (1) margins, i.e. is the company discounting to protect or gain share, and (2) commentary from John Chambers on spending dispositions from CEOs. I am looking for additional data points around slowing CAPEX, which I posted here and here before JNPR confirmed with their results.
Here are two charts over the same period for JNPR and CSCO, spanning the market bottom in March 2009 till today. CSCO benefitted first from the post-credit-crisis catch-up spend, but JNPR had a longer duration uplift driven by positive product cycles against CSCO. As JNPR enters the market slowdown in 2011, do they enter into negative product cycles as CSCO refreshes product lines? CSCO issued five consecutive disappointing reports starting in May 2010. JNPR’s July 2011 report is really their second disappointing report. How many are left?
I still think the bigger problem for CSCO and JNPR is they are clinging to the past, while others are focused on what is appearing in the network. Network deployment strategies are changing. I described the past practices as “if you are selling networking equipment for CSCO, JNPR, BRCD, ALU, CIEN, etc, you go to work every day trying to perpetuate the belief that Moore’s Law rules. You go to work every day and try to convince customers to extend the base of their networks horizontally to encompass more resources, vertically build the network up through the core and buy the most core capacity you can and hope the over subscription model works. When the core becomes congested or the access points slow down, come back to your vendor to buy more.”
Another way to describe it: if you think network deployment strategies are changing and the inflection point of another long and sustained network build-out occurs in the 1H of 2012, then you need to be pretty far down the product/solution development path. It is August 2011, which means it is really September because everyone is on vacation. If you want to start selling and deploying solutions in the 1H of 2012, you need to be aggressively moving on the plan of record (POR). You have about five months to get product development efforts complete enough to sell and position in your key accounts. What I am going to be looking for at industry conferences over the next few months is evidence that the legacy Moore’s Law companies have realized there is a change and are now trying to skate to where the puck will be, but I suspect I will find no evidence of this and they will be focused on skating to where they want the puck to be. Game is on.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **
RIMM and Cloud Thoughts on a Friday
Starting with RIMM this morning, I am not sure what more I can write. As I wrote before, the company is in a negative product cycle. The good news is they actually have value in their consumer brand awareness, but there is no value in email anymore. Email is a feature, not a killer app; it is a common feature of smartphones. To be competitive RIMM will have to innovate and catch up in the product cycle marathon. I am not sure that it can. For the analysts and investors who think the stock is cheap, I would say that tech stocks work on growth and margins. When these two metrics turn negative, the stock is a short. In the span of 49 days RIMM twice revised their forward guidance to the downside. If the company cannot predict their own earnings power, I do not know how we can debate whether it is cheap or not.
I have been writing a lot about new drivers in the network (here, here, here and here) and how various dislocations are going to affect the network and procurement of infrastructure. I am giving props to Phil Harvey and this interview with Randy Bias of Cloudscaling. The broader article is here. My comment about this interview is: spot on. Well done. If I was running money in the tech sector I would think very carefully about what this man is saying. I think it is going to be impactful on infrastructure providers as well as service providers and if you extrapolate his thinking, it might just explain a few things.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **
Three Dislocations: Will They Meet?
When I started this blog, we were on the eve of Q1 earnings and I thought it was prudent to get out of the way of the market. Before the long holiday weekend in the US, I wrote a quick note on 20 tech stocks. I read a lot of the daily Wall Street drivel that comes out hyping this stock or that stock, this TAM versus that TAM, and how big this market is going to be compared to that market. Blah, blah…. When I posted over the weekend that things are going to be different, it is because the drivers that are affecting the technology ecosystem (i.e. the network as an all-encompassing term) are coming from unusual places.
The technology industry used to be pretty straightforward. Technology companies innovated and brought their solutions to market. Customers, whether enterprise, service provider or consumer, had some input, but it was really more of an RFI/RFP process in which a consumer of technology would request a proposal to solve a problem or build a service. The following chart is something I created in 2006 to illustrate technology cycles and the companies taking advantage of those cycles (i.e. markets). I am going to update this later, but for now I think that this chart helps illustrate my point that in the past, technology companies helped push the technology market cycles.
What I think is different today is that the drivers seem to be coming from inside the network; from the users, from the consumers, not from the traditional technology suppliers. An example is OpenFlow, but OpenFlow is only one element of the change I see. I am not willing to call it a sea change event, but maybe in the future. What I do know is that technology suppliers appear to have lost some control of architecture and technology decisions. It seems that consumers of technology have more control of what is being developed for network deployment. To me the future looks much more driven by software control, virtualized I/O, meshed connections, SLAs, SSL and a whole bunch more concepts that we could spend hours talking about in front of a whiteboard.
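For readers who have not looked at OpenFlow, a rough sketch of the match/action idea follows; this is a schematic data-structure view under my own simplifying assumptions, not the actual OpenFlow protocol encoding or any specific controller's API.

```python
# Schematic sketch of OpenFlow-style software control: a flow table entry expressed
# as plain data. Illustrative only; not the OpenFlow wire format or a vendor API.
flow_entry = {
    "match": {                       # which packets this rule applies to
        "in_port": 1,
        "eth_type": 0x0800,          # IPv4
        "ipv4_dst": "10.20.0.0/16",  # traffic headed to a particular application tier
    },
    "actions": [
        {"type": "set_queue", "queue_id": 3},  # e.g. place on a guaranteed-bandwidth queue
        {"type": "output", "port": 4},         # forward out a chosen uplink
    ],
    "priority": 100,
}

# The interesting part is who installs the entry: a controller operated by the consumer
# of the network, rather than a forwarding decision baked into the vendor's embedded
# control plane. That is the shift in control this paragraph is describing.
```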
By this time you are probably wondering what this has to do with the market. The first three paragraphs were intended to convey the point that I think many established technology companies have been living off the 2009 catch-up spend from the 2008 collapse. Spending stopped post LEH and it took about 4-5 months to restart. When it did restart (~Feb 2009) it lifted all companies, but some companies had fundamentally flawed product cycles, like NOK, while others had fundamentally beneficial product cycles, like APKT. We are now 27 months into the catch-up cycle, and as comps have gotten tougher and the forces I have been writing about have started to affect the network ecosystem, companies are starting to realize that choices made a few years ago have put them in a difficult position.
Last week’s economic data was not good. I think the leading indicators are neutral at best, and more likely negative for the remainder of 2011. Forward GDP and industrial production numbers have been cut. We are almost mid-year, so with the economy hitting a rough patch, an air pocket, a slowdown…whatever Wall Street speak you want to use, the real question is how companies are going to handle the next few quarters. Stay the course? Make a correction? What changes will occur?
What I really see is three dislocations. The first is what is going on in the network: the sources driving technology adoption, consumption and innovation are different from the past. The second dislocation is the established technology companies and what they have to offer a changing market. Products and solutions are set; portfolios and value propositions do not change in 90 days. The catch-up spend is over, so how are companies with product cycles turning negative going to react to a harder market? It seems that what the technology market is doing is disconnected from what many of the established technology companies expected, and the third dislocation is what investors expect from the prior two. Dislocations create opportunities.
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **