Not the Market from the 1990s, 2000s or the 2010s…

I spent a few years on the buy side at a long/short, technology-focused hedge fund.  I learned a lot over that time.  I feel confident in saying it was the career stop in which I learned the most.  I was being paid to obtain a synthetic MBA as well as learn how to build and run a portfolio.  Over that time, I developed business friendships with many people smarter than me on both the buy side and the sell side.  Almost a decade later, I still regularly engage with analysts, PMs and strategists.  It is rare for a business day to go by without a conversation of some type with a former colleague in the investment business.  I still like to think that most of the sell side analysts in the networking, hardware, cloud and telecom sectors would take my call.  With that said, I have been riding the market wave, but I am conflicted between what I used to know and what I see today.  In my tech career I always found selling in the transition space between a maturing market and an emerging market to be the most interesting and dynamic place to be.  I am beginning to think the equity markets are moving into a transition space.


More Evidence of The Changing Structure of Technology Companies

This morning Cisco Systems announced the acquisition of Acacia Communications (ACIA) for $2.6B in cash.  Could you imagine an optical components company being acquired by a systems company 10-14 years ago?  Not likely, but the times they are a-changin'.  I will not be surprised to see more fabs coming to a US location near you.

As always, my thoughts on these matters might be completely wrong.

What Do VisiCalc and SDN Have in Common?

I was listening to an episode of Planet Money last week about VisiCalc, the first spreadsheet program.  If you listen to the podcast, there is a discussion of the accounting profession before and after the creation of the spreadsheet.  Before the program VisiCalc, a spreadsheet really was a sheet of paper spread across a desk.  “If you ran a business, your accountant would put in all your expenses, all your revenues, and you’d get this really detailed picture of how the business worked. But even making a tiny tweak was a huge hassle.”  Teams of accountants and bookkeepers would spend days reworking sheets of paper to maintain the accuracy of the books.

Deflategate and the Absence of Malice Dilemma for the NFL

There is an enjoyable Sydney Pollack movie called Absence of Malice.  The plot of the movie concerns the investigation of a murder and how press leaks are used to manipulate people via public opinion.  As I watched the Deflategate drama unfold over the past few weeks, the whole affair reminded me of this movie.  No one has died and we are not talking about federal crimes, but from the coverage of the affair, a person in another country not immersed in our football culture would think we were discussing high crimes against the state.

Amateur Analysis of 31 Years of the NFL Draft

Do you have that annoying friend who absolutely hates your sports teams?  I am describing the person who sends a weekly barrage of emails full of hate and over-indulges in schadenfreude when your team loses.  I have that friend and he is a Miami Dolphins fan.  I am a Patriots fan and have been a season ticket holder for more than twenty years.  The Tom Brady era has been toxic for the Miami Dolphins and the AFC East in general.  This toxicity manifests itself in a weekly barrage about Patriot cheating, film crews, playbook theft, hometown refs, video recording innuendo and general hatred toward Bill Belichick.

Future Generations Riding on the Highways that We Built

When I was in high school and college, I never thought about a career in networking; it was just something I did because it was better than all the other jobs I could find.  I worked at my first networking startup in the late ’80s and twenty-five years later, I am still working in networking.

Talking SDN or Just Plain Next Generation Networking…

Tomorrow in SF, I will be talking about SDN, or as I like to call it, next generation networking, at the Credit Suisse Next Generation Data Center Conference.  It will be a panel discussion and each participant has a few minutes to present their company and thoughts on the market adoption of SDN.  Explaining the next twenty years of networking in fifteen minutes is a challenge, but I have been working with a small slide deck that helps make the point.  Here are those slides (link below).  I posted a variation of those slides a few weeks ago that I used in NYC, but I tailored this deck to the strict fifteen-minute time limit.  I will post more frequently after Plexxi is done at NFD #5 this week and around the time of OFC.

CS Next Gen DC Conference



Echoes from Our Past #2: The Long Tail

As with my first post about Antietam and the Vacant Chair, I have started to weave some creative writing into my technology and business focused blog.  If it is not for you, please disregard.  I am writing this post from the reading room of the Norman Williams library in Woodstock, Vermont, which was built in 1883.  Outside, the leaves are in full color and on the town green there is a chili cook-off contest.  More than a decade ago, I started writing a book on my experiences reenacting the American Civil War.  I was motivated to write it because I had read Confederates in the Attic.  I knew some of the people in that book and had been to the same places.  Writing a book requires time and concentration.

No Time to Post

I have had no time to post, but I have been working on a few drafts.  I spent today in Palo Alto at the JP Morgan SDN Conference.  It was a one-day event with a lot of executives from SDN startups as well as established vendors in attendance.  I wrote a few pages of notes and I think I can distill my thoughts down to a post by the end of the week.  Thanks to the JPM team for the invite.


Labor Day Weekend Posts

Rather than a long blog post, I wrote three short posts on subjects that seemed to occupy my conversations and email this past week.  Off to SFO again this coming week and right back to Boston as we are hosting customers at our Cambridge headquarters at the end of the week.


* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. ** 

Dawn of Multicore Networking

Off to VMworld, flying my favorite airline, Virgin America: blogging, wifi, satellite TV and working, which is just cool when you consider that when I started traveling for business I had a choice of a smoking seat and a newspaper.  I recently posted additional SDN thoughts on the Plexxi blog, which was a follow-up to the last post on the SIWDT blog.

The following is the post that I alluded to last month, which I have been writing and revising for a few months.  It all started several months ago when I was reading Brad Hedlund’s blog, in which he posted several Hadoop network designs.  I am referencing the post by Brad because it made me stop and think about designing networks.  I have been talking to a lot of people about where networks are going, how to design networks, blah, blah; just click on the “Networking” tab to the right and you can read more than a year’s worth of postings on the subject.  Side note: if you experience difficulty falling asleep, reading these posts might serve as a cure.

As a starting point, consider the multicore evolution paradigm from a CPU perspective.  In a single core design, the CPU processes all tasks and events, including many of the background system tasks.  In 2003 Intel was showing the design of Tejas, the next evolution of its single core CPUs, with plans to introduce it in late 2004.  Tejas was cancelled due to the heat caused by the extreme power consumption of the core.  That was the point of diminishing returns in the land of CPU design.  At the time AMD was well down the path of a multicore CPU and Intel soon followed.

From a network design perspective, I would submit that the single core CPU is analogous to the current state of how most networks are designed and deployed.  Networks are a single core design in which the traffic flows to a central aggregation layer or spine layer for switching to other parts of the network.  Consider the following example:

  • 100,000 2x10G Servers
  • Over-Subscription Ratio of 1:1
  • Need 2,000,000 GbE equivalent = 50,000 x 40 GbE
  • Clos would need additional ~100,000 ports
  • Largest 40 GbE aggregation switch today is 72 ports
  • 96 ports coming soon in 2U, at 1-2 kW
  • 100k servers = 1,500 switches
  • 1.5-3.0 MW – just for interconnection overhead
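The bullet math above can be sketched in a few lines (a rough back-of-the-envelope check; the per-switch port count and power figures come straight from the list, while the 2x extra fabric ports are my assumption chosen to match the ~100,000 Clos figure):

```python
servers = 100_000
server_bw_g = servers * 2 * 10         # 2x10G NICs -> 2,000,000 GbE equivalent
edge_ports_40g = server_bw_g // 40     # 50,000 x 40 GbE at 1:1 over-subscription
clos_extra_ports = 2 * edge_ports_40g  # ~100,000 additional fabric-facing ports
total_ports = edge_ports_40g + clos_extra_ports

ports_per_switch = 96                  # 96-port 2U aggregation switch
switches = total_ports // ports_per_switch            # ~1,500 switches
power_mw = (switches * 1 / 1000, switches * 2 / 1000) # at 1-2 kW per switch

print(edge_ports_40g, clos_extra_ports, switches, power_mw)
```

Run it and the interconnect overhead lands right around the 1.5-3.0 MW in the list above.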

This network design results in what I call the +1 problem.  The +1 problem is reached when the network requires one additional port beyond the capacity of the core or aggregation layer.

In contemporary leaf/spine network designs, 45 to 55 percent of the bandwidth deployed is confined to a single rack.  Depending on the over-subscription ratio this can be higher, such as 75 percent, and there is nothing strange about this range, as network designs from most network equipment vendors would yield the same results.  This has been the basis of the networking design rule: buy the biggest core that you can afford and scale it up to extend the base to encompass as many device connections as possible.
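A quick way to see where those percentages come from (a minimal sketch with a hypothetical leaf switch of 48 x 10G server-facing ports; the uplink capacities are chosen to illustrate 1:1 and 3:1 over-subscription, not any particular product):

```python
def rack_confined_fraction(server_gbps: int, uplink_gbps: int) -> float:
    """Share of a leaf's deployed bandwidth that faces servers inside the rack."""
    return server_gbps / (server_gbps + uplink_gbps)

server_bw = 48 * 10            # 480G of server-facing bandwidth per leaf
for uplink_bw in (480, 160):   # 1:1 and 3:1 over-subscription
    ratio = server_bw / uplink_bw
    frac = rack_confined_fraction(server_bw, uplink_bw)
    print(f"{ratio:.0f}:1 over-subscription -> {frac:.0%} confined to the rack")
```

At 1:1 half the deployed bandwidth faces servers; push the over-subscription to 3:1 and it rises to the 75 percent figure above.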

Multicore CPUs come in different configurations.  A common configuration is what is termed symmetrical multiprocessing (SMP).  In an SMP configuration, CPU cores are treated as equivalent resources that can all work on all tasks, while the operating system manages task assignment and scheduling.  In the networking world, we have provided the same kind of structure by creating work group clusters for Hadoop, HPC and low latency (LL) trading applications.  The traditional single core network design that has been in place since IBM rolled out the mainframe has been occasionally augmented over the years with additional networks for minicomputers, client/server LANs, AS400s and today for Hadoop, low latency and HPC clusters.  Nothing really new here, because eventually it all ties back into and becomes integrated with the single core network design.  No real statistical gain or performance improvement is achieved; scaling is a function of building wider in order to build taller.

Multicore CPU designs offer significant performance benefits when they are deployed in asymmetrical multiprocessing (AMP) applications.  Using AMP some tasks are bound to specific cores for processing, thus freeing other cores from overhead functions.  That is how I see the future of networking.  Networks will be multicore designs in which some cores (i.e. network capacity) will be orchestrated for HPC, priority applications and storage, while other cores will address the needs of more mundane applications on the network.

The future of the network is not more abstraction layers, interconnect protocols, protocols wrapped in protocols and riding the curve of Moore’s Law to build bigger cores and broader bases.  That was the switching era.  The new era is about multicore networking.  We have pretty much proven that multicore processing in CPU design is an excellent evolution.  Why do we not have multicore networking?  Why buy a single core network design and use all sorts of patches, tools, gum, paperclips, duct tape and widgets to jam all sorts of different applications through it?  I think applications and workload clusters should have their own network cores.  There could be many network cores.  In fact, cores could be dynamic.  Some would be narrow at the base, but have high amounts of bisectional bandwidth.  Some cores would be broad at the base, but have low bisectional bandwidth.  Cores can change; cores can adapt.

I think traditional network developers are trying to solve the correct problems – but under the limitations of the wrong boundary conditions.  They are looking for ways to jam more flows, or guarantee flows with various abstractions and protocols into the single core network design.  We see examples of this every day.  Here is a recent network diagram from Cisco:

When I look at a diagram like this, my first reaction is to ask: what is plan B?  Plan B in my mind is a different network.  I fail to see why we as the networking community would want to continue to operate and develop on the carcass of the single core network if it is broken.  In other words, if the single core network design is broken, nothing we develop on it will fix it.  A broken network is a broken network.  Let the single core network fade away and start building multicore networks.  As always, it is possible that I am wrong and someone is working on the silicon for a 1,000-port terabit Ethernet switch.  It is probably a stealth mode startup called Terabit Networks.


* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. **

SDN Thoughts Post CLUS and Pre-Structure

At present I am on a bumpy VA flight over Lake Erie inbound to SFO for the GigaOM Structure conference. My employer will soon be a little less stealthy, as we will have a new web page mid-week around the Structure conference. I plan to do some blogging during or after Structure, but before we get to those post(s) I thought I would offer a few SDN thoughts post CLUS. I think what SDN is or will become is still unknown and I struggle with the need to find the killer app. SDN has not been defined, and I do not think it has been agreed upon. I know that I have some different views on SDN. I actually do not like the term SDN, as networks have always been software defined, and I do not think it is as easy as a twenty-minute talk about separating the control plane from the data path on a whiteboard. I am not sure what to offer up as a better term or acronym for SDN, but here are a few thoughts on SDN that our CEO is presenting at a conference prior to the Structure conference:

– Computation and algorithms; in a word, math. That is what SDN is.

– Derive network topology and orchestration from the application and/or the tenant.

– Ensure application/tenant performance and reliability goals are accomplished.

– Make Network Orchestration concurrent with application deployment. This is what SDN is for.

– A controller has the scope, the perspective and the resources to efficiently compute and fit application instances and tenants onto the network.

I know this is considerably less than 93 slides, so I am going to augment the previous five points in the future. At Structure, I will be looking at what others are doing and listening to the broad ecosystem of competitors, customers and analysts. If you are out at Structure, feel free to stop by the Plexxi table and tell me I am wrong. I look forward to the discussion at Structure or one of the dinners I plan to attend. Side note: this is a test of MarsEdit and the ability to post from a cross country flight.


Service Provider Bandwidth: Does it Matter?

Let us start with a question: does service provider bandwidth matter?  Perhaps a better question would be: is service provider bandwidth a meaningful problem to work on?  I think it does matter, but I am not certain it is meaningful.  This post is not going to be a scientific study and it should not be construed as the start of a working paper.  This post is really a summary of my observations as I try to understand the significance of the messaging in the broader technology ecosystem.  I sometimes call these posts framing exercises.  I am really trying to organize and analyze disparate observations, urban myths and inductive logical failings of doctrine.

Frame 1: Bandwidth Pricing Trend

There is no debate on this point; the price trend of bandwidth is more for less.  Bandwidth is deflationary until someone shows me a data set that proves it is inflationary.  I agree that bandwidth is not ubiquitous, it is unevenly distributed and that falls into the category of: life is not fair; get used to it.  In areas in which there is a concentration of higher wage earning humans organized into corporations with the objective of being profit centers, there seems to be an abundance of bandwidth and the trend in bandwidth is deflationary.  Here are a few links:

  • Dan Rayburn: Cost To Stream A Movie Today = Five Cents; in 1998 = $270.  “In 1998 the average price paid by content owners to deliver video on the web was around $0.15 per MB delivered. That’s per bit delivered, not sustained. Back then, nothing was even quoted in GB or TB of delivery as no one was doing that kind of volume when the average video being streamed was 37Kbps. Fast forward to today where guys like Netflix are encoding their content at a bitrate that is 90x what it was in 1998.  To put the rate of pricing decline in terms everyone can understand, today Netflix pays about five cents to stream a movie over the Internet.”
  • GigaOm: See the 2nd Chart.
  • TeleGeography: See the chart “MEDIAN GIGE IP TRANSIT PRICES IN MAJOR CITIES, Q2 2005-Q2 2011”

Frame 2: Verizon Packet/Optical Direction

Here is a presentation by Verizon at the Infinera DTN-X product briefing day.  The theme of the presentation is that the network is exhausted due to 4G LTE, video, FTTx, etc., and that this is driving the need for more bandwidth, including 100G in the metro, 400G and even terabit Ethernet in the core.  I have heard these arguments for terabit Ethernet before; I am firmly in the minority that believes this is a network design/traffic engineering problem, not a bandwidth problem to be solved.  It took the world fifteen years to move from 1G to 10G; I wonder how long it will take to get to terabit Ethernet.

Frame 3: Are the Design Assumptions Incorrect?

When I look at the network, I think of it as a binary solution set.  It can connect and it can disconnect.  For many decades we have been building networks based on the wrong design assumptions.  I have been posting on these errors in prior posts.  Here is a link to a cloud hosting company.  I know this team and I know their focus has been on the highest IOPS in their pod architecture.  We could use any cloud provider to make the point, but I am using Cloud Provider USA because of the simplicity of their pricing page.  All a person has to do is make five choices: DC location, CPU cores, memory, storage and IP addresses.  Insert credit card and you are good to go.  Did you notice what is missing?  Please tell me you noticed what is missing; of course you did.  The sixth choice is not available yet: network bandwidth, the on or off network function.  The missing value is not the fault of the team at Cloud Provider USA; it is the fault of those of us who have been working in the field of networking.  Networking has to be simple: on or off, and at what bandwidth.  I know it is that simple in some places, but my point is that it needs to be as easily configured and presented in the same manner as the DC-CPU-Memory-Storage-IP purchase options on the Cloud Provider website.  My observation is that the manner in which we design networks results in a complexity of design that is prohibitive to ease of use.

Frame 4: Cisco Cloud Report

I think most people have read Cisco’s Cloud Report.  Within the report there are all sorts of statistics and charts that go up and to the right.  I want to focus on a couple of points they make in the report:

  • “From 2000 to 2008, peer-to-peer file sharing dominated Internet traffic. As a result, the majority of Internet traffic did not touch a data center, but was communicated directly between Internet users. Since 2008, most Internet traffic has originated or terminated in a data center. Data center traffic will continue to dominate Internet traffic for the foreseeable future, but the nature of data center traffic will undergo a fundamental transformation brought about by cloud applications, services, and infrastructure.”
  • “In 2010, 77 percent of traffic remains within the data center, and this will decline only slightly to 76 percent by 2015.  The fact that the majority of traffic remains within the data center can be attributed to several factors: (i) Functional separation of application servers and storage, which requires all replication and backup traffic to traverse the data center (ii) Functional separation of database and application servers, such that traffic is generated whenever an application reads from or writes to a central database (iii) Parallel processing, which divides tasks into multiple smaller tasks and sends them to multiple servers, contributing to internal data center traffic.”

Here is my question from the above statistic.  If 77% of traffic stays in the data center, what is the compelling reason to focus on the remaining 23%?

Frame 5: Application Awareness and the Intelligent Packet Optical Conundrum

I observe various transport-oriented equipment companies, as well as service providers (i.e. their customers) and CDN providers (i.e. quasi-service provider competitors), discussing themes such as application awareness and intelligent packet optical solutions.  I do not really know what is meant by these labels.  They must be marketing terms, because I cannot find the linkage between applications and IP transit, lambdas, optical bandwidth, etc.  To me a pipe is a pipe is a pipe.

The application is in the data center; it is not in the network.  Here is a link to the Verizon presentation at the SDN Conference in October 2011.  The single most important statement in the entire presentation occurs on slide 11: “Central Offices evolve to Data Centers, reaping the cost, scaling and service flexibility benefits provided by cloud computing technologies.”  In reference to my point in Frame 3, networks and the network elements really do not require a lot of complexity.  I would argue that the dumber the core, the better the network.  Forget about being aware of my applications; just give me some bandwidth and some connectivity to where I need to go.  Anything more than bandwidth and connectivity and I think you are complicating the process.

Frame 6: MapReduce/Application/Compute Clustering Observation

Here is the conundrum for all the people waiting for the internet to break and bandwidth consumption to force massive network upgrades.  When we type a search term into a Google search box, it generates a few hundred kilobytes of traffic upstream to Google and downstream to our screen, but inside Google’s data architecture a lot more traffic is generated between servers.  That is the result of MapReduce and application clustering and processing technologies.  This is the link back to the 77% statistic in Frame 4.  Servers transmitting data inside the data center really do not need to be aware of the network.  They just need to be aware of the routes, the paths to other servers or devices, and they do not need a route to everywhere, just to where they need to go.
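The scatter/gather amplification is easy to illustrate (all numbers below are hypothetical, chosen only to show the shape of the effect; they are not measured Google traffic):

```python
# One small external request fans out into much larger east-west traffic.
external_kb = 200      # search request + response seen by the user (hypothetical)
fanout = 100           # index shards consulted per query (hypothetical)
per_shard_kb = 50      # shard request plus partial results (hypothetical)

internal_kb = fanout * per_shard_kb * 2    # scatter leg + gather leg
amplification = internal_kb / external_kb
print(f"{internal_kb} KB inside the data center, {amplification:.0f}x the external traffic")
```

Even with modest per-shard payloads, the east-west traffic dwarfs the north-south traffic, which is exactly why so much of the traffic stays inside the data center.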

Frame 7: What We Value Is Different from What We Talk About

Take a look at the chart to the left.  I put a handful of public companies on the list and I am not including balance sheets, debt and other financial metrics.  All I am pointing out is that companies focused on the enterprise (i.e. the data center) enjoy higher margins and richer valuations than companies that focus on the service provider market.  Why is that true?  Is it a result of the 77% problem?  Is it a result of the complexity of the market requirements imposed by the service provider customer base?  Is it a result of the R&D requirements to sell to the service provider market?

Frame 8: Do We Need a New Network Architecture?

I have been arguing that we need a new network architecture for some time, but I think the underlying drivers will come from unexpected places.  It was not long ago that we had wiring closets, and the emergence of the first blade servers in the 2001-2002 time period started to change how data centers were built.  When SUN was at its peak, it was because it made the best servers.  It was not long ago that the server deployment philosophy was to buy the most expensive, highest performance servers from SUN that you could afford.  If you could buy two, that was better than one.  The advent of cheap servers (blades and racks), virtualization and clustering applications changed the design rules.  Forget about buying one or two high end servers; buy 8 or 10 cheap ones.  If a couple go down on the weekend, worry about it on Tuesday.  I think the same design trend will occur in the network.  It will start in the DC and emerge into the interconnect market.


* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private. ** 

Cisco Crushes Switching in Q1 2012

Cisco just put up a $3,675M print in switching.  I have that as the single best quarter in the company’s history.  I think some competitors must be feeling some pain.


* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private.  Just hover over the Gravatar image and click for email. ** 

NSN:::Restructuring a Business and Using it as a Platform

I was reading this article the other day, which states that two groups of rival PE buyout firms are looking to acquire the Nokia Siemens Networks JV from Nokia and Siemens.  My first reaction is that this must be a huge relief to the management teams at Nokia and Siemens.  Neither company really wants to be in the networks business, as each is focused on its core business; hence the JV.  Neither company wants to restructure any more businesses and face the political wrath of layoffs.  It would be much better to have some American bankers do it.

I know the reasons why the JV owners want to get rid of the JV.  In the case of NOK, they have a lot of issues to resolve in their handset business and it is not getting better anytime soon.  The networks business to Siemens is like a poor stepchild.  The reason the two companies created the JV was to improve business scale and eliminate duplicated R&D and overlapping market bids.  In a world that has Huawei, ZTE, CSCO and then a whole host of solution-focused mid-tier companies like CIEN, JNPR and Avaya, not to mention smaller players like Genband, APKT, FFIV and BSFT, one has to wonder how a JV like NSN gets back on track.  I know they have focused on services and outsourcing deals, and in that market ERIC is trying to compete with the same strategy as well.

Why would a PE firm want to buy all or part of NSN?  A part of it makes more sense, and I do not know what part they would be looking at.  There has been industry chatter that their optical group has been for sale since losing the Nortel bid to Ciena.  If a company like Ciena were to acquire it, it would simply be for market share consolidation.  A week or so ago there were rumors that Nokia was looking to buy CIEN, but that is just crazy to me.  Why would a mobile device company want to buy CIEN?  Maybe the person who started the rumor meant to say NSN was trying to buy CIEN, but this seems unlikely too, considering that NSN just spent $975M for a portion of the old Motorola.  NSN would have to spend billions to buy CIEN and that would require cash from the two JV owners.  I think it is far-fetched for someone to think that Nokia and Siemens want to make their JV bigger by spending billions to buy something as big as CIEN.  I could be wrong, but I think I am living in the real world.

Back to the question of buying NSN if you were a PE firm.  NSN is a €12 billion a year business.  It is a loss-making business, but the trend has been improving.  What would you pay for a business like this?  Is it 1x sales, or $16-17B?  If you took the whole company private, you could possibly dispose of a few pieces, such as the optical business to CIEN.  You could focus 100% of the company on the MPC business as well as services and network outsourcing.  NSN closed the $975M deal to acquire Motorola’s networks business with 6,900 employees a few weeks ago.  To me this is further evidence that NSN wants to be in the business of running networks, as acquiring this asset is really a strategy to offer a technology transition bid to the MOT customers as part of an outsourcing deal.

With Nortel gone, NSN is the last land grab in the large cap equipment space.  NSN has a deal with CSCO and a deal with JNPR.  A year ago around the time that Starent was acquired by CSCO, there was industry chatter that CSCO was going to buy the Siemens stake and NSN was going to offer the Starent MPC.  That never worked out, but I think it is important to note as it is an indication of the value of the NSN asset, channel and customers.

I think if you could buy the mobile networks, management and the services business of NSN, it could use that portion as a revenue foundation to bolt on other acquisitions.  The One Equity team has taken this approach with Genband.  No matter what part or all of NSN that is purchased, the question is what would you do with it?  As a JV it seems to me that it is not a master of its own domain.  As a private company, they could restructure in private and look to fix the product portfolio.  I think the focus of a new start would be services and network outsourcing.

In my opinion, the trend within the top 100 global service providers is clear.  This trend is not moving at light speed, but is a slow evolution.  Service providers seem to be less interested in running networks, evaluating technology and pushing technical boundaries.  This is clearly evident in ATT’s Domain Strategy.  Standards such as LTE only provide for similar, ubiquitous networks.  Large service providers are going through a cultural shift wherein they seem less concerned about building networks and more concerned about customer acquisition and retention.  The customer experience is more important than the technology in the network.  Making it work, making it accessible to the customer and retaining the customer is far more important than developing leading edge solutions.  I would argue that content aggregators and distributed computing providers such as Google, Amazon, Apple and Facebook will do more to drive technology trends than the RBOCs and PTTs of the Cold War era.

As far as NSN goes, if I were the PE buyer my objective would be to build a company that is exceptional at integrating technology and at building and running networks.  I would not want to be part of the product development marathon.  I would be far more interested in telling CxOs at the global 100 service providers that they can reduce costs by having us [NSN] run their network while they focus on the customer.  That is a powerful position if you can be successful, as it is the same strategy that IBM used to come back from the brink in the early 1990s.


* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome publically or in private.  Just hover over the Gravatar image and click for email. **

CSCO Q3 results…quick take…

The numbers that matter:

Q3 Revenue: $10.866B

Q3 Product Revenue: $8.669B

Q3 Services Revenue: $2.197 B

Gross Product Margin: 63.9%

Bookings commentary: “about one” from the CFO.  This tells me that bookings were around $8.600B.  I think backlog going into Q4 is around $4.025B, which is actually slightly less than at the start of Q3; hence the lowered guidance for Q4.  For Q4, management guided revenues to be in the range of $10.84B-$11.05B versus the previous guide of $11.7B-$12B.  In short, management pulled a billion off the table, and they did this in what is traditionally their best quarter.  If you look at the fiscal years between 1994 and 2010, the fiscal fourth quarter was the highest revenue quarter in 14 out of 17 years.  For people who like statistics, I would state that the stock has never worked when their best quarter was not Q4.  Worrisome was the commentary about the switching business, margin pressure and the GM guide of 62%.  Throw in the public sector spending comments, the upcoming debt ceiling debate and the muni market mess, and I would not expect much out of this sector for a while.
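The “about one” comment is a book-to-bill statement, and the backlog arithmetic rolls forward like this (a sketch; only the product revenue and the “about one” book-to-bill come from the call, while the starting backlog is my assumption chosen to reconcile with the $4.025B figure above):

```python
product_revenue = 8.669   # Q3 product revenue in $B (from the print)
bookings = 8.600          # implied by a book-to-bill of "about one"
backlog_start = 4.094     # assumed Q3 starting product backlog in $B

# Backlog rolls forward: start + new bookings - revenue shipped out of it.
backlog_end = backlog_start + bookings - product_revenue
book_to_bill = bookings / product_revenue
print(f"book-to-bill = {book_to_bill:.3f}, backlog into Q4 = ${backlog_end:.3f}B")
```

A book-to-bill just under one means backlog shrinks slightly quarter over quarter, which is why a flat-to-down backlog heading into the seasonally strongest quarter supports the lowered guide.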

FY12 Guidance: None

If you look at the updated stock chart below, the stock has rallied into the last five prints only to sell off sharply the following day.  Will tomorrow make it 5 for 5?  I put a little red line in the chart because I think the stock goes to around $15.  Anytime management is talking about layoffs, core business segments under assault, closing product groups, competition, etc., it is hard to see anyone paying a premium to own the equity.  It is not a disaster.  They make a lot of money and have a lot of cash.  It is just really hard to move the top line when you are putting up $10B quarters and there are a lot of people attacking your success.

Side note…John said the word “compute” four times.  That is four more times than in the last four quarterly calls combined.  Was he reading my blog?  Hi John, if you are reading, do not forget to read that copy of my book I sent you.


** It is all about the network stupid, because it is all about compute. **

Welcome to Cisco Week! Q3 FY2011 Earnings

Welcome to Cisco week.  CSCO reports earnings after the close on Wednesday.  When I was on the buy side this meant a mind-numbing night of numbers after a conference call that was too long and too repetitive.  Cisco is an amazing company that provides a far better set of financial data than the vast majority of technology companies.  It is the broad data set provided within CSCO’s results that is relevant for tech.  Here is a partial list of companies that have revenue exposure to CSCO:

Then there are the competitors:

As mentioned in a previous post, the stock has been a loser for the past year.  Of late there has been more positive sentiment on the stock from sell side analysts.  This comes from three sources: (1) management changes (the CEO’s letter to the employees), (2) strategy/product changes and (3) positive channel checks.  The stock has bounced a little and management has set a high bar for this quarter (Q3) and next quarter (Q4).  This is the area I want to explore, because I would contend that the financial data set provided by the company really does help an investor understand the velocity of business within the technology industry.  The company provides backlog data on an annual basis and a product bookings trend on a quarterly basis, and in early 2009 the CEO went as far as to provide monthly product bookings data.  When I say bookings and backlog I am referring only to product bookings and backlog, which does not include services.  Look at the information that the CEO provided on the last three calls:

–         Q4 2010 “From a global perspective, Q4 product orders grew 23% from a year-over-year perspective.”

–         Q1 2011 “From a book-to-bill perspective, while product revenues were very good at 21% year-over-year growth, total product order growth was only approximately 10% year-over-year; therefore, book-to-bill was below 1.”

–         Q2 2011 “Moving on to a more general picture of our products, total product book to bill was greater than 1, and backlog increased. Product orders grew approximately 8% year-over-year, while total product revenue growth was 3% year-over-year in Q2.”

If you go through the financial numbers when they are filed with the SEC (note the results on Wednesday will be preliminary) and take what the company tells you on the quarterly call, it is possible to build a model to predict the linearity and velocity of CSCO’s business.  I have such a model, and I will not share it on this blog.  It distills a decade’s worth of performance data, management commentary, sell side notes, etc. into a viewpoint designed to predict how management will guide the business versus market expectations.  This can be reproduced, especially if you have no life and like to spend many hours reading and calculating numbers on a spreadsheet.  Here is what the model is telling me for Q3, which they report on Wednesday, and for Q4.

The model calculates backlog, bookings, other ratios and average expected values based on the data set Cisco provides.  I am sure the model’s backlog calculation does not exactly match the company’s actual backlog, but the output is close enough to be of value in predicting how the management team will guide the business.  If I look only at the data set for the last 14 quarters, a few statistics are apparent:

  •  On average, the backlog going into a quarter covers 51.47% of that quarter’s product revenue guidance.  This coverage ratio has had a low of 46.28% and a high of 59.20%.  The number varies because management sees the forward looking sales funnel and can predict in-quarter book-to-bill turns – but they do not report the funnel numbers.  These unknown numbers will be in play Wednesday night.  Going into Q3, I think CSCO started the Q with ~$4.094B in backlog.
  • Another metric is to look at product bookings as a multiplier for the product revenue guide, stripping out the services business.  This multiplier typically overestimates the revenue guide because it does not handle deferred revenues well; I simply correct for that in the model.
  • CSCO’s book to bill ratio (product only) has ranged from 0.91 (Q1 ’09) to 1.08 (Q4 ’09) in last 14 quarters.
  • CSCO’s product revenues have never topped $9B in a Q.
  • Only once (Q4 ’08, which was the Q before LEH blew up) has CSCO’s product bookings topped $9B.
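The first metric above can be sketched as a simple coverage ratio.  As I read the model, it is backlog entering the quarter divided by the product revenue guide for that quarter; the sample inputs below are illustrative, not Cisco-reported figures:

```python
# Backlog coverage: the fraction of the coming quarter's product revenue
# that is already sitting in backlog at the start of the quarter. The
# inputs here are illustrative estimates from this post, not reported data.

def backlog_coverage(backlog_start, product_revenue_guide):
    """Fraction of next quarter's product revenue covered by starting backlog."""
    return backlog_start / product_revenue_guide

coverage = backlog_coverage(4.094, 8.7)  # ~$4.094B backlog vs ~$8.7B product guide
print(f"{coverage:.2%}")  # prints 47.06%, inside the historical range
```

The gap between this ratio and 100% is what in-quarter book-and-ship business has to fill, which is why the unreported sales funnel matters so much on guidance night.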

On their Q2 FY11 call, CSCO guided revenues to $10.9B for Q3 and $11.85B for Q4 (I am using the mid-point of the guide for Q4).  This revenue level is a tad aggressive.  Assuming the services business continues to grow by ~$100M a quarter, Cisco is expecting two big back-to-back bookings quarters, or they have deferred revenues coming off the books that they expect to recognize.  In the past 14 quarters, CSCO has missed the revenue line once (Q4 FY10), by 28bps.  That tells me they have an excellent near-term ability to predict the follow-on quarter’s revenue.  If the management team says they are going to print $10.9B for Q3 FY11, I believe them.

What is surprising about the $10.9B number is that it implies a big bookings number in Q3.  I estimate that to get to this number the company needs $9.1B in product bookings, followed by $9.75B in Q4, which is typically their best business quarter.  I think it can be done, but we are assuming that in a quarter of organizational changes, mea culpa letters and soul searching, the company was able to stay focused on the business.  I am going to take them at their word on the Q3 revenue number; I just wonder what they had to do to get the numbers done.  Will product margins suffer?  The two most important forward looking numbers that management will provide for Q4 and Q1 FY12 are (1) revenue and (2) margin trajectory.  The stock will work or fail off these two metrics.  If you are long CSCO, you are making the assumption that management can run the business to achieve a high rate of customer wins with stable margins, which implies the economy is getting better.  If you are bearish CSCO, you think they have negative product cycles in switching and routing, margins will suffer and the company is unlikely to grow at a pace that moves the stock.  See the product revenue by quarter chart; the downtick in switching has been a big concern for investors.
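The implied bookings arithmetic is easy to reproduce.  The services estimate and the size of the backlog build below are my assumptions for illustration, not company-provided figures:

```python
# Back-of-the-envelope check on the implied Q3 product bookings number.
# All figures in $B. The services estimate and the backlog build are my
# assumptions for illustration, not company-provided figures.

total_revenue_guide = 10.9   # management's Q3 revenue guide
services_estimate = 2.2      # assuming ~$100M/quarter services growth
product_revenue_needed = total_revenue_guide - services_estimate  # ~8.7

# If bookings have to cover recognized product revenue plus a modest
# backlog build, implied product bookings land near $9.1B.
assumed_backlog_build = 0.4
implied_product_bookings = product_revenue_needed + assumed_backlog_build
print(round(implied_product_bookings, 2))  # 9.1
```

Put against the historical record above, where product bookings have topped $9B exactly once, you can see why the guide looks aggressive.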

Here are some charts that should make you cautious on the forward expectations:

The Broader Meaning of Cisco’s Results

As soon as the CEO provides some bookings commentary the model I have can calculate the expected guidance for the forward quarters.  Beyond this exercise in buy side / sell side / management alignment, I think that CSCO’s report is going to provide some clarity to three questions:

–         Can the market specific competitors (e.g. APKT, RVBD, JNPR, ARUN, FFIV) continue the momentum against CSCO?

–         Will technology spending in general see a positive or negative inflection going forward?

–         Has CSCO stopped the negative product cycles (e.g. switching) and momentum?

If the forward guidance is soft and the product segment data looks poor, I would say it further supports the theory that focused competitors are hurting CSCO, and that is good for FFIV, APKT, RVBD, JNPR and ARUN.

Reaffirming or increasing revenue projections would imply that the US and maybe the global economies are getting stronger, and that caution is warranted on being long CSCO’s competitors.  The argument can be made that a rising tide lifts all boats, but some consideration must be given to competitors eating CSCO’s lunch in specific markets and to whether CSCO has decided to defend and take market share.  Margins will be an indicator.  Net/net…I wait to hear what management has to say before I make my choices.

Any lower guidance on revenues or poor margin guidance would be bad for the stock.  That would mean that you want to be long the CSCO competitors and that the management team needs to go back into the Telepresence chamber and figure it out.


** It is all about the network stupid, because it is all about compute. **