Working Thoughts on SDN #2

I was in CA earlier in the week to attend the JP Morgan SDN conference in Palo Alto.  To start I would like to thank Rod and the JPM team for the invite.  A number of companies presented (Arista, Cisco, HP, Vyatta, Big Switch, Ciena, Embrane, Vello, Cumulus, Cyan and Brocade) and several startups were in the audience: Insieme, Plumgrid and Plexxi.  Rather than reprint my conference notes, I thought it might be more useful to describe the discussion points, debates and themes I found interesting from my perspective as an observer.  I have paraphrased the comments from various speakers; my observations below are not exact quotes, but thematic summaries of the discussions.

Attendance: Most of the startups or young companies sent their CEOs to speak, including Arista, Vyatta, Big Switch, Embrane and Cumulus.  Juniper was not present.  Cisco sent a Sr. Director level person, while Brocade and HPQ sent CTO types.  I am not sure if there is anything to read into that, but I noted who was participating.

OpenFlow: There was a lot of discussion around the next version of OpenFlow.  Dan Pitt (Executive Director of the ONF) and Guru Parulkar (Executive Director, Consulting Professor at Stanford) both spoke on the direction of OpenFlow and the ONF community.

Controller Market Share: Just listening throughout the day I got the sense that Big Switch (BSN), HPQ, CSCO and JNPR (recent press article confirmed by a friend) are all building controllers.  I am not sure if everyone's controller is based on OpenFlow, but Google also built a controller, which I am sure is for internal use, and I know they contribute to the OpenFlow working group.  I listened to Guido Appenzeller, CEO of BSN, speak and he made some interesting points about controllers.  He contrasted BSN with Nicira by saying that Nicira was overlay tunnels for VMs/Hyper-V, while BSN was focused on three elements of SDN: (i) hardware (HW) switches, (ii) an open source controller, which is Floodlight, and (iii) apps.  He commented that Floodlight was ahead of all controllers and that the market (i.e. end-users) is going to want an open source controller and will not be locked into a single controller from a single vendor.  The BSN controller is being architected for service providers and enterprises, and he believes the SDN market will evolve along the lines of Linux.  BSN is not a hardware company, so they are not focused on switches.  BSN is focused on the controller for the good of the community and plans to build its business around the applications using SDN.  They want to be the App Store of SDN networks.  He commented that the first phase of SDN is overlays on existing brownfield networks, but soon there will be a new hardware/switch layer that enables the second phase of the SDN evolution.  In a later panel, Jayshree of Arista commented that there might be application specific overlay controllers.

App Stores for Controllers: When I hear people describe this view of SDN it always involves applications like load balancers, firewalls and routing.  I think of these things as network services and I do not think that network services belong in App Stores and I am not convinced they have anything to do with SDN.

When Does SDN Matter: Everyone agreed that SDN is starting now, that it will accelerate, and that it will reach critical mass in three years, which will mark the start of a sustained period of adoption and growth.  Dave Stevens, Brocade's CTO, commented that SDN does not reduce the power of incumbency and that the big stacked networks built over the past twenty years are complex and really hard to switch out.  I would comment that this is exactly the problem statement for SDN.  He commented that SDN is about automation and that many of their customers are asking what SDN means to them.  Brocade's plan is to partner for SDN.

VXLAN: There was lots of chatter on VXLAN.  I understand it, I just do not get the excitement.  Wrapping something in more of something is not really innovation.  If you get excited about VXLAN, you probably think fondly of the following terms: Multi-bus, IGRP, AGS, RIP2 and you have not missed a standards body meeting in a few decades.
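To make concrete why I "understand it" but do not find it exciting: VXLAN is just eight more bytes of header wrapped around an Ethernet frame inside UDP.  Here is a minimal Python sketch of that header per RFC 7348; the function names and example VNI are mine and purely illustrative:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348: a flags byte with
    the I bit (0x08) set, three reserved bytes, the 24-bit VXLAN
    Network Identifier, and one more reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!B3x", 0x08) + struct.pack("!I", vni << 8)

def parse_vni(header: bytes) -> int:
    """Recover the VNI: the upper 24 bits of the final 32-bit word."""
    return struct.unpack("!I", header[4:8])[0] >> 8

# The entire "innovation": prepend these 8 bytes (plus outer UDP/IP/MAC
# headers) to an Ethernet frame and you have a tenant overlay segment.
hdr = vxlan_header(5000)
```

That is the whole trick — an ID field plus reserved bits — which is why the overlay model around it matters more than the encapsulation itself.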

SDN in WAN: There were three transport focused companies at the conference: CIEN, Cyan and Vello.  Ever since the Google OpenFlow WAN presentation at the 2012 ONS, there has been a lot of interest in how SDN fits in the WAN.  The VP of Engineering for Vello commented that transport is complex and resources (i.e. fiber?) are scarce, and cited the Google OF WAN.  They are focused on DC-to-DC interconnects.  The CTO of Ciena (an old colleague from my CIEN days) commented that the problem SPs have is how to get a lot of capacity to many locations cheaply.  He described this decade as the M2M decade, coming off the mobility and internet booms of the prior decades.  He also made some other comments that I would agree with: (i) if SPs can separate the computation of topology from path calculation, that could become very powerful for carriers, and (ii) the SDN proposition for the carrier is the ability to have levers that change the underlying wires of the network more than once every 3-7 years.  Imagine if they could change them many times a day!  That last sentence was a veiled Plexxi comment by me.

SDN for High Capacity Infrastructure: One of the speakers I was most impressed with was J.R. Rivers of Cumulus Networks.  I had never heard him speak before.  He was part of an SDN panel with three other companies, but had time to make some interesting comments.  He said that Cumulus is focused on high capacity infrastructure.  He talked about building DCs in which servers are made up of communities of systems, and about how these communities peer with other communities in the DC.  He cited an example of how a community of ten servers might work together, noting that there are many apps in a DC and that clusters of machines want to work with apps in other clusters.  He thinks SDN is a way to deploy apps in the DC and that enterprises are in the proof-of-concept phase for SDN.  He commented that SDN might become the SLA for the application in the enterprise.  He does not think that SDN obsoletes hardware; rather, SDN allows the network to run different parts of software on different end points.  SDN is not path computation, it is connectivity: SDN enables virtual and physical machines to communicate, and in totality this becomes an enabler for SLAs to move into the overall network system.

Panel on Switching and Networking: There was an interesting fireside chat with Arista, BSN, Brocade and HPQ towards the end of the day.  Jayshree of Arista views SDN as a seminal moment in networking in which the orchestration of the network is improved, and this translates into OPEX savings.  Guido commented that controller architectures will evolve and we will see the first generation of centralized controllers give way to a distributed controller model.  There might be one logical controller and a distributed set of computational controllers, like a Hadoop cluster.  There was a discussion about the development of controllers of controllers and application specific controllers for overlay functions; the consensus was that there is no good solution for the controller of controllers yet, and that this is probably a good PhD thesis waiting to be written.  There was unanimous agreement that the vSwitch is a strategic point in the network.  With all the SDN talk, HPQ pointed out that not everything is about SDN: managing the legacy network is just as important, and customers want a single pane of glass to manage the network.

Cisco Systems Meme: Throughout the conference there were discussions on the future of Cisco and what the effect of SDN on Cisco will be; this is kind of what happens when you run around the world with a presentation entitled “Cisco and the Seven Dwarfs” from eight years ago.  This has not been a topic I have been shy about, but there was a neatly packaged thesis that made sense to me from several of the CEOs of SDN startups who were former Cisco executives.  The thesis is that Cisco has grown up to be IBM.  They develop their own ASICs, they package their ASICs/hardware with their closed software and sell it as a system at very high margin.  Think IBM mainframe.  The only way to beat Cisco is to be cheaper, better and faster.  This was clearly the strategy of Arista.  Along comes SDN, and it is going to unleash the value that is contained in the closed system.  I once wrote a book on what happens when you deregulate a market called telecom, and clearly many of the SDN startups are looking to deregulate Cisco’s market share.  The companies that are building SDN solutions see Cisco as the IBM of the 1980s and they want to unlock the value in the closed system.  In one way this makes sense to me, as I started my technology career selling multi-protocol bridges and routers into SNA networks.  I was told as a young salesperson that no one gets fired for buying IBM and that to beat IBM you need to be cheaper, better and faster.  I know today that no one gets fired for buying Cisco.  This was reinforced just yesterday when I got an email from the CEO of a cloud provider who told me he was not interested in any alternatives to Cisco because they are a Cisco shop and Cisco now has SDN.  The caveat with this story making sense to me is that I might be falling victim to the power of stories and legends.  😦


June 2019 Essay: The Changing Structure of Technology Companies

Working through a framing exercise, I connected a series of cross-market observations to several historical corollaries to produce the following hypothesis: there will be an acceleration of change in corporate structures, with emphasis on technology companies.  The change will be broadly impactful, including on the private-equity model that has become integral to the technology market segment.  The resulting reversal from offshoring to onshoring will have significant effects across the technology industry.

Does SDN, DevOps and Agile Infrastructures Require an Abandonment of Taylorism?

I posted last week about a sales call gone wrong and an innovator’s dilemma moment.  Since that time I have had additional customer and internal engagements that caused me to think about what I call institutionalized impedance, which might be more familiar to a broader audience if I called it Taylorism or scientific management.

Echoes from Our Past #2: The Long Tail

As with my first post about Antietam and the Vacant Chair, I have started to weave some creative writing into my technology and business focused blog.  If it is not for you, please disregard.  I am writing this post from the reading room of the Norman Williams library in Woodstock, Vermont, which was built in 1883.  Outside the leaves are in full color and on the town green there is a chili cook-off contest.  More than a decade ago, I started writing a book on my experiences reenacting the American Civil War.  I was motivated to write it because I had read Confederates in the Attic; I knew some of the people in that book and had been to the same places.  Writing a book requires time and concentration.

Looking at the Big Picture and Connecting the Dots

I am sitting at my desk looking at several reports on various technology companies.  I think investment research, especially sell-side research, is often wonderfully optimistic about the future.  Thematic reports on new technologies and market adoptions are almost universally bullish.  No firm writes an investment research report entitled “Cloud Computing Set to Disappoint”; rather, the investment community gets 30-40 reports on cloud computing being the next big thing.  Apple is a pretty simple company from a product and business line perspective.  They have six product lines, and Wall Street has 40+ analysts who cover the stock.  I understand the need for a firm to have an Apple note to use for marketing, but with 40 analysts plus another 20 or more boutique research firms we are talking 60, 70, maybe 80 reports on Apple.  Many firms publish weekly notes on Apple.  Do we really need so many reports, and is there anything relevant or differentiated in them?  But I digress from the topic at hand.

I like to go through what I call a framing exercise.  Readers of the T&G blog from 2006-2007 know where I am going.  I like to lay out a number of facts as well as suppositions and draw some conclusions.  Here are a few frames:

05.09.11: David Yen, EVP and GM of Juniper’s fabric and Switching business unit leaves for CSCO.

05.11.11:  CSCO Reports Q3 FY11 Earnings:  Cisco is one of the first companies in tech to report on the month of April and the first to give expectations for May-June-July.  Stock down big on the forward view of the business.

05.14.11: Barron’s positive on JNPR.  JNPR stock @ $39.12 on the Monday 05.16.2011 close.  It closed at $29.91 yesterday.

05.17.11: HPQ reports and lowers forward guidance.

05.25.11: Auriga downgrades JNPR due to slowing enterprise business and Openflow concerns.

05.27.11: Transmode (TRMO) prices IPO

06.01.11: JNPR CEO is reported to offer cautious comments on a backend loaded quarter at the BofA/ML Conference in NYC.  The business pace of the quarter that the CEO spoke about had already been reported by Sanjiv Wadhwani of Stifel on May 31.

06.09.11: CIEN Reports Q2 FY11 Earnings: CIEN is reporting an April quarter end, heavily levered to service provider CAPEX and guiding for the months of May-June-July.  Ciena missed street expectations and lowered their guidance as well.

Catch-up Spend: I wrote about the catch-up spend several days ago, but it is even clearer when you lay out the operating models for a good sample size (~15) of companies in the technology market.  The spending catch-up (started Q2 2009) from the credit crisis spending collapse is over.  Many companies benefited from this event because they cut their operating models in the 2H 2008 expecting a prolonged downturn.  The downturn was deep but brief, lasting 2-3 quarters, and then spending came back and accelerated into the 2010 year end.  Many good things happened because of this 21-24 month recovery: (1) margins recovered and then (2) earnings peaked, (3) the IPO window opened for companies like CALX, TRMO, Avaya, (4) M&A returned with a vengeance: DDUP, STAR, COMS, CIEN/NT, CALX/OCNW, Telcordia (today), etc., (5) large public companies refinanced debt and (6) private companies were able to fund.

My supposition is that the negative impact of the spending cycle catch-up is that it fooled companies as to the direction of the market.  Because of their size and global reach, CSCO is a key indicator in the technology world and in some ways the tip of the sword.  They had it all figured out.  They were going to have ~15 billion+ business units.  They went aggressively into the downturn planning to use their balance sheet to take share.  For a while it worked, and they put up the best numbers in the company’s history in the 1H of 2010.  When they saw the underlying business deteriorating, they tried to buy their way out: STAR, Tandberg, etc.  Here are two charts showing CSCO’s quarterly revenue trends.

The onset of the global credit crisis enabled CIEN to buy Nortel’s Optical business unit, all because NT’s North American business unit went bankrupt after the company kept too much capital offshore and the tax to repatriate it made funding the business a challenge.  CIEN was able to correct two significant errors with the NT deal: (1) they had completely missed the 40G/DQPSK cycle, preferring to focus on 100G, and (2) there were a large number of global service providers that were not CIEN customers.  Now both of those errors are fixed at a cost of $1B.  My point is that CIEN is now set in their product plans.  They know who they are, and their ability to alter their product offerings is limited because they are going to be really busy for the next 30-36 months consolidating their customer base and sorting through the product roadmaps of two companies.

Looking at the companies missing in the 1H of 2011, it appears the miss is 7-9% from plan.  Meaning the market is growing, but it was linear from Q4 2010 to Q1-Q2 of 2011.  I think this is a re-composition period.  The spending catch-up is done; now consumers of technology are putting more thought into what is next.  If you accept the emerging three pillars of (i) centralized control-plane, (ii) software defined ecosystem and (iii) lower priced storage-compute-switching power pack, then I think several companies will have challenges.

Technology leadership in the network is coming from unfamiliar places.  I wrote about this recently.  I am interested in understanding how these new leadership entities are affecting spending for 2012 and what new technologies they are adopting for specific applications.  What spending entities will lead the adoption of the three pillars?  Some of this is a cloud discussion, but I really dislike the term cloud.  To me the data center/cloud evolution is four segments:

  1. Warehouse scale (GOOG, AMZN, AAPL, MSFT, Defense/Intelligence)
  2. High performance compute (HPC) clusters (Financials, Energy, R&E and select enterprises)
  3. Private DCs (Bulk of the market who can afford to own their own DC and will outsource small portions for replication etc)
  4. Hosted DCs (SMB market)

The data center/cloud market is going to be big, but I think there are multiple points of entrance and multiple strategies to employ. Here is my short summary of access to each of the four segments:

  1. Warehouse Scale: As I discussed before, this is the evolution to all-photonic switching, starting with the removal of the aggregation switches, then the TOR switches, and eventually the blade-to-switch connections; OpenFlow is important here.  We need to focus on how the network element interfaces into the virtualized I/O on the blade server.
  2. HPC Clusters: This is another place in the network where technology changes are afoot. This is your higher end, high transaction compute centric enterprise.
  3. Private DC:  I think this is the bulk of the market.  Most companies can afford to own their own data center and will continue to do so, but outsource varying elements to cloud providers.
  4. Hosted DC: This is the long tail of the market.

When I read research reports and transcripts of earnings calls, I am not surprised to see CxOs and analysts using expressions like “history repeats itself,” or “back in 2001 or 1994 we did X, so this time around we are going to do X because X worked then.”  That is a bunch of crap.  History does not repeat itself.  We say and think those things because we are often lazy and it makes us comfortable to associate events in the present with memories from the past.  There are five behavioral characteristics we see in the market as well as in corporate leadership:

  • Need for Thought Anchors: The process of making decisions is most comfortable when we can associate the decision with that which is familiar.  We tend to justify a decision by saying, “this is just like when we made the decision to do X last year.”  We begin with what is familiar and try to associate the future with the past.  John G. Stoessinger described this action as image transfer because “…policy makers often transfer an image automatically from one place to another or from one time to another without careful empirical comparison” [see Nations in Darkness, John G. Stoessinger, 1971, page 279].  Dr. David D. Burns described this behavior as “all-or-nothing thinking” or “labeling” behavior [see Feeling Good, David D. Burns, 1989, pages 8-11].  It is much easier for humans to build their decision making process around thought anchors than to pursue unfamiliar thought positions.
  • Power of Story and Legends: Decision making based on story and legend often motivates more than facts and research.  We are emotionally driven people.  That is why revolutions are emotionally draining events.  The power of story and legends is the act of basing decisions on simple reasons and not investigating and understanding the complexities of the decision criteria.
  • Overconfidence and Ignorance: There is a tendency for humans to be overly confident on subject matters on which they are enormously ignorant.
  • Herding Behavior: The power of the crowd.  The power of revolution.  In the absence of facts, or when facing a daunting amount of work to justify a decision, it is easier to follow the crowd.
  • Denial – Failure to Accept Mistakes: Humans rarely admit, or learn from, their mistakes.  Mistakes are a natural byproduct of being human.  Admitting and learning from mistakes is an unnatural human activity.

Putting it all together I think:

  • The catch up spend is done.  A number of companies are starting to realize they made some architecture mistakes and failed to realize the effect of the three pillars of (i) centralized control-plane, (ii) software defined ecosystem and (iii) lower priced storage-compute-switching power pack.
  • A number of big companies got trapped by the catch-up phase, like CSCO, JNPR, CIEN, etc., and they are now in negative product cycles.  I experienced this at XCOM in the early 1990s against CSCO in the switching market and it really sucks.  You never have enough capital to spend your way out; it keeps deteriorating.
  • The leading edge of the market is beginning to move.  You want to be building in the leading edge of the market.  Companies with nimble product development teams are already focused on the emerging entities affecting the technology selection.  I like to think of technology selection as Darwinism.  Anytime you hear a person talk about Darwinism and survival of the fittest they are quoting urban myth.  Darwin’s theory was about selection – not survival. Going forward technology selection will choose the winners – the biggest is not necessarily a survivor (ask DEC, WANG, Nortel, etc).

If you are a generalist PM, I would guess you have no idea what I just wrote in the prior 1800 words.  Most sell-side analysts are probably scratching their heads thinking I am nuts and wondering what this has to do with their price target and forward multiple on their outperform rated stocks.  Any executive reading this in a networking company is wondering how I know what their forward sales funnel looks like and what customer feedback has been of late.  History is not repeating itself.  The technology ecosystem is changing and evolving every day, and the underlying forces driving that change are coming from unfamiliar places.  We can all jump with the herd on the cloud rush.  We can believe in the stories of the internet breaking and exafloods, or we can accept what is really going on.  By the way…did FB just peak too?  I would not know; I left it more than a year ago.


* It is all about the network stupid, because it is all about compute. *

** Comments are always welcome in the comments section or in private.  Just hover over the Gravatar image and click for email. **

GOOG: The Day After

I was speaking to the CEO of a networking company last night and he surprised me with the comment that he used to read my blog and thought highly of it.  I have not blogged in four years.  Now I am back blogging because I have some time to commit to it.  I have written blog posts that disagree with the notion that blogging should be a live-in-the-moment, extemporaneous thought stream.  I attempt to be thoughtful and compose original pieces.  I am not sure how long I will sustain this blog, it could be a matter of days or months, but I thought this was a good opportunity to read something I posted in 2006 and see if it still made sense in 2011.  Below this post is my unedited 2006 post on Google.

Today is an apropos day to write about Google, as they reported earnings last night.  A number of disappointing metrics have clipped the stock today.  There is nothing wrong with Google.  It is a fantastic company – they just make so much money that it is hard for their other businesses to live up to the success of the core business.  One VC commented to me recently that he thought Google funds all sorts of wild, fantastic projects and people because they make so much money it would look obscene if they just spent what was needed to run the core businesses.

Here are my thoughts on Google post the report.  If you want to see a company that is going to kill RIMM, it is Google.  I would take the long GOOG, short RIMM trade all day.  In many ways, Google is one of the critical computing engines of the internet.  If you want to be long GOOG, what should you expect the stock to do?

If we look at a weekly chart that removes the distraction of daily swings and assume that the 2007 run to the peak and the 2008 run to the bottom did not occur, then the stock is actually showing a topping pattern.  See the five black lines of higher highs; they are starting to lose their step function.  Barring a major market sell-off, I think the stock is going to become range bound in the $505-625 range.  The good news is I think the recent management change is a reflection that the stock, given what we know about the company, is fully valued.  I would be a buyer and holder long term because I think the management team (1) knows this and (2) long term I like their position in a compute centric internet world.

– GC

Friday, September 29, 2006

Thinking About Google, Do they Fit into a Peer Group?

How would you classify or label Google as a company? Are they an ASP, Search Engine, Services Portal, New Millennium Service Provider, Infrastructure Provider, Software Company or something else? Google receives a large amount of press and even has nice 60 Minutes fluff pieces done on the company, wherein we learn that the smartest people on the planet are lovingly clustered in one location ensuring that we can have immediate access to information regarding the nearest pizza shop and Starbucks location.

Google defines their company mission as: “Google’s mission is to organize the world’s information and make it universally accessible and useful.” [Source:]. This mission statement is accurate when you accept the premise that the Google business model is really a function of their user base and their ability to connect users with advertisers. By this definition, growing the subscriber base means having presence in demographic markets with large populations, consumer level wealth and connections to the web, whether wired or wireless. I like Google’s mission statement, but I always think that a better mission statement for a public company would be: “We are here to enhance and grow shareholder value through an ethical application of our business model.”

Critical to the success of Google’s business model is having relevant content and a recognizable brand. I wrote about the content portion last week [see], thus we only need to cover the brand portion. Google as a brand is a core element of their success. If Google were not a brand at the consumer level, they would not be a preferred destination on the web and would not be generating vast amounts of advertising revenues.

What does Google do well?

Google does three things well. (1) They have an excellent search engine, built initially around the concept of ordering an information database by using source citing and links to data. Think back to the concept of writing a graduate thesis and citing sources. As information evolves, papers and theories evolve from a base of existing knowledge. Credible academic papers cite sources and previously published material. Google’s initial search engine started with the concept of ordering information based on relevancy, measured by links or references to the content from other sites. (2) Google figured out how to monetize the information in their library. By separating sponsored information from ordered or non-sponsored information, Google actually created a space to brand and value. (3) Google does services very well, and the core service is what Mary Meeker termed an “on demand customer acquisition tool.” I heard Mary describe Google in this way at the Warburg-Pincus Information Technology Conference in May 2006. Mary went on to state that Google provides “…advertisers and vendors with a toolset, dashboard to manage and measure customer acquisition through sponsored search.” As evidence of this point, Meeker used the Q4 Google numbers (now somewhat dated) that revealed Google generated $1.9B in gross revenues and paid out $629M to partners.
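The relevancy-by-citation idea in (1) is what became known as PageRank. A toy power-iteration sketch over a hypothetical three-page web makes the mechanism concrete; the graph, damping factor and iteration count here are illustrative assumptions, not Google's production algorithm:

```python
def pagerank(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
    """Rank pages by incoming links: each page repeatedly shares its
    current rank across the pages it cites, mirroring the academic
    citation analogy above."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            targets = outs or pages          # dangling page: share evenly
            share = damping * rank[p] / len(targets)
            for q in targets:
                new[q] += share
        rank = new
    return rank

# Hypothetical web: page "c" is cited by both "a" and "b".
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

Run it and the most-cited page ("c") ends up with the highest rank, which is exactly the ordering-by-reference the post describes.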

What Google does not do?

Google is not an infrastructure company. They have a number of data centers built around a concept that would find a familiar home with the American and Russian armies of the Second World War. Instead of buying the best server and storage technology available, Google builds massive server farms using the principle that cheap and numerous is better than a few high-end systems. A typical Google server farm ranges from 3000 to 5000 servers (enough to keep it running and support the traffic load) with ethernet switches and routers as necessary. Google believes the value of their network is in the software that ties servers together – not in the hardware itself. They then take this server model and replicate it as required. Each server farm is built on the assumption that a number of servers (~25) can be out of service at any one time without affecting service. To keep costs down, Google buys off-the-shelf servers from any of the computer suppliers (e.g. Dell, HPQ, IBM, etc) offering the best price. The real cost of a Google datacenter is not in the hardware, but in power. Power is a significant portion of their cost, as power is required to operate 5000 servers as well as cool the heat output of 5000 servers. Google thinks a lot about power consumption, as seen in their recent paper submitted to the Intel Developers Forum (IDF) [see].
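The "~25 out of 5000 without affecting service" design point can be framed as a simple binomial calculation. A sketch, assuming independent failures and a hypothetical 0.2% per-server downtime probability (numbers mine, not Google's):

```python
import math

def prob_within_outage_budget(n: int, p_down: float, k: int) -> float:
    """P(at most k of n servers are down at once), with failures
    treated as independent Bernoulli events."""
    return sum(math.comb(n, i) * p_down ** i * (1 - p_down) ** (n - i)
               for i in range(k + 1))

# 5000 cheap servers, each hypothetically down 0.2% of the time:
# expected simultaneous outages = 10, so a budget of 25 is rarely blown.
p = prob_within_outage_budget(5000, 0.002, 25)
```

With these assumed numbers the farm stays inside its outage budget well over 99% of the time, which is the arithmetic behind betting on cheap-and-numerous hardware plus software rather than a few high-end boxes.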

A few years ago, Google did not have a distributed datacenter architecture, and the exogenous shocks of 9/11 and Katrina affected their thinking in terms of network architecture. The increased number of Google data centers, as well as the need to deploy smaller localized data centers for regional or localized content, has created the need to ensure that there is enough bandwidth and resiliency in their network connections to load level and provide cache updating across a distributed data center network. This is a scaling challenge caused by ever increasing amounts of content and the need to organize and locate it, as well as to educate other Google servers on where to find information.

What is important to Google?

Setting aside the three things Google does well, the next most important consideration for Google is the health of the network and the deployment of ever increasing amounts of bandwidth in the global internet. The first five to ten years of the internet have often been called Web 1.0 or the dialup years. The internet in the mid-1990s looked more like an SNA network from the early 1990s than the basis for a new economy. Post the technology/telecom crash, the new internet excitement has centered on Web 2.0 and the deployment of a broadband version of the internet. For the past several years, the service providers have been in the business of upgrading the internet from dialup to broadband. This created a new set of challenges, which can be covered in another discussion, but the evolution from dialup to broadband is an important health measurement of the overall technology/telecom ecosystem for Google. If Google is to maintain a healthy business model, then driving more users (i.e. consumers) to their servers and connecting these users with content and advertisers is the core function of the company.

As content consumes increasing amounts of bandwidth, the speeds, capacities and methods of internet access are an important consideration for Google to monitor. Google has seen massive growth in Google Talk and Google Earth, two applications that consume a large amount of bandwidth. This is the reason they are trying to deliver the performance and the significantly superior sound quality of Google Talk (i.e. bandwidth) over mobile phones and land lines. [see the recent article on]

Another area of importance to Google is key alliances with content delivery systems such as AOL. In the war between Google and Microsoft, we can see two companies that are increasingly in conflict with each other, each coming from a different domain expertise. Microsoft’s domain expertise is the operating system and the applications that run on it. Google’s expertise is cataloging the content on the web and connecting users with advertisers through relevant content. If you make the assumption that the future OS is network based – not system based as in the past ~25 years, then a real paradigm shift is occurring. In this war, Google and Microsoft both view service providers of all types (i.e. wireline, mobile, ILEC, CLEC, PTT, RBOC, cable) as companies whose competitive offerings increasingly overlap, and as important connections with whom a relationship must exist to ensure access to the end users. Clearly the moves by Microsoft in its relationship with at&t (formerly SBC) and the handset providers are a concern to Google. Google wants to see the world of devices used to access content move away from device-centric operating systems, while Microsoft wants to seed more end-user devices (e.g. computers, mobile phones, set top boxes) with a Microsoft OS. It is important to remember that this is not the first time Microsoft has engaged in strategic initiatives in the broader technology ecosystem. The following text is from my book (yes, I know, cheap promotion, but it is easier to cut and paste than to type for an hour), Six Years that Shook the World, pages 100-103 [see]:

In the time period from 1993 to 1994, Microsoft was consumed with readying one of the most significant product launches in the company’s history as well as developing an online service to rival AOL, Compuserve, and Prodigy. The impending new product launch would be a significant driver of Microsoft revenue and would secure Microsoft’s position as the dominant supplier of operating systems for the PC market. This product was called Windows 95, and in August of 1995 when the product was launched, it was difficult to avoid the global marketing campaign developed by Microsoft around the Rolling Stones’ song, Start Me Up. It was just a few short months before the launch of Windows 95 that Microsoft made a commitment to the web. In April of 1994, Bill Gates wrote an important memo to the key leaders of the company entitled, “Sea Change.” In this memo, he outlined the importance of the web and set Microsoft on a rapid development plan to integrate a browser into the Windows 95 product [see Speeding the Net, Joshua Quittner and Michelle Slatalla, 1998]. Before the April memo from Bill Gates, the web was not a top priority for Microsoft and its best resources were not aligned to capitalize on the emergence of the web. Although Microsoft did start work on its browser in late 1993, it was not a top development priority within the company. Following the launch of Windows 95, the MSN Network, and Internet Explorer 1.0, Microsoft began some initial investments in the internet revolution, but the company was still focused on the anticipated revolution in the consumer market of television. As part of their investment in the internet, Microsoft became an investor in UUNET in 1995, but the main investment focus of the company was in the area of interactive television. Microsoft envisioned a world in which the personal computer and television would become fused. 
This belief guided Microsoft to be an early investor in next generation cable television as well as to be a creator of broadcast content. Three significant cable and broadcast media investments set the foundation for what would later become more than $10 billion in follow-on investments in the network infrastructure of cable television to carry high-speed internet data and enhanced video services.

The first major investment that Microsoft made in the cable industry was in TCI. That same TCI was the first customer of @Home. Microsoft’s next investment was in a company called WebTV that was developing a form of interactive television. The idea behind WebTV was to provide internet capabilities to a cable television set top box for the consumer who wanted internet access, but did not own a computer. Microsoft eventually acquired WebTV, on April 6, 1997, for $425 million. In the press release announcing the acquisition, Bill Gates said, “This partnership with WebTV underscores our strategy of delivering to consumers the benefits of the Internet together with emerging forms of digital broadcasting.” Before the acquisition of WebTV, Microsoft made a major move into broadcast media, well before anyone ever envisioned AOL buying Time-Warner. In 1995, Microsoft launched Windows 95 and started a broadcast network with NBC that would become known as MSNBC.

 1995 was just the beginning for Microsoft and the cable industry. The next four years would see Microsoft invest nearly $10 billion in the cable industry worldwide. The first big deal announced after 1995 was a $1 billion investment in Comcast in June of 1997. Bill Gates said of the deal, “Our vision for connecting the world of PCs and TVs has long included advanced broadband capabilities to deliver video, data and interactivity to the home. Comcast’s integrated approach to cable distribution, programming and telecommunications complements that vision of linking PCs and TVs. Today’s announcement will enhance the integration of broadband pipes and content to expand the services offered to consumers.” Brian Roberts, then president of Comcast, said of the investment, “I am pleased to have Microsoft’s participation as we shape and advance the integration of the PC and the TV. Microsoft’s investment is a strong endorsement of Comcast’s vision to use its cable networks as a broadband vehicle to homes, schools and businesses. Comcast’s customers will be the beneficiaries of the innovations that America’s most advanced computer and cable companies can offer. In addition to a significant cash infusion, this investment gives us access to Microsoft’s expertise, which will help us facilitate the deployment of high-bandwidth applications and lead to more sophisticated services.” Microsoft was making a major play for the interactive television and internet market accessed via cable networks. At the May 1998 NCTA show, Bill Gates said of the internet and cable industry, “By early ’99 we should be rolling out hundreds of thousands, even millions of set-top boxes that combine PC technology with these Internet connections. Now, the information that people will deal with will be in many different places. You’ll have a pocket-sized device that you can take with you. You’ll have your pager or telephone. You’ll have your intelligent set-top box. 
And you’ll continue to have PCs that you keep in your den, or that you have as portable devices, or that you use at work. Now, through all these devices you’ll want to get at the same information. And the value of having that information online will continue to increase. It’s really stunning, if you go out on the Internet, to see all of the things you can find out there. You can see what’s going on in Congress. In fact, whenever you go and browse a news site, if you’ve provided your zip code, it automatically appends onto any news stories about the Congress, specific information about how your representative voted. There’s even a link that’s included now, where if you disagree with what they did, or if you want to provide feedback, you simply click, and you can provide electronic mail to your representative. And so we’re going to get interactive democracy, letting people participate in new ways. Electronic commerce across the Internet is also exploding. Companies like are achieving very high valuations, as people see the incredible growth there. Whether it’s finding books, finding records, booking travel, all of these things, the interfaces continue to improve. And I think that a substantial part of all of those activities will be done over the Internet. And therefore, give the cable industry a chance to participate, participate in the transaction fees, and participate in owning the companies that are going to make this happen” [see, Bill Gates, NCTA ’98].

 The one billion dollars invested in Comcast was just the beginning. Microsoft next invested $212.5M in Time-Warner’s Road Runner cable modem service. Time-Warner was one of the few cable service providers that did not use the @Home service, but rather built its own cable modem service. A look at the June 15, 1998 press release finds two typical ITO Revolution quotes from the two cable company CEOs involved in the deal. Gerald M. Levin, CEO of Time-Warner, said “Today’s investments by Microsoft and Compaq validate the cable architecture as a premier Internet distribution medium, which will benefit consumers nationwide. This combination of world-class companies will enable us to develop a powerful, branded package of content that will become the high-speed online service of choice for our customers. Microsoft’s and Compaq’s expertise complements perfectly the strengths of Time Warner and MediaOne.” Chuck Lillis, who was chairman and CEO of MediaOne, said, “With this combination of industry leaders, the venture will be well-positioned to rapidly deploy a wide range of high-value content and services to our customers. As network-based services become ever more integral to our lives, providing a network that will allow us to ensure the performance, connectivity and interoperability to any network in the world will be critical. This venture provides the ideal platform to build both the online service of choice and the network.”

 As Microsoft pushed into 1999, their investment in the cable infrastructure as a broadband delivery mechanism increased. Interactive television, the internet, and the legacy service of telephony became targets for Microsoft money. In January of 1999, Microsoft made a $500 million investment in NTL, the largest cable provider in the United Kingdom. Barclay Knapp, then CEO of NTL, said, “Microsoft believes in our vision of bringing advanced digital Internet, telephone and television services to consumers and businesses throughout the UK via all platforms. NTL’s pioneering marketing, network and back-office resources, coupled with Microsoft’s world leadership in personal computing and digital television, will make for a great combination.”

 Following the NTL investment, Microsoft made two international cable investments and one of the largest single investments in the company’s history. The two international cable investments were in the cable arm of Portugal Telecom (PT) and Rogers Communications in Canada. The investments were $38.6 million in PT and $400 million in Rogers, which is the largest cable operator in Canada. On July 12, 1999, Microsoft and AT&T announced a $5 billion investment by Microsoft in the cable business of AT&T, known as AT&T Broadband. In the press release, Bill Gates was quoted as saying, “Our agreement today represents an important step in Microsoft’s vision of making the Web lifestyle a reality. Working with AT&T, a leader in the delivery of cable and telephony technologies, we will expand access to an even richer Internet and television experience for millions of people.” In turn, AT&T used the proceeds from this transaction on capital expenditures to improve their cable network. This infusion of capital to build their own network occurred four months after the merger of @Home and Excite.

Beyond the health of the internet and relationships with content providers, Google looks at broader markets to drive more users to their site. With this in mind, finding successful strategies to commercialize their revenue mechanism in Asia-Pac and Europe is important. Google wants to promote broadband deployments, whether wireline or wireless, in large demographic markets such as those in Asia-Pac. Anytime large demographic populations with increasing disposable wealth have broadband access to the Internet, Google thinks they can make their new age advertising model work.

 Where does Google fit?

With Google shares priced at $403 (09.28.06 closing price), they receive a lot of press coverage regarding their share price and valuation. A good example of this coverage can be seen in this entry from the 24/7 Wall St. Blog [see]. Google is valued much more highly than , and . Does this mean these companies are not in Google’s peer group, or is Google simply a much better company than these peers? I think the answer can be found in two parts. Part 1 is that Google has clearly produced a better mechanism (i.e. engine) to capture advertising revenue. The metric the stock market is seeking is the ceiling capacity of Google’s advertising revenue engine: knowing the peak revenue-generating capacity would frame the valuation of Google. The second answer is that Google wants to be something else, without disrupting the current business model. They try not to move too far from their core competency, but realize they need to develop a more sophisticated business. I am using the term sophisticated to mean evolving their revenue sources beyond advertising. This goes back to their strategic relationships with service providers of all types, the development of applications in a network-centric computing model, and how this paradigm shift (if it occurs) can be monetized to add to their advertising revenue stream or hedge against a decline in advertising revenues. Google is valued as it is today because the stock market does not know the extent to which they can grow their advertising revenues, and there is a component in their valuation that accounts for Google being perceived as an important brand and innovator in the next twenty-five year technology paradigm that started when the technology crash bottomed out in 2003.