Stream of Consciousness on the Data Center and Networking
I was thinking today that it had been seven days since I took the time to write a blog post. This is partly because I have been busy working and partly because I have not found anything interesting to write about. I use writing as a way to frame my thoughts. Sometimes, when I cannot articulate what I want to say, I go for a long bike ride and find myself writing the argument in my brain; when I get home it is just a matter of typing the words.
I was thinking I would post something about RIMM’s earnings, but I think I have said enough about RIMM, although I will reiterate, for those who think stocks can be “oversold” and “will bounce”: negative product cycles are hard to break. This past week has been analyst week in Silicon Valley, with a variety of public networking companies hosting capital market days or technology briefings. I feel comfortable with my decision to spend time in SV next week and not this week. Judging by the number and content of the phone calls I have gotten at dinner time in the Eastern Time zone, I think a lot of people are more confused than before the week began.
With all the news out of SV this week, I learned that Cisco is not really going to grow 12-17% anymore, Juniper’s QFabric is ready and Infinera has built something really big. Side note…the JNPR and INFN marketing teams must share a marketing vision. A few days ago Doug Gourlay posted this on his personal blog. I was also included in a number of email threads sorting out all the news. Here are my stream of consciousness thoughts on the data center debates specific to networking. This is really an attempt by me to clarify my previously posted thoughts on the subject while assimilating all the debates I have been part of this week. I am looking forward to people calling me out and disagreeing.
1. My first reaction to all the data center architecture debates and product announcements is to throw them all in the trash. Push the delete button. I have tried to say this in a nice manner before, but maybe I need to be specific as to the problem, the evolution and where the innovation is going to occur. I am going to use virtualization as the starting point. I am not going to review virtualization, but if you have not heard the news, VMs are going to be big and the players are VMW, RHT and MSFT.
2. I wrote this on 07.27.2011, but it is worth repeating. Networking is like the last dark art left in the world of tech. The people who run networks are like figures out of Harry Potter. Most of them have no idea what is in the network that they manage; they do not know how the network is connected and hope every day that nothing breaks the network. If the network does break, their first reaction is to undo whatever was done to break the network and hope that fixes the problem. Router, switch and firewall upgrades break the network all the time. The mystics of the data center/networking world are the certified internet engineers. These are people with special training put on the planet to perpetuate and extend the overpaying for networking products, because no one else knows how to run the network.
Network design has changed little in twenty years. I look back in my notebooks at the networks I was designing in the early 1990s and they look like the networks that the big networking companies want you to build today. If you are selling networking equipment for CSCO, JNPR, BRCD, ALU, CIEN, etc., you go to work every day trying to perpetuate the belief that Moore’s Law rules. You go to work every day and try to convince customers to extend the base of their networks horizontally to encompass more resources, to build the network up vertically through the core, to buy the most core capacity they can and to hope the oversubscription model works. When the core becomes congested or the access points slow down, come back to your vendor to buy more. When you read the analyst reports that say JNPR is now a 2012 growth story, that is code for “we are hoping customers come back and buy more in 2012.” Keep the faith. Keep doing what you are doing. Cue the music…don’t stop believin’.
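To put a number on the oversubscription model, here is a rough back-of-the-envelope sketch in Python. The port counts and speeds are my own illustrative assumptions, not any particular vendor's design.

```python
# Back-of-the-envelope oversubscription math for a single ToR switch.
# All port counts and speeds are illustrative assumptions.

def oversubscription_ratio(servers, server_link_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing bandwidth to core-facing bandwidth.
    Anything above 1.0 is a bet that all the servers never peak at once."""
    access_bw = servers * server_link_gbps
    uplink_bw = uplinks * uplink_gbps
    return access_bw / uplink_bw

# 48 servers at 10GE down, 4 x 40GE up -> 480 / 160 = 3:1 oversubscribed.
print(oversubscription_ratio(48, 10, 4, 40))

# The "buy more core" answer: double the uplinks and the ratio drops to 1.5:1,
# which is exactly the vertical build-up the vendors want you to keep funding.
print(oversubscription_ratio(48, 10, 8, 40))
```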
3. The big inflection point in the DC is where the VM meets the VSwitch. When I wrote the above portion about the network being the last dark art, it is because the people running the network are different from the people who are running the servers. These are two different groups and they are on the same team, but often do not play well together. Now we have this thing called the virtualized I/O that runs on the NIC in the server. Is the virtualized I/O part of the network or part of the server? I am not sure this is answered yet because there are different systems for managing parts of this configuration built for the server team and for the network team.
4. I have written this before, but it is worth repeating and clarifying. The number of VMs running on a server is increasing. It is not stable. It has not peaked. It is not in decline. The outer boundary today for the number of VMs in a rack is around 2k. With Romley MBs, SMF and 10GE connections this limit is going higher. Maybe closer to 3k sooner than you realize. At 2.5k VMs in a rack, 35 servers in a rack and 40 racks in a cluster, that is 100k VMs per cluster. Multiply 100k by the number of clusters in your DC footprint. To add to the complexity, how many interfaces are going to be defined per VM? What if your answer is 4? That creates 10,000 MAC addresses per rack or 400,000 per cluster. The Arista 7548S ToR supports 16,384 MAC addresses. I have already heard of people wanting to push five or more interfaces per VM! I really do not care if your numbers are less or more than above, because my point has yet to come.
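Here is the arithmetic as a quick sketch; the inputs simply mirror the illustrative numbers above, so substitute your own and the point stands or gets worse.

```python
# Scale math for VMs and MAC addresses per rack and per cluster.
# Inputs mirror the illustrative numbers in the paragraph above.

vms_per_rack = 2500        # outer boundary ~2k today, trending toward 3k
racks_per_cluster = 40
interfaces_per_vm = 4      # some people already want five or more
tor_mac_table = 16384      # ToR MAC table size quoted above

vms_per_cluster = vms_per_rack * racks_per_cluster
macs_per_rack = vms_per_rack * interfaces_per_vm
macs_per_cluster = macs_per_rack * racks_per_cluster

print(f"VMs per cluster:   {vms_per_cluster:,}")     # 100,000
print(f"MACs per rack:     {macs_per_rack:,}")       # 10,000
print(f"MACs per cluster:  {macs_per_cluster:,}")    # 400,000
print(f"Cluster MACs vs. one ToR table: {macs_per_cluster / tor_mac_table:.0f}x")  # ~24x
```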
5. The problem that I am looking at has nothing to do with the size of your ToR or switch fabric. Configuring VMs and gathering server utilization statistics is easy. The problem starts with connecting all the desired VMs to the network. This is not an easy task.
The problem gets really complicated when you try to figure out what is going on in the network when something fails or performance degrades. Inside a single DC, performance data is relatively simple to obtain. Configuring the network and diagnosing problems in a distributed network of data centers in which the compute end points are scaling to more VMs per server, per rack and per cluster; well, that is a big boy problem. Adding to the complexity is the need to diagnose and solve network connection problems BEFORE humans can react. That is the network world to which we are evolving. Data sizes get larger, the cost of storage declines and the cost of compute declines with the virtualization of the I/O and improved MPUs. The only architecture not changing and not moving along the same cost curve is the network and the manner in which the network is configured and managed.
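To make the "before humans can react" point concrete, here is a minimal sketch of an automated detect-and-remediate loop. Everything in it is hypothetical; the probe, the link names and the failover action stand in for whatever a real system would use (BFD sessions, SNMP polls, active path probes).

```python
# Minimal sketch of machine-speed fault handling: probe connections,
# react automatically, tell the humans afterwards. All names are hypothetical.

import random
import time

def probe(link: str) -> bool:
    """Pretend health check; simulate a ~5% failure rate per poll."""
    return random.random() > 0.05

def fail_over(link: str) -> None:
    """Hypothetical remediation: shift flows to a backup path."""
    print(f"{link}: down, shifting flows to the backup path automatically")

links = ["rack12-uplink-a", "rack12-uplink-b", "dc-east-to-dc-west"]

for _ in range(3):            # a real loop would run continuously
    for link in links:
        if not probe(link):
            fail_over(link)   # the machine reacts; the NOC reads the report later
    time.sleep(0.1)
```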
Earlier in the week I was visiting a large service provider who had spent the last five years unifying and collapsing their networks. One of their forward-looking initiatives is now to figure out how to better engineer traffic flows. As part of the process they want to keep various traffic flows local, and they want a topology-aware network so they do not have to provision so much redundancy. In my view they want a connection-aware (my terminology) network that identifies and reacts to network failures, which is important in a world in which service outages and SLA misses now trigger rebates. This last point has increased sensitivity because connection costs are on a declining cost curve, not an upward-sloping curve. They also described how multiple NOCs deal with problems in different parts of the network. The entire conversation was a repeat of what I hear from DC operators; the only difference was the geographic scale of the network.
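Here is a small sketch of what topology-aware and connection-aware could mean in practice: if the network knows the cost of every path and which links are down, it can keep flows local and only spill onto more expensive capacity when it has to, instead of provisioning blanket redundancy everywhere. The sites, links and costs below are invented purely for illustration.

```python
# Sketch of topology-aware path selection with automatic reaction to a failure.
# The topology and costs are invented for illustration only.

import heapq

# adjacency list: site -> {neighbor: path cost}
topology = {
    "metro-a": {"metro-b": 1, "region-hub": 5},
    "metro-b": {"metro-a": 1, "region-hub": 5},
    "region-hub": {"metro-a": 5, "metro-b": 5, "remote-dc": 20},
    "remote-dc": {"region-hub": 20},
}

def cheapest_path_cost(src, dst, down_links=frozenset()):
    """Dijkstra over the known topology, skipping links marked as failed."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            return cost
        for nbr, c in topology[node].items():
            if (node, nbr) in down_links or (nbr, node) in down_links:
                continue
            if cost + c < dist.get(nbr, float("inf")):
                dist[nbr] = cost + c
                heapq.heappush(heap, (cost + c, nbr))
    return None

# Normal case: traffic between the two metros stays on the local link (cost 1).
print(cheapest_path_cost("metro-a", "metro-b"))
# Connection-aware case: the local link fails and the flow is rerouted through
# the regional hub (cost 10) without waiting for a human in one of the NOCs.
print(cheapest_path_cost("metro-a", "metro-b", down_links={("metro-a", "metro-b")}))
```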
/wrk
* It is all about the network stupid, because it is all about compute. *
** Comments are always welcome in the comments section or in private. Just hover over the Gravatar image and click for email. **