Do It Again Part 2: My Thoughts on the State of Networking June 2014
My perception is that 2013 was a year lost in noise with regard to networking and SDN. It seems that after Nicira was acquired by VMware in 2012, the velocity of SDN and networking noise increased to unexpected levels, and it had a detrimental effect on the market. If I worked at an incumbent networking company, this is exactly the effect I would have desired: freeze the market in a state of confusion and sprinkle in as much disinformation as possible. Well played if you have market share to protect.
To be fair, all the various networking startups participated in creating the noise and confusion. From mid-2012 through the start of 2014, we had waves of networking messages crash on shore. There was the big generic SDN wave, the OpenFlow wave, the App Store wave, the Linux wave, the ACI wave, the Bare Metal wave, the NFV wave, and I am sure there are a few waves I neglected to mention.
It makes perfect sense to me that I found myself the lunchtime speaker at a Fortune 100 company, where I was asked to provide an SDN overview and a Plexxi product presentation. Because it was the combined IT team, I was asked to not make it too complicated, since everyone was confused about SDN. How does one accomplish that task? I took an old presentation and fused it with a new presentation. Here are a few of the slides I used to tell the story of the network's evolution toward SDN; I left out the Plexxi-specific slides.
Slide 1: Imagine If…
Imagine if you worked in a data center in 2003. You had just read Nicholas Carr’s “IT Doesn’t Matter” paper and became profoundly depressed with the idea of working for the big IT utility in the sky. You think Carr is correct and decide on a mid-career change. You like coffee, warm weather and beaches, so the BVIs are your choice to start a boutique coffee shop. Eleven years later the coffee sabbatical has ended and you return to your old job. What would be different?
Slide 2: 2003 Datacenter versus the 2014 Datacenter
When you arrive back at your old job in 2014, what would be different? The first blade servers (that is a pic of the RLX blade server on the left) were emerging when you left; today servers are all about high-density, multi-core, bare metal systems at a fraction of the cost. Why buy one server when you can buy 10 or 20 for the price of one in 2003? Server virtualization was nascent in 2003; today a server admin can make a thousand VMs fly up and dance across the screen.
Storage was about big chassis of spinning disks, block storage, and high-availability systems that looked like a Sub-Zero refrigerator. Data migration between storage tiers was very difficult. Today, storage is about flash tiers and hybrid/flash storage systems with vastly more storage than anyone thought possible in 2003. Many customers are deploying JBOD. In ten years the mindset of the storage leader went from persistence and availability to performance and letting it fail (e.g. flash).
The network was about stacking switches on top of switches and buying the biggest core switch you could afford. Ten years later, the network looks very much like the network of 2003 for 99% of end-users. I would submit that with just a few days of a tech refresh, a network engineer from 2003 could be productive in 2014. The big difference between 2003 and 2014 is that the server and storage teams left the world of scarcity and entered the world of plenty. The network is still in the land of scarcity, worrying about link failures, redundant paths, state distribution, CLIs and using ping to see if a link is up. The server and storage teams build out in the land of plenty and can handle outages on their schedule, while the network teams type cryptic commands into CLIs at 3am.
Slide 3: Applications Have Evolved
The most startling change has been the evolution of applications over the past 15 years. The reality of today is that servers talk to servers because of the nature of the applications, the low cost of compute and the low cost of storage. We are no longer in the Client/Server era. I recently had two engineers give me their perspective on a popular video streaming service. One worked for the streaming company and the other worked for a service provider. They both agreed that the application looks for and optimizes around bandwidth. Add more bandwidth and that is where it goes, and you can never add enough bandwidth. How does that fit into your 1990s network architecture?
Slide 4: What We Do…
We build Ethernet switches based on merchant silicon and Linux, and we build a Controller.
We provide a photonic interconnect for our switches and a means to control the photonic fabric via our Controller. Together this enables your network capacity to be fluid and provides a method to control your application experience. From the perspective of the hardware, Plexxi has manufactured an Ethernet switch with a high-density, multi-dimensional photonic mesh (i.e. fabric), which is technically called a five-degree chordal ring. This mesh makes up the physical characteristics of the fabric, with the forwarding topologies formed by both the optical point-to-point peering and bandwidth computation.
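To make the topology term a little more concrete, here is a minimal sketch of a chordal ring: a ring of switches with additional chord links at fixed offsets. The node count and offsets below are illustrative assumptions chosen only so that every node ends up with degree five; they are not Plexxi's actual wiring plan.

```python
# Minimal sketch of a chordal ring topology: a base ring plus chord links at
# fixed offsets. The offsets here are illustrative, not Plexxi's actual design.

def chordal_ring(num_nodes, chord_offsets):
    """Return an adjacency set per node for a ring plus chords at the given offsets."""
    adjacency = {node: set() for node in range(num_nodes)}
    for node in range(num_nodes):
        for offset in [1] + list(chord_offsets):  # offset 1 is the base ring
            neighbor = (node + offset) % num_nodes
            adjacency[node].add(neighbor)
            adjacency[neighbor].add(node)
    return adjacency

# With a ring of 10 nodes, an offset-2 chord, and a diametral (offset-5) chord,
# every node has exactly five neighbors: a five-degree chordal ring.
topology = chordal_ring(10, chord_offsets=[2, 5])
print({node: len(neighbors) for node, neighbors in topology.items()})
```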
Slide 5: How we are Different
On the left is a picture of a gasoline-powered car and on the right is an electric-powered car. These are both cars. They both drive on existing infrastructure. If both were parked in front of the building, everyone who knows how to drive a car could drive both cars after a few minutes of familiarization. Both of these cars will allow you to drive yourself and others to various destinations. They look and act like cars, yet there is a startling difference.
The gasoline car is built on a technology concept that is more than one hundred years old. A gasoline-powered car is a proven and well-refined technology after many years. An electric-powered car is very different under the hood. If you are going to purchase an electric car, you need to become comfortable with the technology, the range (i.e. scaling), the ~7,100 battery cells, the performance characteristics and the operating model, which over the life of the vehicle should deliver a dramatically lower TCO. This is how I explain Plexxi switches and SDN. We build what looks like an Ethernet switch from any other vendor, but under the hood a Plexxi switch is different. It operates on the same networks with other switches, but over the life of the switch it will deliver a lower TCO because it is designed to deliver a different networking experience. Paradigm shifts take time and fast failures, but over the long term, if the shift is real, it will be proved. Here is an article to read in the LA Times about the company and here is a review of the user experience.
Slide 6: What We Provide
A Plexxi network allocates its capacity and functionality directly from the user application and the tenant hosting workloads. This results in a system that has transformative scale, performance and efficiency advantages compared to legacy network architectures. A Plexxi network is the network you build when your design objective is to build a single physical infrastructure with multiple logical forwarding topologies. A Plexxi network provides the following benefits:
- Simplicity: Single tier photonic network
- High Utilization: Load balanced L2/L3 fabric
- Controller Architecture: Unified view of network fabric
- Uniform Low Latency: Direct connectivity, No Aggregation Tax
- Faster Failure Handling: Pre-computed forwarding topologies that converge rapidly to a target optimum (a rough sketch of this idea follows the list)
- Elastic Network Capacity: Large-scale computation and path optimization through the Controller enable fluidity of network capacity
- Reduced Cabling: Simplified network element deployment and insertion
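To illustrate the failure-handling item above, here is a rough sketch of what pre-computed forwarding topologies can look like: compute several loop-free paths per flow ahead of time, so a link failure only requires selecting an already-computed alternative rather than running a fresh computation. The functions and the tiny four-switch graph are illustrative assumptions, not Plexxi's actual algorithm.

```python
from itertools import islice

def precompute_paths(adjacency, src, dst, max_paths=3):
    """Enumerate up to max_paths loop-free paths from src to dst (depth-first)."""
    def walk(node, visited):
        if node == dst:
            yield [node]
            return
        for nxt in adjacency[node]:
            if nxt not in visited:
                for tail in walk(nxt, visited | {nxt}):
                    yield [node] + tail
    return list(islice(walk(src, {src}), max_paths))

def select_path(paths, failed_links):
    """Pick the first pre-computed path that avoids every failed link."""
    for path in paths:
        hops = set(zip(path, path[1:]))
        if not any((a, b) in failed_links or (b, a) in failed_links for a, b in hops):
            return path
    return None  # nothing usable was pre-computed; fall back to a full recomputation

adjacency = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # a small four-switch ring
paths = precompute_paths(adjacency, src=0, dst=2)          # [[0, 1, 2], [0, 3, 2]]
print(select_path(paths, failed_links={(0, 1)}))           # the surviving path [0, 3, 2]
```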
A common misconception is that the advantage of the Plexxi system is derived primarily from the photonic mesh. The inherent advantage of the Plexxi system is that several elements have been designed to work in concert. These elements are the photonic interconnect, merchant silicon, Linux operating system and the Controller architecture. Together they create a high-availability network fabric designed for workload mobility.
Slide 7: What We Are Trying to Solve
When people first look at this slide, the normal reaction is “yes, I understand that OPEX, complexity and administration are driving costs.” What I find more interesting is that we spend less on servers today than at the end of the first bubble (2000), yet multi-core processors dominate and we deploy vastly more servers today than we did in 2000. That is partially the subject of the next post.
Slide 8: Controller Architecture (SDN) Approach
The IT community requires a tool to (1) harvest metadata, (2) derive the importance of the metadata, (3) create forwarding topologies based on the relationships in this metadata and (4) finally imprint these logically derived and correlated forwarding topologies on the network. That is what we call a controller. That is what a Plexxi controller does. Our controller provides a means to administer policy and bandwidth engineering from a single point. In a Plexxi network, we do not configure ports on the switch element. We configure ports at the controller level and administer domains of the network from a single pane of glass.
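To make those four steps a bit less abstract, here is a minimal sketch of that loop in code. Every name, the input data and the affinity arithmetic are illustrative assumptions; it shows the shape of the workflow (harvest, weigh, compute, imprint), not Plexxi's actual Controller API.

```python
# Minimal sketch of the four-step controller loop: harvest metadata, weigh the
# relationships, compute forwarding topologies, imprint them on the network.
# All names and the affinity math are illustrative, not Plexxi's implementation.

def controller_cycle(metadata_records, candidate_paths):
    # (1) harvest metadata: each record describes a workload pair and its traffic hints
    #     (passed in here; a real controller would poll hypervisors, orchestrators, etc.)

    # (2) derive importance: weight each (src, dst) relationship
    affinities = {}
    for record in metadata_records:
        pair = (record["src"], record["dst"])
        affinities[pair] = affinities.get(pair, 0) + record["bandwidth"] * record["priority"]

    # (3) create forwarding topologies: give the heaviest relationships the shortest paths
    topologies = {}
    for pair, _weight in sorted(affinities.items(), key=lambda item: -item[1]):
        topologies[pair] = min(candidate_paths[pair], key=len)

    # (4) imprint the correlated topologies on the network (printed here as a stand-in)
    for pair, path in topologies.items():
        print(f"program {pair}: {' -> '.join(path)}")
    return topologies

records = [{"src": "web", "dst": "db", "bandwidth": 10, "priority": 3}]
paths = {("web", "db"): [["s1", "s2", "s3"], ["s1", "s3"]]}
controller_cycle(records, paths)
```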
The rest of the presentation was deep in the land of Plexxi-specific material. Near the end of the presentation, one of the senior people in the room asked me a career-defining question: “We have been using the same tools for all our careers; how hard is it to use your controller? Do we have to learn special programming skills?” With that question, I flashed back to the point in my career when I was standing with one leg in the twilight of the SNA era and another leg in the emerging multi-protocol, client/server era. That is how I know we are going to do it again.
/wrk