Martin Casado was in on the ground floor of the development of Software Defined Networking (SDN) at Stanford, and today flies the network virtualization flag at VMware, where he is Senior Vice President and General Manager of the Networking and Security Business Unit. Casado says VMware has racked up significant wins with its NSX technology, which he calls a good alternative to Cisco’s SDN approach in ACI, or at least a great complement. Network World Editor in Chief John Dix recently stopped by Casado’s office for an update on NSX adoption.
I was here roughly a year ago to see how the NSX rollout was going. What’s changed?
This is a good time to have this discussion because VMworld just passed and comparing this one to the show last year is a good way to highlight how much progress has happened.
Last year we had about 150 paying customers. This year we have more than 750 customers
and every main stage presenter at the show was a major customer of NSX. I think we had 25 NSX customers presenting. For example, DirecTV said the number one pay-per-view event in history -- the Manny Pacquiao fight -- was done on NSX. Tribune Media said it moved 140 apps onto NSX in under five months. There was this massive amount of gravity around production deployments, around use.
We also used the show to announce a new version of the product, NSX 6.2, which is the culmination of a year’s worth of production experience. In early technology cycles you’re selling to the innovator crowd and you focus a lot on features and differentiation. But once you start getting traction you focus on things like how to make it easy to operate, easy to debug. So 6.2 was well received.
Then we previewed two significant technology futures. One of them we actually demoed -- NSX connecting containerized workloads running in seven data centers, including AWS on three continents, with complete NSX security. This is the notion of NSX as the fabric that provides connectivity and security across all of your endpoints.
The second tech preview was around security. One of the biggest use case drivers for NSX is security. I’d say maybe 40% of our customers adopt it because of security. Initially that meant you could push firewalling into the data center for east-west traffic, which is great. Now we previewed the idea of doing encryption within the data center. So imagine you spin up a workload, you click a checkbox and then all the communication is encrypted. So even if there is an insider that has access to the physical switch, like a SPAN port, they still can’t make sense of the traffic.
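The point of the SPAN-port remark is that a tap on the physical switch sees only opaque bytes once workload traffic is encrypted. A deliberately toy sketch of that idea (this is NOT real cryptography and not NSX’s actual mechanism, just an illustration of symmetric payload encryption with a shared key):

```python
import hashlib

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    """Toy counter-mode cipher for illustration only -- NOT secure crypto.
    With a key shared by the two workloads, payloads on the wire become
    unreadable to anyone tapping the physical switch. XORing with the
    same keystream twice recovers the plaintext, so the one function
    both encrypts and decrypts."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        # Derive a 32-byte keystream block from the key and block counter.
        keystream = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, keystream))
    return bytes(out)
```

A real deployment would use an authenticated cipher (e.g. AES-GCM) with per-workload key distribution; the sketch only shows why the insider with the SPAN port can no longer make sense of the traffic.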
Presumably most NSX customers are large VMware shops?
We’ve got dozens of customers running in production that aren’t vSphere customers. We still underlie two of the largest OpenStack clouds in the world, which have nothing to do with VMware. We continue to aggressively fund open-source efforts like Open vSwitch and Neutron because my personal goal is to change networking, which is independent of the hypervisor. I want to touch every endpoint possible. I want to touch hardware endpoints, I want to touch KVM endpoints, I want to touch Hyper-V, I want to touch ESX. It’s our goal to be independent of the hypervisor. Of course sales are going to align with our sales motion, but that’s just the reality of being in a big company and having an existing go-to-market engine.
You mentioned security as a big use case, what are the others?
There are three use cases: Automation is probably the most common at about 40%. And by that I mean automating the provisioning and configuration of networking, reducing the time it takes to do something to zero. Security is close at about 40%, and the third one is application continuity, which basically means the ability to keep an application available if it moves between data centers. This is for high availability and disaster recovery use cases. On the main stage at the show we had a customer called Global Speech Networks, which is the largest call center cloud in Australia, and they use NSX for all three use cases. That’s actually very common.
Have you been frustrated at all that the initial grand SDN vision hasn’t taken off the way you might have once envisioned?
SDN has become so many things over the years. I remember 10 years ago when I started doing the work that basically became SDN, another student and I were playing around with the idea of moving more functionality into software to get better guarantees. I didn’t have any preconceived notions, I just thought there would be massive disruption, and now we’re seeing that industry transformation. You can see the impact in the way people are using SDN, the evolution of the use cases. We are seeing change in the hardware and we are seeing change in the supply chain. Architectural transformations take time, but I do think it’s happening.
One of the early use cases people were excited about was the idea of white box networking, taking off-the-shelf components and loading them with open source software. Some of the biggest companies still like the idea, but IT shops in general seem to be shying away from integration projects, not looking for new ones.
That is a fair comment. I think it’s a very complicated issue and we should look at all of the things that are going on to understand the actual dynamics.
In order for white box to happen you have to decouple features from the hardware. If you look at a modern data center, especially ones that use NSX, the hardware just provides capacity. You move functionality from the hardware into a software layer. Now you can build out the hardware however you want because you don’t need special purpose features, you just need capacity. Salesforce, Facebook, Yahoo, that’s how all of those guys built their data centers.
Once that happens, you decouple the purchasing decision of the hardware and that allows the hardware ecosystem to evolve the right way, whereas in the past it was unnatural because every time you needed a new feature you had to do a refresh cycle. I try to avoid predicting what’s going to happen to that hardware refresh cycle. I don’t know what that looks like. I know things are going to be way cheaper, and that definitely seems to be happening if you look at 10-gig price cutting. The number of new players, and the chaos and energy, in the 10-gig data center market is phenomenal.
So I do think there’s been disruption. There is this basic need to adjust and I don’t think anybody knows what it’s going to look like in two or three years, but I would say that independent of who provides the hardware switching, it’s basically going to follow a horizontal model, meaning a relatively low margin model.
Is that to say you expect to see the white box model take off?
The reason it’s not very prevalent today is less about white box as an architectural model and more about companies wanting a reputable vendor to stand behind their investments, wanting a support contract, wanting to know that if they pick up the phone someone is going to answer.
To date it’s been startups pushing the idea, and they’re not built to provide these types of services. But HP just announced [an open source network operating system for data center switches], so now it starts to become very credible. We have to wait for this evolution in the industry. I don’t know if white box is the right option. If a customer asks if they should do it I would say that is something they’ve got to figure out for themselves. I think the most important thing for them is to preserve optionality. Just decouple your features from your hardware. Then you can buy your hardware from Cisco, from Arista, from Brocade, or do white box. Do what’s best for you, but preserve that optionality and never get in the position where, to add new features, you’ve got to buy new hardware.
I’ll give you my favorite example. There is so much Cisco Nexus 5000 and Nexus 7000 gear out there and these customers were told this architecture was going to last for the next 10 years. Now Cisco is coming back and saying, “Actually you need to do a rip and replace and put in the Nexus 9000 to get ACI [Cisco’s SDN kit].”
This is a perfect opportunity for NSX. It’s fantastic because we go to these customers and say, “Listen, you have the 5K, the 7K, we can provide you a tremendous amount of functionality in software, things that ACI can’t even do today, and you can protect your hardware investment for as long as you want. When you have a refresh cycle, do whatever you want. Go with the newest version of Cisco or not. It’s up to you.”
We’re finding rich, rich opportunities in the massive installed base of this gear. Again, Cisco makes great hardware for forwarding packets. They do it as well as anybody, and better than most. And it’s not like customers are running out of bandwidth. The reason they’re being told to upgrade is because of features which naturally should be in software.
And you integrate with the legacy Cisco gear how?
That’s the great thing about NSX. It doesn’t require direct hardware compatibility. It’s all done in software on the hypervisor at the edge. I mean, 70% of our deployments are on 5Ks and 7Ks. NSX just treats the physical network as a backplane to pass packets. It could be IP over InfiniBand for all I care. As long as it has IP connectivity, we do everything on the edge in a distributed fashion. We can do things like L2, L3, load balancing, firewalling, all the mobility, all the security policy, all that stuff in a distributed fashion at the edge without affecting performance, and you can build your physical network how you want. That’s why if a customer has a Cisco 5K or 7K today I think they should seriously consider looking at NSX because the cost avoidance is material dollars.
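The “physical network as a backplane” model works because the hypervisor wraps each virtual machine’s Ethernet frame in a tunnel header and ships it over ordinary IP. A minimal sketch of VXLAN encapsulation (the tunneling format NSX for vSphere used, per RFC 7348) shows why the fabric only needs IP connectivity; in practice the hypervisor’s virtual switch does this, not user code:

```python
import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.
    The result is then carried as the payload of a normal UDP/IP packet,
    so the physical underlay only ever forwards IP -- all tenant-level
    features live at the edge."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    flags = 0x08  # 'I' bit set: the VNI field is valid
    # Layout: flags(1) + reserved(3) + VNI(3) + reserved(1) = 8 bytes.
    header = struct.pack("!B3xI", flags, vni << 8)
    return header + inner_frame
```

The 24-bit VNI (virtual network identifier) is what lets one IP fabric carry many isolated virtual L2 networks, which is the overlay idea described above.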
So no need for OpenFlow?
I’ve been saying for a long time that I don’t think OpenFlow has any business in the data center, and I wrote the first version of OpenFlow. It is much more suitable for what Google did in the WAN, where routing decisions are actually meaningful and you can do dynamic routing. I think it belongs in the WAN and in the campus. Google is dealing with the WAN. HP has been focused on the campus. In the data center there is so much bandwidth and there is such low latency that basically everybody just builds L3 ECMP fabrics. So you don’t use OpenFlow to control the switches. In the data center your L3 network just passes packets and everything that is a feature is implemented in software in the hypervisor.
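The L3 ECMP fabrics mentioned above need no per-flow controller because each switch spreads traffic across its equal-cost next hops by hashing the packet’s 5-tuple locally. A toy sketch of that selection (not any vendor’s actual hash function):

```python
import hashlib

def ecmp_next_hop(src_ip: str, dst_ip: str, src_port: int,
                  dst_port: int, proto: int, next_hops: list) -> str:
    """Pick one of several equal-cost next hops by hashing the flow's
    5-tuple. Because the hash is deterministic, every packet of a flow
    takes the same path (no reordering), while different flows spread
    across all paths -- no central controller required."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]
```

This is the whole "feature set" the physical fabric needs in the model Casado describes: pass packets, balance flows, and leave everything else to the hypervisor edge.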
If someone has an existing 5K, 7K, brownfield deployment and is already running vSphere, we say buy NSX and install it, then on a per application basis, turn it on, put it on a virtual network, give it some firewall, give it a load balancer. So they can incrementally benefit. There’s no rip and replace. There’s no controlling the switches. They incrementally deploy it.
I think one of the reasons we’re getting so much adoption is exactly this reason. There’s no change to the hardware or the configuration. They just install bits in the hypervisor and that’s how it works.
How are you guys encountering Cisco’s ACI SDN tech in the marketplace and what happens when you do?
I can give you an anecdote. I was visiting a reasonably sized customer in Asia that was interested in NSX but said they were going with ACI because, the customer said, “Cisco gave it to us for free.” And this happened twice on that one trip. Two customers told me Cisco had given them ACI for free. So, if I could give one bit of advice to your readers, I would say, “Ask for ACI for free.” They’re giving it to other people, so why not? [Asked for a response, Cisco said: “Cisco does not need to give away ACI for free because customers recognize that it’s a better technology and are quite willing to pay for it.”]
But listen, ACI will add value for managing your physical assets, for sure. You can manage security and port groups on physical assets, it’s got good visibility in the fabric management. But when it comes to dealing with virtualization in the virtual edges and, in particular, vSphere, there is no supported integration.
So there is no reason customers can’t use both, and in fact many do. The three ACI customers I know of that are pretty serious about ACI also use NSX. NSX provides things that ACI can’t, like fully distributed firewalling in the hypervisor, distributed load balancing, integration into vCenter, integration into vSphere, while ACI is used to manage the physical assets.
As always happens in early markets, everybody is trying to figure out what they’re going to target, what their niche is. Over time we’re finding that in the virtual environment NSX is the right approach. ACI is great for physical fabric management and the two coexist actually quite naturally.
Don’t you lose something not having the two SDN worlds integrated?
I think you’ll always have two different control planes; one that manages connectivity and another that does all of the services on top. I do think the two companies can do better integration for operations, and certainly this is a discussion we’re interested in having with Cisco.
Let’s switch gears a bit. There is more and more talk about containers. What does that adoption mean for NSX and, bigger picture, for VMware?