Riverbed Technology is best known for its WAN optimization tools, but the company has branched out over the years through multiple acquisitions. Network World Editor in Chief John Dix caught up with Eric Wolford, president of the company's Products Group, to see how the company is trying to help customers squeeze more efficiency out of their IT resources.
The world knows you as the WAN optimization company, but with the companies you've acquired over the years -- including the $1 billion acquisition of OPNET in 2012 which gave you tools to gauge application, net and server performance -- how do you summarize what Riverbed is all about today?
The key word is performance. So while we have diversified into other products, they are all linked by performance, giving us the ability to diagnose, monitor, and manage performance and address a variety of performance issues. Everything Riverbed does fits into one of two buckets: either diagnosing performance problems across an end-to-end IT infrastructure, from the client all the way to code in the data center, or fixing performance problems wherever they may be, whether on the WAN or over the Internet to consumers or heavy-duty business workers.
And the company is organized into four business units now?
Right. We have one called Riverbed Performance Management (RPM), which covers application performance management and network performance management [and was built using the OPNET assets]. We have the Steelhead WAN optimization business. We have the Stingray Business Unit, which offers a software-based application delivery controller. And then we have the Granite business, which is sort of a startup within Riverbed that's doing incredibly well. It is small, but for its stage of development it's doing awesome.
Granite is either software or a box that sits at both ends of a network connection and allows customers to take any server or storage or backup out of the remote site, put it in the data center, and project that out to the edge. We provide the acceleration to make it appear and feel like it is local. Users will see absolutely no difference in performance; it will be the exact same performance as if everything were local.
How does revenue break out across these four units?
Steelhead is in the low 70s percent, Riverbed Performance Management is like 22% to 23% and Stingray is 5%. Granite right now hasn't been broken out. We have about 100 customers and the growth rate is similar to the ramp that Steelhead had in its first five or six quarters. I'm sure when Granite gets to be a little bigger we will break it out.
Regarding the RPM group, has OPNET been fully integrated at this point?
I would say we're probably 65% to 70% of the way there. From a systems perspective, it's done. And from a financial accounting and back office perspective, all of that is done. As for sales force integration ... unfortunately, you can draw it up really nicely on PowerPoint slides and Excel spreadsheets and say, "OK, it's done," but then there are the human factors, getting everybody in place. You get a lot of square pegs in round holes when you do this type of integration, so there's a sorting-out period and a learning period that takes place. That takes a reasonable amount of time, but we're over the halfway point.
How do you see the revenue breakout changing over time?
Riverbed year-over-year comparisons are a little weird because there's no OPNET base in the prior year. But if you pull out OPNET and take out the WAN optimization business, the rest is growing at about 50%. The WANop business is growing at mid-single digits. So when you have a large business growing in the mid-single digits and a bunch that are growing at almost 50%, that's going to change the distribution of your revenue over time. When you throw OPNET into the picture, and once we get past the integration issues, we believe there is an opportunity for that business to grow substantially faster than the core business.
Sticking with the core WAN business for a minute, if all the corporate WAN links in the world added up to 100%, what percentage today would you say have been optimized?
You would never optimize some locations because of extenuating circumstances, but if you take that chunk out, maybe we're 25% or 30% of the way through the potential remaining market.
You offer both hardware and software-based WAN optimization products. Do customers still prefer the appliance approach?
Customers with remote sites still prefer a box vs. software. Even though it's cheaper to buy the software, it requires the customer to do more work. They have to get their own hardware and get their own operating system and system engineer it themselves. And when there's a problem, is the problem the box or is the problem the software? But in the data center they are way more comfortable with the software approach because they live in that environment day in and day out.
Are you guys still figuring out ways to squeeze more and more traffic into WAN pipes, or are we already 99% of the way there?
The need for efficiency changes as applications change because the application environment is dynamic. Take SharePoint. We've been optimizing SharePoint for years, but then Microsoft comes out with a new release, it's more complicated and more chattiness is introduced and it requires us to do more application-specific optimization to address that. Then there are brand new applications that come online, like VDI, which require a new type of optimization to make it work well. So whether it's a SharePoint that's changing or whether it's a new type of application, there is always optimization work to be done.
On deduplication, for the most part we have a great ability to deduplicate and there's not enormous marginal improvements that can be made. The only caveat is when an application comes out encrypted, well then we have to do the work to de-encrypt it, do our deduplication magic, and then re-encrypt it.
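The caveat above can be illustrated with a small sketch. This is not Riverbed's implementation; the toy cipher, chunk size, and function names are all assumptions for illustration. The point it demonstrates is general: encrypting the same payload twice with different IVs yields ciphertexts that share no chunks, so fingerprint-based deduplication finds nothing to match unless the traffic is decrypted first.

```python
# Illustrative sketch (not Riverbed's implementation): why encrypted
# traffic defeats deduplication until it is decrypted. Identical
# payloads encrypted under different IVs produce unrelated ciphertexts,
# so chunk fingerprints never match; the plaintext dedupes fine.
import hashlib

CHUNK = 64  # fixed-size chunking, a simplification of real dedup engines

def chunk_hashes(data: bytes) -> set:
    """Fingerprint fixed-size chunks, standing in for a dedup store."""
    return {hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)}

def keystream(key: bytes, iv: bytes, n: int) -> bytes:
    """Toy counter-mode keystream built from SHA-256 (illustration only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_encrypt(data: bytes, key: bytes, iv: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, iv, len(data))))

# Highly redundant payload: the same 32-byte string repeated four times.
payload = b"the same 32-byte block repeated " * 4

# Plaintext dedupes: both 64-byte chunks are identical -> one fingerprint.
assert len(chunk_hashes(payload)) == 1

# The same payload sent twice, encrypted with fresh IVs each time:
c1 = xor_encrypt(payload, b"secret-key", b"iv-0001")
c2 = xor_encrypt(payload, b"secret-key", b"iv-0002")

# Ciphertext chunks share nothing, so dedup across the two transfers fails.
assert chunk_hashes(c1).isdisjoint(chunk_hashes(c2))
```

Decrypting at the optimizer, deduplicating the plaintext, and re-encrypting before forwarding restores the savings, which is the extra work the answer describes.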
In the bigger picture world of application performance, how much of the problem ends up being a WAN performance issue vs. all the other things that can be involved?
An excellent question. That question is, in fact, what motivated us to purchase OPNET. If you go to a more senior person in an IT organization who has responsibility for delivery of IT services, they want to know where problems are and how to fix them. So network performance problems are definitely part of it, but it isn't as if everything is fine just because the network works great.
In an IT organization you'll often end up with four silos, each of which has its own tools that do a great job of proving problems are not its fault. This is the CIO's frustration: "I don't want four tools that prove the innocence of this organization. I want one tool that identifies where the problem is so we can go fix it." So that's what we are doing with our integration of OPNET. We had great network performance management, but we did not have application performance management. We needed to be able to address performance end-to-end and be able to say, with our performance management tools, we can tell you where the problem is, whether it's on the client, in the network, in the data center, in one of the servers, in storage, or in the code of the application.
Competitors like F5 are already addressing application performance issues in many big shops and can keep adding functions -- like WAN optimization -- to their ADC platform, so how do you differentiate yourselves?
With F5 it's actually easy because they're not really in the WAN optimization game. They don't have a remote site box. They only do data-center-to-data-center replication optimization. So you can add a module to your BIG-IP and do some WAN optimization data-center-to-data-center; two sites that have high bandwidth between them moving big chunks of data. So there they have a chance to compete with us. But in the remote site, we don't ever see them. We are, however, going into their space with our software-based application delivery controller. And they would say, "Well, Riverbed isn't in the whole application delivery controller market, they don't even have an appliance." And they would be correct.
Our contention is that the market for application delivery controllers in the data center is moving, consistent with all the trends in the industry, to software. Network Function Virtualization (NFV) is the hot term. Well, ADC is a perfect example of NFV. We see it firsthand. We have 80% growth in that business, did over $10 million in the quarter. Eighty percent growth on something that big is meaningful.
And architecturally there is a move to place ADCs closer to applications. There are so many applications and they have different requirements of ADCs, but if the ADCs are all in software you can achieve massive density, all elastic, with cloud-like scale, and that is very attractive in this day and age.
If they are hardware-based, every time you add more apps you add more boxes. Some very large banks have hundreds of ADCs. That model is, looking forward, less appealing. So we believe we're riding that wave. While we're not in the entire market, we are in the fastest growing segment, and it is consistent with SDN and network function virtualization.
I was going to bring up SDN. Provided SDN emerges as a force, which appears increasingly likely, what will it mean to you guys?
The architecture is appealing. The implementations of the architecture haven't quite nailed it yet. So what does SDN mean to us? We either love it or it is orthogonal to us. We love it because the architecture suggests an increased concentration of stuff into data centers, and any time the data center increases in its power, control, authority, functionality, that's great for us because the workforce continues to be massively distributed so you're going to have performance problems. And you're also going to have visibility problems because SDN turns all these things into tunnels and makes application visibility very opaque. Three hundred applications suddenly become one tunnel. You'll need performance management tools to manage that environment.
It's a bit orthogonal to us in that we do all of our work in Layers 4 through 7. And SDN, in its initial instantiation, is really focused on Layer 2 inside of a data center. I realize in concept it applies to Layer 3 as well, and it has been extended with network function virtualization and service-chaining to include our stuff as well. So at Layer 2 and Layer 3 it's orthogonal to us. But we'll work great with it. We'll add visibility to it.
As it extends in architecture and concept into network function virtualization, then all of our products need to be software. They all need to be managed by orchestration systems and a variety of controllers. Part of our Stingray application delivery controller is already designed for that model, and over the past four years we've taken our Steelhead WAN optimization product and created four different software flavors to fit into that environment: Virtual Steelhead, Cloud Steelhead, Steelhead Cloud Accelerators, Steelhead Mobile. These are all software versions of Steelhead that can fit that architecture, and we're fully committed to RESTful APIs so we will work nicely in that environment.
But SDN, in its initial instantiation, where it is trying to overcome the east/west bottlenecks inside the data center at Layer 2, we add visibility where visibility was lost and we work great with it, because we're Layers 4 through 7.
Will Steelhead ever talk OpenFlow?
Yeah. But we network integrate so many different ways that it would be just adding another way of integrating. We aren't trailblazing OpenFlow. We don't carry the petard. As the market adopts and uses it, we will make sure it is an interface that is available.
OK. Any closing thoughts?
One thing. We just did a Steelhead announcement that I think is going to be a big deal for the next four years in our space, which is something called hybrid networking. Even though the prevailing architecture at remote sites is to have one MPLS connection, that is going to give way to an MPLS connection plus an Internet connection, and that Internet connection will carry two kinds of traffic: a VPN back to the data center and direct access out to the Internet. So we'll have three paths: a path that's private, a path that's virtual private, and a path that's to the Internet.
The Internet used to be a toy. Employees would be going to ESPN or shopping online. So many companies were cutting that off because they wanted people working. Well YouTube is a phenomenal business tool now and social networking is important to doing your job. And the economics are staggering if you go from an MPLS connection to Internet. You can provide way more capacity at a lower cost. So there's a variety of benefits that I think will make hybrid networking a bigger deal.
And the reason I give that little spiel is because our latest release is step one of a multistep process where we're adding path selection to all Steelheads, which means I can take important apps and put them on the MPLS connection, take less important apps and put them on a VPN connection, and offer direct-to-Internet connections for still other traffic. And if the VPN connection or Internet connection goes down, traffic can roll over to the MPLS connection, all managed by QoS. Or vice versa; if you lose the MPLS connection, that traffic can roll over to the Internet connections. So you get this ability to do application performance management over a hybrid network with a lot of visibility and control. I think that is actually a very big deal that's consistent with how the times will change in the future.
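The path-selection idea described above can be sketched in a few lines. This is a hypothetical illustration, not Riverbed's configuration model: the path names ("mpls", "vpn", "internet"), the application classes, and the policy table are all assumptions. It shows the two behaviors the answer names: mapping each class of application to a preferred path, and rolling traffic over to a surviving path when the preferred one fails.

```python
# Hypothetical sketch of per-application path selection over a hybrid
# WAN (MPLS + VPN + direct Internet). Names and policy are illustrative
# assumptions, not Riverbed's syntax.

# Preference order per application class: the first entry is the
# primary path, the rest are failover candidates.
POLICY = {
    "erp":     ["mpls", "vpn"],       # important app: private path first
    "email":   ["vpn", "mpls"],       # less critical: VPN first
    "youtube": ["internet", "vpn"],   # bulk traffic: direct to Internet
}

def select_path(app: str, up_paths: set) -> str:
    """Pick the first preferred path for this app that is currently up."""
    for path in POLICY.get(app, ["internet"]):
        if path in up_paths:
            return path
    raise RuntimeError(f"no usable path for {app}")

# All links healthy: each app rides its preferred path.
assert select_path("erp", {"mpls", "vpn", "internet"}) == "mpls"
assert select_path("youtube", {"mpls", "vpn", "internet"}) == "internet"

# MPLS link fails: important traffic rolls over to the VPN path.
assert select_path("erp", {"vpn", "internet"}) == "vpn"
```

In a real deployment the QoS layer would additionally police how rolled-over traffic shares the surviving link; this sketch only covers the selection step.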
This story, "You can't make it run better if you don't know where the problem lies" was originally published by Network World.