"The only way to share these flash cards across a cluster is by using a fabric," Bagley said. Pooling caches across a cluster should allow enterprises to make better use of flash as they virtualize their workloads to balance out computing needs, he said. Competing vendors of server-based flash, such as Fusion-IO, aren't even close to developing such a pooling technology, according to Bagley.
When a workload moves from one physical server to another and must cross the Fibre Channel fabric to reach its cached data, access should be slower than it would be on the same server, but not by much, Bagley said. Testing by Storage Strategies Now showed a difference of about 7.5 percent, he said. That is far less than the latency involved in "warming up" a new local cache after the workload is moved, he said.
Even higher speeds may be in store given that QLogic's adapter family extends to Ethernet, Bagley said. While the current FabricCache adapter uses 8Gbps (gigabits per second) Fibre Channel, Ethernet is available at speeds of 40Gbps and higher. "This is just the opener," Bagley said. QLogic said it has 10Gbps Ethernet versions on its road map for Fibre Channel over Ethernet and iSCSI.