"The speed of light trumps your infrastructure. The latency of moving your data is never going to keep up with the growth of how much data we're saving and how much processing we want to do on it," he says. "And yet, if you look at a SAN or NAS architecture, you're always putting all of your storage essentially in one device on the far end of a single wire."
'Stepping back in time 10 or 15 years'
Needless to say, not everyone agrees with McKenty's characterization of the issues involved.
"What he's talking about doing is sort of like stepping back in time about 10 or 15 years," says Brocade product marketing manager Scott Shimomura. "He's making the argument that the direct-attached storage approach is the best way to ensure performance, and that just isn't the way customers think about their storage today."
He also criticizes the idea that any single architecture is the way forward for nearly every use case.
"The trap that many people fall into is they want to say 'cloud is this' or 'big data is that,' or 'there's one type of infrastructure that's going to solve world hunger,' and the reality is that it depends on the workload," he says. "You're going to have different aspects of an application leverage different parts of your infrastructure."
Sean Kinney, director of product marketing for HP Storage, also disagrees, arguing that an integrated compute/storage approach makes it harder for users to adapt to changing workloads in a flexible, agile way.
He also points out that the idea of keeping multiple copies of single files in order to improve access speed flies in the face of the industry trend toward data de-duplication.
"If disk cost is not an issue -- which, for almost every IT customer, it is -- this might work," Kinney says. "Enterprise IT tends to be fairly cautious. What [McKenty] is proposing, at this point ... looks to be a little too much like a science project."
For his part, the Piston Cloud CEO scoffs at the notion of cost being an issue -- compared to the price of specialty storage systems, distributed storage is far more economical. "Converged infrastructure using the hard drives in every server is way less expensive than dedicated filers," he says.
So who's right?
As far as performance goes, McKenty likely has a point, according to Forrester Research Principal Analyst Andrew Reichman.
"Simple storage architectures, I think, end up working better [than large arrays] in a lot of ways," he says. "The disk that's on-board a server is the fastest disk I/O that you can have, because it's connected to the bus. ... Part of the problem with storage arrays is that they share capacity across a large number of servers, so if you're talking about an application that doesn't really have a huge amount of storage capacity ... you probably won't have enough spindles to get good speed."