Experts from IBM, Seagate Inc. and storage interface card vendor LSI Corp. said Tuesday that solid state disk (SSD) technology will replace high-end Fibre Channel and serial SCSI disk drives for several I/O-intensive applications in the enterprise. However, the industry still needs standards for measuring SSD performance, and unit prices will continue to limit adoption.
That said, Clod Barrera, IBM's chief technical strategist, said corporations will adopt SSDs in their data centers faster than they adopted storage area networks (SANs) and serial ATA drives.
Harry Mason, director of industry marketing for LSI, noted that there are 70 to 80 small solid state disk drive vendors in the market today. "We're in a bit of a gold rush mentality for SSD right now," he said at the Storage Networking World conference.
Pushing the adoption of SSDs in the enterprise is the fact that as server processors double their compute capacity every 18 to 24 months according to Moore's Law, high-end hard disk drives -- with spindle speeds of 10,000 rpm and 15,000 rpm -- can no longer keep up with I/O requests. So storage administrators add I/O capacity to storage arrays by adding more high-end drives but putting less data on each drive, and capacity utilization rates drop precipitously. With vastly faster random reads, a few SSDs can replace dozens of high-end hard disk drives, reducing the total cost of ownership.
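The sizing argument above can be sketched with some back-of-the-envelope arithmetic. The per-drive figures below are illustrative assumptions, not numbers from the article:

```python
# Hypothetical sizing sketch: how many drives does it take to serve a given
# random-I/O load? All per-drive figures are assumptions for illustration.

HDD_IOPS = 180          # rough ceiling for one 15,000 rpm hard disk drive
SSD_IOPS = 20_000       # rough figure for one enterprise SLC SSD
TARGET_IOPS = 100_000   # assumed workload demand

# Ceiling division: round up, since a fraction of a drive doesn't exist.
hdds_needed = -(-TARGET_IOPS // HDD_IOPS)
ssds_needed = -(-TARGET_IOPS // SSD_IOPS)

print(f"HDDs needed: {hdds_needed}")   # 556
print(f"SSDs needed: {ssds_needed}")   # 5

# Spindle count, not capacity, drives the HDD array size: if the dataset is
# only 10 TB and each HDD holds 300 GB, utilization collapses.
DATASET_GB, HDD_CAPACITY_GB = 10_000, 300
utilization = DATASET_GB / (hdds_needed * HDD_CAPACITY_GB)
print(f"HDD capacity utilization: {utilization:.0%}")   # about 6%
```

Under these assumed numbers, hundreds of mostly empty hard drives do the work of a handful of SSDs, which is the utilization problem Barrera describes.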
"It's OK to spend more at the disk device level but less at the system level," Barrera said.
While the opinions of the three panelists varied somewhat on how successful SSD will be in infiltrating the enterprise market, particularly as new technologies such as phase-change memory mature, they all saw the potential for SSDs to take on I/O-intensive operations, such as online transaction processing, data warehouse queries and data analytics applications.
IBM recently performed benchmark testing on enterprise-class SSDs from STEC, which use the more expensive single-level cell (SLC) NAND flash memory rather than consumer-grade multi-level cell (MLC) memory. In a paper to be released later this week, IBM showed that it pitted seven arrays of 68 hard disk drives each against a single array of 96 SSDs. The tests revealed the SSD array in some cases had a 30% performance advantage over the arrays filled with 15,000 rpm hard disk drives.
In one benchmark, IBM set up a banking database that included 40 million accounts and measured the time it took for an average ATM transaction request to be completed.
"By the time you get to the actual transaction response time -- the guy standing there poking the button at the ATM -- the advantage is something like a factor of 30%," Barrera said. "Is that a big deal? Well, in this world -- the world of really high volume [online transaction processing] -- people kill for a factor of 5%. These are really nice-looking numbers for having done very little work. From a pure systems perspective, you just reconfigured the disk array."
"You're also getting substantial improvements in power and floor space," he added.
In another benchmark, which simulated a retail industry application, IBM tested a 350GB data warehouse running heavy analytical queries that would normally be used to flesh out marketing directions.
IBM used a 4.1TB SSD array, which produced 1 million I/Os per second. The DB2 data warehouse queries were returned five times faster than with standard high-end 15,000 rpm hard disk drives in an array.
One limiting factor for more widespread adoption of SSDs in the enterprise is that both SLC NAND flash SSDs and enterprise-class hard disk drives are dropping in price by about 40% a year, so the price gap between them is not expected to close anytime soon, Barrera said. While companies can purchase a Fibre Channel or SAS hard disk drive for about $500 today, a comparable SSD costs around $5,000. So unless I/O performance is markedly increased, the SSD is not worth the cost.
SSDs also don't yet offer full end-to-end error correction and don't have native encryption capabilities, according to Marty Czekalski, Seagate's interface and engineering program manager. "Internally in controllers, enterprise [hard disk] drives have parity of all [system] memories, and error correction in all memories including the data and control memory used in the processors. So if the processor memory takes a soft error hit, you can actually recover from that and not lose data," he said.
Mason added that SSDs vary widely in quality, and specifications need to be established in order to determine performance and longevity of SSDs for procurement purposes. "SSDs have a confusing array of metrics," he said. "Behaviors on these SSD drives are a little different than we've seen with hard disk drives."
Mason said an SSD straight from the box may show one set of performance metrics, but after all the blocks on the drive have been filled, the performance with regard to random writes is likely to markedly decline. "So what do we call that specification for measuring degrading performance?" Mason said.
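The fresh-out-of-box effect Mason describes can be modeled crudely: on NAND flash, a block must be erased before it can be rewritten, so random writes stay cheap only until every block has been touched once. The toy model below invents its cost figures purely for illustration; real controllers hide this behind garbage collection and over-provisioning:

```python
# Toy model of why an SSD's random-write performance degrades once all blocks
# have been written. The relative costs are invented for illustration only.

import random

BLOCKS = 1_000
WRITE_COST = 1    # assumed cost of writing an already-erased block
ERASE_COST = 10   # assumed extra cost when a block must be erased first

written = set()

def write(block: int) -> int:
    """Return the modeled cost of a random write to `block`."""
    if block in written:
        # Block already holds data: erase-before-write penalty applies.
        return ERASE_COST + WRITE_COST
    written.add(block)
    return WRITE_COST

random.seed(0)  # deterministic for reproducibility
fresh_cost = sum(write(random.randrange(BLOCKS)) for _ in range(1_000))
aged_cost = sum(write(random.randrange(BLOCKS)) for _ in range(1_000))

print(f"cost of first 1,000 writes:  {fresh_cost}")
print(f"cost of next 1,000 writes:   {aged_cost}")
# The second batch costs several times the first, since most blocks now need
# an erase -- the "degrading performance" Mason says a spec must capture.
```

This is why benchmark specifications for SSDs typically require preconditioning the drive to a steady state before measuring, rather than testing it straight from the box.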
This story, "Solid state disk adoption to be swift in corporations" was originally published by Computerworld.