Non-volatile memory's future is in software

New memory technology to serve dual roles of mass storage and system memory

Computerworld | Storage, Servers

"Another aspect not available in storage systems today is intelligent interrogation of what the capabilities of the storage is," he said. "That's pretty rudimentary. How can an OS identify what features are available and be able to load modules specific to the characteristics of that device."

Secondly, the task force will work on new interfaces through the OS to applications, which would give applications a "direct access mode" or "OS bypass mode" fast I/O lane to the NVM. A direct access mode would allow the OS to configure NVM so that it's exclusive to an application, cutting out buffering and multiple copies of data, which add a great deal of latency.
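The closest analogue available today is memory-mapping a file so an application reads and writes the bytes directly instead of issuing read()/write() calls through buffered I/O. On an NVM-aware OS with a DAX-style filesystem backed by persistent memory, a mapping like this would reach the media without the page cache in between. A minimal sketch (the temporary file here merely stands in for a region of NVM):

```python
import mmap
import os
import tempfile

# A small backing file stands in for a region of NVM.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)

# Map it into the process's address space. With direct access mode,
# these loads and stores would hit the NVM media itself rather than
# an intermediate page-cache copy.
with mmap.mmap(fd, 4096) as buf:
    buf[0:5] = b"hello"        # a plain store, no write() syscall
    data = bytes(buf[0:5])     # a plain load, no read() syscall

os.close(fd)
os.unlink(path)
print(data.decode())           # prints "hello"
```

The point of the standard is that the OS could hand an application such a mapping exclusively, so no other layer touches or duplicates the data in flight.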

For example, an OS would be able to offer a relational database application direct access to NVM. IBM (with DB2) and Oracle have already demonstrated how their applications would work with direct access to NVM, according to Tony Di Cenzo, director of standards at Oracle and a SNIA task force member.

By far, the most difficult job the task force faces is the development of a specification that allows NVM to be used as system memory and as mass storage at the same time.

"This is still a brand new effort," Pappas said. "Realistically, the [new NVM] media will take several years to materialize. So what we're doing here is having the industry come together, identifying future advancements ... and defining a software infrastructure in advance so we can get full benefit of it when it arrives."

NAND flash increasingly under pressure

Although new NVM technology will be available in the next few years, NAND flash is not expected to go anywhere anytime soon, since it could take years for new NVM media to reach the price point of NAND flash. But NAND flash is still under pressure due to technology limitations.

Over time, manufacturers have been able to shrink the geometric size of the circuitry that makes up NAND flash technology from 90 nanometers a few years ago to 20nm today. The process of laying out the circuitry is known as lithography. Most manufacturers are using lithography processes in the 20nm-to-40nm range.

The smaller the lithography process is, the more data can be fit on a single NAND flash chip. At 25nm, the cells in silicon are 3,000 times thinner than a strand of human hair. But as geometry shrinks, so too does the thickness of the walls that make up the cells that store bits of data. As the walls become thinner, more electrical interference, or "noise," can pass between them, creating more data errors and requiring more sophisticated error-correcting code (ECC). The strength of the data signal relative to that noise, as seen by the NAND flash controller, is known as the signal-to-noise ratio.
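What that ECC does, in essence, is locate and repair the bit flips the noise induces. Real NAND controllers use much stronger codes (typically BCH or LDPC), but the principle can be illustrated with a toy Hamming(7,4) code, which protects 4 data bits with 3 parity bits and can correct any single flipped bit:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    err = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit; 0 = clean
    if err:
        c[err - 1] ^= 1          # repair the flipped bit
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[4] ^= 1                     # simulate one noise-induced bit flip
assert hamming74_decode(word) == data
```

As cell walls thin and the error rate climbs, a single-error code like this is no longer enough, which is why controllers for smaller-geometry NAND need progressively more sophisticated (and computationally heavier) ECC.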


Originally published on Computerworld.