June 05, 2013, 4:49 PM — The upcoming release of Windows Server 2012 R2 focuses on a number of advanced storage and networking capabilities that previously required the purchase of additional software, or even a full-fledged storage system.
"We see networking and storage as the next places where we can help customers increase their agility and reduce their cost," said Michael Schutz, Microsoft general manager of Windows Server product marketing. "We are taking the lessons we have in building and operating our cloud services in networking, storage and compute, and bringing them to our customers on premises."
Microsoft announced the update at the company's TechEd North America conference this week in New Orleans. The company plans to release a preview of Windows Server 2012 R2 by the end of the month and issue the full edition by the end of the year.
On the storage front, Microsoft has introduced a technology called Automated Tiering, which "allows the system to automatically decide which [files] are accessed most frequently," Schutz said. The OS then stores the most frequently consulted files on the fastest storage medium available, such as SSDs (solid-state drives), and keeps the rest on less expensive traditional hard drives. The idea is to speed system performance while keeping storage costs down, Schutz said.
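The placement logic Schutz describes can be sketched in a few lines: track how often each file is touched, then rank files so the hottest ones land on the fast tier. This is a hypothetical illustration of the idea, not Microsoft's implementation; the class and method names are invented.

```python
from collections import Counter

class TieringPlanner:
    """Toy hot/cold planner: most-accessed files go to the SSD tier."""

    def __init__(self, ssd_slots):
        self.ssd_slots = ssd_slots      # how many files fit on the fast tier
        self.access_counts = Counter()  # file path -> observed access count

    def record_access(self, path):
        self.access_counts[path] += 1

    def plan(self):
        """Return (hot, cold): hot files belong on SSD, the rest on HDD."""
        ranked = [path for path, _ in self.access_counts.most_common()]
        return ranked[:self.ssd_slots], ranked[self.ssd_slots:]

planner = TieringPlanner(ssd_slots=2)
for path in ["a.vhd", "b.vhd", "a.vhd", "c.log", "a.vhd", "b.vhd"]:
    planner.record_access(path)

hot, cold = planner.plan()
# a.vhd (3 accesses) and b.vhd (2) fill the SSD slots; c.log stays on HDD
```

A real tiering engine would of course work on sub-file extents and move data in the background, but the ranking step is the core of the feature as described.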
Automated Tiering builds on the Storage Spaces capability introduced in Windows Server 2012, which allows Windows Server to work as a front-end file server for a large JBOD (just a bunch of disks) array.
Schutz stopped short of saying this approach could replace a full-fledged storage area network, but he did say it would be a good setup for smaller organizations that can't afford a SAN. He noted that many Web and cloud service providers skip SANs in favor of JBOD arrays, and said the full power of the technology lies in using multiple Windows Servers to run a very large storage array.
For instance, a total of 16 Windows Servers (split across four storage instances) could power a 64-node cluster offering 15 petabytes of raw storage. Each server would use SAS (Serial Attached SCSI) connections to 60-disk JBOD arrays of 4TB drives, for a total of 240 drives, or 960TB, per server. Microsoft's tested guideline is 240 drives per storage instance within a cluster, though there is no hard limit on how large the cluster can grow.
If the administrator then wanted to boost throughput with automated tiering, 10 percent of the 4TB drives could be replaced by speedier 500GB SSDs, still leaving over 13 petabytes of cold storage.