Storage tips from heavy-duty users

By John Brandon, Computerworld

Today, the library holds around 500 million objects per database, but Youkel expects that number to grow to as many as 5 billion. To prepare, Youkel's team has started rethinking the library's namespace system. "We're looking at new file systems that can handle that many objects," he says.

Gene Ruth, a storage analyst at Gartner, says that scaling up and out correctly is critical. Once a data store grows beyond 10PB, the time and expense of backing up and otherwise handling that much data climb quickly. One approach, he says, is to have infrastructure in a primary location that handles most of the data and another facility for secondary, long-term archival storage.
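As a rough illustration of that two-tier approach, here is a minimal sketch of a placement policy that keeps recently used data at the primary site and ages cold data out to a secondary archive facility. The 180-day threshold and the function names are illustrative assumptions, not anything Ruth or Gartner prescribes.

    from datetime import datetime, timedelta

    ARCHIVE_AFTER = timedelta(days=180)  # illustrative cutoff for "cold" data, not a Gartner figure

    def choose_tier(last_accessed):
        """Return 'primary' for recently used data, 'archive' for cold data."""
        return "archive" if datetime.utcnow() - last_accessed > ARCHIVE_AFTER else "primary"

    # An object last read a year ago belongs at the secondary, long-term archive site.
    print(choose_tier(datetime.utcnow() - timedelta(days=365)))  # -> archive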

2. Amazon.com

E-commerce giant Amazon.com is quickly becoming one of the largest holders of data in the world, with around 450 billion objects stored in its cloud for its customers' and its own storage needs. Alyssa Henry, vice president of storage services at Amazon Web Services, says that translates into about 1,500 objects for every person in the U.S. and one object for every star in the Milky Way galaxy.
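Those comparisons check out as back-of-the-envelope arithmetic, assuming a U.S. population of roughly 310 million at the time (an assumption, not a figure from the article):

    objects = 450e9               # objects stored, per the article
    us_population = 310e6         # assumed U.S. population at the time of writing
    print(round(objects / us_population))  # 1452, i.e. "about 1,500 objects per person"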

Some of the objects in the database are fairly massive -- up to 5TB each -- and could be databases in their own right. Henry expects single-object size to get as high as 500TB by 2016. The secret to dealing with massive data, she says, is to split the objects into chunks that can be handled in parallel, a process called parallelization.
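The chunk-and-parallelize idea can be sketched generically. This is not Amazon's internal code; the piece size mirrors the 1,000MB figure quoted for S3 below, and the worker function is a placeholder for whatever a real system would do with each piece (upload, hash, replicate).

    import concurrent.futures
    import os

    CHUNK_SIZE = 1_000 * 1024 * 1024  # 1,000MB pieces, matching the S3 figure cited below

    def iter_chunks(path, chunk_size=CHUNK_SIZE):
        """Yield (index, offset, length) for each fixed-size piece of a file."""
        size = os.path.getsize(path)
        for index, offset in enumerate(range(0, size, chunk_size)):
            yield index, offset, min(chunk_size, size - offset)

    def handle_chunk(path, index, offset, length):
        """Placeholder worker: read one piece of the file and report its size."""
        with open(path, "rb") as f:
            f.seek(offset)
            return index, len(f.read(length))

    def handle_in_parallel(path, workers=8):
        """Process all pieces of one large object concurrently."""
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(handle_chunk, path, i, off, ln)
                       for i, off, ln in iter_chunks(path)]
            return [f.result() for f in futures]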

In its S3 storage service, Amazon uses its own custom code to split files into 1,000MB pieces. This is a common practice, but what makes Amazon's approach unique is how the file-splitting process occurs in real time. "This always-available storage architecture is a contrast with some storage systems which move data between what are known as 'archived' and 'live' states, creating a potential delay for data retrieval," Henry explains.
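Amazon's server-side splitting code is not public, but the closest client-facing analogue is S3's multipart-upload API, which lets a caller push a large object in pieces. A minimal sketch using boto3 follows; the bucket and key names are placeholders, and the 100MB part size is an arbitrary choice for illustration (S3 requires parts of at least 5MB, except the last).

    import boto3

    PART_SIZE = 100 * 1024 * 1024  # illustrative part size

    def multipart_upload(path, bucket="example-bucket", key="big-object.bin"):
        """Upload a large file in pieces via S3's public multipart-upload API."""
        s3 = boto3.client("s3")
        upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
        parts = []
        with open(path, "rb") as f:
            part_number = 1
            while True:
                data = f.read(PART_SIZE)
                if not data:
                    break
                resp = s3.upload_part(Bucket=bucket, Key=key, PartNumber=part_number,
                                      UploadId=upload["UploadId"], Body=data)
                parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
                part_number += 1
        s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload["UploadId"],
                                     MultipartUpload={"Parts": parts})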

Another problem in handling massive data is corrupt files. Most companies don't worry about the occasional corrupt file. Yet, when dealing with almost 450 billion objects, even low failure rates become challenging to manage.
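To see why, consider a rough worked example with an assumed, purely illustrative corruption probability of one in a billion per object per year; even at that rate, a 450-billion-object store expects hundreds of corrupt objects annually.

    objects = 450e9
    assumed_corruption_rate = 1e-9   # per object per year; an illustrative assumption, not Amazon's figure
    print(objects * assumed_corruption_rate)  # 450.0 corrupted objects expected per year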

Amazon's custom software checks every piece of data for bad memory allocations, calculates checksums, and gauges how quickly an error can be repaired in order to deliver the throughput needed for cloud storage.
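That integrity-checking software is proprietary, but the underlying idea can be sketched generically: record a checksum when a chunk is written and re-verify it on every read, flagging mismatches for repair from another copy. A minimal sketch, with hashlib's SHA-256 standing in for whatever checksum Amazon actually uses:

    import hashlib

    def checksum(data):
        """Digest recorded alongside each stored chunk at write time."""
        return hashlib.sha256(data).hexdigest()

    def verify(data, expected):
        """Re-hash on read; a mismatch flags a corrupt chunk for repair from a replica."""
        return checksum(data) == expected

    stored = b"chunk contents"
    digest = checksum(stored)                      # computed when the chunk is written
    assert verify(stored, digest)                  # a clean read passes
    assert not verify(b"chunk c0ntents", digest)   # a flipped byte is detected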

3. Mazda

