Talk about big data: How the Library of Congress can index all 170 billion tweets ever posted

By Brandon Butler, Network World |  Big Data, Library of Congress, Social Networking

Researchers are already clamoring for access to the data -- the library says it has fielded more than 400 inquiries. The project runs in parallel with Twitter's own effort to give users a record of their Twitter history, including an itemized list of every tweet they have posted from their account.

The Library of Congress is no stranger to managing big data: Since 2000, it has been collecting archives of websites containing government data, a repository it says is already 300TB in size. But the Twitter archive poses a new problem, officials say, because the library wants to make the information easily searchable. In its current tape repository form, a single search of the 2006-2010 archive alone -- just one-eighth the size of the entire volume -- can take up to 24 hours. "The Twitter collection is not only very large, it also is expanding daily, and at a rapidly increasing velocity," the library notes. "The variety of tweets is also high, considering distinctions between original tweets, re-tweets using the Twitter software, re-tweets that are manually designated as such, tweets with embedded links or pictures and other varieties."

A solution is not readily apparent. The library has begun studying distributed and parallel computing systems, but says they are too expensive. "To achieve a significant reduction of search time, however, would require an extensive infrastructure of hundreds if not thousands of servers. This is cost prohibitive and impractical for a public institution."


So what's the library to do? Big data experts say there are a variety of options to consider. It would probably make the most sense for library officials to pick one tool for storing the data, another for indexing it, and yet another to run queries against it, says Mark Phillips, director of community and developer evangelism at Basho, maker of Riak, an open source distributed database built on a simple, massively scalable key-value store.
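To make that separation concrete, here is a deliberately simplified sketch of the store/index/query split Phillips describes. It is purely illustrative -- a stand-in dictionary plays the role of a distributed key-value store, and the inverted index and query function are assumptions for the example, not Riak's API or anything the library has said it will build.

```python
from collections import defaultdict

# 1) Storage layer: tweets kept by key (e.g. tweet ID).
#    A dict stands in for a distributed key-value store such as Riak.
tweet_store = {}

# 2) Index layer: an inverted index mapping terms to tweet IDs,
#    maintained separately from the storage layer.
term_index = defaultdict(set)

def store_tweet(tweet_id, text):
    """Write the tweet to storage and update the term index."""
    tweet_store[tweet_id] = text
    for term in text.lower().split():
        term_index[term].add(tweet_id)

# 3) Query layer: consult the index first, then fetch matching
#    tweets from the storage layer by key.
def search(term):
    return [tweet_store[tid] for tid in term_index.get(term.lower(), set())]

store_tweet("1", "Library of Congress to archive tweets")
store_tweet("2", "Big data tools for large archives")
print(search("tweets"))  # -> ['Library of Congress to archive tweets']
```

The point of the split is that each layer can be scaled or swapped independently: the key-value store handles raw volume, while the index is what keeps query times far below the 24-hour tape searches the library describes.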

Big data management tools have turned into a robust industry, with both proprietary and open source options available at different price points for different use cases. One of the biggest questions Library of Congress officials will have to tackle is how hands-on they're willing to be in creating and managing the system. If the library wants to take an open source route, there are a variety of tools for building and managing databases -- everything from a Hadoop cluster to a Greenplum database, which specializes in high-throughput read/write workloads. Those can be combined with Apache Solr, an open source search platform. Open source gives developers a free way to take the source code and build a system on commodity hardware, but it can also demand a lot of development work on the back end. The library could also go the proprietary -- and more expensive -- route of using database software from the likes of Oracle or SAP.
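As a rough idea of what the Solr piece of such a stack looks like in practice, the sketch below indexes and queries a couple of tweet documents through the pysolr client. The core name ("tweets") and the field names are assumptions for illustration; a real deployment would define its own schema and run Solr across a cluster rather than a single local instance.

```python
import pysolr

# Assumes a Solr core named "tweets" whose schema defines
# "id", "text", and "created_at" fields.
solr = pysolr.Solr("http://localhost:8983/solr/tweets",
                   always_commit=True, timeout=10)

# Index a small batch of tweet documents.
solr.add([
    {"id": "1",
     "text": "Library of Congress to archive every public tweet",
     "created_at": "2013-01-04T12:00:00Z"},
    {"id": "2",
     "text": "Researchers request access to the tweet archive",
     "created_at": "2013-01-05T09:30:00Z"},
])

# Full-text query over the "text" field, newest results first.
results = solr.search("text:archive", sort="created_at desc", rows=10)
for doc in results:
    print(doc["id"], doc["text"])
```

In a setup like this, a Hadoop or Greenplum layer would hold and process the full archive, while Solr maintains the searchable index that answers queries in seconds instead of hours.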


Originally published on Network World.