Talk about big data: How the Library of Congress can index all 170 billion tweets ever posted


Either way, the amount of data the library has for the Twitter project is not insurmountable. 133TB, and growing, is a lot of data, but Basho has customers managing petabytes on its platform, Phillips says. If the library can track how much the database grows each month or quarter, then as long as it has the hardware capacity to store the data, the database software should be able to handle it.
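The planning step described above amounts to simple arithmetic: measure growth per period, then project forward to see how much hardware capacity to provision. A minimal sketch, where the 133TB starting point comes from the article but the quarterly growth rate and horizon are illustrative assumptions, not figures from the library:

```python
# Hypothetical capacity projection for the tweet archive. The 133 TB
# current size is from the article; the 15 TB/quarter growth rate and
# the 8-quarter horizon are assumed for illustration only.

def project_storage_tb(current_tb: float, growth_tb_per_quarter: float,
                       quarters: int) -> float:
    """Linear projection of archive size after a number of quarters."""
    return current_tb + growth_tb_per_quarter * quarters

# 133 TB today, an assumed 15 TB of new tweets per quarter, two years out.
print(project_storage_tb(133, 15, 8))  # 253.0
```

A real plan would likely use a measured (and possibly accelerating) growth curve rather than a fixed linear rate, but the principle is the same: as long as projected capacity stays ahead of the curve, the database software is not the bottleneck.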


Should the library use the cloud? In theory, the library could use a public cloud like Amazon Web Services to store all this data and have AWS supply the constantly increasing hardware capacity needed to hold these tweets. Seth Thomas, a Basho engineer, isn't sure that would be cost-effective over the long term, though. Because the library plans to keep this data forever, a hybrid architecture is likely the more fiscally sound choice: store the data on-site and use a cloud-based service for the analytics tool. That would let queries dynamically scale resources as needed to execute a search, enabling the final system to handle the range of requests placed on it.
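The hybrid split described above can be sketched in a few lines: durable tweet storage stays on-site, while each search fans out to a pool of cloud workers sized to the query. Everything here (`LocalArchive`, `CloudWorkerPool`, the two-records-per-worker ratio) is hypothetical, invented for illustration, and not a real Basho or AWS API:

```python
# Sketch of the hybrid architecture the article describes: on-site
# storage plus elastically scaled cloud analytics. All names and
# numbers are illustrative assumptions, not a real system.
import math

class LocalArchive:
    """Stands in for the on-site tweet store."""
    def __init__(self, records):
        self.records = records

    def scan(self):
        return iter(self.records)

class CloudWorkerPool:
    """Stands in for elastically provisioned cloud analytics workers."""
    def __init__(self, records_per_worker=2):
        self.records_per_worker = records_per_worker

    def workers_needed(self, record_count):
        # Scale resources with the size of the search, per the article.
        return max(1, math.ceil(record_count / self.records_per_worker))

    def run_query(self, records, predicate):
        n = self.workers_needed(len(records))
        # A real pool would shard `records` across n workers; here we
        # filter serially and just report the worker count provisioned.
        return [r for r in records if predicate(r)], n

archive = LocalArchive(["big data", "cloud", "riak", "twitter"])
pool = CloudWorkerPool()
hits, workers = pool.run_query(list(archive.scan()),
                               lambda t: "a" in t)
print(hits, workers)  # ['big data', 'riak'] 2
```

The design choice being illustrated is that the expensive, bursty part of the workload (query execution) rents capacity on demand, while the permanent part (the archive itself) sits on hardware the library controls forever.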

However the library decides to index the tweets, just remember next time you update your status on Twitter, it's being recorded somewhere.

Network World staff writer Brandon Butler covers cloud computing and social collaboration. He can be found on Twitter at @BButlerNWW.

Read more about LANs and routers in Network World's LANs & Routers section.

This story, "Talk about big data: How the Library of Congress can index all 170 billion tweets ever posted" was originally published by Network World.
