IBM remolds DB2 10.5 as a Hadoop killer

With a new set of acceleration technologies, DB2 10.5 could replace data warehouses or stand-alone in-memory databases, IBM claims

IDG News Service | Software

In the new update of DB2, released Friday, IBM has added a set of acceleration technologies, collectively code-named BLU, that promise to make the venerable database management system (DBMS) better suited for running large in-memory data analysis jobs. "BLU has significant benefits for the analytic and reporting workloads," said Tim Vincent, IBM's vice president and chief technology officer for information management software.

Developed by the IBM Research and Development Labs, BLU (a development code name that stood for Big data, Lightning fast, Ultra easy) is a bundle of novel techniques for columnar processing, data deduplication, parallel vector processing and data compression.

The focus of BLU was to enable databases to be "memory optimized," Vincent said. "It will run in memory, but you don't have to put everything in memory." The BLU technology can also eliminate the need for a lot of hand-tuning of SQL queries to boost performance.

Because of BLU, DB2 10.5 could speed data analysis by 25 times or more, IBM claimed. This improvement could eliminate the need to purchase a separate in-memory database -- such as Oracle's TimesTen -- for speedy data analysis and transaction processing jobs. "We're not forcing you from a cost model perspective to size your database so everything fits in memory," Vincent said.

On the Web, IBM provided an example of how a 32-core system using BLU technologies could execute a query against a 10TB data set in less than a second.

"In that 10TB, you're [probably] interacting with 25 percent of that data on day-to-day operations. You'd only need to keep 25 percent of that data in memory," Vincent said. "You can buy today a server with a terabyte of RAM and 5TB of solid state storage for under $35,000."

Also, using DB2 could cut the labor costs of running a separate data warehouse, given that the pool of available database administrators is generally larger than that of data warehouse experts. In some cases, it could even serve as an easier-to-maintain alternative to the Hadoop data processing platform, Vincent said.

Among the new technologies is a compression algorithm that stores data in such a way that, in some cases, the data does not need to be decompressed before being read. Vincent explained that the compression preserves the order of the data, which means predicate operations, such as those in a WHERE clause, can be evaluated without decompressing the dataset.
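The general idea behind evaluating a predicate on order-preserving compressed data can be sketched in a few lines of Python. This is only an illustration of the technique, not IBM's actual BLU encoding: the dictionary scheme, function names, and the less-than predicate are assumptions made for the example.

```python
# Minimal sketch of order-preserving dictionary encoding (not BLU itself):
# smaller values get smaller codes, so a WHERE-style comparison can be run
# directly on the encoded column without decompressing it.

def encode_column(values):
    """Build an order-preserving dictionary and encode the column."""
    ordered = sorted(set(values))
    code_of = {v: i for i, v in enumerate(ordered)}
    encoded = [code_of[v] for v in values]   # the "compressed" column
    return encoded, code_of

def where_less_than(encoded, code_of, threshold):
    """Return row positions where value < threshold, using codes only."""
    # Order is preserved, so comparing codes is equivalent to comparing values.
    candidates = [c for v, c in code_of.items() if v >= threshold]
    threshold_code = min(candidates) if candidates else len(code_of)
    return [i for i, code in enumerate(encoded) if code < threshold_code]

# Example: rows where price < 40, found without decoding the column.
prices = [10, 55, 40, 10, 72, 33]
encoded, code_of = encode_column(prices)
print(where_less_than(encoded, code_of, 40))   # -> [0, 3, 5]
```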
