Traditionally, MySQL Cluster has been widely deployed in the telecommunications field. "You can go to any continent and find mobile networks that have deployed MySQL Cluster," Ulin said. Telecommunications companies value the cluster software's ability to scale easily, as well as its ability to retrieve data quickly no matter how recently it was written to disk. They also like that MySQL Cluster can run on low-cost commodity servers, Ulin said.
With this release, Oracle has prepped MySQL Cluster with new features that should make it particularly well-suited for large-scale Web applications, Ulin said. "We've seen telecoms and Web merging together. The requirements for the telecommunications industry are becoming requirements for the Web industry: [very] low latency, high availability and scaling. We see a very good fit here," Ulin said. The software would be suitable for Web service tasks such as user profile management, session management, online gaming and high-volume OLTP (online transaction processing).
MySQL Cluster 7.2 is the first version to offer access to its data through a Memcached API. Used by many large Web service providers such as Facebook, Memcached keeps a hash table of commonly accessed database items in a server's working memory, where applications can fetch them quickly through an API (application programming interface).
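The caching pattern Memcached enables can be sketched in a few lines. This is an illustrative sketch only: a plain Python dict stands in for the Memcached server's in-memory hash table, and `query_database` is a hypothetical stand-in for a slow SQL lookup. A real deployment would use a Memcached client library speaking the Memcached protocol; with MySQL Cluster 7.2, those get/set operations are served by the cluster's data nodes themselves.

```python
# Sketch of the cache-aside pattern Memcached supports.
# The dict below stands in for Memcached's in-memory hash table;
# query_database is a hypothetical placeholder for a slow SQL query.

cache = {}  # stand-in for the Memcached server

def query_database(key):
    # Placeholder for a database lookup; returns a fabricated row.
    return {"user_id": key, "name": f"user-{key}"}

def get_user(key):
    """Return a user record, consulting the in-memory cache first."""
    if key in cache:
        return cache[key]          # fast path: served from memory
    row = query_database(key)      # slow path: go to the database
    cache[key] = row               # populate the cache for next time
    return row

first = get_user(42)   # misses the cache, queries the database
second = get_user(42)  # now served from the in-memory hash table
assert first == second
```

The design choice the article highlights is that Cluster 7.2 collapses the two layers: because the Memcached interface fronts the database itself, cached items are also durable, which is the "persistent Memcached" Monash refers to.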
"Persistent Memcached is a useful thing," said Curt Monash of Monash Research, noting that sales of the Couchbase NoSQL database, built on Memcached, have been quite strong.
"MySQL has always given good performance when used just as a key-value store," Monash said. "So it's reasonable to hope the Memcached interface will have good performance out of the box."
MySQL Cluster already offered a similar direct-access capability through another API, called NDB (network database). That API was proprietary to Oracle, however. Supporting Memcached will let more administrators, namely those already familiar with Memcached, work easily with MySQL Cluster.
The software also introduces a feature called adaptive query localization, which can reduce the time it takes to execute complex queries. Complex joins have been the "Achilles' heel" of earlier versions of MySQL Cluster, Ulin admitted. Such queries combine data from multiple tables, which is a computationally intensive operation, especially with large data sets. In its previous incarnations, MySQL Cluster executed complex queries by collecting all the data on one server and performing the joins there. "You might have to shuffle 2 gigabytes of data up to the MySQL server just to get only a few lines" of resulting data, Ulin said.
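The saving Ulin describes can be illustrated with a toy model. This is not Cluster's internal mechanism, just a sketch of the general idea of pushing join evaluation down to where the data lives: the old strategy ships every row to the MySQL server before joining, while localizing the query ships only the rows that survive a filter applied near the data. The table names and sizes here are invented for illustration.

```python
# Toy comparison: shipping all rows to a central server for a join,
# versus evaluating the join condition near the data and shipping
# only matching rows. Tables and sizes are fabricated examples.

orders = [{"order_id": i, "customer_id": i % 100} for i in range(10_000)]
customers = [{"customer_id": c, "vip": (c == 7)} for c in range(100)]

# Old approach: every row from both tables crosses the network
# so the central server can perform the join.
shipped_old = len(orders) + len(customers)

# Localized approach (conceptually): filter and match near the data,
# then ship only the qualifying rows and keys.
vip_ids = {c["customer_id"] for c in customers if c["vip"]}
matches = [o for o in orders if o["customer_id"] in vip_ids]
shipped_new = len(matches) + len(vip_ids)

print(shipped_old, shipped_new)  # far fewer rows cross the network
```

In this fabricated case the localized plan moves about a hundredth of the rows, which mirrors Ulin's point about shuffling gigabytes to produce a few lines of output.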