Oracle Database 12c review: Finally, a true cloud database

Pluggable databases bring a new level of efficiency and ease to database consolidation, while a wealth of other new features address performance, availability, and more

By Riyaj Shamsudeen, InfoWorld |  Cloud Computing, databases, Oracle

In development for roughly four years, Oracle Database 12c introduces so many important new capabilities in so many areas -- database consolidation, query optimization, performance tuning, high availability, partitioning, backup and recovery -- that even a lengthy review has to cut corners. Nevertheless, in addition to covering the big ticket items, I'll give a number of the lesser enhancements their due.

Having worked with the beta for many months, I can tell you that the quality of the software is also impressive, starting with a smooth RAC cluster installation. As with any new software release, I did encounter a few minor bugs; hopefully these have been resolved in the production release that arrived yesterday.


Pluggable databases

Consolidation is an important business strategy for reducing infrastructure and operational costs. On many production database servers, a large share of CPU cycles goes unused. By consolidating many databases onto fewer database servers, both the hardware and the operational staff can be used more effectively.

But database consolidation is easier said than done. Critical issues such as database workload characteristics, the ability to maintain performance service levels, and the point-in-time recovery needs of different databases must be considered during consolidation efforts. Ideally, consolidation would not only reduce the need to purchase and allocate physical CPU, RAM, and I/O (because physical servers are underutilized), but would also reduce actual resource consumption (because multiple instances share some overhead). However, in the past, we've seen that co-locating databases on the same physical server does not reduce overall resource usage.

Oracle's new pluggable database feature reduces the risk of consolidation because the DBA can easily plug an existing database into, or unplug it from, a container database. There is no need to change any application code. When a user connects to a pluggable database, the environment looks exactly as if the user had connected to a traditional database.
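
For example, moving a pluggable database between container databases boils down to a few SQL statements. Here is a minimal sketch; the PDB name and file paths are hypothetical, and clauses such as NOCOPY depend on your storage layout:

-- In the source container database: close and unplug the PDB
ALTER PLUGGABLE DATABASE sales_pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE sales_pdb UNPLUG INTO '/u01/app/oracle/sales_pdb.xml';

-- In the target container database: plug it in and open it
CREATE PLUGGABLE DATABASE sales_pdb
  USING '/u01/app/oracle/sales_pdb.xml'
  NOCOPY TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE sales_pdb OPEN;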

Further, pluggable databases do lower resource consumption. Memory and processes are owned by the container database and shared by all pluggable databases, improving overall resource usage. It is also easy to unplug a pluggable database and convert it to a traditional database if required. In addition, you can back up and recover pluggable databases independently of the container database, and you can perform a point-in-time recovery of a pluggable database. Finally, Resource Manager can be used to control the resources consumed by a pluggable database.
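
As a rough sketch of the Resource Manager piece, a CDB-level plan can ration CPU shares and parallel servers among pluggable databases. The plan and PDB names below are hypothetical, and the directive parameters are my assumptions about the 12c API:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'consol_plan',
    comment => 'Ration resources among consolidated PDBs');
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'consol_plan',
    pluggable_database    => 'sales_pdb',
    shares                => 3,     -- relative CPU share for this PDB
    utilization_limit     => 70,    -- cap on CPU utilization, in percent
    parallel_server_limit => 50);   -- cap on parallel servers, in percent
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/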

In short, database consolidation can be done much more effectively with pluggable databases than ever before. Finally, we have a true cloud database.

Optimizer features

Version 12c introduces a few useful SQL Optimizer features, and most of these are automatically enabled.

Although the Optimizer has matured over the years, it is still not uncommon for it to choose an inefficient execution plan due to incorrect cardinality estimates, invalid statistics, or even stale statistics. This can have dire results: a SQL statement estimated to run for a few seconds might take hours to execute if the chosen execution plan is not optimal.

Cardinality Feedback -- a feature introduced in Version 11g -- monitors the execution of SQL statements and reoptimizes them if the actual cardinality (that is, the number of rows returned by the query) varies greatly from the cardinality estimates. A new feature in 12c called Adaptive Plan takes the next step in SQL auto-tuning. Instead of choosing the final execution plan at parse time, the Optimizer defers the final choice among multiple sub-plans until execution time.

Essentially, the Optimizer inserts a piece of code, aptly named the Statistics Collector, into SQL execution. The Statistics Collector buffers rows from the early steps in the execution plan, and depending on the number of rows retrieved, the Optimizer chooses the final execution plan. The chosen plan is reused for subsequent executions of the statement if the cursor is shared. A just-in-time Optimizer!
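
To see which sub-plan the Optimizer actually settled on, DBMS_XPLAN can annotate the cursor with the adaptive details. A quick sketch, run in the same session right after executing the statement:

-- The '+ADAPTIVE' format flag marks the plan steps that were switched
-- on or off at execution time.
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => '+ADAPTIVE'));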

Adaptive Reoptimization, similar to the Cardinality Feedback feature, affects only the subsequent executions of a SQL statement. If the Optimizer estimates are vastly different from the execution statistics, then the Optimizer uses the execution statistics as a feedback mechanism and reparses the SQL statement during the next execution.
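
You can spot statements that are candidates for reoptimization on their next execution by checking V$SQL; a quick sketch, assuming the 12c IS_REOPTIMIZABLE column and a made-up filter:

SELECT sql_id, child_number, is_reoptimizable
  FROM v$sql
 WHERE sql_text LIKE 'SELECT /* demo */%';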

Generally, the quality of the statistics directly equates to the quality of execution plans generated by the Optimizer, bugs notwithstanding. In Version 12c, if the quality of available statistics is not good enough, then the Optimizer can dynamically sample the tables to recollect statistics. This dynamic statistics collection uses the same methods as dynamic sampling available in earlier releases, except that, in Database 12c, these statistics are also stored for future use.
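
Dynamic statistics are driven by the optimizer_dynamic_sampling parameter; my understanding is that the new automatic level in 12c is 11, where the Optimizer decides for itself when and how much to sample:

-- Let the Optimizer decide when dynamic statistics are worth gathering
ALTER SESSION SET optimizer_dynamic_sampling = 11;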

Performance features

Version 12c introduces numerous performance enhancements. I will review just a few of the more important ones.

Traditionally, queries with union or union all branches execute one after another, meaning that one branch of the union or union all is executed, followed by the next branch, and so on. Version 12c introduces concurrent execution of union branches, meaning that one set of parallel servers will be executing one branch, a second set of parallel servers will be executing a different union all branch, and so on, all at the same time.

This concurrent execution feature will be very useful if the majority of the query execution time is spent outside of the database, such as when waiting for a SQL*Net message from a remote database link or for an SOA call response. The effective use of this new feature could reduce wait time dramatically, improving SQL elapsed time. (Incidentally, with Version 12c, SQL*Net packets can be compressed for database traffic, helping to reduce latency in a WAN environment.)
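
A hedged sketch of both ideas follows. The table and database link names are made up, and PQ_CONCURRENT_UNION is the hint I understand controls concurrent branch execution; SQL*Net compression itself is switched on in sqlnet.ora, as noted in the comments:

-- Ask for concurrent execution of the UNION ALL branches
SELECT /*+ PQ_CONCURRENT_UNION */ order_id, amount
  FROM orders_east@east_link
UNION ALL
SELECT order_id, amount
  FROM orders_west@west_link;

-- SQL*Net compression is enabled in sqlnet.ora, for example:
--   SQLNET.COMPRESSION = on
--   SQLNET.COMPRESSION_LEVELS = (high)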

A classic problem with parallelism is that all of the parallel servers required for an operation may not be available at the moment a parallel statement starts, leading to parallel statements executing with a smaller number of parallel servers. Parallel statement queuing -- a feature introduced in Version 11.2 -- resolved the problem by queuing sessions whenever sufficient parallel servers were not available. With Database 12c, the user can construct multiple parallel statement queues using the database resource manager, bypass parallel statement queuing for critical statements, and group multiple parallel statements together to reduce wait time in the parallel statement queues.
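
For instance, marking a consumer group's statements as critical so that they bypass the queue is a Resource Manager directive. This sketch assumes the 12c parallel_stmt_critical parameter and uses made-up plan and group names:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                   => 'daytime_plan',
    group_or_subplan       => 'critical_batch',
    comment                => 'Critical statements skip the parallel queue',
    parallel_stmt_critical => 'BYPASS_QUEUE');  -- assumed 12c parameter
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/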

Also new in Version 12c, multiple indexes can be created on the same set of columns. For example, the user can create bitmap and b-tree indexes on the same set of columns, or even a unique and a non-unique index on the same set of columns. Multiple indexes will be useful whenever you want to convert an index from one type to another, or convert a partitioned index to a non-partitioned index or vice versa, with minimal downtime. Of course, the user can also choose to maintain multiple indexes on the same set of columns permanently.
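
The catch is that only one of the indexes can be visible at a time, so the additional index is created invisible and the two are swapped when you are ready to cut over. A minimal sketch with hypothetical names:

-- An ordinary b-tree index (orders_status_ix) already exists on ORDERS(STATUS)
CREATE BITMAP INDEX orders_status_bix
  ON orders (status)
  INVISIBLE;

-- Cut over: expose the bitmap index, hide the b-tree index
ALTER INDEX orders_status_bix VISIBLE;
ALTER INDEX orders_status_ix  INVISIBLE;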

High availability

In Oracle Database, a service is a connection property used to connect to a desired instance. We commonly use services to balance the workload among the instances of a Real Application Cluster, for example. Version 12c introduces Global Data Services, which balances the workload not only among instances, but also among databases.

Imagine a global environment where different databases are used to serve different segments of users. For example, a database in New York serves users on the East Coast, while a database in San Francisco serves users on the West Coast, and both of these databases are synchronized by replication software. In Database 12c, services are truly global, and a global equivalent of the familiar SCAN (Single Client Access Name) listener, called the Global Data Listener, is utilized to redirect the application connection to a database that can better serve the specific client. This feature also improves availability because new connections to failed databases can be redirected quickly to a surviving database.

After a service failover to another instance, applications usually do not know the status of in-flight transactions. While the changes made by a committed transaction are permanent, as dictated by the ACID properties of Oracle Database, commit status messages to the application are transient. The result is that an instance failure creates a classic dilemma: If the application reissues an already committed transaction it can lead to logical corruption, but if the application does not reissue a failed transaction then changes can be permanently lost.

Database 12c resolves this dilemma with Transaction Guard, a new feature that maintains transaction status permanently. Transaction Guard assigns a unique global transaction ID to each transaction, and maintains the status of that global transaction for a pre-defined period of time. After a failover, the application can requery the status of a transaction and take corrective action deterministically.
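
On the server side, the status check is exposed through the DBMS_APP_CONT package, as I understand it. The sketch below assumes the application passes in the logical transaction ID it captured from its client driver before the failure:

DECLARE
  l_committed           BOOLEAN;
  l_user_call_completed BOOLEAN;
BEGIN
  DBMS_APP_CONT.GET_LTXID_OUTCOME(
    client_ltxid        => :ltxid,   -- supplied by the client driver
    committed           => l_committed,
    user_call_completed => l_user_call_completed);

  IF NOT l_committed THEN
    NULL;  -- safe for the application to resubmit the transaction
  END IF;
END;
/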

Oracle did not stop at merely providing a mechanism for identifying transaction status. Version 12c also introduces Application Continuity. With this feature, a new client-side replay driver remembers submitted SQL statements, and after a failure is detected, the statements are replayed to insert the failed transactions into the database. Note that code changes may be required to safely integrate the replay driver with the application.

Partitioning enhancements

No database release is complete without partitioning enhancements, and Database 12c is no exception.

Traditionally, indexes are created in all partitions of a partitioned table. Version 12c introduces Partial Indexing, which allows you to create indexes on a partial set of partitions. For example, if you have partitions p1, p2, p3 ... p10 in a table, you could choose to create an index on partitions p1, p2, and p3 only, and not create indexes on the other partitions. This feature will be useful in time-variant partitioning schemes, where, typically, older partitions are mostly queried and newer partitions are heavily updated.

We can reduce the transaction workload by deferring indexes on transaction-intensive partitions and adding the indexes only when those partitions become less transaction-intensive. The user can also build new indexes in multiple steps, reducing the locking and resource consumption issues associated with a traditional index rebuild. As you probably guessed, partition pruning is a key requirement for the Optimizer to choose partial indexes, and partial indexes are not supported for unique indexes.
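
In SQL, the indexing property is declared per partition, and a local index built with INDEXING PARTIAL follows it. A minimal sketch with hypothetical objects:

CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER)
PARTITION BY RANGE (sale_date) (
  PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01') INDEXING OFF,
  PARTITION p2013 VALUES LESS THAN (DATE '2014-01-01') INDEXING ON);

-- Index segments are created only for partitions marked INDEXING ON
CREATE INDEX sales_amount_ix ON sales (amount) LOCAL INDEXING PARTIAL;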

Global indexes cause problems for partition-level operations. For example, if a partition is dropped, then a global index on that table must be updated. While the drop and truncate partition commands are fast (because they are DDL operations), updates to the global index entries are far slower, leading to availability issues during DDL operations. For this reason, Version 12c decouples global index maintenance from DDL command execution. Only the global index metadata is updated during a partition DDL operation, and the actual index maintenance can be deferred until later. A new scheduled job lets you clear stale global index entries at, say, 2 a.m., improving the viability of global indexes on critical tables.
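
A short sketch of the workflow, assuming the DBMS_PART cleanup procedure handles the deferred housekeeping (object names are hypothetical):

-- Fast DDL: orphaned global index entries are only marked, not removed
ALTER TABLE sales DROP PARTITION p2012 UPDATE GLOBAL INDEXES;

-- Later, during a quiet period, purge the orphaned index entries
EXEC DBMS_PART.CLEANUP_GIDX('SALES_OWNER', 'SALES');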

Also in Database 12c, reference partitioning now truncates dependent partitions. With a reference partitioning scheme, the tables at both ends of a referential constraint are partitioned along the same partitioning boundaries. From 12c onward, truncating or dropping a parent table partition cascades to the child table partitions.
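
A one-line sketch, assuming the child table's foreign key was created with ON DELETE CASCADE and the partition name is hypothetical:

-- Truncates the matching partitions in the reference-partitioned child table
ALTER TABLE orders TRUNCATE PARTITION p2012 CASCADE;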

Backup and recovery

Traditionally, restoring a single table is a cumbersome process involving a tablespace restore, exporting the restored table from the restored tablespace, and then importing it into the production database. The new restore table command in Recovery Manager (RMAN) simplifies this task.
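
Under the covers, RMAN builds a temporary auxiliary instance, restores only what it needs, and imports the table back. A hedged sketch (the schema, timestamp, and paths are hypothetical):

RMAN> RECOVER TABLE hr.employees
        UNTIL TIME "TO_DATE('2013-07-01 09:00:00','YYYY-MM-DD HH24:MI:SS')"
        AUXILIARY DESTINATION '/u01/aux'
        REMAP TABLE hr.employees:employees_restored;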

Re-instantiating a primary database to a standby or Data Guard site is a huge problem for VLDB (Very Large Database) sites, especially if the sites are separated by thousands of miles. Prior to Database 12c, RMAN did not support native compression during active duplication, so DBAs generally resorted to other methods of restoring from backup, such as copying files over the network through a compressed pipe, or even shipping a tape. In Database 12c, RMAN supports datafile copies over the network with compression, which will ease database cloning efforts tremendously. Likewise, the Active Duplicate command supports network compression during the data transfer, enabling faster clones directly from the production database.
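
A rough sketch of an active duplication that ships compressed backup sets over the network; the connection setup is omitted, and the exact option names should be treated as assumptions to verify:

RMAN> DUPLICATE TARGET DATABASE
        FOR STANDBY
        FROM ACTIVE DATABASE
        USING COMPRESSED BACKUPSET;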

Executing SQL statements from the RMAN command line is not only unwieldy, but the syntax is not exactly user-friendly. Version 12c enhances the RMAN command line so that you can execute SQL statements natively in RMAN without needing the additional SQL clause.
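
In 12c the RMAN prompt accepts ordinary SQL (and even DESCRIBE) directly, for example:

RMAN> SELECT tablespace_name FROM dba_tablespaces;
RMAN> DESCRIBE v$datafile;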

You can also perform cross-platform backups and restores in Database 12c without needing to explicitly convert endianness. If only Oracle also supplied a method to convert the endianness of archive log files, it would greatly simplify migrations between platforms of different endianness. Without that ability, we must resort to replication products, such as GoldenGate, to reduce database downtime during a platform migration.

Miscellaneous goodness

In addition to the major improvements outlined above, Database 12c introduces many relatively minor enhancements that will be important to DBAs. Here are a few of the most noteworthy.

Materialized View Refresh. Non-atomic refreshes of a materialized view can affect the performance of user queries due to the need to maintain read consistency. Delete statements are used for non-atomic refreshes, so if another SQL query accesses the materialized view concurrently, that query will suffer because it must apply an enormous amount of undo to reconstruct read-consistent blocks. Version 12c introduces a new optimization: instead of deleting from the original table, a new, identical table is populated with the refreshed data. At the completion of the refresh, the tables are swapped, completing the out-of-place refresh. Users can query the original table without incurring any additional overhead, making it convenient to refresh a materialized view with minimal impact.
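
My understanding is that the out-of-place behavior is requested through a new parameter on DBMS_MVIEW.REFRESH; a hedged sketch with a hypothetical materialized view name:

BEGIN
  DBMS_MVIEW.REFRESH(
    list         => 'SALES_MV',
    method       => 'C',      -- complete refresh
    out_of_place => TRUE);    -- assumed 12c parameter for out-of-place refresh
END;
/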

IPv6 support. Many organizations are gearing up to certify IPv6 support as IPv4 address space becomes exhausted. Database 12c supports IPv6 for public network addresses. It does not support IPv6 for private network addresses, but this is probably a non-issue.
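
As a small illustration, an IPv6 listener address in a connect descriptor simply wraps the literal in square brackets; the address, port, and service name here are hypothetical:

(DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCP)(HOST = [2001:db8::25])(PORT = 1521))
  (CONNECT_DATA = (SERVICE_NAME = orcl)))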

Parallel upgrade. This is a feature I have wanted for many years. In highly available environments, keeping the database down for an upgrade, even for a few hours, is cost-prohibitive. Rolling upgrades are not always possible for major database software upgrades, and tuning the upgrade process itself is always a risky adventure. Version 12c parallelizes the database upgrade to reduce upgrade-related downtime.
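
The parallel upgrade is driven by the catctl.pl utility; a sketch of how I expect it to be invoked, with the degree of parallelism passed as an argument:

$ cd $ORACLE_HOME/rdbms/admin
$ $ORACLE_HOME/perl/bin/perl catctl.pl -n 4 catupgrd.sql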

Password files in ASM. Another important Database 12c feature is the ability to store password files in ASM (Automatic Storage Management). In RAC, changing passwords for privileged users is a cumbersome task. Even with the use of NFS or another shared file system for the password file, grants must be executed in all instances. In Database 12c, password files can be stored in ASM -- and grants need to be executed in just one instance.
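
A sketch of creating the shared password file directly in an ASM disk group; the disk group, path, and format option are assumptions to verify against the orapwd documentation:

$ orapwd file='+DATA/ORCL/orapworcl' dbuniquename='ORCL' format=12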

The wealth of new features in Oracle Database 12c gives Oracle shops many reasons to consider upgrading. If you manage a cloud database, or wish to improve resource utilization by consolidating multiple databases on shared hardware, you should consider upgrading sooner rather than later. The new pluggable databases feature is extremely useful for co-locating multiple applications in a single database instance.

If you frequently clone databases over a WAN, you should consider upgrading to take advantage of RMAN's native compression when transferring files. If your employer mandates IPv6 protocol support, you should look at upgrading. Further, sites that make use of numerous materialized views can benefit from the new refresh methods available in the new version.

And finally, if you are currently in the process of designing or developing an application with high-availability focus, then you should take a close look at the Transaction Guard and Application Continuity features introduced in 12c.
