Pluggable databases bring a new level of efficiency and ease to database consolidation, while a wealth of other new features address performance, availability, and more
In development for roughly four years, Oracle Database 12c introduces so many important new capabilities in so many areas -- database consolidation, query optimization, performance tuning, high availability, partitioning, backup and recovery -- that even a lengthy review has to cut corners. Nevertheless, in addition to covering the big-ticket items, I'll give a number of the lesser enhancements their due.
Having worked with the beta for many months, I can tell you that the quality of the software is also impressive, starting with a smooth RAC cluster installation. As with any new software release, I did encounter a few minor bugs. Hopefully these have been resolved in the production release that arrived yesterday.
Pluggable databases
Consolidation is an important business strategy for reducing infrastructure and operational costs. In many production database servers, a big portion of CPU cycles goes unused. By consolidating many databases into fewer database servers, both the hardware and operational staff can be more effectively utilized.
But database consolidation is easier said than done. Critical issues such as database workload characteristics, the ability to maintain performance service levels, and point-in-time recovery needs of different databases must be considered during consolidation efforts. Ideally, consolidation would not only reduce the amount of physical CPU, RAM, and I/O that must be purchased and allocated (because physical servers are underutilized), but would also reduce actual resource consumption (because multiple instances share some overhead). However, in the past, we've seen that co-locating databases physically in the same server does not reduce overall resource usage.
Oracle's new pluggable database feature reduces the risk of consolidation because the DBA can easily plug or unplug an existing database to or from a container database. There is no need to change any code in the application. When the user connects to a pluggable database, the database environment looks exactly as if the user had connected to a traditional database.
Further, pluggable databases do lower resource consumption. Memory and processes are owned by the container database and shared by all pluggable databases, improving resource usage overall. It is also easy to unplug a database and convert the pluggable database to a traditional database if required. In addition, you can back up and recover pluggable databases independently of the container database; you can also perform a point-in-time recovery of a pluggable database. Further, Resource Manager can be used to control resources consumed by a pluggable database.
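Plugging and unplugging are short DDL operations. A minimal sketch, in which the pluggable database name and file paths are hypothetical:

```sql
-- Unplug pdb1 from its container, writing its metadata to an XML manifest
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/pdb1.xml';

-- Plug it into another container database using that manifest;
-- NOCOPY reuses the datafiles in place rather than copying them
CREATE PLUGGABLE DATABASE pdb1 USING '/u01/app/oracle/pdb1.xml' NOCOPY;
ALTER PLUGGABLE DATABASE pdb1 OPEN;
```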
In short, database consolidation can be done much more effectively with pluggable databases than ever before. Finally, we have a true cloud database.
Optimizer features
Version 12c introduces a few useful SQL Optimizer features, and most of these are automatically enabled.
Although Optimizer has matured over the years, it is still not uncommon for Optimizer to choose an inefficient execution plan due to incorrect cardinality estimates, invalid statistics, or even stale statistics. This can have dire results. A SQL statement estimated to run for a few seconds might take hours to execute if the chosen execution plan is not optimal.
Cardinality Feedback -- a feature introduced in Version 11g -- monitors the execution of SQL statements and reoptimizes if the actual cardinality, that is, the number of rows returned from the query, varies greatly from the cardinality estimates. A new feature in 12c called Adaptive Plan takes the next step in SQL auto-tuning. Instead of choosing the final execution plan at parse time, Optimizer defers the final choice among multiple sub-plans until execution time.
Essentially, Optimizer introduces a piece of code, aptly named Statistics Collector, into SQL execution. Statistics Collector buffers rows from early steps in the execution plan. Depending upon the number of rows retrieved, the Optimizer chooses the final execution plan. The chosen plan will be reused for subsequent executions of the statement if the cursor is shared. A just-in-time Optimizer!
Adaptive Reoptimization, similar to the Cardinality Feedback feature, affects only the subsequent executions of a SQL statement. If the Optimizer estimates are vastly different from the execution statistics, then the Optimizer uses the execution statistics as a feedback mechanism and reparses the SQL statement during the next execution.
Generally, the quality of the statistics directly equates to the quality of execution plans generated by the Optimizer, bugs notwithstanding. In Version 12c, if the quality of available statistics is not good enough, then the Optimizer can dynamically sample the tables to recollect statistics. This dynamic statistics collection uses the same methods as dynamic sampling available in earlier releases, except that, in Database 12c, these statistics are also stored for future use.
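Dynamic statistics are governed by the familiar optimizer_dynamic_sampling parameter; a new automatic level lets the Optimizer decide for itself when sampling is worthwhile. A sketch, assuming a session with ALTER SESSION privileges:

```sql
-- Level 11 (new in 12c) tells the Optimizer to decide automatically
-- when dynamic statistics are needed and to persist them for reuse
ALTER SESSION SET optimizer_dynamic_sampling = 11;
```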
Performance features
Version 12c introduces numerous performance enhancements. I will review just a few of the more important ones.
Traditionally, queries with union or union all branches execute one after another, meaning that one branch of the union or union all is executed, followed by the next branch, and so on. Version 12c introduces concurrent execution of union branches, meaning that one set of parallel servers will be executing one branch, a second set of parallel servers will be executing a different union all branch, and so on, all at the same time.
This concurrent execution feature will be very useful if the majority of the query execution time is spent outside of the database, such as when waiting for a SQL*Net message from a remote database link or for an SOA call response. The effective use of this new feature could reduce wait time dramatically, improving SQL elapsed time. (Incidentally, with Version 12c, SQL*Net packets can be compressed for database traffic, helping to reduce latency in a WAN environment.)
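SQL*Net compression is switched on through sqlnet.ora parameters on the client and server. A sketch; the threshold value shown is illustrative:

```
# Enable compression for SQL*Net traffic
SQLNET.COMPRESSION = on
SQLNET.COMPRESSION_LEVELS = (low, high)
# Minimum data size, in bytes, before compression kicks in
SQLNET.COMPRESSION_THRESHOLD = 1024
```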
A classic problem with parallelism is that all of the parallel servers required for an operation may not be available at the moment of parallel statement initiation, leading to parallel statements executing across a smaller number of parallel servers. Parallel statement queuing -- a feature introduced in Version 11.2 -- resolved the problem by queuing up the sessions whenever sufficient parallel servers were not available. With Database 12c the user can construct multiple parallel statement queues using the database resource manager, bypass parallel statement queuing for critical statements, and group multiple parallel statements together to reduce wait time in parallel statement queues.
Also new in Version 12c, multiple indexes can be created on the same set of columns. For example, the user can create bitmap and b-tree indexes on the same set of columns, or even create a unique and a non-unique index on the same set of columns. Multiple indexes will be useful whenever you want to convert an index from one type to another, or convert a partitioned index to a non-partitioned index or vice versa, with minimal downtime. Of course, the user can choose to maintain multiple indexes on the same set of columns permanently, too.
High availability
In Oracle Database, a service is a connection property used to direct application connections to a desired instance. We commonly use services to balance the workload among the instances of a Real Application Cluster, for example. Version 12c introduces Global Data Services, which balances the workload not only among instances, but also among databases.
Imagine a global environment where different databases are used to serve different segments of users. For example, a database in New York serves users on the East Coast, while a database in San Francisco serves users on the West Coast, and both of these databases are synchronized by replication software. In Database 12c, services are truly global, and a global equivalent of the familiar SCAN (Single Client Access Name) listener, called the Global Data Listener, is utilized to redirect the application connection to a database that can better serve the specific client. This feature also improves availability because new connections to failed databases can be redirected quickly to a surviving database.
After a service failover to another instance, applications usually do not know the status of in-flight transactions. While the changes made by a committed transaction are permanent, as dictated by the ACID properties of Oracle Database, commit status messages to the application are transient. The result is that an instance failure creates a classic dilemma: If the application reissues an already committed transaction it can lead to logical corruption, but if the application does not reissue a failed transaction then changes can be permanently lost.
Database 12c resolves this dilemma with Transaction Guard, a new feature that maintains transaction status permanently. Transaction Guard assigns a unique global transaction ID to each transaction, and maintains the status of that global transaction for a pre-defined period of time. After a failover, the application can requery the status of a transaction and take corrective action deterministically.
Oracle did not stop at merely providing a mechanism to identify transaction status. Version 12c also introduces Application Continuity. With this feature, a new client-side replay driver remembers submitted SQL statements, and after a failure is detected, the statements are replayed to insert the failed transactions into the database. Note that code changes may be required to safely integrate the replay driver with the application, though.
Partitioning enhancements
No database release is complete without partitioning enhancements, and Database 12c is no exception.
Traditionally, indexes are created in all partitions of a partitioned table. Version 12c introduces Partial Indexing, which allows you to create indexes on a partial set of partitions. For example, if you have partitions p1, p2, p3 ... p10 in a table, you could choose to create an index on partitions p1, p2, and p3 only, and not to create indexes on the other partitions. This feature will be useful in time-variant partitioning schemes, in which older partitions are typically mostly queried while newer partitions are updated heavily.
We can reduce transaction workload by deferring indexes on transaction-intensive partitions, adding indexes only when the partitions become less transaction-intensive. Also, the user can build new indexes in multiple steps to reduce the locking and resource consumption issues associated with a traditional index rebuild. As you probably guessed, partition pruning is a key requirement for the Optimizer to choose partial indexes, and partial indexes are not supported for unique indexes.
Global indexes cause problems for partition-level operations. For example, if a partition is dropped, then a global index on that table must be updated. While the drop/truncate partition commands are fast (because they are DDL operations), updates to global index entries are far slower, leading to availability issues during DDL operations. For this reason, Version 12c decouples global index maintenance from DDL command execution. Thus, only global index metadata is updated during partition DDL operation, and actual index maintenance can be deferred for a later time. A new scheduled job lets you clear stale global index entries at, say, 2 a.m., improving the viability of global indexes on critical tables.
Also in Database 12c, reference partitioning now truncates dependent partitions. With a reference partitioning scheme, tables in both ends of a referential constraint are partitioned along the same partitioning boundaries. From 12c onwards, truncating or dropping a parent table partition will cascade to child table partitions.
Backup and recovery
Traditionally, restoring a single table has been a cumbersome process: restore the tablespace, export the table from the restored tablespace, and import it into the production database. The new RECOVER TABLE command in Recovery Manager (RMAN) simplifies this task.
Re-instantiation of a primary database to a standby site or Data Guard site is a huge problem for VLDB (Very Large Database) sites, especially if the databases are separated by thousands of miles. Prior to Database 12c, RMAN did not support native compression during active duplication, so DBAs generally resorted to other methods of restoring from backup, such as copying files over the network through a compressed pipe, or even shipping a tape. In Database 12c, RMAN supports compressed datafile copies over the network, and the Active Duplicate command supports network compression during the data transfer, enabling faster clones directly from the production database. These features will ease database cloning efforts tremendously.
Executing SQL statements from the RMAN command line is not only unwieldy, but the syntax is not exactly user-friendly. Version 12c enhances the RMAN command line so that you can execute SQL statements natively in RMAN without needing the additional SQL clause.
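In 12c, SQL statements can be typed straight at the RMAN prompt, without wrapping them in the old sql "..." clause. A sketch with hypothetical file names:

```
RMAN> SELECT tablespace_name FROM dba_tablespaces;

RMAN> ALTER TABLESPACE users ADD DATAFILE '/u01/users02.dbf' SIZE 100M;
```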