I gathered the following information during various sessions at OOW16. Some of it is just an impression I got while a certain feature was mentioned in a talk. So use the information with caution.
new db version 12.2
So the 12.2 version is out there now, but most of us won’t be able to use it very soon. Why? Because Oracle chose a cloud-first approach. That means 12.2 is available in the Oracle Cloud but not for other installations yet. And my guess would be that Enterprise Edition will get it first and Standard Edition (SE2) might get it even a bit later.
Still there are many exciting new features in it and it makes sense to get ourselves familiar with them. So here are my favourites with a slight focus on developer relevance.
There is documentation about 12.2 new features out already, but it is extremely hard to find. It is hidden in the docs for the new Exadata Express Cloud service. So here is the quick link to see the new features guide.
To summarise, many of the new features focus on improving the availability of the database, essentially giving developers and DBAs more options to keep applications running even when encountering errors or while changes take place.
The second set of enhancements seems to further extend the capabilities of features that were added in 12.1.
And of course a lot of performance-improving possibilities have been added (this seems to be typical for an R2 version).
longer identifiers (128 chars)
Identifiers can now be up to 128 characters long. This applies to almost all objects, including table, column, index and constraint names. Long awaited and finally there.
I suggest not to overdo it at the beginning; I suspect that many external tools that work with the Oracle database might not fully support such long identifiers yet. And of course some common sense when naming your tables and columns should be applied as well. For example, please do not repeat the table name again in the column name. However, it will help greatly when you apply naming conventions to foreign key constraints or indexes.
There seems to be a new plsql constant Ora_Max_Name_Len that holds the maximum possible identifier length in your database. It can be used at compile time to set the sizes of variables. There is an example in the plsql section later.
improved LISTAGG
LISTAGG now has an option to not error out when the list gets too long. Instead the list is cut off and a default indicator (ellipsis “…”) is appended. Additionally we can append the number of omitted values.
To do so add the new overflow clause, for example:
ON OVERFLOW TRUNCATE WITH COUNT
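A minimal sketch of how the full call might look; the emp table and ename column are assumed demo names:

```sql
SELECT LISTAGG(ename, ', ' ON OVERFLOW TRUNCATE '...' WITH COUNT)
         WITHIN GROUP (ORDER BY ename) AS name_list
FROM   emp;
```

Instead of raising ORA-01489 the result is cut off, the '...' indicator is appended and the count of omitted values is shown in parentheses.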
Unfortunately LISTAGG is still not able to build distinct lists. If you want that, vote for this feature in the OTN database ideas space: https://community.oracle.com/ideas/12533
conversions with error handling
All conversion functions now have a new clause that decides how an error is handled when the conversion fails because of a formatting problem. CAST, TO_NUMBER, TO_DATE and all the other TO_xxx functions got this improvement. The only exception is TO_CHAR, because everything can be converted into char.
CAST(prod_id AS NUMBER DEFAULT 0 ON CONVERSION ERROR)
TO_DATE(last_processed_day DEFAULT date'2016-01-01' ON CONVERSION ERROR, 'DD-MON-RRRR','nls_date_language=AMERICAN')
So in the case where the conversion would result in an error, we can provide an alternative default value instead. This is highly useful!
Additionally there is a new function VALIDATE_CONVERSION which can be used to find values in a column where conversion would not be possible. It returns 0 for invalid values and 1 for correct ones.
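A sketch of how this could be used to find the broken rows; the table and column names are made up for illustration:

```sql
SELECT last_processed_day
FROM   processing_log
WHERE  VALIDATE_CONVERSION(last_processed_day AS DATE,
                           'DD-MON-RRRR', 'nls_date_language=AMERICAN') = 0;
```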
new approximation functions
In 12cR1 we already got APPROX_COUNT_DISTINCT.
New are APPROX_PERCENTILE and APPROX_MEDIAN.
These approximation functions can now be used in materialized views!
And there is a parameter that is able to switch the exact functions to the approximate versions.
alter session set approx_for_aggregation = 'TRUE';
Also new are APPROX_COUNT_DISTINCT_DETAIL, APPROX_COUNT_DISTINCT_AGG and TO_APPROX_COUNT_DISTINCT, allowing us to build hierarchies with those aggregation functions. Something we were not able to do in the past without rerunning the function on each aggregation level.
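A quick sketch of the simple approximation functions; orders and amount are assumed demo names:

```sql
SELECT APPROX_MEDIAN(amount)                                  AS approx_median,
       APPROX_PERCENTILE(0.95) WITHIN GROUP (ORDER BY amount) AS approx_p95
FROM   orders;
```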
case insensitive linguistic queries
It is now possible to do searches using a case-insensitive option, e.g. BINARY_CI, for various functions like LIKE. Those functions are able to use stemming. There is not much detail about it yet, but it sounds like it can be used instead of putting UPPER around the columns. At the moment I have no further information about possible performance impacts.
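If this is the column-level collation feature that made it into the final release, a sketch might look like this (the table and the exact clause placement are assumptions on my side):

```sql
CREATE TABLE people (
  last_name VARCHAR2(128 CHAR) COLLATE BINARY_CI  -- case-insensitive comparisons
);

-- matches 'Smith', 'SMITH' and 'smith' without wrapping the column in UPPER()
SELECT * FROM people WHERE last_name = 'smith';
```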
greatly enhanced JSON support
better simplified JSON
There is a complex JSON syntax using all the JSON functions, and a simplified JSON syntax using dot notation. In general a lot of work was done to enhance the simplified JSON syntax. For example, you can now access elements of an array.
JSON_EXISTS with predicates
I’ve seen short examples where “and” expressions/filters using an && operator were done inside a SQL statement using JSON_EXISTS. Essentially JSON_EXISTS allows us to add filters to the path expression using a solid set of operators.
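A sketch based on the purchase order example table; the path and element names are assumptions:

```sql
SELECT count(*)
FROM   j_purchaseorder
WHERE  JSON_EXISTS(po_document,
         '$.LineItems?(@.Quantity > 5 && @.Part.UnitPrice > 20)');
```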
build JSON with SQL/JSON
Using a similar syntax to the older SQL/XML functions (XMLELEMENT, XMLAGG), we now have some new SQL functions to help us create a JSON document. The output of those functions is VARCHAR2.
JSON_OBJECT, JSON_OBJECTAGG, JSON_ARRAY, JSON_ARRAYAGG
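A minimal sketch combining two of them; emp is an assumed demo table:

```sql
SELECT JSON_ARRAYAGG(
         JSON_OBJECT('id' VALUE empno, 'name' VALUE ename)
       ) AS emp_json
FROM   emp;
```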
Views on JSON documents / Data Guide
The Data Guide allows to analyse JSON documents and automatically build views and virtual columns for elements in our JSON object.
The data dictionary contains a list of all columns with an enabled data guide.
We can access the dataguide in SQL with the functions JSON_DATAGUIDE or JSON_HIERDATAGUIDE or in PLSQL with the function DBMS_JSON.GET_INDEX_DATAGUIDE.
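A sketch of the SQL variant against the j_purchaseorder example table:

```sql
-- aggregate data guide describing all documents in the column
SELECT JSON_DATAGUIDE(po_document) AS dg
FROM   j_purchaseorder;
```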
The plsql functions need a JSON search index to work. If it exists then those functions should be preferred, since they work on persisted index data. If the JSON documents in the column all have a completely different structure, then it might be better to use the SQL functions and not use a JSON search index.
See also: Multiple Data Guides per Document Set
Based upon the data guide more operations are possible. For example we can easily create views and/or virtual columns that extract information from the JSON document as relational data.
To create a virtual column that shows data from our json column in a relational way we can use DBMS_JSON.addVC.
To create a view that exposes JSON document data in a table-like structure, we can use DBMS_JSON.createViewonDemand. This is based upon a JSON_TABLE function.
JSON search index
CREATE SEARCH INDEX po_dg_only_idx ON j_purchaseorder (po_document) FOR JSON;
If the JSON includes coordinates that are conform with the GeoJSON standard, then it is possible to do geolocation searches on that JSON document.
GeoJSON also is supported during Spatial Queries and can be converted directly into Spatial Geometry.
more JSON enhancements
- the JSON sql functions are now available in plsql as well => especially IS JSON will be useful
- highly improved JSON search index
- new predefined plsql object types JSON_OBJECT_T, JSON_ARRAY_T, JSON_ELEMENT_T, JSON_KEYLIST_T, JSON_SCALAR_T
ON STATEMENT refresh for MV
We had ON DEMAND and ON COMMIT before. Now materialised views can be refreshed after DML changes to the base tables, without the need to wait for the commit.
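A hedged sketch of the new refresh mode; the object names are made up, and ON STATEMENT comes with restrictions on the query shape (fast refreshable join views):

```sql
CREATE MATERIALIZED VIEW order_details_mv
  REFRESH FAST ON STATEMENT
AS
  SELECT o.order_id, o.customer_id, i.product_id, i.amount
  FROM   orders o
  JOIN   order_items i ON i.order_id = o.order_id;
```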
refresh statistics view
This was long overdue! We are now able to see statistics about when and how a materialised view was refreshed. The name of the data dictionary view is not completely clear yet, but I suspect DBMS_MVREF_STATS.
There is also a new package DBMS_MVIEW_STATS that can be used to organise the collection and the cleanup of those statistics.
improvements for partitioned objects
- MOVE TABLE, SPLIT PARTITION and some other partitioning operations can now be done online. That means they will not disrupt ongoing DML operations. And this includes automatic index maintenance.
- CREATE TABLE FOR EXCHANGE creates a non-partitioned table whose shape exactly matches a partitioned table, ready for a later partition exchange.
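The two operations sketched with assumed object names:

```sql
-- online move: DML keeps running, indexes are maintained automatically
ALTER TABLE sales MOVE ONLINE TABLESPACE users;

-- staging table guaranteed to match sales for a partition exchange
CREATE TABLE sales_stage FOR EXCHANGE WITH TABLE sales;
```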
In Memory join groups
It is now possible to define columns as join groups. A JOIN GROUP is a new database object and it greatly increases the performance of joins and analytical queries over those columns.
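A minimal sketch; the tables are assumed demo names, and both columns need to be populated in the In-Memory column store:

```sql
CREATE INMEMORY JOIN GROUP sales_products_jg
  (sales(prod_id), products(prod_id));
```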
In Memory expressions
An INMEMORY expression is a special kind of virtual column that has an additional INMEMORY attribute attached to it. By that attribute special optimisations kick in that speed up access to this expression. This method is also used by some of the JSON optimisations.
A data dictionary view was added to help find expressions that could profit from such virtual columns.
analytical views
Analytical views are an easy way to define dimensional hierarchies and how they are rolled up. An Analytical View is a new object type.
This seems to be a completely new feature. Slightly based on the older dimensional cube possibilities. I’m not sure if this will be available for all editions later or if it will be an additional cost feature.
This feature was surprising to me, but I find it hugely interesting.
pragma deprecate
You can now declare plsql methods as deprecated by using this pragma. The “only” thing that happens is that, if such a function is used, you get a compiler warning (PLW-something).
function myFunc return varchar2
is
  pragma deprecate (myFunc, 'myFunc is deprecated. Use yourFunc instead!');
begin
  return 'x';
end myFunc;
So far I didn’t miss that feature, but I immediately have some projects in mind where I would use it.
static expressions instead of literals
All areas where a literal is to be used, can now be replaced by a so-called “static expression”. e.g.
declare
  myTab    varchar2(Ora_Max_Name_Len + 2);
  myObject varchar2(2 * (Ora_Max_Name_Len + 2));
begin
  myTab    := '"Table"';
  myObject := '"ThisSchema".' || myTab;
  ...
This works only as long as the expression can be resolved at compilation time.
As such it will not improve or change writing dynamic queries, however there is some impact for deployment scripts. So whenever you needed to use manually written compiler directives, this might be a new alternative.
- ACCESSIBLE BY for sub modules
- bind plsql only datatypes to dbms_sql => this essentially just finishes what 12.1 already allowed for anonymous blocks and native SQL (execute immediate).
- some enhancements for PL/scope: SQL_ID for static sql statements, reports where native SQL is used
- slight enhancements for dbms_hprof: can now include SQL_IDs, improved reporting for sub cursors
new code coverage tool
It splits code into blocks and tells the developer if and how often those blocks are used during some sample run. Code that is not reached is coloured differently (red).
Blocks can be marked with a pragma, so that such a block is not flagged when it is not covered. The tool is currently only a new DBMS package, but it will be integrated into the next SQL Developer version.
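If the pragma works the way it was presented, excluding unreachable defensive code might look roughly like this (the pragma name and argument are an assumption on my side):

```sql
if 1 = 2 then
  -- defensive branch that can never be reached in a test run;
  -- mark it so the coverage report does not flag it as untested
  pragma coverage ('NOT_FEASIBLE');
  raise_application_error(-20001, 'impossible');
end if;
```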
It is Oracle’s new landing page for developers. Strongly influenced by Steven Feuerstein.
A SQL statement can now be executed when the debugger encounters a breakpoint. This is not implemented directly in SQL Developer yet, but it will be there in a future version. In general the new SQL Developer already supports the enhanced debugging capabilities of the database, but not everything is possible yet.
The debugger is also not available for the cloud yet. The protocol used is not suited for cloud operations.
EBR (edition based redefinition)
Editioned objects that are no longer in use can now be cleaned up automatically in the background.
All we have to do is drop the edition, even in cases where this was not possible before.
CBO improved adaptive handling
The adaptive features of the CBO that are already in the database can now be controlled better. And the defaults were set in a way that upgrades from 11g result in very similar behaviour.
OPTIMIZER_ADAPTIVE_FEATURES = TRUE or FALSE => deprecated
replaced by two new parameters
OPTIMIZER_ADAPTIVE_PLANS => default = TRUE
OPTIMIZER_ADAPTIVE_STATISTICS => default = FALSE
So if you switch from 12.1 to 12.2 and if you had OPTIMIZER_ADAPTIVE_FEATURES=TRUE then you might want to set the second parameter to TRUE again. If you switch from 11g to 12.2 you probably want the defaults.
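For an upgrade from 12.1 where the adaptive features were fully enabled, that would mean something like:

```sql
-- 12.2 default is TRUE, shown only for completeness
alter system set optimizer_adaptive_plans = true;
-- restore the 12.1 OPTIMIZER_ADAPTIVE_FEATURES=TRUE behaviour
alter system set optimizer_adaptive_statistics = true;
```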
The dbms_stats package got some improvements that go together with it.
At the moment not much is known about that yet.
Mview query rewrite
The CBO can now rewrite a query to use a materialised view even if the view is stale at that moment. The materialized view logs are taken into account, so the query still returns correct results.
more CBO based things
- new histogram types
- GTT use session private statistics by default
- automated SQL plan management
Even after a DB upgrade the capture of the old plans can be done, by setting the parameter OPTIMIZER_FEATURES_ENABLED to the pre-upgrade version (e.g. '11.2.0.4')
Plans can then evolve later
- copy optimiser metadata from pre production to prod.
EXPDP/IMPDP or dbms_stats.transfer_stats
Instead of just having a CDB and several PDBs in it, we can now define an Application Container. This serves as a kind of general repository for all PDBs that are part of that application container.
This is highly interesting.
The typical use case seems to be that you roll out your application to various customers (in the cloud) by using a PDB for each customer. However, all the common objects would be added to the application container, so that changes/enhancements can be rolled out to all customers at once or one after the other. The application container keeps track of your upgrade scripts and is able to “replay” those scripts in the specific PDBs. Only that it is not a replay of the script; instead the objects are linked from the PDB to the Application Container. I think this is the same mechanism by which the CDB objects/metadata are currently made available to the PDBs.
Objects can be shared from the Application Container to the PDB by using the SHARING clause in the create statement.
SHARE METADATA would only share the object definition, while SHARE OBJECT would also share the data.
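As far as I understood the sharing clause, it might look roughly like this; the exact keywords are an assumption on my side (in the docs the values seem to be METADATA, DATA, EXTENDED DATA and NONE), and the tables are made up:

```sql
-- only the definition is shared; each PDB keeps its own rows
CREATE TABLE app_settings (
  name  VARCHAR2(128),
  value VARCHAR2(4000)
) SHARING = METADATA;

-- definition and data are shared with the PDBs
CREATE TABLE app_countries (
  iso_code VARCHAR2(2),
  name     VARCHAR2(128)
) SHARING = DATA;
```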
It is possible to combine application containers with edition based redefinition. Essentially it seems as if the editions are copied from the Application Container to the PDB as all other Objects are copied/linked. The Application container just keeps track of what needs to be installed in a specific PDB.
sharding
A new, very special way to have a locally distributed database. It seems to cover the same concept as shards in some NoSQL DBs. If the application is distributed over a set of databases, you can now declare them as sharded. And if you define a table as sharded, then its data will be split across the different databases. You use a shard key (very similar to a partition key) and that key defines where the data goes.
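A rough sketch of what defining such a table might look like; the exact clauses (tablespace sets etc.) and all names are assumptions:

```sql
CREATE SHARDED TABLE customers (
  cust_id NUMBER NOT NULL,
  name    VARCHAR2(128),
  CONSTRAINT customers_pk PRIMARY KEY (cust_id)
)
PARTITION BY CONSISTENT HASH (cust_id)  -- the shard key decides the target DB
PARTITIONS AUTO
TABLESPACE SET ts_shard;
```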
Exadata Express Cloud service
Essentially you get a PDB with a few GB of space on an Exadata Machine running Oracle EE for only 175$ per month. Pricelist here.
SQL Developer version 4.1.5 supports the Exadata Express Cloud service. You can drag and drop objects (tables, etc.) directly onto the connection and get them moved there.
The Exadata Express Service includes APEX, ORDS and SODA. So it can also serve as a kind of JSON document storage using a REST interface.
Jetty is now supported for production mode. That means you can now run ORDS in standalone mode also for a production environment.
Since ORDS is not part of the database this does not depend on using database version 12.2.