
10 “secret” Oracle DB features you might not know

“Secret” here means either not documented at all or hidden so deep in the documentation that it is almost impossible to find.

1 – restart sequence

We can use a single DDL statement to reset a sequence. This works from 12.2 onward.

alter sequence mySeq restart start with 115;

It is also possible to use this to reset an identity column!

alter table myTable
modify ID generated always as identity 
       restart start with 115;

This is not (yet) documented.

2 – hidden format options

We can display a negative number in round brackets using the “pt” format option, which is not in the list of format options in the SQL Reference.

select to_char(-10,'fm9990pt') from dual;
(10)

Documented is the “pr” option, which uses angle brackets:

select to_char(-10,'fm9990pr') from dual;
<10>

See also this ODC thread.

So this is documented after all, just not in the SQL Reference but in some OBI Publisher documentation.

3 – sample (x,y)

The SAMPLE clause is documented. It is part of the query table expression clause. What is not documented is the second, secret parameter y.

Sample clause syntax diagram: https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/img_text/sample_clause.html

The sample clause allows us to get a portion (a sample) of the table's data.

select * 
from employees SAMPLE(20); 

This returns 20 percent of the employees.

There is a ROW mode and a BLOCK mode. In BLOCK mode 20% of the blocks are returned, instead of 20% of the rows.

The sample clause has a second undocumented parameter.

select *
from employees SAMPLE(20, 0.5);

This also returns 20%. So far no obvious difference…

Jonathan Lewis mentioned that this second parameter is the number of consecutive groups/rows during row or block sampling. It was introduced during changes for the SAMPLE clause in 10g.

My non-representative tests showed that setting this parameter to 1 during block sampling makes sure we get results. Otherwise we frequently get blocks from the table's tablespace that are not filled with data. This seems to depend on the data distribution.

So… if a query like the following returns 0…

-- how many rows are in 1% of the blocks?
select count(*)
from employees SAMPLE BLOCK (1);
0

… then you can set the second parameter to 1 to be sure to get some data.

-- how many rows are in 1% of the blocks?
select count(*)
from employees SAMPLE BLOCK (1, 1);
110

Use at your own risk!

See also this ODC thread.

4 – Exadata test parameter

Warning! This is only for testing purposes! It can potentially destroy your database.

alter system set "_exadata_feature_on"=true scope=spfile;
shutdown immediate;
startup;

Using this parameter we can test some of the newest options that are otherwise available only on Exadata/Cloud, like automatic indexing.
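For example, we could then try to switch on automatic indexing (19c). A minimal sketch – assuming the parameter took effect and your version provides the DBMS_AUTO_INDEX package:

-- enable automatic indexing (normally restricted to Exadata/Cloud)
begin
  dbms_auto_index.configure('AUTO_INDEX_MODE','IMPLEMENT');
end;
/

-- check what the feature did
select dbms_auto_index.report_activity() from dual;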

5 – sequence cache is not lost during graceful database shutdown

A shutdown normal|immediate|transactional will not lose the sequence cache. Instead, the next value will be used and set as last_number. Only a shutdown abort will lose the cache.

This was documented somewhere, but I am not able to find it anymore. It might be that only older versions of the Oracle docs still have this information.

To confirm it, I tested it. It works in SE and EE. It will probably not work in the same way on a RAC – because there is a separate cache for each instance.
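Here is a minimal test sketch for a sandbox instance (the sequence name demo_seq is my own choice):

create sequence demo_seq cache 20;

select demo_seq.nextval from dual;
1

select last_number from user_sequences
where sequence_name = 'DEMO_SEQ';
21

-- as DBA: shutdown immediate; startup;

select demo_seq.nextval from dual;
2

If the behaviour described above holds, the next value after a graceful shutdown continues with 2. After a shutdown abort it would jump to 21 (the end of the lost cache).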

There is an indirect mention of the behaviour in the 11g admin guide.

The database might skip sequence numbers if you choose to cache a set of sequence numbers. For example, when an instance abnormally shuts down (for example, when an instance failure occurs or a SHUTDOWN ABORT statement is issued), sequence numbers that have been cached but not used are lost. Also, sequence numbers that have been used but not saved are lost as well. The database might also skip cached sequence numbers after an export and import. See Oracle Database Utilities for details.

So a shutdown abort will lose the sequence cache. All other “organized” shutdowns will keep the cache intact.

The problem is that many DBAs seem way too keen on doing a shutdown abort. So losing the sequence cache looks like the default behaviour, although it is not.

6 – reorder columns

We can change the logical column order by setting columns invisible and making them visible again. This can be used to reorder columns in the column list. It is important to understand that the table is not changed physically; only the data dictionary entry is manipulated.

If we want to reorder columns physically, maybe to improve data compression, then this is not the way to do it.

example

create table t (col1 number, col2 number);
Table T created.

desc t;
Name Null? Type   
---- ----- ------ 
COL1       NUMBER 
COL2       NUMBER 

/* make the first column invisible */
alter table t modify col1 invisible;
Table T altered.

desc t;
Name Null? Type   
---- ----- ------ 
COL2       NUMBER 

/* make the first column visible again */
alter table t modify col1 visible;
Table T altered.

desc t;
Name Null? Type   
---- ----- ------ 
COL2       NUMBER 
COL1       NUMBER 

/* the column order is now reversed */

To physically reorder a table we need to export/import it, use dbms_redefinition or move the table.
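As a sketch, such a physical reorder of our demo table t could be done with dbms_redefinition (t has no primary key, so rowid-based redefinition is used here; the interim table name is my own choice):

-- interim table with the desired physical column order
create table t_interim (col2 number, col1 number);

begin
  dbms_redefinition.start_redef_table(
    uname        => user,
    orig_table   => 'T',
    int_table    => 'T_INTERIM',
    col_mapping  => 'col1 col1, col2 col2',
    options_flag => dbms_redefinition.cons_use_rowid);
  dbms_redefinition.finish_redef_table(
    uname        => user,
    orig_table   => 'T',
    int_table    => 'T_INTERIM');
end;
/

drop table t_interim;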

Side note: The describe command in SQL*Plus (and SQL Developer) will not show invisible columns. You can change that by setting the documented SQL*Plus parameter COLINVISIBLE.

SET COLINVISIBLE ON

desc t;
Name             Null? Type   
---------------- ----- ------ 
COL2                   NUMBER 
COL1 (INVISIBLE)       NUMBER 

7 – object type “NON-EXISTENT”

Namespaces are used to avoid naming clashes between objects of different types. Among others, namespace 1 includes the object types table, view and function. So we cannot have a table with the same name as a view or a function.

There is a special object type in namespace 1 that is called “NON-EXISTENT”. This object type is used to mark dropped objects under edition based redefinition (EBR).

demo

Assuming 3 editions: ORA$BASE => CHILD1 => CHILD2

-- switch to base edition (ORA$BASE)
ALTER SESSION SET EDITION="ORA$BASE";

Adding editionable code in the parent edition ORA$BASE.

-- create a new function in base (EDITIONABLE is the default)
create function test_editions return varchar2
as 
begin
  return 'Base';
end;
/

Test the function in the child edition. It is usable, although we didn't create it there.

-- switch to child edition (CHILD1)
ALTER SESSION SET EDITION="CHILD1";

-- test function
select test_editions from dual;
TEST_EDITIONS
Base
function is functioning…

Now remove the function from the child edition, but not from the parent.

-- drop code
drop function test_editions;

-- test function again
select test_editions from dual;

ORA-00904: “TEST_EDITIONS”: invalid identifier

This error message was expected. The function was removed. No real object with the name “TEST_EDITIONS” exists (in namespace 1) in the edition CHILD1.
Just to be on the safe side, double check in parent edition:

-- switch back to base and test function
ALTER SESSION SET EDITION="ORA$BASE";
select test_editions from dual;
TEST_EDITIONS
Base
function still functioning in BASE

This is also the result that was expected. The function still works in the parent edition.

What does the data dictionary say? A special view user_objects_ae shows the objects from all editions (_ae):

-- Check dictionary
select object_name, edition_name, object_type, namespace, editionable, status
from user_objects_ae
where object_name = 'TEST_EDITIONS';
OBJECT_NAME    EDITION_NAME  OBJECT_TYPE   NAMESPACE  EDITIONABLE  STATUS
TEST_EDITIONS  ORA$BASE      FUNCTION      1          Y            VALID
TEST_EDITIONS  CHILD1        NON-EXISTENT  1                       VALID
TEST_EDITIONS  CHILD2        FUNCTION      1          Y            INVALID

We see that in edition CHILD1 the object type of the function is now NON-EXISTENT. This is important – otherwise the database would not know to return the ORA-00904 error when it searches for this object.

If the row did not exist at all, then the object in edition CHILD1 would be inherited from the parent edition ORA$BASE. This was the situation before the function was actualized in CHILD1.

Even after we drop the function from all editions, the dictionary will still show the NON-EXISTENT entries.
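A simple way to find such leftover markers is the user_objects_ae view that we already used above:

-- find dropped-object markers in all editions
select object_name, edition_name
from user_objects_ae
where object_type = 'NON-EXISTENT';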

A side note/warning:

The status of the function in the second child edition CHILD2 is invalid after running the sample code. This dictionary entry was created when the function was dropped in CHILD1.

This is because the function was never compiled in CHILD2, so the code was never actualized there. And when we drop the code from all parent editions, any attempt to compile the code in CHILD2 will not give an error, but the status will remain invalid.

This is why it is strongly recommended to recompile all editionable objects after code changes, but especially after deletes/drops. Each recompile breaks the relationship to the parent and actualizes the code. So code changes in the parent will no longer propagate to the child after the object has been recompiled at least once in the child. A minimal way to do such a recompile is shown below.
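A sketch of such a recompile for the whole schema in the current edition (assuming the privileges to do so):

-- recompiles (and thereby actualizes) the objects of the
-- current schema in the current edition
begin
  dbms_utility.compile_schema(schema => user, compile_all => true);
end;
/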

8 – connect / as sysdba to PDB

Using a bequeath connection we can log in to an Oracle database without a username and a password. This only works directly on the server, using an OS account with elevated privileges (a member of the OSDBA group, typically dba). To do so, the ORACLE_SID environment variable needs to be set. However, this connects only to the CDB.

Using the undocumented environment variable ORACLE_PDB_SID, it is possible to do the same against a PDB.

Mike Dietrich has posted a nice blog post about it: https://mikedietrichde.com/2020/05/07/can-you-connect-as-sysdba-without-password-into-a-pdb-directly/

export ORACLE_SID=MYCDB
export ORACLE_PDB_SID=MYPDB

$ sqlplus / as sysdba
SQL> show con_name

CON_NAME
------------------------------
MYPDB

9 – print boolean values to serveroutput

There is an undocumented utility package sys.dbms_metadata_util that has a couple of interesting helper methods.

For example put_bool can be used to print boolean variables to serveroutput (using dbms_output).

-- print boolean values to dbms_output
set serveroutput on
begin
  sys.dbms_metadata_util.put_bool('This is ', TRUE);
  sys.dbms_metadata_util.put_bool('This is ', FALSE);
  sys.dbms_metadata_util.put_bool('This is ', NULL);
end;
/
META:13:17:57.489: This is TRUE
META:13:17:57.489: This is FALSE
META:13:17:57.489: This is FALSE

PL/SQL procedure successfully completed.

There are also several other interesting functions in this package, for example a way to convert dictionary column values of type LONG into a CLOB variable.

v_view_source := sys.dbms_metadata_util.long2clob(10000,'USER_SOURCE','TEXT',25) ;
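Wrapped into a complete block it looks like this. The package is undocumented, so the parameter meaning – buffer length, view name, LONG column and row number – is my assumption based on the example above:

set serveroutput on
declare
  v_view_source clob;
begin
  v_view_source := sys.dbms_metadata_util.long2clob(10000,'USER_SOURCE','TEXT',25);
  dbms_output.put_line(substr(v_view_source, 1, 100));
end;
/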

10 – undo_sql

There is a way to create the opposite of an SQL DML statement. For an insert we get a delete, for a delete we get an insert with the deleted column values and for an update we get an update with the previous values.

This is documented, but not many people know about it. It is part of the flashback logic, more specifically the flashback transaction query.

Using Oracle Flashback Transaction Query

Use Oracle Flashback Transaction Query to retrieve metadata and historical data for a given transaction or for all transactions in a given time interval. Oracle Flashback Transaction Query queries the static data dictionary view FLASHBACK_TRANSACTION_QUERY, whose columns are described in Oracle Database Reference.

The column UNDO_SQL shows the SQL code that is the logical opposite of the DML operation performed by the transaction. You can usually use this code to reverse the logical steps taken during the transaction. However, there are cases where the UNDO_SQL code is not the exact opposite of the original transaction. For example, a UNDO_SQL INSERT operation might not insert a row back in a table at the same ROWID from which it was deleted.

Some security researchers (Alexander Kornbrust at DOAG 2019 – page 74) believe this poses a security risk, especially because it would allow us to find out about column values that are redacted. There are several ways to circumvent data redaction, and Oracle itself states that this feature is not meant as a highly secure implementation (see Security Guidelines). So I don't see it as a high risk. Using undo_sql to circumvent data redaction is the most complex hack I have seen so far – there are far easier methods.


Still an interesting feature. Here is an example.
It needs a lot of non-default settings to make it work.

example

Using the HR demo schema to test this.

First some steps to enable supplemental logging which is needed for undo_sql to work.

-- preparations as DBA in the CDB
select * from dba_pdbs;
alter session set container = CDB$ROOT;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
alter session set container = ORCL;
-- and as HR
ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Now we can do some changes.

-- as HR user
select * from hr.employees;

update hr.employees
set last_name = '--secret--'
where first_name = 'Randall';
2 rows updated.
delete from hr.employees
where first_name = 'Randall';
2 rows deleted.

Don’t commit or rollback yet. Simply change to another session/account.

-- as DBA
select undo_sql
from FLASHBACK_TRANSACTION_QUERY
where table_name='EMPLOYEES';

Result:

insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('191','Randall','--secret--','RPERKINS','650.505.4876',TO_DATE('19-DEC-99', 'DD-MON-RR'),'SH_CLERK','2500',NULL,'122','50');
insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('143','Randall','--secret--','RMATOS','650.121.2874',TO_DATE('15-MAR-98', 'DD-MON-RR'),'ST_CLERK','2600',NULL,'124','50');
update "HR"."EMPLOYEES" set "LAST_NAME" = 'Perkins' where ROWID = 'AAATiDAAMAAALKzABb';
update "HR"."EMPLOYEES" set "LAST_NAME" = 'Matos' where ROWID = 'AAATiDAAMAAALKzAAr';

-- as HR
rollback;

Alternatively run the 4 statements from undo_sql.

-- reset setup changes

-- as HR
ALTER TABLE HR.EMPLOYEES DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- as DBA
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

Conclusion

10 undocumented or unknown features. Some are useful, some are interesting, some are simply gaps in the documentation. I hope you found at least one that you didn't know before and enjoyed reading about it.

A quirk with object dependencies under EBR

TL;DR;

USER_DEPENDENCIES does not show dependencies between objects in different editions. This means that even when no dependencies are shown, we can still get errors when trying to drop objects. This is especially relevant when trying to drop TYPEs.

The basics

Oracle tracks dependencies between objects, for example between packages, but also between types. It provides the view USER_DEPENDENCIES (and the matching ALL_|DBA_|CDB_DEPENDENCIES views), where we can see how those objects are related (which objects call each other).

Here are two small examples.

Example 1) Table + View

create table myTab (col1 number);
create view myView as select * from myTab;
Table MYTAB created.
View MYVIEW created.

Example 2) Object Type + Collection Type

create type myOT as object (col1 number);
/
create type myCT as table of myOT;
/
Type MYOT compiled

Type MYCT compiled

Further code demos will all assume that those 4 objects are in place unless mentioned otherwise.

The data dictionary

select * from user_dependencies;
NAME    TYPE  REFERENCED_OWNER  REFERENCED_NAME  REFERENCED_TYPE  REFERENCED_LINK_NAME  SCHEMAID  DEPENDENCY_TYPE
MYOT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYCT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYVIEW  VIEW  SVEN              MYTAB            TABLE                                  122       HARD
MYCT    TYPE  SVEN              MYOT             TYPE                                   122       HARD

We can ignore the references to the sys.standard package. I think they come from references to datatypes that are defined there. Those dependencies do not affect the following quirks.

The data dictionary shows that the view MYVIEW depends on the table MYTAB (example 1) and that the collection type MYCT depends on the object type MYOT (example 2).

The dependency type HARD tells us that we are not allowed to drop the referenced objects without breaking/invalidating the dependent object.

drop table mytab;
Table MYTAB dropped.
select status from user_objects where object_name = 'MYVIEW';
INVALID

For types we get an error message when trying to drop it.

drop type myot;
ORA-02303: cannot drop or replace a type with type or table dependents

Side note: Another dependency type would be REF, which we encounter for example when using materialized views (MV). If a table is dropped that is used by a select in the materialized view, then we can still fetch data from the MV.

To successfully drop both types we first need to drop the collection type MYCT before we can drop the referenced object type MYOT.

drop type myct;
drop type myot;
Type MYCT dropped.

Type MYOT dropped.

Ok, this worked as expected. We dropped the objects in the opposite order to the one in which we created them.

How does it work under edition based redefinition (EBR)?

Views and types are both editionable objects. That means we can have several “versions” of them in the same schema under different editions.

Assuming my typical 3 edition hierarchy

ORA$BASE ⇒ BETA ⇒ ALPHA 
(Parent ⇒ child1 ⇒ child2)  

I will only use ORA$BASE and ALPHA for this experiment. The 4 objects have all been created in ORA$BASE, which means they can also be accessed in the child editions. So let's switch to ALPHA.

alter session set edition="ALPHA";
select sys_context('userenv','current_edition_name') from dual;
ALPHA

The data dictionary entries are the same as in the base edition.

select * from user_dependencies where name like 'MY%';
NAME    TYPE  REFERENCED_OWNER  REFERENCED_NAME  REFERENCED_TYPE  REFERENCED_LINK_NAME  SCHEMAID  DEPENDENCY_TYPE
MYOT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYCT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYVIEW  VIEW  SVEN              MYTAB            TABLE                                  122       HARD
MYCT    TYPE  SVEN              MYOT             TYPE                                   122       HARD

Now we compile one of the types, but NOT the other.

alter type myct compile;
Type MYCT altered.

This actualized the type MYCT in the child edition. Actualization is a concept of EBR: when a parent object is (re)compiled in a child edition, the code afterwards exists twice – in the parent and in the child edition.

We go back to the parent edition and repeat our test now.

alter session set edition="ORA$BASE";
select sys_context('userenv','current_edition_name') from dual;
ORA$BASE

We try again to drop the types in the correct drop order.

drop type myct;
drop type myot;
Type MYCT dropped.

Error starting at line : 2 in command -
drop type myot
Error report -
ORA-02303: cannot drop or replace a type with type or table dependents

This time we cannot drop the object type. Unfortunately the data dictionary does not tell us which object causes this dependency problem.

select * from user_dependencies where name like 'MY%';
NAME    TYPE  REFERENCED_OWNER  REFERENCED_NAME  REFERENCED_TYPE  REFERENCED_LINK_NAME  SCHEMAID  DEPENDENCY_TYPE
MYOT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYVIEW  VIEW  SVEN              MYTAB            TABLE                                  122       HARD

MYOT is not in the list of referenced names anymore. So we should be able to drop it.

The problem is that the view USER_DEPENDENCIES only shows objects that are in the current edition or that are not editionable (like sys.standard). But we have created a dependency between the object MYCT, which still exists in edition ALPHA, and the object MYOT from the parent edition ORA$BASE.

Which objects exist in which edition can be checked using the user_objects_AE view. The AE stands for all editions.

select edition_name, object_name, object_type, editionable
from user_objects_ae
where object_name like 'MY%';
EDITION_NAME  OBJECT_NAME  OBJECT_TYPE   EDITIONABLE
ORA$BASE      MYCT         NON-EXISTENT
ALPHA         MYCT         TYPE          Y
ORA$BASE      MYOT         TYPE          Y
              MYTAB        TABLE
ORA$BASE      MYVIEW       VIEW          Y

Because the type MYOT was never actualized in the edition ALPHA, we only have this object in the parent ORA$BASE. The dependency crosses from one edition to the other. In ALPHA we are still able to see this dependency in the dictionary.

So how can we drop the object in ORA$BASE?

There are three possible ways. Which way to choose depends on the reason why you dropped the object in the first place.

a) Use the force!

drop type myot force;

I do not recommend this, but there are situations where it is the easiest way. It also invalidates any usage of the dependent collection type in the child edition. It is very easy to overlook such an issue, therefore only use FORCE when you understand the reason for the error and why it is OK to use FORCE.

b) drop the collection type in the child edition first.

alter session set edition="ALPHA";
drop type myct;

alter session set edition="ORA$BASE";
drop type myot;

c) actualize the object type in the child edition
This is easiest when done before dropping the type.

alter session set edition="ALPHA";
alter type myot compile;

Compiling the type creates a copy of the code in the data dictionary. From that point on any drop will leave a proper reference in the data dictionary of each edition. Cross-edition references will not happen when all objects are always actualized. That is also why I start creating new objects in ALPHA and then move them to BETA and later, for final testing, to ORA$BASE.

Further complications

This looks a bit complicated, but after all we knew what we were doing. So it shouldn't pose too big of a problem. Right?

Let me introduce you to the 12.2 Oracle-maintained sys_plsql types.

Since 12.2, plsql pipelined table functions do not need a dedicated SQL type anymore. If we create such a function, typically inside some package, the needed structures (types) can be created purely inside the package spec.

I also dropped all the previously created objects to avoid confusion.

Example

create or replace package myPkg
  authid definer
is 
  type ot_typ is record (col1 number, col2 varchar2(30));
  type ct_typ is table of ot_typ;
  
  function myFnc return ct_typ pipelined;
end;
/

create or replace package body myPkg 
is 
  
  function myFnc return ct_typ pipelined 
  is
    vRow ot_typ := ot_typ();
  begin
    for i in 1..5 loop
      vRow.col1 := i;
      vRow.col2 := chr(ascii('A')-1+i);
      pipe row(vRow);
    end loop;
  end myFnc;
  
end;
/

Now run the function. Since 12.2 there is no need to add the TABLE() operator anymore. However, because of name resolution issues, the function needs a trailing pair of parentheses.

-- table "operator" syntax
select * from table(myPkg.myFnc); 

-- syntax without table keyword
select * from myPkg.myFnc(); 
COL1 COL2
1    A
2    B
3    C
4    D
5    E

The function successfully returns five rows. No SQL type was explicitly created. This was needed in the past, but is not needed anymore.

Also if we check the dictionary, we do not see any new types. But they secretly exist. If we check user_dependencies (or dba_objects) we can see them.

select * from user_types;
no data found
select * from user_dependencies;
NAME                        TYPE          REFERENCED_OWNER  REFERENCED_NAME         REFERENCED_TYPE  REFERENCED_LINK_NAME  SCHEMAID  DEPENDENCY_TYPE
MYPKG                       PACKAGE       SYS               STANDARD                PACKAGE                                122       HARD
MYPKG                       PACKAGE BODY  SYS               STANDARD                PACKAGE                                122       HARD
SYS_PLSQL_9C0C5336_24_1     TYPE          SYS               STANDARD                PACKAGE                                122       HARD
SYS_PLSQL_9C0C5336_DUMMY_1  TYPE          SYS               STANDARD                PACKAGE                                122       HARD
SYS_PLSQL_9C0C5336_9_1      TYPE          SYS               STANDARD                PACKAGE                                122       HARD
MYPKG                       PACKAGE BODY  SVEN              MYPKG                   PACKAGE                                122       HARD
SYS_PLSQL_9C0C5336_24_1     TYPE          SVEN              SYS_PLSQL_9C0C5336_9_1  TYPE                                   122       HARD

The view shows a set of types that all start with an almost identical name. Most importantly, type SYS_PLSQL_9C0C5336_24_1 depends on type SYS_PLSQL_9C0C5336_9_1.

This was the output in 18c (18.7). The result changed in a 19c database. More about that later.

These two types match the type definitions in the package spec.

type ot_typ is record (col1 number, col2 varchar2(30));
type ct_typ is table of ot_typ;

Depending on how the package is then changed in different editions, it can happen that a cross edition dependency is created for those sys_plsql types. This can then prevent further compilation of the package in one of the editions.

We encountered such a scenario during development in an 18.7 database. However, the types used were considerably more complex. At some point we were not able to compile a package body because of a missing spec. However, the spec was there and could apparently be compiled (but secretly would not compile properly).

I was able to track down the problem to a cross edition type dependency for the generated sys_plsql types. Dropping those types (in the relevant editions) allowed us to recompile the package spec and the package bodies.

If you encounter such a case, a drop of all the sys_plsql types in each edition followed by a package recompile solves the issue.
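To check whether such generated types exist in your schema, a query against user_objects_ae (see above) can help:

-- list generated sys_plsql types in all editions
select edition_name, object_name, object_type, status
from user_objects_ae
where object_name like 'SYS\_PLSQL\_%' escape '\';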

Behaviour in 19c (19.3)

The general cross edition type dependency issue also exists in 19c.

The sys_plsql types are not shown in the dictionary views anymore – I am not sure whether the same mechanism is still in use. I could not reproduce the package problem that we encountered in the 18c (18.7) database with a simple test case – neither in 18c nor in 19c. Which does not mean that the problem is gone, only that it is tricky to recreate.

The 18c and 19c patch release notes do not indicate any changes for the sys_plsql types – so it is not clear whether there was a bug that got fixed. Maybe some internal behaviour was changed, but no note about it was added to the documentation.

However, in the EBR guide the following is mentioned under “Evolutionary capability improvements”:

The dependency model is now fine-grained: e.g. adding a new column to a table, or a new subprogram to a package spec, no longer invalidates the dependants.

Conclusion

Make sure to always compile all code in your specific edition. This actualizes all the objects and the editions are then independent from each other.

Start by adding new code objects in the child edition first, before you move them to the parent.

Under EBR consider using dedicated SQL types for table functions, even if they are not needed anymore. This gives you a tiny bit more control over the code and a better chance to react to problematic dependency issues.

10 Oracle SQL functions and function parameters you might not know

The functions mentioned here are listed in alphabetical order.

BITAND function but no BITOR function

There is a BITAND function in SQL but no BITOR function.

BITAND allows us to compare values bit by bit. It returns a value where all bits have been compared using AND. That means each bit in the first operand is matched against the corresponding bit in the second operand. This is useful to compare a set of flags (yes/no values) that are stored inside the same integer field.

BIN_TO_NUM is a helper function that “translates” bits into (decimal) numbers.

BITAND example

SELECT BITAND(
    BIN_TO_NUM(1,0,0),  -- 4 in decimal
    BIN_TO_NUM(0,0,1)   -- 1 in decimal
    ) check_flag
  from dual;

0

So 4 and 1 have no bits in common.

SELECT BITAND(
    BIN_TO_NUM(1,0,1),  -- 5 in decimal
    BIN_TO_NUM(1,0,0)   -- 4 in decimal
    ) check_flag
  from dual;

4
4 = 0×2⁰ + 0×2¹ + 1×2² = BIN_TO_NUM(1,0,0)

So 5 and 4 have bit 3 in common. (Bit counting starts at 1, but powers of 2 start with 0.)

BITOR would combine the bits from each operand.

Fortunately BITOR can be calculated using BITAND. We need to sum the operands and then subtract the BITAND result from that.

Here is an example

select 1 + 4  
       - BITAND(1,4) bitor_compare
  from dual;

5

select 5 + 4  
       - BITAND(5,4) bitor_compare
from dual;

5
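If BITOR is needed more often, the calculation can be wrapped into a small function. A sketch (the name bitor is my own choice):

create or replace function bitor(p_a number, p_b number) return number
deterministic
is
begin
  -- the sum contains the common bits twice; bitand removes one copy
  return p_a + p_b - bitand(p_a, p_b);
end bitor;
/

select bitor(5,4) from dual;
5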

CAST with default on conversion error

The CAST function allows us to convert values from one datatype into another.

Parameter enhancements have been introduced in 12.2 along with similar enhancements to many datatype conversion functions.

https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/CAST.html#GUID-5A70235E-1209-4281-8521-B94497AAEF75

Let’s experiment a little with it.

select cast('01-01-20' as date default null on conversion error
           ,'dd-mm-rr') 
from dual;
01-Jan-20

select cast('INVALID' as date default null on conversion error
           ,'dd-mm-rr') 
from dual;
(null)

Works like a charm. The string was converted into a date (and converted back into a string to print it on the screen using local nls settings). An invalid string could not be converted, instead the default NULL value was chosen.

Sometimes we want to return today in case there is a conversion error…

select cast('01-01-20' as date default sysdate on conversion error
           ,'dd-mm-rr') 
from dual;
01-Jan-20

select cast('INVALID' as date default sysdate on conversion error
           ,'dd-mm-rr') 
from dual;
17-Mar-20

seems to work!

…not so fast!

alter session set nls_date_format = 'day-Mon-rr';
select cast('01-01-20' as date default sysdate on conversion error
           ,'dd-mm-rr') 
from dual;

ORA-01858: a non-numeric character was found where a numeric was expected

Although we used CAST with a default on conversion error, we still got a conversion error!
The problem here is the default parameter. It needs to be a string. Since it is not, an implicit type conversion happened. The result differs from the format given in the fmt parameter, which then leads to the conversion error.

Unfortunately the default parameter can not be an expression.

select cast('01-01-20' as date 
            default to_char(sysdate,'dd-mm-rr') 
            on conversion error
           ,'dd-mm-rr')
from dual;

ORA-43907: This argument must be a literal or bind variable.

Also nls_date_format is not allowed as nlsparam.

select cast('01-01-20' as date default sysdate on conversion error
           ,'dd-mm-rr'
           ,'nls_date_format=dd-mm-rr')
from dual;

ORA-12702: invalid NLS parameter string used in SQL function

The same could happen for other conversions, like casting a string into a number. This example is from the docs.

SELECT CAST(200
       AS NUMBER
       DEFAULT 0 ON CONVERSION ERROR)
  FROM DUAL;

It is a very, very bad example, since 200 is already a number. So let's assume the value is '200.00' and the default is still 0. We also add the other parameters to specify the format of this string.

select cast('200.00' as number default 0 on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;
200

select cast('200x00' as number default 0 on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;
0

Seems to work. But here again we can manipulate the session settings into a wrong conversion. Notice that the alter session settings and the nlsparam settings differ.

alter session set nls_numeric_characters = ',.';
Session altered.

select cast('200x00' as number default 0 on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;
0

Good. Fortunately the number conversion is slightly more robust than the date conversion. We cannot set a session default number format (it is always “TM9”, I believe) in the same way as we can set the default date format.

However we get into trouble once the default value uses non-integer numbers.

select cast('200x00' as number default 0.5 on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;

5

The default was 0.5 but we got 5!

The correct way to do it, of course, is to use a string with a format that matches the fmt and nlsparam parameters.

select cast('200x00' as number default '0.5' on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;

0,5

Now we got the correct default value returned!
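A related 12.2 addition is the VALIDATE_CONVERSION function. It returns 1 if the conversion would succeed and 0 if not, so we can test a value before casting it:

select validate_conversion('200x00' as number
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;
0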

CHECKSUM

CHECKSUM can be used to quickly calculate a checksum over a set of rows. There is an analytic version of it, too.

https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/checksum.html#GUID-3F55C5DF-F23A-4B2F-BC6F-E03B34B78BA8

If you want to quickly see if data was changed between two databases, you can run a checksum over each column and compare the result on both environments.

select checksum(phone_number)
from hr.employees;

Never heard of this function before? Probably because it just got introduced in 20c.
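In older versions a rough substitute can be built with ORA_HASH. This is only a sketch – the sum is order-independent, but hash collisions and NULLs make it less reliable than a real checksum:

-- pre-20c: order-independent fingerprint of one column
select sum(ora_hash(phone_number)) from hr.employees;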

Dump

The dump function is often used to inspect binary values. Useful for example when solving character set issues.

Not many know that it has some additional parameters.

https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/DUMP.html#GUID-A05793C9-B35D-4BA7-B68C-E3693BCF47A5

Especially interesting is the return_fmt parameter. It allows returning the dump in octal (8), hex (16), decimal (10 = default) or a special format (17). The return format “17” returns single-byte values in readable ASCII format, while multi-byte values that cannot be converted are shown in hex.

For example the “€” symbol in UTF-8 uses 3 bytes: E2 82 AC

Source: https://unicode-table.com/en/20AC/

Dump-17 will return this

select dump('abc€',17) from dual;
Typ=96 Len=6: a,b,c,e2,82,ac

The string is made of 6 bytes. The first 3 are single-byte characters and are converted into ASCII. The 4th character is the euro sign, which is 3 bytes in UTF-8. The format 17 is interesting because it helps us focus on and find the problematic characters.

Another interesting option is to add 1000 to the format. This will add the character-set to the output.

select dump('abc€',1010) from dual;
Typ=96 Len=6 CharacterSet=AL32UTF8: 97,98,99,226,130,172

Of course that works with NCHAR too.

select dump(n'abc€',1010) from dual;
Typ=96 Len=8 CharacterSet=AL16UTF16: 0,97,0,98,0,99,32,172

LNNVL

This is a very special function. Apart from the unspeakable name, it is used for Oracle-internal SQL transformations by the optimizer. Here is a document that describes such transformations. LNNVL returns a Boolean and can be used directly inside a WHERE clause. This is not yet possible for user-defined functions that return Boolean, but other such functions do exist (e.g. regexp_like).

I sometimes use it to find rows where two values differ. If the comparison value is NULL I still want to consider it to be different.

Instead of using the non-equality operator != we have to use its opposite, the equality operator =. This is just how LNNVL works.

select dummy from dual
where lnnvl('A'='B');
X

select dummy from dual
where lnnvl('A'='A');
No data found.

select dummy from dual
where lnnvl('A'=null);
X

This assumes that 'A' is always a not-null value.

To get the same result, a comparison condition typically also needs to consider the NULL case.

select dummy from dual
where (('A'!='B' and 'B' is not null) OR 'B' is null);

Since more conditions like this might follow, the statement very soon becomes cluttered with OR checks, parentheses and IS NULL comparisons.

Unfortunately, not many developers are familiar with this function, so we should always add a comment explaining its purpose and behaviour.

NANVL

NANVL is similar to NVL.

NVL returns a value, when NULL is encountered.
NANVL returns a value when NaN (not a number) is encountered.

NaN is part of the binary_float and binary_double datatypes.

select to_binary_float(1/17) from dual;
0,05882353

select to_binary_float('NaN') from dual;
NaN

Let's apply NANVL to it:

select nanvl(to_binary_float('NaN'),'0') from dual;
0.0

Or we could set it to negative infinity…

select nanvl(to_binary_float('NaN'),'-INF') from dual;
-Infinity

Somehow interesting, but rarely useful I think.

NEXT_DAY function parameter

The NEXT_DAY function needs a second parameter that defines what target weekday it points to.


The argument char must be a day of the week in the date language of your session, either the full name or the abbreviation. The minimum number of letters required is the number of letters in the abbreviated version. Any characters immediately following the valid abbreviation are ignored. The return value has the same hours, minutes, and seconds component as the argument date.

https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/NEXT_DAY.html#GUID-01B2CC7A-1A64-4A74-918E-26158C9096F6

This second parameter char works differently than what you might expect. Here is an example.

alter session set nls_date_format = 'Dy, DD-MON-YYYY';
Session altered.

alter session set nls_language='AMERICAN';
select next_day(sysdate,'FRIDAY') from dual;
Fri, 20-MAR-2020

alter session set nls_language='GERMAN';
Session altered.

select next_day(sysdate,'FREITAG') from dual;
Fr, 20-MRZ-2020

select next_day(sysdate,'FRIDAY') from dual;
Fr, 20-MRZ-2020

Isn't that strange? Although my session language is GERMAN, the American spelling still works!

Let’s test this with a different language. In Spanish “friday” would be “viernes”.

select next_day(sysdate,'VIERNES') from dual;
ORA-01846: Kein gültiger Wochentag
"not a valid day of the week"

alter session set nls_language='SPANISH';
Session altered.

select next_day(sysdate,'VIERNES') from dual;
Vie, 20-MAR-2020

select next_day(sysdate,'FRIDAY') from dual;
ORA-01846: día de la semana no válido
"not a valid day of the week"

Ok. So the Spanish language works as long as the NLS settings are correct, but it doesn't accept the American spelling.

Is German special? In a way, yes. In German several weekdays have an abbreviation that matches the American spelling, and the abbreviation is only two letters short. This is especially true for the days that have a good chance to be used in the NEXT_DAY function (Monday, Friday, Saturday, but not Sunday!).

For “FREITAG” the abbreviation is “FR”. The NEXT_DAY function accepts anything as long as the char string starts with the abbreviation of a weekday in the correct language. So FREITAG, FRIDAY and even FRITZ or FROG all return the next friday.

alter session set nls_language='GERMAN';
Session altered.

select next_day(sysdate,'FR') from dual;
Fr, 20-MRZ-2020

select next_day(sysdate,'FRITZ') from dual;
Fr, 20-MRZ-2020

select next_day(sysdate,'FROG') from dual;
Fr, 20-MRZ-2020

Is this how you expected the char parameter to work?

Recommendation? Don’t use NEXT_DAY at all. I prefer TRUNC over NEXT_DAY any day.
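A language-independent alternative for “next Friday” can be built with trunc(date,'IW'), which always returns the Monday of the ISO week. A sketch (note that, unlike NEXT_DAY, this returns midnight instead of keeping the time component):

select case
         when trunc(sysdate,'IW') + 4 > sysdate
         then trunc(sysdate,'IW') + 4   -- Friday of the current week is still ahead
         else trunc(sysdate,'IW') + 11  -- otherwise Friday of the next week
       end as next_friday
from dual;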

POWER (sql) = ** (plsql)

In SQL to calculate an exponentiation we have to use the POWER function.

Example 2³

select power(2,3) from dual;
8

In plsql we can use the ** operator instead.

set serveroutput on
begin
  dbms_output.put_line(2**3); 
end;
/
8

Unfortunately the ** operator will not work in SQL.

ROUND_TIES_TO_EVEN function

Everybody knows the ROUND function. ROUND rounds a value up when it is exactly halfway between two integers.

1.5 ≈ 2
2.5 ≈ 3
...
7.5 ≈ 8
8.5 ≈ 9
9.5 ≈ 10

This is what statisticians call a biased function. The computed values do not reflect the distribution of the original values – especially if there are many 0.5 decisions.

An unbiased version of rounding rounds ties to the nearest even number, so one .5 value is rounded up and the next one is rounded down.

1.5 ≈ 2
2.5 ≈ 2
...
7.5 ≈ 8
8.5 ≈ 8
9.5 ≈ 10

In English this is known as banker's rounding (round half to even); in German it is called “mathematisches Runden”, as opposed to “kaufmännisches Runden”, which always rounds .5 up.

The round_ties_to_even function has been introduced in 18c.

https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/ROUND_TIES_TO_EVEN-number.html#GUID-49919B6B-4337-4812-A248-B5D98F102DBD
select level+0.5 as val, 
       round(level+0.5), 
       round_ties_to_even(level+0.5)
from dual connect by level <= 10;
 VAL  ROUND(VAL)  ROUND_TIES_TO_EVEN(VAL)
 1.5           2                        2
 2.5           3                        2
 3.5           4                        4
 4.5           5                        4
 5.5           6                        6
 6.5           7                        6
 7.5           8                        8
 8.5           9                        8
 9.5          10                       10
10.5          11                       10

This new rounding function implements “Bankers Rounding”. Especially when adding up the rounded values, the rounded sum differs less from the correct (unrounded) sum compared to what normal rounding delivers.

STANDARD_HASH function

If we quickly want to create some hash values we can use the standard_hash function.

select standard_hash(dummy) from dual;
C032ADC1FF629C9B66F22749AD667E6BEADF144B

The default uses an SHA1 hash algorithm, but we can specify SHA1, SHA256, SHA384, SHA512 and MD5.

select standard_hash(dummy,'SHA512') from dual;
3173F0564AB9462B0978A765C1283F96F05AC9E9F8361EE1006DC905C153D85BF0E4C45622E5E990ABCF48FB5192AD34722E8D6A723278B39FEF9E4F9FC62378

MD5 is not recommended at all (https://www.md5online.org/blog/why-md5-is-not-safe/). SHA1 is also not considered secure anymore, but you can still use it to create hash values for non-security purposes.

Conclusion

That’s it. 10 functions or parameters of functions that you probably didn’t know before. At least not all of them.

SQL Quickie: How to reset an identity column

Introduction

Since Oracle 12 we can declare columns as identity columns. Other databases call this feature an “autoincrement” column – a term that can easily lead to some misunderstandings, but is already well established. In some of those databases such a column allows resetting the current value. There is no (documented) way to do that with an Oracle identity column. A recent discussion in the ODC forums led me to wonder whether it is possible to set an identity column to a custom value. And yes, it is.

TL;DR;

  • Change the increment of the sequence that supports the identity column by modifying the column.
  • Increase the value by an insert+rollback.
  • Reset the sequence by modifying the column again.

or use the undocumented RESTART option (12.2 onwards)

alter table demo_identity_reset
modify id generated always as identity restart start with 60;

Showcase

demo setup

First we create a table with an identity column. We use default settings for the sequence, which for example means increment +1 and cache 20.

/* prepare demo table */
create table demo_identity_reset
(id number generated always as identity primary key
,name varchar2(500)
);

Then insert some demo values.

/* add some data */
insert into demo_identity_reset (name) values ('test1');
insert into demo_identity_reset (name) values ('test2');
commit;
select * from demo_identity_reset;
ID NAME
1 test1
2 test2

We now have a table with an identity column where some values are used and more values are currently cached in the SGA.

Set a custom value (12.1)

Gather info

First we need to find the name of the sequence. We could look at all sequences, but it is more correct to look at the column definition. The name of the sequence can be seen in the default value of the identity column.

/* check data dictionary and find the sequence name*/
select * from user_sequences;

or

/* check data dictionary and find the sequence name*/
select data_default
from user_tab_columns
where table_name = 'DEMO_IDENTITY_RESET'
and identity_column = 'YES';
DATA_DEFAULT
"MYSCHEMA"."ISEQ$$_172206".nextval

In my case the sequence name is ISEQ$$_172206. The generated name will always start with ISEQ$$_.

Other information we need to find is the current value, the cache size and the increment. The only decent way to see the current value is to set the sequence to nocache and after that check the data dictionary. We need to remember the original cache size setting to be able to restore it later to that value.

Start the change

From this step onward no other session should insert at the same time. It might be a good idea to lock the table in exclusive mode during those steps.
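Such a lock could look like this (note that it only holds until the following DDL commits implicitly):

lock table demo_identity_reset in exclusive mode;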

The cache also needs to be set to nocache to prevent caching any values with the wrong increment during the next steps.

/* reset the cache so that we can see the next value */
select cache_size 
from user_sequences 
where sequence_name = 'ISEQ$$_172206';

alter table demo_identity_reset 
modify id generated always as identity nocache;

Cache size was 20. The sequence now is not caching anymore. This is shown in the user_sequences dictionary view.

Now we can read the next value from the data dictionary and use that information to set an increment that jumps to our target value.

Here I assume a target value of 60.

/* find the next value */
select last_number from user_sequences 
where sequence_name = 'ISEQ$$_172206';

/* calculate the increment: 57 = 60(target value) - 3(last number) */

/* change the increment so that it jumps over all the other values. */
alter table demo_identity_reset 
modify id generated always as identity increment by 57 nocache; 

The last number was 3. Last number will always hold the next value after the cached values.

-- this does a sequence.nextval which we can not call otherwise
insert into demo_identity_reset (name) 
values ('dummy. rollback immediately.'); 
rollback;

It doesn't matter much which value was inserted (59). In case you want to avoid any gaps, this insert needs to be one of the real inserts that has to be done anyway, and the increment needs to be one value more (58 instead of 57), so that instead of a rollback you can do a commit.

/* reset the changes */
alter table demo_identity_reset 
modify id generated always as identity increment by 1 cache 20;

Done. The next insert will now start with our target value 60.

Set a custom value (12.2)

I tested this in 19c on LiveSQL, but it should work in all versions that belong to the 12.2 database family (12.2.0.1, 18c, 19c).

Instead of the complicated series of steps in 12.1 we can do it all in a single command. Unfortunately this command is undocumented, so use it at your own risk.

alter table demo_identity_reset 
modify id generated always as identity restart start with 60;

The RESTART option was mentioned in one of the new feature guides and Roger Troller made me aware of it in one of his blog posts. However currently (19c) it is still missing in the SQL reference documentation.

Test the change

The following steps are not needed anymore, but they prove that the concept works.

/* check the result in the dictionary*/
select last_number, cache_size 
from user_sequences 
where sequence_name = 'ISEQ$$_172206';
LAST_NUMBER  CACHE_SIZE
         60          20
/* test the result using the table */
insert into demo_identity_reset (name) values ('test3');
commit;
select * from demo_identity_reset;
ID  NAME
 1  test1
 2  test2
60  test3

Yes it works!

Cleanup code

Drop the table and the sequence. A simple drop table is not enough, because the sequence will continue to exist as long as the table is still in the recycle bin.

/* undo testcase */
drop table demo_identity_reset purge;

Further information

In general I would not recommend doing this in a production environment. But there are situations where you want to do it. For example, after copying a production schema to a test database and cleaning out many values, you might want to reset the sequences too.

Such situations are rare, probably once-in-a-while activities. That is also why I didn't include the lock table command. The lock will only hold until the next alter table command, which makes it not completely safe to use.

It is not possible to use an alter sequence command for sequences that are connected with an identity column. It results in an error message.

alter sequence "ISEQ$$_172206" restart start with 60;

ORA-32793: cannot alter a system-generated sequence

Here is an example of the definition of an autoincrement column in another database (MySQL).

https://dev.mysql.com/doc/refman/5.7/en/example-auto-increment.html

The AUTO_INCREMENT attribute can be used to generate a unique identity for new rows

No value was specified for the AUTO_INCREMENT column, so MySQL assigned sequence numbers automatically. You can also explicitly assign 0 to the column to generate sequence numbers, unless the NO_AUTO_VALUE_ON_ZERO SQL mode is enabled.

If the column is declared NOT NULL, it is also possible to assign NULL to the column to generate sequence numbers.

When you insert any other value into an AUTO_INCREMENT column, the column is set to that value and the sequence is reset so that the next automatically generated value follows sequentially from the largest column value.

Updating an existing AUTO_INCREMENT column value in an InnoDB table does not reset the AUTO_INCREMENT sequence as it does for MyISAM and NDB tables.

MySQL 5.7 Reference Manual – 3.6.9 Using AUTO_INCREMENT

Some quick facts about sequence caches and gaps in IDs

  1. Oracle sequences generate key values without gaps (minor exception from the rule: scalable sequences).
  2. Surrogate keys can and will have gaps.
  3. Gaps exist because it is not worth the huge effort to avoid them.

The biggest misunderstanding is probably the difference between the value coming from a sequence and the primary key value inside a database column (ID column). We can use a sequence to set a surrogate key. Typically we do this via a database trigger, an identity column, or by supplying the value directly during the insert.

“The history of sequence caches is a history of misunderstandings.”

somebody on the internet, 1963

Essentially it is a two-step process. The new ID value is created by sequence.nextval and then used during the insert. Oracle provides the logic to get a new sequence value in a fast, performant and multi-session-friendly way. The developer is responsible for making sure this value is not wasted.

The sequence cache is a performance instrument. A lost cache value means we will see gaps in future ID values. There is only a very limited number of activities that make cached sequence values “disappear”. The majority of gaps originates from other activities, like failed inserts or deletes. Many developers seem to think that such a gap is caused by the sequence cache. I find it important to separate which reason ultimately led to a gap in an ID value.

A gap analysis can be the first step. The following select finds gaps and orders them by size. Put your own table and PK column into the “ids” subquery and find out how the gaps are distributed in that table.

-- gap analysis
with ids as (select region_id as id, lead(region_id) over (order by region_id) as nextid from regions)
   , gaps as (select id, nextid - id as gap_size from ids where nextid > id + 1)
select gap_size , count(*) gaps_found  
from gaps
group by gap_size
order by count(*) desc;

Example result

GAP_SIZE  GAPS_FOUND
      20           5
       3           2
       2           2
      19           1

Here the data shows a few small gaps. Either some insert didn’t make it or rows had been deleted. But there are also several gaps of size 19 and 20, which means the sequence cache was probably lost at least 6 times.

To make it clear: IDs are allowed to have gaps. However, when an end user complains about constantly seeing gaps in some ID, we should investigate what is going on. Frequent gaps should not be the expected normal behaviour – it is worth finding the root cause of such an issue.

So here is a (non-complete) list of activities that do or do not influence the caching of sequences and the existence of gaps in an ID column.

action                                                  result
------                                                  ------
multiple sessions                                       cache used
multiple sessions, one session slow to commit           IDs not in order of commit => not a gap eventually, but looks like a gap until the final commit
alter system flush shared_pool;                         cache values lost => gap in ID
shutdown immediate;                                     cache values not lost
shutdown abort;                                         cache values lost => gap in ID
insert; commit;                                         no gap
insert; rollback;                                       gap in ID
failed insert                                           gap in ID
delete; commit;                                         gap in ID
switch sequence from cache to nocache                   cache values not lost
kill session                                            possible gap in ID (values not used/committed are lost)
different RAC sessions                                  cache values not lost (but separate caches on each RAC node)
SGA size too small                                      cache values lost => gap in ID (sequence cache ages out of the shared pool per LRU list)
session sequences                                       not useful for unique ID values => duplicate IDs
scalable sequences                                      cache values not lost => planned large gap in ID (because of the prefix)
alter system cancel sql 'SID, SERIAL'; (insert killed)  gap in ID
alter system disconnect session post_transaction;       no gap (unless the sequence was fetched before the transaction started)
alter system disconnect session immediate;              gap in ID
sequence restart                                        possible duplicate IDs
sequence interval > 1                                   planned gap in ID

That a sequence ages out of the shared pool is in my experience the most common cause of lost sequence values that are noticed. A strong indication is when frequent gaps of 20 values exist.

Possible workarounds are

a) set the sequence to nocache if the sequence is used rarely
alter sequence myseq nocache;
This might cost performance.

b) pin the sequence in the shared pool
dbms_shared_pool.keep('MYSCHEMA.MYSEQ','Q')
Pinning the sequence is useful when you have times with lots of concurrent activities followed by larger idle periods.
This costs a little memory.

c) increase SGA size – sometimes it helps to increase the restricted shared pool area.
This costs memory (and money) – not available for other non-database processes.

Conclusions

  • Small gaps in surrogate keys (ID columns) are natural, mostly because of delete statements or failed inserts.
  • The sequence object typically does not produce gaps.
  • Very frequent gaps of size 20 (default cache size) are not natural and might need investigation.

EBR – how to run a background job in the correct edition

Introduction

Edition based redefinition is a no-extra-cost feature of the Oracle database. It allows us to have multiple versions of the same plsql-based code (packages, views, triggers, object types, synonyms, …) in the database at the same time.

Code that starts a scheduled background job should execute this background job in the same edition as the current session.

Here is how to do it using job classes.

TL;DR.

First create a job class that connects to a specific edition via a service name:

begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_DEV$ALPHA',
    service => 'DEV_ALPHA'
  );
end;
/

Then start the background job using that job class.

declare
  v_jobclass_prefix varchar2(10)  := 'JCED_';
  v_jobname         varchar2(100) := 'myJob';
begin
  dbms_scheduler.create_job(job_name            => v_jobname,
                            job_type            => 'STORED_PROCEDURE',
                            job_action          => 'myPkg.doSomething',
                            job_class           => v_jobclass_prefix || sys_context('userenv','current_edition_name'),
                            number_of_arguments => 0,
                            start_date          => sysdate,
                            enabled             => true,
                            auto_drop           => true);
end;
/

Problem description

This article assumes some basic understanding of the edition based redefinition (EBR) feature of the oracle database.

Scheduled jobs do run in the background in a separate new session. Any new session runs in the default edition of the database, unless it specifies the edition at the time the session is created.

It is possible to switch the edition on demand during a session, but this is not recommended. Chris Saxon misused this possibility for one of his SQL Magic tricks. Not everything that is possible should be used.

Chris Saxon – SQL Magic

Solution

A scheduled job can use a job class. A job class can be set to use a database service. A database service can be set to connect to a specific edition.

When using EBR it is a recommended practice to create a service for each edition and possibly another one for the default edition. Information on how to create a service for an edition can be found in one of my older blog entries: working with editions – part 2. Oren Nakdimon also recently published a very nice article about it: using services for exposing new editions.

The following assumes that we have three editions ORA$BASE, TST$BETA and DEV$ALPHA. It also assumes that we have created matching database services ORA_BASE, TST_BETA and DEV_ALPHA and that those services are running.
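For reference, such a service could be created like this on a single-instance database (as DBA; on RAC/Grid Infrastructure you would use srvctl instead):

begin
  dbms_service.create_service(
    service_name => 'DEV_ALPHA',
    network_name => 'DEV_ALPHA',
    edition      => 'DEV$ALPHA');
  dbms_service.start_service(service_name => 'DEV_ALPHA');
end;
/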

create the job classes

For each edition we create a job class JCED_<edition_name> (JCED_ORA$BASE, JCED_TST$BETA and JCED_DEV$ALPHA). This can be done with DBA or with the MANAGE SCHEDULER privilege.

Job classes also allow prioritizing between jobs and connecting them to a resource consumer group.

I’m using a prefix JCED (Job Class EDitioned) to tag the classes. You can use any name you want, but it is advisable to have some common identification for the classes. The name also allows me later to find the correct job class, depending on the current edition of the session.

-- Run as DBA
begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_ORA$BASE',
    service => 'ORA_BASE'
  );
end;
/

begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_TST$BETA',
    service => 'TST_BETA'
  );
end;
/

begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_DEV$ALPHA',
    service => 'DEV_ALPHA'
  );
end;
/

Then we need to make those classes available to the user that will later create the scheduled jobs.

grant execute on sys.JCED_ORA$BASE to mySchema;
grant execute on sys.JCED_TST$BETA to mySchema;
grant execute on sys.JCED_DEV$ALPHA to mySchema;

It is also possible to allow a schema to use any class.

grant execute any class to mySchema;

Job classes do not belong to any schema. They are always created in SYS. Something to keep in mind, for example when exporting/importing schemas to another database.

programmatically create a job

The following function creates a background job using the current edition. The job calls a procedure with 3 arguments.

The function is part of some larger package myPackage. It is assumed that the doSomething procedure is also part of the same package.

/* constant declaration section in package body */
  g_jobclass_prefix     constant varchar2(10) := 'JCED_';


/* modules */

function createBackgroundJob(p_id in number, p_starttime in timestamp) return varchar2
is
  co_modul_name CONSTANT VARCHAR2(96) := $$PLSQL_UNIT || '.createBackgroundJob';
  v_job_nr binary_Integer;
  v_jobname varchar2(100);
begin

  v_jobname := 'myJob_'||to_char(p_id);
  dbms_scheduler.create_job(job_name            => v_jobname,
                            job_type            => 'STORED_PROCEDURE',
                            job_action          => $$plsql_unit || '.doSomething',
                            job_class           => g_jobclass_Prefix||sys_context('userenv','current_edition_name'),
                            number_of_arguments => 3,
                            start_date          => p_starttime,
                            enabled             => FALSE,
                            auto_drop           => true);
  dbms_scheduler.set_job_argument_value(job_name=>v_jobname, argument_position=>1, argument_value=>p_id);
  dbms_scheduler.set_job_argument_value(job_name=>v_jobname, argument_position=>2, argument_value=>'RUN QUICK');
  dbms_scheduler.set_job_argument_value(job_name=>v_jobname, argument_position=>3, argument_value=>myPackage.g_trace_level);

  return v_jobname;
end createBackgroundJob;

The name of the job class is calculated using the name of the current edition by sys_context('userenv','current_edition_name').

start the background job

The createBackgroundJob function only prepares the background job. To run it, we need to call the create function and enable the job afterwards.

declare
  job varchar2(128);
begin  
  job := myPackage.createBackgroundJob(1, localtimestamp);
  dbms_scheduler.enable(job);
end;
/

Conclusion

Using job classes is an easy way to start a scheduled job in the same edition that the currently running session is in.

Further remarks

Justification

I used this logic to split a plsql-heavy task into several worker tasks that could run in parallel. I wanted to make sure that the parallel execution was done using the same edition as the session that started it.

Restrictions

I did not test if it is possible to run lightweight jobs in a specific edition. There seems to be no obvious restriction that prevents using a job class also for lightweight jobs.

Since job classes belong to SYS, sometimes they need to be prefixed with the schema name: sys.JCED_DEV$ALPHA.

It is not recommended to switch the edition from inside some code, because that code itself is running in a specific edition (the default one). This is very hard to control and to do properly.

Jobs are created with job names in upper case. In some cases we need to make sure to use the upper-cased job name to find/handle the correct job.

Error handling

Error handling needs special care when combining scheduled jobs and editions. For example, I make sure that proper instrumentation is in place and that the log entry also includes the edition in which the error happened. In some situations, for example if the service is dropped, the session is still created but falls back to the default edition. We want to be sure that we notice any issues arising from such a situation.