lateral join – decorrelation gone wrong

A colleague made me aware of the following misbehaviour of Oracle's optimizer. Shoutout to Christine S. who discovered the problem.

demo

create table a 
as select 1 x from dual;

create table b 
as select 1 x, 1 y from dual union all 
   select 1 x ,2 y from dual;

select a.x, c.y
from  a,
     lateral (select b.x, max (b.y) y from b where b.x = a.x) c
where a.x = c.x;

result

         X          Y
---------- ----------
         1          2

At first glance this looks like what we intended to see. However, a closer inspection of the code reveals that select b.x, max (b.y) y from b is not valid syntax. We do get an error message when running it directly.

ORA-00937: not a single-group group function
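For reference, this is the standalone run. The literal 1 only stands in for the correlation value a.x and is my own substitution:

select b.x, max (b.y) y 
from b 
where b.x = 1;

ORA-00937: not a single-group group function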

We also get the same error message when we use the select with the lateral keyword to drive a materialized view.

create materialized view mv
as
select a.x, c.y
from a,
     lateral (select b.x, max (b.y) y from b where b.x = a.x) c
where a.x = c.x;
ERROR at line 5:
ORA-00937: not a single-group group function

The ORA-937 error message is correct and should be displayed.

That we get a result in the first case is an error!

reason

Oracle transforms the lateral join into a decorrelated lateral join. During this transformation the syntax error disappears. We can see this when checking the outline section of the execution plan (see below). The transformation works differently when used in a materialized view.

select *
from dbms_xplan.display_cursor(sql_id=>'2c1nchvqtjt6s', format=>'+outline');
Plan hash value: 3244120894

----------------------------------------------------------------------------------------
| Id  | Operation            | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                 |       |       |     7 (100)|          |
|*  1 |  HASH JOIN           |                 |     1 |    29 |     7  (15)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | A               |     1 |     3 |     3   (0)| 00:00:01 |
|   3 |   VIEW               | VW_DCL_A18161FF |     1 |    26 |     4  (25)| 00:00:01 |
|   4 |    HASH GROUP BY     |                 |     1 |     6 |     4  (25)| 00:00:01 |
|   5 |     TABLE ACCESS FULL| B               |     2 |    12 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$B1519A3D")
      DECORRELATE(@"SEL$2")
      OUTLINE_LEAF(@"SEL$13D91C5D")
      OUTER_JOIN_TO_INNER(@"SEL$71877063" "C"@"SEL$1")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$71877063")
      DECORRELATE(@"SEL$B1519A3D")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$B1519A3D")
      DECORRELATE(@"SEL$2")
      FULL(@"SEL$13D91C5D" "A"@"SEL$1")
      NO_ACCESS(@"SEL$13D91C5D" "C"@"SEL$1")
      LEADING(@"SEL$13D91C5D" "A"@"SEL$1" "C"@"SEL$1")
      USE_HASH(@"SEL$13D91C5D" "C"@"SEL$1")
      FULL(@"SEL$B1519A3D" "B"@"SEL$2")
      USE_HASH_AGGREGATION(@"SEL$B1519A3D")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A"."X"="C"."X")

The name of the view VW_DCL_xxx indicates that it was used for a decorrelated lateral operation. For a normal lateral operation the name of the view would be VW_LAT_xxx.
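To illustrate, here is my reconstruction (not actual optimizer output) of roughly what the decorrelated form looks like. The optimizer pulls the correlated predicate out of the lateral view and compensates with a GROUP BY on the join column – and that injected GROUP BY is exactly what makes the formerly invalid aggregate legal:

select a.x, c.y
from a,
     (select b.x, max (b.y) y
      from b
      group by b.x) c  -- group by injected during decorrelation
where a.x = c.x;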

opinion

I’m a big fan of lateral joins. For more insights into what a lateral join does, check this older blog post by the Optimizer Development Group: http://optimizermagic.blogspot.com/2007/12/outerjoins-in-oracle.html

Since 12c (or maybe a little earlier) there have been optimizer enhancements to lateral joins, called decorrelation. In essence it means that a lateral join, which can be seen as a kind of correlated subquery, is rewritten into an uncorrelated subquery under certain specific circumstances.

I believe this rewrite is almost always wrong. If the developer deliberately decides to use the lateral keyword (the same goes for cross apply and outer apply), the optimizer should not try to correct this.

Other authors have written about problems with decorrelation before.

The case demonstrated here is just another example where decorrelation has gone wrong. An optimizer hint should not change the result of a query, and even more so should not change whether the query is formulated correctly or not. There are exceptions to this rule, where the intention of the hint is to change the outcome, for example ignore_row_on_dupkey_index or fresh_mv. But the (undocumented) DECORRELATE/NO_DECORRELATE hints are no such cases.

workaround

There are two ways to disable the decorrelation and get the expected ORA-00937 error message.

a) use the NO_DECORRELATE hint.

SQL> select /*+ no_decorrelate(@"SEL$2") */
  2   a.x, c.y
  3  from  a,
  4       lateral (select b.x, max (b.y) y from b where b.x = a.x) c
  5  where a.x = c.x;
     lateral (select b.x, max (b.y) y from b where b.x = a.x) c
                     *
ERROR at line 4:
ORA-00937: not a single-group group function

b) switch the decorrelation feature off using the hidden (underscore) parameter _optimizer_ansi_join_lateral_enhance

SQL> alter session set "_optimizer_ansi_join_lateral_enhance"=false;

Session altered.

SQL> select a.x, c.y
  2  from  a,
  3       lateral (select b.x, max (b.y) y from b where b.x = a.x) c
  4  where a.x = c.x;
     lateral (select b.x, max (b.y) y from b where b.x = a.x) c
                     *
ERROR at line 3:
ORA-00937: not a single-group group function


conclusion

Although LATERAL joins are a nifty feature, always be aware that such features often come with bugs.

plsql function: who_called – simplify your exception handler!

Does your error logging logic still look like this?

example 1 – old school handler

procedure doSomething is
begin
   ...
exception
  when others then
     pk_log.error(p_module    => $$PLSQL_UNIT, 
                  p_submodule => 'doSomething', 
                  p_message   => 'Something not done!'||sqlerrm);
     raise;
end;

Here is what I prefer!

example 2 – compact handler

procedure doSomething is
begin
   ...
exception
  when others then
     pk_log.error('Something not done!');
     raise;
end;

Such code is way more convenient! Of course it will store the same information as the previous example 1.

To get there some parts have to be moved into the logging method (pk_log.error).

  1. sqlerrm
    is the error message
    When moving, it should also be improved (at least use dbms_utility.format_error_stack)
  2. $$PLSQL_UNIT
    is the name of the package (or of the standalone procedure or function)
  3. ‘doSomething’
    is the hardcoded name of the procedure
    It needs to be gathered dynamically at runtime inside the logging package.

The biggest issue is 3.) – to find the name of the module at runtime. The function who_called can solve that. It also gathers the package name on the way, so 2.) is also covered.

Side note: Is hardcoding the procedure name a problem?

Not really. However, if the name changes, one needs to change it in several lines. And in case one forgets to change the error handler, the compiler will not issue an error. The problem might then go unnoticed for some time and could be misleading when exceptions occur.

I have two versions of the who_called function. The first, standalone version is not recommended for use. Use the packaged version instead. However, the first version demonstrates the implemented logic in a clear way.

Both functions will find the full name of the module that called this code block. The default behaviour is that modules from the same package are ignored. This allows easy usage inside packages meant for logging, auditing or testing. Everything that is instrumentation code can usually benefit from this little helper.

Usecase 1 – demo purposes – standalone function

This logic assumes that we use the function to find out who called the procedure that is currently running. So not the name of the procedure itself, but the name of the caller.

Installation code

-- Cleanup
-- drop function who_called;

-- Usecase 1: standalone function - simple demo version

-- check the call stack to find the calling module.
create or replace function who_called return varchar2
  is
    v_caller utl_call_stack.unit_qualified_name;
    v_name      varchar2(128);
begin
    -- print the call stack just for demo purposes
    for i in 1..utl_call_stack.dynamic_depth  loop
      sys.dbms_output.put_line(
         rpad('+',utl_call_stack.dynamic_depth-i,'+')||
         utl_call_stack.concatenate_subprogram( utl_call_stack.subprogram(i) )
         );
    end loop;

    
    -- step backwards through the call stack 
    --      1 = current module = who_called function
    --      2 = the module that calls who_called 
    --      3 = the caller!
    v_caller := utl_call_stack.subprogram(3);

    v_name := utl_call_stack.concatenate_subprogram(v_caller);

    return v_name;

end who_called;
/

example 3 usage

set serveroutput on

-- Testcase 1.1 anonymous block
declare
  procedure DoSomething as
  begin
    dbms_output.put_line('I want to doSomething!');
    dbms_output.put_line('Who called? '|| who_called);
  end;

  procedure DoMore as
  begin
    doSomething;
    dbms_output.put_line('I want to doMore!');
    dbms_output.put_line('Who called? '|| who_called);
  end;
begin
  doMore;
end;
/

output

I want to doSomething!
+++WHO_CALLED
++__anonymous_block.DOSOMETHING
+__anonymous_block.DOMORE
__anonymous_block
Who called? __anonymous_block.DOMORE
I want to doMore!
++WHO_CALLED
+__anonymous_block.DOMORE
__anonymous_block
Who called? __anonymous_block


PL/SQL procedure successfully completed.

Each time the function is called, it prints the full call stack to the screen. This is just for demonstration purposes.
The first time the function who_called is executed is from inside the submodule doSomething.
The call stack at this point looks like this

Position in Stack  Module
1                  who_called
2                  doSomething
3                  doMore
4                  __anonymous_block

Line 5 in the anonymous block is the line with the who_called function (call stack 1). It is part of the doSomething procedure (call stack 2). On position 3 of the call stack we always find the caller – here the procedure doMore that executed doSomething.

The second time the function is called is in the doMore procedure. And the call stack looks like this:

Position in Stack  Module
1                  who_called
2                  doMore
3                  __anonymous_block

In this case the caller is the anonymous block.

The example clearly shows that we can fetch the name of any module in the call stack (including anonymous blocks). We just have to fetch the name from the proper level in the call stack.
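If you need to look at other levels, the depth can simply be made a parameter. A minimal sketch (a hypothetical variant, not part of the demo):

create or replace function who_called (p_level in pls_integer default 3) return varchar2
is
begin
  -- 1 = this function, 2 = the module that calls it, 3 = that module's caller
  return utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(p_level));
end who_called;
/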

Usecase 2 – error and trace logging – packaged version

This works a bit differently when used inside a larger package that is used to instrument the code base. Typically we add error logging logic and debug (trace) logging calls. In that scenario, we want to find out the name of the module where the trace or error logging call is made.

The logic assumes that we have a package used for this instrumentation purpose only. This also means that all calls inside this package can be ignored. So essentially we want to find out the full name of the procedure (or function) where the log error or log trace call is done. The full name can be something like package.module.submodule.subsubmodule.

Installation code

When testing this, make sure you do not already have/own a package pk_log. Installing the demo would overwrite it.

If you like it, copy the code for the who_called function into your own instrumentation package.

-- Cleanup
-- drop function who_called;

-- who_called packaged version 
-- check the call stack to find the calling module.
-- calls from this package are not considered
create or replace package pk_log as
/* this is a very simplified example for error logging 
   only demonstration purposes of the WHO_CALLED function 
   your error logging logic must be better!
*/
  procedure error(message in varchar2);
  procedure trace(message in varchar2); 
end pk_log;
/

create or replace package body pk_log as
    function who_called return varchar2
      is
        -- author: Sven Weller, 2021, syntegris information solutions GmbH
        v_pkg    varchar2(128) := $$plsql_unit; -- name of the package that holds the who_called function
        v_caller utl_call_stack.UNIT_QUALIFIED_NAME;
        v_name varchar2(128);
    begin
        -- step backwards through the call stack 
        --      1 = current module = who_called function
        --      2 = the module that calls who_called 
        --      3 = the caller!
        -- we could start later (level 3 or even 4), if we knew exactly how many other submodules are always in the call stack 
        -- This might be risky if function inlining is possible, but that can be prevented by a pragma. Example in proc error
        for i in 2..utl_call_stack.dynamic_depth  loop

          v_caller := utl_call_stack.subprogram(i);

          v_name := utl_call_stack.concatenate_subprogram(v_caller);

          -- at lexical depth 1 we find the name of the package
          if v_caller(1) != v_pkg then
            -- In some scenarios, we might want to go one step further up the call stack. Typically not for error logging purposes.
            -- In such cases we could simply fetch one more name from the call stack
            -- v_caller := utl_call_stack.subprogram(i+1);
            
            -- build the full name, but only if we found the caller
            v_name := utl_call_stack.concatenate_subprogram(v_caller);
            -- no need to go further through the call stack
            exit;
          end if;
        end loop;

        return v_name;

    end who_called;

    procedure writeMessage(message in varchar2, message_type in varchar2) as
    begin
      case message_type 
      when 'E' then
        sys.dbms_output.put_line('ERROR at ' || who_called);
      when 'T' then
        sys.dbms_output.put_line('Info from ' || who_called);
      else  
        sys.dbms_output.put_line('ERROR! Unknown Message Type ' || message_type || ' in '||who_called);
      end case;        
      sys.dbms_output.put_line(message);

      if message_type = 'E' then 
        -- Add full error stack
        sys.dbms_output.put_line(dbms_utility.format_error_stack);
        -- Add line where the error happened
        -- Only the last line from the backtrace is relevant, all other lines are already in the error stack
        -- compare output to sys.dbms_output.put_line(dbms_utility.format_error_backtrace);
        sys.dbms_output.put_line(' in '||nvl(UTL_CALL_STACK.backtrace_unit(utl_call_stack.backtrace_depth),'anonymous block')
                               ||' at line '||sys.utl_call_stack.backtrace_line(utl_call_stack.backtrace_depth)
                               );
      end if;
    end writeMessage; 

    procedure error(message in varchar2) as
      pragma autonomous_transaction;
    begin
      pragma inline(writeMessage,'NO'); -- we don't want writeMessage inlined in case of plsql_optimize_level = 3.
      writeMessage(message,'E' );
    end error;
    procedure trace(message in varchar2) as
      pragma autonomous_transaction;
    begin
      writeMessage(message,'T' );
    end trace;   
end pk_log;
/

example 4 call package, log trace and log error

set serveroutput on

-- Testcase 2.1 anonymous block
declare
  procedure DoSomething as
  begin
    pk_log.trace('I want to doSomething!');
  end;

  procedure DoMore as
  begin
    doSomething;
    raise no_data_found;
  end;
begin
  doMore;
exception 
  when others then
    pk_log.error('I wanted to do more, but this happened!');
    raise;
end;
/

output

This is the dbms_output (the simplified trace and error logs).

Info from __anonymous_block.DOSOMETHING
I want to doSomething!
ERROR at __anonymous_block
I wanted to do more, but this happened!
ORA-01403: no data found
ORA-06512: at line 10

 in anonymous block at line 13

The output is followed by the exception from the final RAISE; of the block (as shown in SQL Developer).

Error starting at line : 4 in command -
declare
  procedure DoSomething as
  begin
    pk_log.trace('I want to doSomething!');
  end;

  procedure DoMore as
  begin
    doSomething;
    raise no_data_found;
  end;
begin
  doMore;
exception 
  when others then
    pk_log.error('I wanted to do more, but this happened!');
    raise;
end;
Error report -
ORA-01403: no data found
ORA-06512: at line 17
ORA-06512: at line 10
ORA-06512: at line 13
01403. 00000 -  "no data found"
*Cause:    No data was found from the objects.
*Action:   There was no data from the objects which may be due to end of fetch.

In SQL*Plus we get one more line, because of the additional raise in line 17.

There is something special in the code: the last line from the error backtrace is captured and returned – "in anonymous block at line 13".

This last line is missing in the error stack, which means it is usually not returned when you only store dbms_utility.format_error_stack.

To add this last line utl_call_stack.backtrace_line(utl_call_stack.backtrace_depth) is executed. All other lines from the error backtrace are already in the error stack (depends on DB version) – so those can be ignored.

Conclusion

For my purposes it works fine and the resulting code is very clean.

I currently use it in an audit package based upon Connor McDonald's Audit Generator, which was massively reworked for some custom needs. Maybe I will publish it one day. But that is another story.

who_called advantages

  • Way less to type – fewer mistakes
  • much easier to see what the error handler does
  • concentrate coding effort on the important bits – mostly the additional information that should be added to the error

who_called disadvantages

  • A tiny bit slower than hardcoding the package and submodule name
    => For exception handling this is not relevant.
    => In huge debug or trace scenarios this time could add up if extreme performance is needed. Conditional compilation can be applied to remove instrumentation calls if you compile for extreme plsql performance (see the sketch after this list).
  • since part of the logic is hidden, other developers might still want to add the module name to the message
  • module name is fetched at run time, not at compile time
    => this can be an issue, especially if optimization level 3 is used. Function inlining (check the compiler warnings) is a method the plsql optimizer can choose to speed up logic. However it also means that the function will not exist anymore at runtime. And utl_call_stack will not be able to report the name of the module correctly. Note that there is a compiler warning when inlining happened (which usually is a good thing).

Feel free to adapt this code snippet to your special needs. I’m convinced this will help you to write simpler and more efficient error handlers.

Small demo about sequences and identity columns

A recent twitter discussion led me to (re)test the syntax a little more around auto-incrementing ID columns that work without triggers. I was especially curious about the LIMIT VALUE option, which I didn’t know before.

identity options

This option is documented (as opposed to RESTART, which still seems not to have made it into the docs).

Demo 1 – Attach a sequence to a table via the DEFAULT option

The comments explain what I’m trying to do/show.

-- Create some sequences to be used later
create sequence mySeq1;
Sequence MYSEQ1 created.

create sequence mySeq2;
Sequence MYSEQ2 created.

-- Table with PK not tied to a sequence
create table t1 (id  number not null primary key
               , txt varchar2(30)); 
Table T1 created.

-- fill PK using a Sequence
insert into t1 values (mySeq1.nextval, 'Test1');
1 row inserted.

-- set the sequence to a new start value 55
alter sequence mySeq1 restart start with 55;
Sequence MYSEQ1 altered.

-- insert more data
insert into t1 values (mySeq1.nextval, 'Test2');
1 row inserted.

-- check data
select * from t1;
ID TXT
1 Test1
55 Test2

Unfortunately we can not easily change our PK into an identity column.

alter table t1 modify id generated as identity;

ORA-30673: column to be modified is not an identity column

alter table t1 modify id generated by default on null as identity;

ORA-30673: column to be modified is not an identity column

Other syntax variants including using LIMIT VALUE also do not work.

However it is easily possible to use a default value for the column and populate it from an existing (user managed) sequence.

-- Use a default value column and a different sequence
alter table t1 modify id default on null mySeq2.nextval;
Table T1 altered.

-- Trying to insert a value gives a UK error, 
-- because we used a fresh sequence which gave us value 1,
-- which exists already.
insert into t1 (txt) values ('Test3');
ORA-00001: unique constraint (MYUSER.SYS_C00145531) violated

-- set the sequence to a different value using RESTART
alter sequence mySeq2 restart start with 70;
Sequence MYSEQ2 altered.

insert into t1 (txt) values ('Test3');
1 row inserted.

insert into t1 values (null, 'Test4');
1 row inserted.

insert into t1 values (default, 'Test5');
1 row inserted.

-- Check data
select * from t1;
ID TXT
1 Test1
55 Test2
70 Test3
71 Test4
72 Test5

So Demo 1 shows some commands around normal ID columns that are populated via a default value setting.

Demo 2 – Use identity column from the start

create table t2 (id  number generated always as identity primary key
               , txt varchar2(30)); 
Table T2 created.

-- Try to insert some rows
insert into t2 values (null, 'Test1');
ORA-32795: cannot insert into a generated always identity column

insert into t2 values (default, 'Test2');
1 row inserted.

insert into t2 (txt) values ('Test3');
1 row inserted.

-- Try to insert a record via plsql
declare 
  r t2%rowtype;
begin 
  r.txt := 'Test4';
  insert into t2 values r;
end;
/

ORA-32795: cannot insert into a generated always identity column

-- hide the ID column then try row insert by record again
alter table t2 modify id invisible;
Table T2 altered.

declare 
  r t2%rowtype;
begin 
  r.txt := 'Test5';
  insert into t2 values r;
end;
/
PL/SQL procedure successfully completed.

-- Check data
select * from t2;
TXT
Test2
Test3
Test5

The ID is missing? Well it is there.

select id, txt from t2;
ID TXT
1 Test2
2 Test3
3 Test5

Hiding the ID column is one possible way to make certain types of inserts work again. Not my preferred way though.

-- make column visible again
alter table t2 modify id visible;
Table T2 altered.

-- logical column order now is changed. This can impact DML that depends on column order!
select * from t2;
TXT ID
Test2 1
Test3 2
Test5 3
-- restore original column order by toggling other columns invisible
alter table t2 modify txt invisible;
Table T2 altered.

alter table t2 modify txt visible;
Table T2 altered.

-- modify to allow other data values (instead of generated ALWAYS)
alter table t2 modify id generated by default on null as identity ;
Table T2 altered.

-- Try inserts again that didn't work previously
insert into t2 values (null, 'Test1');
1 row inserted.

declare 
  r t2%rowtype;
begin 
  r.txt := 'Test4';
  insert into t2 values r;
end;
/
PL/SQL procedure successfully completed.

-- Check data
select * from t2;
ID TXT
1 Test2
2 Test3
3 Test5
4 Test1
5 Test4
-- add custom IDs
insert into t2 values (-1, 'Test6');
1 row inserted.

insert into t2 values (100, 'Test7');
1 row inserted.

-- Check data
select * from t2;
ID TXT
1 Test2
2 Test3
3 Test5
4 Test1
5 Test4
-1 Test6
100 Test7

So now we can insert other ID values into that column, even though we have an identity column. Eventually this will lead to a UK error when the value created by the Oracle managed sequence reaches 100.

This can be solved with the LIMIT VALUE clause.

-- Check current high water mark (last value) of sequence identity column
select identity_column, data_default from user_tab_columns where table_name = 'T2' and column_name = 'ID';

IDENTITY_COLUMN  DATA_DEFAULT
---------------  --------------------------------
YES              "MYUSER"."ISEQ$$_258212".nextval

-- using the sequence name from the previous select
select last_number, cache_size 
from user_sequences where sequence_name = 'ISEQ$$_258212';

LAST_NUMBER  CACHE_SIZE
-----------  ----------
         24          20

-- reset Identity column to the highest value that is currently in the table 
alter table t2 modify ID  generated by default on null as identity start with limit value;
Table T2 altered. 

-- Check high water mark again 
select last_number, cache_size 
from user_sequences where sequence_name = 'ISEQ$$_258212';

LAST_NUMBER  CACHE_SIZE
-----------  ----------
        101          20

-- insert 
insert into t2 values (default, 'Test8');
1 row inserted.

-- Check data
select * from t2;
ID TXT
1 Test2
2 Test3
3 Test5
4 Test1
5 Test4
-1 Test6
100 Test7
101 Test8

Success!

So Demo 2 showed how to work with an identity column, including the possibility to switch it from generated always as identity to generated by default on null. This opens up some ways to manipulate the data inside the managed ID column.

Remember we can not influence (alter) the Oracle managed sequence that drives the identity column directly. But we can change its properties via ALTER TABLE ... MODIFY.
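For example, here is a sketch (not from the original demo) that changes the cache size of the underlying Oracle managed sequence through the table:

alter table t2 modify id generated by default on null as identity cache 100;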

cleanup code

-- cleanup
drop sequence mySeq1;
drop sequence mySeq2;
drop table t1 purge;
drop table t2 purge;

A quirk with object dependencies under EBR

TL;DR;

USER_DEPENDENCIES does not show dependencies between objects in different editions. This means that even when no dependencies are shown, we can still get errors when trying to drop objects. This is especially relevant when trying to drop TYPEs.

The basics

Oracle tracks dependencies between objects, for example packages but also types. It provides us with a view USER_DEPENDENCIES (and the matching ALL_|DBA_|CDB_DEPENDENCIES views) where we can see how those objects are related (calling each other).

Here are two small examples

Example 1) Table + View

create table myTab (col1 number);
create view myView as select * from myTab;
Table MYTAB created.
View MYVIEW created.

Example 2) Object Type + Collection Type

create type myOT as object (col1 number);
/
create type myCT as table of myOT;
/
Type MYOT compiled

Type MYCT compiled

Further code demos will all assume that those 4 objects are in place unless mentioned otherwise.

The data dictionary

select * from user_dependencies;
NAME    TYPE  REFERENCED_OWNER  REFERENCED_NAME  REFERENCED_TYPE  REFERENCED_LINK_NAME  SCHEMAID  DEPENDENCY_TYPE
MYOT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYCT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYVIEW  VIEW  SVEN              MYTAB            TABLE                                  122       HARD
MYCT    TYPE  SVEN              MYOT             TYPE                                   122       HARD

We can ignore the references to the sys.standard package. I think they come from references to datatypes that are defined there. Those dependencies do not affect the following quirks.

The data dictionary shows that the view MYVIEW depends on the table MYTAB (example 1) and the collection type MYCT depends on the object type MYOT (example 2).

The dependency type HARD tells us, that we are not allowed to drop the referenced objects without breaking|invalidating the dependent object.

drop table mytab;
Table MYTAB dropped.
select status from user_objects where object_name = 'MYVIEW';
INVALID

For types we get an error message when trying to drop the referenced type.

drop type myot;
ORA-02303: cannot drop or replace a type with type or table dependents

Side note: Another dependency type would be REF, which we encounter for example when using materialized views (MV). If a table is dropped that is used by a select in the materialized view, then we can still fetch data from the MV.
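A quick sketch of that REF behaviour (illustrative only; it assumes the table myTab from example 1 is recreated first):

create materialized view myMV as select col1 from myTab;
drop table myTab;
select * from myMV; -- the materialized rows can still be queried, only a refresh would now fail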

To successfully drop both types we first need to drop the collection type MYCT before we can drop the referenced object type MYOT.

drop type myct;
drop type myot;
Type MYCT dropped.

Type MYOT dropped.

Ok, this worked as expected. Dropping the objects in the opposite order as we had created them.

How does it work under edition based redefinition (EBR)?

Views and types are both editionable objects. That means we can have several “versions” of them in the same schema under different editions.

Assuming my typical 3 edition hierarchy

ORA$BASE ⇒ BETA ⇒ ALPHA 
(Parent ⇒ child1 ⇒ child2)  

I will only use ORA$BASE and ALPHA for this experiment. The 4 objects all have been created in ORA$BASE. Which means, they can be accessed also in the child editions. So let’s switch to ALPHA.

alter session set edition="ALPHA";
select sys_context('userenv','current_edition_name') from dual;
ALPHA

The data dictionary entries are the same as in the base edition.

select * from user_dependencies where name like 'MY%';
NAME    TYPE  REFERENCED_OWNER  REFERENCED_NAME  REFERENCED_TYPE  REFERENCED_LINK_NAME  SCHEMAID  DEPENDENCY_TYPE
MYOT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYCT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYVIEW  VIEW  SVEN              MYTAB            TABLE                                  122       HARD
MYCT    TYPE  SVEN              MYOT             TYPE                                   122       HARD

Now we compile one of the types, but NOT the other.

alter type myct compile;
Type MYCT altered.

This actualized the type MYCT in the child edition. Actualization is a concept of EBR. When a parent object is (re)compiled in a child edition, the code afterwards exists twice: in the parent and in the child edition.

We go back to the parent edition and repeat our test now.

alter session set edition="ORA$BASE";
select sys_context('userenv','current_edition_name') from dual;
ORA$BASE

We try again to drop the types in the correct drop order.

drop type myct;
drop type myot;
Type MYCT dropped.

Error starting at line : 2 in command -
drop type myot
Error report -
ORA-02303: cannot drop or replace a type with type or table dependents

This time we can not drop the object type. Unfortunately the data dictionary does not tell us which object causes this dependency problem.

select * from user_dependencies where name like 'MY%';
NAME    TYPE  REFERENCED_OWNER  REFERENCED_NAME  REFERENCED_TYPE  REFERENCED_LINK_NAME  SCHEMAID  DEPENDENCY_TYPE
MYOT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYVIEW  VIEW  SVEN              MYTAB            TABLE                                  122       HARD

MYOT is not in the list of referenced names anymore. So we should be able to drop it.

The problem is that the view USER_DEPENDENCIES only shows objects that are in the current edition or that are not editionable (like sys.standard). But we have created a dependency from the object MYCT, which still exists in edition ALPHA, to the object MYOT in the parent edition ORA$BASE.

Which objects exist in which edition can be checked using the user_objects_AE view. The AE stands for all editions.

select edition_name, object_name, object_type, editionable
from user_objects_ae
where object_name like 'MY%';
EDITION_NAME  OBJECT_NAME  OBJECT_TYPE   EDITIONABLE
ORA$BASE      MYCT         NON-EXISTENT
ALPHA         MYCT         TYPE          Y
ORA$BASE      MYOT         TYPE          Y
              MYTAB        TABLE
ORA$BASE      MYVIEW       VIEW          Y

Because the type MYOT was never actualized in the edition ALPHA, we only have this object in the parent ORA$BASE. The dependency crosses from one edition to the other. In ALPHA we are still able to see this dependency in the dictionary.

So how can we drop the object in ORA$BASE?

There are three possible ways. Which way to choose depends on the reason why you dropped the object in the first place.

a) Use the force!

drop type myot force;

I do not recommend this, but there are situations where it is the easiest way. It also invalidates any usage of the dependent collection type in the child edition. It is very easy to overlook such an issue, therefore only use FORCE when you understand what the reason for the error is and why it is ok to use FORCE.

b) drop the collection type in the child edition first.

alter session set edition="ALPHA";
drop type myct;

alter session set edition="ORA$BASE";
drop type myot;

c) actualize the object type in the child edition
This is easiest when done before dropping the type.

alter session set edition="ALPHA";
alter type myot compile;

Compiling the type creates a copy of the code in the data dictionary. From that point on any drop will leave a proper reference in the data dictionary of each edition. Cross edition references will not happen when all objects are always actualized. That is also why I start creating new objects in ALPHA and then move them to BETA and later, for final testing, to ORA$BASE.

Further complications

This looks a bit complicated, but after all we knew what we were doing. So it shouldn’t pose too big of a problem. Right?

Let me introduce you to 12.2 oracle maintained sys_plsql types.

Since 12.2 plsql pipelined table functions do not need a dedicated SQL type anymore. If we create such a function, typically inside some package, we can create the needed structures (types) purely inside the package spec.

I also dropped all the previously created objects to avoid confusion.

Example

create or replace package myPkg
  authid definer
is 
  type ot_typ is record (col1 number, col2 varchar2(30));
  type ct_typ is table of ot_typ;
  
  function myFnc return ct_typ pipelined;
end;
/

create or replace package body myPkg 
is 
  
  function myFnc return ct_typ pipelined 
  is
    vRow ot_typ := ot_typ();
  begin
    for i in 1..5 loop
      vRow.col1 := i;
      vRow.col2 := chr(ascii('A')-1+i);
      pipe row(vRow);
    end loop;
    return; -- a pipelined function ends with a plain return
  end myFnc;
  
end;
/

Now run the function. Since 12.2 there is no need to add the TABLE() operator anymore. However, because of name resolution issues, the function needs a trailing pair of parentheses.

-- table "operator" syntax
select * from table(myPkg.myFnc); 

-- syntax without table keyword
select * from myPkg.myFnc(); 
COL1 COL2
1    A
2    B
3    C
4    D
5    E

The function successfully returns five rows. No SQL type was actively created. This was needed in the past but is not needed anymore.

Also if we check the dictionary, we do not see any new types. But they secretly exist. If we check user_dependencies (or dba_objects) we can see them.

select * from user_types;
no data found
select * from user_dependencies;
NAME                        TYPE          REFERENCED_OWNER  REFERENCED_NAME         REFERENCED_TYPE  REFERENCED_LINK_NAME  SCHEMAID  DEPENDENCY_TYPE
MYPKG                       PACKAGE       SYS               STANDARD                PACKAGE                                122       HARD
MYPKG                       PACKAGE BODY  SYS               STANDARD                PACKAGE                                122       HARD
SYS_PLSQL_9C0C5336_24_1     TYPE          SYS               STANDARD                PACKAGE                                122       HARD
SYS_PLSQL_9C0C5336_DUMMY_1  TYPE          SYS               STANDARD                PACKAGE                                122       HARD
SYS_PLSQL_9C0C5336_9_1      TYPE          SYS               STANDARD                PACKAGE                                122       HARD
MYPKG                       PACKAGE BODY  SVEN              MYPKG                   PACKAGE                                122       HARD
SYS_PLSQL_9C0C5336_24_1     TYPE          SVEN              SYS_PLSQL_9C0C5336_9_1  TYPE                                   122       HARD

The view shows a set of types that all have almost identical names. Most importantly, type SYS_PLSQL_9C0C5336_24_1 depends on type SYS_PLSQL_9C0C5336_9_1.

This was the output in 18c (18.7). The result changed in a 19c database. More about that later.

These two types match the type definitions in the package spec.

type ot_typ is record (col1 number, col2 varchar2(30));
type ct_typ is table of ot_typ;

Depending on how the package is then changed in different editions, it can happen that a cross edition dependency is created for those sys_plsql types. This can then prevent further compilation of the package in one of the editions.

We encountered such a scenario during development in an 18.7 database. However the types used were considerably more complex. At some point we were not able to compile a package body because of a missing spec. However the spec was there and could apparently be compiled (but secretly would not properly compile).

I was able to track down the problem to a cross edition type dependency for the generated sys_plsql types. Dropping those types (in the relevant editions) allowed us to recompile the package spec and the package bodies.

If you encounter such a case, dropping all the sys_plsql types in each edition and then recompiling the package solves the issue.
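A hedged sketch of how to locate those generated types across all editions (the concrete type name is the one from my example; FORCE may be needed because the generated types depend on each other):

select edition_name, object_name, object_type
from user_objects_ae
where object_name like 'SYS_PLSQL%';

-- then, after switching to each affected edition:
-- drop type SYS_PLSQL_9C0C5336_24_1 force;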

Behaviour in 19c (19.3)

The general cross edition type dependency issue also exists in 19c.

The sys_plsql types are not shown in the dictionary views anymore – I am not sure whether the same mechanism is still in use. I could not reproduce the package problem that we encountered in an 18c (18.7) database caused by the sys_plsql types using a simple test case – neither in 18c nor in 19c. Which does not mean that the problem is gone, only that it is tricky to recreate.

The 18c and 19c patch release notes do not indicate any changes for the sys_plsql types – so it is not clear whether a bug was fixed. Maybe some internal behaviour was changed, but no note about this was added to the documentation.

However in the EBR guide the following is mentioned under Evolutionary capability improvements

The dependency model is now fine-grained: e.g. adding a new column to a table, or a new subprogram to a package spec, no longer invalidates the dependants.

Conclusion

Make sure to always compile all code in your specific edition. This actualizes all the objects and the editions are then independent from each other.
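A simple way to do that is to recompile the whole schema after switching to the edition – a minimal sketch using the documented dbms_utility.compile_schema:

alter session set edition="ALPHA";
begin
  dbms_utility.compile_schema(schema => user, compile_all => true);
end;
/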

Start by adding new code objects in the child edition first, before you move it to the parent.

Under EBR consider using specific SQL types for table functions, even if they are not needed anymore. This will give you a tiny bit more control over the code and a better chance to react to problematic dependency issues.

SQL Developer export wizard

workaround: SQL Error: ORA-54013: INSERT operation disallowed on virtual columns

Problem

Unfortunately when we export data using the SQL Developer export functionality, it also exports data for virtual columns. While this is certainly useful for spreadsheet exports, it is rarely sensible for insert statements.

Any insert into a virtual column fails with error ORA-54013.

SQL Error: ORA-54013: INSERT operation disallowed on virtual columns

54013. 0000 – “INSERT operation disallowed on virtual columns”
*Cause: Attempted to insert values into a virtual column
*Action: Re-issue the statment without providing values for a virtual column

One way to avoid this is to set the virtual column to invisible before the export is started. SQL Developer will not export the data from invisible columns. But changing the data model in the source just to get a little more convenience is usually not the way to go. Remember that setting columns to invisible and visible again will put these columns at the end of the logical column list, which could be a problem if code depends on the column order (it shouldn’t). Also this requires exporting the data again.
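For completeness, the invisible column toggle would look like this – a sketch using the person table defined in the next section:

alter table person modify fullname invisible;
-- run the SQL Developer export now; fullname will not be part of the generated inserts
alter table person modify fullname visible;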

If you already have an export file with a lot of inserts, the following dirty little trick might help you.

Temp column trick

We create a new real column, insert the data into this column and throw it away afterwards. The virtual column is temporarily moved out of the way while doing so.

Let's assume we have a table person and a virtual column that concatenates the columns firstname and lastname into fullname.

create table person (id number, 
                     firstname varchar2(100), 
                     lastname varchar2(100), 
                     fullname as lastname||',  '||firstname);

We rename the target column and temporarily add a new column at the end of the table. The insert statements produced by SQL Developer have the column names included in the insert, so they do not depend on column order.

alter table person rename fullname to "_FULLNAME";
alter table person add fullname VARCHAR2(1000);

Then run the insert scripts that were created by SQL Developer.

SET DEFINE OFF;
@PERSON_DATA_TABLE.sql

After the data was inserted, restore the old columns.

alter table person drop column fullname;
alter table person rename "_FULLNAME" to fullname;

Automation

And here are two selects that generate all those statements for a complete schema. The second select needs to run before the generated code from the first select is executed. Otherwise the columns in the dictionary will already have been changed.

Preparation Script SQL

select 'alter table "'||table_name||'" rename "'||column_name||'" to "_'||column_name||'";'||chr(10)||
       'alter table "'||table_name||'" add "'||column_name||'" '||case when data_type in ('VARCHAR', 'VARCHAR2','CHAR','NCHAR','NVARCHAR2') then 'VARCHAR2(4000 byte)' else data_type end||';'
       as pre_insert_ddl
from user_tab_cols
where virtual_column='YES' 
--and user_generated='YES'
and hidden_column = 'NO';  

Restore Script SQL

select 'alter table "'||table_name||'" drop column "'||column_name||'";'||chr(10)||
'alter table "'||table_name||'" rename "_'||column_name||'" to "'||column_name||'";'
as post_insert_ddl
from user_tab_cols
where virtual_column='YES'
--and user_generated='YES'
and hidden_column = 'NO';

Restrictions and side effects

The scripts are not perfect. For example if you have a column name that is already at the maximum identifier length (e.g. 30 or 128 characters long), then adding _ in front of the column will produce an error. I think this situation is very rare and is best handled by manually correcting the scripts.

Varchar2 columns are expected to be 4000 bytes or less, which might not be correct when extended string size (max_string_size=extended) is used in newer DB versions.

Some special datatypes might need extra consideration. In general the datatype of the new column does not matter as long as it doesn’t produce an error during insert.

SQL Quickie: How to reset an identity column

Introduction

Since Oracle 12 we can declare columns as an identity column. Other databases call this feature “autoincrement” column. A term that can easily lead to some misunderstandings, but is already well established. Some of those databases allow resetting the current value of such a column. There is no (documented) way to do that with an Oracle identity column. A recent discussion in the ODC forums led me to think about whether it is possible to set an identity column to a custom value. And yes it is.

TL;DR;

  • Change the increment of the sequence that supports the identity column by modifying the column.
  • Increase the value by an insert+rollback.
  • Reset the sequence by modifying the column again.

or use the undocumented RESTART option (12.2 onwards)

alter table demo_identity_reset
modify id generated always as identity restart start with 60;

Showcase

demo setup

First we create a table with an identity column. We use default settings for the sequence, which for example means increment +1 and cache 20.

/* prepare demo table */
create table demo_identity_reset
(id number generated always as identity primary key
,name varchar2(500)
);

Then insert some demo values.

/* add some data */
insert into demo_identity_reset (name) values ('test1');
insert into demo_identity_reset (name) values ('test2');
commit;
select * from demo_identity_reset;
ID NAME
1 test1
2 test2

We now have a table with an identity column where some values are used and more values are currently cached in the SGA.

Set a custom value (12.1)

Gather info

First we need to find the name of the sequence. Either by looking at all sequences, but more correct would be to look at the column definitions. The name of the sequence can be seen in the default value of the identity column.

/* check data dictionary and find the sequence name*/
select * from user_sequences;

or

/* check data dictionary and find the sequence name*/
select data_default
from user_tab_columns
where table_name = 'DEMO_IDENTITY_RESET'
and identity_column = 'YES';
DATA_DEFAULT
"MYSCHEMA"."ISEQ$$_172206".nextval

In my case the sequence name is ISEQ$$_172206. The generated name will always start with ISEQ$$_.

Other information we need to find is the current value, the cache size and the increment. The only decent way to see the current value is to set the sequence to nocache and after that check the data dictionary. We need to remember the original cache size setting to be able to restore it later to that value.

Start the change

From this step onward no other session should insert at the same time. It might be a good idea to lock the table in exclusive mode during those steps.

The cache also needs to be set to nocache to prevent caching any values with the wrong increment during the next steps.

/* reset the cache so that we can see the next value */
select cache_size 
from user_sequences 
where sequence_name = 'ISEQ$$_172206';

alter table demo_identity_reset 
modify id generated always as identity nocache;

Cache size was 20. The sequence now is not caching anymore. This is shown in the user_sequences dictionary view.

Now we can read the next value from the data dictionary and use that information to set an increment that jumps to our target value.

Here I assume a target value of 60.

/* find the next value */
select last_number from user_sequences 
where sequence_name = 'ISEQ$$_172206';

/* calculate the increment: 57 = 60(target value) - 3(last number) */

/* change the increment so that it jumps over all the other values. */
alter table demo_identity_reset 
modify id generated always as identity increment by 57 nocache; 

The last number was 3. Last number will always hold the next value after the cached values.

-- this does a sequence.nextval which we can not call otherwise
insert into demo_identity_reset (name) 
values ('dummy. rollback immediately.'); 
rollback;

It doesn’t matter much which value was inserted (59). In case you want to avoid any gaps, this insert needs to be one of the real inserts that needs to be done. And the increment needs to be one value more (58 instead of 57), so that instead of a rollback you can do a commit.

/* reset the changes */
alter table demo_identity_reset 
modify id generated always as identity increment by 1 cache 20;

Done. The next insert will now start with our target value 60.

Set a custom value (12.2)

I tested this in 19c on LiveSQL but it should work in all versions that belong to the 12.2 database family (12.2.0.1, 18c, 19c).

Instead of the complicated series of steps in 12.1 we can do it all in a single command. Unfortunately this command is undocumented, so use it at your own risk.

alter table demo_identity_reset 
modify id generated always as identity restart start with 60;

The RESTART option was mentioned in one of the new feature guides and Roger Troller made me aware of it in one of his blog posts. However currently (19c) it is still missing in the SQL reference documentation.

Test the change

The following steps are not needed anymore. But they are proof that the concept works.

/* check the result in the dictionary*/
select last_number, cache_size 
from user_sequences 
where sequence_name = 'ISEQ$$_172206';
LAST_NUMBER    CACHE_SIZE
 60    20
/* test the result using the table*/
insert into demo_identity_reset (name) values ('test3');
commit;
select * from demo_identity_reset;
ID  NAME
 1  test1
 2  test2
60  test3

Yes it works!

Cleanup code

Drop the table and the sequence. A simple drop table is not enough, because the sequence will continue to exist as long as the table is still in the recycle bin.

/* undo testcase */
drop table demo_identity_reset purge;

Further information

In general I would not recommend doing that in a production environment. But there are situations where you want to do it, for example after copying a production schema to a test database and cleaning out many values, you might like to reset the sequences too.

Such situations are rare, probably once-in-a-while activities. That is also why I didn’t include the lock table command. The lock would only hold until the next alter table command anyway, which makes it not safe to use.

It is not possible to use an alter sequence command for sequences that are connected with an identity column. It results in an error message.

alter sequence "ISEQ$$_172206" restart start with 60;

ORA-32793: cannot alter a system-generated sequence

Here is an example of the definition of an autoincrement column in another database (MySQL).

https://dev.mysql.com/doc/refman/5.7/en/example-auto-increment.html

The AUTO_INCREMENT attribute can be used to generate a unique identity for new rows

No value was specified for the AUTO_INCREMENT column, so MySQL assigned sequence numbers automatically. You can also explicitly assign 0 to the column to generate sequence numbers, unless the NO_AUTO_VALUE_ON_ZERO SQL mode is enabled.

If the column is declared NOT NULL, it is also possible to assign NULL to the column to generate sequence numbers.

When you insert any other value into an AUTO_INCREMENT column, the column is set to that value and the sequence is reset so that the next automatically generated value follows sequentially from the largest column value.

Updating an existing AUTO_INCREMENT column value in an InnoDB table does not reset the AUTO_INCREMENT sequence as it does for MyISAM and NDB tables.

MySQL 5.7 Reference Manual – 3.6.9 Using AUTO_INCREMENT

Some quick facts about sequence caches and gaps in IDs

  1. Oracle sequences generate key values without gaps (minor exception from the rule: scalable sequences).
  2. Surrogate keys can and will have gaps.
  3. Gaps exist because it is not worth the huge effort to avoid them.

The biggest misunderstanding is probably the difference between the value coming from a sequence and the primary key value inside a database column (ID column). We can use a sequence to set a surrogate key. Typically we do this via a database trigger, an identity column, or even as a value directly during the insert.

“The history of sequence caches is a history of misunderstandings.”

somebody on the internet, 1963

Essentially it is a two step process. The new ID value is created by sequence.nextval and then used during the insert. Oracle provided the logic to get a new sequence value in a fast, performant and multi-session friendly way. The developer is responsible to make sure this value is not wasted.

The sequence cache is a performance instrument. A lost cache value means we will see gaps in future ID values. There is only a very limited number of activities that make cached sequence values “disappear”. The majority of gaps originate from other activities, like failed inserts or deletes. Many developers seem to think that such a gap is caused by the sequence cache. I find it important to separate which reason ultimately led to a gap in an ID value.

A gap analysis can be the first step. The following select finds gaps and orders them by size. Put your own table and PK column into the “ids” subquery and find out how the gaps are distributed in that table.

-- gap analysis
with ids as (select region_id as id, lead(region_id) over (order by region_id) as nextid from regions)
   , gaps as (select id, nextid - id as gap_size from ids where nextid > id + 1)
select gap_size , count(*) gaps_found  
from gaps
group by gap_size
order by count(*) desc;

Example result

GAP_SIZE  GAPS_FOUND
      20           5
       3           2
       2           2
      19           1

Here the data shows a few small gaps. Either some insert didn’t make it or rows had been deleted. But there are also several gaps of size 19 and 20, which means the sequence cache was probably lost at least 6 times.

To make it clear, IDs are allowed to have gaps. However when an end user complains about constantly seeing gaps in some ID, then we should investigate what is going on. Frequent gaps should not be the expected normal behaviour – it is worth finding the root cause of such an issue.

So here is a (non-complete) list of activities that do or do not influence the caching of sequences and the existence of gaps in an ID column.

action                                                  result
multiple sessions                                       cache used
multiple sessions with one session slow to commit       IDs not in order of commit => not a gap eventually, but looks like a gap until final commit
alter system flush shared_pool;                         cache values lost => gap in ID
shutdown immediate;                                     cache values not lost
shutdown abort;                                         cache values lost => gap in ID
insert; commit;                                         no gap
insert; rollback;                                       gap in ID
failed insert                                           gap in ID
delete; commit;                                         gap in ID
switch sequence from cache to nocache                   cache values not lost
kill session                                            possible gap in ID (values not used|committed are lost)
different RAC sessions                                  cache values not lost (but separate caches on each RAC node)
SGA size too small                                      cache values lost => gap in ID (sequence cache ages out of shared pool per LRU list)
session sequences                                       not useful for unique ID values => duplicate ID
scalable sequences                                      cache values not lost => planned large gap in ID (because of prefix)
alter system cancel SQL ‘SID, serial’; (insert killed)  gap in ID
alter system disconnect session post_transaction;       no gap (unless sequence was fetched before the transaction started)
alter system disconnect session immediate;              gap in ID
sequence restart                                        possible duplicate IDs
sequence interval > 1                                   planned gap in ID

In my experience, a sequence aging out of the shared pool is the most commonly noticed cause of lost sequence values. A strong indication is when frequent gaps of 20 values exist.

Possible workarounds are

a) set the sequence to nocache if the sequence is used rarely
alter sequence myseq nocache;
This might cost performance.

b) pin the sequence in the shared pool
dbms_shared_pool.keep('MYSCHEMA.MYSEQ','Q')
Pinning the sequence is useful when you have times with lots of concurrent activities followed by larger idle periods.
This costs a little memory.

c) increase SGA size – sometimes it helps to increase the restricted shared pool area.
This costs memory (and money) – not available for other non-database processes.

Conclusions

  • Small gaps in surrogate keys (ID columns) are natural, mostly because of delete statements or failed inserts.
  • The sequence object typically does not produce gaps.
  • Very frequent gaps of size 20 (default cache size) are not natural and might need investigation.

EBR – how to run a background job in the correct edition

Introduction

Edition based redefinition is a no-extra-cost feature of the Oracle database. It allows having multiple versions of the same plsql based code (packages, views, triggers, object types, synonyms, …) in the database at the same time.

Code, that starts a scheduled background job, should execute this background job in the same current edition.

Here is how to do it using job classes.

TL;DR.

First create a job class that connects to a specific edition via a service name

begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_DEV$ALPHA',
    service => 'DEV_ALPHA'
  );
end;
/

Then start the background job using that job class.

v_jobclass_prefix := 'JCED_';
v_jobname := 'myJob';
dbms_scheduler.create_job(job_name            => v_jobname,
                          job_type            => 'STORED_PROCEDURE',
                          job_action          => 'myPkg.doSomething',
                          job_class           => v_jobclass_prefix ||sys_context('userenv','current_edition_name'),
                          number_of_arguments => 0,
                          start_date          => sysdate,
                          enabled             => true,
                          auto_drop           => true);

Problem description

This article assumes some basic understanding of the edition based redefinition (EBR) feature of the oracle database.

Scheduled jobs do run in the background in a separate new session. Any new session runs in the default edition of the database, unless it specifies the edition at the time the session is created.

It is possible to switch the edition on demand during a session, but this is not recommended. Chris Saxon misused this possibility for one of his SQL Magic tricks. Not everything that is possible should be used.

Chris Saxon – SQL Magic

Solution

A scheduled job can use a job class. A job class can be set to use a database service. A database service can be set to connect to a specific edition.

When using EBR it is a recommended practice to create a service for each edition and possibly another one for the default edition. Information on how to create a service for an edition can be found in one of my older blog entries: working with editions – part 2. Also Oren Nakdimon recently published a very nice article about it: using services for exposing new editions.

The following assumes that we have three editions ORA$BASE, TST$BETA and DEV$ALPHA. It also assumes that we have created matching database services ORA_BASE, TST_BETA and DEV_ALPHA and that those services are running.
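In case those services do not exist yet, here is a minimal sketch using the older, simple overload of dbms_service (single instance; RAC and multitenant environments would rather use srvctl or the parameter_array overload):

begin
  dbms_service.create_service(service_name => 'DEV_ALPHA',
                              network_name => 'DEV_ALPHA',
                              edition      => 'DEV$ALPHA');
  dbms_service.start_service(service_name => 'DEV_ALPHA');
end;
/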

create the job classes

For each edition we create a job class JCED_<edition_name> (JCED_ORA$BASE, JCED_TST$BETA and JCED_DEV$ALPHA). This can be done with the DBA role or with the MANAGE SCHEDULER privilege.

Job classes also allow prioritizing between jobs and connecting them to a resource consumer group.
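For example, a sketch of a job class tied to a resource consumer group instead of a service (the group name BATCH_GROUP is hypothetical; the parameters are from the documented dbms_scheduler API):

begin
  dbms_scheduler.create_job_class (
    job_class_name          => 'JC_LOW_PRIO',
    resource_consumer_group => 'BATCH_GROUP',
    logging_level           => dbms_scheduler.logging_runs
  );
end;
/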

I’m using the prefix JCED (Job Class EDitioned) to tag the classes. You can use any name you want, but it is advisable to have some common identification for the classes. The name also allows me to later find the correct job class, depending on the current edition of the session.

-- Run as DBA
begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_ORA$BASE',
    service => 'ORA_BASE'
  );
end;
/

begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_TST$BETA',
    service => 'TST_BETA'
  );
end;
/

begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_DEV$ALPHA',
    service => 'DEV_ALPHA'
  );
end;
/

Then we need to make those classes available to the user that will later create the scheduled jobs.

grant execute on sys.JCED_ORA$BASE to mySchema;
grant execute on sys.JCED_TST$BETA to mySchema;
grant execute on sys.JCED_DEV$ALPHA to mySchema;

It is also possible to allow a schema to use any class.

grant execute any class to mySchema;

Job classes do not belong to any schema. They are always created in SYS. Something to keep in mind, for example when exporting/importing schemas to another database.

programmatically create a job

The following function creates a background job using the current edition. The job calls a procedure with 3 arguments.

The function is part of some larger package myPackage. It is assumed that the doSomething procedure is also part of the same package.

/* constant declaration section in package body */
  g_jobclass_prefix     constant varchar2(10) := 'JCED_';


/* modules */

function createBackgroundJob(p_id in number, p_starttime in timestamp) return varchar2
is
  co_modul_name CONSTANT VARCHAR2(96) := $$PLSQL_UNIT || '.createBackgroundJob';
  v_job_nr binary_Integer;
  v_jobname varchar2(100);
begin

  v_jobname := 'myJob_'||to_char(p_id);
  dbms_scheduler.create_job(job_name            => v_jobname,
                            job_type            => 'STORED_PROCEDURE',
                            job_action          => $$plsql_unit || '.doSomething',
                            job_class           => g_jobclass_Prefix||sys_context('userenv','current_edition_name'),
                            number_of_arguments => 3,
                            start_date          => p_starttime,
                            enabled             => FALSE,
                            auto_drop           => true);
  dbms_scheduler.set_job_argument_value(job_name=>v_jobname, argument_position=>1, argument_value=>p_id);
  dbms_scheduler.set_job_argument_value(job_name=>v_jobname, argument_position=>2, argument_value=>'RUN QUICK');
  dbms_scheduler.set_job_argument_value(job_name=>v_jobname, argument_position=>3, argument_value=>myPackage.g_trace_level);

  return v_jobname;
end createBackgroundJob;

The name of the job class is derived from the name of the current edition using sys_context('userenv','current_edition_name').
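For illustration, in a session whose current edition is DEV$ALPHA, the lookup resolves like this:

select sys_context('userenv','current_edition_name') as current_edition
from dual;

CURRENT_EDITION
---------------
DEV$ALPHA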

start the background job

The createBackgroundJob function only prepares the background job. To run it, we need to call the create function and enable the job afterwards.

declare
  job varchar2(128);
begin  
  job := myPackage.createBackgroundJob(1, localtimestamp);
  dbms_scheduler.enable(job);
end;
/

Conclusion

Using job classes is an easy way to start a scheduled job in the same edition that the currently running session is in.

Further remarks

Justification

I used this logic to split a PL/SQL-heavy task into several worker tasks that could run in parallel. I wanted to make sure that the parallel execution was done using the same edition as the session that started it.

Restrictions

I did not test whether it is possible to run lightweight jobs in a specific edition. There seems to be no obvious restriction that prevents using a job class for lightweight jobs as well.

Since job classes belong to SYS, they sometimes need to be referenced with the schema name in front: sys.JCED_DEV$ALPHA.

It is not recommended to switch the edition from inside some code, because that code itself is running in a specific edition (the default one). This is very hard to control and to do properly.

Jobs are created with job names in upper case. In some cases we need to make sure to use the upper-cased job name to find/handle the correct job, as in the sketch below.
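A small sketch of such a dictionary lookup (the job name is taken from the example above):

select job_name, job_class, state
from user_scheduler_jobs
where job_name = upper('myJob_1');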

Error handling

Error handling needs special care when combining scheduled jobs and editions. For example, I make sure that proper instrumentation is in place and that the log entry also includes the edition in which the error happened. In some situations, for example if the service is dropped, the session is still created but falls back to the default edition. We want to be sure that we notice any issues arising from such a situation.
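A minimal sketch of that kind of instrumentation inside the job procedure (dbms_output stands in for a real logging call):

declare
  v_edition varchar2(128) := sys_context('userenv','current_edition_name');
  v_service varchar2(128) := sys_context('userenv','service_name');
begin
  -- replace dbms_output with your own logging framework
  dbms_output.put_line('running in edition='||v_edition||' via service='||v_service);
end;
/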

18c scheduler EXTERNAL_SCRIPT with error ORA-27369: job of type EXECUTABLE failed with exit code: No child processes

After an upgrade from Oracle database version 12.2 to 18.3 I encountered this error for a scheduled job of type EXTERNAL_SCRIPT. The job failed.

error code: 10 – Command not found

The view all_scheduler_job_details had additional information.

ORA-27369: job of type EXECUTABLE failed with exit code: No child processes

Last year I wrote about using scheduled job type EXTERNAL_SCRIPT instead of EXECUTABLE. Even the simplest demo code block from that time was not running anymore in 18c.

Here is a very simple shell script that needs to run

#!/bin/bash
echo "Job ok!";

I execute this script using a scheduled job. The job runs as user oracle (via credential ORACLE_OS_CREDS).

-- First test a script that should not produce an error
declare
v_jobname varchar2(200);
v_good_script clob;
begin
v_jobname := upper('DEMO_SCHEDULED_EXTERNAL_SCRIPT_NO_ERROR');
-- the following line breaks are important.
-- Do NOT remove them, they are part of the linux script.
v_good_script := '#!/bin/bash
echo "Job ok!"';

-- create and run the job
dbms_scheduler.create_job(job_name => v_jobname,
job_type => 'EXTERNAL_SCRIPT',
job_action => v_good_script,
credential_name => 'ORACLE_OS_CREDS',
enabled => true,
auto_drop => true
);
end;
/

After several failed attempts, which included checking credentials, OS-level privileges, extproc settings, etc., I found a simple reason.

We are no longer allowed to set the shell in the first line. So the script needed to change.

echo "Job ok!";

In case you didn’t know: this first line is called the shebang.

I didn’t find any notice in the documentation about that change. So it might be a bug or some intended stealth change.
Oracle 12c doc
Oracle 18c doc
Both say the same thing.

EXTERNAL_SCRIPT – This specifies that the job is an external script that uses the command shell of the computer running the job. For Windows this is cmd.exe and for UNIX based systems the sh shell, unless a different interpreter is specified by prefixing the first line of the script with #!.

Oracle Documentation

It is definitely not working in my environment. In case you encounter the same issue, here is my suggested solution.

To circumvent the issue I added some conditional compilation so that the same code runs in 12.2 and in 18.3.
In 12.2 it adds the shell call as the first line of the script; in 18c it omits it.
I didn’t test which shell is really used to run the script. The difference between bash and sh is usually not relevant for my scripts.

-- First test a script that does not produce an error
declare
v_jobname varchar2(200);
v_good_script clob;
begin
v_jobname := upper('DEMO_SCHEDULED_EXTERNAL_SCRIPT_NO_ERROR');
-- conditional compile used because shebang not allowed anymore in 18c
v_good_script :=
$IF DBMS_DB_VERSION.VER_LE_12_2 $THEN
'#!/bin/bash
'||
$END
'echo "Job ok!"';
dbms_scheduler.create_job(job_name => v_jobname,
job_type => 'EXTERNAL_SCRIPT',
job_action => v_good_script,
credential_name => 'ORACLE_OS_CREDS',
enabled => false,
auto_drop => false
);
-- run the job
dbms_scheduler.enable(v_jobname);
end;
/

I also updated my older blog post “dbms_scheduler 12c – run EXTERNAL_SCRIPT” to reflect those findings.

You might also be interested in a post by Markus Fletchner: File ownership after patching or relinking Oracle RDBMS software

He describes issues with scheduled jobs of type EXECUTABLE after patching the database to 18, caused by changed file permissions. At first I suspected I had the same problem, but it turned out differently. Still an interesting read.

I hope this post helps others to waste less time when encountering this error.

Basic SQL: All about sequences

Intro

There are still a lot of misunderstandings about Oracle sequences. Sometimes even experts tell you things about sequences that are easy to misunderstand, especially if we look into the details. The following post aims to give a detailed overview of what sequences are, why they work the way they do, and how we should use them.

The sequence object also has a lot of parameters that you can use to tweak its behaviour. I will cover the most common ones here.

Wording

Many of the misunderstandings come from how we use the word “sequence”. It can mean several slightly different things.

meaning a) The sequence object in the database aka the number generator
meaning b) the number value that is retrieved via mySeq.nextval
meaning c) an attribute for a list of numbers, stored typically in an ID column
“This list is in sequence” often means that we have an ordered list of numbers without gaps (math: monotonically increasing integer values).

For the remainder of the document I will try to always make clear which meaning I am referring to. The relevant words will be written in italics to hint at the specific interpretation in that sentence. In cases where I say “sequence” without additional specification, I mean the sequence object.

Purpose

The most common sequence usage is to provide technical values for ID columns. A typical ID column is a surrogate key. Opposed to a natural key, a surrogate key has no intrinsic meaning. Its only use is to identify (=ID) a database record in a table. No intrinsic meaning also implies that we can not use this ID value to make business decisions that depend on it.

For example the following sentence should be considered a wrong deduction.
“Employee ID=17 was hired before Employee ID=26 because he/she has a lower ID”.

If we want to make qualified statements, then we must add the needed information to the data. For example add a column “hire_date”. Then we can use it to deduce when an employee was hired and what the order among different employees is.

The main advantage of a surrogate (meaningless technical) key is that the database can use it to ensure referential integrity. And this integrity rule is ensured even if something changes with regard to the business key. Typically business keys do not change. But if it happens, then the relationship is ensured by the foreign key still pointing to the surrogate key. For example we might have an INVOICE table. The business key might be the invoice number. In general this number is immutable, however it could be that there was some typo or scanner fault while the invoice was registered into the system. Using a surrogate key it is possible to change this invoice number without having to change all dependent records (like invoice positions) as well.

One of the best ways to supply values for such a surrogate key column (ID) is to use a sequence object and call the NEXTVAL function (pseudocolumn) on it. We can do that with a database trigger, as an identity column or directly in an insert statement.

 

Usage

standard usage

The standard usage of a sequence simply is to provide values for an ID column in the most performant way.

If you are new to the concept of Oracle sequences, then I suggest you go to livesql.com and try out the next few examples there yourself. Experienced developers might want to skip those basic examples.

A) sequence + nextval on insert

First create a sequence using all default settings. We then use this sequence to provide ID values for our super-employees.


create table super_emp
(id number primary key,
   first_name varchar2(100),
   last_name varchar2(100),
   hire_date date);

create sequence emp_seq;

Then call nextval directly in an insert statement


insert into super_emp (id, first_name, last_name, hire_date)
values (emp_seq.nextval, 'Peter', 'Parker', trunc(sysdate));

insert into super_emp (id, first_name, last_name, hire_date)
values (emp_seq.nextval, 'Clark', 'Kent', trunc(sysdate));

1 row inserted.

1 row inserted.

The NEXTVAL pseudocolumn was used directly in the values section of the insert statement.

B) before row insert table trigger

Create a table trigger that fires during insert (pre 12c solution)

create or replace trigger trg_emp_bri
  before insert on super_emp
  for each row
begin
  if inserting then
    if :NEW.ID is null then
      :NEW.ID := emp_seq.nextval;
    end if;
  end if;
end;
/

The Oracle SQL Developer has a very nice wizard that helps to quickly create such a trigger. The table context menu (right-click) has an entry to create a PK trigger based on a sequence. It creates a trigger very similar to the one above (I removed a select from dual in favour of a direct assignment).

Then insert into the table using either a NULL value or without the ID column.


insert into super_emp (id, first_name, last_name, hire_date)
values (null, 'Tony', 'Stark', trunc(sysdate));

insert into super_emp (first_name, last_name, hire_date)
values ('Bruce', 'Wayne', trunc(sysdate));

1 row inserted.

1 row inserted.

This is very nice. The application code that does the insert does not need to bother with the name of the sequence.

The trigger fires once FOR EACH ROW that is inserted. The code executes slightly BEFORE the row data is inserted. Before row triggers are typically used to set default values for columns or to do some more complicated checks. After row triggers also exist. They are usually used for monitoring purposes, like writing data into an audit trail.

C) Use the sequence in the column definition (since 12c)

Since 12c we have two new options. Create a column AS an IDENTITY column or set the default value for the column to sequence.NEXTVAL. Both options can be configured to work only ON NULL. In case of an identity column, Oracle will automatically create a sequence. More about this in the chapter “identity columns”. Here is an example using the default setting.

The table trigger from B) is not needed anymore, so we can drop it.

alter table super_emp modify id default on null emp_seq.nextval;

drop trigger trg_emp_bri;

Then run the inserts.


insert into super_emp (id, first_name, last_name, hire_date)
values (null, 'Diana', 'Prince', trunc(sysdate));

insert into super_emp (first_name, last_name, hire_date)
values ('Steve', 'Rogers', trunc(sysdate));

1 row inserted.

1 row inserted.

Before 12c it was not possible to use pseudocolumns or non-deterministic functions like sysdate as a default value for a column. With 12c this is now possible. The result is the same as with a before row trigger, but usually it is noticeably faster when we insert multiple rows.

 

Check the results

select id, first_name, last_name from super_emp;

        ID FIRST_NAME LAST_NAME
---------- ---------- ----------
         1 Peter      Parker
         2 Clark      Kent
         3 Tony       Stark
         4 Bruce      Wayne
         5 Diana      Prince
         6 Steve      Rogers

All inserts were done successfully. All three methods work.

 

18c create sequence parameters

[Image: CREATE SEQUENCE syntax diagram (18c)]

Some basic stuff first

Here we go through the different parameters. Behind some of them are very complex concepts. If so, those concepts are explained in a later section. This basic section tackles how to set the parameter and the immediate effects of setting or not setting it.

INCREMENT BY vs. START WITH

START WITH says what the very first value will be. It can be negative.

INCREMENT BY says how the next value will be calculated. It can also be negative but not 0.

The syntax diagram is slightly misleading. It gives the impression that we can only specify one of them during creation: either INCREMENT BY or START WITH, but not both. This is not true; we can create a sequence and specify both. The default for both is 1.


create sequence testseq increment by 10 start with 2;
select testseq.nextval from dual connect by level <= 3;

NEXTVAL
2
12
22

Other parameters like CYCLE and NOCYCLE can not both be specified at the same time. The syntax diagram is correct for those.

For the remainder of this document, we assume the increment is always 1 (unless clearly mentioned otherwise).

Note that we can not alter the START WITH value, but we can alter the INCREMENT BY.

Hint: The undocumented RESTART clause allows us to set a new START WITH value. See the section “How to reset a sequence”.

MAXVALUE and MINVALUE

Typically we don’t have the need to set those two parameters, the defaults are good.

Facts

  • MAXVALUE and MINVALUE specify the highest and the lowest possible value a sequence can have. MAXVALUE must be greater than MINVALUE.
  • NOMAXVALUE (10^28 - 1) is the highest possible value.
    NOMINVALUE (-(10^27 - 1)) is the lowest possible value.
  • The default for MAXVALUE is NOMAXVALUE.
    The default for MINVALUE is 1.
  • The MINVALUE can not exceed the value defined by START WITH; otherwise we get an error (see the demo below).
    ORA-04006: START WITH cannot be less than MINVALUE
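A quick demo of that error (hypothetical sequence name badseq):

create sequence badseq start with 5 minvalue 10;
-- ORA-04006: START WITH cannot be less than MINVALUE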

CYCLE vs. NOCYCLE (default)

CYCLE specifies that the sequence, after it reaches the MAXVALUE, will start again with the MINVALUE (not with the START WITH value). The theoretical maximum value of a sequence is 28 digits. It is a bit less with scalable sequences.

Nowadays there is no compelling reason to use CYCLE.

I believe in the old days (1990 – Oracle 7) disk space was still a premium commodity. Therefore number columns were often limited to a low number of digits (5 or 6). Under certain specific circumstances a cycling sequence might then have been useful to prevent numeric or value errors. Those days are gone.
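For completeness, here is a small sketch of how a cycling sequence behaves (hypothetical name cycseq; note that the cache must be smaller than one full cycle):

create sequence cycseq minvalue 1 maxvalue 3 cycle cache 2;

select cycseq.nextval from dual connect by level <= 5;

NEXTVAL
1
2
3
1
2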

CACHE (default) vs. NOCACHE

Caching a sequence is a huge performance feature. The default setting is CACHE 20, which is good for most scenarios. It means 20 sequence values are read from shared memory (SGA) instead of from disk. And after that the dictionary will be updated one time.

See the section about caching considerations for more information about this very important parameter.

Demo:

create sequence mySeq cache 1000;
select sequence_name, cache_size, last_number
from user_sequences
where sequence_name ='MYSEQ';
SEQUENCE_NAME CACHE_SIZE LAST_NUMBER
MYSEQ         1000       1

select myseq.nextval from dual connect by level <= 3;
NEXTVAL
1
2
3

select sequence_name, cache_size, last_number
from user_sequences
where sequence_name ='MYSEQ';
SEQUENCE_NAME CACHE_SIZE LAST_NUMBER
MYSEQ         1000       1001

After this, we still have 997 cached sequence values.

The default value of cache 20 is a kind of sweet spot for OLTP purposes. Only when you need to create a very large number of sequence values in a short time should you consider increasing the cache. This typically happens during data loads. Don’t forget to lower the cache value again after the data load is over.

 

ORDER vs. NOORDER (default)

It is a common misconception that we need to specify ORDER to get ordered values from a sequence object. The sequence object will always produce ordered values! Oracle did not implement some kind of random mechanism. Sequence.nextval will always give you the last value + the increment. Any kind of “randomness” comes from other things, like having no control over who fetched the last value (multi user), when that value was inserted (seq.nextval call < insert time < commit time) and lost sequence caches.

The ORDER setting is only relevant in a RAC (Real Application Clusters) environment. And even there it should always be NOORDER (the default). Read the chapter about the performance considerations for an explanation.

Remember

ORDER on RAC = slow

ORDER on non-RAC = no effect

easy.

 

KEEP vs. NOKEEP (default)

This is a switch that most database developers will never need. It might be more relevant for Java developers.

In 12.2 a new feature called Application Continuity was introduced. It allows the database to capture and replay a certain workload. It comes with the license options for RAC or Active Data Guard.

The problem is that a call to sequence.nextval would deliver a new value. This is not wanted for REPLAY purposes. Altering a sequence to KEEP will provide the same sequence value during the replay.
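In SQL that switch looks like this (a sketch, using the mySeq sequence from above):

alter sequence mySeq keep;

-- and to switch back to the default behaviour
alter sequence mySeq nokeep;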

From the appendix of Oracle’s white paper about Application Continuity:

Mutable Functions
Mutable functions are functions that can change their results each time that they are called. Mutable functions can cause replay to be rejected because the results visible to the client can change at replay.
Consider sequence.NEXTVAL that is often used in key values. If a primary key is built with a sequence value and this is later used in foreign keys or other binds, the same function result must be returned at replay.
Application Continuity provides mutable value replacement at replay for Oracle function calls if GRANT KEEP or ALTER.. KEEP has been configured.
If the call uses database functions that support retaining original mutable values, including sequence.NEXTVAL, SYSDATE, SYSTIMESTAMP, and SYS_GUID, then, the original values returned from the function execution can be saved and reapplied at replay. If an application decides not to grant mutable support and different results are returned to the client at replay, replay for these requests is rejected.

Important to remember: the KEEP parameter during creation has nothing to do with keeping the sequence cache pinned in the SGA. An example of how to do that is in the “discussion about gapless IDs” section.

SCALE vs. NOSCALE (default)

SCALE is a very interesting new setting. It allows us to improve the clustering factor of the index on the ID column. More details about that in the performance section.

Using SCALE adds a prefix built from the instance ID and the session ID (SID) to the beginning of the sequence value.

SCALE has two options, EXTEND and NOEXTEND (default). Let’s see how it works and how they differ.

 

create sequence myseq;

Sequence MYSEQ created.

                               NEXTVAL
--------------------------------------
                                     1
alter sequence mySeq scale;
                               NEXTVAL
--------------------------------------
          1017670000000000000000000002

alter sequence mySeq scale extend;
                               NEXTVAL
--------------------------------------
    1017670000000000000000000000000003

For the sake of brevity I removed the “sequence altered.” results and the “select mySeq.nextval from dual;” calls.

My session ID in this demo was 767. The 101 is the instance ID (1) + 100. So in a RAC environment, this will ensure that the values provided by different nodes will not clash. On non-RAC systems this leading part should always be 101.

NOSCALE gives us a normal sequence value of 1.
SCALE NOEXTEND gives us a sequence value of 28 digits (MAXVALUE) with a 2 at the end (last value + increment) and 101767 at the beginning.
SCALE EXTEND gives us a sequence value of 28+6 digits with a 3 at the end (last value + increment) and 101767 at the beginning.

So EXTEND adds the additional digits on top of the MAXVALUE setting, whereas NOEXTEND adds them inside the range defined by MAXVALUE.

In most circumstances (if we consider scalable sequences at all) we should use SCALE NOEXTEND, just to be sure that the generated value still fits into the table column. For very large tables, if there are already some extremely high values, we might need to use EXTEND, but I expect this situation to be very rare.

When is this useful? Only for cases where extreme performance matters: large or very large tables with a lot of inserts from multiple sessions (parallel inserts).

SESSION vs. GLOBAL (default)

Sequence values do not depend on a user session. Every call to sequence.nextval will give the next incremented value regardless of which session executed it. This feature ensures that nobody gets a duplicate key.

User/Session A calls mySeq.nextval 3 times and gets 1,2,3.
User/Session B calls mySeq.nextval 3 times and gets 4,5,6.
If both sessions fetch the values almost simultaneously, then A might get 1,3,4 and B might get 2,5,6. Notice that there might be gaps from the perspective of a single session, but the values are still ordered for each session.

With SESSION sequences this behaviour changes. Session A calls mySeq.nextval three times and gets 1,2,3. Session B calls mySeq.nextval three times and also gets 1,2,3. The values are not shared between sessions.

Where do we need this? Only for global temporary tables (GTT). The data in a GTT persists for the duration of a session (alternatively until commit) and then is gone. Same behaviour for the SESSION sequence – the generated sequence values only persist for the duration of the session.
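A minimal sketch (the names gtt_demo and gtt_seq are made up for this example); the SESSION keyword turns it into a session sequence:

create global temporary table gtt_demo
( id number )
on commit preserve rows;

create sequence gtt_seq session;

insert into gtt_demo (id) values (gtt_seq.nextval);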

 

Practical considerations

For most cases the default settings are perfect.

Only if you encounter issues (performance or an unusual number of gaps) or if your data has some special scenarios (batch ETL jobs, very large numbers of rows) should you start thinking about tinkering with the default settings.

The following sub chapters discuss common questions and show cases how to work with sequences to solve some typical tasks.

How to avoid reusing the same ID in different dbs

Sometimes we have a distributed database. Especially for global companies, each region might have its own database. The data for the different regions still needs to be comparable. And sometimes the data will be consolidated or exchanged. In such cases it helps if the ID values do not overlap.

One way to do that is to use the INCREMENT parameter. On database 1 we use a sequence object starting with 1 and an increment of 10. This will give IDs like 1, 11, 21, 31, 41, …

create sequence testseq increment by 10 start with 1;

On database 2 we use a sequence starting with 2 and an increment of 10. This will give IDs like 2, 12, 22, 32, 42, … This approach works for up to 10 regions.

create sequence testseq increment by 10 start with 2;

The result is that those values do not overlap. There are other (and possibly better) ways to solve the situation, like sys_guid(). But this is a fairly easy and stable concept.

Caching considerations

If the sequence is used very infrequently, then you can set it to NOCACHE. For example if you have a staff table: I don’t expect new personnel to be hired every second. Typically it will be a few people per month (depending on the size of your company, of course). For such low-frequency inserts performance doesn’t matter. You can set the sequence object to NOCACHE or to a very low cache value. However if you do a large data import, consider increasing the cache size before running that data load, as in the sketch below.
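A sketch of that pattern (hypothetical sequence name staff_seq):

alter sequence staff_seq cache 10000;

-- ... run the large data load ...

alter sequence staff_seq cache 20;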

Does setting a larger cache size need more SGA memory?

No.

Or to explain it in Tom Kyte’s words:

All we need to keep in the cache is:

the sequence on disk was N
the cache size is M
the current value is X

As long as X is less than N+M – we just increment X when someone calls NEXTVAL.

we do not need to keep in the cache “N, N+1, N+2, … N+M-1”, we just keep N, M and X and increment X when someone asks for a new sequence value. When X=M, we update SEQ$ and reset N in the cache.

So, cache 1000 and cache 20 take the same amount of space in the cache.

How to reset a sequence?

There are three general ways to set a sequence to a different value.

  1. Call sequence.NEXTVAL so often until you reach the target value
  2. Manipulate the increment parameter using a negative increment. Call nextval once. Reset.
  3. Restart the sequence (new undocumented feature)

The first way usually is not practical. A noticeable exception might be if you manually added some data without using the sequence and you want to jump over those few values.

If you want way 1, then the CONNECT BY LEVEL clause helps to do it quickly.


select myseq.nextval from dual connect by level <= 996;

 

And here is a demo for way 2:
Preparation setup


drop sequence mySeq;
create sequence mySeq cache 1000;
set autotrace traceonly statistics
select myseq.nextval from dual connect by level <= 996;
set autotrace off
select myseq.nextval from dual;
   NEXTVAL
----------
       997

The “set autotrace traceonly” command works in sql*plus. I used it here to avoid printing 996 values onto the screen. It is not relevant for the demo itself.

The current value now is 997, but we want the next call to nextval to give us 1.

Now reset the sequence.

alter sequence mySeq increment by -996 nocache;
select myseq.nextval from dual;
alter sequence mySeq increment by 1 cache 1000;

After this code, the very first session that calls myseq.nextval will see 2 as the returned value.

If we really need to see 1, we also must lower the MINVALUE, because INCREMENT BY can not result in anything lower than the MINVALUE (ORA-08004: sequence MYSEQ.NEXTVAL goes below MINVALUE and cannot be instantiated).

alter sequence mySeq increment by -997 nocache minvalue 0;
select myseq.nextval from dual;
alter sequence mySeq increment by 1 cache 1000;
select myseq.nextval from dual; 

Notice that we now incremented by -997 instead of -996 and that we are calling nextval twice. We can not reset the minvalue to 1 during the second ALTER SEQUENCE command, because that would also violate the rules (ORA-04007: MINVALUE cannot be made to exceed the current value). The easiest solution is to let it stay at 0.

Using NOCACHE is important to avoid issues with the stored last_value. Also check the increment by and the cache setting before you alter the sequence. If the increment by is different, then you need to change the above code and probably need to call nextval a second time.

In 18c we got a third option to reset a sequence – the RESTART option.


ALTER SEQUENCE mySeq RESTART;

This is currently undocumented.

Thanks to Roger Troller for making me aware of it (Blog).

I tested it a little bit further and found out two more things.

  • We can already use it in 12.2.0.1, which makes sense, since 18c is really just 12.2.0.2.
  • And we can combine it with the START WITH clause.

So the following works!

alter sequence testseq_20 restart;
select testseq_20.nextval val1 from dual;

VAL1
1

alter sequence testseq_20 restart start with 15;

Sequence TESTSEQ_20 altered.

select testseq_20.nextval val1 from dual;

VAL1
15

Very convenient. This should be the preferred way to reset a sequence whenever you need to do that.

Not recommended is to drop and recreate the sequence. While this would also allow us to set a new START WITH value, it has a major drawback. All references to the sequence are then broken. Especially all privileges are lost, like GRANT SELECT on #sequence to #schema.

 

Can we use an ID from a sequence to order by insert time?

Short answer: no. The order of inserts and the order of sequence values often match, but they are not guaranteed to match.

Detailed answer: Usually it works.

I very often use an ID column filled by a sequence as a second order criterion. For example I typically sort a logging table – where trace information is written – by the insert date and the LOG_ID (sequence-based PK).

order by insert_date desc, log_id desc

The insert date (if it is a DATE) is only accurate to the second. Even if it is a timestamp, there might have been multiple inserts in the same fraction of a second. The log_id is a perfect second order criterion.

We can safely assume that inserts done from the same session have ordered sequence values. There might be gaps, but the sequence values will be produced in the same order as we did the inserts. There can be ID values in between that come from different sessions. But for trace log information it usually does not matter if a different session is ordered before or after our session. However the data from one session should be correctly ordered. And this is guaranteed.

Is cycling useful?

I never had the need for cycling sequences. I firmly believe that if you think you need them, you have a much deeper problem somewhere else. It would probably be better to solve that problem instead of using a cycling sequence.

With 18c we get SESSION sequences. For some cases where CYCLE was considered in the past, a SESSION sequence might be the better choice.

Also ROWNUM and the analytic function ROW_NUMBER can be used to create consecutive values at the time of the select, instead of a sequence providing those values at the time of the insert.

Discussion of gapless IDs

This is a problem/question I often encountered: How to make a sequence gapless?

TL;DR: You don’t need to. The effort and the restrictions needed to make an ID column gapless are too high in (almost) all use cases.

The sequence object can and does provide gapless numbers. In a multi-user environment we just can not reliably use the provided values to store them in a gapless way. Even in a single-user environment, the stored IDs could be deleted. So one consequence of the gaplessness requirement would be to forbid delete operations.

The main point is that almost all the things that will create “holes” in an ID column are under our control. It is not the Oracle database that can not provide gapless sequences. It is the complexity of the business rules combined with performance requirements in a multi user environment, that make it almost impossible to have an ID column without potential gaps.

Performance + Multi User + Gapless IDs form a triangle of goals that exclude each other. We can not reach all three goals at the same time; one needs to suffer. However those goals do not react in the same way when we sacrifice one a tiny bit. So let’s investigate what happens then:

We still can not reach good performance (instead of very high performance) if we need multi user capability and gaplessness at the same time. To enable this we need to serialize access to the whole table. Which in turn means only one session can write into the table and all other sessions will need to wait until the other session finishes the whole transaction.

We can have very high performance and gaplessness if we only have a single user. But as soon as a second user wants to write at the same time, we need to introduce severe serialization of the whole transaction, just to ensure gaplessness. And this means performance drops immensely. Btw. this is how MS Access works. Only one user can write into the so-called database.

But we can get almost gapless IDs and still have very high performance for multiple users. Almost gapless means, we sometimes might have gaps in our sequence, but this situation is rare. This is the default behaviour of Oracle sequences.

How do we get gaps in our IDs?

a) a record in our table was deleted.
b) the insert ran into an error
(remember, sequence.nextval is called a tiny moment before the insert is executed).
c) Somebody called sequence.nextval but didn’t use the value.
d) The sequence cache was lost. One way how this happens is if the database decides that other objects need to be in SGA memory and the sequence wasn’t called for a longer time.

By pinning the sequence we can avoid situations where the sequence cache ages out of the shared pool. A better alternative is to size the shared pool appropriately, so that in general sequence caches will not age out of it.

execute sys.dbms_shared_pool.keep('OWNER.MYSEQUENCE','Q');

This still doesn’t guarantee gapless IDs, but for most use cases it would be good enough.

The oracle docs about skipping cached numbers:

18.1 Database Admin Guide – Managing Sequences
The database might skip sequence numbers if you choose to cache a set of sequence numbers. For example, when an instance abnormally shuts down (for example, when an instance failure occurs or a SHUTDOWN ABORT statement is issued), sequence numbers that have been cached but not used are lost. Also, sequence numbers that have been used but not saved are lost as well. The database might also skip cached sequence numbers after an export and import. See Oracle Database Utilities for details.

A normal or immediate shutdown of the database will not lose sequence numbers. Instead the database will update the data dictionary (table sys.seq$) with the last used value. Unfortunately many DBAs prefer to shut down a database using abort, since they don’t bother enough about user sessions.

Sequence Performance

Why is a sequence fast and how can we use it in the most performant way?

Oracle invented sequences with performance in mind. They provide a way to create surrogate keys in a multi user environment while minimizing serialization.

Basic working of an Oracle sequence

A sequence is just an entry in the dictionary table sys.seq$.

desc sys.seq$;

Name       Null?    Type 
---------- -------- ------------ 
OBJ#       NOT NULL NUMBER 
INCREMENT$ NOT NULL NUMBER 
MINVALUE            NUMBER 
MAXVALUE            NUMBER 
CYCLE#     NOT NULL NUMBER 
ORDER$     NOT NULL NUMBER 
CACHE      NOT NULL NUMBER 
HIGHWATER  NOT NULL NUMBER 
AUDIT$     NOT NULL VARCHAR2(38) 
FLAGS               NUMBER 
PARTCOUNT           NUMBER

The highwater column is the same as the last_number column in the view user_sequences.

When a session fetches a new sequence value (using .nextval), the dictionary table needs to be read and the row needs to be updated with the new value. Now if multiple sessions do that, then one would have to wait for the other. This is called serialization. To avoid that issue, Oracle uses two clever mechanisms.

  1. The dictionary table is updated using an autonomous transaction. So the value is stored and other sessions can see it, even if the main transaction (from the user session) is not finished.
  2. The new highwater value that is stored is not the next value, but the value + the cache size. Any call to sequence.nextval will first read from the sequence cache, and only once the numbers there are exhausted will it read from the table and update it.

It is of course possible to write a similar mechanism ourselves with our own table. And I have seen projects where they did exactly that. But it is very hard to do properly, and even then it will not beat the performance of the original sequence. So you would need a very special business case to justify writing your own sequence mechanism.

Speed it up

If you aim for maximum performance there are some considerations to do.

  1. You must use a sequence cache. The cache size also plays an important role. For most OLTP tables the default setting of cache=20 is a very good choice. However when you do large data loads, then a much larger cache size is advisable. There is a diminishing returns effect: doubling the cache does not double the performance.
  2. On a RAC you really should use NOORDER. The ORDER keyword is only relevant for Real Application Clusters. Using ORDER would try to synchronize the sequence caches over all cluster nodes. This is extremely bad for performance. Using NOORDER gives each RAC node a separate sequence cache. Which also means that an insert on node 1 might get sequence value 1 and the next insert on node 2 might get value 1001. The third insert on node 1 again would use value 2.
  3. Sequences should be used as late as possible. There is usually no need to fetch a sequence value first and then do the insert later. Use the sequence while doing the insert: either by adding it to the insert statement, via a database trigger, or (since 12c) as an identity column or a default on NULL column setting. Using the 12c mechanics allows us to avoid the database trigger. This results in much better performance, as I have shown in a previous blog post.
  4. Consider scalable sequences for large tables if you are on 18c already. The effect can not be seen immediately, but scalable sequences should give a better and more stable performance in the long run.
    For small and medium-sized tables I expect scalable sequences to be slower than non-scalable sequences (because a bigger number needs to be stored). I didn’t test the effect, but a normal (small) sequence value only needs 2-6 bytes, whereas a scaled sequence value always needs 15 (NOEXTEND) or 18 (EXTEND) bytes. These bytes are used by the table column, the unique index that supports the primary key (PK), all foreign key (FK) columns pointing to the PK and the indexes supporting those FKs.

 

If you need the value of the sequence again later in your code, you can either use .currval (not recommended) or use the returning clause to give you the generated ID.

best practice: returning clause

Several SQL and PL/SQL DML commands have a returning clause. It allows us to get back data that is created or manipulated while the DML (insert or update) is running.

The most common usage is to return the ID value that is filled by a database trigger (or an identity column), so that this ID can be used later in the same session or transaction or be returned to the client. For example to insert any child records or to show the freshly generated record in a GUI report.


insert into super_emp (first_name, last_name, hire_date)
values ('Bruce', 'Banor', trunc(sysdate))
returning id into :ID;

print :ID;

ID
--------------------------------------------------------------------------------
9

The returning clause is more typical in PL/SQL. Here is an example using a record of %rowtype. We can even return the generated ID value directly into the record.

declare
r_super_emp super_emp%rowtype;
begin
r_super_emp.first_name := 'Hal';
r_super_emp.last_name := 'Jordan';
r_super_emp.hire_date := trunc(sysdate);

insert into super_emp
values r_super_emp
returning id into r_super_emp.id;

sys.dbms_output.put_line('New ID = '||r_super_emp.id);
end;
/
New ID = 10

 

Identity columns

Identity columns and DEFAULT ON NULL are great enhancements in database version 12.1.

They allow us to use a sequence as late as possible (while inserting), but without the need for a before row insert table trigger. This improves insert performance dramatically. A trigger is PL/SQL based. It runs during the execution of a SQL DML statement (insert). Because of that, a context switch from the SQL to the PL/SQL engine (and back) is needed. If we can avoid the trigger completely, we avoid the context switch, and this will improve performance considerably.

I made some tests, and under very favourable circumstances (nothing inserted but the ID) the insert performance was 900% faster using IDENTITY or DEFAULT columns instead of a trigger.

With DEFAULT ON NULL we still create the sequence ourselves, which also means we know its name. With IDENTITY the sequence is automatically created and maintained by Oracle.

The name of the generated sequence will always begin with “ISEQ$$_”.

demo


create table test
( id number generated by default on null as identity (start with 20) primary key
);

select table_name, column_name, identity_column, default_on_null, data_default
from user_tab_columns;

TABLE_NAME  COLUMN_NAME  IDENTITY_COLUMN  DEFAULT_ON_NULL  DATA_DEFAULT
TEST        ID           YES              YES              "SCHEMANAME"."ISEQ$$_10707661".nextval

 

Drawbacks

It can be problematic to use identity columns over a database link, especially if the ID value is needed. The main issue is that the returning clause does not work over a db link and there are no good alternatives for identity columns. This works slightly better with “default on null”. We know the sequence object and can use it to fetch the ID value over the db link first and use it later for the insert, as in the sketch below. Not performant at all, but it works.
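A sketch of that workaround, assuming a database link named remote_db and a known sequence emp_seq on the remote side:

declare
  v_id number;
begin
  -- fetch the ID over the db link first ...
  select emp_seq.nextval@remote_db into v_id from dual;
  -- ... then use it for the insert
  insert into super_emp@remote_db (id, first_name, last_name, hire_date)
  values (v_id, 'Matt', 'Murdock', trunc(sysdate));
end;
/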

We also can not directly alter an existing ID column into an IDENTITY column. It is, however, possible to modify an existing identity column (for example switching between generated always and generated by default on null).

Another minor inconvenience is that the system-generated sequence will still be there when the table is dropped, at least as long as the table is still in the recycle bin.

There were also some other very special bugs when using identity columns. All have workarounds, but my experience is that default on null is slightly less error-prone.

 

Index contention and Scalable Sequences

Scalable sequences were quietly introduced in 12.1.0.1 but only documented in 18.1.

Richard Foote did a three part series about scalable sequences that covers all you need to know.

The basic problem has to do with index contention.

To give a very brief explanation: when we have an ID column that is filled by a sequence, the index will, over time, become unbalanced. Because new values are only added to the right side of the index, leaf block splits will happen there frequently. Sometimes it will be a 50-50 block split and the space in those blocks usually will not be filled up. This eventually leads to a heavily right-(un)balanced index tree.

Such a block split is a very resource-intensive operation, and other sessions will need to wait for it. If you see a high number of “enq: TX – index contention” wait events (check MOS 873243.1), the reason could be those index block splits.

One workaround for the index contention problem in the past was to use a REVERSE KEY index. But this created other performance problems, like the CBO not doing any range scans on that index.

Scalable sequences are a slightly better solution to avoid those index contention issues (hot index blocks). Because they have the session ID in front of the number, values provided by scalable sequences are distributed more evenly over the index. At least as long as multiple sessions do the inserts. Creating one is shown in the sketch below.
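A minimal sketch (hypothetical name big_table_seq):

create sequence big_table_seq scale noextend;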

 

Export and Import

consistency issues

When you do an export of a database or a schema, it is crucial to do a time-consistent export using

exp ... consistent=Y ...

Why? Otherwise the sequence object, including the current value as START WITH, is exported first, and the tables with their data are exported later. That means that in between some session could call sequence.nextval and use up a value. You won’t notice the issue during the import. But as soon as an insert happens in the imported schema, you will get a dup_val_on_index error, because the table already contains the ID value that the sequence generator just provided.

sys warning

Consistent=Y does not work as SYS. So never export data as SYS! The reason is that SYS can not do read-only transactions. Using SYSTEM is fine.

datapump

For datapump the equivalent to consistent=Y is the flashback parameter.

expdp ... flashback_time=systimestamp ...

There is also a flashback_scn parameter. Both do a time consistent export.

And since 11.2 there is a legacy mode for datapump which allows the use of consistent=Y (it is rewritten into the flashback_time parameter).

 

Other ways to generate ordered numbers

Sometimes a sequence is not the best way to generate ordered numbers. For example when we want to sort entries from a detail table based upon their parent keys, so that each detail record starts with 1 for each parent entry. A sequence is not the proper tool to get such values.

Alternatives are ROWNUM, the analytic function ROW_NUMBER() and certain ways to create lists in SQL, for example by using hierarchical queries with CONNECT BY.
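For example, to number detail records starting with 1 for each parent (hypothetical table and column names):

select parent_id,
       detail_id,
       row_number() over (partition by parent_id order by detail_id) as pos
from detail_table;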

 

 

Further reads