Some quick facts about sequence caches and gaps in IDs

  1. Oracle sequences generate key values without gaps (a minor exception to the rule: scalable sequences).
  2. Surrogate keys can and will have gaps.
  3. Gaps exist because it is not worth the huge effort to avoid them.

The biggest misunderstanding is probably the difference between the value coming from a sequence and the primary key value inside a database column (ID column). We can use a sequence to set a surrogate key. Typically we do this via a database trigger, an identity column, or directly as a value during the insert.

“The history of sequence caches is a history of misunderstandings.”

somebody on the internet, 1963

Essentially it is a two-step process. The new ID value is created by sequence.nextval and then used during the insert. Oracle provides the logic to get a new sequence value in a fast, performant and multi-session friendly way. The developer is responsible for making sure this value is not wasted.

The sequence cache is a performance instrument. A lost cache value means we will see gaps in future ID values. There is only a very limited number of activities that make cached sequence values “disappear”. The majority of gaps originates from other activities, like failed inserts or deletes. Many developers seem to think that such a gap is caused by the sequence cache. I find it important to separate which reason ultimately led to a gap in an ID value.
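
Two typical examples, sketched against a hypothetical regions table whose PK is fed by a sequence regions_seq – both create a gap without the sequence cache being involved:

-- a fetched sequence value is used up, even if the insert is rolled back
insert into regions (region_id, region_name)
values (regions_seq.nextval, 'Testregion');
rollback;

-- a deleted ID will never be handed out again
delete from regions where region_name = 'Oldregion';
commit;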

A gap analysis can be the first step. The following select finds gaps and orders them by size. Put your own table and PK column into the “ids” subquery and find out how the gaps are distributed in that table.

-- gap analysis
with ids as (select region_id as id, lead(region_id) over (order by region_id) as nextid from regions)
   , gaps as (select id, nextid - id as gap_size from ids where nextid > id + 1)
select gap_size , count(*) gaps_found  
from gaps
group by gap_size
order by count(*) desc;

Example result

GAP_SIZE GAPS_FOUND
      20          5
       3          2
       2          2
      19          1

Here the data shows a few small gaps. Either some inserts didn’t make it or rows were deleted. But there are also several gaps of size 19 and 20, which means the sequence cache was probably lost at least 6 times.

To make it clear: IDs are allowed to have gaps. However, when an end user complains about constantly seeing gaps in some ID, then we should investigate what is going on. Frequent gaps should not be the expected normal behaviour – it is worth finding the root cause of such an issue.

So here is an (incomplete) list of activities that do or do not influence the caching of sequences and the existence of gaps in an ID column.

action                                                  | result
--------------------------------------------------------|------------------------------------------
multiple sessions                                       | cache used
multiple sessions with one session slow to commit       | IDs not in order of commit => not a gap eventually, but looks like a gap until the final commit
alter system flush shared_pool;                         | cache values lost => gap in ID
shutdown immediate;                                     | cache values not lost
shutdown abort;                                         | cache values lost => gap in ID
insert; commit;                                         | no gap
insert; rollback;                                       | gap in ID
failed insert                                           | gap in ID
delete; commit;                                         | gap in ID
switch sequence from cache to nocache                   | cache values not lost
kill session                                            | possible gap in ID (values not used/committed are lost)
different RAC sessions                                  | cache values not lost (but separate caches on each RAC node)
SGA size too small                                      | cache values lost => gap in ID (sequence cache ages out of shared pool per LRU list)
session sequences                                       | not useful for unique ID values => duplicate ID
scalable sequences                                      | cache values not lost => planned large gap in ID (because of the prefix)
alter system cancel sql 'SID, serial'; (insert killed)  | gap in ID
alter system disconnect session post_transaction;       | no gap (unless sequence was fetched before the transaction started)
alter system disconnect session immediate;              | gap in ID
sequence restart                                        | possible duplicate IDs
sequence interval > 1                                   | planned gap in ID

In my experience, a sequence aging out of the shared pool is the most common cause of lost sequence values that gets noticed. A strong indication is when frequent gaps of 20 values exist.

Possible workarounds are

a) set the sequence to nocache if the sequence is used rarely
alter sequence myseq nocache;
This might cost performance.

b) pin the sequence in the shared pool
execute sys.dbms_shared_pool.keep('MYSCHEMA.MYSEQ','Q');
Pinning the sequence is useful when you have times with lots of concurrent activities followed by larger idle periods.
This costs a little memory.

c) increase SGA size – sometimes it helps to increase the restricted shared pool area.
This costs memory (and money) – not available for other non-database processes.

Conclusions

  • Small gaps in surrogate keys (ID columns) are natural, mostly because of delete statements or failed inserts.
  • The sequence object typically does not produce gaps.
  • Very frequent gaps of size 20 (default cache size) are not natural and might need investigation.


EBR – how to run a background job in the correct edition

Introduction

Edition based redefinition is a no-extra-cost feature of the Oracle database. It allows us to have multiple versions of the same PL/SQL based code (packages, views, triggers, object types, synonyms, …) in the database at the same time.

Code that starts a scheduled background job should execute this background job in the same current edition.

Here is how to do it using job classes.

TL;DR.

First create a job class that connects to a specific edition via a service name

begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_DEV$ALPHA',
    service => 'DEV_ALPHA'
  );
end;
/

Then start the background job using that job class.

declare
  v_jobclass_prefix varchar2(10) := 'JCED_';
  v_jobname         varchar2(100);
begin
  v_jobname := 'myJob';
  -- the job class is chosen based on the current edition of the session
  dbms_scheduler.create_job(job_name            => v_jobname,
                            job_type            => 'STORED_PROCEDURE',
                            job_action          => 'myPkg.doSomething',
                            job_class           => v_jobclass_prefix || sys_context('userenv','current_edition_name'),
                            number_of_arguments => 0,
                            start_date          => sysdate,
                            enabled             => true,
                            auto_drop           => true);
end;
/

Problem description

This article assumes some basic understanding of the edition based redefinition (EBR) feature of the Oracle database.

Scheduled jobs run in the background in a separate new session. Any new session runs in the default edition of the database, unless the edition is specified at the time the session is created.

It is possible to switch the edition on demand during a session, but this is not recommended. Chris Saxon misused this possibility for one of his SQL Magic tricks. Not everything that is possible should be used.

Chris Saxon – SQL Magic

Solution

A scheduled job can use a job class. A job class can be set to use a database service. A database service can be set to connect to a specific edition.

When using EBR it is a recommended practice to create a service for each edition and possibly another one for the default edition. Information on how to create a service for an edition can be found in one of my older blog entries: working with editions – part 2. Also Oren Nakdimon recently published a very nice article about it: using services for exposing new editions.

The following assumes that we have three editions ORA$BASE, TST$BETA and DEV$ALPHA. It also assumes that we have created matching database services ORA_BASE, TST_BETA and DEV_ALPHA and that those services are running.
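
In case such a service does not exist yet, here is a minimal sketch for a single instance database (on RAC you would typically use srvctl instead), shown for the DEV_ALPHA service only:

-- run as DBA
begin
  dbms_service.create_service(service_name => 'DEV_ALPHA',
                              network_name => 'DEV_ALPHA');
  -- attach the service to the matching edition
  dbms_service.modify_service(service_name => 'DEV_ALPHA',
                              edition      => 'DEV$ALPHA');
  dbms_service.start_service('DEV_ALPHA');
end;
/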

create the job classes

For each edition we create a job class JCED_<edition_name> (JCED_ORA$BASE, JCED_TST$BETA and JCED_DEV$ALPHA). This can be done with DBA or with the MANAGE SCHEDULER privilege.

Job classes also allow us to prioritize between jobs and to connect them to a resource group.

I’m using a prefix JCED (Job Class EDitioned) to tag the classes. You can use any name you want, but it is advisable to have some common identification for the classes. The name also allows me later to find the correct job class, depending on the current edition of the session.

-- Run as DBA
begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_ORA$BASE',
    service => 'ORA_BASE'
  );
end;
/

begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_TST$BETA',
    service => 'TST_BETA'
  );
end;
/

begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_DEV$ALPHA',
    service => 'DEV_ALPHA'
  );
end;
/

Then we need to make those classes available to the user that will later create the scheduled jobs.

grant execute on sys.JCED_ORA$BASE to mySchema;
grant execute on sys.JCED_TST$BETA to mySchema;
grant execute on sys.JCED_DEV$ALPHA to mySchema;

It is also possible to allow a schema to use any class.

grant execute any class to mySchema;

Job classes do not belong to any schema. They are always created in SYS. Something to keep in mind, for example when exporting/importing schemas to another database.

programmatically create a job

The following function creates a background job using the current edition. The job calls a procedure with 3 arguments.

The function is part of some larger package myPackage. It is assumed that the doSomething procedure is also part of the same package.

/* constant declaration section in package body */
  g_jobclass_prefix     constant varchar2(10) := 'JCED_';


/* modules */

function createBackgroundJob(p_id in number, p_starttime in timestamp) return varchar2
is
  co_modul_name CONSTANT VARCHAR2(96) := $$PLSQL_UNIT || '.createBackgroundJob';
  v_job_nr binary_Integer;
  v_jobname varchar2(100);
begin

  v_jobname := 'myJob_'||to_char(p_id);
  dbms_scheduler.create_job(job_name            => v_jobname,
                            job_type            => 'STORED_PROCEDURE',
                            job_action          => $$plsql_unit || '.doSomething',
                            job_class           => g_jobclass_Prefix||sys_context('userenv','current_edition_name'),
                            number_of_arguments => 3,
                            start_date          => p_starttime,
                            enabled             => FALSE,
                            auto_drop           => true);
  dbms_scheduler.set_job_argument_value(job_name=>v_jobname, argument_position=>1, argument_value=>p_id);
  dbms_scheduler.set_job_argument_value(job_name=>v_jobname, argument_position=>2, argument_value=>'RUN QUICK');
  dbms_scheduler.set_job_argument_value(job_name=>v_jobname, argument_position=>3, argument_value=>myPackage.g_trace_level);

  return v_jobname;
end createBackgroundJob;

The name of the job class is calculated using the name of the current edition by sys_context('userenv','current_edition_name').

start the background job

The createBackgroundJob function only prepares the background job. To run it, we need to call the create function and enable the job afterwards.

declare
  job varchar2(128);
begin  
  job := myPackage.createBackgroundJob(1, localtimestamp);
  dbms_scheduler.enable(job);
end;
/

Conclusion

Using job classes is an easy way to start a scheduled job in the same edition that the currently running session is in.

Further remarks

Justification

I used this logic to split a plsql heavy task into several worker tasks that could run in parallel. I wanted to make sure that the parallel execution was done using the same edition as the thread that started it.

Restrictions

I did not test if it is possible to run lightweight jobs in a specific edition. There seems to be no obvious restriction that prevents using a job class also for lightweight jobs.

Since job classes belong to SYS, they sometimes need to be prefixed with the schema name: sys.JCED_DEV$ALPHA.

It is not recommended to switch the edition from inside some code, because that code itself is running in a specific edition (the default one). This is very hard to control and to do properly.

Jobs are created with job names in upper case. In some cases we need to make sure to use the upper-cased job name to find/handle the correct job.
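
For example, a dictionary lookup for a job created with a mixed case name needs the upper case name (a small illustrative query):

-- job names are stored upper case in the scheduler dictionary
select job_name, state
from user_scheduler_jobs
where job_name = upper('myJob_1');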

Error handling

Error handling needs special care when combining scheduled jobs and editions. For example, I make sure that proper instrumentation is in place and that the log entry also includes the edition in which the error happened. In some situations, like when the service is dropped, the session is still created, but falls back to the default edition. We want to be sure that we notice any issues arising from such a situation.
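
A minimal sketch of such an exception handler inside the job procedure – log_error is a placeholder for whatever logging routine you use:

procedure doSomething
is
begin
  null; -- real work here
exception
  when others then
    -- hypothetical logging call; recording the edition helps to spot
    -- sessions that silently fell back to the default edition
    log_error('doSomething failed in edition '
              || sys_context('userenv','current_edition_name')
              || ': ' || sqlerrm);
    raise;
end doSomething;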

18c scheduler EXTERNAL_SCRIPT with error ORA-27369: job of type EXECUTABLE failed with exit code: No child processes

After an upgrade to Oracle database version 18.3 from 12.2 I encountered this error for a scheduled job of type EXTERNAL_SCRIPT. The job failed.

error code: 10 – Command not found

The view all_scheduler_job_details had additional information.

ORA-27369: job of type EXECUTABLE failed with exit code: No child processes

Last year I wrote about using scheduled job type EXTERNAL_SCRIPT instead of EXECUTABLE. Even the simplest demo code block from that time was not running anymore in 18c.

Here is a very simple shell script that needs to run

#!/bin/bash
echo "Job ok!";

I execute this script using a scheduled job. The job runs as user oracle (via credential ORACLE_OS_CREDS).

-- First test a script that should not produce an error
declare
v_jobname varchar2(200);
v_good_script clob;
begin
v_jobname := upper('DEMO_SCHEDULED_EXTERNAL_SCRIPT_NO_ERROR');
-- the following line breaks are important.
-- Do NOT remove them, they are part of the linux script.
v_good_script := '#!/bin/bash
echo "Job ok!"';

-- create and run the job
dbms_scheduler.create_job(job_name => v_jobname,
job_type => 'EXTERNAL_SCRIPT',
job_action => v_good_script,
credential_name => 'ORACLE_OS_CREDS',
enabled => true,
auto_drop => true
);
end;
/

After several failed trials – including checking credentials, privileges on OS level, extproc settings, etc. – I found a simple reason.

We are not allowed anymore to set the shell in the first line. So the script needed to change.

echo "Job ok!";

In case you didn’t know, this first line is called Shebang.

I didn’t find any notice in the documentation about that. So it might be a bug or some intended stealth change.
Oracle 12c doc
Oracle 18c doc
Both say the same thing.

EXTERNAL_SCRIPT – This specifies that the job is an external script that uses the command shell of the computer running the job. For Windows this is cmd.exe and for UNIX based systems the sh shell, unless a different interpreter is specified by prefixing the first line of the script with #!.

Oracle Documentation

It is definitely not working in my environment. In case you encounter the same issue, here is my suggested solution.

To circumvent the issue I added some conditional compiling so that the same code is running in 12.2 and in 18.3.
In 12.2 it adds the shell call as the first line of the script – in 18c it avoids it.
I didn’t test which shell really is used to run the script. The difference between bash and sh is usually not relevant for my scripts.

-- First test a script that does not produce an error
declare
v_jobname varchar2(200);
v_good_script clob;
begin
v_jobname := upper('DEMO_SCHEDULED_EXTERNAL_SCRIPT_NO_ERROR');
-- conditional compile used because shebang not allowed anymore in 18c
v_good_script :=
$IF DBMS_DB_VERSION.VER_LE_12_2 $THEN
'#!/bin/bash
'||
$END
'echo "Job ok!"';
dbms_scheduler.create_job(job_name => v_jobname,
job_type => 'EXTERNAL_SCRIPT',
job_action => v_good_script,
credential_name => 'ORACLE_OS_CREDS',
enabled => false,
auto_drop => false
);
-- run the job
dbms_scheduler.enable(v_jobname);
end;
/

I also updated my older blog post “dbms_scheduler 12c – run EXTERNAL_SCRIPT” to consider those findings.

You might also be interested in a post by Markus Fletchner: File ownership after patching or relinking Oracle RDBMS software

He describes issues with scheduled jobs of type EXECUTABLE after patching the database to 18 because of changed permissions. I first suspected I had the same problem, but it turned out differently. Still an interesting read.

I hope this post will help some others to waste less time, when encountering this error.

Basic SQL: All about sequences

Intro

There are still a lot of misunderstandings about Oracle sequences. Sometimes even experts tell you things about sequences that are easy to misunderstand, especially if we look into the details. The following post aims to give a detailed overview of what sequences are, why they work as they do, and how we should use them.

There are also a lot of parameters that the sequence object has and that you can use to tweak the behaviour. I will cover the most common things here.

Wording

Many of the misunderstandings come from how we use the word “sequence”. It can mean several slightly different things.

meaning a) The sequence object in the database aka the number generator
meaning b) the number value that is retrieved via mySeq.nextval
meaning c) an attribute for a list of numbers, stored typically in an ID column
“This list is in sequence” often means that we have an ordered list of numbers without gaps (math: monotonically increasing integer values).

For the remainder of the document I will try to always make clear which meaning I am referring to. The relevant words will be written in italics to hint at the specific interpretation in that sentence. In cases where I say “sequence” without additional specification details, I will mean the sequence object.

Purpose

The most common sequence usage is as technical values for ID columns. A typical ID column is a surrogate key. Opposed to a natural key, a surrogate key has no intrinsic meaning. Its only use is to identify (=ID) a database record in a table. No intrinsic meaning also implies that we can not use this ID value to make business decisions dependent on it.

For example the following sentence should be considered a wrong deduction.
“Employee ID=17 was hired before Employee ID=26 because he/she has a lower ID”.

If we want to make qualified statements, then we must add the needed information to the data. For example add a column “hire_date”. Then we can use it to deduce when an employee was hired and what the order among different employees is.

The main advantage of a surrogate (meaningless technical) key is that the database can use it to ensure referential integrity. And this integrity rule is ensured even if something changes with regards to the business key. Typically business keys do not change. But if it happens, then the relationship is ensured by the foreign key still pointing to the surrogate key. For example we might have an INVOICE table. The business key might be the invoice number. In general this number is immutable, however it could be that there was some typo or scanner fault while the invoice was registered into the system. Using a surrogate key it is possible to change this invoice number without having to change all dependent records (like invoice positions) as well.

One of the best ways to supply values for such a surrogate key column (ID) is to use a sequence object and call the NEXTVAL function (pseudocolumn) on it. We can do that with a database trigger, as an identity column or directly in an insert statement.


Usage

standard usage

The standard usage of a sequence simply is to provide values for an ID column in the most performant way.

If you are new to the concept of Oracle sequences, then I suggest going to livesql.com and trying out the next few examples there by yourself. Experienced developers might want to skip those basic examples.

A) sequence + nextval on insert

First create a sequence using all default settings. We then use this sequence to provide ID values for our super-employees.


create table super_emp
(id number primary key,
   first_name varchar2(100),
   last_name varchar2(100),
   hire_date date);

create sequence emp_seq;

Then call nextval directly in an insert statement


insert into super_emp (id, first_name, last_name, hire_date)
values (emp_seq.nextval, 'Peter', 'Parker', trunc(sysdate));

insert into super_emp (id, first_name, last_name, hire_date)
values (emp_seq.nextval, 'Clark', 'Kent', trunc(sysdate));

1 row inserted.

1 row inserted.

The NEXTVAL pseudocolumn was used directly in the values section of the insert statement.

B) before row insert table trigger

Create a table trigger that fires during insert (pre 12c solution)

create or replace trigger trg_emp_bri
  before insert on super_emp
  for each row
begin
  if inserting then
    if :NEW.ID is null then
      :NEW.ID := emp_seq.nextval;
    end if;
  end if;
end;
/

Oracle SQL Developer has a very nice wizard that helps to quickly create such a trigger. The table context menu (right-click) has an entry to create a PK trigger based on a sequence. It creates a trigger very similar to the one above (I removed a select from dual in favour of a direct assignment).

Then insert into the table using either a NULL value or without the ID column.


insert into super_emp (id, first_name, last_name, hire_date)
values (null, 'Tony', 'Stark', trunc(sysdate));

insert into super_emp (first_name, last_name, hire_date)
values ('Bruce', 'Wayne', trunc(sysdate));

1 row inserted.

1 row inserted.

This is very nice. The application code that does the insert does not need to bother with the name of the sequence.

The trigger fires once FOR EACH ROW that is inserted. The code executes slightly BEFORE the row data is inserted. Before row triggers are typically used to set default values for columns or to do some more complicated checks. After row triggers also exist. They are usually used for monitoring purposes, like writing data into an audit trail.

C) Use the sequence in the column definition (since 12c)

Since 12c we have two new options. Create a column AS an IDENTITY column or set the default value for the column to sequence.NEXTVAL. Both options can be configured to work only ON NULL. In case of an identity column, Oracle will automatically create a sequence. More about this in the chapter “identity columns”. Here is an example using the default setting.

The table trigger from B) is not needed anymore, so we can drop it.

alter table super_emp modify id default on null emp_seq.nextval;

drop trigger trg_emp_bri;

Then run the inserts.


insert into super_emp (id, first_name, last_name, hire_date)
values (null, 'Diana', 'Prince', trunc(sysdate));

insert into super_emp (first_name, last_name, hire_date)
values ('Steve', 'Rogers', trunc(sysdate));

1 row inserted.

1 row inserted.

Before 12c it was not possible to use pseudocolumns or non-deterministic functions like sysdate as a default value for a column. With 12c this is now possible. The result is the same as with a before row trigger, but usually it is noticeably faster when we insert multiple rows.


Check the results

select id, first_name, last_name from super_emp;

ID FIRST_NAME LAST_NAME
1 Peter Parker
2 Clark Kent
3 Tony Stark
4 Bruce Wayne
5 Diana Prince
6 Steve Rogers

All inserts were done successfully. All three methods work.


18c create sequence parameters

[figure: CREATE SEQUENCE syntax diagram (18c)]

Some basic stuff first

Here we go through the different parameters. Behind some of those are very complex concepts. If so, those concepts are explained in a later section. This basic section tackles the way how to set the parameter and the immediate effects of setting or not setting it.

INCREMENT BY vs. START WITH

START WITH says what the very first value will be. It can be negative.

INCREMENT BY says how the next value will be calculated. It can also be negative but not 0.

The syntax diagram is slightly misleading. It gives the impression that we can only specify one of them during creation: either INCREMENT BY or START WITH, but not both. This is not true; we can create a sequence and specify both. The default for both is 1.


create sequence testseq increment by 10 start with 2;
select testseq.nextval from dual connect by level <= 3;

NEXTVAL
2
12
22

Other parameters like CYCLE and NOCYCLE can not be specified both at the same time. The syntax diagram is correct for those.

For the remainder of this document, we assume the increment is always 1 (unless clearly mentioned otherwise).

Note that we can not alter the START WITH value, but we can alter the INCREMENT BY.

Hint: The undocumented RESTART clause allows to set a new START WITH value. See section about “How to reset a sequence”.

MAXVALUE and MINVALUE

Typically we don’t have the need to set those two parameters, the defaults are good.

Facts

  • MAXVALUE and MINVALUE specify the highest and the lowest possible value a sequence can have. MAXVALUE must be greater than MINVALUE.
  • NOMAXVALUE (10^28 - 1) is the highest possible value.
    NOMINVALUE (-(10^27 - 1)) is the lowest possible value.
  • The default for MAXVALUE is NOMAXVALUE.
    The default for MINVALUE is 1.
  • The MINVALUE can not exceed the value defined by START WITH, or otherwise we get an error (see the demo after this list).
    ORA-04006: START WITH cannot be less than MINVALUE
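
A quick demo of that last rule (sequence name made up):

create sequence faulty_seq start with 5 minvalue 10;
-- ORA-04006: START WITH cannot be less than MINVALUE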

CYCLE vs. NOCYCLE (default)

CYCLE specifies that the sequence, after it reached the MAXVALUE, will start again with the MINVALUE (not with the START WITH value). The theoretical maxvalue of a sequence is 28 digits. It is a bit less with scalable sequences.

Nowadays there is no compelling reason to use CYCLE.

I believe in the old days (1990 – Oracle 7) disk space was still a premium commodity. Therefore number columns were often limited to a low number of digits (5 or 6). Under certain specific circumstances a cycling sequence then might have been useful to prevent numeric or value errors. Those days are gone.
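
For completeness, a small demo of the cycling behaviour (NOCACHE, because the cache size must be smaller than one full cycle; sequence name made up):

create sequence cycle_seq minvalue 1 maxvalue 3 cycle nocache;
select cycle_seq.nextval from dual connect by level <= 5;

NEXTVAL
1
2
3
1
2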

CACHE (default) vs. NOCACHE

Caching a sequence is a huge performance feature. The default setting is CACHE 20, which is good for most scenarios. It means 20 sequence values are read from shared memory (SGA) instead of from disk. And only after those are used up, the dictionary is updated one more time.

See the section about caching considerations for more information about this very important parameter.

Demo:

create sequence mySeq cache 1000;
select sequence_name, cache_size, last_number
from user_sequences
where sequence_name ='MYSEQ';
SEQUENCE_NAME CACHE_SIZE LAST_NUMBER
MYSEQ         1000       1

select myseq.nextval from dual connect by level <= 3;
NEXTVAL
1
2
3

select sequence_name, cache_size, last_number
from user_sequences
where sequence_name ='MYSEQ';
SEQUENCE_NAME CACHE_SIZE LAST_NUMBER
MYSEQ         1000       1001

After this, we still have 997 cached sequence values.

The default value of cache 20 is a kind of sweet spot for OLTP purposes. Only when you have the need to create a very large number of sequence values in a short time should you consider increasing the cache. This typically happens during data load situations. Don’t forget to lower the cache value again after the data load is over.


ORDER vs. NOORDER (default)

It is a common misconception that we need to specify ORDER to get ordered values from a sequence object. The sequence object will always produce ordered values! Oracle did not implement some kind of random mechanism. Sequence.nextval will always give you the last value + the increment. Any kind of “randomness” comes from other things, like having no control over who fetched the last value (multi user), when that value was inserted (seq.nextval call < insert time < commit time) and lost sequence caches.

The ORDER setting is only relevant in a RAC (Real Application Cluster) environment. And even there it should always be NOORDER (the default). Read the chapter about the performance considerations for an explanation.

Remember

ORDER on RAC = slow

ORDER on non-RAC = no effect

easy.


KEEP vs. NOKEEP (default)

This is a switch that most database developers will never need. It might be more relevant for Java developers.

In 12.2 a new feature called application continuity was introduced. It allows capturing and replaying a certain workload on the database. It comes with the license options for RAC or Active Data Guard.

The problem is that a call to sequence.nextval would deliver a new value. This is not wanted for REPLAY purposes. Altering a sequence to KEEP would provide the same sequence value during the replay.

From the appendix of Oracle’s white paper about application continuity:

Mutable Functions
Mutable functions are functions that can change their results each time that they are called. Mutable functions can cause replay to be rejected because the results visible to the client can change at replay.
Consider sequence.NEXTVAL that is often used in key values. If a primary key is built with a sequence value and this is later used in foreign keys or other binds, the same function result must be returned at replay.
Application Continuity provides mutable value replacement at replay for Oracle function calls if GRANT KEEP or ALTER.. KEEP has been configured.
If the call uses database functions that support retaining original mutable
values, including sequence.NEXTVAL, SYSDATE, SYSTIMESTAMP, and SYS_GUID, then, the original values returned from the function execution can be saved and reapplied at replay. If an application decides not to grant mutable support and different results are returned to the client at replay, replay for these requests is rejected.

Important to remember is that the KEEP parameter during creation has nothing to do with keeping the sequence pinned in the SGA. An example how to do that is in the “discussion about gapless IDs” section.

SCALE vs. NOSCALE (default)

SCALE is a very interesting new setting. It allows to improve the clustering factor of the index on the ID column. More details about that in the performance section.

Using SCALE adds the session ID (SID) to the beginning of the sequence value.

SCALE has two options, EXTEND and NOEXTEND (default). The following demo shows how they work and differ.


create sequence myseq;

Sequence MYSEQ created.

                               NEXTVAL
--------------------------------------
                                     1
alter sequence mySeq scale;
                               NEXTVAL
--------------------------------------
          1017670000000000000000000002

alter sequence mySeq scale extend;
                               NEXTVAL
--------------------------------------
    1017670000000000000000000000000003

For the sake of brevity I removed the “Sequence altered.” results and the “select mySeq.nextval from dual;” calls.

My session ID in this demo was 767. The 101 is the instance ID (1) + 100. So in a RAC environment, this will ensure that the values provided by different nodes will not clash. On non-RAC systems this leading part should always be 101.

NOSCALE gives us a normal sequence value of 1.
SCALE NOEXTEND gives us a sequence value of 28 digits (MAXVALUE) with a 2 at the end (last value+increment by) and a 101767 at the beginning.
SCALE EXTEND gives us a sequence value of  28+6 digits with a 3 at the end (last value+increment by) and a 101767 at the beginning.

So EXTEND adds the additional digits on top of the MAXVALUE setting, whereas NOEXTEND adds them inside the range defined by MAXVALUE.

In most circumstances – if we consider scalable sequences at all – we should use SCALE NOEXTEND. Just to be sure that the generated value still fits into the table column. For very large tables, if there are already some extremely high values, we might need to use EXTEND, but I expect this situation to be very rare.

When is this useful? Only for cases when extreme performance matters. So for large or very large tables, with a lot of inserts from multiple sessions (parallel inserts).

SESSION vs. GLOBAL (default)

Sequence values do not depend on a user session. Every call to sequence.nextval will give the next incremented value regardless of which session executed this. This feature ensures that nobody gets a duplicate key.

User/Session A calls mySeq.nextval 3 times and gets 1,2,3.
User/Session B calls mySeq.nextval 3 times and gets 4,5,6.
If both sessions fetch the values almost simultaneously then A might get 1,3,4 and B might get 2,5,6. Notice that there might be a gap from the perspective of a single session, but the values are still ordered for each session.

With SESSION sequences this behaviour changes. Session A calls mySeq.nextval three times and gets 1,2,3. Session B calls mySeq.nextval three times and also gets 1,2,3. The values are not shared between sessions.

Where do we need this? Only for global temporary tables (GTT). The data in a GTT persists for the duration of a session (alternatively until commit) and then is gone. Same behaviour for the SESSION sequence – the generated sequence values only persist for the duration of the session.
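
A minimal sketch of the matching pair (18c syntax; table and sequence names are made up):

create global temporary table gtt_demo (id number)
on commit preserve rows;

-- SESSION scopes the sequence values to the session, just like the GTT rows
create sequence gtt_seq session;

insert into gtt_demo values (gtt_seq.nextval);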


Practical considerations

For most cases the default settings are perfect.

Only if you encounter issues (performance or an unusual number of gaps) or if your data has some special scenarios (batch ETL jobs, very large number of rows) should you start thinking about tinkering with the default settings.

The following sub chapters discuss common questions and show cases how to work with sequences to solve some typical tasks.

How to avoid reusing the same ID in different dbs

Sometimes we have a distributed database. Especially for global companies each region might have its own database. The data for the different regions still needs to be comparable. And sometimes the data will be consolidated or exchanged. In such cases it helps, if the ID values do not overlap.

One way to do that is to use the INCREMENT parameter. On database 1 we use a sequence object starting with 1 and an increment of 10. This will give IDs like 1, 11, 21, 31, 41, ….

create sequence testseq increment by 10 start with 1;

On database 2 we use a sequence starting with 2 and increment 10. This will work for up to 10 regions. It will give IDs like 2, 12, 22, 32, 42, ….

create sequence testseq increment by 10 start with 2;

The result is that those values do not overlap. There are other (and possibly better) ways to solve the situation, like sys_guid(). But this is a fairly easy and stable concept.

Caching considerations

If the sequence is used very infrequently, then you can set it to NOCACHE. For example if you have a staff table; I don’t expect that new personnel is hired every second. Typically it will be a few people per month (depending on the size of your company of course). For such low frequency inserts performance doesn’t matter. You can set the sequence object to NOCACHE or to a very low cache value. However if you do a large data import, consider increasing the cache size before running that data load.
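
For example (assuming a sequence called emp_seq):

-- before a large data load
alter sequence emp_seq cache 10000;

-- ... run the data load ...

-- afterwards back to the OLTP sweet spot
alter sequence emp_seq cache 20;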

Does setting a larger cache size need more SGA memory?

No.

Or to explain it with Tom Kyte’s words

All we need to keep in the cache is:

the sequence on disk was N
the cache size is M
the current value is X

As long as X is less than N+M – we just increment X when someone calls NEXTVAL.

we do not need to keep in the cache “N, N+1, N+2, … N+M-1”, we just keep N, M and X and increment X when someone asks for a new sequence value. When X=N+M, we update SEQ$ and reset N in the cache.

So, cache 1000 and cache 20 take the same amount of space in the cache.

How to reset a sequence?

There are three general ways to set a sequence to a different value.

  1. Call sequence.NEXTVAL so often until you reach the target value
  2. Manipulate the increment parameter using a negative increment. Call nextval once. Reset.
  3. Restart the sequence (new undocumented feature)

The first way usually is not practical. A noticeable exception might be if you manually added some data without using the sequence and you want to jump over those few values.

If you want way 1, then the CONNECT BY LEVEL clause helps to do it quickly.


select myseq.nextval from dual connect by level <= 996;


And here is a demo for way 2:
Preparation setup


drop sequence mySeq;
create sequence mySeq cache 1000;
set autotrace traceonly statistics
select myseq.nextval from dual connect by level <= 996;
set autotrace off
select myseq.nextval from dual;
   NEXTVAL
----------
       997

The “set autotrace traceonly” command works in sql*plus. I used it here to avoid printing 996 values onto the screen. It is not relevant for the demo itself.

The current value now is 997, but we want the next call to nextval to give us 1.

Now reset the sequence.

alter sequence mySeq increment by -996 nocache;
select myseq.nextval from dual;
alter sequence mySeq increment by 1 cache 1000;

After this code, the very first session that calls myseq.nextval, will see 2 as the value returned.

If we really need to see 1 we also must lower the MINVALUE, because INCREMENT BY can not result in anything lower than the MINVALUE (ORA-08004: sequence MYSEQ.NEXTVAL goes below MINVALUE and cannot be instantiated).

alter sequence mySeq increment by -997 nocache minvalue 0;
select myseq.nextval from dual;
alter sequence mySeq increment by 1 cache 1000;
select myseq.nextval from dual; 

Notice that we now incremented by -997 instead of -996 and that we are calling nextval twice. We can not reset the minvalue to 1 during the second ALTER SEQUENCE command, because that also would violate the rules (ORA-04007: MINVALUE cannot be made to exceed the current value). The easiest solution is to let it stay at 0.

Using NOCACHE is important to avoid issues with the stored last_value. Also check the increment by and the cache setting before you alter the sequence. If the increment by is different, then you need to change the above code and probably need to call nextval a second time.

In 18c we got a third option to reset a sequence – the RESTART option.


ALTER SEQUENCE mySeq RESTART;

This is currently undocumented.

Thanks to Roger Troller for making me aware of it (Blog).

I tested it a little bit further and found out two more things.

  • We can already use it in 12.2.0.1. Which makes sense, since 18c is really just 12.2.0.2.
  • And we can combine it with the START WITH clause.

So the following works !

alter sequence testseq_20 restart;
select testseq_20.nextval val1 from dual;
VAL1
1
alter sequence testseq_20 restart start with 15;
Sequence TESTSEQ_20 altered.
select testseq_20.nextval val1 from dual;
VAL1
15

Very convenient. This should be the preferred way to reset a sequence whenever you need to do that.

Not recommended is to drop and recreate the sequence. While this would also allow us to set a new START WITH value, it has a major drawback. All references to the sequence are then broken. Especially all privileges are lost, like GRANT SELECT on #sequence to #schema.


Can we use an ID from a sequence to order by insert time?

Short answer: no. The order of inserts and the order of sequence values often match, but are not guaranteed to match.

Detailed answer: Usually it works.

I very often use an ID column filled by a sequence as a second order criterion. For example I typically sort a logging table – where trace information is written – by the insert date and the LOG_ID (sequence based PK).

order by insert_date desc, log_id desc

The insert date (if it is a date) is only accurate to the second. Even if it is a timestamp there might have been multiple inserts at the same fraction of a second. The log_id is a perfect second order criterion.

We can safely assume that the inserts that were done from the same session have ordered sequence values. There might be gaps, but the sequence values will be produced in the same order as we did the inserts. There can be ID values in between that come from different sessions. But for trace log information it usually does not matter if a different session is ordered before or after our session. However the data from one session should be correctly ordered. And this is guaranteed.

Is cycling useful?

I never had the need for cycling sequences. I firmly believe if you think you need those, you have a much deeper problem somewhere else. It would probably be better to solve that problem, instead of using a cycling sequence.

With 18c we get SESSION sequences. For some cases where CYCLE was considered in the past, a SESSION sequence might be the better choice.

Also ROWNUM and the analytic function ROW_NUMBER can be used to create consecutive values at time of select, instead of a sequence providing those values at time of insert.

Discussion of gapless IDs

This is a problem/question I often encountered: How to make a sequence gapless?

TL;DR: You don’t need to. The effort and the restrictions required to make an ID column gapless are too high in (almost) all use cases.

The sequence object can and does provide gapless numbers. In a multi user environment we just can not reliably use the provided values to store them in a gapless way. Even in a single user environment, the stored IDs could be deleted. So one consequence of the gaplessness requirement would be to forbid delete operations.

The main point is that almost all the things that will create “holes” in an ID column are under our control. It is not the Oracle database that can not provide gapless sequences. It is the complexity of the business rules combined with performance requirements in a multi user environment, that make it almost impossible to have an ID column without potential gaps.

Performance + Multi User + Gapless IDs build a triangle of goals that exclude each other. We can not reach all three goals at the same time, one needs to suffer. However those goals do not react in the same way, when we sacrifice one a tiny bit. So let’s investigate what happens then:

We still can not reach good performance (instead of very high performance) if we need multi user capability and gaplessness at the same time. To enable this we need to serialize access to the whole table. Which in turn means only one session can write into the table and all other sessions will need to wait until that session finishes the whole transaction.
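
To illustrate why, here is a minimal sketch of such a serialized “gapless” generator (table and function names are made up). The FOR UPDATE row lock blocks every other caller until the surrounding transaction ends:

create table gapless_counter
( name    varchar2(30) primary key,
  last_id number not null);

insert into gapless_counter values ('INVOICE', 0);
commit;

create or replace function next_gapless_id return number
is
  v_id number;
begin
  -- lock the counter row; all concurrent callers now wait here
  -- until our transaction commits or rolls back
  select last_id + 1
  into v_id
  from gapless_counter
  where name = 'INVOICE'
  for update;

  update gapless_counter
  set last_id = v_id
  where name = 'INVOICE';

  return v_id;
end;
/

If the calling transaction rolls back, the counter update rolls back too – that is what keeps it gapless, and also what makes it slow.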

We can have very high performance and gaplessness if we only have a single user. But as soon as a second user wants to write at the same time, we need to introduce severe serialization of the whole transaction, just to ensure gaplessness. And this means performance drops immensely. Btw. this is how MS Access works. Only one user can write into the so-called database.

But we can get almost gapless IDs and still have very high performance for multiple users. Almost gapless means, we sometimes might have gaps in our sequence, but this situation is rare. This is the default behaviour of Oracle sequences.

How do we get gaps in our IDs?

a) a record in our table was deleted.
b) the insert ran into an error
(remember sequence.nextval is called a tiny moment before the insert is executed).
c) Somebody called sequence.nextval but didn’t use the value.
d) The sequence cache was lost. One way how this happens is if the database decides that other objects need to be in SGA memory and the sequence wasn’t called for a longer time.

By pinning the sequence we can avoid situations where the sequence cache ages out of the shared pool. A better alternative is to size the shared pool appropriately, so that in general sequence caches will not age out of it.

execute sys.dbms_shared_pool.keep('OWNER.MYSEQUENCE','Q');

This still doesn’t guarantee gapless IDs, but for most use cases it would be good enough.

The oracle docs about skipping cached numbers:

18.1 Database Admin Guide – Managing Sequences
The database might skip sequence numbers if you choose to cache a set of sequence numbers. For example, when an instance abnormally shuts down (for example, when an instance failure occurs or a SHUTDOWN ABORT statement is issued), sequence numbers that have been cached but not used are lost. Also, sequence numbers that have been used but not saved are lost as well. The database might also skip cached sequence numbers after an export and import. See Oracle Database Utilities for details.

A normal or immediate shutdown of the database will not lose sequence numbers. Instead the database will update the data dictionary (table sys.seq$) with the last used value. Unfortunately most DBAs prefer to shut down a database using abort, since they don’t bother enough about user sessions.

Sequence Performance

Why is a sequence fast and how can we use it in the most performant way?

Oracle invented sequences with performance in mind. They provide a way to create surrogate keys in a multi user environment while minimizing serialization.

Basic working of an Oracle sequence

A sequence is just an entry in the dictionary table sys.seq$.

desc sys.seq$;

Name       Null?    Type 
---------- -------- ------------ 
OBJ#       NOT NULL NUMBER 
INCREMENT$ NOT NULL NUMBER 
MINVALUE            NUMBER 
MAXVALUE            NUMBER 
CYCLE#     NOT NULL NUMBER 
ORDER$     NOT NULL NUMBER 
CACHE      NOT NULL NUMBER 
HIGHWATER  NOT NULL NUMBER 
AUDIT$     NOT NULL VARCHAR2(38) 
FLAGS               NUMBER 
PARTCOUNT           NUMBER

The highwater column is the same as the last_number column in the view user_sequences.

When a session fetches a new sequence value (using .nextval), the dictionary table needs to be read and the row needs to be updated with the new value. Now if multiple sessions do that, then one would have to wait for the other. This is called serialization. To avoid that issue, Oracle uses two clever mechanisms.

  1.  The dictionary table is updated using an autonomous transaction. So the value is stored and other sessions can see it, even if the main transaction (from the user session) is not finished.
  2. The new highwater value that is stored, is not the next value, but it is the value + the cache. Any call to sequence.nextval will first read from the sequence cache and only once the numbers there are exhausted, it will read from the table and update it.

It is of course possible to write a similar mechanism ourselves with our own table. And I have seen projects where they did exactly that. But it is very hard to do properly and even then it will not beat the performance of the original sequence. So you would need a very special business case to justify writing your own sequence mechanism.

Speed it up

If you aim for maximum performance there are some considerations to do.

  1. You must use a sequence cache. The cache size also plays an important role. For most OLTP tables the default setting of cache=20 is a very good choice. However when you do large data loads, then a much larger cache size is advisable. There is a diminishing returns effect. Doubling the cache does not double the performance.
  2. On a RAC you really should use NOORDER. The ORDER keyword is only relevant for real application clusters. Using ORDER would try to synchronize the sequence caches over all cluster nodes. This is extremely bad for the performance. Using NOORDER gives each RAC node a separate sequence cache. Which also means that an insert on node1 might have sequence value 1 and the next insert on node2 might have a sequence value 1001. The third insert on node1 again would use value 2.
  3. Sequences should be used as late as possible. There is usually no need to fetch a sequence value first and then do the insert later. Use the sequence while doing the insert. Either by adding it to the insert statement, or via a database trigger, or since 12c as an identity column or a default on NULL column setting. Using the 12c mechanics allows us to avoid the database trigger. This results in much better performance, as I have shown in a previous blog post.
  4. Consider scalable sequences for large tables if you are on 18c already. The effect can not be seen immediately, but scalable sequences should give a better and more stable performance in the long run.
    For small and medium sized tables I expect scalable sequences to be slower than non-scalable sequences (because a bigger number needs to be stored). I didn’t test the effect, but a normal (small) sequence value only needs 2-6 bytes, whereas a scaled sequence value always needs 15 (NOEXTEND) or 18 (EXTEND) bytes. These bytes are used by the table column, the unique index that supports the primary key (PK), all foreign key (FK) columns pointing to the PK and the indexes supporting those FKs.


If you need the value of the sequence again later in your code, you can either use .currval (not recommended) or use the returning clause to give you the generated ID.

best practice: returning clause

Several SQL and PL/SQL DML commands have a returning clause. It allows us to get back data that is created or manipulated while the DML (insert or update) is running.

The most common usage is to return the ID value that is filled by a database trigger (or an identity column), so that this ID can then be used in the same session or transaction or be returned to the client. For example to insert any child records or to show the freshly generated record in a GUI report.


insert into super_emp (first_name, last_name, hire_date)
values ('Bruce', 'Banor', trunc(sysdate))
returning id into :ID;

print :ID;

ID
--------------------------------------------------------------------------------
9

The returning clause is more typical in PL/SQL. Here is an example using a record of %rowtype. We can even return the generated ID value directly into the record.

declare
r_super_emp super_emp%rowtype;
begin
r_super_emp.first_name := 'Hal';
r_super_emp.last_name := 'Jordan';
r_super_emp.hire_date := trunc(sysdate);

insert into super_emp
values r_super_emp
returning id into r_super_emp.id;

sys.dbms_output.put_line('New ID = '||r_super_emp.id);
end;
/
New ID = 10


Identity columns

Identity columns and DEFAULT ON NULL are great enhancements in db version 12.1.

They allow us to use a sequence as late as possible (while inserting), but without the need for a before row insert table trigger. This improves insert performance dramatically. A trigger is PL/SQL based. It runs during the execution of a SQL DML statement (insert). Because of that, a context switch from the SQL to the PL/SQL engine (and back) is needed. If we can avoid the trigger completely, we can avoid the context switch, and this will improve performance considerably.

I made some tests and under very favourable circumstances (nothing else inserted but the ID) the insert performance was 900% faster using IDENTITY or DEFAULT columns instead of a trigger.

With DEFAULT ON NULL we still create the sequence ourselves, which also means we know its name. With IDENTITY the sequence is automatically created and maintained by Oracle.

The name of the generated sequence will always begin with “ISEQ$$_”.

demo


create table test
( id number generated by default on null as identity (start with 20) primary key
);

select table_name, column_name, identity_column, default_on_null, data_default
from user_tab_columns;

TABLE_NAME COLUMN_NAME IDENTITY_COLUMN DEFAULT_ON_NULL DATA_DEFAULT
TEST       ID          YES             YES             "SCHEMANAME"."ISEQ$$_10707661".nextval


Drawbacks

It can be problematic to use identity columns over a database link, especially if the ID value is needed. The main issue is that the returning clause does not work over a db link and there are no good alternatives for identity columns. This works slightly better with “default on null”. We know the sequence object and can use it to fetch the ID value over the DB link first and use it later for the insert. Not performant at all, but it works.
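
A small sketch of that workaround (sequence, table and db link names are made up):

declare
  v_id number;
begin
  -- fetch the ID over the db link first; works for "default on null"
  -- because there we know the name of the sequence
  select remote_emp_seq.nextval@remote_db into v_id from dual;

  insert into remote_emp@remote_db (id, first_name)
  values (v_id, 'Bruce');
end;
/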

We also can not directly alter an existing ID column into an IDENTITY column, although it is possible to modify an existing identity column (for example switching between generated always and generated by default on null).

Another minor inconvenience is that the system generated sequence will still be there when the table is dropped, at least as long as the table is still in the recycle bin.

There were also some other very special bugs using identity columns. All have workarounds, but my experience is that default on null is slightly less error prone.


Index contention and Scalable Sequences

Scalable sequences were secretly introduced in 12.1.0.1 but only documented in 18.1.

Richard Foote did a three part series about scalable sequences that covers all you need to know.

The basic problem has to do with index contention.

To give a very brief explanation: when we have an ID column that is filled using a sequence, the index will become unbalanced over time. Because new values are only added on the right side of the index, leaf block splits will happen there frequently. Sometimes it will be a 50-50 block split and the space in those blocks usually will not be filled up. This eventually leads to a heavily right (un)balanced index tree.

Such a block split is a very resource intensive operation and other sessions will need to wait for it. If you see a high number of “enq: TX – index contention” wait events (check MOS 873243.1), the reason could be those index block splits.

One workaround for the index contention problem in the past was to use a REVERSE KEY index. But this created other performance problems, such as the CBO not doing any range scans on that index.

Scalable sequences are a slightly better solution to avoid those index contention issues (hot index blocks). Because they have the session ID in front of the number, values provided by scalable sequences are distributed more evenly over the index. At least as long as multiple sessions do the inserts.
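
Creating one is simple (18c syntax; NOEXTEND keeps the generated value inside the 28-digit MAXVALUE; sequence name made up):

create sequence big_table_seq scale noextend;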


Export and Import

consistency issues

When you do an export of a database or a schema it is crucial to do a time consistent export using

exp ... consistent=Y ...

Why? Otherwise the sequence object, including the current value as START WITH, is exported first, and the tables with their data later. That means in between some session could call sequence.nextval and use up a value. You won’t notice the issue during the import. But as soon as an insert in the imported schema happens, you will get a dup_val_on_index error, because the table already holds an ID value that the sequence generator just provided.

sys warning

Consistent=Y does not work as SYS. So never export data as SYS! The reason is that sys can not do read only transactions. Using SYSTEM is fine.

datapump

For datapump the equivalent to consistent=Y is the flashback parameter.

expdp ... flashback_time=systimestamp ...

There is also a flashback_scn parameter. Both do a time consistent export.

And since 11.2 there is a legacy mode for datapump which allows us to use consistent=Y (it is rewritten into the flashback_time parameter).


Other ways to generate ordered numbers

Sometimes a sequence is not the best way to generate ordered numbers. For example when we want to sort entries from a detail table based upon their parent keys. Each detail record should start with 1 for each parent entry. A sequence is not the proper tool to get such values.

Alternatives are ROWNUM, the analytic function ROW_NUMBER() and certain ways to create lists in SQL, for example by using hierarchical queries with CONNECT BY.
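
For example, numbering detail rows per parent at query time (hypothetical table and column names):

select parent_id,
       row_number() over (partition by parent_id order by created_date) as pos,
       detail_text
from detail_table;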


10 possibilities and 10 restrictive things you might not know about UPDATE

This is mostly about the UPDATE command. These are features that I noticed many do not know about. Some are really useful, but most fall in the category of “interesting to know”.

An update can do that!?

1 – we can update multiple columns with a single subselect

 Update t1
set (a,b) = (select t2.a,t2.b from t2 where t1.id = t2.id)
...

2 – the returning clause of an update can have an aggregation

 update t1
set c1 = c1+100
returning sum(c1) into … 

3 – updates can be restricted to a partition

 update t1 partition (p1)
set ...

4 – an update with flashback data from the same table is possible

UPDATE employees u
SET salary = (SELECT e.salary
              FROM employees e
              AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '2' MINUTE)
              WHERE e.last_name = 'Chung')
WHERE u.last_name = 'Chung'; 

5 – we can update a TABLE() expression

example from the docs

update TABLE(select h.people -- this is a nested table column
                  from hr_info h
                  where h.department_id = 280 ) p
SET p.salary = p.salary + 100;

6 – SELECT can be harmful because of the FOR UPDATE clause

Using only the SELECT privilege on a table we can lock the whole table by using SELECT FOR UPDATE. So with only select privileges we can do some serious harm to a running application. That is one reason why SELECT ANY TABLE is so dangerous to grant. The new READ privilege avoids that issue. It allows us to SELECT a table but not to LOCK it. READ was introduced in Oracle DB 12.1.0.2.
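
A short illustration (schema and table names made up):

-- needs only the SELECT privilege, yet locks every row of the table
select *
from app_schema.important_table
for update;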

7 – SELECT FOR UPDATE allows to skip locked rows

select *
from employees
where department = 'IT'
for update of salary
skip locked;

8 – the LOG ERRORS clause can have a text (a simple expression) to indicate which update produced the error

Usually LOG ERRORS is used during insert operations. But it is also possible for updates.

And since chances are high that we will run multiple updates, it makes even more sense to mark each single update by adding some text to the “tag” column.


create table t2 (id number primary key, str varchar2(100));

insert into t2 values (1, '10-AUG-2018');
insert into t2 values (2, '13-NOV-2018');
insert into t2 values (3, 'SEP-15-2018');

commit;

BEGIN
  DBMS_ERRLOG.create_error_log (dml_table_name => 'T2');
END;
/

alter table t2 add dt date;

update t2
set dt = to_date(str)
log errors into ERR$_T2('update try 1, format='||sys_context('userenv','nls_date_format'))
reject limit unlimited;

select ora_err_mesg$, ora_err_tag$, id, str from err$_t2;

drop table t2;
drop table err$_t2;

ORA_ERR_MESG$                                                               ORA_ERR_TAG$                    ID  STR
ORA-01858: a non-numeric character was found where a numeric was expected   update try 1, format=DD-MON-RR  3   SEP-15-2018

Our string-to-date conversion failed, but we captured the row that failed and also the session's NLS_DATE_FORMAT.

9 – materialized views can be made updateable

create materialized view myMV ...
refresh fast
with primary key for update
...;

update myMV
set    col1 = 'ABC'
...;

But after a refresh the changes are lost!

10 – We can update a column to its DEFAULT value

Which is not the same as setting it to null (unless the column is declared with DEFAULT ON NULL).


alter table scott.emp modify hiredate default sysdate;

update scott.emp
set hiredate = default
where empno = 7900;

select * from scott.emp where empno=7900;
EMPNO      ENAME      JOB       MGR        HIREDATE          SAL        COMM       DEPTNO
---------- ---------- --------- ---------- ----------------- ---------- ---------- ----------
7900       JAMES      CLERK     7698       07.11.18 12:38:22 950                   30

restrictions and other features

11 – we can not combine select for update with a row limiting clause (fetch first)

select *
from emp
where empno = 7900
fetch first 1 row only
for update of job;

ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc.

But we can select for update a table limited by rownum.

select *
from emp
where empno = 7900
and rownum = 1
for update of job;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
7900 JAMES CLERK 7698 03.12.81 00:00:00 950 30
1 row selected.

The reason, of course, is how the row limiting clause is rewritten internally using the analytic ROW_NUMBER() function.

12 – A before update statement trigger can trigger twice

see also: this OTN thread
and The trouble with triggers by Tom Kyte.

BEFORE Triggers Fired Multiple Times

If an UPDATE or DELETE statement detects a conflict with a concurrent UPDATE, then Oracle Database performs a transparent ROLLBACK to SAVEPOINT and restarts the update. This can occur many times before the statement completes successfully. Each time the statement is restarted, the BEFORE statement trigger is fired again. The rollback to savepoint does not undo changes to any package variables referenced in the trigger.

13 – the MODEL clause has a RETURN UPDATED ROWS mode

This mode only returns rows that were updated by one of the rules.


select * from scott.emp
model RETURN UPDATED ROWS
dimension by (empno)
measures (job, sal, comm)
rules (job[7900] = 'DRIVER'
      ,comm[7900] = 300
);
EMPNO JOB    SAL COMM
7900  DRIVER 950 300

Without the RETURN UPDATED ROWS setting all the rows would have been selected.

A slightly different effect can be reached using “RULES UPDATE”. The rules are then applied only to rows that already exist; no new rows will be created. The default is UPSERT (rows are inserted and updated by the rules).

We now partition additionally by department.


select * from scott.emp
MODEL
  partition by (deptno)
  dimension by (empno)
  measures (job, sal, comm)
  rules UPDATE (
     job[7900] = 'DRIVER'
    ,comm[7900] = 300
);

DEPTNO	EMPNO	JOB	        SAL	COMM
20	7369	CLERK	        800	
30	7499	SALESMAN	1600	300
30	7521	SALESMAN	1250	500
20	7566	MANAGER	        2975	
30	7654	SALESMAN	1250	1400
30	7698	MANAGER	        2850	
10	7782	MANAGER	        2450	
10	7839	PRESIDENT	5000	
30	7844	SALESMAN	1500	0
20	7902	ANALYST	        3000	
10	7934	CLERK	        1300	
30	7900	DRIVER	        950	300

The rules are executed for each partition. And without the UPDATE setting, a new entry for department 10 and 20 would have been made. Since we used UPDATE, only one existing row was changed.

14 – an update can change rowids

ROWIDs are very stable. Row chaining will not change the rowid and not even row migration will do it.

But there are ways (I know of two, there might be more) in which the original rowid can change.

  • An update of the partition key can move the row to a different partition (see the sketch below).
  • When updating a row in a table compressed with Hybrid Columnar Compression, the ROWID of the row may change.

15 – An update with RETRY_ON_ROW_CHANGE hint is retried, if ORA_ROWSCN changed

doc: RETRY_ON_ROW_CHANGE

So far it is not clear what the RETRY_ON_ROW_CHANGE hint is good for. It was discussed that it might be used in connection with edition based redefinition (EBR) and cross edition triggers.

16 – parallel UPDATE is not supported for temporary tables.

Same goes for DELETE and MERGE.

Potential test code (not verified yet)


ALTER SESSION ENABLE PARALLEL DML;

create global temporary table t1 (col1 number, col2 varchar2(100));
insert into t1
select level as col1, '0' as col2
from dual connect by level <= 100000;

update /*+ parallel(t1, 4) */ t1
set col2 = sys_context('userenv','sid')
where col1 < 90000;

select col2, count(*) cnt from t1 group by col2;

drop table t1;

COL2 CNT
---- -----
612  89999
0    10001

We see only one session (sid=612), so the conclusion is that the parallel hint was ignored.

The code above is how I think this can be tested. However the test also needs to verify that, when using a real table, more than one session is used and reported via sys_context. I couldn’t verify that yet (it didn’t work on LiveSQL, and parallel is not an option on Standard Edition, so don’t try it there).

17 – it is possible to update remote lobs

The 12.2 new features guide has some information ( 12.2 new features guide)

and there is a direct note in the description of the UPDATE command: 12.2. sql reference (UPDATE)

Starting with Oracle Database 12c Release 2 (12.2), the UPDATE statement accepts remote LOB locators as bind variables. Refer to the “Distributed LOBs” chapter in Oracle Database SecureFiles and Large Objects Developer’s Guide for more information.

SecureFiles: Distributed LOBs

So far I didn’t have the chance to test it. But it looks useful.

18 – To update an identity column is not allowed


create table t1 (id number generated as identity, name varchar2(100));
insert into t1(name) values ('Fred');
insert into t1(name) values ('Wilma');
insert into t1(name) values ('Barney'); 

update t1
set id = 4
where name = 'Fred';

ORA-32796: cannot update a generated always identity column

Also a virtual column can not be updated. However an invisible column can – unless it is virtual or an identity column.

The identity restriction is one of the main reasons why I prefer to create a column as DEFAULT ON NULL with a value from a sequence.

create sequence t1_seq;

create table t1 (id number default on null t1_seq.nextval primary key, name varchar2(100));

19 – the number of updates against a table can be seen in xxx_TAB_MODIFICATIONS


select inserts, updates, deletes, truncated, timestamp
from USER_TAB_MODIFICATIONS
where table_name = 'MYTABLE';

INSERTS UPDATES DELETES TRUNCATED TIMESTAMP
763     15799   761     NO        07.11.18 12:29:18

Data is tracked since the last time the statistics gathering job updated statistics, or more precisely, since the statistics job decided that the data in that table was stale. Sometimes this corresponds to the LAST_ANALYZED column in xxx_TAB_STATISTICS.

Two interesting blog posts that cover this useful feature:

Ulrike Schwirn (in German): Tabellen Monitoring mit DBA_TAB_MODIFICATIONS und SYS.COL_USAGE$

Martin Widlake: DBA_TAB_MODIFICATIONS

20 – at least 19 SQL commands have an “UPDATE” keyword

The following SQL commands can include “UPDATE” as a syntax keyword in some of their clauses. The SELECT command has three different clauses. A command that merely allows a SELECT (and therefore also a SELECT … FOR UPDATE) inside it is not counted for its select clause.

I’m not sure if the list is complete, but I searched through all syntax diagrams of 18.1. Feel free to comment if you know of another statement that allows a specific UPDATE keyword. Maybe there is something new in 18.3.

COMMAND                     CLAUSE                                ADDITIONAL INFO
ADMINISTER KEY MANAGEMENT   update secret                         Doc: 18.1 SQL ref
ALTER AUDIT POLICY          add|drop actions update
ALTER INDEX                 update block references               for IOTs only
ALTER TABLE                 update indexes                        avoids indexes becoming UNUSABLE
ANALYZE                     validate ref update                   compare and correct rowids for REF values
AUDIT|NOAUDIT               update table|view|mv                  audit of the update command
CREATE AUDIT POLICY         update table|view|mv                  audit (new version) of the update command
CREATE MATERIALIZED VIEW    with primary key for update           creates an updateable MV
CREATE OUTLINE              on update
CREATE PLUGGABLE DATABASE   container_map update                  partitions created in cdb$root or application root are also updated in the new PDB
CREATE TRIGGER              before|after update
EXPLAIN PLAN                for update
GRANT|REVOKE                update on table|update any table|update any cube|…
LOCK TABLE                  share update                          same as ROW SHARE; lock modes ROW SHARE and SHARE UPDATE
MERGE                       when matched then update
SELECT                      for update                            locks the selected rows
SELECT                      model return updated rows;            part of the model clause
                            model … rules update
UPDATE                      whole command

dbms_scheduler 12c/18c – run EXTERNAL_SCRIPT

Introduction

With 12c we have several new job types for our scheduler jobs. One of them is EXTERNAL_SCRIPT. The other new job types are SQL_SCRIPT and BACKUP_SCRIPT.

From Oracle 12.2 plsql packages and type reference

‘EXTERNAL_SCRIPT’

This specifies that the job is an external script that uses the command shell of the computer running the job. For Windows this is cmd.exe and for UNIX based systems the sh shell, unless a different interpreter is specified by prefixing the first line of the script with #!.

In the past we could run an external script using the EXECUTABLE job type. This type is still available.

‘EXECUTABLE’

This specifies that the job is going to be run outside the database using an external executable. External jobs are anything that can be executed from the command line of the operating system. Anydata arguments are not supported with a job or program type of EXECUTABLE. The job owner must have the CREATE EXTERNAL JOB system privilege before the job can be enabled or run.

In general both options can do the same thing: execute something on the host OS.
So why should we change anything? Is there a difference?

Let’s find out.

schedule an EXTERNAL_SCRIPT

setup credentials first

We can create credentials using a dbms package or via SQL Developer.

begin
  dbms_credential.create_credential(
      credential_name => 'ORACLE_OS_CREDS',
      username        => 'oracle',
      password        => 'oracle',
      comments        => 'run scripts using oracle OS account');
end;
/

In the developer VM box, the password is always oracle. That’s why I included it here. You need to use your own correct password.

For real-world environments I suggest creating a specific OS account that is only allowed to execute the script and to do whatever needs to be done for this specific task, but not more. Under Windows this account might need the “Log On As Batch Job” right (support note #2065024.1).

For demonstration purposes I stay here with the oracle credentials.

setup a scheduled job to run a linux script

Of course this works under Windows too, but I tested it only using Oracle Linux.

Here I set up four slightly different examples of how to run an EXTERNAL_SCRIPT job. After that we check and compare the output.

These are the four test scenarios:

  1. run a simple bash script.
  2. run a script, that has an error
  3. run a script with an error, but an exitcode=0
  4. same as 3. and use undocumented FAIL_ON_SCRIPT_ERROR argument

To test what happens if the script itself has an error, I added a change directory command pointing to a non-existent directory.
This command will result in an error.

cd /abcd/efgh/ijk

Here is the code to schedule the four jobs. Each job has a slightly different name, and the differences between one job and the previous one are marked.

-- First test a script that does not produce an error

declare
  v_jobname     varchar2(200);
  v_good_script clob;
begin
   v_jobname := upper('DEMO_SCHEDULED_EXTERNAL_SCRIPT_NO_ERROR');

  -- the following line breaks are important.
  -- Do NOT remove them, they are part of the linux script.
   v_good_script := 
$IF DBMS_DB_VERSION.VER_LE_12_2 $THEN
   '#!/bin/bash
'||
$END
'echo "Job ok!"';

   dbms_scheduler.create_job(job_name   => v_jobname,
                              job_type   => 'EXTERNAL_SCRIPT',
                              job_action => v_good_script,
                              credential_name => 'ORACLE_OS_CREDS',
                              enabled    => false,
                              auto_drop  => false
                              );
   -- run the job
   dbms_scheduler.enable(v_jobname);         

end;
/


PL/SQL procedure successfully completed.

-- Now test a script that does produce an error


declare
  v_jobname    varchar2(200);
  v_bad_script clob;
begin
   v_jobname := upper('DEMO_SCHEDULED_EXTERNAL_SCRIPT_WITH_ERROR');

   v_bad_script :=
$IF DBMS_DB_VERSION.VER_LE_12_2 $THEN
'#!/bin/bash
'||
$END
'cd /abcd/efgh/ijk';

   dbms_scheduler.create_job(job_name   => v_jobname,
                              job_type   => 'EXTERNAL_SCRIPT',
                              job_action => v_bad_script,
                              credential_name => 'ORACLE_OS_CREDS',
                              enabled    => false,
                              auto_drop  => false
                              );
   dbms_scheduler.enable(v_jobname);
end;
/

PL/SQL procedure successfully completed.

-- Now test a script that does produce an error but uses exit 0

declare
  v_jobname    varchar2(200);
  v_bad_script clob;
begin
   v_jobname := upper('DEMO_SCHEDULED_EXTERNAL_SCRIPT_WITH_ERROR_EXIT0');

 v_bad_script :=
$IF DBMS_DB_VERSION.VER_LE_12_2 $THEN
'#!/bin/bash
'||
$END
'cd /abcd/efgh/ijk
exit 0';

   dbms_scheduler.create_job(job_name   => v_jobname,
                              job_type   => 'EXTERNAL_SCRIPT',
                              job_action => v_bad_script,
                              credential_name => 'ORACLE_OS_CREDS',
                              enabled    => false,
                              auto_drop  => false
                              );

   dbms_scheduler.enable(v_jobname);
end;
/

PL/SQL procedure successfully completed.

-- run script using attribute FAIL_ON_SCRIPT_ERROR
declare
  v_jobname    varchar2(200);
  v_bad_script clob;
begin
   v_jobname := upper('DEMO_SCHEDULED_EXTERNAL_SCRIPT_WITH_ERROR_FAILONERROR');

 v_bad_script :=
$IF DBMS_DB_VERSION.VER_LE_12_2 $THEN
'#!/bin/bash
'||
$END
'cd /abcd/efgh/ijk
exit 0';

   dbms_scheduler.create_job(job_name   => v_jobname,
                              job_type   => 'EXTERNAL_SCRIPT',
                              job_action => v_bad_script,
                              credential_name => 'ORACLE_OS_CREDS',
                              enabled    => false,
                              auto_drop  => false
                              );
  -- Make sure script errors result in a job error and are noticed.
   dbms_scheduler.set_attribute(
            name => v_jobname,
            attribute => 'FAIL_ON_SCRIPT_ERROR',
            value => true);                              

   -- run the job
   dbms_scheduler.enable(v_jobname);
end;
/
PL/SQL procedure successfully completed.


Note that all anonymous blocks executed successfully. This only means that we could create and start a scheduled job. It does not tell us what the result of the job was.

To find the job result, we need to check the data dictionary view all_scheduler_job_run_details. The information is in there but only AFTER the job finished.

Results

The data dictionary gives us information about the result of the scheduler runs in the view ALL_SCHEDULER_JOB_RUN_DETAILS.


select replace(job_name,'DEMO_SCHEDULED_EXTERNAL_SCRIPT') as job_name,
       log_date, status, output, error#, errors, additional_info
from all_Scheduler_job_run_details
where job_name like 'DEMO_SCHEDULED_EXTERNAL_SCRIPT%'
--and log_date >= sysdate - interval '5' minute
order by log_date desc
;

For presentation purposes I flipped rows and columns, so the following output shows the columns from the DD view as rows.
(screenshot: result_run_details_transposed)

Let’s go through the results step by step.

The first testcase did not have an error: status of the run = SUCCEEDED. The output column also shows the stdout output, which is nice. So there is no need to spool the output into an extra file just to be able to see later what happened. The same column is also used for jobs of type SQL_SCRIPT to return dbms_output.

The second testcase had an error. The job status correctly reported that the script errored with exit code = 1 (column error#). And we see in the errors column what kind of error happened.

"/tmp/job_2078996_3568888_script: line 2: cd: /abcd/efgh/ijk: No such file or directory
"

We also see that the script itself was created as a file in the /tmp folder, using job and log id for the file name “job_2078996_3568888_script”.
Also note that the additional_info column says

"EXTERNAL_LOG_ID="job_2078995_3568886",
ORA-27369: job of type EXECUTABLE failed with exit code: Operation not permitted
"

This is slightly misleading, since the job_type was EXTERNAL_SCRIPT and not EXECUTABLE. And “Operation not permitted” could lead us to suspect some issue with privileges (might be the case when “cd” doesn’t work) or with the credentials (definitely not the case here).

The third testcase had an error in the script, however it finished with exit 0 (= success). It makes sense that in this case the job run status is also marked as SUCCEEDED. However the error “no such file or directory” can still be found in the errors column.

The fourth testcase uses a new feature. I’m not sure if it is already there in the 12.1 db version; all my tests were done under 12.2. The all_scheduler_jobs view has a new column FAIL_ON_SCRIPT_ERROR. It defaults to FALSE. We can set this as an attribute for the scheduled job.


   dbms_scheduler.set_attribute(
            name => v_jobname,
            attribute => 'FAIL_ON_SCRIPT_ERROR',
            value => true);

This is currently undocumented, but it works, as the demo above shows. I believe it is a documentation bug: the new column in the view is documented, but FAIL_ON_SCRIPT_ERROR is not in the list of allowed attributes.

The result is that even with EXIT=0, the job status goes to FAILED. We also see a different error number: 27382 instead of 1. 27382 seems to be the ORA error number that we also see in the additional_details column. This column now says “ORA-27382: job type EXTERNAL_SCRIPT has errors in the job action”. Not a major difference, but slightly better additional_details when we use this attribute.

Quite some interesting behaviour. So if we want to run a script that always exits with 0, we can still make our job go to FAILED if there are any errors inside that script.

I would guess that we find the same behaviour for the other new job_types SQL_SCRIPT and BACKUP_SCRIPT. I didn’t test it yet.

Comparison between EXTERNAL_SCRIPT and EXECUTABLE

Executable is the old way.

It requires configuring and starting an external agent. Often this is a fairly complex task. It involves configuration of the listener, tnsnames.ora, extproc.ora, externaljob.ora and some other parts. Those are usually not under the control of a developer. Additionally, all external jobs that are executed using this agent run with the same OS privileges. Using credentials gives us a little more control.

Starting from 12.1 such external jobs can alternatively run with credentials, same as I already showed for running external scripts. So the credentials argument is only a half-baked one.

It is also difficult to track down an error in case something goes wrong. This is where EXTERNAL_SCRIPT seems much better than EXECUTABLE. I have to admit I did only some very short tests on this, but I never found a disadvantage of running a scheduled job as EXTERNAL_SCRIPT instead of EXECUTABLE.

So if you have a choice go with the newer option.

Integrate it into plsql including status check

How do we set up a plsql procedure that is able to run an external_script, but also raises an exception in case the script runs into errors?

Let's assume the external script is something like this:

# set environment
export PATH=/usr/local/bin:$PATH
export ORACLE_SID=XE
export ORAENV_ASK=NO
. /usr/local/bin/oraenv

CLASSPATH=$CLASSPATH:$APEX_HOME/utilities
CLASSPATH=$CLASSPATH:.
export CLASSPATH

cd /opt/jasper/report1

java -jar runJasperReport.jar
exit

So this executes some jar file. I don’t want to wait until the java logic is finished, but I want to be informed if something basic goes wrong – like if the jar file couldn’t be found.

Here is an example that I used in some APEX application. After the job is started, the code hangs around for a couple of seconds and checks if anything surprising happened. The code to start the job itself is not included. But you can assume that it is done in the same packaged procedure. The script is in the v_script variable.

Do not copy it 1:1, but understand it and adapt it to your needs.


...
    v_jobname             varchar2(128);
    r_job_details         all_scheduler_job_run_details%rowtype;
    r_job                 all_scheduler_jobs%rowtype;
    c_max_check_job_tries constant binary_integer :=3;
    v_message             varchar2(1000);
    v_script              clob;
begin
...
   -- check if the scheduled job did successfully start.
   -- In case some error happened after starting the job, we might not notice it, since it runs in the background.
   -- e.g. ORA-27369: job of type EXECUTABLE failed with exit code: No such file or directory
   -- In such a case check the scheduler data dictionary almost immediately after job creation.
   apex_util.pause(p_seconds => 0.5);
   
   <<get_job_info>>
   for i in 1..c_max_check_job_tries loop
      -- constant is set to 3, so max. wait time = 3.5 seconds.

        -- read info about scheduled job
        -- started/running jobs are in scheduler_jobs, finished jobs including results are in scheduler_job_run_details
        begin
          select * into r_job from all_scheduler_jobs where job_name = upper(v_jobname);
          logger.trace(p_message=>'Job "'||v_jobname||'" created with state='||r_job.state );
       exception
         when no_data_found then
           -- Job might have stopped already, check run details!
           r_job.state := 'NOT FOUND';
       end;    

       if r_job.state in ('RUNNING','SCHEDULED') then
         -- RUNNING+SCHEDULED => Looks ok, Job runs,just to make sure wait for another few seconds
         apex_util.pause(p_seconds =>  1);

       else
         begin
           -- for any other state check details
           select * into r_job_details from all_scheduler_job_run_details where job_name = upper(v_jobname);
           logger.trace(p_message=>'Job Details "'||v_jobname||'" with status='||r_job_details.status );
           if r_job_details.status= 'FAILED' or r_job.state= 'FAILED'  then
                v_message := 'Job "'|| v_jobname||'" with Error!';
                if r_job_details.additional_info like '%ORA-27369%' or r_job_details.additional_info like '%ORA-27382%' then
                  v_message := v_message ||' Jobaction='||v_script;
                end if;
                pk_logging.pr_logError(p_message=>v_message);
                apex_error.add_error(   p_message => v_message,
                                        p_additional_info => r_job_details.additional_info,
                                        p_display_location => apex_error.c_inline_in_notification --apex_error.c_on_error_page
                                        );
                -- step out of loop and raise an error using the OS error message
                Raise_application_error(-20001, r_job_details.errors);
           end if;
         exception
           when no_data_found then
             -- Job not started yet or just about to finish...
             -- consider to wait a few sec first. Then raise an error if job still not there.
             -- last try?
             if i=c_max_check_job_tries then
                logger.error(p_message=>'Job "'||v_jobname||'" was not started!');
                Raise_application_error(-20001, 'Warning! Job "'|| v_jobname||'" wasn''t started (yet)! Check application log!');
             else
                -- wait 1 second until job is hopefully created
                apex_util.pause(p_seconds =>  1);
             end if;
         end;
       end if;
   end loop get_job_info;
   if r_job.state = 'SCHEDULED' then
      -- still scheduled? inform user
      Raise_application_error(-20001, 'Warning! Job "'|| v_jobname||'" needs longer than expected to start. Please monitor closely and inform the administrator!');
   end if;
...

18c issue: ORA-27369: job of type EXECUTABLE failed with exit code: No child processes

In 18c my external scripts encountered an error

10 Command not found

EXTERNAL_LOG_ID=”job_xxx”,
ORA-27369: job of type EXECUTABLE failed with exit code: No child processes

Something did change.

I found out that beginning with 18c we must not set the shell at the start of the script. Simply remove the line ‘#!/bin/bash’. I changed the test code above and added conditional compilation, so that the test scripts should run in all db versions.

Security considerations

Running external jobs is always something where we need to take extra care – so that we do not put holes into our security defense system.

I carefully watch out for two major security risks:
1) If we add something dynamically to the script that we are executing, like an extra parameter, we must sanitize all inputs. Otherwise we could get some kind of injection problem.

2) The agent/account that runs our script should get only the least needed privileges. So it should NOT run under user oracle, like it did in the demo.
Create a separate account for that, and name the account something that logically points to the task it is supposed to do.

conclusion

The new job_type EXTERNAL_SCRIPT is useful.

Some of the hurdles that developers face when trying to run a host command are lowered. The script itself does not need to be deployed on the database server.

Reacting to errors in the script is possible, but we need to check the correct columns and use the proper settings.

cleanup demo jobs


-- cleanup -- remove all the jobs

execute dbms_scheduler.drop_job(job_name => 'DEMO_SCHEDULED_EXTERNAL_SCRIPT_NO_ERROR');
execute dbms_scheduler.drop_job(job_name => 'DEMO_SCHEDULED_EXTERNAL_SCRIPT_WITH_ERROR');
execute dbms_scheduler.drop_job(job_name => 'DEMO_SCHEDULED_EXTERNAL_SCRIPT_WITH_ERROR_EXIT0');
execute dbms_scheduler.drop_job(job_name => 'DEMO_SCHEDULED_EXTERNAL_SCRIPT_WITH_ERROR_FAILONERROR');

Fetch module name from line number in package with pl/scope

Here is a small statement I am using to find the name of a submodule based upon error stack data (utl_call_stack.error_line(x) or dbms_utility.format_error_stack).

The function IDENTIFY_MODULE helps to diagnose errors. If an error happens in plsql the error stack returns only the name of a package (=unit) and the line number. Using this line number we can look up the pl/scope information and make a solid guess about the module name. For various reasons this is only a good guess, not a guarantee (see problem section below).

solution

The following code snippets will only work if the relevant packages were compiled with PLSCOPE_SETTINGS=’IDENTIFIERS:ALL’.

ALTER SESSION SET plscope_settings='identifiers:all';

SQL statement

fetch the nearest procedure or function

select * --type, name, signature, line, usage_id, usage_context_id
from all_identifiers
where object_name = :PACKAGE_NAME
and object_type= 'PACKAGE BODY'
and usage in ( 'DEFINITION','DECLARATION' )
and type in ('PROCEDURE','FUNCTION')
and line <= :LINE_NUMBER
order by line desc
fetch first 1 row only
;

This will find procedures and functions in our code that were declared just before the line of error.

There are many cases where we get some false positives.

But it is a good start.

plsql enrichment

Using a little bit of plsql we can make this logic more robust and even get data about submodule hierarchies.

Put this function in the instrumentation package of your choice (for example logger) and then use it to improve log information. How this is done in detail is out of scope for this post.

create or replace function identify_module (p_owner in varchar2, p_unit in varchar2, p_line in number) return varchar2
is
  /**************************************************************
  /* Name          :  identify_module
  **************************************************************
  * description    :  uses PL/Scope to get additional info about the module name of a source code line
  *  This only works reliably for code compiled with optimization level 1.
  *  Higher optimization levels might move code lines. The line reported in error and backtrace stacks (run time) can differ from the lines stored in PL/Scope or user_source (compile time).
  *
  * @author: Sven Weller, syntegris information solutions GmbH
  **************************************************************
  * parameter      
  * @param  : p_owner = schema name of unit
  * @param  : p_unit = package name
  * @param  : p_line = line of code for which we would like to see the name of the module
  * @return : concatenated submodule names  
  **************************************************************/
 cursor c_search_by_line (cv_owner in varchar2, cv_unit in varchar2, cv_line in number)
  is
    select /*+ first_rows(1) */ i.type, i.name, i.line, i.usage_id, i.usage_context_id, i.usage, i.signature
    from ALL_IDENTIFIERS i
    where i.owner = cv_owner
    and i.object_name = cv_unit
    and i.object_type = 'PACKAGE BODY'
    and i.line <= cv_line 
    -- context must be in same package body
    and i.usage_context_id in (select i2.usage_id from ALL_IDENTIFIERS i2 where i2.owner = cv_owner and i2.object_name = cv_unit and i2.object_type = 'PACKAGE BODY')
    order by line desc, usage_id asc
    ;
  
  cursor c_search_by_usage (cv_owner in varchar2, cv_unit in varchar2, cv_usage_id in number)
  is
    select /*+ first_rows(1) */ type, name, line, usage_id, usage_context_id, usage, signature
    from ALL_IDENTIFIERS
    where owner = cv_owner
    and object_name = cv_unit
    and OBJECT_TYPE = 'PACKAGE BODY'
    and usage_id = cv_usage_id 
    order by decode (usage, 'DEFINITION',1, 'DECLARATION', 2, 3), line desc,  usage_id asc
    ;

  r_result_byLine  c_search_by_line%rowtype;
  r_result_byUsage c_search_by_usage%rowtype;
  r_last_result    c_search_by_usage%rowtype;
  v_owner          all_identifiers.owner%type;
  v_modul_name     all_identifiers.name%type;
  v_first_type     all_identifiers.type%type;
  v_max_hierarchy  binary_integer := 5;
begin
  -- If owner is missing, use the current schema
  v_owner := coalesce(p_owner,sys_context('userenv','current_schema'));
  
  -- find the closest line and check its context.
  open c_search_by_line(v_owner, p_unit, p_line);
  fetch c_search_by_line into r_result_byLine;
  close c_search_by_line;

  if r_result_byLine.usage_context_id = 1 then
    -- we seem to be already in the main package body.
    -- this can be either a problem during a parameter call
    -- or the error happened in the initialization part of the package
    case r_result_byLine.usage 
      when 'DEFINITION' then
        v_modul_name :=r_result_byLine.type||' '||p_unit||'.'||r_result_byLine.name;
      when 'DECLARATION' then
        v_modul_name :='declaration of '||r_result_byLine.type||' '||p_unit||'.'||r_result_byLine.name;
      else 
        v_modul_name :='body of '||p_unit;
    end case;
    
  else    
      r_result_byUsage := r_result_byLine;
      --r_result_byUsage.usage_context_id := r_result_byLine.usage_id;
      
      -- find module names
      <<parent_modules>>
      loop 
        if r_result_byUsage.usage in ('DEFINITION', 'DECLARATION')  
           and r_result_byUsage.type in ('PROCEDURE','FUNCTION')
           and (r_last_result.signature != r_result_byUsage.signature or r_last_result.signature is null)
           then
             -- concat multiple submodule names
             v_modul_name := r_result_byUsage.name
                            ||case when v_modul_name is not null then '.'||v_modul_name end;
             v_first_type := coalesce(v_first_type, r_result_byUsage.type);
             -- remember result to compare if we get duplicate entries because of declaration->definition
             r_last_result := r_result_byUsage;
        end if;
        -- stop when package body level is reached
        exit when r_result_byUsage.usage_context_id in (0, 1) or v_max_hierarchy = 0;


        -- it seems to be a submodule, so do an additional call and fetch also the parent module
        open c_search_by_usage(v_owner, p_unit, r_result_byUsage.usage_context_id);
        fetch c_search_by_usage into r_result_byUsage;
        close c_search_by_usage;
    
        
        -- safety counter to prevent endless loops
        v_max_hierarchy := v_max_hierarchy - 1;
      end loop parent_modules;  
    
      -- add info about type (FUNCTION/PROCEDURE)
      if v_modul_name is not null then
        v_modul_name :=v_first_type||' '||p_unit||'.'||v_modul_name;
      --else   
      --  v_modul_name := '--no submodule found--';
      end if;
  end if;  
  return v_modul_name;
exception
  when no_data_found then
    return null;
end identify_module;
/

Example

Check the result for each line of some test package.
You can run this example yourself in LiveSQL .

The function had to be modified slightly to use USER_IDENTIFIERS instead of ALL_IDENTIFIERS to be able to run in LiveSQL.

select line, identify_module(user, name, line) , text 
from user_source
where name='TEST_PACKAGE_FUNC_PROC'
and type = 'PACKAGE BODY';

Result

LINE	IDENTIFY_MODULE(USER,NAME,LINE)	TEXT
1		"package body Test_Package_Func_Proc "
2		"as "
3	declaration of VARIABLE TEST_PACKAGE_FUNC_PROC.GLOBAL_VAR	" global_var number := 0;"
4	FUNCTION TEST_PACKAGE_FUNC_PROC.TEST_FUNC	" function test_func (in_val in number) return number  "
5	FUNCTION TEST_PACKAGE_FUNC_PROC.TEST_FUNC	" is "
6	FUNCTION TEST_PACKAGE_FUNC_PROC.TEST_FUNC	" begin "
7	FUNCTION TEST_PACKAGE_FUNC_PROC.TEST_FUNC	" return in_val; "
8	FUNCTION TEST_PACKAGE_FUNC_PROC.TEST_FUNC	" exception "
9	FUNCTION TEST_PACKAGE_FUNC_PROC.TEST_FUNC	"  when others then "
10	FUNCTION TEST_PACKAGE_FUNC_PROC.TEST_FUNC	"    RAISE; "
11	FUNCTION TEST_PACKAGE_FUNC_PROC.TEST_FUNC	" end; "
12	FUNCTION TEST_PACKAGE_FUNC_PROC.TEST_FUNC	""
13	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC	" procedure test_proc  (in_val in number) "
14	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC	" is "
15	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC.SUBMODULE	"   procedure submodule( in_val in number) is"
16	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC.SUBMODULE	"   begin"
17	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC.SUBMODULE	"      dbms_output.put_line (in_val); "
18	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC.SUBMODULE	"   end;  "
19	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC.SUBMODULE	" begin "
20	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC	"   submodule(in_val); "
21	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC	" exception "
22	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC	"  when others then "
23	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC	"    RAISE; "
24	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC	" end; "
25	PROCEDURE TEST_PACKAGE_FUNC_PROC.TEST_PROC	"begin"
26	body of TEST_PACKAGE_FUNC_PROC	"  global_var := 1;"
27	body of TEST_PACKAGE_FUNC_PROC	"end; "

The test package was copied and modified based upon Hemant K Chitale's “Function and Procedure in Package” LiveSQL demo.

Some problems

  • Currently only meant for package bodies
  • Compiler optimization can move code. That means the line number of an error at runtime is not the same line number as during compile time. PL/Scope only gives us compile time information. So does all_source.
    The only (bad) workaround is to compile with optimization level 1 and recreate the error, so that the correct line of error is shown.
  • Errors that happen in the declaration part of a package can not always be resolved. We might get a false positive for the previously declared object.
  • Gives no information when the package body is wrapped.


10 Oracle plsql things you probably didn’t know

Many people enjoyed reading my last blog post “10 Oracle SQL features you probably didn’t know”. So I decided to spice it up a little more and do something similar for plsql.

I hope you like that one too.

Without further ado, let's get started with the list.

10. The first Oracle version to feature plsql was Oracle DB version 6 (1988)

And no. Steven Feuerstein did NOT invent it.

At that time PLSQL did not have stored procedures nor did it have proper exception handling. But it already had embedded SQL.

I learned that from the great Lewis Cunningham. One of the godfathers of development with SQL and PLSQL.

Stored procedures were added in Oracle 7 (1992). 7.3 was the version with which I started to work on an Oracle Database. At that point plsql was at version 2.x. However there never was a version 3: plsql versioning jumped to 8 when Oracle DB version 8 was introduced, and plsql versioning was aligned with the db versions. So there are no plsql versions 3-7. But honestly? Nobody cares anymore that plsql has its own versioning.

9. labels do not need to match

We can use <<labels>> in plsql. Mostly to increase readability of code. This is especially useful for loop constructs, but it also works for normal begin..end blocks.

begin
 <<mainloop>>
 loop
   <<check_some>>
   begin 
     <<dummyloop>>
     for r in (select * from dual) loop
       -- do nothing
       null;
     end loop dummyloop;
   end check_some;
   exit when 1=1;
 end loop mainloop;
end; 
/

As we can see there are several <<labels>>. And the usage of those labels at the “end” helps to distinguish which code part we are looking at [1].

But this is only as good as the programmer is!

Unfortunately this works too:

begin
 <<mainloop>>
 loop
   <<check_some>>
   begin 
     <<dummyloop>>
     for r in (select * from dual) loop
       -- do nothing
       null;
     end loop mainloop;
   end check_sum;
   exit when 1=1;
 end loop dummyloop;
end; 
/

Here I mixed up the labels from the loops. And the “end check_sum” does not match the label at the beginning of the block. In fact the “label” at the end can be anything that is not a reserved word.

It runs identically to the previous code (still doing nothing). But it is way more confusing for the “future me” who has to maintain this mess.

8. pragma SERIALLY_REUSABLE

During the lifetime of a session, the package state (package variables, open cursors, etc.) is held in the UGA (user global area). Subsequent calls in the same session to the same package profit from that by not needing to reinitialize the package state.

The pragma SERIALLY_REUSABLE is able to change this behaviour.

serially_reusable packages

After the work unit (server call) of a SERIALLY_REUSABLE package completes, Oracle Database does the following:

  • Closes any open cursors.
  • Frees some nonreusable memory (for example, memory for collection and long VARCHAR2 variables)
  • Returns the package instantiation to the pool of reusable instantiations kept for this package.

Essentially this means that the package state exists only during the package call, not for the whole session.

So far I never had the need to use this pragma. But I can imagine some very very special situations, where this might become interesting.

7. You can compile a package even while another session is running it

In general this is not possible. Assume a package is currently running. Or to say it in technical terms: we have an active session (Session A) executing a packaged procedure.

A second session (Session B) trying to do an ALTER PACKAGE … COMPILE (BODY) would wait until session A finishes and will then try to compile the package. Upon success, session A will get a “package state has been discarded” error message as soon as it tries to run the same package again. The try after that would succeed and will use the new package version. At that time the package state has been discarded from the session and the new instantiation can be loaded.

This is all documented and well known behaviour.

We face three potential issues with this

  1. Session B needs to wait.
  2. Session A might get an error.
  3. Session B might wait so long that the developer decides to kill the client (closing SQL Developer), thereby making everything ten times worse, because the compile call remains valid on the database session level, blocking all following attempts to run or compile the package.

EBR for the rescue!

Using Edition Based Redefinition (EBR) we can circumvent those issues.

Both sessions just need to use different editions. EBR allows us to store and run different code versions of the same plsql based object in the same database.

 

preparation

First create a package with a long running procedure. My example uses a procedure that runs for exactly 1 minute.


create or replace package myPck is
  procedure runMinute;
end myPck;
/

create or replace package body myPck is
   procedure runMinute is
   begin
     sys.dbms_lock.sleep(60);
   end runMinute;
end myPck;
/

 

Set up an edition DEV$ALPHA that is a child of the default edition (ORA$BASE).


create edition DEV$ALPHA;

 

You need an edition enabled schema to do this.

This is simple to do, but to explain EBR in more detail is beyond the scope of this blog post.

Example scenario

Let's run a few commands in two different sessions.

Session A resembles a USER/TESTER who currently executes the packaged function.

Session B resembles a DEVELOPER who wants to deploy a new version of the package.

Session A does this

set time on

alter session set edition=ORA$BASE;

execute myPck.runMinute;

Session B was started already; after the execute in Session A, run the following script in Session B.

alter session set edition=DEV$ALPHA;

-- add a new procedure to the package

create or replace editionable package myPck …
/

create or replace editionable package body myPck …
/

-- run the new procedure

exec myPck.run5secs;

The result: Session B finishes way before Session A completes its 1 minute run.

(screenshot: ebr_run_sessions)

q.e.d.

6. call a (pipelined) table function without the TABLE operator

It works only from 12.2 onwards. It is more of a SQL feature than a plsql one.

This will make a table function look indistinguishable from a parametrized view.

Example: split_string

First let's create a simple little table function. This one just converts a delimited list into rows.

create or replace function split_string
 (p_str IN VARCHAR2
 ,p_delimiter IN VARCHAR2 default ','
 ) RETURN sys.odcivarchar2list PIPELINED 
IS
/** Function to split strings based upon delimiter
*
* @author Sven Weller
*
* @param p_str input string 
* @param p_delimiter delimiter string, default =, Delimiter should only be 1 char.
* @return list of strings
*
*/
 v_entry varchar2(4000);
 v_remaining_str varchar2(4000);
BEGIN
  -- input string needs to hold something to be able to split
  if p_str is not null then
     <<steps>>
    for i in 1..regexp_count(p_str,'\'||p_delimiter)+1 loop
      -- search + split
      v_entry := rtrim(regexp_substr(p_str,'[^\'||p_delimiter||']*('||p_delimiter||'|$)',1,i),p_delimiter);
      pipe row(v_entry);
    end loop steps; 
  else raise no_data_found; 
  end if; 

END split_string;
/

 

function created.

in 11g we call the function like this:

select * from TABLE(split_string('A:BB::CCC',':'));

in 12.2 we can now call it like that:

select * from split_string('A:BB::CCC',':');

As you can see the TABLE row source operator is gone. And it still works! The results of both statements are identical.

COLUMN_VALUE
A
BB
CCC

Want to test it? I made an example on livesql.com.

At the moment this is an undocumented 12.2 feature. So don’t use it for production code (yet). I quite like it. Less code is better! It might become some de-facto standard (similar to connect by level) and eventually will make it into the documentation.

5. dot notation for parameters

We can refer to parameters using the name of the module that declared them. This is useful when we need to distinguish a parameter from a column name.

example


create or replace function myFancyFunc (dummy in varchar2) return number
is
  ret number := 0;
begin
  begin
    select 1 into ret
    from dual
    where dummy = myFancyFunc.dummy
    and rownum = 1;
  exception
    when no_data_found then null;
  end;
  return ret;
end myFancyFunc;
/

Function MYFANCYFUNC compiled

select myFancyFunc('X') from dual;

1

select myFancyFunc('Y') from dual;

0

The function simply compares the value in the dummy column of the dual table to the value we input. If instead we just compared dummy=dummy, then we would always get 1 as a result, no matter what the input is. Even if we add an alias to the table and prefix the column with the alias, the non-aliased “dummy” will still be interpreted as a column.

This behaviour is documented: Oracle Doc 12.1 – plsql name resolution

If a SQL statement references a name that belongs to both a column
and either a local variable or formal parameter,
then the column name takes precedence.

Interestingly we can also use labels on block level for specifying variables that are defined in this block.

set serveroutput on
<<main>>
declare
  dummy varchar2(10) := 'Y';
begin
  <<block1>>
  declare
    dummy varchar2(10) := 'X';
  begin
  <<block2>>
    declare
    dummy varchar2(10) := 'A';
    begin
      select dummy
      into dummy
      from dual
      where dummy = block1.dummy;

      dbms_output.put_line('MainBlock:'||main.dummy);
      dbms_output.put_line('Block1:'||block1.dummy);
      dbms_output.put_line('Block2:'||block2.dummy);
    end block2;
  end block1;
end main;
/

PL/SQL procedure successfully completed.

MainBlock:Y
Block1:X
Block2:X

Without dot notation the innermost variable (block2) is used – as we can see in the INTO part. And we can reference a different variable with the same name from a “higher” declaration by using the dot notation.

 

4. variable names can be emojis

example

set serveroutput on
declare
  "💩"exception;
  pragma exception_init("💩",-20001);

  "⌚" timestamp := systimestamp;
  "🕑"interval day to second;
  "🎲"number;
  "💤"number := 2;
begin
  "🎲":= round(dbms_random.value(1,6));
  for "🔜"in 1.."🎲"loop
    dbms_lock.sleep("💤");
  end loop;
  "🕑":= systimestamp - "⌚" ;
  dbms_output.put_line('Slept for '|| "🕑");
exception
  when "💩"then
    dbms_output.put_line('Sorry something bad happend!');
    raise "💩";
end;
/

PL/SQL procedure successfully completed.

Slept for +00 00:00:08.049000

The source code looks a little bit different in sql developer. But trust me. I simply copy&pasted it from there to here.

emojicode

To make this work you need to use a font that supports emojis/symbols. I used the font “Segoe UI Symbol”. It is supposed to look better on Windows 10[3].

If you are a hard core emoji lover then I suggest to have a look at emojicode.org

It is an emoji based programming language, which did not make it into the esoteric programming languages list (yet). Ook? Ook!

 

3. variables can be made mandatory (NOT NULL)

Check out the NOT NULL keyword during the variable declaration.

declare
  v_index number not null := 0;
begin
  v_index := 1;
  v_index := null;
end;
/

Error report –
ORA-06550: line 5, column 14: PLS-00382: expression is of wrong type
ORA-06550: line 5, column 3: PL/SQL: Statement ignored

The error message is a bit vague about what happened, but it is very exact about where it happened (line 5, column 14). And what do we see there? A NULL expression.

The expression is of the wrong type, because we added a NOT NULL constraint to the number type that was used. For more complex cases we can create our own subtypes and use them (see the sketch below). But if we just want to make sure that we do not need to consider null cases during further variable calls, then this is a possible way.

Link to plsql documentation

Currently there are no other such constraints that we can use. I could imagine that with the potential arrival of SQL assertions, this might become a hot topic in plsql too.

2. you can “hack” dbms_output

Warning! This is dangerous. It might break some (poorly written) code that resides in the same schema. Do it at your own risk! It is also hilariously funny to do to your fellow coworkers on April Fools' Day. I mean, they shouldn’t use dbms_output anyway. That will teach them!

I start the example by showing the behaviour first. Then the code to produce this result.

behaviour


create or replace procedure doSomething is
  v_dummy dual.dummy%type;
begin
  select dummy into v_dummy from dual where 1=2;
exception
  when others then
    dbms_output.put_line(sqlerrm);
end doSomething;
/

Now we run the module a couple of times and want to see the output. We should expect a NO_DATA_FOUND error message.

set serveroutput on

execute doSomething;
execute doSomething;
execute doSomething;
execute doSomething;
execute doSomething;

Surprisingly instead of the error message we get something like this.

PL/SQL procedure successfully completed.

Wrong usage of DBMS_OUTPUT detected.

PL/SQL procedure successfully completed.

Make Databases Great Again!

PL/SQL procedure successfully completed.

ORA-01403: no data found

PL/SQL procedure successfully completed.

Make Databases Great Again!

PL/SQL procedure successfully completed.

So tell me what you want, what you really, really want

Omg! What is going on here?

Well, here is the catch. We can “overload” dbms_output in our own schema. Then our package is called, and not the original package from sys.

source code

create or replace package dbms_output
as
  procedure enable(BUFFER_SIZE number default null);
  procedure put_line(A in varchar2);
  procedure GET_LINE(LINE out VARCHAR2,STATUS out integer);
  procedure GET_LINES(LINES out sys.dbms_output.CHARARR,NUMLINES in out NUMBER);
  procedure GET_LINES(LINES out sys.DBMSOUTPUT_LINESARRAY,NUMLINES in out NUMBER);
end dbms_output ;
/

create or replace package body dbms_output
as
procedure enable(BUFFER_SIZE number default null) is
begin
  sys.dbms_output.enable(BUFFER_SIZE);
end;

function getRandomQuote (A in varchar2) return varchar2
is
  type quotes_t is table of varchar2(4000) index by binary_integer;
  v_quotes quotes_t;
  v_random binary_integer;
begin
  v_quotes(1) := 'You are hacked by the Chinese';
  v_quotes(2) := 'Wrong usage of DBMS_OUTPUT detected.';
  v_quotes(3) := 'System failure. Get away from keyboard';
  v_quotes(4) := 'Close all windows! NOW!';
  v_quotes(5) := 'Make Databases Great Again!';
  v_quotes(6) := A; -- sometimes return the correct text
  v_quotes(7) := A; -- sometimes return the correct text
  v_quotes(8) := 'So tell me what you want, what you really, really want';
  v_quotes(9) := 'None but ourselves can free our minds.';
  v_quotes(10) := 'Let there be light!';
  v_random := round(dbms_random.value(1,v_quotes.last));

  return v_quotes(v_random);

end getRandomQuote;

procedure put_line(A in varchar2) is
begin
  sys.dbms_output.put_line(getRandomQuote(A));
end;

procedure GET_LINE(LINE out VARCHAR2,STATUS out integer)
is
begin
  sys.dbms_output.GET_LINE(LINE,STATUS);
end;

procedure GET_LINES(LINES out sys.dbms_output.CHARARR,NUMLINES in out NUMBER)
is
begin
  sys.dbms_output.GET_LINES(LINES,NUMLINES);
end;

procedure GET_LINES(LINES out sys.DBMSOUTPUT_LINESARRAY,NUMLINES in out NUMBER)
is
begin
  sys.dbms_output.GET_LINES(LINES,NUMLINES);
end;
end dbms_output ;
/

How does it work?

Because of how SQL name resolution kicks in, the DBMS_OUTPUT package in our schema is used and not the public synonym for the DBMS_OUTPUT package from the sys schema.

Our put_line procedure pushes the changed text into the normal buffer mechanism; the get_line(s) procedures simply delegate to the original package.

How can we avoid it?

Best is not to use DBMS_OUTPUT in real production code. It is a nice quick debugging tool. But not more than that.

Also, if you always prefix dbms_output with the SYS schema, then the original logic will be called.
 

1. when others does not catch all exceptions

example

set serveroutput on 
declare
  e_cancelled exception;
  pragma exception_init(e_cancelled, -1013);
begin
  begin
    raise e_cancelled;
  exception
    when others then
      dbms_output.put_line('EXCEPTION OTHERS');
  end;
end;
/

ORA-01013: user requested cancel of current operation
ORA-06512: at line 6

We still see an exception, but not the dbms buffer output!

This needs some explanation.

There is a very limited set of exceptions that will not be captured by the WHEN OTHERS handler. We need to look closely and understand the exception itself to comprehend why this is a good thing.

Here the ORA-01013 is the “user requested cancel of current operation” exception. Essentially it means somebody pressed “CTRL+C” while running the code. In almost all environments this means: Stop doing whatever you do immediately! Or in more technical terms: It is an interrupt to the os process running your command. Same as executing “kill -2” (kill -SIGINT) in a nix environment (the-3-most-important-kill-signals-on-the-linux-unix-command-line). Even if the process is allowed to ignore the command, it shouldn’t do so by default.

ORA-01013 can sometimes also be the result of a timeout. Where the client is waiting for a response and after some time sends this as a timeout signal to the database session.

We are allowed to capture this exception and write a special handler for it.

set serveroutput on
declare
  e_cancelled exception;
  pragma exception_init(e_cancelled, -1013);
begin
  begin
    raise e_cancelled;
  exception
    when others then
      dbms_output.put_line('EXCEPTION OTHERS');
  end;
exception 
  when e_cancelled then
    dbms_output.put_line('OPERATION CANCELED');
END;
/

PL/SQL procedure successfully completed.

OPERATION CANCELED

Nothing to worry about. Just nice to know.

Please note: This example will behave differently in older outdated db versions. I think it was introduced as a fix for bug#12838063 in 11.2.0.4.

Other exceptions that are not handled include “ORA-03113: end-of-file on communication channel”.

But not “ORA-06508: PL/SQL: could not find program unit being called”. This one was supposed to escape “when others” too, but testing on 12.2.0.1 revealed it is captured.

 

 

Footnotes


1. This is probably the only bug free code I ever wrote. It was meant to do nothing and it does that exceptionally well![2]
2. There might be room for some performance improvement. Allowing us to do nothing even faster.
3. On windows 10 💩 is supposed to look like 🔝💩.

 

 

 

10 Oracle SQL features you probably didn’t know


I can not guarantee that you have never heard of this. But the majority of developers doesn’t know, or doesn’t seem to know. And honestly – most of this information is fairly useless, and mostly harmless.

10. Why we have the CONCAT function

There is a concat function that can be used instead of the concat operator ||.

Nobody uses that! But there is a reason why it exists.

This is from the 8.1.7 Oracle documentation:

On most platforms, the concatenation operator is two solid vertical bars, as shown in Table 3-3. However, some IBM platforms use broken vertical bars for this operator. When moving SQL script files between systems having different character sets, such as between ASCII and EBCDIC, vertical bars might not be translated into the vertical bar required by the target Oracle environment. Oracle provides the CONCAT character function as an alternative to the vertical bar operator for cases when it is difficult or impossible to control translation performed by operating system or network utilities. Use this function in applications that will be moved between environments with differing character sets.

So we have this function because in the past the || operator was not translated consistently between platforms with different character sets. Since hardly anyone moves SQL scripts between ASCII and EBCDIC systems anymore, there is no real need to use the CONCAT function.
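Just in case you want to see it in action, a quick sketch. Note that CONCAT takes exactly two arguments, so it has to be nested for longer concatenations:

select 'A' || 'B' || 'C' from dual;
select concat(concat('A', 'B'), 'C') from dual;

Both return ABC.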

9. select UNIQUE

Instead of doing a SELECT DISTINCT you can do SELECT UNIQUE.

But you shouldn’t. None of them. Forget I mentioned this.

Ok here is an example.

Example

select unique e.deptno from scott.emp e;

DEPTNO
-----
30
20
10

It is also possible to do SELECT COUNT(UNIQUE …). Not sure when that one was introduced, but it seems to work now.
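A quick check (assuming the standard SCOTT demo schema, where three departments are used in EMP):

select count(unique e.deptno) from scott.emp e;
3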

Sven says: “SELECT DISTINCT|UNIQUE should be considered a bug in real production code.”

I have yet to find an example where SELECT DISTINCT is needed. More likely there is a bug in the data model or a missing join condition in the where clause. GROUP BY or sometimes EXISTS are the better long term alternatives.
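For example, “which departments actually have employees?” does not need DISTINCT (a sketch, again assuming the SCOTT schema):

-- instead of: select distinct d.deptno from scott.dept d join scott.emp e on e.deptno = d.deptno
select d.deptno
from scott.dept d
where exists (select null from scott.emp e where e.deptno = d.deptno);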

Using SELECT DISTINCT is absolutely fine for developer ad-hoc queries.

I feel this whole concept is a bit inconsistent. 12c introduced the new approximation function APPROX_COUNT_DISTINCT, but there is no APPROX_COUNT_UNIQUE!
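For completeness, the 12c function in action (a sketch; with only three distinct values the approximation returns the exact count):

select approx_count_distinct(e.deptno) from scott.emp e;
3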

8. there is an ANSI datatype DATE

The ANSI datatype DATE does not hold any time information (no hours, minutes, seconds). That is what the ANSI datatype TIME is for. Oracle does not officially know these two datatypes.

However:

EXTRACT function

Purpose

EXTRACT extracts and returns the value of a specified datetime field from a datetime or interval expression. The expr can be any expression that evaluates to a datetime or interval data type compatible with the requested field:

  • If YEAR or MONTH is requested, then expr must evaluate to an expression of data type DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, or INTERVAL YEAR TO MONTH.
  • If DAY is requested, then expr must evaluate to an expression of data type DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, or INTERVAL DAY TO SECOND.
  • If HOUR, MINUTE, or SECOND is requested, then expr must evaluate to an expression of data type TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, or INTERVAL DAY TO SECOND. DATE is not valid here, because Oracle Database treats it as ANSI DATE data type, which has no time fields.
  • If TIMEZONE_HOUR, TIMEZONE_MINUTE, TIMEZONE_ABBR, TIMEZONE_REGION, or TIMEZONE_OFFSET is requested, then expr must evaluate to an expression of data type TIMESTAMP WITH TIME ZONE or TIMESTAMP WITH LOCAL TIME ZONE.

So essentially the doc says that EXTRACT treats a DATE input as an ANSI DATE expression.

Proof

select extract(day from sysdate) from dual;
21

select extract(second from sysdate) from dual;
ORA-30076: invalid extract field for extract source

select extract(second from systimestamp) from dual;
4.823

The error appears, because sysdate is kind of converted into an ANSI DATE and this does not hold time information.

 

Problem is: We just can not use this ANSI date type for normal activities.

Alternative for the ANSI date: TRUNC(datetime). Since 11g we can even persist it as a virtual column.
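A minimal sketch of that virtual column approach (table and column names are made up):

create table demo_ansi_date
( created_at  date,
  created_day date generated always as (trunc(created_at)) virtual
);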

Problem solved. ✔

 

7. group by ()

You can group by an empty pair of parentheses. Or let’s call it grouping by the FULL SET. This is in fact useful in some very specific cases.

It gives us the ability to get a NO_DATA_FOUND error if there is no data at all. Oracle guarantees that an aggregation select without a group by will always return a record. GROUP BY () can be added if you want to change this default behaviour on purpose.

Example

Let’s say we want to count how many employees are in one specific department.

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'OPERATIONS';

EMP#
----
0

Since there are no employees in operations, this returns 0. This is correct and the information I want.

More specifically this query returns one row, but the e.empno is null and therefore is not counted. Count(*) would have returned 1!

But if we use a department name that does not even exist, then this will also return 0.

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'FANCY_CLOUD_SPACE';

EMP#
----
0

Not correct! In this case I prefer to get a no data found error.

Grouping on the full set gives us this option.

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'FANCY_CLOUD_SPACE'
group by ();

no rows found.

Btw: the SQL Developer syntax (pre-)parser flags that () as a syntax error. But it will execute successfully.

I have to admit that in this case it would be much better to group by the department. But maybe we want the count for multiple departments using an IN list. Then the group on department level would not help much.

 

Btw: GROUP BY NULL does the same thing, and might be less confusing for some developers. I personally prefer GROUP BY () because the () is also used in GROUPING SETS.
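The same effect in its shortest form (a quick sketch):

select count(*) from scott.emp where 1=2 group by null;

no rows found.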

 

6. + vs. –

In some edge cases using a different operator (+|-) leads to disturbing results.

Example: Subtracting a tiny number from 1 does not equal 1. But adding a tiny number to 1 does equal 1!

select * from dual
where 1 = 1-0.000000000000000000000000000000000000001;

No data found!

This is expected behaviour…

select * from dual
where 1 = 1+0.000000000000000000000000000000000000001;

DUMMY
-----
X

Wat? Why do we get a result?!

The solution is simple. We overstepped the maximum precision of the NUMBER datatype when doing the addition. Max precision is 38 digits. The addition would require a precision of 39. Instead the result is rounded (or truncated) to a precision of 38 digits, which happens to equal 1. The subtraction result is still inside the 38 digit precision range and therefore exact (very slightly less than 1).

Don’t confuse precision with maximum value! Precision is the number of significant digits that we can store. This can then be shifted by an exponent to the left or right for very large or very small values.
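A quick way to see where the cutoff is (same idea as above, just with explicit exponents):

select case when 1 + 1e-37 = 1 then 'rounded away' else 'exact' end as sum_needs_38_digits,
       case when 1 + 1e-39 = 1 then 'rounded away' else 'exact' end as sum_needs_40_digits
from dual;

SUM_NEEDS_38_DIGITS  SUM_NEEDS_40_DIGITS
-------------------  -------------------
exact                rounded away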

 

To put this into perspective

That precision is so extremely high that we have major trouble visualizing it. So here is a comparison. The comparison data comes from this hugely interesting site: http://money.visualcapitalist.com/all-of-the-worlds-money-and-markets-in-one-visualization/

The complete monetary assets in the whole world are estimated at about 80 trillion $ (broad money = coins, banknotes, savings accounts, time deposits, …). That is 80,000,000,000,000, or an 8 with 13 zeros at the end, or 8e13 in scientific exponential notation.

select 8e13 from dual;

Considering all the national and private debts the number gets even higher. Here we reach about 200 trillion $.

And if we consider derivatives (= high risk gambles), then the high estimations go as far as 1.2 quadrillion $. That is a 1 followed by 24 more digits (mostly zeros), or 1.2e24.

Since all this is expressed in dollars, we can increase the number even further by choosing a different currency. For example the Nepalese Rupee is worth a little less than 0.01 dollar. So all the world’s derivatives could be expressed in Nepalese Rupees using a number as big as 1.2e26.

That means using a number with a precision of 38 we can easily express all the money in the world, including derivatives, down to the very last Nepalese Rupee. And we still have used only about 2/3rds of the maximum available precision. Or to say it differently: if we duplicated planet Earth 1,000,000,000,000 (1 trillion) times, we could still give the exact amount of money on all planets in Nepalese Rupees, down to the very last Rupee, using the NUMBER datatype.

That’s quite impressive!

Btw. the biggest number that can be expressed using the NUMBER datatype is 9.99…9 x 10^125 (38 nines in total). Which is way, way bigger than anything we discussed so far.

 

5. instead of IN we can use = ANY

select 'TRUE'
 from dual
 where 1 = ANY (1,2,3,4,5);

TRUE

There are several similar unusual “replacement” options.

e.g.

  • instead of != we can also use ^=.
  • NOT IN is the same as != ALL.
  • SOME is a synonym for ANY – but only in SQL!
  • x <= LEAST(…) or x >= GREATEST(…) can be replaced by x <= ALL (…) or x >= ALL (…).
  • x >= LEAST(…) or x <= GREATEST(…) can be replaced by x >= SOME (…) or x <= SOME (…).

I expect that SOME people do know about these operators, since questions about them are part of the SQL expert exam. So from ALL experts who took the exam at LEAST the few who passed should know SOMEthing about ANY.

The last two replacements differ in how NULL values are treated. In theory there could be cases where this comes in handy. So far I never had such an encounter.

Example

with testdata
as (select 1 a, 2 b, 3 c, 4 d from dual union all
   select 5 a, 4 b, 3 c, 2 d from dual union all
   select 1 a, null b, 2 c, null d from dual union all
   select null a, null b, null c, null d from dual)
select * from testdata
where 3 >= LEAST(a,b,c,d);

 

A B C D
-------
1 2 3 4
5 4 3 2

 

with testdata
as (select 1 a, 2 b, 3 c, 4 d from dual union all
    select 5 a, 4 b, 3 c, 2 d from dual union all
    select 1 a, null b, 2 c, null d from dual union all
    select null a, null b, null c, null d from dual)
select * from testdata
where 3 >= ANY(a,b,c,d);

 

A B C D
-------
1 2 3 4
5 4 3 2
1 - 2 -

4. external table on multiple files

External tables can load multiple files at once! As long as they have the same structure, of course. So here is an example.

CREATE TABLE EXT_DUMMY
(
    "RECORDTYPE" VARCHAR2(100 BYTE), 
    "COL1" VARCHAR2(100 BYTE), 
    "COL2" VARCHAR2(100 BYTE) 
)
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_LOADER DEFAULT DIRECTORY "IMPORT_BAD_FILE" 
    ACCESS PARAMETERS ( 
             records delimited BY newline 
             FIELDS TERMINATED BY ';' 
             MISSING FIELD VALUES ARE NULL 
               ( RECORDTYPE CHAR
               , COL1 CHAR 
               , COL2 CHAR 
               ) 
    ) 
    LOCATION ( 'Testfile1.txt', 'Testfile2.txt' )
)
    reject limit 10
;

When we then do a select on the external table, we will see the data from both files.

We can also switch the external table to a different file.

alter table EXT_DUMMY location ('Testfile3.txt');

 

For the experts: We can use the ROWID to find out which file was used to load a specific record. Special thanks go to OTN-forum member Odie_63 who found this solution. More info in this old OTN forum thread.

with ext_loc as (
      select position-1 as pos
           , name as filename
      from sys.external_location$
      where obj# = ( select object_id
                     from user_objects
                     where object_name = 'EXT_DUMMY' )
    )
select x.filename,
       t.*
from EXT_DUMMY t
join ext_loc x 
on x.pos = to_number(regexp_substr(dump(t.rowid,10,9,1),'\d+$'))
;

It’s a very clever piece of SQL. Essentially it extracts the file number from the rowid, looks up the matching file name in the data dictionary and combines that with our data set.

This select was done in a 10g database. In 12c we can probably use dbms_rowid to do the same, instead of regexp_substr(dump(rowid)).

3. insert default values

We can insert into a table using the “default” keyword to force default behaviour for this column.

Example

-- setup
create table swe_default_test 
(col1 varchar2(10) not null,
 col2 varchar2(10) default 'TEST' not null
 );
Table SWE_DEFAULT_TEST created.

-- test
insert into swe_default_test (col1) values ('X');
1 row inserted.

insert into swe_default_test (col1,col2) values ('X',null);
ORA-01400: cannot insert NULL into ("MYUSER"."SWE_DEFAULT_TEST"."COL2")

insert into swe_default_test (col1,col2) values ('X',default);
1 row inserted.

-- double check
select * from swe_default_test;
COL1 COL2 
---------- ----------
X TEST 
X TEST 

-- cleanup
drop table swe_default_test;
Table SWE_DEFAULT_TEST dropped.

 

In 12c we now have a new and, I think, better option to do the same. The default column can additionally be defined with “ON NULL”. This has the effect that inserting a NULL value will lead to using the default value instead. Of course the “default” keyword still works too.

-- setup
create table swe_default_test12c
(col1 varchar2(10) not null,
 col2 varchar2(10) default on null 'TEST' not null
 );
Table SWE_DEFAULT_TEST12C created.

-- test
insert into swe_default_test12c (col1,col2) values ('X',null);
1 row inserted.

insert into swe_default_test12c (col1,col2) values ('X',default);
1 row inserted.

-- doublecheck
select * from swe_default_test12c;
COL1 COL2 
---------- ----------
X TEST 
X TEST 

-- cleanup
drop table swe_default_test12c;
Table SWE_DEFAULT_TEST12C dropped.

As we can see, both cases now work: the one using a NULL value and also the one using the DEFAULT keyword.

This new 12c “default on null” feature can be used to replace the typical BEFORE ROW INSERT trigger. More info on how to do this in Sequence and Audit columns with Apex 5 and 12c.
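A minimal sketch of that trigger replacement (all names are made up; since 12c a sequence can be referenced directly in the default clause):

create sequence swe_demo_seq;

create table swe_demo
( id   number default on null swe_demo_seq.nextval primary key,
  info varchar2(100)
);

-- both inserts get their id from the sequence
insert into swe_demo (info) values ('no id given');
insert into swe_demo (id, info) values (null, 'null id given');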

 

2. (1,2) = ((1,2))

We can compare expression lists using the = operator. But the right-hand side of the comparison needs an extra set of parentheses.

select 'TRUE' checked from dual
 where (1,2,3,4,5) = ((1,2,3,4,5));
CHECKED
-------
TRUE

This is just a shorthand form of

1=1 and 2=2 and 3=3 and 4=4 and 5=5

If we do not use the second set of parentheses on the right-hand side of the comparison, then we get an error.

select 'TRUE' checked from dual
 where (1,2,3,4,5) = (1,2,3,4,5) ;
ORA-00920: invalid relational operator

This is documented, although a tiny bit difficult to read in the syntax diagrams.

See: SQL Reference – Simple Comparison


 

1. DUAL is a real table

Although there are some special optimizations in various tools, DUAL is a real table residing in the SYS schema.

select owner, table_name
 from dba_tables
 where table_name = 'DUAL';
OWNER TABLE_NAME
----------------
SYS DUAL

 

In older database versions it was possible to do an insert into the DUAL table.

Warning! This is extremely dangerous. It will probably break ALL applications in your database.

insert into dual values ('Y');
select mysequence.nextval into v_id from dual;

TOO_MANY_ROWS error!

And nobody captures the TOO_MANY_ROWS exception for selects from dual.

Btw: A working safety mechanism is to add rownum = 1 to your query.

select mysequence.nextval into v_id from dual where rownum = 1;

Fortunately manipulating DUAL does not work anymore in recent versions of the database. I think 12c introduced this restriction. Not even DBAs have the privs anymore to do this.

ORA-01031: insufficient privileges
*Cause: An attempt was made to perform a database operation without
the necessary privileges.

0. The Oracle SQL Syntax is documented!

Bam! There you have it! You wouldn’t have thought that, would you? Well, you are not alone. So many developers seem to have no clue that this documentation exists or where to find it.

When googling, use “oracle 12c doc” or “oracle 12.2 SQL ref” in front of your search term. It helps to find the proper pages.

12.2 doc portal

12.2 SQL introduction

12.2 SQL Language reference

 

 

Special thanks go out to Stefanie R., who gave me the idea for this blog post!

 

adaptive cursor sharing and DBMS_SQL

A recent post in the OTN forum mentioned that DBMS_SQL does not use bind peeking for bind variables. I couldn’t believe that, so I decided to do some tests for myself. The findings are strange…

This is potentially relevant for APEX developers, since the APEX engine uses DBMS_SQL. I still have to do further testing to check the behaviour in APEX.

First I set up some tests to show bind peeking and adaptive cursor behaviour using normal statements in SQL*Plus or SQL Developer. After that we move to dynamic SQL, especially DBMS_SQL, and try the same again.

scenario setup

create skewed testdata

--drop table demo_big;
create table demo_big as
select level as id,
       case when mod(level,10000)=0
            then 'VALID'
            else 'INVALID'
       end as status
from dual
connect by level <= 1000000;

desc demo_big;

Name   Null Type
------ ---- -----------
ID          NUMBER
STATUS      VARCHAR2(7)

select status, count(*)
from demo_big
group by rollup(status);
STATUS     COUNT(*)
INVALID    999900
VALID      100
           1000000

So we have a few VALID values and a lot of INVALID ones.

Even if we have only two different values, an index will be useful on this column. The data distribution is so skewed that any access trying to read the VALID values would profit from an index. However if we access the INVALID values we don’t want to use the index and instead want a full table scan.

-- create indexes on all the important columns
create unique index demo_big_id_ix on demo_big(id);
create index demo_big_status_ix on demo_big(status);

create statistical data(histograms)

First we create the statistics so that the optimizer knows what is in that table and how the data is distributed.

-- create statistics and test histogram
execute dbms_stats.gather_table_stats(user, 'DEMO_BIG', method_opt=>'for all indexed columns size skewonly');

Then we check the data dictionary to see what has been created so far.
The hist_numtochar2 function is copied from Martin Widlake (source: https://mwidlake.wordpress.com/2009/08/11/). It just helps to do a crude translation of the numerical histogram bucket endpoints. The code of the function can be found at the end of this post.

I don’t show the results from all selects but the last one. The other selects are here just as references. They are helpful to see what kind of statistics are in place.

select table_name, num_rows, blocks, last_analyzed
from user_tables
where table_name = 'DEMO_BIG';

select table_name, column_name, num_distinct, histogram, num_buckets, sample_size
from user_tab_columns
where table_name = 'DEMO_BIG';

select *
from user_histograms
where table_name = 'DEMO_BIG' and column_name = 'STATUS';

select table_name, column_name, endpoint_number, endpoint_value, hist_numtochar2(endpoint_value) as translated_value
from user_histograms
where table_name = 'DEMO_BIG' and column_name = 'STATUS';

Here we see a frequency histogram with two buckets for the column STATUS.

TABLE     COLUMN  ENDPOINT_NUMBER  ENDPOINT_VALUE                         TRANSLATED_VALUE
DEMO_BIG  STATUS  999900           380626532452853000000000000000000000  INVALJ*
DEMO_BIG  STATUS  1000000          447861930473196000000000000000000000  VALID

The first bucket holds 999900 values where status= INVALID.
The next bucket holds 1000000-999900 = 100 where status = VALID.

This of course matches exactly what we created. So the statistical info in the dictionary is absolutely correct.

Tests

Now that our setup is in place, we can do some basic testing to see different plans.

check execution plan with LITERALS

-- test different cursor/execution plan using plain selects
select count(*) from demo_big where status = 'VALID';
select * from table(dbms_xplan.display_cursor);
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("STATUS"='VALID')
select count(*) from demo_big where status = 'INVALID';
select * from table(dbms_xplan.display_cursor);
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("STATUS"='INVALID')

Perfect! As expected one does an index access / index range scan, the other does a full table scan.

check execution plan with BIND parameters

select count(*) from demo_big where status = :P_ENTER_VALID;
select * from table(dbms_xplan.display_cursor);
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("STATUS"=:P_ENTER_VALID)
select count(*) from demo_big where status = :P_ENTER_INVALID;
select * from table(dbms_xplan.display_cursor);
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("STATUS"=:P_ENTER_INVALID)

The two statements are not identical because the names of the bind parameters differ. Because of that we get two different cursors, each with a different execution plan.
This test shows that bind peeking works. During the hard parse phase the value of the bind parameter was checked (peeked), so that the correct estimations for the resulting rows/cardinalities were made. This in turn led to the correct plan for each of the two different statements. However this first peeked value “freezes” the execution plan, so if we later change the bound value, the same plan is reused.

This behaviour was enhanced in 11g with the introduction of adaptive cursor sharing and has been steadily improved since then.

To test the adaptive behaviour we run the first query again a few times (at least 4 times). But this time we do not pass VALID, but INVALID as the parameter value, as sketched below.
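One way to do that in SQL*Plus or SQL Developer (a sketch; the bind variable keeps its name, only the value changes):

variable P_ENTER_VALID varchar2(10)
exec :P_ENTER_VALID := 'INVALID'

-- run this select at least 4 times
select count(*) from demo_big where status = :P_ENTER_VALID;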

After that we can see a new child cursor 1 for the sql_id “7rjdcm7v7hfrs”.

select is_bind_sensitive, is_bind_aware, sql_id, child_number, sql_text
from v$sql
where upper(sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'  and sql_text not like '%v$sql%'
;
IS_BIND    IS_BIND SQL_ID        CHILD   SQL_TEXT
_SENSITIVE _AWARE                _NUMBER
Y          N       7rjdcm7v7hfrs 0       select count(*) from demo_big where status = :P_ENTER_VALID
Y          Y       7rjdcm7v7hfrs 1       select count(*) from demo_big where status = :P_ENTER_VALID
Y          N       5zkmtfj331xmc 0       select count(*) from demo_big where status = :P_ENTER_INVALID
select * from table(dbms_xplan.display_cursor('7rjdcm7v7hfrs',0));
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("STATUS"=:P_ENTER_VALID)

select * from table(dbms_xplan.display_cursor('7rjdcm7v7hfrs',1));
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter("STATUS"=:P_ENTER_VALID)

This is adaptive behaviour. After a few bad tries a second execution plan is created for the same cursor and used from then on. How many tries are needed? Often the plan changes on the third try, but it can happen that more are needed.

Test with DBMS_SQL

Now comes the more difficult part. We set up a small PL/SQL block that uses DBMS_SQL to run the same statement again, using bind parameters.

-- testcase for BIND peeking/aware using DBMS_SQL
declare
  curid    NUMBER;
  ret      INTEGER;
  sql_stmt VARCHAR2(200);
begin
  sql_stmt := 'select count(*) from demo_big where status = :P_STATUS';

  -- get cursor handle
  curid := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(curid, sql_stmt, DBMS_SQL.NATIVE);

  DBMS_SQL.BIND_VARIABLE(curid, 'P_STATUS', 'VALID');
  ret := DBMS_SQL.EXECUTE_and_fetch(curid);

  DBMS_SQL.PARSE(curid, sql_stmt, DBMS_SQL.NATIVE);
  DBMS_SQL.BIND_VARIABLE(curid, 'P_STATUS', 'INVALID');
  for i in 1..5 loop
    ret := DBMS_SQL.EXECUTE_and_fetch(curid);
  end loop;

  DBMS_SQL.close_cursor(curid);
end;
/

The v$sql view has two interesting columns.
IS_BIND_SENSITIVE shows cursors where the execution plan can evolve.
IS_BIND_AWARE shows child cursors where a new plan was created, meaning that the cursor was evolved.

select is_bind_sensitive, is_bind_aware, sql_id, child_number, sql_text
from v$sql
where upper(sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'
and sql_text not like '%v$sql%'
;
IS_BIND_SENSITIVE  IS_BIND_AWARE  SQL_ID         CHILD_NUMBER  SQL_TEXT
Y                  N              7rjdcm7v7hfrs  0             select count(*) from demo_big where status = :P_ENTER_VALID
Y                  Y              7rjdcm7v7hfrs  1             select count(*) from demo_big where status = :P_ENTER_VALID
N                  N              3kpu54a461gkm  0             select count(*) from demo_big where status = :P_STATUS
N                  N              3kpu54a461gkm  1             select count(*) from demo_big where status = :P_STATUS
Y                  N              5zkmtfj331xmc  0             select count(*) from demo_big where status = :P_ENTER_INVALID
N                  N              fjjm63y7c6puq  0             select count(*) from demo_big where status = :P_STATUS2
N                  N              1qx03gdh8712m  0             select count(*) from demo_big where status = 'INVALID'
N                  N              2jm3371mug58t  0             select count(*) from demo_big where status = 'VALID'

A closer look at the two child cursors:

-- find the cursor id
select sql_id, child_number, bucket_id, count, is_bind_sensitive, is_bind_aware, sql_text
from v$sql s
left join v$sql_cs_histogram h using (sql_id, child_number)
where upper(s.sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'
and s.sql_text not like '%v$sql%'
;

-- check the execution plan for both child cursors
select * from table(dbms_xplan.display_cursor('3kpu54a461gkm',0));
select * from table(dbms_xplan.display_cursor('3kpu54a461gkm',1));

-- see the plans in the SGA
select * from v$sql_plan where sql_id = '3kpu54a461gkm';
select * from v$sql_plan where sql_id = 'fjjm63y7c6puq';

Now the strange thing is: the first child cursor is using a FULL table scan. But the first execution was done using the VALID value and should have resulted in the index range scan. And the second child cursor does not even have an execution plan!

NOTE: cannot fetch plan for SQL_ID: 3kpu54a461gkm, CHILD_NUMBER: 1
      Please verify value of SQL_ID and CHILD_NUMBER; 
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)

What is going on here? v$sql has a column EXECUTIONS which tells us how often a child cursor was executed. It is always 0 for child number 1 of the DBMS_SQL cursor!

I did several more tests using DBMS_SQL. Even a case where the cursor was closed and opened several times. All with the same result.

Interpreting the results

I’m still not exactly sure what is going on there. It seems as if bind peeking and adaptive cursor sharing do not work with DBMS_SQL. But why do we then see two child cursors? It seems as if the different parameter values at least have the effect that a new child is created. And this happens only when there is a need for a different execution plan. But where is the plan for that child? I still have some doubts. Maybe the execution plan in v$sql is lying in this case? Since DBMS_SQL goes deep into the internals, it might be that some of the normal behaviours are not reflected in some of the views.

The cursor itself is in the private SQL work area and I never checked that. Another approach would be to set up a scenario where we can measure the performance difference. The test case I used was too small to see a decisive difference between the two possible plans.

Also we have to remember that the need for DBMS_SQL is rare. A normal select with bind parameters is certainly not a case where we need dynamic SQL. A more typical case would be a cursor or statement where we do not know at compile time which columns are returned. Then we can use DBMS_SQL to analyse the structure of such a cursor and react on that.
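A minimal sketch of that legitimate use case (assuming the demo_big table from above; DBMS_SQL.DESCRIBE_COLUMNS reveals the column structure at runtime):

declare
  curid  number := dbms_sql.open_cursor;
  colcnt number;
  cols   dbms_sql.desc_tab;
begin
  dbms_sql.parse(curid, 'select * from demo_big', dbms_sql.native);
  -- ask the cursor which columns it returns
  dbms_sql.describe_columns(curid, colcnt, cols);
  for i in 1..colcnt loop
    dbms_output.put_line(cols(i).col_name || ' (type code ' || cols(i).col_type || ')');
  end loop;
  dbms_sql.close_cursor(curid);
end;
/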

However if we build some kind of dynamic framework and think about using DBMS_SQL, we should rethink our strategy. Maybe it is easier to provide all the possible cases as PL/SQL APIs, thereby compiling them at creation time, instead of building the statement in a completely dynamic fashion and suffering some essential drawbacks.

Recommendations

1) Avoid DBMS_SQL; consider using native dynamic SQL (EXECUTE IMMEDIATE) instead.
2) If you have a skewed data distribution, make sure your plans are bind sensitive.
3) If you can guarantee an even data distribution, consider adding the NO_BIND_AWARE hint. This should be needed only in some extreme situations (very high performance requirements or cursor cache issues).
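For recommendation 1, here is a minimal sketch of the same statement using native dynamic SQL (assuming the demo_big table from the test setup; this path should behave like a normal static statement with regard to bind peeking):

declare
  cnt number;
begin
  execute immediate
    'select count(*) from demo_big where status = :P_STATUS'
    into cnt using 'VALID';
  dbms_output.put_line('count: ' || cnt);
end;
/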

Appendix

The function that I used previously:

create or replace function hist_numtochar2(p_num number
,p_trunc varchar2 :='Y') return varchar2
-- Author: Martin Widlake
-- Source: https://mwidlake.wordpress.com/2009/08/11/
is
  m_vc varchar2(15);
  m_n number :=0;
  m_n1 number;
  m_loop number :=7;
begin
  m_n :=p_num;
  if length(to_char(m_n))>36 then
    --dbms_output.put_line ('input too short');
    m_vc:='num format err';
  else
    if p_trunc !='Y' then
      m_loop :=15;
    else
      m_n:=m_n+power(256,9);
    end if;
    --dbms_output.put_line(to_char(m_N,'999,999,999,999,999,999,999,999,999,999,999,999'));
    for i in 1..m_loop loop
      m_n1:=trunc(m_n/(power(256,15-i)));
      --    dbms_output.put_line(to_char(m_n1));
      if m_n1!=0 then m_vc:=m_vc||chr(m_n1);
      end if;
      --dbms_output.put_line(m_vc);
      m_n:=m_n-(m_n1*power(256,15-i));
    end loop;
  end if;
  return m_vc;
end;
/