APEX 21.2 quickie: syntax highlighting with prism.js

To show (not to edit) pretty code inside an APEX application I had in the past used the libraries that were deployed along with APEX, like CodeMirror (see https://svenweller.wordpress.com/2015/12/07/apex-5-syntax-highlighting/) and CKEditor. In APEX 21 CKEditor got a new version, and CodeMirror has not been supplied for several APEX versions now. But there is a new, very lightweight alternative: prism.js.

In my use case I need this to quickly present currently running edition-based plsql code.


Step 1) Load code into a hidden page item

I use a before-region process to load the data (=code) into an item. Assume the item name P1_CODE for further references.
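A minimal sketch of such a process, assuming the code comes from user_source; MY_PACKAGE is a placeholder for the unit you want to show:

-- before region process: load plsql source into the page item
declare
  v_code clob;
begin
  for r in (select text
            from user_source
            where name = 'MY_PACKAGE'   -- hypothetical unit name
            and   type = 'PACKAGE BODY'
            order by line)
  loop
    v_code := v_code || r.text;
  end loop;
  :P1_CODE := v_code;
end;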

Step 2) Add prism files to the page

Javascript file urls
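Presumably, using the substitution variable described below, this was:

#PRISMJS_DIRECTORY#prism.js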


CSS file urls
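And correspondingly:

#PRISMJS_DIRECTORY#prism.css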


Note that the substitution variable PRISMJS_DIRECTORY is automatically provided, resolves to the path “/i/libraries/prismjs/1.24.1/” and will be updated in future versions.

Step 3) Show item in html region and choose the language

Add this to the html code section of a static region.

Note that you should set the correct language that you want to show. In my case it is language-plsql.

<pre><code class="language-plsql">&P1_CODE.</code></pre>

The value in the item is automatically html escaped. In case the logic that loads the code into the item already escaped it, you can choose not to escape it again by using an escape filter like &P1_CODE!RAW. Just be aware of potential security issues if you do not do that correctly.

A list of possible languages can be found here. However, not all of those languages are included in the APEX deployment.

Step 4) Customize a few colors

I didn’t like the coloring for plsql or sql code. In particular I prefer to show comments in green. This can be changed with a few lines of css.

Add this to the inline css section of the page

/* prism plsql colorization */
.token.comment {
  color: #119b01;
}
.token.string {
  color: #0c6cb97c;
}
.token.keyword {
  color: #8C009B;
  font-weight: bold;
}

Very quick and easy to implement.

If you want different settings, simply inspect the code snippet using the browser. That is one of the advantages of prism: the code is tokenized using span tags and the appropriate classes. We can easily see what kind of token a certain element is and simply change the style for the class we want.


Here is a crude overview of how the result looks.

Additional considerations

prism offers many additional features. However, those features come as plugins that are packed directly into the .css and .js files. To use them, one would have to add/replace the library that is used by APEX. I didn’t investigate in detail which plugins are already included. For example, line numbering does NOT seem to work out of the box.

This would do the trick with the line numbers, if the line-numbers plugin is integrated.

<pre class="line-numbers" data-start="50"><code class="language-plsql">&P1_CODE.</code></pre>


For simple requirements prism allows a very easy and quick way to add syntax highlighting to code. For more complex requirements (like editing code) a different library might be better suited. The possibility to have dedicated substitution strings for several libraries is very welcome.

One additional note: I tried to use the markdown editor item type as an alternative, but the effort to make it work for my simple requirements was too high. A markdown editor simply serves a different purpose.

ORA-14097 “column type or size mismatch in ALTER TABLE EXCHANGE PARTITION” even when using FOR EXCHANGE


This is something I read about and forgot, until Chris Saxon mentioned and showcased it during today's AskTOM Office Hours session.

In Oracle 12.2 the create table command was enhanced to avoid the error

ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION

during an exchange partition operation. We can now do create table ... for exchange.

The basic idea is that the for exchange syntax enhancement considers things like invisible columns, which are usually not created otherwise, and thereby avoids complications during a later exchange partition. For further details see this blog post by Connor McDonald.

Here I show a situation, where the ORA-14097 still happens, even if the for exchange syntax is used. Just something to be aware of and watch out for.


First we create two identical partitioned tables, the source table A and the destination table B.

-- create the source table using NULLABLE columns
create table a (col1 number null, col2 number null, col3 number null)
partition by range (col1) interval (1)
(partition p_first values less than (1)); -- interval partitioning needs one starting partition

-- add a primary key constraint
alter table a add primary key (col1, col2);
desc a;

Name Null?    Type   
---- -------- ------ 
COL1 NOT NULL NUMBER 
COL2 NOT NULL NUMBER 
COL3          NUMBER 

As you can see, the table looks as if col1 and col2 are NOT NULL. This is because the primary key requires all its columns to be not null.

Now we do exactly the same for table B. In my real-world scenario table A was on a remote database and table B was a local one, created by a tablespace transport. So source and destination are identical.

create table b (col1 number null, col2 number null, col3 number null)
partition by range (col1) interval (1)
(partition p_first values less than (1));

alter table b add primary key (col1, col2);

Now we fill the source table A with some dummy data.

-- add some data into a
insert into a (col1, col2, col3)
select mod(level,10), level, mod(level,100) from dual connect by level <= 1000;

1000 rows inserted.

commit;

Commit finished.

The next step is to create and fill a staging table E which will hold the partition that we want to add to B.
Notice that the FOR EXCHANGE syntax is used during the create table command. Unfortunately it seems we can not do a CTAS when using this syntax enhancement, so two commands are used: create + insert.

create table e for exchange with table b;
insert into e select * from a partition for (9);

Side note: Because the table is interval partitioned, “partition for” is useful here to specify the source partition.

desc e;

Name Null? Type   
---- ----- ------ 
COL1       NUMBER 
COL2       NUMBER 
COL3       NUMBER 

The exchange table e is declared with NULL columns. Although this reflects the original table without the PK, it will lead to a problem during the exchange.

alter table b exchange partition for (9) with table e;
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
*Cause:    The corresponding columns in the tables specified in the
           ALTER TABLE EXCHANGE PARTITION are of different type or size
*Action:   Ensure that the two tables have the same number of columns
           with the same type and size.

This error will not happen if we do the same commands, but without adding the primary key constraint.


Even when using for exchange, you can still run into the “mismatched columns” problem (ORA-14097). In this particular example the problem is that adding the primary key converts the key columns to NOT NULL, but this conversion is not reflected in the CREATE TABLE .. FOR EXCHANGE command.

Btw: the error can be avoided by creating the columns as NOT NULL in the first place.
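A minimal sketch of that variant (recreating the demo table from above):

-- declaring the PK columns NOT NULL up front keeps source, destination
-- and the FOR EXCHANGE staging table consistent
create table a (col1 number not null, col2 number not null, col3 number null)
partition by range (col1) interval (1)
(partition p_first values less than (1));

alter table a add primary key (col1, col2);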

SQL*Plus Basics – automate logfile name based upon script name


-- enable sql*plus to track the name of any currently running script as module name in application_info
set appinfo on

-- define sql*plus substitution variables SCRIPTNAME and LOGFILENAME
column script_name new_value scriptname
column logfile_name new_value logfilename

-- fetch name of the currently running SQL*plus script 
select regexp_replace(sys_context('userenv','module'),'^\d*@ *') as script_name from dual;

-- change suffix from .sql to .log and use as name for the log
select replace('&scriptname.','.sql','.log') as logfile_name from dual;

-- start the log
spool &logfilename.


Typical for a sql*plus script is that we spool the output to another file – sometimes to export data, and even more often to capture what happened in case of an error. The latter case is a typical example where the presented code helps to avoid hard coding the name of the script file inside the file itself. So if the script name changes we don’t need to remember to change the hardcoded values inside the script.

Problem demonstration

$ sqlplus /nolog @myScript.sql
connect myUser/myPwd@myTnsAlias

spool myScript.log

declare
  v_script varchar2(128);
begin
  v_script := 'myScript.sql';
  -- ... do the actual work ...
exception
   when others then 
     logger.log_error ('Error in Script '||v_script);
     raise;
end;
/

spool off

The example shows that the name of the script “myScript” is hardcoded two times inside the script itself.

So whenever the name of the script changes, there is a good chance we need to change those two hardcoded occurrences too. This includes cases where we copy this script as the basis for another script.

Here is how we can avoid the hard coding.


$ sqlplus /nolog @myScript.sql

-- enable sql*plus to track the name of any currently running script as module name in application_info
set appinfo on

-- define sql*plus substitution variables SCRIPTNAME and LOGFILENAME
column script_name new_value scriptname
column logfile_name new_value logfilename

-- fetch name of the currently running SQL*plus script 
select regexp_replace(sys_context('userenv','module'),'^\d*@ *') as script_name from dual;

-- change suffix from .sql to .log and use as log name
select replace('&scriptname.','.sql','.log') as logfile_name from dual;

-- start logging
spool &logfilename.

declare
  v_script varchar2(128);
begin
  -- move sql*plus substitution variable into plsql variable
  v_script := '&scriptname.';
  -- ... do the actual work ...
exception
   when others then 
     -- add name of currently running script to the error log
     logger.log_error ('Error in Script '||v_script);
     raise;
end;
/

spool off

Detailed code explanation

A few words about what exactly happens in this script.

Step 1) enable appinfo

First we need to enable SQL*plus to track the name of any script that is running as the module name in the application information of the session.

-- enable sql*plus to track the name of any currently running script 
-- as module name in application_info
set appinfo on

There are several ways to retrieve this information. We can see it in v$session or v$sqlarea, we can use dbms_application_info.read_module, or we can check the userenv context module.
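For example, the following two queries should both show the module (a quick sketch; the v$session variant assumes the necessary privileges):

-- via the userenv context
select sys_context('userenv','module') as module from dual;

-- via v$session (requires select privileges on the view)
select module from v$session where sid = sys_context('userenv','sid');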

Step 2) define sql*plus variables

The name of the script should be stored in a sql*plus substitution variable scriptname – which is not a bind variable.

Typically a substitution variable is created using the DEFINE command. But here we use the lesser known way of matching a column name from a select to such a variable.
column <column> new_value <variable>

-- define sql*plus substitution variables SCRIPTNAME and LOGFILENAME
column script_name new_value scriptname
column logfile_name new_value logfilename

To use such a variable the syntax is “&variable.” (without the quotes). The value of a substitution variable replaces (substitutes) the name of the variable anywhere inside the script. The ampersand “&” always marks the beginning of the variable name, and the dot “.” can mark its end, but the dot is optional. If a normal character (or even a dot) follows the substitution, then the additional dot is important to specify where the name of the substitution variable ends.

Because it is a sql*plus variable, it can even replace values inside of string literals.

Step 3) get the script name

The assumption is that a script was started, either by calling start scriptname (or short @scriptname) or by providing the name of the script when calling sqlplus from the shell. Both ways work.

We read the name of the script from the userenv: sys_context('userenv','module')

The delivered value seems to have some additional stuff at the beginning. You should check in your system how the file name exactly looks. In my case it was the string
01@ myScript.sql

Some string manipulation is needed to get rid of the first part “01@ ” of the returned string. I used a regular expression to do so. Choose whatever you prefer.

-- fetch name of the currently running SQL*plus script 
select regexp_replace(sys_context('userenv','module'),'^\d*@ *') as script_name from dual;

The regexp_replace call removes any number of digits “\d*” followed by “@” followed by a blank “ ” from the beginning “^” of the string and keeps the remainder intact.

Step 4) Get the logfilename from the scriptname and spool the log

Remember: because of the column mapping earlier, the column logfile_name provides the value for our substitution variable logfilename.

'&scriptname.' is resolved at runtime into 'myScript.sql'.

Assuming that the script has a suffix .sql (default), we replace that suffix with .log.

-- change suffix to .log and use as log name
select replace('&scriptname.','.sql','.log') as logfile_name from dual;

-- start the log
spool &logfilename.

Step 5) Track the script name for error logging

  -- move sql*plus substitution variable into plsql variable
  v_script := '&scriptname.';

The recommendation is to copy the SQL*plus substitution variable into a plsql variable, and then to use this plsql variable wherever you need it, including the error handler. In general this should only be done for code directly inside your script, not inside some packaged plsql procedure for example. The reason is that there is no guarantee that the sql*plus variable is set during all executions of the packaged procedure.


  • A working connection to the database is needed before the spool can start. So the log does not capture every command that runs inside the script. Especially it can not include any potential connect commands, nor the commands that find out the name of the script.
  • We can not generalize this code (step 3) into another script, because there the script name would be different.
  • Because substitution variables need the ampersand “&”, it is not easy to stop sql*plus from interpreting ampersands that are part of normal strings as the beginning of a substitution variable.
    Or to say it another way: be careful with set define off. A workaround sometimes is to set define off at a later point in the script, when our variables are not used anymore, or even to move the value from a substitution variable into a bind variable (as sketched below).
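A minimal sketch of that last workaround:

-- copy the substitution value into a bind variable, then disable substitution
variable script_name varchar2(128)
exec :script_name := '&scriptname.'
set define off
-- from here on ampersands inside strings are left alone
select 'Tom & Jerry' as duo from dual;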

SQL*PLUS Basics – ways to measure time

Here is a quick overview of commands in SQL*plus that help to track and measure time.

set time on/off

This displays a prompt in front of each statement with the current time. Be aware that it is not the time, when the statement was executed, but the time when the line in sql*plus was created. This difference is usually not relevant when running scripts, just something to be aware of when manually typing and executing statements in sql*plus.

SQL> set time on;
10:56:02 SQL>
10:56:10 SQL> execute dbms_session.sleep(3);

PL/SQL procedure successfully completed.

10:56:23 SQL>

The prompt shows that 13 seconds passed by. However the sleep time specified was only 3 seconds. The reason is it took me approximately 10 seconds to type the statement.

set timing on/off

The result is the runtime of a sql statement or an anonymous plsql block in SQL*plus.

SQL> set timing on;
SQL> execute dbms_session.sleep(3);

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.02

After any SQL statement or anonymous plsql blocks, sql*plus shows the elapsed time for that command.

The elapsed time is not shown after SQL*PLUS commands (like SET or SPOOL).

The time is measured on the client. So it includes the time to send the command to the server, execute it there and move the resulting information from the database server back to the sql*plus client. When very exact measurements are needed, network issues can make this elapsed time fluctuate a little more than is useful.

timing start+stop

Timing start+stop is very similar to set timing on/off. It starts a timer with timing start and shows the elapsed time after the timing stop command. This allows measuring the time to run multiple commands, whereas set timing on/off only measures a single command.

SQL> set time on;
11:14:37 SQL> timing start
11:14:45 SQL> execute dbms_session.sleep(3);

PL/SQL procedure successfully completed.

11:14:55 SQL> select * from dual;


11:15:00 SQL> execute dbms_session.sleep(1);

PL/SQL procedure successfully completed.

11:15:12 SQL> timing stop;
Elapsed: 00:00:31.48
11:15:17 SQL>

We can compare the measurement from timing with the times shown by set time on.
From 11:14:45 to 11:15:17 there are 32 seconds, which matches the elapsed time of 31.48 seconds shown by timing start+stop quite well.


_DATE

This is a predefined sql*plus substitution variable that holds the current time. It uses the session's nls settings as display format.

SQL> prompt &_DATE
SQL> alter session set nls_date_format = 'dd-mm-yyyy HH24:MI:SS';

Session altered.

SQL> prompt &_DATE
27-04-2022 11:22:20

SQL> set pagesize 10
SQL> ttitle "Important list of numbers &_DATE."
SQL> select level as num from dual connect by level <= 10;
Important list of numbers 27-04-2022 11:59:56

Important list of numbers 27-04-2022 11:59:56

10 rows selected.
SQL> ttitle OFF

This is useful when one wants to add the day and/or time to the name of a log file, for example.
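A quick sketch of that idea (assuming we control the session's nls_date_format first):

SQL> alter session set nls_date_format = 'YYYYMMDD_HH24MISS';
SQL> spool myScript_&_DATE..log
SQL> -- ... script work ...
SQL> spool off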


sysdate and systimestamp

These are SQL functions that return the current datetime. Sysdate delivers a date (incl. time up to the second), systimestamp also delivers fractional seconds. The time is derived from the server's time settings, so client settings do not influence the value. However client nls settings affect the conversion to strings.

SQL> select sysdate from dual;


SQL> alter session set nls_date_format='DD-MM-YYYY HH24:MI:SS';

Session altered.
SQL> select sysdate from dual;

27-04-2022 14:33:03

SQL> select to_char(sysdate,'FMDay, DDth Month YYYY') from dual;

Wednesday, 27TH April 2022

SQL> select systimestamp from dual;

27-APR-22 PM +02:00

SQL> alter session set nls_timestamp_tz_format='DD-MM-YYYY HH24:MI:SSXFF';

Session altered.

SQL> select systimestamp from dual;

27-04-2022 14:33:18.033306

SQL> select to_char(systimestamp,'FMDay, DDth Month YYYY HH24:MI:SSXFF TZR') from dual;

Wednesday, 27TH April 2022 14:33:22.260310 +02:00


dbms_utility.get_time

A possibility to measure very exact times (1/100 of a second) in plsql.

SQL> declare
  2   time number;
  3  begin
  4   time:=dbms_utility.get_time;
  5   dbms_session.sleep(3);
  6   dbms_output.put_line('Elapsed in sec: '||to_char((dbms_utility.get_time - time)/100));
  7  end;
  8  /
Elapsed in sec: 3.04

PL/SQL procedure successfully completed.


Be aware that the numbers returned by get_time are often negative integers, like -1618843773.

We can also use this to measure the time between several sql*plus calls, by storing the first timestamp in a bind variable. So it doesn’t need to be a single large plsql block.

SQL> var stime number;
SQL> exec :stime := dbms_utility.get_time;

PL/SQL procedure successfully completed.

SQL> exec dbms_session.sleep(3);

PL/SQL procedure successfully completed.

SQL> select * from dual;


SQL> exec dbms_output.put_line('Elapsed in sec: '||to_char((dbms_utility.get_time - :stime)/100));
Elapsed in sec: 18.62

PL/SQL procedure successfully completed.


Serveroutput needs to be switched on to see the result of dbms_output.


  • For crude but easy performance time measurements set time on and set timing on are solid.
  • For exact and comparable measurements on the database use dbms_utility.get_time.
  • To show date and time in scripts sometimes _DATE is a little easier to use than sysdate.
  • To add the full runtime of a script to the end of a spooled logfile timing start+stop can be used.

lateral join – decorrelation gone wrong

A colleague made me aware of the following misbehaviour of Oracle's optimizer. Shoutout to Christine S. who discovered the problem.


create table a 
as select 1 x from dual;

create table b 
as select 1 x, 1 y from dual union all 
   select 1 x ,2 y from dual;

select a.x, c.y
from  a,
     lateral (select b.x, max (b.y) y from b where b.x = a.x) c
where a.x = c.x;


         X          Y
---------- ----------
         1          2

At first glance this looks like what we intended to see. However a closer inspection of the code reveals that select b.x, max (b.y) y from b is not valid syntax – the group by clause is missing. We do get an error message when running it directly.

ORA-00937: not a single-group group function

We also get the same error message when we use the select with the lateral keyword to drive a materialized view.

create materialized view mv as
select a.x, c.y
from a,
     lateral (select b.x, max (b.y) y from b where b.x = a.x) c
where a.x = c.x;
ERROR at line 5:
ORA-00937: not a single-group group function

The ORA-937 error message is correct and should be displayed.

That we get a result in the first case is an error!


Oracle transforms the lateral join into a decorrelated lateral join. During this transformation the syntax error disappears. We see this when checking the outline section of the execution plan (see below). This transformation is different when used in a materialized view.

select *
from dbms_xplan.display_cursor(sql_id=>'2c1nchvqtjt6s', format=>'+outline');
Plan hash value: 3244120894

| Id  | Operation            | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT     |                 |       |       |     7 (100)|          |
|*  1 |  HASH JOIN           |                 |     1 |    29 |     7  (15)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | A               |     1 |     3 |     3   (0)| 00:00:01 |
|   3 |   VIEW               | VW_DCL_A18161FF |     1 |    26 |     4  (25)| 00:00:01 |
|   4 |    HASH GROUP BY     |                 |     1 |     6 |     4  (25)| 00:00:01 |
|   5 |     TABLE ACCESS FULL| B               |     2 |    12 |     3   (0)| 00:00:01 |

Outline Data

      OUTER_JOIN_TO_INNER(@"SEL$71877063" "C"@"SEL$1")
      FULL(@"SEL$13D91C5D" "A"@"SEL$1")
      NO_ACCESS(@"SEL$13D91C5D" "C"@"SEL$1")
      LEADING(@"SEL$13D91C5D" "A"@"SEL$1" "C"@"SEL$1")
      USE_HASH(@"SEL$13D91C5D" "C"@"SEL$1")
      FULL(@"SEL$B1519A3D" "B"@"SEL$2")

Predicate Information (identified by operation id):

   1 - access("A"."X"="C"."X")

The name of the view VW_DCL_xxx indicates that it was used for a decorrelated lateral operation. For a normal lateral operation the name of the view would be VW_LAT_xxx.


I’m a big fan of lateral joins. For more insights into what a lateral join does, check this older blog post by the Optimizer Development Group: http://optimizermagic.blogspot.com/2007/12/outerjoins-in-oracle.html

Since 12c (or maybe a little earlier than that) there have been some optimizer enhancements to lateral joins, called decorrelation. In essence it means a lateral join, which can be seen as a kind of correlated subquery, is rewritten into an uncorrelated subquery under certain specific circumstances.
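Conceptually (a sketch, not the exact internal rewrite) the decorrelated form aggregates B on its own and joins afterwards – which also matches the HASH GROUP BY over B in the plan above:

select a.x, c.y
from a
join (select b.x, max(b.y) y from b group by b.x) c
  on a.x = c.x;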

I believe this rewrite is almost always wrong. If the developer deliberately chose to use the lateral keyword (the same goes for cross apply and outer apply), the optimizer should not try to correct this.

Other authors have written about problems with decorrelation before.

The case demonstrated here is just another example where decorrelation has gone wrong. An optimizer hint should not change the result of a query, and even less should it change whether the query is formulated correctly or not. There are exceptions to this rule, where the intention of the hint is to change the outcome, for example ignore_row_on_dupkey_index or fresh_mv. But the (undocumented) DECORRELATE/NO_DECORRELATE hints are no such cases.


There are two ways to disable the decorrelation and get the expected ORA- error message.

a) use the NO_DECORRELATE hint.

SQL> select /*+ no_decorrelate(@"SEL$2") */
  2   a.x, c.y
  3  from  a,
  4       lateral (select b.x, max (b.y) y from b where b.x = a.x) c
  5  where a.x = c.x;
     lateral (select b.x, max (b.y) y from b where b.x = a.x) c
ERROR at line 4:
ORA-00937: not a single-group group function

b) switch the decorrelation feature off using the hidden (underscore) parameter _optimizer_ansi_join_lateral_enhance

SQL> alter session set "_optimizer_ansi_join_lateral_enhance"=false;

Session altered.

SQL> select a.x, c.y
  2  from  a,
  3       lateral (select b.x, max (b.y) y from b where b.x = a.x) c
  4  where a.x = c.x;
     lateral (select b.x, max (b.y) y from b where b.x = a.x) c
ERROR at line 3:
ORA-00937: not a single-group group function



Although LATERAL joins are a nifty feature, always be aware that such features often come with bugs.

dbms_stats quickie: show global and table preferences


To gather statistics for a schema or a table there is the dbms_stats package. Either we call it manually or the automatic statistics gathering (scheduled) job is used.

We can provide many settings for the statistics gathering as parameters during the gather call. For parameters that we do not explicitly set, preferences are used – either on global or on individual table level.

Since there are many preferences, this article provides some SQL statements that help to check what the current dbms_stats preferences are.

Table preferences overrule the global preferences. And parameters set by calling gather_..._stats overrule the table preferences. However it is possible to override this last rule by setting the preference PREFERENCE_OVERRIDES_PARAMETER to true. In that case the table preferences overrule the gather_..._stats parameters.
see https://docs.oracle.com/en/database/oracle/oracle-database/21/tgsql/options-for-optimizer-statistics-gathering.html#GUID-E616363F-0A7B-4D4D-9384-63934DB69F7D
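A small sketch of the three levels (table name and values are just examples):

-- global level
exec dbms_stats.set_global_prefs('STALE_PERCENT','5')

-- table level (overrules the global preference for this table)
exec dbms_stats.set_table_prefs(user,'MYTABLE','STALE_PERCENT','1')

-- parameter level: explicit parameters of the gather call normally win,
-- unless PREFERENCE_OVERRIDES_PARAMETER is set to TRUE
exec dbms_stats.gather_table_stats(user,'MYTABLE', estimate_percent=>100)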

show global preferences

Script to show the global preferences. Note that the preference AUTOSTATS_TARGET can not be set on table level, because it directly influences the scope of work for the auto stats gathering job.

set linesize 250
set pagesize 100
column preference_name format a30
column preference_value format a50

-- global preferences
with preflist (preference_name,global_only)
as (select 'APPROXIMATE_NDV_ALGORITHM',0 from dual union all
    select 'AUTO_STAT_EXTENSIONS'     ,0 from dual union all
    select 'AUTO_TASK_STATUS'         ,0 from dual union all
    select 'AUTO_TASK_MAX_RUN_TIME'   ,0 from dual union all
    select 'AUTO_TASK_INTERVAL'       ,0 from dual union all
    select 'AUTOSTATS_TARGET'         ,1 from dual union all
    select 'CASCADE'                  ,0 from dual union all
    select 'CONCURRENT'               ,0 from dual union all
    select 'DEGREE'                   ,0 from dual union all
    select 'ESTIMATE_PERCENT'         ,0 from dual union all
    select 'GLOBAL_TEMP_TABLE_STATS'  ,0 from dual union all
    select 'GRANULARITY'              ,0 from dual union all
    select 'INCREMENTAL'              ,0 from dual union all
    select 'INCREMENTAL_STALENESS'    ,0 from dual union all
    select 'INCREMENTAL_LEVEL'        ,0 from dual union all
    select 'METHOD_OPT'               ,0 from dual union all
    select 'NO_INVALIDATE'            ,0 from dual union all
    select 'OPTIONS'                  ,0 from dual union all
    select 'PREFERENCE_OVERRIDES_PARAMETER',0 from dual union all
    select 'PUBLISH'                  ,0 from dual union all
    select 'STALE_PERCENT'            ,0 from dual union all
    select 'STAT_CATEGORY'            ,0 from dual union all
    select 'TABLE_CACHED_BLOCKS'      ,0 from dual union all
    select 'WAIT_TIME_TO_UPDATE_STATS',0 from dual )
select preference_name, 
       sys.dbms_stats.get_prefs(preference_name) as preference_value
from preflist;

And the result might look like this

global preferences for dbms_stats

Show table preferences

And here is how to show the preferences for a single table (or a few tables) and compare them to the global preferences. Add your table names to the list of tables in the tabs subquery.

-- table preferences
with tabs (owner, table_name)
  as (select user , 'MYTABLE1' from dual union all
      select 'STAGE' , 'MYTABLE2' from dual union all
      select 'STAGE' , 'MYTABLE' from dual)
, preflist (preference_name, global_only)
as (select 'APPROXIMATE_NDV_ALGORITHM',0 from dual union all
    select 'AUTO_STAT_EXTENSIONS'     ,0 from dual union all
    select 'AUTO_TASK_STATUS'         ,0 from dual union all
    select 'AUTO_TASK_MAX_RUN_TIME'   ,0 from dual union all
    select 'AUTO_TASK_INTERVAL'       ,0 from dual union all
    select 'AUTOSTATS_TARGET'         ,1 from dual union all
    select 'CASCADE'                  ,0 from dual union all
    select 'CONCURRENT'               ,0 from dual union all
    select 'DEGREE'                   ,0 from dual union all
    select 'ESTIMATE_PERCENT'         ,0 from dual union all
    select 'GLOBAL_TEMP_TABLE_STATS'  ,0 from dual union all
    select 'GRANULARITY'              ,0 from dual union all
    select 'INCREMENTAL'              ,0 from dual union all
    select 'INCREMENTAL_STALENESS'    ,0 from dual union all
    select 'INCREMENTAL_LEVEL'        ,0 from dual union all
    select 'METHOD_OPT'               ,0 from dual union all
    select 'NO_INVALIDATE'            ,0 from dual union all
    select 'OPTIONS'                  ,0 from dual union all
    select 'PREFERENCE_OVERRIDES_PARAMETER',0 from dual union all
    select 'PUBLISH'                  ,0 from dual union all
    select 'STALE_PERCENT'            ,0 from dual union all
    select 'STAT_CATEGORY'            ,0 from dual union all
    select 'TABLE_CACHED_BLOCKS'      ,0 from dual union all
    select 'WAIT_TIME_TO_UPDATE_STATS',0 from dual)
,tabprefs as (select t.owner, t.table_name, preference_name, global_only,
       sys.dbms_stats.get_prefs(preference_name,null,null) as preference_global,
       case when global_only=0 
            then sys.dbms_stats.get_prefs(preference_name,t.owner,t.table_name) 
       end as preference_table
       from preflist 
       cross join tabs t)
select p.*, 
       case when global_only=1 then null
            when preference_global=preference_table then 'FALSE' 
             else 'TRUE' 
             end as individual_table_pref
from tabprefs p;

The select gives a comparison between the preferences on global and on table level.

table preferences for dbms_stats

The last column INDIVIDUAL_TABLE_PREF indicates whether the global setting was changed on table level or not. But take it with a grain of salt. Some preferences have additional restrictions; for example INCREMENTAL_LEVEL can be set to PARTITION only if the table is actually partitioned. Otherwise the default for the table will be TABLE, even if the global setting differs.

Show all changed preferences on table level for a schema

And finally here is a statement that goes through all tables in one (or several) schemas and finds settings where the global and the table preference differ. For each preference value combination it lists the number of tables and the table names.

In the tabs subquery list all the schemas you want to be checked.

-- All tables with a non default setting
with tabs
  as (select owner, table_name, partitioned 
      from dba_tables
      where owner in (user,'STAGE'))
,preflist (preference_name, global_only)
as (select 'APPROXIMATE_NDV_ALGORITHM',0 from dual union all
    select 'AUTO_STAT_EXTENSIONS'     ,0 from dual union all
    select 'AUTO_TASK_STATUS'         ,0 from dual union all
    select 'AUTO_TASK_MAX_RUN_TIME'   ,0 from dual union all
    select 'AUTO_TASK_INTERVAL'       ,0 from dual union all
    select 'AUTOSTATS_TARGET'         ,1 from dual union all
    select 'CASCADE'                  ,0 from dual union all
    select 'CONCURRENT'               ,0 from dual union all
    select 'DEGREE'                   ,0 from dual union all
    select 'ESTIMATE_PERCENT'         ,0 from dual union all
    select 'GLOBAL_TEMP_TABLE_STATS'  ,0 from dual union all
    select 'GRANULARITY'              ,0 from dual union all
    select 'INCREMENTAL'              ,0 from dual union all
    select 'INCREMENTAL_STALENESS'    ,0 from dual union all
    select 'INCREMENTAL_LEVEL'        ,0 from dual union all
    select 'METHOD_OPT'               ,0 from dual union all
    select 'NO_INVALIDATE'            ,0 from dual union all
    select 'OPTIONS'                  ,0 from dual union all
    select 'PREFERENCE_OVERRIDES_PARAMETER',0 from dual union all
    select 'PUBLISH'                  ,0 from dual union all
    select 'STALE_PERCENT'            ,0 from dual union all
    select 'STAT_CATEGORY'            ,0 from dual union all
    select 'TABLE_CACHED_BLOCKS'      ,0 from dual union all
    select 'WAIT_TIME_TO_UPDATE_STATS',0 from dual)
,tabprefs as (select t.owner, t.table_name, t.partitioned, 
       preference_name, global_only,
       sys.dbms_stats.get_prefs(preference_name,null,null) as preference_global,
       case when global_only=0 then sys.dbms_stats.get_prefs(preference_name,t.owner,t.table_name) end as preference_table
       from preflist 
       cross join tabs t)
select preference_name, preference_global, preference_table, listagg(owner||'.'||table_name,',' on overflow truncate without count) within group (order by owner, table_name) tables, count(*) table#
from tabprefs p
where global_only=0
and preference_global!=preference_table 
-- Special case: incremental_level can only be TABLE for non partitioned tables, even if default is different.
and not (preference_name='INCREMENTAL_LEVEL' and partitioned='NO' and preference_table='TABLE' and preference_global='PARTITION')
group by preference_name, preference_global, preference_table
order by preference_name, preference_global, preference_table;

Note that there is some special consideration for the previously mentioned INCREMENTAL_LEVEL preference.

tables with individual preferences

So in this case there were 203 tables (not all names shown) where the INCREMENTAL preference on table level was set to TRUE, compared to the global setting of FALSE. This indicates that the global preference was changed after most of the tables had been created.


It is hard to remember all the different settings that influence stats gathering. Especially in cases where we wonder why the settings did not seem to work as expected, it helps to check all the preferences on each level.

ODA Quickie – How to solve ODABR Error: Dirty bit is set.

The problem

A little while ago during an ODA X7-2S upgrade from 19.6 to 19.9 the following error was encountered.

SUCCESS: 2021-06-04 10:02:05: ...EFI device backup saved as '/opt/odabr/out/hbi/efi.img'
INFO: 2021-06-04 10:02:05: ...step3 - checking EFI device backup
ERROR: 2021-06-04 10:02:05: Error running fsck over /opt/odabr/out/hbi/efi.img
ERROR: 2021-06-04 10:02:05: Command: 'fsck -a /opt/odabr/out/hbi/efi.img' failed as fsck from util-linux 2.23.2 fsck.fat 3.0.20 (12 Jun 2013) 0x25: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.  Automatically removing dirty bit. Performing changes. /opt/odabr/out/hbi/efi.img: 23 files, 1245/63965 clusters
INFO: 2021-06-04 10:02:05: Mounting EFI back
ERROR: 2021-06-04 10:02:06: Backup not completed, exiting...

This seems to be a known issue for Bare Metal ODAs. But the way to solve the problem is poorly documented.

The MOS notes

The Oracle ODABR support document mentions the problem twice and gives slightly different solutions.

Check the “ODABR – Use Case” and the “Known Issues” sections.


The document also mentions Internal Bug 31435951 ODABR FAILS IN FSCK WITH “DIRTY BIT IS SET”.

From the public ODABR document

This is not an ODABR issue. ODABR is signalling a fsck error because your (in this case) efi partition is not in expected status… 
To fix this:

unmount efi
fsck.vfat -v -a -w <efidevice>
mount efi

Unfortunately the workaround is a bit vague and hard to understand. The efi partition is mounted as /boot/efi. The “efi device” is not the same as the mount point, but can be derived from it.

Here are the exact commands that helped me to solve the issue.

The solution

First check your filesystem (the output below was taken after we repaired the issue – your mileage may vary).

[root@ODA01 odabr]# df -h
Filesystem                          Size  Used Avail Use% Mounted on
devtmpfs                             94G   24K   94G   1% /dev
tmpfs                                94G  1.4G   93G   2% /dev/shm
tmpfs                                94G  4.0G   90G   5% /run
tmpfs                                94G     0   94G   0% /sys/fs/cgroup
/dev/mapper/VolGroupSys-LogVolRoot   30G   11G   17G  40% /
/dev/mapper/VolGroupSys-LogVolU01   148G   92G   49G  66% /u01
/dev/mapper/VolGroupSys-LogVolOpt    59G   43G   14G  77% /opt
tmpfs                                19G     0   19G   0% /run/user/1001
tmpfs                                19G     0   19G   0% /run/user/0
/dev/asm/commonstore-13             5.0G  367M  4.7G   8% /opt/oracle/dcs/commonstore
/dev/asm/reco-215                   497G  260G  238G  53% /u03/app/oracle
/dev/asm/datredacted-13             100G   28G   73G  28% /u02/app/oracle/oradata/redacted
/dev/asm/datredacted2-13            100G   74G   27G  74% /u02/app/oracle/oradata/redacted2
/dev/md0                            477M  208M  244M  47% /boot
/dev/sda1                           500M  9.8M  490M   2% /boot/efi

This shows us the “efi device” is /dev/sda1

Then we did the steps as described in the documentation:

[root@ODA01 odabr]# umount /boot/efi

[root@ODA01 odabr]# fsck.vfat -v -a -w /dev/sda1
fsck.fat 3.0.20 (12 Jun 2013)
fsck.fat 3.0.20 (12 Jun 2013)
Checking we can access the last sector of the filesystem
0x25: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
 Automatically removing dirty bit.
Boot sector contents:
System ID "mkdosfs"
Media byte 0xf8 (hard disk)
       512 bytes per logical sector
      8192 bytes per cluster
        16 reserved sectors
First FAT starts at byte 8192 (sector 16)
         2 FATs, 16 bit entries
    131072 bytes per FAT (= 256 sectors)
Root directory starts at byte 270336 (sector 528)
       512 root directory entries
Data area starts at byte 286720 (sector 560)
     63965 data clusters (524001280 bytes)
63 sectors/track, 255 heads
         0 hidden sectors
   1024000 sectors total
Reclaiming unconnected clusters.
Performing changes.
/dev/sda1: 23 files, 1245/63965 clusters

[root@ODA01 odabr]# mount /boot/efi

After this, we could successfully create an ODABR snapshot.

[root@ODA01 odabr]# ./odabr backup -snap -osize 50 -usize 80
INFO: 2021-06-04 12:14:49: Please check the logfile '/opt/odabr/out/log/odabr_87615.log' for more details

 odabr - ODA node Backup Restore - Version: 2.0.1-62
 Copyright Oracle, Inc. 2013, 2020
 Author: Ruggero Citton <ruggero.citton@oracle.com>
 RAC Pack, Cloud Innovation and Solution Engineering Team

INFO: 2021-06-04 12:14:49: Checking superuser
INFO: 2021-06-04 12:14:49: Checking Bare Metal
INFO: 2021-06-04 12:14:49: Removing existing LVM snapshots
WARNING: 2021-06-04 12:14:49: LVM snapshot for 'opt' does not exist
WARNING: 2021-06-04 12:14:49: LVM snapshot for 'u01' does not exist
WARNING: 2021-06-04 12:14:49: LVM snapshot for 'root' does not exist
INFO: 2021-06-04 12:14:49: Checking LVM size
INFO: 2021-06-04 12:14:49: Boot device backup
INFO: 2021-06-04 12:14:49: Getting EFI device
INFO: 2021-06-04 12:14:49: ...step1 - unmounting EFI
INFO: 2021-06-04 12:14:50: ...step2 - making efi device backup
SUCCESS: 2021-06-04 12:14:54: ...EFI device backup saved as '/opt/odabr/out/hbi/efi.img'
INFO: 2021-06-04 12:14:54: ...step3 - checking EFI device backup
INFO: 2021-06-04 12:14:54: Getting boot device
INFO: 2021-06-04 12:14:54: ...step1 - making boot device backup using tar
SUCCESS: 2021-06-04 12:15:05: ...boot content saved as '/opt/odabr/out/hbi/boot.tar.gz'
INFO: 2021-06-04 12:15:05: ...step2 - unmounting boot
INFO: 2021-06-04 12:15:05: ...step3 - making boot device backup using dd
SUCCESS: 2021-06-04 12:15:10: ...boot device backup saved as '/opt/odabr/out/hbi/boot.img'
INFO: 2021-06-04 12:15:10: ...step4 - mounting boot
INFO: 2021-06-04 12:15:10: ...step5 - mounting EFI
INFO: 2021-06-04 12:15:11: ...step6 - checking boot device backup
INFO: 2021-06-04 12:15:12: OCR backup
INFO: 2021-06-04 12:15:13: ...ocr backup saved as '/opt/odabr/out/hbi/ocrbackup_87615.bck'
INFO: 2021-06-04 12:15:13: Making LVM snapshot backup
SUCCESS: 2021-06-04 12:15:13: ...snapshot backup for 'opt' created successfully
SUCCESS: 2021-06-04 12:15:15: ...snapshot backup for 'u01' created successfully
SUCCESS: 2021-06-04 12:15:15: ...snapshot backup for 'root' created successfully
SUCCESS: 2021-06-04 12:15:15: LVM snapshots backup done successfully

Side note: We used smaller backup sizes to circumvent issues with odabr claiming there was not enough space for the snapshots, although there was enough space. But this was not connected to the “dirty bit” issue.

I hope this helps others to troubleshoot their ODA.

plsql function: who_called – simplify your exception handler!

Does your error logging logic still look like this?

example 1 – old school handler

procedure doSomething is
begin
  ...
exception
  when others then
     pk_log.error(p_module    => $$PLSQL_UNIT, 
                  p_submodule => 'doSomething', 
                  p_message   => 'Something not done!'||sqlerrm);
end doSomething;

Here is what I prefer!

example 2 – compact handler

procedure doSomething is
begin
  ...
exception
  when others then
     pk_log.error('Something not done!');
end doSomething;

Such code is way more convenient! Of course it will store the same information as the previous example 1.

To get there some parts have to be moved into the logging method (pk_log.error).

  1. sqlerrm
    is the error message.
    When moving it, it should also be improved (at least use dbms_utility.format_error_stack).
  2. $$PLSQL_UNIT
    is the name of the package (or of the standalone procedure or function).
  3. ‘doSomething’
    is the hardcoded name of the procedure.
    It needs to be gathered dynamically at runtime inside the logging package.

The biggest issue is 3.) – to find the name of the module at runtime. The function who_called can solve that. It also gathers the package name on the way, so 2.) is also covered.

Side note: Is hardcoding the procedure name a problem?

Not really. However, if the name changes one needs to change it in several places. And in case one forgets to change the error handler, the compiler will not issue an error. Then the problem might go unnoticed for some time and could be misleading in case of exceptions.

I have two versions of the who_called function. The first, standalone version is not recommended for use. Instead use the packaged version. However the first version demonstrates the implemented logic in a clear way.

Both functions will find the full name of the module that called the current code block. The default behaviour is that modules from the same package are ignored. This allows easy usage inside packages meant for logging, auditing or testing. Everything that is instrumentation code can usually profit from this little helper.

Usecase 1 – demo purposes – standalone function

This logic assumes that we use the function to find out who called the procedure that is currently running. So not the name of the procedure itself, but the name of the caller.

Installation code

-- Cleanup
-- drop function who_called;

-- Usecase 1: standalone function - simple demo version

-- check the call stack to find the calling module.
create or replace function who_called return varchar2
is
    v_caller utl_call_stack.unit_qualified_name;
    v_name   varchar2(128);
begin
    -- print the call stack just for demo purposes
    for i in 1..utl_call_stack.dynamic_depth loop
      dbms_output.put_line(
         utl_call_stack.concatenate_subprogram( utl_call_stack.subprogram(i) ));
    end loop;

    -- step backwards though the call stack 
    --      1 = current module = who_called function
    --      2 = the module that calls who_called 
    --      3 = the caller!
    v_caller := utl_call_stack.subprogram(3);

    v_name := utl_call_stack.concatenate_subprogram(v_caller);

    return v_name;
end who_called;
/

example 3 – usage

set serveroutput on

-- Testcase 1.1 anonymous block
declare
  procedure DoSomething as
  begin
    dbms_output.put_line('I want to doSomething!');
    dbms_output.put_line('Who called? '|| who_called);
  end DoSomething;

  procedure DoMore as
  begin
    DoSomething;
    dbms_output.put_line('I want to doMore!');
    dbms_output.put_line('Who called? '|| who_called);
  end DoMore;
begin
  DoMore;
end;
/

I want to doSomething!
Who called? __anonymous_block.DOMORE
I want to doMore!
Who called? __anonymous_block

PL/SQL procedure successfully completed.

Each time the function is called, it prints the full call stack to the screen. This is just for demonstration purposes.
The first time, who_called is executed from inside the submodule doSomething.
The call stack at this point looks like this:

Position in Stack    Module
1                    WHO_CALLED
2                    __anonymous_block.DOSOMETHING
3                    __anonymous_block.DOMORE
4                    __anonymous_block

Line 5 in the anonymous block is the line with the who_called function (call stack 1). It is part of the doSomething procedure (call stack 2). On position 3 of the call stack we always find the caller – here the procedure doMore that did execute doSomething.

The second time the function is called is in the doMore procedure. And the call stack looks like this:

Position in Stack    Module
1                    WHO_CALLED
2                    __anonymous_block.DOMORE
3                    __anonymous_block

In this case the caller is the anonymous block.

The example clearly shows that we can fetch the name of any module in the call stack (including anonymous blocks). We just have to fetch the name from the proper level in the call stack.

Usecase 2 – error and trace logging – packaged version

This works a bit differently when used inside a larger package that is used to instrument the code base. Typically such a package adds error logging logic and debug (trace) calls. In that scenario, we want to find out the name of the module where the trace or error logging call is made.

The logic assumes that we have a package used for this instrumentation purpose only. This also means that all calls inside this package can be ignored. So essentially we want to find out the full name of the procedure (or function) where the log error or the log trace call is done. The full name can be something like package.module.submodule.subsubmodule.

Installation code

When testing this, make sure you do not already have/own a package pk_log. It would be overwritten.

If you like it, copy the code for the who_called function into your own instrumentation package.

-- Cleanup
-- drop function who_called;

-- who_called packaged version 
-- check the call stack to find the calling module.
-- calls from this package are not considered
create or replace package pk_log as
/* this is a very simplified example for error logging 
   only demonstration purposes of the WHO_CALLED function 
   your error logging logic must be better! */
  procedure error(message in varchar2);
  procedure trace(message in varchar2); 
end pk_log;
/

create or replace package body pk_log as
    function who_called return varchar2
    is
        -- author: Sven Weller, 2021, syntegris information solutions GmbH
        v_pkg    varchar2(128) := $$plsql_unit; -- name of the package that holds the who_called function
        v_caller utl_call_stack.unit_qualified_name;
        v_name   varchar2(128);
    begin
        -- step backwards though the call stack 
        --      1 = current module = who_called function
        --      2 = the module that calls who_called 
        --      3 = the caller!
        -- we could start later (level 3 or even 4), if we knew exactly how many other submodules are always in the call stack 
        -- This might be risky if function inlining is possible, but that can be prevented by a pragma. Example in proc error
        for i in 2..utl_call_stack.dynamic_depth loop

          v_caller := utl_call_stack.subprogram(i);

          v_name := utl_call_stack.concatenate_subprogram(v_caller);

          -- at lexical depth 1 we find the name of the package
          if v_caller(1) != v_pkg then
            -- In some scenarios, we might want to go even one step upward the call stack. Typically not for error logging purposes.
            -- In such cases we could simply fetch one more name from the call stack
            -- v_caller := utl_call_stack.subprogram(i+1);
            -- build the full name, but only if we found the caller
            v_name := utl_call_stack.concatenate_subprogram(v_caller);
            -- no need to go further through the call stack
            exit;
          end if;
        end loop;

        return v_name;
    end who_called;

    procedure writeMessage(message in varchar2, message_type in varchar2) as
    begin
      case message_type 
      when 'E' then
        sys.dbms_output.put_line('ERROR at ' || who_called);
      when 'T' then
        sys.dbms_output.put_line('Info from ' || who_called);
      else
        sys.dbms_output.put_line('ERROR! Unknown Message Typ ' || message_type || ' in '||who_called);
      end case;        

      sys.dbms_output.put_line(message);

      if message_type = 'E' then 
        -- Add full error stack
        sys.dbms_output.put_line(dbms_utility.format_error_stack);
        -- Add line where the error happened
        -- Only the last line from the backtrace is relevant, all other lines are already in the error stack
        -- compare output to sys.dbms_output.put_line(dbms_utility.format_error_backtrace);
        sys.dbms_output.put_line(' in '||nvl(utl_call_stack.backtrace_unit(utl_call_stack.backtrace_depth),'anonymous block')
                               ||' at line '||sys.utl_call_stack.backtrace_line(utl_call_stack.backtrace_depth));
      end if;
    end writeMessage; 

    procedure error(message in varchar2) as
      pragma autonomous_transaction;
    begin
      pragma inline(writeMessage,'NO'); -- we don't want to inline in case of optimization_level = 3
      writeMessage(message,'E' );
    end error;

    procedure trace(message in varchar2) as
      pragma autonomous_transaction;
    begin
      writeMessage(message,'T' );
    end trace;   
end pk_log;
/

example 4 – call package, log trace and log error

set serveroutput on

-- Testcase 2.1 anonymous block
declare
  procedure DoSomething as
  begin
    pk_log.trace('I want to doSomething!');
  end DoSomething;

  procedure DoMore as
  begin
    DoSomething;
    raise no_data_found;
  end DoMore;
begin
  DoMore;
exception
  when others then
    pk_log.error('I wanted to do more, but this happend!');
    raise;
end;
/

This is the dbms_output (the simplified trace and error logs).

Info from __anonymous_block.DOSOMETHING
I want to doSomething!
ERROR at __anonymous_block
I wanted to do more, but this happend!
ORA-01403: no data found
ORA-06512: at line 10

 in anonymous block at line 13

The output is followed by the exception from the final RAISE of the block (here shown from SQL Developer).

Error starting at line : 4 in command -
  procedure DoSomething as
    pk_log.trace('I want to doSomething!');

  procedure DoMore as
    raise no_data_found;
  when others then
    pk_log.error('I wanted to do more, but this happend!');
Error report -
ORA-01403: no data found
ORA-06512: at line 17
ORA-06512: at line 10
ORA-06512: at line 13
01403. 00000 -  "no data found"
*Cause:    No data was found from the objects.
*Action:   There was no data from the objects which may be due to end of fetch.

In SQL*Plus we get one more line, because of the additional RAISE in line 17.

There is something special in the code: the last line from the error backtrace is captured and returned – "in anonymous block at line 13".

This last line is missing in the error stack, which means it is usually not available when you only store dbms_utility.format_error_stack.

To add this last line, utl_call_stack.backtrace_line(utl_call_stack.backtrace_depth) is executed. All other lines from the error backtrace are already in the error stack (depending on the DB version) – so those can be ignored.


For my purposes it works fine and the resulting code is very clean.

I currently use it in an audit package based upon Connor McDonald's Audit Generator, which was massively reworked for some custom needs. Maybe I'll publish it one day. But that is another story.

who_called advantages

  • Way less to type – less mistakes
  • much easier to see what the error handler does
  • concentrate coding effort on the important bits – mostly the additional information that should be added to the error

who_called disadvantages

  • A tiny bit slower than hardcoding the package and submodule name
    => For exception handling this is not relevant.
    => In huge debug or trace scenarios this time could add up if extreme performance is needed. Conditional compilation can be applied to remove instrumentation calls if you compile for extreme plsql performance.
  • since part of the logic is hidden, other developers might be tempted to add the module name to the message anyway
  • module name is fetched at run time, not at compile time
    => this can be an issue, especially if optimization level 3 is used. Function inlining (check the compiler warnings) is a method the plsql optimizer can choose to speed up logic. However it also means that the function will not exist anymore at runtime, and utl_call_stack will not be able to report the name of the module correctly. Note that there is a compiler warning when inlining happened (which usually is a good thing).

Feel free to adapt this code snippet to your special needs. I’m convinced this will help you to write simpler and more efficient error handlers.

My favorite top 10 new features in Oracle database 19c


This database version has been out for 2 years now. Why talk about new features now?

Many of my customers recently made the upgrade to the 19c database. Usually from 12.1 or 12.2, at least one from an even older version. So I compiled a list of things that I enjoy using and now having available in a 19c environment.

Be aware, 19c is not a major new database release, it is just a rebranded 12.2.0.3. However it is the final (=long term support) release of the Oracle 12.2 database family. As such it already received a stack of features backported from 21c. And some of them look very useful to me.

This is a highly subjective compilation. My time is spent 75% on development, 50% on DBA stuff and 25% on management (yes, those areas overlap) – which might explain some of the preferences.

10 Gradual Database Password Rollover for Applications (19.12)

A new profile parameter PASSWORD_ROLLOVER_TIME allows changing a database account password without downtime for the application that needs to use this password.

See also: https://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/gradual-database-password-rollover-for-applications-222774864.html

This parameter was originally developed for 21c and was backported in version 19.12. It can be set for a profile, but also the alter user syntax was enhanced.

Essentially it means for a certain time a user can login with either the old or with the new password. The maximum allowed time is 7 days.
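A quick sketch of how this could look (profile and user names are made up):

-- allow a rollover window of 1 day for all users with this profile
alter profile app_users limit password_rollover_time 1;

-- after the password change, the old and the new password both work
-- until the rollover window closes
alter user app_account identified by "new#Secret#2022";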

Some accounts (administrative) can not use this, probably for security reasons.
ORA-28227: Gradual password rollover is not supported for administrative users.

Julian Dontcheff explains the parameter in more detail:

For security consideration check this post by Rodrigo Jorge: https://www.dbarj.com.br/en/2020/12/21c-gradual-database-password-rollover-brings-new-backdoor-opportunities/

9 Hybrid Partitioned Tables

Partitioned external tables were introduced in 12.2. Such tables use an external table driver (ORACLE_LOADER or ORACLE_DATAPUMP) to get the data from a file or even from a cloud source (see also DBMS_CLOUD). And each partition can have a different file or even a different oracle directory as its source.

Now we can have tables that have external partitions and normal table partitions. They are called hybrid partitioned tables and are a special case of the external partitioned tables.

The base idea is that actively used partitions would stay in the database, but rarely used partitions can be externalised into cheaper storage systems and out of the DB.

The interesting part is that all external partitioned tables, when used in queries, can profit from certain partitioning operations, like partition pruning and partition-wise joins.

Of course the partitioning license is needed for this (or use 21c XE)!
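A rough sketch of the syntax (directory, file and table names are made up):

-- one cold partition lives in a csv file, the current partition stays in the database
create table orders_hybrid (
  order_id    number,
  order_month number
)
external partition attributes (
  type oracle_loader
  default directory archive_dir
  access parameters (fields terminated by ',')
  reject limit unlimited
)
partition by range (order_month) (
  partition p_old values less than (202101) external location ('orders_old.csv'),
  partition p_cur values less than (202201)  -- internal partition
);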

Further readings:


8 MAX_IDLE_BLOCKER_TIME

In addition to MAX_IDLE_TIME there is a new parameter that can restrict the duration of a database session: MAX_IDLE_BLOCKER_TIME. Both are initialization parameters, but also resource plan directives.

Setting such a parameter to 0 means the session is unrestricted. Other values (for the initialization parameters) are in minutes.

MAX_IDLE_BLOCKER_TIME only limits idle sessions that hold resources needed by other sessions. This is the much better option, because connection pool sessions from application servers are usually idle when they are not in use. Those sessions are not touched if we, for example, set MAX_IDLE_TIME to 0 and MAX_IDLE_BLOCKER_TIME to 30.
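A minimal sketch of that suggestion:

-- leave purely idle sessions (e.g. connection pools) alone,
-- but limit idle sessions that block others to 30 minutes
alter system set max_idle_time = 0 scope = both;
alter system set max_idle_blocker_time = 30 scope = both;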


7 Bitmap Based Count Distinct SQL Function

Essentially this allows creating materialized views (MV) using COUNT(DISTINCT ...) over multiple dimensions. Large aggregations in DWH environments are where this technique shines.

Use case

Imagine a statement like this

select count(distinct product) unique_#_of_products,
          count(distinct customer) unique_#_of_customers
from orders
where order_month = 202108;

If this statement is slow, we can improve it using a materialized view. However such a view is very difficult to write if we allow different levels of hierarchies and filters, like day, month and year for a time dimension.

The problem is that we can not aggregate the distinct count from a lower level to a higher level. The distinct customer count for a single month could be 3. If each month in this year has the same distinct customer count of 3, we still don’t know if it was the same customer each month or totally different customers. All we can deduce is that the distinct customer count for this year is at least 3 and at most 36 (=3*12).

With 19c we got several new BITMAP_* functions that combined help us to create a materialized view for such a scenario. It is a complex task.

Connor McDonald explains how the logic works: https://connor-mcdonald.com/2020/08/17/faster-distinct-operations-in-19c/

Five new functions are needed to implement this – three scalar functions and two aggregate functions (those that end with _AGG):

  • BITMAP_BUCKET_NUMBER
  • BITMAP_BIT_POSITION
  • BITMAP_COUNT
  • BITMAP_CONSTRUCT_AGG
  • BITMAP_OR_AGG

Also see Dani Schniders take on the subject: https://danischnider.wordpress.com/2019/04/20/bitmap-based-countdistinct-functions-in-oracle-19c/
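To give a rough idea of the technique, here is a condensed sketch along the lines of those two articles (table and column names follow the example above; refresh options and fast-refresh details are omitted):

-- build: one bitmap per (order_month, bucket) marking which customers appeared
create materialized view mv_orders_cust as
select order_month,
       bitmap_bucket_number(customer) bkt,
       bitmap_construct_agg(bitmap_bit_position(customer)) bmp
from orders
group by order_month, bitmap_bucket_number(customer);

-- query: merge the monthly bitmaps per year, then count the set bits
select order_year, sum(cnt) unique_customers
from (select trunc(order_month/100) order_year, bkt,
             bitmap_count(bitmap_or_agg(bmp)) cnt
      from mv_orders_cust
      group by trunc(order_month/100), bkt)
group by order_year;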

6 SQL Macros (19.7)

SQL Macros were announced for 20c.

There are two kinds of SQL macros – scalar and table macros. In 19c we only have TABLE macros (so far).

Scalar macros can be used in most clauses of a SQL statement (typically a select): the select list, the where clause, the order by clause. Table macros can only be used in the from clause.

I was quite sceptical about SQL macros, but I’m starting to see how useful they can be. Personally I think the scalar macros are more usable, but we don’t have them in 19c yet. The table macros are the fancier things of course.

Macros are implemented using a plsql function. This function is resolved at compile time (not at runtime!) and returns SQL text that replaces the function call. We can see it as some kind of advanced text substitution.
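A minimal sketch of a table macro (function and parameter names are made up):

-- the function returns the SQL text that replaces the call at parse time
create or replace function top_n (p_tab dbms_tf.table_t, p_rows number)
  return varchar2 sql_macro
is
begin
  return 'select * from p_tab fetch first p_rows rows only';
end;
/

-- usable in the from clause like a parameterised view
select * from top_n(user_tables, 3);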


Further reading: SQL Macros – Creating parameterised views


5 JSON_MERGEPATCH

JSON_MERGEPATCH is a new SQL function that allows partial updates to JSON data (for example stored in a CLOB).


It allows easy patching of JSON objects. However we can not as easily add entries to a JSON array, for example. To do so, the whole array needs to be replaced.
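For example, to “append” an entry to an array the patch has to supply the complete new array:

select json_mergepatch('{"tags":["a","b"]}',
                       '{"tags":["a","b","c"]}') as patched  -- whole array replaced
from dual;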

The clob is copied during the mergepatch process. If performance or memory is relevant, keep that in mind.

Examples from the docs

UPDATE j_purchaseorder 
SET po_document =
  json_mergepatch(po_document, '{"Special Instructions":null}');

SELECT json_mergepatch(
         treat('{"Order":"0815","Special Instructions":"Handle with Care"}' as json),
         '{"Special Instructions":null}'
         RETURNING CLOB PRETTY)
  FROM dual;

{
  "Order" : "0815"
}

4 Easy Connect Plus

Easy Connect Plus allows everything that previously had to be configured in a tnsnames.ora or sqlnet.ora. So no tnsnames.ora is needed anymore, even when setting some special options for the connection.


Easy connect:

host:port/service_name

Easy Connect Plus:

[protocol://]host1[,host2][:port][/service_name][?parameter1=value1&parameter2=value2]
Important to note here is a way to specify the protocol, which can be TCPS or TCP for example. Multiple servers are supported, which is very useful in case of a data guard environment. And there is a way to add parameters.

Some common parameters could be:

  • failover
  • wallet_location
  • sdu
  • retry_count

What those parameters do can be seen in the white paper about Easy Connect Plus.
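For example, a connect string for a data guard pair over TCPS could look like this (host, port and service names are made up):

tcps://dbhost1,dbhost2:1521/app_service?wallet_location=/etc/oracle/wallet&retry_count=3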



Further readings

White paper from Oracle

Easy Connect Plus in scripts against environments protected by Data Guard (German article) by Marco Mischke


3 SQL/JSON Simplifications

Dot notation for JSON inside SQL.


The following example is taken directly from LiveSQL (link below).

drop table t;
create table t (id number, col clob check (col IS JSON));
insert into t values(1, '{a:[1, "1", "cat", true, null, false, {}, ["dog","mouse"]], b:"json is fun"}');

-- The so-called JSON simplified syntax or 'dot notation' allows to use dots and square brackets to navigate into a SQL expression if it is known to be JSON.

select t.col.a from t t;

[1,"1","cat",true,null,false,{},["dog","mouse"]]

select t.col.b from t t;

json is fun

Also see treat( ... as json)

More on LiveSQL:

JSON Path expressions and simplified syntax

2 3 PDBs for Standard Edition 2 (19.5)

Multitenant for Standard Edition!

Starting from 19c all databases including Standard Edition 2 can use up to 3 PDBs without any additional license.

That means for example we can now clone a PDB from database TEST to database INTEGRATION without unplugging the existing PDBs (as long as the total number excluding the PDB$SEED is equal to or less than three).

The parameter MAX_PDBS influences how many PDBs you are allowed to create. It should be set to 3 if you are on a Standard Edition and have the recent patches.
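For example:

-- set in the CDB root
alter system set max_pdbs = 3;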

See also: https://mikedietrichde.com/2019/09/17/3-pdbs-included-with-oracle-19c-and-desupport-of-non-cdbs-with-20c/

I also remember that the limit was later increased further to 5. But I didn’t find any documentation for that, so I can only recommend using 3 PDBs if you are on SE2.

1 LISTAGG distinct

LISTAGG is a very useful aggregate function (also with an analytic version) mostly used in reporting environments. For example we can create a list of all employees that belong to a department.

All following code examples have been run on LiveSQL.

listagg(e.last_name,',') within group (order by e.last_name)

Since each employee is always only in one department, that is a fairly easy list to do.

How about if we want to see the list of jobs that are in each department?

listagg(j.job_title,',') within group (order by j.job_title)

Unfortunately this does repeat all the jobs.

Listagg distinct avoids those duplications.

listagg(distinct j.job_title,',') within group (order by j.job_title)
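Put together as a complete statement (using the HR sample schema from LiveSQL), it could look like this:

select d.department_name,
       listagg(distinct j.job_title,',') within group (order by j.job_title) jobs
from departments d
join employees e on e.department_id = d.department_id
join jobs j on j.job_id = e.job_id
group by d.department_name;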

Thanks to @oraesque who suggested this feature in the Database Ideas Space.

Try it on LiveSQL: https://livesql.oracle.com/apex/livesql/file/content_HT1O85E4BHSBWN93G1B3M8SI2.html

Honorable mentions:

Some features did not make it into the list. I still like to make a quick mental note about them, but without further explanation.

  • Polymorphic table functions (18c)
  • dynamic sequence cache (19.10 – seems a bit shaky at the moment)
  • blockchain tables
  • dbms_hprof trace output in the database (18c)

10 little Oracle SQL features – that you are probably not using enough

This is a collection of some small features, patterns or snippets, that I seem to use frequently and like to share. The features are written in no specific order. However we can structure the features into performance related [P], convenience [C] or enablers [E].

Enablers [E] would be features that allow us to do something, that is otherwise impossible or at least very hard to achieve. Sometimes even to avoid an error.

Convenience [C] features are things that offer an alternative construct. Often they can be used instead of another option. Sometimes making the code easier to change (maintainability).

Performance [P] features improve execution speed. Often they come with a drawback or one should keep exceptions or rare race conditions in mind.

1. find multiple words using regexp_like [C]

Instead of writing multiple like conditions (or repeating the same select using like multiple times), we can write a single regexp_like expression.


where (str like '%word1%' OR str like '%word2%' OR ...)

Much easier:

where regexp_like(str,'(word1|word2|...)')

The pipe “|” inside the regular expression acts as the OR operator and the parentheses () form the subexpression that is needed for that OR.

A very typical case is to query the v$parameter table for a set of parameter names.

select name, value 
from v$parameter 
where regexp_like(name,'(listen|dispatch|service)');
NAME          VALUE
dispatchers   (PROTOCOL=TCP)(SERVICE=feno1podXDB)

What I especially like about it is that the need for wildcards is gone and that it is so easy to add more “words” to search for.

2. fetch first row only [C]

Do you still use rownum frequently? Why not use the row limiting clause instead?

I use it a lot, mostly for ad hoc queries. One advantage is that the need to create an inner query that does the correct ordering disappears.


-- show the top 3 tables with the most rows
select table_name, num_rows
from user_tables
order by num_rows desc
fetch first 3 rows only;

3. use expression lists (a,b) [E]

I use this frequently in update statements but also sometimes in joins. Instead of setting each column separately we can set multiple columns at once, simply by using parentheses and a comma between the columns.


update persons u
set (u.first_name, u.last_name) = (select a.firstname, a.name 
                                   from applicants a 
                                   where a.person_id = u.id)
where u.status = 'NEEDS REFRESH';

Or we can filter on a combination of columns (a,b) in ((c,d),(d,c),(e,f)). The need for this should be rare, but it happens. A strange edge case is when we do an equality comparison of two expression lists. For some reason the right side needs an additional set of ().

(a,b) = ((c,d))

4. De Morgan's law [C,E]

This might be the single most important math/logic rule that SQL developers should know. It is usually applied inside the where clause, and knowing it by heart will prevent a lot of easy mistakes.

\overline{A \cup B} = \overline{A} \cap \overline{B}
\overline{A \cap B} = \overline{A} \cup \overline{B}

De Morgan's laws

written as a SQL expression

not(A or B) = not(A) and not(B)

not(A and B) = not(A) or not(B)

It is easy to remember: OR changes to AND when the parentheses are resolved (and the other way round).
A and B here are full SQL expressions, for example x=y.

Why is that so important? SQL is a very logic based language. In daily speech we often use logical operators (AND/OR/NOT) differently from what is needed when they are applied in SQL. This can become confusing very easily. Knowing De Morgan's law helps to quickly check if the logic is used correctly.

An example

Task: “Find all employees that are not working in sales and marketing.”

Converting this sentence 1:1 into SQL would result in this:

select * from emp
where not (department = 'SALES' and department = 'MARKETING'); 

Obviously what is meant is that we want to find those employees that are working in some other department.

Applying De Morgan's law, we can reformulate our select statement. Also we replace NOT (x=y) with x!=y.

select * from emp
where  department != 'SALES' 
    OR department != 'MARKETING'; 

A developer should understand now that this condition will always be true. For each row the department is either sales or not sales. And if it is sales, then it is not marketing. So the combination is always true (excluding NULL value considerations). Which is probably not what is wanted.

Here the row logic also plays a part. The where clause is applied to a single row, but normal speech often uses the boolean operation to combine data sets.

The correct query would be

select * from emp
where  not (department = 'SALES' 
             OR department = 'MARKETING'); 

or even better use IN instead of OR

select * from emp
where  department not in ( 'SALES' , 'MARKETING'); 

Historic anecdote: De Morgan was not the first one to discover this logic. Centuries before De Morgan, a guy called William of Ockam already wrote about it. He probably had it from Aristotle. Occam (the spelling changed over the centuries) nowadays is more famous for his razor.

5. rollup and grouping sets [C]

To get a total row for a query that uses sum or count, simply add rollup.

example rollup

select tablespace_name, count(*)
from dba_tables 
group by rollup (tablespace_name);
TABLESPACE_NAME   COUNT(*)
DATA                   362
SYSAUX                1357
SYSTEM                1056

The rollup adds a last line containing the total number of tables. One problem here is tables without a tablespace (NULL). The grouping() function helps to separate a normal row with a data value NULL from a superaggregate row (19c grouping).

If there are multiple columns then instead of rollup I use grouping sets. The full set (=total) in a grouping set expression can be expressed by ().

example grouping sets + grouping

select case when grouping(tablespace_name) = 1 then '-all-' else tablespace_name end as tablespace_name
     , case when grouping(cluster_name) = 1 then '-all-' else cluster_name end as cluster_name
     , count(*) 
from dba_tables 
group by grouping sets ((tablespace_name, cluster_name),());
TABLESPACE_NAME   CLUSTER_NAME   COUNT(*)
                                      332
DATA                                  362
SYSAUX                               1356
SYSTEM                               1020
SYSTEM            C_RG#                 2
SYSTEM            C_TS#                 2
SYSTEM            C_OBJ#               17
SYSTEM            C_COBJ#               2
SYSTEM            C_MLOG#               2
SYSTEM            C_USER#               2
DBFS_DATA                               2
SAMPLESCHEMA                           14
-all-             -all-              3113

Again the last line shows the total number of tables. Since I didn’t want to see subtotals for tablespace or cluster, grouping sets are the perfect solution to add this total row.

Also notice that the first line has all null values for the names, same as the total line would have. Using the grouping function allows us to find out which is the total line and give it a meaningful text.

6. enhanced listagg [E]

In the more recent database versions, the very useful LISTAGG command got even better. For production code I nowadays always try to remember to put some safety protection in place in case the result of the aggregation becomes big. Otherwise we could get ORA-01489: result of string concatenation is too long.

Since 12.2 we can avoid the error by using the OVERFLOW clause

listagg (... on overflow truncate without count) ...

So when the maximum string size is reached (4000 bytes or 32k bytes depending on the max_string_size parameter), we get usable text instead of the statement raising an error.


select count(*)
, listagg(table_name,', ' on overflow truncate) within group (order by tablespace_name desc, table_name) all_tables
from dba_tables;

The three dots “…” are called an ellipsis and can be configured. without count avoids appending the number of omitted entries to the end of the list; with count is the default if truncate is specified.

Although the overflow clause is very useful, the ultimate goal would be to give the developer more influence over it. Recently there was an interesting Twitter discussion around that topic.

Another useful enhancement (19c) was LISTAGG distinct (example on LiveSQL).

7. Using sys_context [C,P]

A sys_context is something like a global variable in other languages. The normal context is for the session, but it is also possible to use application wide contexts.

Oracle provides several “preconfigured” contexts for us. The most common are ‘USERENV’ and ‘APEX$SESSION’ (for apex developers). Contexts are also used for security policies with VPD.

Here are the contexts that I frequently like to use

  • sys_context('userenv','client_identifier')
    value set by dbms_application_info.set_client_info
  • sys_context('userenv','current_edition_name')
    when using edition based redefinition (ebr), this shows which edition I’m in. Always good to double check!
  • sys_context('userenv','current_user')
    similar to the pseudocolumn user. The schema name that we connected with.
    In some Oracle versions (12.1) it was much faster than user; my recent test shows that this performance difference is now gone (19c)
  • sys_context('userenv','proxy_user')
    When doing a proxy connect then this is the name of the authenticated user, not the target schema
  • sys_context('userenv','os_user')
    useful when client_identifier is empty, for example the name of the Windows login account when using SQL Developer under Windows.
  • sys_context('apex$session','app_user')
    apex authenticated user
  • sys_context('apex$session','app_id')
    id of the current apex app
  • sys_context('trigger_ctl','audit_on')
    Is auditing enabled/disabled? Part of my custom adaptation/enhancement of Connor McDonalds Audit-Utility package
  • sys_context('userenv','sid')
    the session ID of the current database session

Side note: UPPER/lower case does not make a difference for contexts. Only for the values of the context.

Warning: when you start using your own custom contexts, be careful with the grant create any context privilege. It can be a security risk! Revoke it after it was used.
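For illustration, here is a minimal sketch of a custom context (loosely following the trigger_ctl example above; the package name is made up):

-- the context can only be set through the named package
create context trigger_ctl using audit_ctl_pkg;

-- somewhere inside audit_ctl_pkg:
dbms_session.set_context('trigger_ctl', 'audit_on', 'TRUE');

-- afterwards any code in the session can read it
select sys_context('trigger_ctl', 'audit_on') from dual;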

8. hint /*+ driving_site */ [P]

I feel like this is a classic for me. In one of my longest running projects we had a lot of databases connected by database links. One of the most important tuning activities was to understand how to do remote joins in a performant way. The driving_site hint was extremely important for that.

General rule: When doing remote queries, avoid mixing local and remote tables in one select statement. If only remote tables are in the statement always use the driving_site hint.
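A sketch of how the hint is used (table and database link names are made up):

-- push the join to the remote site where both tables live
select /*+ driving_site(o) */ o.order_id, c.name
from orders@remote_db o
join customers@remote_db c on c.customer_id = o.customer_id;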

Nowadays the need for this hint diminishes, mostly because databases are less connected by DB links and more by webservices (which does not really improve performance, but that is a different story).

9. join via using [C]

This only works reliably when the naming conventions of the data model fit it.


We have a table PERSON and a table ADDRESS. The primary key in PERSON is PERS_ID. The relationship column (FK) in ADDRESS is also called PERS_ID. It wouldn’t work as well with ADR_PERS_ID for example.

Then we can do a quick join like this:

select * 
from person 
join address using (pers_id) 
where address.type='MAIN';

There are some slight differences compared to the normal way using ON. Mostly, one can not differentiate anymore from which table the pers_id originates. For example you can not refer to person.pers_id in the where clause anymore.

Currently I use it exclusively for ad hoc queries, not in plsql code.

10. interval literals [C]

If I want to add some time to a date or a timestamp, I always use interval literals (unless it is a full day or a full month).

Let’s say we want to check tickets that were entered during the last hour. Using the date datatype we could easily calculate an hour by dividing a full day / 24. This is how many developers calculate time. And it is perfectly ok to do so.

select * from tickets 
where createddt >= sysdate - 1/24 ;

The term sysdate-1/24 represents one hour before “now”.

This is how I prefer to write it:

select * from tickets 
where createddt >= sysdate - interval '1' hour ;

I feel the code documents itself in a much better way.

Additionally, requirements like this change easily.
Let’s change both expressions to 15 minutes.

sysdate - 15/24/60

sysdate - interval '15' minute

Which one is better to read and was quicker to change?

Warning: Sayan Malakshinov in the comments mentions that we should not use interval literals (year-month intervals) to add full months. I agree with that. Calendar arithmetic is complex – there are specific functions like add_months for that. So do not use interval '1' month.
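A quick demonstration of why calendar arithmetic needs add_months:

select date '2021-01-31' + interval '1' month from dual;
-- ORA-01839: date not valid for month specified

select add_months(date '2021-01-31', 1) from dual;
-- works: returns the last day of February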

honorable mentions

Some more snippets or functions that didn’t quite make it into the list.

  • nvl2 [C]
  • the third parameter of to_char (nlsparams) [E]
  • query block hints [P]
  • hierarchical with clause [C,E]
  • lateral join [C]
  • alter sequence restart [C,E]
  • cross join [C]
  • lnnvl [C,E]
  • hex format “XXXX” [E]
  • sys.odcinumberlist [C]
  • analytic sum + correct window clause: rows between instead of range between [E]


I hope I mentioned something that you didn’t know about before and liked reading about.


Here are some features/patterns that I’m currently not using myself, but where I feel that they might become quite important in the future.

  • SQL Macros
  • boolean in SQL (not implemented yet, but Oracle seems to be working on it – I expect some news about this very soon)
  • median and other statistical functions like cume_dist, percentile_xxx, percent_rank
  • match_recognize
  • with clause udf functions (still think they are overrated, but I might be wrong)
  • approx_count_distinct
  • analytical views