plsql function: who_called – simplify your exception handler!

Does your error logging logic still look like this?

example 1 – old school handler

procedure doSomething is
begin
   ...
exception
  when others then
     pk_log.error(p_module    => $$PLSQL_UNIT, 
                  p_submodule => 'doSomething', 
                  p_message   => 'Something not done!'||sqlerrm);
     raise;
end;

Here is what I prefer!

example 2 – compact handler

procedure doSomething is
begin
   ...
exception
  when others then
     pk_log.error('Something not done!');
     raise;
end;

Such code is way more convenient! Of course, this will store the same information as the previous example 1.

To get there some parts have to be moved into the logging method (pk_log.error).

  1. sqlerrm
    is the error message
    When moving, it should also be improved (at least use dbms_utility.format_error_stack)
  2. $$PLSQL_UNIT
    is the name of the package (or of the standalone procedure or function)
  3. ‘doSomething’
    is the hardcoded name of the procedure
    It needs to be gathered dynamically at runtime inside the logging package.

The biggest issue is 3.) – to find the name of the module at runtime. The function who_called can solve that. It also gathers the package name on the way, so 2.) is also covered.

Side note: Is hardcoding the procedure name a problem?

Not really. However, if the name changes, one needs to change it in several lines. And in case one forgets to change the error handler, the compiler will not issue an error. The problem might then go unnoticed for some time and could be misleading when exceptions occur.

I have two versions of the who_called function. The first, standalone version is not recommended for real use – use the packaged version instead. However, the first version demonstrates the implemented logic in a clear way.

Both functions find the full name of the module that called this code block. The default behaviour is that modules from the same package are ignored. This allows easy usage inside packages meant for logging, auditing or testing. Everything that is instrumentation code can usually benefit from this little helper.

Usecase 1 – demo purposes – standalone function

This logic assumes that we use the function to find out who called the procedure that is currently running. So not the name of the procedure itself, but the name of the caller.

Installation code

-- Cleanup
-- drop function who_called;

-- Usecase 1: standalone function - simple demo version

-- check the call stack to find the calling module.
create or replace function who_called return varchar2
  is
    v_caller utl_call_stack.unit_qualified_name;
    v_name      varchar2(128);
begin
    -- print the call stack just for demo purposes
    for i in 1..utl_call_stack.dynamic_depth  loop
      sys.dbms_output.put_line(
         rpad('+',utl_call_stack.dynamic_depth-i,'+')||
         utl_call_stack.concatenate_subprogram( utl_call_stack.subprogram(i) )
         );
    end loop;

    
    -- step backwards through the call stack 
    --      1 = current module = who_called function
    --      2 = the module that calls who_called 
    --      3 = the caller!
    v_caller := utl_call_stack.subprogram(3);

    v_name := utl_call_stack.concatenate_subprogram(v_caller);

    return v_name;

end who_called;
/

example 3 usage

set serveroutput on

-- Testcase 1.1 anonymous block
declare
  procedure DoSomething as
  begin
    dbms_output.put_line('I want to doSomething!');
    dbms_output.put_line('Who called? '|| who_called);
  end;

  procedure DoMore as
  begin
    doSomething;
    dbms_output.put_line('I want to doMore!');
    dbms_output.put_line('Who called? '|| who_called);
  end;
begin
  doMore;
end;
/

output

I want to doSomething!
+++WHO_CALLED
++__anonymous_block.DOSOMETHING
+__anonymous_block.DOMORE
__anonymous_block
Who called? __anonymous_block.DOMORE
I want to doMore!
++WHO_CALLED
+__anonymous_block.DOMORE
__anonymous_block
Who called? __anonymous_block


PL/SQL procedure successfully completed.

Each time the function is called, it prints the full call stack to the screen. This is just for demonstration purposes.
The first time the function who_called is executed is from inside the submodule doSomething.
The call stack at this point looks like this

Position in Stack   Module
1                   who_called
2                   doSomething
3                   doMore
4                   __anonymous_block

Line 5 in the anonymous block is the line with the who_called function (call stack position 1). It is part of the doSomething procedure (position 2). On position 3 of the call stack we always find the caller – here the procedure doMore that executed doSomething.

The second time the function is called is in the doMore procedure. And the call stack looks like this:

Position in Stack   Module
1                   who_called
2                   doMore
3                   __anonymous_block

In this case the caller is the anonymous block.

The example clearly shows that we can fetch the name of any module in the call stack (including anonymous blocks). We just have to fetch the name from the proper level in the call stack.

Usecase 2 – error and trace logging – packaged version

This works a bit differently when used inside a larger package that instruments the code base. Typically we add error logging and debug (trace) logging calls. In that scenario, we want to find out the name of the module where the trace or error logging call is made.

The logic assumes that we have a package used only for this instrumentation purpose. Which also means that all calls inside this package can be ignored. So essentially we want to find out the full name of the procedure (or function) where the log error or log trace call is done. The full name can be something like package.module.submodule.subsubmodule.

Installation code

When testing this, make sure you do not already have/own a package pk_log. The installation script would overwrite it.

If you like it, copy the code for the who_called function into your own instrumentation package.

-- Cleanup
-- drop function who_called;

-- who_called packaged version 
-- check the call stack to find the calling module.
-- calls from this package are not considered
create or replace package pk_log as
/* this is a very simplified example for error logging 
   only demonstration purposes of the WHO_CALLED function 
   your error logging logic must be better!
*/
  procedure error(message in varchar2);
  procedure trace(message in varchar2); 
end pk_log;
/

create or replace package body pk_log as
    function who_called return varchar2
      is
        -- author: Sven Weller, 2021, syntegris information solutions GmbH
        v_pkg    varchar2(128) := $$plsql_unit; -- name of the package that holds the who_called function
        v_caller utl_call_stack.UNIT_QUALIFIED_NAME;
        v_name varchar2(128);
    begin
        -- step backwards through the call stack 
        --      1 = current module = who_called function
        --      2 = the module that calls who_called 
        --      3 = the caller!
        -- we could start later (level 3 or even 4), if we knew exactly how many other submodules are always in the call stack 
        -- This might be risky if function inlining is possible, but that can be prevented by a pragma. Example in procedure error.
        for i in 2..utl_call_stack.dynamic_depth  loop

          v_caller := utl_call_stack.subprogram(i);

          v_name := utl_call_stack.concatenate_subprogram(v_caller);

          -- at lexical depth 1 we find the name of the package
          if v_caller(1) != v_pkg then
            -- In some scenarios, we might want to go even one step upward the call stack. Typically not for error logging purposes.
            -- In such cases we could simply fetch one more name from the call stack
            -- v_caller := utl_call_stack.subprogram(i+1);
            
            -- build the full name, but only if we found the caller
            v_name := utl_call_stack.concatenate_subprogram(v_caller);
            -- no need to go further through the call stack
            exit;
          end if;
        end loop;

        return v_name;

    end who_called;

    procedure writeMessage(message in varchar2, message_type in varchar2) as
    begin
      case message_type 
      when 'E' then
        sys.dbms_output.put_line('ERROR at ' || who_called);
      when 'T' then
        sys.dbms_output.put_line('Info from ' || who_called);
      else  
        sys.dbms_output.put_line('ERROR! Unknown Message Type ' || message_type || ' in '||who_called);
      end case;        
      sys.dbms_output.put_line(message);

      if message_type = 'E' then 
        -- Add full error stack
        sys.dbms_output.put_line(dbms_utility.format_error_stack);
        -- Add line where the error happened
        -- Only the last line from the backtrace is relevant, all other lines are already in the error stack
        -- compare output to sys.dbms_output.put_line(dbms_utility.format_error_backtrace);
        sys.dbms_output.put_line(' in '||nvl(UTL_CALL_STACK.backtrace_unit(utl_call_stack.backtrace_depth),'anonymous block')
                               ||' at line '||sys.utl_call_stack.backtrace_line(utl_call_stack.backtrace_depth)
                               );
      end if;
    end writeMessage; 

    procedure error(message in varchar2) as
      pragma autonomous_transaction;
    begin
      pragma inline(writeMessage,'NO'); -- we don't want to inline in case of optimization_level = 3.
      writeMessage(message,'E' );
    end error;
    procedure trace(message in varchar2) as
      pragma autonomous_transaction;
    begin
      writeMessage(message,'T' );
    end trace;   
end pk_log;
/

example 4 call package, log trace and log error

set serveroutput on

-- Testcase 2.1 anonymous block
declare
  procedure DoSomething as
  begin
    pk_log.trace('I want to doSomething!');
  end;

  procedure DoMore as
  begin
    doSomething;
    raise no_data_found;
  end;
begin
  doMore;
exception 
  when others then
    pk_log.error('I wanted to do more, but this happened!');
    raise;
end;
/

output

This is the dbms_output (the simplified trace and error logs).

Info from __anonymous_block.DOSOMETHING
I want to doSomething!
ERROR at __anonymous_block
I wanted to do more, but this happened!
ORA-01403: no data found
ORA-06512: at line 10

 in anonymous block at line 13

The output is followed by the exception from the final RAISE; of the block (as shown in SQL Developer).

Error starting at line : 4 in command -
declare
  procedure DoSomething as
  begin
    pk_log.trace('I want to doSomething!');
  end;

  procedure DoMore as
  begin
    doSomething;
    raise no_data_found;
  end;
begin
  doMore;
exception 
  when others then
    pk_log.error('I wanted to do more, but this happened!');
    raise;
end;
Error report -
ORA-01403: no data found
ORA-06512: at line 17
ORA-06512: at line 10
ORA-06512: at line 13
01403. 00000 -  "no data found"
*Cause:    No data was found from the objects.
*Action:   There was no data from the objects which may be due to end of fetch.

In SQL Plus we get one more line, because of the additional raise in line 17.

There is something special in the code: the last line from the error backtrace is captured and returned – "in anonymous block at line 13".

This last line is missing in the error stack, which means it is usually not returned when you only store dbms_utility.format_error_stack.

To add this last line, utl_call_stack.backtrace_line(utl_call_stack.backtrace_depth) is executed. All other lines from the error backtrace are already in the error stack (depending on DB version) – so those can be ignored.

Conclusion

For my purposes it works fine and the resulting code is very clean.

I currently use it in an audit package based upon Connor McDonald's Audit Generator, which was massively reworked for some custom needs. Maybe I will publish this one day. But that is another story.

who_called advantages

  • Way less to type – fewer mistakes
  • much easier to see what the error handler does
  • concentrates the coding effort on the important bits – mostly the additional information that should be added to the error

who_called disadvantages

  • A tiny bit slower than hard coding the package and submodule name
    => For exception handling this is not relevant.
    => In huge debug or trace scenarios this time could add up if extreme performance is needed. Conditional compilation can be applied to remove instrumentation calls if you compile for extreme PL/SQL performance.
  • since part of the logic is hidden, other developers might be tempted to add the module name to the message anyway
  • the module name is fetched at run time, not at compile time
    => this can be an issue, especially if optimization level 3 is used. Function inlining (check the compiler warnings) is a method the PL/SQL optimizer can choose to speed up logic. However, it also means that the function will not exist anymore at runtime, and utl_call_stack will not be able to report the name of the module correctly. Note that there is a compiler warning when inlining happens (which usually is a good thing).
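The conditional compilation mentioned above could look roughly like this. This is only a sketch – the inquiry directive name $$trace_enabled, the procedure do_work and the pk_log.trace call are made-up examples:

```sql
-- Set the flag per session before (re)compiling:
-- alter session set plsql_ccflags = 'trace_enabled:true';
create or replace procedure do_work as
begin
  $if $$trace_enabled $then
  pk_log.trace('do_work started');  -- compiled away when the flag is false/unset
  $end
  null; -- real work here
end do_work;
/
```

When compiled without the flag, the trace call does not exist in the compiled unit at all, so there is zero runtime cost.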

Feel free to adapt this code snippet to your special needs. I’m convinced this will help you to write simpler and more efficient error handlers.

My favorite top 10 new features in Oracle database 19c

Justification

This database version has been out for 2 years now. Why talk about new features now?

Many of my customers recently upgraded to the 19c database, usually from 12.1 or 12.2, at least one from an even older version. So I compiled a list of things that I enjoy using and now have available in a 19c environment.

Be aware, 19c is not a major new database release – it is essentially a rebranded 12.2.0.3. However it is the final (= long term support) release of the Oracle 12.2 database family. As such it already received a stack of backported features from 21c. And some of them look very useful to me.

This is a highly subjective compilation. My time spent is 75% development, 50% DBA stuff and 25% management (yes those areas overlap) – which might explain some of the preferences.

10 Gradual Database Password Rollover for Applications (19.12)

A new profile parameter PASSWORD_ROLLOVER_TIME allows changing a database account password without downtime for the application that uses this password.

See also: https://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/gradual-database-password-rollover-for-applications-222774864.html

This parameter was originally developed for 21c and was backported in version 19.12. It can be set for a profile, but the alter user syntax was also enhanced.

Essentially it means that for a certain time a user can log in with either the old or the new password. The maximum allowed time is 7 days.

Administrative accounts can not use this, probably for security reasons.
ORA-28227: Gradual password rollover is not supported for administrative users.
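A minimal sketch of how the rollover could be configured (profile name, user name and password are made-up examples):

```sql
-- allow the old and the new password in parallel for 1 day
create profile app_profile limit password_rollover_time 1;
alter user app_user profile app_profile;

-- change the password; the old one keeps working during the rollover period
alter user app_user identified by "NewPassword#2021";

-- end the rollover period early, if needed
alter user app_user expire password rollover period;
```

During the rollover window the application servers can be switched to the new password one by one, without any login failures.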

Julian Dontcheff explains the parameter in more detail:

For security consideration check this post by Rodrigo Jorge: https://www.dbarj.com.br/en/2020/12/21c-gradual-database-password-rollover-brings-new-backdoor-opportunities/

9 Hybrid Partitioned Tables

Partitioned external tables were introduced in 12.2. Such tables use the external table driver (ORACLE_LOADER or ORACLE_DATAPUMP) to get the data from a file or even a cloud source (see also DBMS_CLOUD). And each partition can have a different file or even a different oracle directory as the source.

Now we can have tables that combine external partitions and normal table partitions. They are called hybrid partitioned tables and are a special case of external partitioned tables.

The base idea is that actively used partitions would stay in the database, but rarely used partitions can be externalised into cheaper storage systems and out of the DB.

The interesting part is that all externally partitioned tables, when used in queries, can profit from certain partitioning operations, like partition pruning and partition-wise joins.

Of course the partitioning license is needed for this (or use 21c XE)!
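A minimal sketch of what such a table could look like. Directory, file, table and column names are made up; check the documentation for the full EXTERNAL PARTITION ATTRIBUTES syntax:

```sql
create table orders_hybrid
( order_id   number
, order_date date
, amount     number
)
external partition attributes (
  type oracle_loader
  default directory archive_dir
  access parameters ( fields terminated by ',' )
)
partition by range (order_date)
( partition p_2019 values less than (date '2020-01-01')
    external location ('orders_2019.csv')                 -- lives outside the database
, partition p_2021 values less than (date '2022-01-01')   -- normal internal partition
);
```

Queries that filter on order_date can then prune away the external partition entirely and never touch the file.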


8 MAX_IDLE_BLOCKER_TIME

In addition to MAX_IDLE_TIME there is a new parameter that can restrict the duration of a database session: MAX_IDLE_BLOCKER_TIME. Both are initialization parameters, but also resource plan directives.

Setting such a parameter to 0 means the session is unrestricted. Other values (for the initialization parameters) are in minutes.

MAX_IDLE_BLOCKER_TIME only limits sessions that consume resources. This is the much better option, because connection pools from application servers are usually idle when they are not in use. Those sessions are not touched if we set MAX_IDLE_TIME to 0 and MAX_IDLE_BLOCKER_TIME to 30, for example.
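For example (a sketch; both commands need the appropriate privileges):

```sql
-- do not touch idle sessions in general (0 = unlimited) ...
alter system set max_idle_time = 0;

-- ... but terminate sessions that sit idle for 30 minutes while blocking others
alter system set max_idle_blocker_time = 30;
```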

https://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/details-max_idle_blocker_time-parameter-282450835.html

7 Bitmap Based Count Distinct SQL Function

Essentially this allows creating materialized views (MV) using COUNT(DISTINCT …) over multiple dimensions. Large aggregations in DWH environments are where this technique shines.

Use case

Imagine a statement as this

select count(distinct product)  unique_#_of_products,
       count(distinct customer) unique_#_of_customers
  from orders
 where order_month = 202108;

If this statement is slow, we can improve it using a materialized view. However such a view is very difficult to write if we allow different levels of hierarchies and filters. A time dimension, for example, could have day, month and year levels.

The problem is that we can not aggregate the distinct count from a lower level to a higher level. The distinct customer count for a single month could be 3. If each month in this year has the same distinct customer count of 3, we still don’t know whether it was the same customers each month or totally different customers. All we can deduce is that the distinct customer count for this year is at least 3 and at most 36 (= 3 × 12).

With 19c we got several new BITMAP_* functions that, combined, help us to create a materialized view for such a scenario. It is still a complex task.

Connor McDonald explains how the logic works: https://connor-mcdonald.com/2020/08/17/faster-distinct-operations-in-19c/

Here are direct links to the 5 new functions that are needed to implement this. Three scalar functions and two aggregate functions (those that end with _AGG)

Also see Dani Schniders take on the subject: https://danischnider.wordpress.com/2019/04/20/bitmap-based-countdistinct-functions-in-oracle-19c/
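To give an impression of the building blocks, here is a rough sketch. Table and column names are made up; see Connor McDonald's post above for a complete walkthrough:

```sql
-- store one bitmap of distinct customer_ids per month and bucket
create materialized view mv_orders_bm as
select order_month,
       bitmap_bucket_number(customer_id)                      bm_bucket,
       bitmap_construct_agg(bitmap_bit_position(customer_id)) bm_details
from orders
group by order_month, bitmap_bucket_number(customer_id);

-- roll up to year level: merge the bitmaps with OR, then count the set bits
select sum(bitmap_count(bm)) unique_customers
from ( select bitmap_or_agg(bm_details) bm
       from mv_orders_bm
       where order_month between 202101 and 202112
       group by bm_bucket );
```

Because the bitmaps can be OR-ed together, the distinct count aggregates correctly from month to year – exactly what a plain COUNT(DISTINCT …) in an MV can not do.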

6 SQL Macros (19.7)

SQL Macros were announced for 20c.

There are two kinds of SQL macros – scalar and table macros. In 19c we only have TABLE macros (so far).

Scalar macros can be used in most clauses of a SQL statement (typically a select): the select list, the where clause, the order by clause. Table macros can only be used in the from clause.

I was quite sceptical about SQL macros, but I’m starting to see how useful they can be. Personally I think the scalar macros are more usable, but we don’t have them in 19c yet. The table macros are the fancier things, of course.

Macros are implemented using a PL/SQL function. This function is resolved at compile time (not at runtime!) and provides additional code that replaces the function call. We can see it as some kind of advanced text substitution.
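A tiny sketch of a table macro (the function name and the query text are made-up examples):

```sql
-- returns the first p_rows rows of any table
create or replace function top_n (p_tab dbms_tf.table_t, p_rows number)
  return varchar2 sql_macro
as
begin
  -- this text replaces the function call at parse time
  return 'select * from p_tab fetch first p_rows rows only';
end top_n;
/

select * from top_n(employees, 5);
```

The table argument is substituted into the returned query text, so the macro behaves like a parameterised view.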

Links

SQL Macros – Creating parameterised views

5 JSON_MERGEPATCH

JSON_MERGEPATCH is a new SQL function that allows partial updates to JSON clobs.

https://docs.oracle.com/en/database/oracle/oracle-database/19/adjsn/updating-json-document-json-merge-patch.html#GUID-31F88F28-3D92-489B-9CCD-BD1931B91F1F

It allows easy patching of JSON objects. However, we can not as easily add entries to a JSON array, for example. To do so, the whole array needs to be replaced.

The clob is copied during the mergepatch process. If performance or memory is relevant, keep that in mind.

Examples from the docs

UPDATE j_purchaseorder 
SET po_document =
  json_mergepatch(po_document, '{"Special Instructions":null}');

SELECT json_mergepatch(treat('{"Order":"0815","Special Instructions":"Handle with Care"}' as json), '{"Special Instructions":null}'
                       RETURNING CLOB PRETTY)
  FROM dual;
{
  "Order" : "0815"
}

4 Easy Connect Plus

Easy Connect Plus allows everything that previously had to be configured in a tnsnames.ora or sqlnet.ora. So no tnsnames.ora is needed anymore, even when setting some special options for the connection.

Syntax

Easy connect:


database_host[:port][/service_name]


Easy Connect Plus:

[[protocol:]//]host1{,host2}[:port1]{,host2:port2}[/[service_name][:server_type][/instance_name]][?parameter_name=value{&parameter_name=value}]

Important to note here is a way to specify the protocol, which can be TCPS or TCP for example. Multiple servers are supported, which is very useful in case of a data guard environment. And there is a way to add parameters.

Some common parameters could be:

  • failover
  • wallet_location
  • sdu
  • retry_count

What those parameters do can be seen in the white paper about Easy Connect Plus.

Example

tcps://salesserver1:1521/sales.us.example.com?wallet_location="/tmp/oracle"

Further readings

White Paper from oracle

Easy connect plus in scripts against environments protected by data guard (German article) by Marco Mischke:

https://www.robotron.de/unternehmen/aktuelles/blog/easy-connect-plus-in-version-19c

3 SQL/JSON Simplifications

Dot notation for JSON inside SQL.

Example

The following example is taken directly from LiveSQL (link below).

drop table t;
create table t (id number, col clob check (col IS JSON));
insert into t values(1, '{a:[1, "1", "cat", true, null, false, {}, ["dog","mouse"]], b:"json is fun"}');

-- The so-called JSON simplified syntax or 'dot notation' allows to use dots and square brackets to navigate into a SQL expression if it is known to be JSON.

select t.col.a from t t;

A
[1,"1","cat",true,null,false,{},["dog","mouse"]]

select t.col.b from t t;

B
json is fun

Also see treat( ... as json)

More on LiveSQL:

JSON Path expressions and simplified syntax

2 3 PDBs for Standard Edition 2 (19.5)

Multitenant for Standard Edition!

Starting from 19c all databases including Standard Edition 2 can use up to 3 PDBs without any additional license.

That means for example we can now clone a PDB from Database TEST to Database INTEGRATION without unplugging the existing PDBs (as long as the total number excluding the PDB$SEED is equal or less than three).

The parameter MAX_PDBS influences how many PDBs you are allowed to create. It should be set to 3 if you are on Standard Edition and have the recent patches.
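A quick check could look like this (a sketch; needs the appropriate privileges):

```sql
-- show the current limit (SQL*Plus / SQLcl)
show parameter max_pdbs

-- on SE2 the value must not exceed 3
alter system set max_pdbs = 3 scope=both;
```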

See also: https://mikedietrichde.com/2019/09/17/3-pdbs-included-with-oracle-19c-and-desupport-of-non-cdbs-with-20c/

Also I remember that the limit was later further increased to 5. But I didn’t find any documentation for that, so I can only recommend using 3 PDBs if you are on SE2.

1 LISTAGG distinct

LISTAGG is a very useful aggregate function (with an analytic version as well), mostly used in reporting environments. For example we can create a list of all employees that belong to a department.

All following code examples have been run on LiveSQL.

listagg(e.last_name,',') within group (order by e.last_name)

Since each employee is always only in one department, that is a fairly easy list to do.

How about if we want to see the list of jobs that are in each department?

listagg(j.job_title,',') within group (order by j.job_title)

Unfortunately this repeats all the jobs.

Listagg distinct avoids those duplications.

listagg(distinct j.job_title,',') within group (order by j.job_title)
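Put together, a complete statement against the HR demo schema (as available on LiveSQL) could look like this:

```sql
select d.department_name,
       listagg(distinct j.job_title, ', ')
         within group (order by j.job_title) as job_titles
from   departments d
join   employees   e on e.department_id = d.department_id
join   jobs        j on j.job_id        = e.job_id
group  by d.department_name;
```

Each department then shows every job title exactly once, regardless of how many employees hold it.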

Thanks to @oraesque who suggested this feature in the Database Ideas Space.

Try it on LiveSQL: https://livesql.oracle.com/apex/livesql/file/content_HT1O85E4BHSBWN93G1B3M8SI2.html

Honorable mentions:

Some features did not make it into the list. I still like to make a quick mental note about them, but without further explanation.

  • Polymorphic table functions (18c)
  • dynamic sequence cache (19.10 – seems a bit shaky at the moment)
  • blockchain tables
  • dbms_hprof trace output in the database (18c)