10 Oracle SQL functions and function parameters you might not know

Functions mentioned here are in alphabetical order

BITAND function but no BITOR function

There is a BITAND function in SQL but no BITOR function.

BITAND lets us compare values bit by bit. It returns a value in which each bit is the result of ANDing the corresponding bits of the two operands. This is useful to compare a set of flags (yes/no values) that are stored inside the same integer field.
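
For example (a sketch, the table feature_flags and its column flags are made-up names), to test whether the flag with bit value 4 is set:

select id
from feature_flags
where bitand(flags, 4) = 4;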

BIN_TO_NUM is a helper function that “translates” bits into (decimal) numbers.

BITAND example

SELECT BITAND(
    BIN_TO_NUM(1,0,0),  -- 4 in decimal
    BIN_TO_NUM(0,0,1)   -- 1 in decimal
    ) check_flag
  from dual;

0

So 4 and 1 have no bits in common.

SELECT BITAND(
    BIN_TO_NUM(1,0,1),  -- 5 in decimal
    BIN_TO_NUM(1,0,0)   -- 4 in decimal
    ) check_flag
  from dual;

4
4 = 0×2⁰ + 0×2¹ + 1×2² = BIN_TO_NUM(1,0,0)

So 5 and 4 have bit 3 in common. (Bit counting starts at 1, but the powers of 2 start with 0.)

BITOR would combine the bits from each operand.

Fortunately BITOR can be calculated using BITAND. We need to sum the operands and then subtract the BITAND result from that.

Here is an example

select 1 + 4  
       - BITAND(1,4) bitor_compare
  from dual;

5

select 5 + 4  
       - BITAND(5,4) bitor_compare
from dual;

5

CAST with default on conversion error

The CAST function converts values from one datatype into another.

Parameter enhancements have been introduced in 12.2 along with similar enhancements to many datatype conversion functions.

https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/CAST.html#GUID-5A70235E-1209-4281-8521-B94497AAEF75

Let’s experiment a little with it.

select cast('01-01-20' as date default null on conversion error
           ,'dd-mm-rr') 
from dual;
01-Jan-20

select cast('INVALID' as date default null on conversion error
           ,'dd-mm-rr') 
from dual;
(null)

Works like a charm. The string was converted into a date (and converted back into a string to print it on the screen using the local NLS settings). An invalid string could not be converted; instead the default NULL value was returned.

Sometimes we want to return today in case there is a conversion error…

select cast('01-01-20' as date default sysdate on conversion error
           ,'dd-mm-rr') 
from dual;
01-Jan-20

select cast('INVALID' as date default sysdate on conversion error
           ,'dd-mm-rr') 
from dual;
17-Mar-20

seems to work!

..not so fast

alter session set nls_date_format = 'day-Mon-rr';
select cast('01-01-20' as date default sysdate on conversion error
           ,'dd-mm-rr') 
from dual;

ORA-01858: a non-numeric character was found where a numeric was expected

Although we used CAST with a default on conversion error, we still got a conversion error!
The problem here is the default parameter. It needs to be a string. Since it is not, an implicit datatype conversion happens using the session's nls_date_format. The resulting string does not match the function parameter fmt, which then leads to a conversion error.

Unfortunately, the default parameter cannot be an expression.

select cast('01-01-20' as date 
            default to_char(sysdate,'dd-mm-rr') 
            on conversion error
           ,'dd-mm-rr')
from dual;

ORA-43907: This argument must be a literal or bind variable.

Also nls_date_format is not allowed as nlsparam.

select cast('01-01-20' as date default sysdate on conversion error
           ,'dd-mm-rr'
           ,'nls_date_format=dd-mm-rr')
from dual;

ORA-12702: invalid NLS parameter string used in SQL function
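
What does work (a sketch, assuming a fixed fallback date is acceptable) is to provide the default as a string literal that matches the fmt parameter, just like in the number example further below.

select cast('INVALID' as date default '01-01-20' on conversion error
           ,'dd-mm-rr')
from dual;

The default string should then be interpreted using the same fmt, so the session's nls_date_format no longer matters.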

The same could happen for other conversions, like casting a string into a number. This example is from the docs.

SELECT CAST(200
       AS NUMBER
       DEFAULT 0 ON CONVERSION ERROR)
  FROM DUAL;

It is a very bad example, since 200 is already a number. So let's assume the input is '200.00' while the default is still 0. And we add the other parameters to specify the format of this string.

select cast('200.00' as number default 0 on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;
200

select cast('200x00' as number default 0 on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;
0

Seems to work. But can we again trick the session settings into a wrong conversion? Notice that the alter session settings and the nlsparam settings differ.

alter session set nls_numeric_characters = ',.';
Session altered.

select cast('200x00' as number default 0 on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;
0

Good. Fortunately the number conversion is slightly more robust than the date conversion. We cannot set a session default number format (it is always "TM9" I believe) in the same way as we can set the default date format.

However we get into trouble once the default value uses non-integer numbers.

select cast('200x00' as number default 0.5 on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;

5

The default was 0.5 but we got 5!

The correct way to do it, of course, is to use a string whose format matches the fmt and nlsparam parameters.

select cast('200x00' as number default '0.5' on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;

0,5

Now we got the correct default value returned!

CHECKSUM

Checksum can be used to quickly calculate a checksum over a set of rows. There is an analytic version of it too.

https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/checksum.html#GUID-3F55C5DF-F23A-4B2F-BC6F-E03B34B78BA8

If you want to quickly see if data was changed between two databases, you can run a checksum over each column and compare the result on both environments.

select checksum(phone_number)
from hr.employees;
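
A sketch of how several columns could be compared at once (one aggregate per column, here against the HR sample schema). Run the same statement on both environments and compare the resulting numbers.

select checksum(first_name)   chk_first_name,
       checksum(last_name)    chk_last_name,
       checksum(phone_number) chk_phone_number
from hr.employees;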

Never heard of this function before? Probably because it just got introduced in 20c.

Dump

The dump function is often used to inspect binary values. Useful for example when solving character set issues.

Not many know that it has some additional parameters.

https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/DUMP.html#GUID-A05793C9-B35D-4BA7-B68C-E3693BCF47A5

Especially interesting is the return_fmt parameter. It allows us to return the dump in octal (8), hex (16), decimal (10 = default) or a special format (17). The return format 17 shows single-byte characters in readable ASCII, while multi-byte values that cannot be converted are shown in hex.

For example the "€" symbol in UTF-8 uses 3 bytes: E2 82 AC

Source: https://unicode-table.com/en/20AC/

Dump-17 will return this

select dump('abc€',17) from dual;
Typ=96 Len=6: a,b,c,e2,82,ac

The string is made of 6 bytes. The first 3 are single-byte characters and are shown as ASCII. The 4th character is the euro sign, which uses 3 bytes in UTF-8. Format 17 is interesting because it helps us focus on and find the problematic characters.

Another interesting option is to add 1000 to the format. This will add the character set to the output.

select dump('abc€',1010) from dual;
Typ=96 Len=6 CharacterSet=AL32UTF8: 97,98,99,226,130,172

Of course that works with NCHAR too.

select dump(n'abc€',1010) from dual;
Typ=96 Len=8 CharacterSet=AL16UTF16: 0,97,0,98,0,99,32,172
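
For completeness, return_fmt 16 shows all bytes in hex (a quick sketch; the byte values assume an AL32UTF8 database character set).

select dump('abc€',16) from dual;
Typ=96 Len=6: 61,62,63,e2,82,ac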

LNNVL

This is a very special function. Apart from the unspeakable name, it is used by the optimizer for Oracle-internal SQL transformations. Here is a document that describes such transformations. LNNVL returns a Boolean and can be used directly inside a WHERE clause. This is not yet possible for user-defined functions that return a Boolean, but other such built-in functions do exist (e.g. regexp_like).

I sometimes use it to find rows where two values differ. If the comparison value is NULL I still want to consider it to be different.

Instead of using the inequality operator != we have to use its opposite, the equality operator =. This is just how LNNVL works.

select dummy from dual
where lnnvl('A'='B');
X

select dummy from dual
where lnnvl('A'='A');
No data found.

select dummy from dual
where lnnvl('A'=null);
X

This assumes that 'A' is always a not-null value.

To get the same result, a comparison condition typically also needs to consider the NULL case.

select dummy from dual
where (('A'!='B' and 'B' is not null) OR 'B' is null);

Since more conditions like this might follow, the statement very soon becomes cluttered with OR checks, parentheses and IS NULL comparisons.
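
Here is a sketch of the typical use case (target_table and staging_table are made-up names; note that a row where both sides are NULL would also be reported as different): find all rows where at least one attribute differs.

select t.id
from target_table t
join staging_table s on s.id = t.id
where lnnvl(t.price  = s.price)
   or lnnvl(t.status = s.status);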

Unfortunately not many developers are familiar with this function, so we should always add a comment that explains its purpose and behaviour.

NANVL

NANVL is similar to NVL.

NVL returns a value, when NULL is encountered.
NANVL returns a value when NaN (not a number) is encountered.

NaN is part of the binary_float and binary_double datatypes.

select to_binary_float(1/17) from dual;
0,05882353

select to_binary_float('NaN') from dual;
NaN

Let's apply NANVL to it.

select nanvl(to_binary_float('NaN'),'0') from dual;
0.0

Or we could set it to negative infinity…

select nanvl(to_binary_float('NaN'),'-INF') from dual;
-Infinity

Somewhat interesting, but rarely useful I think.

NEXT_DAY function parameter

The NEXT_DAY function needs a second parameter that defines what target weekday it points to.


The argument char must be a day of the week in the date language of your session, either the full name or the abbreviation. The minimum number of letters required is the number of letters in the abbreviated version. Any characters immediately following the valid abbreviation are ignored. The return value has the same hours, minutes, and seconds component as the argument date.

https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/NEXT_DAY.html#GUID-01B2CC7A-1A64-4A74-918E-26158C9096F6

This second parameter char works differently than what you might expect. Here is an example.

alter session set nls_date_format = 'Dy, DD-MON-YYYY';
Session altered.

alter session set nls_language='AMERICAN';
select next_day(sysdate,'FRIDAY') from dual;
Fri, 20-MAR-2020

alter session set nls_language='GERMAN';
Session altered.

select next_day(sysdate,'FREITAG') from dual;
Fr, 20-MRZ-2020

select next_day(sysdate,'FRIDAY') from dual;
Fr, 20-MRZ-2020

Isn't that strange? Although my session language is GERMAN, the American spelling still works!

Let’s test this with a different language. In Spanish “friday” would be “viernes”.

select next_day(sysdate,'VIERNES') from dual;
ORA-01846: Kein gültiger Wochentag
"not a valid day of the week"

alter session set nls_language='SPANISH';
Session altered.

select next_day(sysdate,'VIERNES') from dual;
Vie, 20-MAR-2020

select next_day(sysdate,'FRIDAY') from dual;
ORA-01846: día de la semana no válido
"not a valid day of the week"

OK. So Spanish works as long as the NLS settings are correct, but it doesn't accept the American spelling.

Is German special? In a way, yes. In German several weekdays have an abbreviation that matches the American spelling, and the abbreviation is only two letters long. This is especially true for the days that are most likely to be used in the NEXT_DAY function (Monday, Friday, Saturday, but not Sunday!).

For "FREITAG" the abbreviation is "FR". The NEXT_DAY function accepts anything as long as the char string starts with the abbreviation of a weekday in the correct language. So FREITAG, FRIDAY and even FRITZ or FROG all return the next Friday.

alter session set nls_language='GERMAN';
Session altered.

select next_day(sysdate,'FR') from dual;
Fr, 20-MRZ-2020

select next_day(sysdate,'FRITZ') from dual;
Fr, 20-MRZ-2020

select next_day(sysdate,'FROG') from dual;
Fr, 20-MRZ-2020

Is this how you expected the char parameter to work?

Recommendation? Don’t use NEXT_DAY at all. I prefer TRUNC over NEXT_DAY any day.
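
Here is a rough sketch of such an alternative: the next Friday strictly after today can be derived from trunc(sysdate,'IW'), which always returns the Monday of the ISO week, independent of any NLS settings. Note that it returns midnight, while NEXT_DAY keeps the time component.

select case
         when trunc(sysdate) - trunc(sysdate,'IW') < 4  -- Monday..Thursday
         then trunc(sysdate,'IW') + 4                   -- Friday of this week
         else trunc(sysdate,'IW') + 11                  -- Friday of next week
       end as next_friday
from dual;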

POWER (sql) = ** (plsql)

In SQL to calculate an exponentiation we have to use the POWER function.

Example 2³

select power(2,3) from dual;
8

In plsql we can use the ** operator instead.

set serveroutput on
begin
  dbms_output.put_line(2**3); 
end;
/
8

Unfortunately the ** operator will not work in SQL.

ROUND_TIES_TO_EVEN function

Everybody knows the ROUND function. ROUND always rounds up when a value is exactly half.

1.5 ≈ 2
2.5 ≈ 3
...
7.5 ≈ 8
8.5 ≈ 9
9.5 ≈ 10

This is what statisticians call a biased function. The computed values do not reflect the distribution of the original values – especially if there are many 0.5 decisions.

An unbiased version of rounding is to round ties to the nearest even number, so consecutive .5 values alternate between rounding up and rounding down.

1.5 ≈ 2
2.5 ≈ 2
...
7.5 ≈ 8
8.5 ≈ 8
9.5 ≈ 10

In English this is commonly called banker's rounding. (In German the unbiased variant is known as "mathematisches Runden", as opposed to the biased "kaufmännisches Runden".)

The round_ties_to_even function has been introduced in 18c.

https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/ROUND_TIES_TO_EVEN-number.html#GUID-49919B6B-4337-4812-A248-B5D98F102DBD
select level+0.5 as val, 
       round(level+0.5), 
       round_ties_to_even(level+0.5)
from dual connect by level <= 10;
VAL    ROUND(VAL)   ROUND_TIES_TO_EVEN(VAL)
1.5    2            2
2.5    3            2
3.5    4            4
4.5    5            4
5.5    6            6
6.5    7            6
7.5    8            8
8.5    9            8
9.5    10           10
10.5   11           10

This new rounding function implements banker's rounding. Especially when adding up many rounded values, the sum of the rounded values differs less from the correct (unrounded) sum than it does with normal rounding.
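
A quick sketch to illustrate that claim with the ten values from above:

select sum(level+0.5)                     as exact_sum,
       sum(round(level+0.5))              as sum_round,
       sum(round_ties_to_even(level+0.5)) as sum_rte
from dual connect by level <= 10;

The exact sum is 60. Normal rounding returns 65, banker's rounding returns 60.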

STANDARD_HASH function

If we quickly want to create some hash values we can use the standard_hash function.

select standard_hash(dummy) from dual;
C032ADC1FF629C9B66F22749AD667E6BEADF144B

The default uses an SHA1 hash algorithm, but we can specify SHA1, SHA256, SHA384, SHA512 and MD5.

select standard_hash(dummy,'SHA512') from dual;
3173F0564AB9462B0978A765C1283F96F05AC9E9F8361EE1006DC905C153D85BF0E4C45622E5E990ABCF48FB5192AD34722E8D6A723278B39FEF9E4F9FC62378

MD5 is not recommended at all (https://www.md5online.org/blog/why-md5-is-not-safe/). SHA1 is also not considered secure anymore, but it is still fine for creating hash values for non-security purposes.
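
One such non-security purpose is change detection. Here is a sketch (the table employees_backup is a made-up name; NULL values and the '|' separator appearing inside the data would need extra care): find rows whose content differs between the two tables.

select e.employee_id
from hr.employees e
join employees_backup b on b.employee_id = e.employee_id
where standard_hash(e.first_name||'|'||e.last_name||'|'||e.email)
   != standard_hash(b.first_name||'|'||b.last_name||'|'||b.email);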

Conclusion

That’s it. 10 functions or parameters of functions that you probably didn’t know before. At least not all of them.

APEX 19.2 upgrade quick tip: run check_lovs_for_errors.sql

The APEX 19.2 readme includes info about a small script check_lovs_for_errors.sql that one should run after the upgrade. It is very easy to overlook this in the readme files.

TL;DR;

run

sqlplus user/pwd@dbconnection
spool apex/utilities/check_lovs_for_errors_results.log
set serveroutput on
@apex/utilities/check_lovs_for_errors.sql APP_ID APP_USER
spool off
exit;

Read the result and correct the suspect LOVs in your application.

Behavioural change

Essentially there was a small change in the behaviour of existing LOVs. If the select that supports the LOV uses the identical name for the display and the return value, then this used to work, but will not anymore in 19.2.

And "not work" means we will get an error when running that page.

Here is the relevant section of the release note.

Not only selects that use the same name for display and return value are affected, but also selects that run into an error. LOVs with the first kind of select did work before; LOVs with an erroring select did not work before either. Now such an error might prevent the page from opening. Usually you should not have LOVs of the second type.

Fortunately the APEX team provided the little script check_lovs_for_errors.sql that helps us to identify such LOVs.

How to run the script

The script is in the APEX installation folder (where you unzipped the APEX file) in the path apex/utilities/. In case you don’t have access to the script I show it at the end of this blog post. It might help you to identify such LOVs in your own application.

In my environment (18.7 database) the script would not run properly as SYS or as DBA. The reason for this is described in section “how to not run the script”. Instead one needs to connect with the schema of your workspace account. I didn’t test it, but I think you might be able to run it in the SQL workshop.

The script uses two input parameters.

1 = APP_ID = application id
2 = APP_USER = name of an apex user who has access to the application

You won’t need the password for the APP_USER.

Only LOVs from this application will be checked. And the name of the user is used to create an apex_session context.

You can add those parameters while calling the script or later in interactive mode, when the script is running.

Before running the script one must enable serveroutput to see the results.

I suggest spooling the output into a file.

[oracle@ODA01 utilities]$ sqlplus /nolog
SQL*Plus: Release 18.0.0.0.0 - Production on Fri Feb 7 09:28:49 2020
Version 18.7.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

SQL> connect workspaceschema/pwd@dbconnect
Connected.
SQL> spool check_lovs_for_errors_result.log
SQL> set serveroutput on
SQL> @check_lovs_for_errors.sql
Enter value for 1: 200
Enter value for 2: weller
--------------------
Error #1
Application: 200
Type:        Inline
Page:        3110
Name:        P3110_SELECT_LIST_ITEM
...
...
--------------------
Action Required! 5 LOV definitions raised an error, please review to see if they need to be fixed to avoid facing the same error in your app at runtime.

PL/SQL procedure successfully completed.

spool off

The script prints the errors it finds to the screen via dbms_output. That's why spooling the output to a file helps. You can always rerun the script to see the same or the corrected results.

Don’t forget to switch spooling off after the script run.

How to not run the script

If you connect with an account (db schema) that doesn't have the right to read your APEX workspace data, then this error message appears.

ORA-20987: APEX – Security Group ID (your workspace identity) is invalid. –
Contact your application administrator.

If you connect as sys or with a dba account another error will probably happen.

SQL> connect / as sysdba
Connected.
SQL> alter session set container=pdb01;
Session altered.

SQL> @check_lovs_for_errors.sql
Enter value for 1: 200
Enter value for 2: weller
declare
*
ERROR at line 1:


ORA-06598: insufficient INHERIT PRIVILEGES privilege
ORA-06512: at "APEX_190200.WWV_FLOW_EXEC_API", line 1
ORA-06512: at line 71
ORA-06598: insufficient INHERIT PRIVILEGES privilege
ORA-06512: at "APEX_190200.WWV_FLOW_SESSION_API", line 56
ORA-06512: at line 11

This error will only happen from Oracle DB 12.1 onwards, where a new PL/SQL security feature was introduced. SYS and DBA accounts do not have the INHERIT (ANY) PRIVILEGES privilege that they need to execute something using invoker rights.

If you connect with your normal database schema, the script will run.

The problem for SYS connects can be circumvented by granting and afterwards revoking INHERIT PRIVILEGES. I didn't test this.

grant inherit privileges on user MYSCHEMA to sys;

It could also be that the user needs to be the APEX schema APEX_190200.
To restore the original situation, revoke the grant.

revoke inherit privileges on user MYSCHEMA from sys;

Interpret and handle results

The script tries to run all selects that power the LOVs inside an APEX session context. If an error is encountered, the statement and the error are reported.

A typical final result looks like this:

...
--------------------
Action Required! 5 LOV definitions raised an error, please review to see if they need to be fixed to avoid facing the same error in your app at runtime.

PL/SQL procedure successfully completed.

The typical problematic SQL that this script reports is when the two columns (display value and return value) have the same name.

The best way to correct this is (after the upgrade) to convert the LOV into a shared component that uses the new LOV logic. The new LOVs can have more than 2 columns and declare which column is used for the DISPLAY value, which for the RETURN value and even one for icons. Display and return value can even be the same single column from the select.

Many but not all changes are so easy.

There are a few additional quirks I would like to mention.

Quirk 1 – false positives

A false positive is when the script reports an error, but in reality the query will never error out. This can happen for example if the select uses page items that are set when the LOV is used, but are not set when the script tests the query, maybe because during the test all page items are empty.

Here is an example of such a case.

select first_name||' '||last_name as display_value, 
       employee_id as return_value
from hr.employees
where hire_date >= to_date(to_char(:P1_YEAR + 1) || '-01-10' ,'YYYY-MM-DD')

Error: ORA-01841: (full) year must be between -4713 and +9999, and not be 0

The problem in this example is that the page item P1_YEAR is NULL while the query is being tested. Using NULL here leads to the ORA error. In reality this page item is filled with some value by other means (default, page load process, cascading parent LOV, URL parameter, …).

There could be many false positives. In my case 2 out of 5 findings were false positives.

If it is easily possible, I suggest rewriting such statements into a form that does not break when the input items are NULL. This is not strictly needed, but it makes the code more robust and also avoids the statement being reported as problematic the next time this script runs.

For the example, the solution could be to use an item that delivers the full date string.

select first_name||' '||last_name as display_value, 
       employee_id as return_value
from hr.employees
where hire_date >= to_date(:P1_THREASHOLD_DAY ,'YYYY-MM-DD')

Quirk 2 – the output from the script has broken lines

When you take the reported SQL from the log file, it might not run, because of wrongly placed line breaks.

Example case (SQL statement was slightly modified and shortened)

original statement

Please notice that there are several single-line comments.

select d,r
from (
   select anzeige_name||'('||dty_daten_typ||')' d
        , spalte_name r
        , spalte_anzeige_name rank
   from targettable
   --where (:P30_TYP,'J') in
   where :P30_TYP != 'I'
   and (:P30_ART,'J') in
      (('R', ck_bedingung1)
      ,('B', ck_bedingung2)
      ,('Z', druck_relevant2)
   --    ,('T', befehl_bedingung1)
   --    ,('I', befehl_relevant)
       )
--order by tabelle_name, anzeige_name

Broken spool output from script

select d,r
from (
   select anzeige_name||'('||dty_daten_typ||')' d
        , spalte_name r
        , spalte_anzeige_name rank
   from targettable
--where (:P30_TYP,'J') in
where :P30_TYP != 'I'
   and (:P30_ART,'J') in
      (('R', ck_bedingung1)
      ,('B', 
ck_bedingung2)
      ,('Z', druck_relevant2)
   --    ,('T', befehl_bedingung1)
   --    ,('I',
befehl_relevant)
       )
--order by tabelle_name,
anzeige_name

Please notice that there are line breaks now that are not in the original select. Some lines are broken in weird places – like inside comments.

The LOV and the statement that is executed by the script do not have this issue.

This example SQL had a non-existing column name somewhere. However, I first needed to correct the wrong line breaks before I could reproduce the issue inside SQL Developer.

The error that is shown by the script itself is correct.

Quirk 3 – errors on outdated pages

Sometimes we have pages in our application that are not in use anymore. There is a higher chance for LOVs/Selects erroring on such a page. The error reported is correct, but the application would never run into this error.

Time to clean up and throw out the old stuff! This is sometimes a tough decision to make. We are only upgrading the APEX version, so we should keep changes to our code to a minimum. It certainly helps when all pages are under version control; then one can always reconstruct a deleted page from the VCS.

Additionally one should keep a backup copy of the old images folder. I usually just rename it to /i_old/.

Main query used

This query is used to identify the LOVs. Each LOV is then executed in a dynamic APEX session context.

select *
from ( select 'Inline' lov_type,
              application_id app_id,
              page_id page_id,
              item_name name,
              lov_definition query
       from apex_application_page_items
       where lov_definition is not null
       and lov_definition not like 'STATIC%'
       and lov_definition not like '.%'
       union all
       select 'Shared Component' lov_type,
               application_id app_id,
               null page_id,
               list_of_values_name name,
               list_of_values_query query
               from apex_application_lovs
               where source_type_code = 'LEGACY_SQL' 
        )
where app_id = &APP_ID
order by lov_type, app_id, page_id, name

One way to prevent an LOV from being listed here is to upgrade it to the newer version. Then it will not be marked as "LEGACY_SQL" anymore.

Tip:
It is advisable to upgrade all old LOVs to the new LOVs. This should be an essential part of the upgrade strategy of your application. Do not misunderstand me. You can keep your LOVs as they were before and the application will still run. But if you want to move forward, upgrade all LOVs.

Upgrading an LOV that is already a shared component is very easy: just press the Convert Legacy SQL button.

screenshot LOV conversion - before

And check the column mapping afterwards

screenshot LOV conversion - after

LOVs that are not shared components need to be copied into a shared component first. Future APEX versions will probably change this.

Conclusion

After the upgrade to APEX 19.2, check the correctness of your LOVs in all apps by running the apex/utilities/check_lovs_for_errors.sql script for each app.

The script points out potential LOV issues – analyse and correct those issues.

Also take some time to upgrade old-style LOVs into shared component multi-column LOVs.

The script check_lovs_for_errors.sql

With kind permission from the APEX team I show the script here

Rem  Copyright (c) Oracle Corporation 1999 - 2019. All Rights Reserved.
Rem
Rem    NAME
Rem      check_lovs_for_errors.sql
Rem
Rem    DESCRIPTION
Rem      Checks List of Values defined both inline on an item, and at shared component level for possible execution errors.
Rem
Rem    NOTES
Rem      - This utility should be run via SQL*Plus and connected as your application parsing schema.
Rem
Rem    Arguments:
Rem     Position 1: Application ID you wish to check
Rem     Position 2: APEX user name from the workspace the application belongs to
Rem
Rem    MODIFIED    (MM/DD/YYYY)
Rem      arayner    10/09/2019 - Created

set define '&'
set concat on
set concat .
set verify off
set termout on

define APP_ID       = &1
define APEX_USER    = &2

declare 
    l_context        apex_exec.t_context;
    l_error_count    number := 0;
    l_lov_count      number := 0;
    
    function is_plsql( p_sql_or_plsql in varchar2 ) return boolean is
    begin
        return coalesce( not( upper( substr(trim(p_sql_or_plsql), 1, 6) ) = 'SELECT' or upper(substr(trim(p_sql_or_plsql), 1, 4)) = 'WITH' ), true );
    end is_plsql;
begin 
    apex_session.create_session( &APP_ID, null, '&APEX_USER' );

    for l_lov in ( select *
                      from ( select 'Inline'             lov_type,
                                    application_id       app_id,
                                    page_id              page_id,
                                    item_name            name,
                                    lov_definition       query
                               from apex_application_page_items
                              where lov_definition       is not null
                                and lov_definition       not like 'STATIC%'
                                and lov_definition       not like '.%'
                              union all
                             select 'Shared Component'   lov_type,
                                    application_id       app_id,
                                    null                 page_id,
                                    list_of_values_name  name,
                                    list_of_values_query query
                               from apex_application_lovs
                              where source_type_code     = 'LEGACY_SQL' )
                     where app_id = &APP_ID
                     order by lov_type, app_id, page_id, name
                  )
    loop
        begin 
            l_lov_count := l_lov_count + 1;
            l_context := apex_exec.open_query_context(
                p_location              => 'LOCAL',
                p_sql_query             => case when not is_plsql( l_lov.query) then l_lov.query end,
                p_plsql_function_body   => case when is_plsql( l_lov.query) then l_lov.query end,
                p_max_rows              => 0 );        -- We are not interested in the results, we simple want to see if the query executes without error
            apex_exec.close( l_context );
        exception    
            when others then 
                l_error_count := l_error_count + 1;
                apex_exec.close( l_context );
                sys.dbms_output.put_line( '--------------------' );
                sys.dbms_output.put_line( 'Error #' || l_error_count );
                sys.dbms_output.put_line( 'Application: ' || l_lov.app_id );
                sys.dbms_output.put_line( 'Type:        ' || l_lov.lov_type );
                if l_lov.page_id is not null then 
                    sys.dbms_output.put_line( 'Page:        ' || l_lov.page_id );
                end if;
                sys.dbms_output.put_line( 'Name:        ' || l_lov.name );
                sys.dbms_output.put_line( 'LOV (' || case when is_plsql( l_lov.query ) then 'PL/SQL' else 'SQL' end || '):' ); 
                sys.dbms_output.put_line( l_lov.query );
                sys.dbms_output.put_line( 'Error:       ' || SQLERRM );
                sys.dbms_output.put_line( '' );
        end;
    end loop;
    
    sys.dbms_output.put_line( '--------------------' );
    if l_error_count > 0 then
        sys.dbms_output.put_line( 'Action Required! ' || l_error_count || ' LOV definitions raised an error, please review to see if they need to be fixed to avoid facing the same error in your app at runtime.' );
    else
        sys.dbms_output.put_line( 'No action required! All ' || l_lov_count || ' LOV definitions executed successfully.' );
    end if;

exception    
    when others then 
        apex_exec.close( l_context );
        sys.dbms_output.put_line( SQLERRM );
end;
/
SQL Developer export wizard

workaround: SQL Error: ORA-54013: INSERT operation disallowed on virtual columns

Problem

Unfortunately, when we export data using the SQL Developer export functionality, it also exports data for virtual columns. While this is certainly useful for spreadsheet exports, it is rarely sensible for insert statements.

Any insert into a virtual column fails with error ORA-54013.

SQL Error: ORA-54013: INSERT operation disallowed on virtual columns

54013. 0000 – “INSERT operation disallowed on virtual columns”
*Cause: Attempted to insert values into a virtual column
*Action: Re-issue the statment without providing values for a virtual column

One way to avoid this is to set the virtual column to invisible before the export is started. SQL Developer will not export the data from invisible columns. But changing the data model in the source just to get a little more convenience is usually not the way to go. Remember that setting columns to invisible and visible again will put these columns at the end of the logical column list. Which could be a problem if code depends on the column order (which it shouldn’t). Also this requires to export the data again.
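
As a sketch (using the person table and its virtual column fullname from the example further below), hiding and re-showing the column would look like this:

alter table person modify (fullname invisible);
-- run the SQL Developer export now
alter table person modify (fullname visible);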

If you already have an export file with a lot of inserts, the following dirty little trick might help you.

Temp column trick

We create a new real column, insert the data into this column and throw it away afterwards. The virtual column is temporarily moved out of the way while doing so.

Let's assume we have a table person with a virtual column fullname that concatenates the columns firstname and lastname.

create table person (id number, 
                     firstname varchar2(100), 
                     lastname varchar2(100), 
                     fullname as lastname||',  '||firstname);

We rename the target column and temporarily add a new column at the end of the table. The insert statements produced by SQL Developer have the column names included in the insert, so they do not depend on column order.

alter table person rename fullname to "_FULLNAME";
alter table person add fullname VARCHAR2(1000);

Then run the insert scripts that were created by SQL Developer.

SET DEFINE OFF;
@PERSON_DATA_TABLE.sql

After the data was inserted, restore the old columns.

alter table person drop column fullname;
alter table person rename "_FULLNAME" to fullname;

Automation

And here are two selects that generate all those statements for a complete schema. The second select needs to run before the generated code from the first select is executed; otherwise the columns in the dictionary will already have been changed.

Preparation Script SQL

select 'alter table "'||table_name||'" rename "'||column_name||'" to "_'||column_name||'";'||chr(10)||
       'alter table "'||table_name||'" add "'||column_name||'" '||case when data_type in ('VARCHAR', 'VARCHAR2','CHAR','NCHAR','NVARCHAR2') then 'VARCHAR2(4000 byte)' else data_type end||';'
       as pre_insert_ddl
from user_tab_cols
where virtual_column='YES' 
--and user_generated='YES'
and hidden_column = 'NO';  

Restore Script SQL

select 'alter table "'||table_name||'" drop column "'||column_name||'";'||chr(10)||
'alter table "'||table_name||'" rename "_'||column_name||'" to "'||column_name||'";'
as post_insert_ddl
from user_tab_cols
where virtual_column='YES'
--and user_generated='YES'
and hidden_column = 'NO';

Restrictions and side effects

The scripts are not perfect. For example, if you have a column name that is already at the maximum identifier length (e.g. 30 or 128 characters long), then adding _ in front of the column name will produce an error. I think this situation is very rare and is best handled by manually correcting the scripts.

Varchar2 columns are expected to be 4000 bytes or less, which might not be correct when extended data types (max_string_size = extended) are used in newer DB versions.

Some special datatypes might need extra consideration. In general the datatype of the new column does not matter as long as it doesn’t produce an error during insert.

SQL Quickie: How to reset an identity column

Introduction

Since Oracle 12 we can declare a column as an identity column. Other databases call this feature an "autoincrement" column, a term that can easily lead to misunderstandings but is already well established. Some of those databases allow resetting the current value of such a column. There is no (documented) way to do that with an Oracle identity column. A recent discussion in the ODC forums led me to wonder whether it is possible to set an identity column to a custom value. And yes, it is.

TL;DR;

  • Change the increment of the sequence that supports the identity column by modifying the column.
  • Increase the value by an insert+rollback.
  • Reset the sequence by modifying the column again.

or use the undocumented RESTART option (12.2 onwards)

alter table demo_identity_reset
modify id generated always as identity restart start with 60;

Showcase

demo setup

First we create a table with an identity column. We use default settings for the sequence, which for example means increment +1 and cache 20.

/* prepare demo table */
create table demo_identity_reset
(id number generated always as identity primary key
,name varchar2(500)
);

Then insert some demo values.

/* add some data */
insert into demo_identity_reset (name) values ('test1');
insert into demo_identity_reset (name) values ('test2');
commit;
select * from demo_identity_reset;
ID NAME
1 test1
2 test2

We now have a table with an identity column where some values are used and more values are currently cached in the SGA.

Set a custom value (12.1)

Gather info

First we need to find the name of the sequence. We could look at all sequences, but it is more correct to look at the column definitions. The name of the sequence can be seen in the default value of the identity column.

/* check data dictionary and find the sequence name*/
select * from user_sequences;

or

/* check data dictionary and find the sequence name*/
select data_default
from user_tab_columns
where table_name = 'DEMO_IDENTITY_RESET'
and identity_column = 'YES';
DATA_DEFAULT
"MYSCHEMA"."ISEQ$$_172206".nextval

In my case the sequence name is ISEQ$$_172206. The generated name will always start with ISEQ$$_.

Other information we need to find is the current value, the cache size and the increment. The only decent way to see the current value is to set the sequence to nocache and after that check the data dictionary. We need to remember the original cache size setting to be able to restore it later to that value.

Start the change

From this step onward no other session should insert at the same time. It might be a good idea to lock the table in exclusive mode during those steps.
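
A sketch of such a lock (of limited use, since the lock is released again by the first of the following ALTER TABLE commands, see "Further information" below):

lock table demo_identity_reset in exclusive mode;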

The cache also needs to be set to nocache to prevent caching any values with the wrong increment during the next steps.

/* reset the cache so that we can see the next value */
select cache_size 
from user_sequences 
where sequence_name = 'ISEQ$$_172206';

alter table demo_identity_reset 
modify id generated always as identity nocache;

Cache size was 20. The sequence now is not caching anymore. This is shown in the user_sequences dictionary view.

Now we can read the next value from the data dictionary and use that information to set an increment that jumps to our target value.

Here I assume a target value of 60.

/* find the next value */
select last_number from user_sequences 
where sequence_name = 'ISEQ$$_172206';

/* calculate the increment: 57 = 60(target value) - 3(last number) */

/* change the increment so that it jumps over all the other values. */
alter table demo_identity_reset 
modify id generated always as identity increment by 57 nocache; 

The last number was 3. Last number will always hold the next value after the cached values.

-- this does a sequence.nextval which we can not call otherwise
insert into demo_identity_reset (name) 
values ('dummy. rollback immediatly.'); 
rollback;

It doesn't matter much which value was inserted (here 59). In case you want to avoid any gaps, this insert needs to be one of the real inserts that has to be done anyway, and the increment needs to be one value more (58 instead of 57), so that instead of a rollback you can do a commit.

/* reset the changes */
alter table demo_identity_reset 
modify id generated always as identity increment by 1 cache 20;

Done. The next insert will now start with our target value 60.

Set a custom value (12.2)

I tested this in 19c on LiveSQL, but it should work in all versions that belong to the 12.2 database family (12.2.0.1, 18c, 19c).

Instead of the complicated series of steps in 12.1 we can do it all in a single command. Unfortunately this command is undocumented, so use it at your own risk.

alter table demo_identity_reset 
modify id generated always as identity restart start with 60;

The RESTART option was mentioned in one of the new feature guides and Roger Troller made me aware of it in one of his blog posts. However currently (19c) it is still missing in the SQL reference documentation.

Test the change

The following steps are not needed anymore. But they are proof that the concept works.

/* check the result in the dictionary*/
select last_number, cache_size 
from user_sequences 
where sequence_name = 'ISEQ$$_172206';
LAST_NUMBER    CACHE_SIZE
 60    20
/* test the result using the table */
insert into demo_identity_reset (name) values ('test3');
commit;
select * from demo_identity_reset;
ID NAME
1  test1
2  test2
60 test3

Yes it works!

Cleanup code

Drop the table and the sequence. A simple drop table is not enough, because the sequence will continue to exist as long as the table is still in the recycle bin.

/* undo testcase */
drop table demo_identity_reset purge;

Further information

In general I would not recommend doing this in a production environment. But there are situations where you want to do it; for example, after copying a production schema to a test database and cleaning out many values, you might like to reset the sequences too.

Such situations are rare, once-in-a-while activities. That is also why I didn't include the lock table command in the main steps. The lock will only hold until the next alter table command, which makes it not really safe to use.

It is not possible to use an alter sequence command for sequences that are connected with an identity column. It results in an error message.

alter sequence "ISEQ$$_172206" restart start with 60;

ORA-32793: cannot alter a system-generated sequence

Here is an example of the definition of an autoincrement column in another database (MySQL).

https://dev.mysql.com/doc/refman/5.7/en/example-auto-increment.html

The AUTO_INCREMENT attribute can be used to generate a unique identity for new rows

No value was specified for the AUTO_INCREMENT column, so MySQL assigned sequence numbers automatically. You can also explicitly assign 0 to the column to generate sequence numbers, unless the NO_AUTO_VALUE_ON_ZERO SQL mode is enabled.

If the column is declared NOT NULL, it is also possible to assign NULL to the column to generate sequence numbers.

When you insert any other value into an AUTO_INCREMENT column, the column is set to that value and the sequence is reset so that the next automatically generated value follows sequentially from the largest column value.

Updating an existing AUTO_INCREMENT column value in an InnoDB table does not reset the AUTO_INCREMENT sequence as it does for MyISAM and NDB tables.

MySQL 5.7 Reference Manual – 3.6.9 Using AUTO_INCREMENT

APEX 5.1 quick tipp: select * considerations for remote views

Many developers don't like select *. I'm a big fan of it, but there are some edge cases where one needs to be careful. I don't want to discuss the pros and cons here. This post just points out a few areas where using select * does not give the advantage one might expect.

I like to use select * in multiple scenarios. Usually to express that I want to access (almost) every column from that table or view. And I'm too lazy to write down every single column name.

A typical APEX example would be to edit a table via an Interactive Grid.

Scenario setup

Imagine a remote database with a table MYTAB. We access this remote database via a database link from a local database with APEX installed. In the SQL workshop we have a view that was originally created as

create view myTab_remote as
select * from mytab@dblinkname;

The advantage of that view is that we create a layer where the name of the DB link is used. With this layer we don't need to know the DB link name inside our APEX application, which makes it easier to switch or change the link.
Also the APEX framework is now able to use the Oracle data dictionary for that view. Newer APEX versions have improved capabilities to work with remote tables, but here I'm talking about version 5.1.

Our Interactive Grid uses a fairly simple query.

select * from myTab_remote
where parent_id = :P1_MASTER_ID

Modifications

So what happens when a column is added to our table on the remote database, for example the column price_unit?

alter table myTab
add price_unit varchar2(5);

The APEX application is still working; no issue happened there. But it does not automatically have the new column.

In general this is good. We have a robust application that does not break simply because a new column was added (it would break if a column were dropped, however).

Certainly the column was added with some intention in mind. We also want to show and edit this new column in our Interactive Grid.

How to add this new column to our application?

The first step is to update the view. One might think this is not necessary, because we created the view using select *. Unfortunately that is not how Oracle works, especially not via a database link.
When investigating the view via the SQL tab in the SQL workshop, we find that the DDL is different from what one would expect.

CREATE OR REPLACE FORCE VIEW "MYTAB_REMOTE" ("ID", "PARENT_ID", "COL_A",  "COL_B",  "PRICE") AS 
   select  "ID", "PARENT_ID", "COL_A",  "COL_B",  "PRICE"  from MYTAB@DBLINKNAME
 /

The column list was expanded inside the select statement and also captured as the column list of the view itself. The new column price_unit is missing.

To add the new column we simply create (or replace) the view again. A normal recompile is not enough. By the way, this also happens with views over local tables.

create or replace view myTab_remote as
select * from mytab@dblinkname;

Hint: never use the FORCE keyword for development work. It may hide important error messages.

After that we can select from the view and will see the new column. However it is still missing from the Interactive Grid.

The second step is to push APEX into reevaluating the column list for this view.

Simply validating the SQL query will not be enough. I found that the following sequence of steps does work.
First add an alias and a new dummy column to the query.

select m.*, 'x' new_dummy_col
from myTab_remote m
where m.parent_id = :P1_MASTER_ID

Validate it and press OK. This results in two new columns being added to the grid column list. Then enter the SQL query again and remove the dummy column.

select m.*
from myTab_remote m
where m.parent_id = :P1_MASTER_ID

The result is that we now have the new column in our list.

The third and last step is to add this column to our default reports.

The new column is at the very end of the column list. On an IG it is shown; on an IR it is not shown by default. In any case we want to move this new column via the Actions > Columns menu to the correct position, and then save the public report via Actions > Save Report. This should be done for each public report in that grid.

Conclusion

Using select * in an APEX Interactive Grid scenario did not lower the maintenance effort needed when adding new columns to the base table. The second step might even be slightly confusing for many developers. So it might be a good idea to use a complete column list there instead of select *.

Side note: Using select * in other scenarios, like inside PL/SQL, does lower the maintenance effort when done right, compared to an explicit full column list.

OGB Appreciation Day 2019: The Dark Mode

#ThanksOGB

This one goes out to the one I love
This one goes out to the one I left behind
A simple theme to occupy my time
This one goes out to the one I love

Dark
Mode

This one goes out to the one I love
This one goes out to the one I’ve left behind
Another theme has occupied my time
This one goes out to the one I love

Dark
Mode

original lyrics by R.E.M. – slightly modified by me [SUW]

Every developer knows that there is a dark mode. Here is why I like that and why you should use it too.

There are two simple reasons.

A) it is better for your eyes (not always)

B) it is better for the climate (always)

I'll explain that a little bit later and also give some sources for further reading. First let's see how dark mode looks in a typical developer environment.

Examples

DBAs have had terminals using dark mode for ages. The typical PuTTY session is a black background with a white font color.

There is a reason why so many DBAs still use putty in dark mode (it is possible to change the colors). If a pattern is so well established, there must be a good reason for it.

Developers should think about that and consider using dark mode too!

Don’t let those pesky DBAs get the upper hand! Fight them with their own weapons!

Now here is an example from APEX. APEX in its newest versions added a dark theme to be used for applications (18.1), but also a dark mode for the development environment (19.2 ea).

First this is how we are used to the application screen in page builder (light mode)

And this is the same page using dark mode

You can change it with a small little switch in your user settings.

OK, dark mode is not perfect for every scenario.

I tried to read up a bit about it. Usually dark mode is recommended if there is lots of text on the screen or if there are graphical elements. It also depends on the surrounding light. In bright daylight, light mode is better suited, because the difference to the surrounding light seems to put some stress on your eyes. But do you program in bright daylight, in darker office light or even at night?

I think dashboard pages profit most from dark mode. Here is an example from the Sample Demo packaged application.

This application works very well in dark mode. Charts do look so much better on a dark background!

But we also see that it was developed in light mode and that there are several elements that are not optimized yet. In the screenshot the legend text is not readable and the tooltip still has a light gray background. Other charts have similar issues. If we do development in “dark mode first” then those issues should disappear.

Why do I claim it is good for the climate?

Dark mode uses less brightness, which means it uses less power on laptops. The effect is especially noticeable on OLED displays.

A study by Google showed energy savings of up to 60%:
https://www.xda-developers.com/google-wants-developers-to-add-dark-themes-to-save-battery-life/

  • Brightness affects power usage, and battery life, in a mostly linear fashion.
  • A dark theme can reduce battery usage, even with max brightness, by up to 63% on AMOLED displays.
  • Pixel color value can also affect power usage quite a bit, with white being the most power-hungry color.

Recommendation

Give dark mode a try!

PS: For blog posts I still do prefer light mode.

Links about dark mode

https://blog.weekdone.com/why-you-should-switch-on-dark-mode/

scientific research about effect of contrast and colors on humans: https://lite.mst.edu/media/research/ctel/documents/LITE-2003-04.pdf

dark mode for google maps announced: https://www.googlewatchblog.de/2019/10/google-maps-der-dark/

https://www.zdnet.com/article/google-heres-why-dark-mode-massively-extends-your-oled-phones-battery-life/

APEX 19.2 ea – features I like

The following is an incomplete and unordered list of tiny and major features that I found interesting in the upcoming APEX 19.2 version. Since this is an early adopter version, be aware that some of those features might not make it into the final product.

I cannot guarantee that all features are exclusive to APEX 19.2; I might have overlooked one or the other in one of the recent previous versions like 19.1 or 18.3.

Here is a link to the new features page. Not everything I mention is on that new features page.

Star item type

There is a new item type “star”. Essentially the star rating plugin has been integrated directly.

The default setting is ok, but we can customize it.

It works even better with the dark theme.

I encountered a display bug when trying to use a large font with the “fa-lg” modifier. But when trying to reproduce that again, it worked. It seems to help, if there is another star item on the same page.

Debug level default

The “Debug” level setting now has new default options. We can customize them via the toolbar.

In addition to the old YES/NO debug switch we can also choose "App Trace" and "Full Trace".

The setting is then attached to the toolbar “Debug” button, so that we can remember which option was chosen.

  • Info = Level4 = YES
  • APP Trace = Level6
  • Full Trace = Level9
dev toolbar
c_log_level_error constant t_log_level := 1; -- critical error 
c_log_level_warn constant t_log_level := 2; -- less critical error 
c_log_level_info constant t_log_level := 4; -- default level if debugging is enabled (for example, used by apex_application.debug) 
c_log_level_app_enter constant t_log_level := 5; -- application: messages when procedures/functions are entered 
c_log_level_app_trace constant t_log_level := 6; -- application: other messages within procedures/functions 
c_log_level_engine_enter constant t_log_level := 8; -- Application Express engine: messages when procedures/functions are entered 
c_log_level_engine_trace constant t_log_level := 9; -- Application Express engine: other messages within procedures/functions 

https://docs.oracle.com/en/database/oracle/application-express/19.1/aeapi/Constants-2.html#GUID-412ED5E2-1739-4A9B-B214-38674B4A4BCD

I think my preferred debug default will be “App Trace”.

faceted search

When I saw the first versions of this during APEX Connect 2019 it was still called “report filter region”. And I was totally hyped about that. It seems to fulfill that promise.

general considerations

There are other products like QlikView/QlikSense that offer such a kind of user experience. And once the users are used to that, they miss it in APEX applications. APEX is not yet on par with those products. Faceted searches are a big step forward for an improved user experience.

Here is a promotional video that shows some of those enhanced filter capabilities in QlikView.

implementation in APEX

So what does the APEX version of faceted search look like and how does it work?

  • First create a data region, like a classic report. I didn’t test other region types like Interactive Reports yet.
  • Then create a faceted search region that references the data region.
  • Then create facets that are connected to database columns from the data region.
APEX 19.2 – faceted search example

The user clicks on any of the filter options in the search (the facet region) and the report is filtered to those search options. The total counts for each option also react to the other filters and change accordingly.

In the facet search region we can create facets of different types (checkbox, radio, range, search, select list). A facet is always connected to a database column. Each facet has a large number of configuration options.

feature assessment

Here is a highly educating discussion with examples about how the GUI for faceted filters should work: https://www.uxmatters.com/mt/archives/2009/09/best-practices-for-designing-faceted-search-filters.php.

When we compare the criteria from that article to the options available in APEX we see that APEX does many things right. UX considerations like how to clear filters are supplied in a declarative way. Still the feature is not complete yet and slightly buggy (I had issues with the range type facet for example). But that is to be expected for such a complex enhancement. And we can expect more improvements in the future.

Meanwhile on twitter:

So one trick for range facets is to use a special syntax on the LOV entries. It works!

This is the best and most useful 19.2 feature!

enhanced LOV

I like icons. The new features page mentions that we can now add icons to LOV entries. Unfortunately this works only for multi-column popup LOVs, not for normal static lists.

An LOV can now be used to populate multiple items. This is a really nice enhancement (and long overdue).

Currently many options seem not to be balanced properly. For example on a modal page layout, depending on some other settings, the shared component LOV uses the return value and does not show the display value.

Remember it is still an early adopter version.

improved LOV configuration

It is now possible to convert a locally defined LOV (one that is defined on a page) into a shared LOV.

Static LOV entries now have conditions, a sequence number and we can comment the LOV.

New design for the switch item

There are three possible component settings that influence how a switch item looks.

APEX 19.2 display style = Switch

display style = Pill Button

display style = Select List

More options are better!

Unfortunately the new Switch style does not show the label value that is currently chosen. Nevertheless it looks very useful, especially for Interactive Grid columns.

Excel Upload for Data load wizard

I couldn’t test this yet, but it sounds highly useful.

Markdown textareas

Markdown is a very common shorthand syntax to format text. Mostly used on GitHub, but also many wikis use a similar syntax. Useful for developers, but also for end users. If the rich text editor offers too many options, the markdown editor is very nice to concentrate on basic styling.

APEX 19.2 markdown editor vs. richtext editor

For more information check out the many helpful articles about markdown on GitHub:

https://help.github.com/en/articles/basic-writing-and-formatting-syntax

github markdown cheatsheet

Other tiny changes

  • The hamburger menu that collapses the left sidebar now removes the sidebar completely, icons included. This is configurable (Collapse Mode in the User Interface settings).
  • The page loading indicator is smaller and spins faster
  • The icon library can be “Font APEX” or “Font APEX – latest”
  • There is a compatibility mode 19.2
  • More options for the logo line: Like “IMAGE and TEXT”

Some quick facts about sequence caches and gaps in IDs

  1. Oracle sequences generate key values without gaps (minor exception to the rule: scalable sequences).
  2. Surrogate keys can and will have gaps.
  3. Gaps exist because it is not worth the huge effort to avoid them.

The biggest misunderstanding is probably the difference between the value coming from a sequence and the primary key value inside a database column (ID column). We can use a sequence to set a surrogate key. Typically we do this with a database trigger, an identity column, or by fetching the value directly during the insert, as sketched below.
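
A quick illustration of those three options, using made-up table and sequence names (a sketch, not production code):

-- a demo table and a sequence
create table demo_tab (id number primary key, txt varchar2(100));
create sequence demo_seq cache 20;

-- option 1: fetch the value directly during the insert
insert into demo_tab (id, txt) values (demo_seq.nextval, 'direct');

-- option 2: let a before-insert trigger fetch the value
create or replace trigger demo_tab_bri
  before insert on demo_tab
  for each row
begin
  :new.id := demo_seq.nextval;
end;
/

insert into demo_tab (txt) values ('via trigger');

-- option 3: an identity column (12c and later) manages its own sequence
create table demo_tab2 (
  id  number generated by default as identity primary key,
  txt varchar2(100)
);
insert into demo_tab2 (txt) values ('via identity');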

“The history of sequence caches is a history of misunderstandings.”

somebody on the internet, 1963

Essentially it is a two-step process. The new ID value is created by sequence.nextval and then used during the insert. Oracle provides the logic to get a new sequence value in a fast, performant and multi-session friendly way. The developer is responsible for making sure this value is not wasted.

The sequence cache is a performance instrument. A lost cache value means we will see gaps in future ID values. There is only a very limited number of activities that make cached sequence values “disappear”. The majority of gaps originates from other activities, like failed inserts or deletes. Many developers seem to think that such a gap is caused by the sequence cache. I find it important to separate which reason ultimately led to a gap in an ID value.

A gap analysis can be the first step. The following select finds gaps and orders them by size. Put your own table and PK column into the “ids” subquery and find out how the gaps are distributed in that table.

-- gap analysis
with ids as (
       select region_id as id,
              lead(region_id) over (order by region_id) as nextid
       from regions)
   , gaps as (
       select id, nextid - id as gap_size
       from ids
       where nextid > id + 1)
select gap_size, count(*) as gaps_found
from gaps
group by gap_size
order by count(*) desc;

Example result

gap_size | gaps_found
20 | 5
3 | 2
2 | 2
19 | 1

Here the data shows a few small gaps. Either some insert didn’t make it or rows had been deleted. But there are also several gaps of size 19 and 20, which means the sequence cache was probably lost at least 6 times.

To be clear: IDs are allowed to have gaps. However, when an end user complains that they constantly see gaps in some ID, we should investigate what is going on. Frequent gaps should not be the expected normal behaviour – it is worth finding the root cause of such an issue.

So here is an (incomplete) list of activities that do or do not influence the caching of sequences and the existence of gaps in an ID column.

action | result
multiple sessions | cache used
multiple sessions with one session slow to commit | IDs not in order of commit => not a gap eventually, but looks like a gap until final commit
alter system flush shared_pool; | cache values lost => gap in ID
shutdown immediate; | cache values not lost
shutdown abort; | cache values lost => gap in ID
insert; commit; | no gap
insert; rollback; | gap in ID
failed insert | gap in ID
delete; commit; | gap in ID
switch sequence from cache to nocache | cache values not lost
kill session | possible gap in ID (values not used/committed are lost)
different RAC sessions | cache values not lost (but separate caches on each RAC node)
SGA size too small | cache values lost => gap in ID (sequence cache ages out of the shared pool per LRU list)
session sequences | not useful for unique ID values => duplicate ID
scalable sequences | cache values not lost => planned large gap in ID (because of prefix)
alter system cancel sql 'SID, serial'; (insert killed) | gap in ID
alter system disconnect session post_transaction; | no gap (unless sequence was fetched before the transaction started)
alter system disconnect session immediate; | gap in ID
sequence restart | possible duplicate IDs
sequence interval > 1 | planned gap in ID
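
A tiny demo sketch for one of the rows above (uses made-up names, needs the ALTER SYSTEM privilege and should only be run on a test system):

-- demo: flushing the shared pool loses the unused cached sequence values
create sequence demo_seq cache 20;

select demo_seq.nextval from dual;   -- returns 1, values 2..20 are cached

alter system flush shared_pool;      -- the cached values 2..20 are lost

select demo_seq.nextval from dual;   -- typically returns 21; the values 2..20 are skipped

drop sequence demo_seq;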

In my experience, a sequence aging out of the shared pool is the most common cause of lost sequence values that gets noticed. A strong indication is when frequent gaps of size 20 exist.
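
A quick cross-check is to compare the suspicious gap size with the cache size of the sequence in the data dictionary (MYSEQ is a placeholder name):

-- compare the gap size from the gap analysis with the sequence cache size
select sequence_name, cache_size, last_number
from   user_sequences
where  sequence_name = 'MYSEQ';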

Possible workarounds are

a) set the sequence to nocache if the sequence is used rarely
alter sequence myseq nocache;
This might cost performance.

b) pin the sequence in the shared pool
dbms_shared_pool.keep('MYSCHEMA.MYSEQ','Q')
Pinning the sequence is useful when you have times with lots of concurrent activities followed by larger idle periods.
This costs a little memory.

c) increase the SGA size – sometimes it helps to increase the shared pool reserved area.
This costs memory (and money) – memory that is then not available for other non-database processes.
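
To verify that workaround b) actually pinned the sequence, we can check the library cache. A minimal sketch with placeholder names; it needs access to v$db_object_cache:

-- pin the sequence and verify that it is marked as kept
exec dbms_shared_pool.keep('MYSCHEMA.MYSEQ','Q');

select owner, name, type, kept
from   v$db_object_cache
where  name = 'MYSEQ'
and    type = 'SEQUENCE';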

Conclusions

  • Small gaps in surrogate keys (ID columns) are natural, mostly because of delete statements or failed inserts.
  • The sequence object typically does not produce gaps.
  • Very frequent gaps of size 20 (default cache size) are not natural and might need investigation.

Oracle 18c quick tip: How to change INITRANS

TL;DR;

-- increase INITRANS for the table
alter table myTable initrans 4;

-- Rebuild the table including indexes
alter table myTable move update indexes;

Why to change

The ITL (interested transaction list) is a list in the header of an Oracle block that is used during DML to track which sessions are currently making changes to that block.

INITRANS is the guaranteed minimum number of ITL slots for concurrent transactions. The bigger INITRANS is, the more space is reserved in the header of an Oracle block. So do not set it to a high value, and do not set it for all tables. Usually setting it to 2 is enough.

It should only be increased for tables (blocks) that are very full and where the same block is accessed from different sessions concurrently. A strong indicator is the wait event “enq: TX – allocate ITL entry”. If this wait happens frequently, you want to increase the available ITL slots by increasing INITRANS.
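
A quick way to find candidate segments is to look at the segment statistics. This is only a sketch and needs access to v$segment_statistics:

-- which segments had to wait for an ITL slot since instance startup?
select owner, object_name, object_type, value as itl_waits
from   v$segment_statistics
where  statistic_name = 'ITL waits'
and    value > 0
order  by value desc;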

See this post by Arup Nanda for an excellent description about ITLs and ITL waits.

How to change

3 things need to be done.

  1. The table parameter needs to be changed. This setting only influences new table extents; it will not modify any existing extents.
  2. The existing extents need to be changed by a MOVE operation (into the same tablespace) so that they pick up the new setting.
  3. During the MOVE all indexes on this table become UNUSABLE, so they have to be rebuilt.
-- increase INITRANS for the table
alter table myTable initrans 4;

Since Oracle 12.2, steps 2 and 3 can be done with a single command.

-- Rebuild the table including indexes
alter table myTable move update indexes;

Note the UPDATE INDEXES addition to the alter table command.

In older DB versions it had to be done step by step:

-- Rebuild the table including indexes
alter table myTable move;

-- check for unusable indexes
SELECT table_name, index_name, tablespace_name
FROM   user_indexes
WHERE  status = 'UNUSABLE'
order by table_name, index_name;

-- prepare index rebuild statements
SELECT 'alter index '||index_name||' rebuild;'
FROM   user_indexes
WHERE  status = 'UNUSABLE'
AND    table_name = 'MYTABLE';

--> grab the result and run the index rebuild commands

-- rebuild indexes
alter index MYTABLE_FK04_IX rebuild;
alter index MYTABLE_FK05_IX rebuild;
alter index MYTABLES_PK rebuild;
alter index MYTABLE_UK01 rebuild;
alter index MYTABLE_UK02 rebuild;
alter index MYTABLE_FK01_IX rebuild;
alter index MYTABLE_FK02_IX rebuild;
alter index MYTABLE_FK03_IX rebuild;
alter index MYTABLE_FK06_IX rebuild;
alter index MYTABLE_FK07_IX rebuild;
alter index MYTABLE_FK08_IX rebuild;
alter index MYTABLE_FK09_IX rebuild;
alter index MYTABLE_FK10_IX rebuild;
alter index MYTABLE_FK11_IX rebuild;

Rebuilding tables in 18c is so easy now!

Sven Weller

I didn't mention that you can do the MOVE of the table as an online operation too. However, I would avoid having other active sessions working with the table at the same time.

“greener” APEX apps

This started as a fun discussion in a syntegris-internal chat group. But it got me thinking:

Is there a way to make APEX applications more energy efficient?

I came up with a set of ideas and options that have the potential to save energy. Not all options are useful and not all options have a measurable effect. Some are very controversial. The ideas presented here are far from complete.

I added my thoughts to a kialo discussion. Feel free to enter your own arguments there. This is my first kialo discussion – I want to try it out. So please be kind. 😉

I will try to add any interesting considerations that are in the kialo discussion also to this post.

From a very high level view point, we have three areas where we can influence the amount of energy that is used: server – network – client.

server side considerations

Using less CPU means using less energy. APEX applications should avoid wasting CPU.

serve APEX files from the Oracle CDN

A content delivery network provides common files from a server that is geographically close to the client. This also means that the company webserver will not provide the file but the CDN server will. It is possible to configure APEX to get its static files (like jquery, core apex css, etc.) from the CDN.

https://blogs.oracle.com/apex/announcing-oracle-apex-static-resources-on-oracle-content-delivery-network

PRO: The company web/app server will need less energy when the static files are delivered by the CDN.

CON: For local clients the time to get the files from the CDN will be longer. Which also means that energy is wasted somewhere. Probably on the network. Maybe on the firewall.

PRO: Resources (images, css, js files) that are identical for multiple applications, even ones from different host URLs, will be cached only once in the client's browser if they are served from the same CDN.
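
The switch itself is a small configuration change. A rough sketch, assuming the reset_image_prefix.sql script that ships in the apex/utilities directory (the exact, versioned CDN URL is listed in the linked announcement):

-- sketch: run from the apex/utilities directory as a DBA / the APEX owner
-- when prompted for the image prefix, enter the versioned CDN URL from the
-- announcement instead of the default /i/
@reset_image_prefix.sql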

ORDS should run on the same machine as the database.

PRO: One less server is one less server.

CON: Servers nowadays are mostly virtualized containers running on the same hardware.

ORDS standalone is better than ORDS on Tomcat

A similar consideration could be made for Tomcat vs. GlassFish or Tomcat vs. WildFly (formerly known as JBoss).

PRO: ORDS standalone runs with the Jetty web server. Jetty is considered to be very lightweight, which means it uses less RAM and probably less CPU.

So far I couldn't test this theory, but there should be a way to measure the impact of ORDS on the different application servers.
Here is a very analytical paper that studies the implications of running Java-based applications on Jetty:
https://hal.inria.fr/hal-00652992v2/document

Using more disk space does not correlate with energy consumption the same way CPU usage does. Disks are cheap and they don't need more energy to store more data.

CON: More data that needs to be transferred and backed up will inevitably need more energy.

PRO: Powered down disks keep the data regardless of how much they store. 4 GB or 5 GB is not a difference to them.

CON: Even a powered down hard drive will still use a little energy. https://superuser.com/questions/565653/how-much-power-does-a-hard-drive-use

Avoid pagination schemes that include Z (x of y from z)

PRO: A pagination scheme with Z means the query is rewritten to include the analytic row_number() over () function. This also means that more data needs to be read from disk into memory, just to know how many potential rows there are.

CON: For smaller tables a multi-block read will load the whole table into memory anyway. Counting or numbering the rows will then have no noticeable impact on energy consumption.
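
To illustrate the point, here is a simplified sketch (not the exact SQL that APEX generates) of what such a pagination query looks like when the total row count Z has to be computed as well; employees is just a sample table:

-- simplified sketch of an "X to Y of Z" pagination query
select *
from (select e.*,
             row_number() over (order by e.last_name) as rn,
             count(*) over () as total_rows   -- the "Z" part forces the whole result set to be evaluated
      from employees e)
where rn between 1 and 15;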

client side considerations

In general a tablet uses less energy than a laptop, which uses less energy than a desktop PC. So should APEX applications be built for tablets first and foremost?

CON: It does not make sense to exclude users that prefer a certain screen size.

CON: We should make the whole screen available to the end user. More information on a single page will mean less need to navigate to other pages. Which potentially saves energy.

PRO: APEX applications should be carefully tested and optimized for usage on a tablet. If they work really well on a smaller screen, then users will not feel forced to switch to another device when using the app.

APEX 19 has a dark mode. Developers should use dark mode and dark themes to save energy.

The idea is that a monitor uses less energy to show “blackness” than to show “whiteness”.

PRO: That seems to be true. See this fairly recent recommendation by google: https://www.xda-developers.com/google-wants-developers-to-add-dark-themes-to-save-battery-life/

  • Brightness affects power usage, and battery life, in a mostly linear fashion.
  • A dark theme can reduce battery usage, even with max brightness, by up to 63% on AMOLED displays.
  • Pixel color value can also affect power usage quite a bit, with white being the most power-hungry color.

Classic Reports use less energy than Interactive Reports (or Interactive Grids)

PRO: A classic report is created on the web server and sent as static HTML to the browser. An interactive report adds a lot of JavaScript functionality to the data, which means all those JS files need to be loaded and some are executed during page load.

CON: Interactive Grids have this lazy loading pagination scheme. Data will only be fetched when the end user requests to see more data by scrolling down.

PRO: With APEX 18.2 we also get the option to create a column toggle report. This is slightly more flexible than a classic report without all the interactions made available by an IR.

network traffic

The computers that run the network are there anyway. However, we can speculate that sending less information over the internet would mean fewer power-hungry computers acting as internet nodes.

Decrease image size. Provide smaller images depending on screen resolution.

PRO: Smaller images allow faster loading times, which positively affects network traffic and also client rendering times. This is especially noticeable on low-bandwidth devices like mobile phones.

CON: There is a huge initial effort to create and provide images for all sizes. It is highly doubtful that this initial energy spend will ever be offset by the energy savings from reduced network traffic.

How do web applications build with APEX compare to other web applications?

APEX applications have a tendency to consume data in a very efficient way.

The overhead for communication with the database is lower than for other types of web applications, like PHP.

CON: Not all applications are data centric.

PRO: There is no additional network between the application layer and the database layer.

CON: With APEX 18, and even more so with APEX 19, reports can be built on top of a web service. The web service provider would be the same no matter what kind of framework was used to build the application. In many cases a pure JavaScript framework (like Oracle JET) might be more efficient at consuming those REST-based web services than an APEX application.

PRO: Because the APEX repository already resides in the database, all the optimizations for database logic are used while preparing a page for rendering.

Result?

What started out as a “Schnapsidee” (a half-baked idea) quickly got me thinking. There are actually quite a few ways to influence the energy consumption of an APEX application. This is true during development, but also while running the application.