Warning! Hidden traps – comments in SQL*Plus

I am a big fan of comments (see also CDD Manifesto a bit further down). Comments are simple ways to add information to objects/code without changing code.

Comments that work in SQL or PLSQL might behave differently in SQL*Plus. This post is to create awareness about such traps.

The basics

comments in SQL and PLSQL

Comments in SQL and PLSQL come in two flavors: the one-line comment and the multi-line comment.

-- this is a one line comment. 

The double hyphen “--” starts the one line comment. It ends at the end of the line.

/* this
is a
multi-line
comment */

A slash “/” followed by an asterisk “*” starts a multi-line comment. It is ended by the reverse sequence “*/”.

Comments do not stack.

-- this /* will
not work */

This also will not work.

/* line 1
   /* line 2
      /* line 3
      line 4 */
   line 5 */
line 6 */

The multi-line comment ends the first time “*/” is encountered, here already at the end of line 4. So lines 5 and 6 are not commented; instead line 5 will raise a syntax error.

In SQL a very special form of a comment is a hint. A hint tries to give the SQL optimizer additional instructions. A + indicates that this comment is a hint.

select /*+ warp_speed(5) */ * from big_table;

A hint could be compared to an annotation in other languages. The comment syntax is used to add some special meaning to the surrounding code. This post is not about such annotations.

comments in SQL*Plus

In general we run SQL statements, PLSQL blocks and SQL*Plus commands in a SQL*Plus script. Typical examples are SELECT statements, CREATE OR REPLACE PACKAGE commands and SET commands.

The normal SQL and PLSQL comments do function in SQL*Plus as well. Additionally we have the REM(ARK) command to add comments.

SQL> REM this is a SQL*plus comment

We can also use PROMPT to have a kind of echo inside a SQL*Plus script. Although it is not a real comment, it is used in a similar way.

SQL> PROMPT *** heading ***
*** heading ***

Surprising differences

SQL*Plus interprets comments slightly differently than SQL or PL/SQL. Here are some examples. Try to guess what happens before reading the result. The examples do work in SQL Developer, but behave differently in SQL*Plus.

Example 1

generate some DDL commands…

select 'drop '||object_type||' '||object_name||';' 
from user_objects 
--where object_type = 'VIEW';
where object_type = 'TRIGGER';

Result 1 (real object names changed)
... many similar rows ...
'DROP'||OBJECT_TYPE||''||OBJECT_NAME||';'
drop SYNONYM PLSQL_PROFILER_DATA;
drop SYNONYM PLSQL_PROFILER_RUNNUMBER;
drop SYNONYM PLSQL_PROFILER_RUNS;
drop SYNONYM PLSQL_PROFILER_UNITS;
drop INDEX ABC_UK01;
drop PACKAGE MYPK;
drop PACKAGE BODY MYPK;
drop TABLE MYTAB;
drop DATABASE LINK DISTANT.WORLD.LOCAL;
drop DATABASE LINK REMOTE.WORK.LOCAL;
2243 rows selected.
SQL> SP2-0734: unknown command beginning "where obje…" - rest of line ignored.
SQL>

2243 rows selected? Yes, there were some triggers, but not that many. When SQL*Plus finds a semicolon at the end of a line, it interprets this as the end of the command. Even if that semicolon is commented out. In this example the statement produced a drop command for each object in the schema. But the intention was to only drop all triggers.
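The trap can be avoided by never ending a commented-out line with a semicolon. Here is a sketch of the corrected script, where the terminator is simply dropped from the disabled line:

select 'drop '||object_type||' '||object_name||';' 
from user_objects 
--where object_type = 'VIEW'
where object_type = 'TRIGGER';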

Example 2

Add plsql doc information…

create or replace function convertNum(p1 in varchar2) return number
is
/***************************
@usage: converts a string into a number. 
        If conversion is not possible, 0 is returned.
@author: Sven
@param : p1 : input string
@return : converted number
****************************/
begin
  return to_number(p1 default '0' on conversion error);
end convertNum;
/

Result 2
SQL>
create or replace function convertNum(p1 in varchar2) return number
is
/***************************
@usage: converts a string into a number.
SP2-0310: unable to open file "usage:.sql"
        If conversion is not possible, 0 is returned.
@author: Sven
SP2-0310: unable to open file "author:.sql"
@param : p1 : input string
SP2-0310: unable to open file "param.sql"
@return : converted number
SP2-0310: unable to open file "return.sql"
****************************/
begin
  return to_number(p1 default '0' on conversion error);
end convertNum;
  9  /

Function created.

SQL>

An @ at the start of a line is always interpreted as a call to a script, even if it is inside a multi-line comment. This lowers the usefulness of the “generate DB doc” functionality in SQL Developer a lot. Putting additional blanks before the @ will not change this behaviour.
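A workaround, sketched here, is to make sure no line inside the comment begins with “@”, for example by prefixing each doc line with an asterisk:

create or replace function convertNum(p1 in varchar2) return number
is
/***************************
* @usage: converts a string into a number. 
*         If conversion is not possible, 0 is returned.
* @author: Sven
* @param : p1 : input string
* @return : converted number
****************************/
begin
  return to_number(p1 default '0' on conversion error);
end convertNum;
/

Since the first non-blank character of each line is now “*” and not “@”, SQL*Plus no longer mistakes the doc tags for script calls.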

Example 3

comment out optional params…

SQL> set serveroutput on -- size unlimited
Result 3
SP2-0158: unknown SET option "--"

This means the SET option is ignored. There is an error message, but it is a SQL*Plus (SP2) error message. This would not stop a script that has WHENEVER SQLERROR EXIT set at the beginning.
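The safe variant is to keep the SET command free of trailing comments and put any remark on its own line:

set serveroutput on size unlimited
rem enable output with unlimited buffer size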

Example 4

comment a statement…

select sysdate from dual; -- print current day

Result 4
SQL> select sysdate from dual; -- print current day
2
Do you notice the 2 at the beginning? The statement is not terminated. SQL*Plus waits for the next line, and the next command will probably complete the buffer and run into a syntax error.
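Moving the comment to its own line keeps the semicolon at the end of the line, so the statement executes immediately:

-- print current day
select sysdate from dual;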

Warning

Personally I think that those examples are bugs. But it is documented behaviour, so Oracle probably disagrees. There is an extra section in the SQL*Plus documentation dedicated to comments in scripts, where these and more problems are described.

Remember

If you intend to run some plsql code or a SQL statement from a SQL*Plus script, then

  • do not have semicolons in a comment, especially not at the end of the line
  • do not use @ in the comment, especially not at the beginning of a line
  • do not use “--” inside SELECTs (hints seem to work)
  • avoid using “--” or /* */ for SQL*Plus comments; use REMARK or PROMPT instead

More side effects

Comments near the beginning of a command can confuse SQL*Plus.

SQL> create /* testproc */ procedure test
2 is
3 begin null;
Warning: Procedure created with compilation errors.
SQL> end;
SP2-0042: unknown command "end" - rest of line ignored.
SQL> /

Blank lines in comments can break the code. This can be suppressed by using SET SQLBLANKLINES ON.

SQL> SET SQLBLANKLINES ON;
SQL> select sysdate from dual
2 /* this is a multi
3 line
4
5 comment */
6 ;
SYSDATE
25-AUG-20

SQL> SET SQLBLANKLINES OFF;
SQL> select sysdate from dual
2 /* this is a multi
3 line
4
SQL> comment */
2 ;
comment */
*
ERROR at line 1:
ORA-00969: missing ON keyword
SQL>

SQL*Plus tries to substitute an ampersand “&” inside a comment with the value of a substitution variable. This can be suppressed using SET DEFINE OFF. Not a big deal, unless we need the substitution elsewhere and therefore can not simply switch it off.

SQL> select sysdate from dual
2 -- current day at marks&spencers
3 ;
Enter value for spencers:
old 2: -- current day at marks&spencers
new 2: -- current day at marks
SYSDATE
25-AUG-20

A comment inside a PROMPT is not a comment, but will be printed to the screen.

SQL> prompt /* comment or not comment that is the question */
/* comment or not comment that is the question */

If you want a “;” to be printed to a spool file using PROMPT, double it to “;;”.

SQL> prompt test;;
test;

Additional thoughts

Good developers can get a solid overview of unknown code just by reading the comments.

Set your syntax highlighting in such a way that makes it easy to read comments. Unfortunately Oracle SQL Developer some time ago switched from green comments to grey comments. My recommendation is to change this setting.

Comment Driven Development (CDD) Manifesto

Comments are good!

Every code needs comments!

Comment first, code later!

Comments state code intention.

Comments describe why a technical decision was made

or not made.

Comments can give examples.

The world needs more comments!

APEX 19.2 quick tip: error messages for PL/SQL Dynamic Content regions

Sometimes I use a region of type PL/SQL Dynamic Content. It is rare and there are often better alternatives – like using a select over a pipelined table function – but it still happens.

If the plsql code errors out, then the error is not shown on the same page, but instead is presented as a page rendering error.

Here is a simplified example:

The PL/SQL Dynamic Content region looks like this

   sys.htp.p('Hello APEX world!');
   raise_application_error(-20201, 'Virus detected, world stopped!');

Running the page results in an ugly rendering error on a separate error page.

Remember the “Technical Info” part is not shown to the end user. So the error message is not shown.

Here is what I did to improve the error presentation. There might be better ways to do it, but it was a quick win and worked for me. So it might help you as well.

1. Capture the error message and write it to the page.
Here I used a hidden div. The class “sqlerror” is used to mark the div.

begin
  sys.htp.p('Hello APEX world!');
  raise_application_error(-20201, 'Virus detected, world stopped!');
exception
  when others then
    -- currently this does not show the error at the page. This would work in a page process.
    apex_error.add_error (
      p_message          => sqlerrm,
      p_display_location => apex_error.c_inline_in_notification 
    );     
    
    --write error into hidden page item
    sys.htp.p('<div class="sqlerror" style="display:none">'||sqlerrm||'</div>');
end;  

2. After page load, use a dynamic action to find this error and display it.

We can not use the “After Refresh” event since PLSQL Dynamic Content regions are not refreshable. So it must be “Page Load”.

// test if there was an error during rendering of any region on the page
if ($('.sqlerror').length) {

    //First clear the errors
    apex.message.clearErrors();

    // Now show new error
    apex.message.showErrors([
        {
            type:       "error",
            location:   [ "page" ],
            message:    $('.sqlerror').text(),
            unsafe:     false
        }]);    
}

Result:

A major difference is that everything that was already rendered at the time of error is now shown in the region. This might be wanted or could be unwanted.

Also we essentially hide the error from the apex engine. So when inspecting /utilities/Debug Messages/ we don’t see that error anymore.

Before the change we would get a debug entry reporting the rendering error.

Since we suppressed all errors, we need to call apex_debug.error inside the plsql exception handler.

    
    -- APEX debug  instrumentation call
    apex_debug.error ('Rendering of region failed! %s', sqlerrm);

And then we get such an entry in the APEX debug stack.

Tested in APEX 19.2 and 20.1. It should also work in older versions, although the JavaScript API apex.message.showErrors might need to be replaced with some custom logic.

conclusion

The region type “PL/SQL Dynamic Content” is very different to normal APEX regions. Do not forget to implement and test some basic error handling if you must use such a region type.

Side note: In theory you should also watch out for XSS attacks. If somebody manages to add malicious code into the error message, it might be shown on the page. I didn’t test for that however.

10 “secret” Oracle DB features you might not know

“Secret” here means either not documented or hidden so deep in the documentation that it is almost impossible to find.

1 – restart sequence

We can use a single ddl statement to reset a sequence. Works from 12.2 onwards.

alter sequence mySeq restart start with 115;

It is also possible to use this to reset an identity column!

alter table myTable
modify ID generated always as identity 
       restart start with 115;

This is not (yet) documented.
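A quick sanity check, as a sketch: after the restart the sequence simply continues from the new start value.

select mySeq.nextval from dual;
115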

2 – hidden format options

We can display a negative number in round brackets using the “pt” format option. Which is not in the list of format options in the SQL reference.

select to_char(-10,'fm9990pt') from dual;
(10)

documented is the “pr” option which uses angle brackets

select to_char(-10,'fm9990pr') from dual;
<10>

See also this ODC thread.

This is documented, just not in the SQL Reference, but in some OBI Publisher documentation.

3 – sample (x,y)

The SAMPLE clause is documented. It is part of the query table expression clause. Not documented is the second secret parameter y.

sample clause
https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/img_text/sample_clause.html

The sample clause allows us to get a portion (a sample) of the tables data.

select * 
from employees SAMPLE(20); 

This returns roughly 20 percent of the employees.

There is a ROW mode and a BLOCK mode. In BLOCK mode 20% of the blocks are returned, instead of 20% of the rows.

The sample clause has a second undocumented parameter.

select *
from employees SAMPLE(20, 0.5);

This also returns 20%. So far no obvious difference…

Jonathan Lewis mentioned that this second parameter is the number of consecutive groups/rows during row or block sampling. It was introduced during changes for the SAMPLE clause in 10g.

My non-representative tests showed that setting this parameter to 1 during block sampling makes sure we get results. Otherwise we frequently get blocks from the table's tablespace that are not filled with data. This seems to depend on the data distribution.

So… if a query like the following returns 0…

-- how many rows are in 1% of the blocks?
select count(*)
from employees SAMPLE BLOCK (1);
0

… then you can set the second parameter to 1 to be sure to get some data.

-- how many rows are in 1% of the blocks?
select count(*)
from employees SAMPLE BLOCK (1, 1);
110

Use at your own risk!

See also this ODC thread.

4 – Exadata test parameter

Warning! This is only for testing purposes! It can potentially destroy your database.

alter system set "_exadata_feature_on"=true scope=spfile;
shutdown immediate;
startup;

Using this parameter we can test some of the newest options which are available only on exadata/cloud, like automated indexing.

5 – sequence cache is not lost during graceful database shutdown

Shutdown normal|immediate|transactional will not lose the sequence cache. Instead the next value will be used and set as last_number. Only a shutdown abort will lose the cache.

This was documented somewhere, but I am not able to find it anymore. It might be that only older versions of the Oracle docs still have this information.

To confirm it, I tested it. It works in SE and EE. It will probably not work in the same way on a RAC – because there is a separate cache for each instance.
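Here is a minimal sketch of such a test (the sequence name tstseq is made up; the commented values reflect the behaviour described above):

create sequence tstseq cache 20;
select tstseq.nextval from dual;  -- 1
-- after SHUTDOWN IMMEDIATE and STARTUP the cache was written back:
select tstseq.nextval from dual;  -- 2
-- after SHUTDOWN ABORT and STARTUP the remaining cached values are lost:
select tstseq.nextval from dual;  -- 22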

There is an indirect mention of the behaviour in the 11g admin guide.

The database might skip sequence numbers if you choose to cache a set of sequence numbers. For example, when an instance abnormally shuts down (for example, when an instance failure occurs or a SHUTDOWN ABORT statement is issued), sequence numbers that have been cached but not used are lost. Also, sequence numbers that have been used but not saved are lost as well. The database might also skip cached sequence numbers after an export and import. See Oracle Database Utilities for details.

So a shutdown abort will lose the sequence cache. All other “organized” shutdowns will keep the cache intact.

The problem is that most DBAs seem way too keen on doing a shutdown abort. So losing the sequence cache looks like the default behaviour, although it is not.

6 – reorder columns

We can change the logical column order by setting them invisible and making them visible again. This can be used to reorder columns in the column list. Important to understand is that the table is not physically changed, only the data dictionary entry is manipulated.

If we want to reorder columns physically, maybe to improve data compression, then this is not the way to do it.

example

create table t (col1 number, col2 number);
Table T created.

desc t;
Name Null? Type   
---- ----- ------ 
COL1       NUMBER 
COL2       NUMBER 

/* make the first column invisible */
alter table t modify col1 invisible;
Table T altered.

desc t;
Name Null? Type   
---- ----- ------ 
COL2       NUMBER 

/* make the first column visible again */
alter table t modify col1 visible;
Table T altered.

desc t;
Name Null? Type   
---- ----- ------ 
COL2       NUMBER 
COL1       NUMBER 

/* the column order is now reversed */

To physically reorder a table we need to export/import it, use dbms_redefinition or move the table.

Side note: The describe command in SQL*plus (and SQL Developer) will not show invisible columns. You can change that by setting the documented SQL*PLUS parameter COLINVISIBLE.

SET COLINVISIBLE ON

desc t;
Name             Null? Type   
---------------- ----- ------ 
COL2                   NUMBER 
COL1 (INVISIBLE)       NUMBER 

7 – object type “NON-EXISTENT”

Namespaces are used to avoid naming clashes between objects of different types. Among others the namespace 1 includes the object types table, view and function. So we can not have a table with the same name as a view or as a function.

There is a special object type in namespace 1 that is called “NON-EXISTENT”. This object type is used to mark dropped objects under edition based redefinition (EBR).

demo

Assuming 3 editions: ORA$BASE => CHILD1 => CHILD2

-- switch to base edition (ORA$BASE)
ALTER SESSION SET EDITION="ORA$BASE";

Adding editionable code in the parent edition ORA$BASE.

-- create a new function in base (EDITIONABLE is the default)
create function test_editions return varchar2
as 
begin
  return 'Base';
end;
/

Test the function in the child edition. It is usable. Although we didn’t create it there yet.

-- switch to child edition (CHILD1)
ALTER SESSION SET EDITION="CHILD1";

-- test function
select test_editions from dual;
TEST_EDITIONS
Base
function is functioning…

Now remove the function from the child edition, but not from the parent.

-- drop code
drop function test_editions;

-- test function again
select test_editions from dual;

ORA-00904: “TEST_EDITIONS”: invalid identifier

This error message was expected. The function was removed. No real object with the name “TEST_EDITIONS” exists (in namespace 1) in the edition CHILD1.
Just to be on the safe side, double check in parent edition:

-- switch back to base and test function
ALTER SESSION SET EDITION="ORA$BASE";
select test_editions from dual;
TEST_EDITIONS
Base
function still functioning in BASE

This is also the result that was expected. The function still works in the parent edition.

What does the data dictionary say? A special view user_objects_ae shows the objects from all editions (_ae):

-- Check dictionary
select object_name, edition_name, object_type, namespace, editionable, status
from user_objects_ae
where object_name = 'TEST_EDITIONS';
OBJECT_NAME    EDITION_NAME  OBJECT_TYPE   NAMESPACE  EDITIONABLE  STATUS
TEST_EDITIONS  ORA$BASE      FUNCTION      1          Y            VALID
TEST_EDITIONS  CHILD1        NON-EXISTENT  1                       VALID
TEST_EDITIONS  CHILD2        FUNCTION      1          Y            INVALID

We see that in edition “CHILD1” the object type of the function now is “NON-EXISTENT”. This is important, otherwise the database would not know to return the ORA-00904 error message when it searches for this object.

If the whole row did not exist, then the object in edition CHILD1 would be inherited from the parent edition ORA$BASE. This was the data situation before the function was actualized in CHILD1.

Even after we drop the function from all editions, the dictionary will still show the NON-EXISTENT entries.

A side note/warning:

The status of the function in the second child edition CHILD2 is invalid after running the sample code. This dictionary entry was created, when the function was dropped in CHILD1.

This is because this function was never compiled in CHILD2. That means the code was never actualized there. And once we drop the code from all parent editions, any attempt to compile the code in CHILD2 will not give an error, but the status will remain invalid.

This is why it is strongly recommended to recompile all editionable objects after code changes, but especially after deletes/drops. Each recompile breaks the relationship to the parent and actualizes the code. So code changes in the parent will no longer propagate to the child after the object was recompiled at least once in the child.
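One way to do such a recompile, sketched here, is dbms_utility.compile_schema, run once per edition:

-- repeat in every child edition
alter session set edition = "CHILD1";
begin
  dbms_utility.compile_schema(schema => user, compile_all => true);
end;
/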

8 – connect / as sysdba to PDB

Using a bequeath connection we can log in to an Oracle database without a username and a password. This only works directly on the server, using an OS account that has enhanced privileges (a member of the OS dba group). To do so the ORACLE_SID environment variable needs to be set. However this connects only to the CDB.

It is possible to use the undocumented environment variable ORACLE_PDB_SID to do the same against a PDB.

Mike Dietrich has posted a nice blog post about it: https://mikedietrichde.com/2020/05/07/can-you-connect-as-sysdba-without-password-into-a-pdb-directly/

export ORACLE_SID=MYCDB
export ORACLE_PDB_SID=MYPDB

$ sqlplus / as sysdba
SQL> show con_name

CON_NAME
------------------------------
MYPDB

9 – print boolean values to serveroutput

There is an undocumented utility package sys.dbms_metadata_util that has a couple of interesting helper methods.

For example put_bool can be used to print boolean variables to serveroutput (using dbms_output).

-- print boolean values to dbms_output
set serveroutput on
begin
  sys.dbms_metadata_util.put_bool('This is ', TRUE);
  sys.dbms_metadata_util.put_bool('This is ', FALSE);
  sys.dbms_metadata_util.put_bool('This is ', NULL);
end;
/
META:13:17:57.489: This is TRUE
META:13:17:57.489: This is FALSE
META:13:17:57.489: This is FALSE

PL/SQL procedure successfully completed.

There are also several other interesting functions in this package. For example a way to convert dictionary column values of type LONG into a CLOB variable.

v_view_source := sys.dbms_metadata_util.long2clob(10000,'USER_SOURCE','TEXT',25) ;

10 – undo_sql

There is a way to create the opposite of an SQL DML statement. For an insert we get a delete, for a delete we get an insert with the deleted column values and for an update we get an update with the previous values.

This is documented, but not many people know about. It is part of the flashback logic and more specifically the flashback transaction query.

Using Oracle Flashback Transaction Query

Use Oracle Flashback Transaction Query to retrieve metadata and historical data for a given transaction or for all transactions in a given time interval. Oracle Flashback Transaction Query queries the static data dictionary view FLASHBACK_TRANSACTION_QUERY, whose columns are described in Oracle Database Reference.

The column UNDO_SQL shows the SQL code that is the logical opposite of the DML operation performed by the transaction. You can usually use this code to reverse the logical steps taken during the transaction. However, there are cases where the UNDO_SQL code is not the exact opposite of the original transaction. For example, a UNDO_SQL INSERT operation might not insert a row back in a table at the same ROWID from which it was deleted.

Some security researchers (Alexander Kornbrust at DOAG 2019 – page 74) believe this poses a security risk, especially because it would allow us to find out about column values that are redacted. There are several ways to circumvent data redaction, and Oracle itself states that this feature is not meant as a high-security implementation (see Security Guidelines). So I don’t see that as a high risk. Using undo_sql to circumvent data redaction is the most complex hack I have seen so far – there are way easier methods.


Still an interesting feature. Here is an example.
It needs a couple of non-default settings to make it work.

example

Using the HR demo schema to test this.

First some steps to enable supplemental logging which is needed for undo_sql to work.

-- preparations as DBA in the CDB
select * from dba_pdbs;
alter session set container = CDB$ROOT;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
alter session set container = ORCL;
-- and as HR
ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Now we can do some changes.

-- as HR user
select * from hr.employees;

update hr.employees
set last_name = '--secret--'
where first_name = 'Randall';
2 rows updated.
delete from hr.employees
where first_name = 'Randall';
2 rows deleted.

Don’t commit or rollback yet. Simply change to another session/account.

-- as DBA
select undo_sql
from FLASHBACK_TRANSACTION_QUERY
where table_name='EMPLOYEES';

Result:

insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('191','Randall','--secret--','RPERKINS','650.505.4876',TO_DATE('19-DEC-99', 'DD-MON-RR'),'SH_CLERK','2500',NULL,'122','50');
insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('143','Randall','--secret--','RMATOS','650.121.2874',TO_DATE('15-MAR-98', 'DD-MON-RR'),'ST_CLERK','2600',NULL,'124','50');
update "HR"."EMPLOYEES" set "LAST_NAME" = 'Perkins' where ROWID = 'AAATiDAAMAAALKzABb';
update "HR"."EMPLOYEES" set "LAST_NAME" = 'Matos' where ROWID = 'AAATiDAAMAAALKzAAr';

-- as HR
rollback;

Alternatively run the 4 statements from undo_sql.

-- reset setup changes

-- as HR
ALTER TABLE HR.EMPLOYEES DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- as DBA
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

Conclusion

10 undocumented or little-known features. Some are useful, some are interesting, some are simple gaps in the documentation. I hope you found at least one that you didn’t know before and enjoyed reading about.

A quirk with object dependencies under EBR

TL;DR;

USER_DEPENDENCIES does not show dependencies between objects in different editions. Which means, even when no dependencies are shown, we can still get errors when trying to drop objects. This is especially relevant when trying to drop TYPEs.

The basics

Oracle tracks dependencies between objects, for example packages but also between types. It provides us a view USER_DEPENDENCIES (and the matching ALL_|DBA_|CDB_DEPENDENCIES views) where we can see how those objects are related (are calling each other).

Here are two small examples

Example 1) Table + View

create table myTab (col1 number);
create view myView as select * from myTab;
Table MYTAB created.
View MYVIEW created.

Example 2) Object Type + Collection Type

create type myOT as object (col1 number);
/
create type myCT as table of myOT;
/
Type MYOT compiled

Type MYCT compiled

Further code demos will all assume that those 4 objects are in place unless mentioned otherwise.

The data dictionary

select * from user_dependencies;
NAME    TYPE  REFERENCED_OWNER  REFERENCED_NAME  REFERENCED_TYPE  SCHEMAID  DEPENDENCY_TYPE
MYOT    TYPE  SYS               STANDARD         PACKAGE          122       HARD
MYCT    TYPE  SYS               STANDARD         PACKAGE          122       HARD
MYVIEW  VIEW  SVEN              MYTAB            TABLE            122       HARD
MYCT    TYPE  SVEN              MYOT             TYPE             122       HARD

We can ignore the references to the sys.standard package. I think they come from references to datatypes that are defined there. Those dependencies do not affect the following quirks.

The data dictionary shows that the view MYVIEW depends on the table MYTAB (example 1) and that the collection type MYCT depends on the object type MYOT (example 2).

The dependency type HARD tells us, that we are not allowed to drop the referenced objects without breaking|invalidating the dependent object.

drop table mytab;
Table MYTAB dropped.
select status from user_objects where object_name = 'MYVIEW';
INVALID

For types we get an error message when trying to drop it.

drop type myot;
ORA-02303: cannot drop or replace a type with type or table dependents

Side note: Another dependency type would be REF, which we encounter for example when using materialized views (MV). If a table is dropped that is used by a select in the materialized view, then we can still fetch data from the MV.

To successfully drop both types we first need to drop the collection type MYCT before we can drop the referenced object type MYOT.

drop type myct;
drop type myot;
Type MYCT dropped.

Type MYOT dropped.

Ok, this worked as expected. Dropping the objects in the opposite order as we had created them.

How does it work under edition based redefinition (EBR)?

Views and types are both editionable objects. That means we can have several “versions” of them in the same schema under different editions.

Assuming my typical 3 edition hierarchy

ORA$BASE ⇒ BETA ⇒ ALPHA 
(Parent ⇒ child1 ⇒ child2)  

I will only use ORA$BASE and ALPHA for this experiment. The 4 objects all have been created in ORA$BASE. Which means, they can be accessed also in the child editions. So let’s switch to ALPHA.

alter session set edition="ALPHA";
select sys_context('userenv','current_edition_name') from dual;
ALPHA

The data dictionary entries are the same as in the base edition.

select * from user_dependencies where name like 'MY%';
NAME    TYPE  REFERENCED_OWNER  REFERENCED_NAME  REFERENCED_TYPE  SCHEMAID  DEPENDENCY_TYPE
MYOT    TYPE  SYS               STANDARD         PACKAGE          122       HARD
MYCT    TYPE  SYS               STANDARD         PACKAGE          122       HARD
MYVIEW  VIEW  SVEN              MYTAB            TABLE            122       HARD
MYCT    TYPE  SVEN              MYOT             TYPE             122       HARD

Now we compile one of the types, but NOT the other.

alter type myct compile;
Type MYCT altered.

This did actualize the type MYCT in the child edition. Actualization is a concept of EBR. When a parent object is (re)compiled in a child edition, the code afterwards exists twice: in the parent and in the child edition.

We go back to the parent edition and repeat our test now.

alter session set edition="ORA$BASE";
select sys_context('userenv','current_edition_name') from dual;
ORA$BASE

We try again to drop the types in the correct drop order.

drop type myct;
drop type myot;
Type MYCT dropped.

Error starting at line : 2 in command -
drop type myot
Error report -
ORA-02303: cannot drop or replace a type with type or table dependents

This time we can not drop the object type. Unfortunately the data dictionary does not tell which object causes this dependency problem.

select * from user_dependencies where name like 'MY%';
NAME    TYPE  REFERENCED_OWNER  REFERENCED_NAME  REFERENCED_TYPE  SCHEMAID  DEPENDENCY_TYPE
MYOT    TYPE  SYS               STANDARD         PACKAGE          122       HARD
MYVIEW  VIEW  SVEN              MYTAB            TABLE            122       HARD

MYOT is not in the list of referenced names anymore. So we should be able to drop it.

The problem is that the view USER_DEPENDENCIES only shows objects that are in the current edition or that are not editionable (like sys.standard). But we have created a dependency between the object MYCT, which still exists in edition ALPHA, and the object MYOT from the parent edition ORA$BASE.

Which objects exist in which edition can be checked using the user_objects_AE view. The AE stands for all editions.

select edition_name, object_name, object_type, editionable
from user_objects_ae
where object_name like 'MY%';
EDITION_NAME  OBJECT_NAME  OBJECT_TYPE   EDITIONABLE
ORA$BASE      MYCT         NON-EXISTENT
ALPHA         MYCT         TYPE          Y
ORA$BASE      MYOT         TYPE          Y
              MYTAB        TABLE
ORA$BASE      MYVIEW       VIEW          Y

Because the type MYOT was never actualized in the edition ALPHA, we only have this object in the parent ORA$BASE. The dependency crosses from one edition to the other. In ALPHA we are still able to see this dependency in the dictionary.

So how can we drop the object in ORA$BASE?

There are three possible ways. Which way to choose depends on the reason why you dropped the object in the first place.

a) Use the force!

drop type myot force;

I do not recommend this, but there are situations where it is the easiest way. This also invalidates any usage of the dependent collection type in the child edition. It is very easy to overlook such an issue, therefore only use FORCE when you understand what the reason for the error is and why it is ok to use FORCE.

b) drop the collection type in the child edition first.

alter session set edition="ALPHA";
drop type myct;

alter session set edition="ORA$BASE";
drop type myot;

c) actualize the object type in the child edition
This is easiest when done before dropping the type.

alter session set edition="ALPHA";
alter type myot compile;

Compiling the type creates a copy of the code in the data dictionary. From that point on any drop will leave a proper reference in the data dictionary of each edition. Cross edition references will not happen when all objects are actualized always. That is also why I start creating new objects in ALPHA and then move them to BETA and later for final testing to ORA$BASE.

Further complications

This looks a bit complicated, but after all we knew what we were doing. So it shouldn’t pose too big of a problem. Right?

Let me introduce you to the Oracle-maintained sys_plsql types, which appeared in 12.2.

Since 12.2 plsql pipelined table functions do not need a dedicated SQL type anymore. If we create such a function, typically inside some package, we can create the needed structures (types) purely inside the package spec.

I also dropped all the previously created objects to avoid confusion.

Example

create or replace package myPkg
  authid definer
is 
  type ot_typ is record (col1 number, col2 varchar2(30));
  type ct_typ is table of ot_typ;
  
  function myFnc return ct_typ pipelined;
end;
/

create or replace package body myPkg 
is 
  
  function myFnc return ct_typ pipelined 
  is
    vRow ot_typ := ot_typ();
  begin
    for i in 1..5 loop
      vRow.col1 := i;
      vRow.col2 := chr(ascii('A')-1+i);
      pipe row(vRow);
    end loop;
  end myFnc;
  
end;
/

Now run the function. Since 12.2 there is no need to add the TABLE() operator anymore. However, because of name resolution issues, the function needs a trailing pair of parentheses.

-- table "operator" syntax
select * from table(myPkg.myFnc); 

-- syntax without table keyword
select * from myPkg.myFnc(); 
COL1 COL2
1    A
2    B
3    C
4    D
5    E

The function successfully returns five rows. No SQL type was actively created. This was needed in the past but is not needed anymore.

Also if we check the dictionary, we do not see any new types. But they secretly exist. If we check user_dependencies (or dba_objects) we can see them.

select * from user_types;
no data found
select * from user_dependencies;
NAME                        TYPE          REFERENCED_OWNER  REFERENCED_NAME         REFERENCED_TYPE  SCHEMAID  DEPENDENCY_TYPE
MYPKG                       PACKAGE       SYS               STANDARD                PACKAGE          122       HARD
MYPKG                       PACKAGE BODY  SYS               STANDARD                PACKAGE          122       HARD
SYS_PLSQL_9C0C5336_24_1     TYPE          SYS               STANDARD                PACKAGE          122       HARD
SYS_PLSQL_9C0C5336_DUMMY_1  TYPE          SYS               STANDARD                PACKAGE          122       HARD
SYS_PLSQL_9C0C5336_9_1      TYPE          SYS               STANDARD                PACKAGE          122       HARD
MYPKG                       PACKAGE BODY  SVEN              MYPKG                   PACKAGE          122       HARD
SYS_PLSQL_9C0C5336_24_1     TYPE          SVEN              SYS_PLSQL_9C0C5336_9_1  TYPE             122       HARD

The view shows a set of types that all have almost identical names. Most importantly, type SYS_PLSQL_9C0C5336_24_1 depends on type SYS_PLSQL_9C0C5336_9_1.

This was the output in 18c (18.7). The result changed in a 19c database. More about that later.

These two types match the type definitions in the package spec.

type ot_typ is record (col1 number, col2 varchar2(30));
type ct_typ is table of ot_typ;

Depending on how the package is then changed in different editions, it can happen that a cross edition dependency is created for those sys_plsql types. This can then prevent further compilation of the package in one of the editions.

We encountered such a scenario during development in an 18.7 database. However the types used were considerably more complex. At some point we were not able to compile a package body because of a missing spec. However the spec was there and could apparently be compiled (but secretly would not properly compile).

I was able to track down the problem to a cross edition type dependency for the generated sys_plsql types. Dropping those types (in the relevant editions) allowed us to recompile the package spec and the package bodies.

If you encounter such a case, dropping all the sys_plsql types in each edition and then recompiling the package solves the issue.

Behaviour in 19c (19.3)

The general cross edition type dependency issue also exists in 19c.

The sys_plsql types are not shown anymore in the dictionary views – I am not sure whether the same mechanism is still in use. Using a simple test case I could not reproduce the package problem that we encountered in the 18c (18.7) database – neither in 18c nor in 19c. Which does not mean that the problem is gone. Only that it is tricky to recreate.

The 18c and 19c patch release notes do not indicate any changes for the sys_plsql types – so it is not clear whether a bug was fixed. Maybe some internal behaviour was changed, but no note about it was added to the documentation.

However in the EBR guide the following is mentioned under Evolutionary capability improvements

The dependency model is now fine-grained: e.g. adding a new column to a table, or a new subprogram to a package spec, no longer invalidates the dependants.

Conclusion

Make sure to always compile all code in your specific edition. This actualizes all the objects and the editions are then independent from each other.

Start by adding new code objects in the child edition first, before you move them to the parent.

Under EBR consider using dedicated SQL types for table functions, even if they are not needed anymore. This gives you a tiny bit more control over the code and a better chance to react to problematic dependency issues.

10 Oracle SQL functions and function parameters you might not know

Functions mentioned here are in alphabetical order.

BITAND function but no BITOR function

There is a BITAND function in SQL but no BITOR function.

BITAND allows us to compare values bit by bit. It returns a value where all bits have been compared using AND. That means a bit is set in the result only if it is set in both operands. This is useful to compare a set of flags (yes/no values) that are stored inside the same integer field.

BIN_TO_NUM is a helper function that “translates” bits into (decimal) numbers.

BITAND example

SELECT BITAND(
    BIN_TO_NUM(1,0,0),  -- 4 in decimal
    BIN_TO_NUM(0,0,1)   -- 1 in decimal
    ) check_flag
  from dual;

0

So 4 and 1 have no bits in common.

SELECT BITAND(
    BIN_TO_NUM(1,0,1),  -- 5 in decimal
    BIN_TO_NUM(1,0,0)   -- 4 in decimal
    ) check_flag
  from dual;

4
4 = 0×2⁰ + 0×2¹ + 1×2² = BIN_TO_NUM(1,0,0)

So 5 and 4 have bit 3 in common. (Bit counting starts with 1, but powers of 2 start with 0.)

BITOR would combine the bits from each operand.

Fortunately BITOR can be calculated using BITAND: we sum the operands and then subtract the BITAND result, i.e. BITOR(a,b) = a + b - BITAND(a,b).

Here is an example

select 1 + 4  
       - BITAND(1,4) bitor_compare
  from dual;

5

select 5 + 4  
       - BITAND(5,4) bitor_compare
from dual;

5

CAST with default on conversion error

The CAST function allows us to convert values from one datatype into another.

Parameter enhancements have been introduced in 12.2 along with similar enhancements to many datatype conversion functions.

https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/CAST.html#GUID-5A70235E-1209-4281-8521-B94497AAEF75

Let’s experiment a little with it.

select cast('01-01-20' as date default null on conversion error
           ,'dd-mm-rr') 
from dual;
01-Jan-20

select cast('INVALID' as date default null on conversion error
           ,'dd-mm-rr') 
from dual;
(null)

Works like a charm. The string was converted into a date (and converted back into a string to print it on the screen using local nls settings). An invalid string could not be converted, instead the default NULL value was chosen.

Sometimes we want to return today in case there is a conversion error…

select cast('01-01-20' as date default sysdate on conversion error
           ,'dd-mm-rr') 
from dual;
01-Jan-20

select cast('INVALID' as date default sysdate on conversion error
           ,'dd-mm-rr') 
from dual;
17-Mar-20

seems to work!

…not so fast

alter session set nls_date_format = 'day-Mon-rr';
select cast('01-01-20' as date default sysdate on conversion error
           ,'dd-mm-rr') 
from dual;

ORA-01858: a non-numeric character was found where a numeric was expected

Although we used CAST with a default on conversion error, we still got a conversion error!
The problem here is the default parameter. It needs to be a string. Since it is not, an implicit type conversion happened. The resulting string (built with the session's nls_date_format) does not match the function parameter fmt. Which then leads to a conversion error.

Unfortunately the default parameter can not be an expression.

select cast('01-01-20' as date 
            default to_char(sysdate,'dd-mm-rr') 
            on conversion error
           ,'dd-mm-rr')
from dual;

ORA-43907: This argument must be a literal or bind variable.

Also nls_date_format is not allowed as nlsparam.

select cast('01-01-20' as date default sysdate on conversion error
           ,'dd-mm-rr'
           ,'nls_date_format=dd-mm-rr')
from dual;

ORA-12702: invalid NLS parameter string used in SQL function

The same could happen for other conversions, like casting a string into a number. This example is from the docs.

SELECT CAST(200
       AS NUMBER
       DEFAULT 0 ON CONVERSION ERROR)
  FROM DUAL;

It is a very very bad example, since 200 is already a number. So let’s assume this is ‘200.00’ but the default is still 0. And we add the other parameters to specify the format of this string.

select cast('200.00' as number default 0 on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;
200

select cast('200x00' as number default 0 on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;
0

Seems to work. But here again we can manipulate the session settings into a wrong conversion. Notice that the alter session settings and the nlsparam settings differ.

alter session set nls_numeric_characters = ',.';
Session altered.

select cast('200x00' as number default 0 on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;
0

Good. Fortunately the number conversion is slightly more robust than the date conversion. We can not set a session default number format (it is always “TM9” I believe) in the same way as we can set the default date format.

However we get into trouble once the default value uses non-integer numbers.

select cast('200x00' as number default 0.5 on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;

5

The default was 0.5 but we got 5!

The correct way to do it of course would be to use a string with a format that reflects the fmt and nlsparam parameters.

select cast('200x00' as number default '0.5' on conversion error
           ,'FM999d00'
           ,'nls_numeric_characters=.,')
from dual;

0,5

Now we got the correct default value returned!

CHECKSUM

Checksum can be used to quickly calculate a checksum over a set of rows. There is an analytic version of it too.

https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/checksum.html#GUID-3F55C5DF-F23A-4B2F-BC6F-E03B34B78BA8

If you want to quickly see if data was changed between two databases, you can run a checksum over each column and compare the result on both environments.

select checksum(phone_number)
from hr.employees;
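The analytic version mentioned above should look like this sketch (assuming the usual OVER clause syntax; one checksum per department instead of one for the whole table):

select department_id,
       checksum(phone_number) over (partition by department_id) as dept_checksum
from hr.employees;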

Never heard of this function before? Probably because it just got introduced in 20c.

Dump

The dump function is often used to inspect binary values. Useful for example when solving character set issues.

Not many know that it has some additional parameters.

https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/DUMP.html#GUID-A05793C9-B35D-4BA7-B68C-E3693BCF47A5

Especially interesting is the return_fmt parameter. It allows us to return the dump in octal (8), hex (16), decimal (10 = default) or a special format (17). The return format “17” returns single-byte values in readable ASCII format, but multi-byte values that can not be converted are shown in hex.

For example the “€” symbol in UTF-8 uses 3 Bytes: E2 82 AC

Source: https://unicode-table.com/en/20AC/

Dump-17 will return this

select dump('abc€',17) from dual;
Typ=96 Len=6: a,b,c,e2,82,ac

The string is made of 6 bytes. The first 3 are single-byte characters. They are converted into ASCII. The 4th character is the Euro sign, which is 3 bytes in UTF-8. So format 17 is interesting because it helps us to focus on and find the problematic characters.

Another interesting option is to add 1000 to the format. This adds the character set to the output.

select dump('abc€',1010) from dual;
Typ=96 Len=6 CharacterSet=AL32UTF8: 97,98,99,226,130,172

Of course that works with NCHAR too.

select dump(n'abc€',1010) from dual;
Typ=96 Len=8 CharacterSet=AL16UTF16: 0,97,0,98,0,99,32,172

LNNVL

This is a very special function. Apart from the unspeakable name it is used for Oracle internal SQL transformations by the optimizer. Here is a document that describes such transformations. LNNVL returns a Boolean and can be used directly inside a where clause. This is not yet possible for user defined functions that return Boolean, but other such functions do exist (e.g. regexp_like).

I sometimes use it to find rows where two values differ. If the comparison value is NULL I still want to consider it to be different.

Instead of using the non-equality operator != we have to use its opposite, the equality operator =. This is just how LNNVL works.

select dummy from dual
where lnnvl('A'='B');
X

select dummy from dual
where lnnvl('A'='A');
No data found.

select dummy from dual
where lnnvl('A'=null);
X

This assumes that ‘A’ always is a not null value.

To get the same result, a comparison condition typically needs to also consider the NULL case.

select dummy from dual
where (('A'!='B' and 'B' is not null) OR 'B' is null);

Since more conditions like this might follow, the statement very soon becomes cluttered with OR checks, parenthesis and IS NULL comparisons.

Unfortunately since not many developers are familiar with this function, we should always add some comments to explain the purpose and the behaviour.
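Following that advice, here is a commented sketch (mytable, col1 and col2 are made-up names):

select t.*
from mytable t
-- LNNVL returns TRUE when the enclosed comparison is FALSE or NULL.
-- Effect: all rows where col1 and col2 differ, treating NULL as different.
where lnnvl(t.col1 = t.col2);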

NANVL

NANVL is similar to NVL.

NVL returns a value, when NULL is encountered.
NANVL returns a value when NaN (not a number) is encountered.

NaN is part of the binary_float and binary_double datatypes.

select to_binary_float(1/17) from dual;
0,05882353

select to_binary_float('NaN') from dual;
NaN

Let's apply NANVL to it

select nanvl(to_binary_float('NaN'),'0') from dual;
0.0

Or we could set it to negative infinity…

select nanvl(to_binary_float('NaN'),'-INF') from dual;
-Infinity

Somehow interesting, but rarely useful I think.

NEXT_DAY function parameter

The NEXT_DAY function needs a second parameter that defines what target weekday it points to.

The argument char must be a day of the week in the date language of your session, either the full name or the abbreviation. The minimum number of letters required is the number of letters in the abbreviated version. Any characters immediately following the valid abbreviation are ignored. The return value has the same hours, minutes, and seconds component as the argument date.

https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/NEXT_DAY.html#GUID-01B2CC7A-1A64-4A74-918E-26158C9096F6

This second parameter char works differently than what you might expect. Here is an example.

alter session set nls_date_format = 'Dy, DD-MON-YYYY';
Session altered.

alter session set nls_language='AMERICAN';
select next_day(sysdate,'FRIDAY') from dual;
Fri, 20-MAR-2020

alter session set nls_language='GERMAN';
Session altered.

select next_day(sysdate,'FREITAG') from dual;
Fr, 20-MRZ-2020

select next_day(sysdate,'FRIDAY') from dual;
Fr, 20-MRZ-2020

Isn’t that strange? It looks as if, although my session language is GERMAN, the American spelling still works!

Let’s test this with a different language. In Spanish “friday” would be “viernes”.

select next_day(sysdate,'VIERNES') from dual;
ORA-01846: Kein gültiger Wochentag
"not a valid day of the week"

alter session set nls_language='SPANISH';
Session altered.

select next_day(sysdate,'VIERNES') from dual;
Vie, 20-MAR-2020

select next_day(sysdate,'FRIDAY') from dual;
ORA-01846: día de la semana no válido
"not a valid day of the week"

Ok. So the Spanish language works as long as the NLS settings are correct, but it doesn’t accept the American spelling.

Is German special? In a way, yes. In German several weekdays have an abbreviation that matches the American spelling, and the abbreviation is only two letters short. Especially those days that have a good chance to be used in the NEXT_DAY function (Monday, Friday, Saturday, but not Sunday!).

For “FREITAG” the abbreviation is “FR”. The NEXT_DAY function accepts anything as long as the char string starts with the abbreviation of a weekday in the correct language. So FREITAG, FRIDAY and even FRITZ or FROG all return the next Friday.

alter session set nls_language='GERMAN';
Session altered.

select next_day(sysdate,'FR') from dual;
Fr, 20-MRZ-2020

select next_day(sysdate,'FRITZ') from dual;
Fr, 20-MRZ-2020

select next_day(sysdate,'FROG') from dual;
Fr, 20-MRZ-2020

Is this how you expected the char parameter to work?

Recommendation? Don’t use NEXT_DAY at all. I prefer TRUNC over NEXT_DAY any day.
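For the record, here is a sketch of an NLS-independent alternative using TRUNC. trunc(date,'IW') returns the Monday of the ISO week, so adding 4 gives that week's Friday:

select case
         when trunc(sysdate,'IW') + 4 > sysdate
           then trunc(sysdate,'IW') + 4
         else trunc(sysdate,'IW') + 11
       end as next_friday
from dual;

Unlike NEXT_DAY this returns midnight instead of preserving the time component of the input date.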

POWER (sql) = ** (plsql)

In SQL to calculate an exponentiation we have to use the POWER function.

Example 2³

select power(2,3) from dual;
8

In plsql we can use the ** operator instead.

set serveroutput on
begin
  dbms_output.put_line(2**3); 
end;
/
8

Unfortunately the ** operator will not work in SQL.

ROUND_TIES_TO_EVEN function

Everybody knows the ROUND function. ROUND rounds a value up when the fractional part is exactly one half.

1.5 ≈ 2
2.5 ≈ 3
...
7.5 ≈ 8
8.5 ≈ 9
9.5 ≈ 10

This is what statisticians call a biased function. The computed values do not reflect the distribution of the original values – especially if there are many 0.5 decisions.

An unbiased version of rounding rounds ties to the nearest even number: one .5 value is rounded up, the next one down, and so on.

1.5 ≈ 2
2.5 ≈ 2
...
7.5 ≈ 8
8.5 ≈ 8
9.5 ≈ 10

In German this is called “mathematisches Runden”, as opposed to the default “kaufmännisches Runden”. In English it is known as banker's rounding.

The round_ties_to_even function has been introduced in 18c.

https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/ROUND_TIES_TO_EVEN-number.html#GUID-49919B6B-4337-4812-A248-B5D98F102DBD
select level+0.5 as val, 
       round(level+0.5), 
       round_ties_to_even(level+0.5)
from dual connect by level <= 10;
 VAL   ROUND(VAL)  ROUND_TIES_TO_EVEN(VAL)
 1.5            2                        2
 2.5            3                        2
 3.5            4                        4
 4.5            5                        4
 5.5            6                        6
 6.5            7                        6
 7.5            8                        8
 8.5            9                        8
 9.5           10                       10
10.5           11                       10

This new rounding function implements banker's rounding. Especially when adding up the rounded values, the rounded sum differs less from the exact (unrounded) sum than with normal rounding.
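A quick sketch to verify the claim about the sums, using the same values as in the table above:

select sum(level+0.5)                     as exact_sum,    -- 60
       sum(round(level+0.5))              as round_sum,    -- 65
       sum(round_ties_to_even(level+0.5)) as bankers_sum   -- 60
from dual connect by level <= 10;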

STANDARD_HASH function

If we quickly want to create some hash values we can use the standard_hash function.

select standard_hash(dummy) from dual;
C032ADC1FF629C9B66F22749AD667E6BEADF144B

The default uses an SHA1 hash algorithm, but we can specify SHA1, SHA256, SHA384, SHA512 and MD5.

select standard_hash(dummy,'SHA512') from dual;
3173F0564AB9462B0978A765C1283F96F05AC9E9F8361EE1006DC905C153D85BF0E4C45622E5E990ABCF48FB5192AD34722E8D6A723278B39FEF9E4F9FC62378

MD5 is not recommended at all (https://www.md5online.org/blog/why-md5-is-not-safe/). SHA1 is also not considered secure anymore, but you can still use it to create hash values for non-security purposes.

Conclusion

That’s it. 10 functions or parameters of functions that you probably didn’t know before. At least not all of them.

APEX 19.2 upgrade quick tip: run check_lovs_for_errors.sql

The APEX 19.2 readme includes info about a small script check_lovs_for_errors.sql that one should run after the upgrade. It is very easy to overlook this in the readme files.

TL;DR;

run

sqlplus user/pwd@dbconnection
spool apex/utilities/check_lovs_for_errors_results.log
set serveroutput on
@apex/utilities/check_lovs_for_errors.sql APP_ID APP_USER
spool off
exit;

Read the result and correct the suspect LOVs in your application.

Behavioural change

Essentially there was a small change in the behaviour of existing LOVs. If the select that supports the LOV uses the identical name for the display and the return value, then this used to work, but will not anymore in 19.2.

And “not work” means we will get an error when running that page.

Here is the relevant section of the release note.

Not only selects that use the same name for display and return value are affected, but also selects that run into an error. LOVs with the first kind of select did work before; LOVs with an erroring select did not work before. Now such an error might prevent the page from opening. Usually you should not have LOVs of the second type.

Fortunately the APEX team provided the little script check_lovs_for_errors.sql that helps us to identify such LOVs.

How to run the script

The script is in the APEX installation folder (where you unzipped the APEX file) in the path apex/utilities/. In case you don’t have access to the script I show it at the end of this blog post. It might help you to identify such LOVs in your own application.

In my environment (18.7 database) the script would not run properly as SYS or as a DBA. The reason for this is described in the section “How to not run the script”. Instead one needs to connect with the schema of your workspace account. I didn’t test it, but I think you might be able to run it in the SQL Workshop.

The script uses two input parameters.

1 = APP_ID = application id
2 = APP_USER = name of an apex user who has access to the application

You won’t need the password for the APP_USER.

Only LOVs from this application will be checked. And the name of the user is used to create an apex_session context.

You can add those parameters while calling the script, or later in interactive mode when the script is running.

Before running the script one must enable serveroutput to see the results.

I suggest spooling the output into a file.

[oracle@ODA01 utilities]$ sqlplus /nolog
SQL*Plus: Release 18.0.0.0.0 - Production on Fri Feb 7 09:28:49 2020
Version 18.7.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

SQL> connect workspaceschema/pwd@dbconnect
Connected.
SQL> spool check_lovs_for_errors_result.log
SQL> set serveroutput on
SQL> @check_lovs_for_errors.sql
Enter value for 1: 200
Enter value for 2: weller
--------------------
Error #1
Application: 200
Type:        Inline
Page:        3110
Name:        P3110_SELECT_LIST_ITEM
...
...
--------------------
Action Required! 5 LOV definitions raised an error, please review to see if they need to be fixed to avoid facing the same error in your app at runtime.

PL/SQL procedure successfully completed.

spool off

The script prints the errors it finds to the screen via dbms_output. That’s why spooling to a file helps. You can always rerun the script to see the same or the corrected results.

Don’t forget to switch spooling off after the script run.

How to not run the script

If you connect with an account (db schema) that doesn’t have the right to read your APEX workspace data, then this error message appears.

ORA-20987: APEX – Security Group ID (your workspace identity) is invalid. –
Contact your application administrator.

If you connect as sys or with a dba account another error will probably happen.

SQL> connect / as sysdba
Connected.
SQL> alter session set container=pdb01;
Session altered.

SQL> @check_lovs_for_errors.sql
Enter value for 1: 200
Enter value for 2: weller
declare
*
ERROR at line 1:


ORA-06598: insufficient INHERIT PRIVILEGES privilege
ORA-06512: at “APEX_190200.WWV_FLOW_EXEC_API”, line 1
ORA-06512: at line 71
ORA-06598: insufficient INHERIT PRIVILEGES privilege
ORA-06512: at “APEX_190200.WWV_FLOW_SESSION_API”, line 56
ORA-06512: at line 11

This error will only happen in Oracle DB 12.1 onwards, where a new plsql security feature was introduced. SYS and DBA accounts do not have the INHERIT (ANY) PRIVILEGES privilege that they need to execute something using invoker rights.

If you connect with your normal database schema, the script will run.

The problem for SYS connects can be circumvented by granting and afterwards revoking INHERIT PRIVILEGES. I didn’t test this.

grant inherit privileges on user MYSCHEMA to sys;

It could also be that the user needs to be the APEX schema APEX_190200.
To restore the original situation, revoke the grant.

revoke inherit privileges on user MYSCHEMA from sys;

Interpret and handle results

The script tries to run all selects that are powering the LOVs inside some apex session context. If an error is encountered, the statement and the error is reported.

A typical final result looks like this:

...
--------------------
Action Required! 5 LOV definitions raised an error, please review to see if they need to be fixed to avoid facing the same error in your app at runtime.

PL/SQL procedure successfully completed.

The typical problematic SQL that this script reports is one where the two columns (display value and return value) have the same name.

The best way to correct this is (after the upgrade) to convert the LOV into a shared component that uses the new LOV logic. The new LOVs can have more than 2 columns and can declare which column is used for the DISPLAY and which for the RETURN value, and even one for icons. Display and return value could even be the same single column from the select.
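To illustrate the typical finding, here is a minimal sketch. The table app_users and its column are made up; the point is that the old LOV selects the same column name twice, which the check via apex_exec typically rejects as a duplicate column name, while distinct aliases work fine.

-- problematic old style LOV: display and return column share the same name
select username, username
from app_users
order by 1;

-- fixed: distinct aliases for display and return value
select username as display_value,
       username as return_value
from app_users
order by 1;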

Many but not all changes are so easy.

There are a few additional quirks I would like to mention.

Quirk 1 – false positives

A false positive is when the script reports an error, but in reality the query will never error out. This can happen, for example, if the select uses page items that are set when the LOV is used, but not set when the script tests the query, maybe because during the test all page items are empty.

Here is an example of such a case.

select first_name||' '||last_name as display_value, 
       employee_id as return_value
from hr.employees
where hire_date >= to_date(to_char(:P1_YEAR + 1) || '-01-10' ,'YYYY-MM-DD')

Error: ORA-01841: (full) year must be between -4713 and +9999, and not be 0

The problem in this example is that the page item P1_YEAR is NULL while testing the query. Using NULL here leads to the ORA- error message. In reality this page item is filled with some value by other means (default, page load process, cascading parent LOV, url parameter, …).

There could be many false positives. In my case 2 out of 5 findings were false positives.

If it is easily possible, then I suggest rewriting such statements into a form that will not break the SELECT when the input items are null. This is not strictly needed, but it makes the code more robust and also avoids the statement being reported as problematic the next time this script runs.

For the example, the solution could be to use an item that delivers the full date string:

select first_name||' '||last_name as display_value, 
       employee_id as return_value
from hr.employees
where hire_date >= to_date(:P1_THRESHOLD_DAY, 'YYYY-MM-DD')
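If introducing a new item is not an option, the statement can also be made null-safe directly. This is just a sketch; it assumes that falling back to the current year is acceptable for the LOV.

select first_name||' '||last_name as display_value,
       employee_id as return_value
from hr.employees
-- nvl guards against an empty page item, so the test run does not break
where hire_date >= to_date(to_char(nvl(:P1_YEAR, extract(year from sysdate)) + 1) || '-01-10', 'YYYY-MM-DD')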

Quirk 2 – the output from the script has broken lines

When you take the reported SQL from the log file, this SQL might not run because of wrongly inserted line breaks.

Example case (SQL statement was slightly modified and shortened)

original statement

Please notice that there are several one line comments.

select d,r
from (
   select anzeige_name||'('||dty_daten_typ||')' d
        , spalte_name r
        , spalte_anzeige_name rank
   from targettable
   --where (:P30_TYP,'J') in
   where :P30_TYP != 'I'
   and (:P30_ART,'J') in
      (('R', ck_bedingung1)
      ,('B', ck_bedingung2)
      ,('Z', druck_relevant2)
   --    ,('T', befehl_bedingung1)
   --    ,('I', befehl_relevant)
       )
--order by tabelle_name, anzeige_name

Broken spool output from script

select d,r
from (
   select anzeige_name||'('||dty_daten_typ||')' d
        , spalte_name r
        , spalte_anzeige_name rank
   from targettable
--where (:P30_TYP,'J') in
where :P30_TYP != 'I'
   and (:P30_ART,'J') in
      (('R', ck_bedingung1)
      ,('B', 
ck_bedingung2)
      ,('Z', druck_relevant2)
   --    ,('T', befehl_bedingung1)
   --    ,('I',
befehl_relevant)
       )
--order by tabelle_name,
anzeige_name

Please notice that there are now line breaks that are not in the original select. Some lines are broken in weird places, like inside comments.

The LOV and the statement that is executed by the script do not have this issue. Only the spooled output is affected.

This example SQL had a non-existing column name somewhere. However, I first needed to correct the wrong line breaks before I could successfully reproduce the issue inside SQL Developer.

The error that is shown by the script itself is correct.
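I didn’t verify it for this script, but since the breaks look like typical dbms_output/spool line wrapping, increasing the line size before spooling might reduce the problem:

set linesize 32767
set trimspool on
set serveroutput on size unlimited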

Quirk 3 – errors on outdated pages

Sometimes we have pages in our application that are not in use anymore. There is a higher chance of LOVs/selects erroring on such a page. The error reported is correct, but the application would never run into it.

Time to clean up and throw out the old stuff! This is sometimes a tough decision to make: we are only upgrading the APEX version, so we should keep changes to our code to a minimum. It certainly helps when all pages are under version control. Then one can always reconstruct a deleted page from the VCS.

Additionally one should keep a backup copy of the old images folder. I usually just rename it to /i_old/.

Main query used

This query is used to identify the LOVs. Each LOV is then executed in a dynamic APEX session context.

select *
from ( select 'Inline' lov_type,
              application_id app_id,
              page_id page_id,
              item_name name,
              lov_definition query
       from apex_application_page_items
       where lov_definition is not null
       and lov_definition not like 'STATIC%'
       and lov_definition not like '.%'
       union all
       select 'Shared Component' lov_type,
               application_id app_id,
               null page_id,
               list_of_values_name name,
               list_of_values_query query
               from apex_application_lovs
               where source_type_code = 'LEGACY_SQL' 
        )
where app_id = &APP_ID
order by lov_type, app_id, page_id, name

One way to prevent an LOV from being listed here is to upgrade it to the newer version. Then it will no longer be marked as “LEGACY_SQL”.

Tip:
It is advisable to upgrade all old LOVs to the new LOVs. This should be an essential part of the upgrade strategy for your application. Do not misunderstand me: you can keep your LOVs as they were before and the application will still run. But if you want to move forward, upgrade all LOVs.

Upgrading a LOV that is already a shared component is very easy. Just press the Convert Legacy SQL button.

screenshot LOV conversion - before

And check the column mapping afterwards

screenshot LOV conversion - after

LOVs that are not shared components need to be copied into a shared component first. Future APEX versions will probably change this.

Conclusion

After the upgrade to APEX 19.2, check the correctness of your LOVs in all apps by running the /apex/utilities/check_lovs_for_errors.sql script for each app.

The script points out potential LOV issues – analyse and correct those issues.

Also take some time to upgrade old-style LOVs into shared component multi-column LOVs.

The script check_lovs_for_errors.sql

With kind permission from the APEX team, I show the script here.

Rem  Copyright (c) Oracle Corporation 1999 - 2019. All Rights Reserved.
Rem
Rem    NAME
Rem      check_lovs_for_errors.sql
Rem
Rem    DESCRIPTION
Rem      Checks List of Values defined both inline on an item, and at shared component level for possible execution errors.
Rem
Rem    NOTES
Rem      - This utility should be run via SQL*Plus and connected as your application parsing schema.
Rem
Rem    Arguments:
Rem     Position 1: Application ID you wish to check
Rem     Position 2: APEX user name from the workspace the application belongs to
Rem
Rem    MODIFIED    (MM/DD/YYYY)
Rem      arayner    10/09/2019 - Created

set define '&'
set concat on
set concat .
set verify off
set termout on

define APP_ID       = &1
define APEX_USER    = &2

declare 
    l_context        apex_exec.t_context;
    l_error_count    number := 0;
    l_lov_count      number := 0;
    
    function is_plsql( p_sql_or_plsql in varchar2 ) return boolean is
    begin
        return coalesce( not( upper( substr(trim(p_sql_or_plsql), 1, 6) ) = 'SELECT' or upper(substr(trim(p_sql_or_plsql), 1, 4)) = 'WITH' ), true );
    end is_plsql;
begin 
    apex_session.create_session( &APP_ID, null, '&APEX_USER' );

    for l_lov in ( select *
                      from ( select 'Inline'             lov_type,
                                    application_id       app_id,
                                    page_id              page_id,
                                    item_name            name,
                                    lov_definition       query
                               from apex_application_page_items
                              where lov_definition       is not null
                                and lov_definition       not like 'STATIC%'
                                and lov_definition       not like '.%'
                              union all
                             select 'Shared Component'   lov_type,
                                    application_id       app_id,
                                    null                 page_id,
                                    list_of_values_name  name,
                                    list_of_values_query query
                               from apex_application_lovs
                              where source_type_code     = 'LEGACY_SQL' )
                     where app_id = &APP_ID
                     order by lov_type, app_id, page_id, name
                  )
    loop
        begin 
            l_lov_count := l_lov_count + 1;
            l_context := apex_exec.open_query_context(
                p_location              => 'LOCAL',
                p_sql_query             => case when not is_plsql( l_lov.query) then l_lov.query end,
                p_plsql_function_body   => case when is_plsql( l_lov.query) then l_lov.query end,
                p_max_rows              => 0 );        -- We are not interested in the results, we simply want to see if the query executes without error
            apex_exec.close( l_context );
        exception    
            when others then 
                l_error_count := l_error_count + 1;
                apex_exec.close( l_context );
                sys.dbms_output.put_line( '--------------------' );
                sys.dbms_output.put_line( 'Error #' || l_error_count );
                sys.dbms_output.put_line( 'Application: ' || l_lov.app_id );
                sys.dbms_output.put_line( 'Type:        ' || l_lov.lov_type );
                if l_lov.page_id is not null then 
                    sys.dbms_output.put_line( 'Page:        ' || l_lov.page_id );
                end if;
                sys.dbms_output.put_line( 'Name:        ' || l_lov.name );
                sys.dbms_output.put_line( 'LOV (' || case when is_plsql( l_lov.query ) then 'PL/SQL' else 'SQL' end || '):' ); 
                sys.dbms_output.put_line( l_lov.query );
                sys.dbms_output.put_line( 'Error:       ' || SQLERRM );
                sys.dbms_output.put_line( '' );
        end;
    end loop;
    
    sys.dbms_output.put_line( '--------------------' );
    if l_error_count > 0 then
        sys.dbms_output.put_line( 'Action Required! ' || l_error_count || ' LOV definitions raised an error, please review to see if they need to be fixed to avoid facing the same error in your app at runtime.' );
    else
        sys.dbms_output.put_line( 'No action required! All ' || l_lov_count || ' LOV definitions executed successfully.' );
    end if;

exception    
    when others then 
        apex_exec.close( l_context );
        sys.dbms_output.put_line( SQLERRM );
end;
/
SQL Developer export wizard

workaround: SQL Error: ORA-54013: INSERT operation disallowed on virtual columns

Problem

Unfortunately, when we export data using the SQL Developer export functionality, it also exports data for virtual columns. While this is certainly useful for spreadsheet exports, it is rarely sensible for insert statements.

Any insert into a virtual column fails with error ORA-54013.

SQL Error: ORA-54013: INSERT operation disallowed on virtual columns

54013. 0000 - "INSERT operation disallowed on virtual columns"
*Cause: Attempted to insert values into a virtual column
*Action: Re-issue the statment without providing values for a virtual column

One way to avoid this is to set the virtual column to invisible before the export is started. SQL Developer will not export the data of invisible columns. But changing the data model in the source just to get a little more convenience is usually not the way to go. Remember that setting columns to invisible and visible again will put these columns at the end of the logical column list, which could be a problem if code depends on the column order (which it shouldn’t). Also, this requires exporting the data again.
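For reference, toggling visibility is a one-liner per direction (table and column names here are just placeholders; requires 12c or newer):

alter table mytable modify virtual_col invisible;
-- run the SQL Developer export now
alter table mytable modify virtual_col visible;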

If you already have an export file with a lot of inserts, the following dirty little trick might help you.

Temp column trick

We create a new real column, insert the data into this column and throw it away afterwards. The virtual column is temporarily moved out of the way while doing so.

Let’s assume we have a table person with a virtual column that concatenates the columns firstname and lastname into fullname.

create table person (id number, 
                     firstname varchar2(100), 
                     lastname varchar2(100), 
                     fullname as lastname||',  '||firstname);

We rename the target column and temporarily add a new column at the end of the table. The insert statements produced by SQL Developer include the column names, so they do not depend on column order.

alter table person rename column fullname to "_FULLNAME";
alter table person add fullname VARCHAR2(1000);

Then run the insert scripts that were created by SQL Developer.

SET DEFINE OFF;
@PERSON_DATA_TABLE.sql

After the data was inserted, restore the old columns.

alter table person drop column fullname;
alter table person rename column "_FULLNAME" to fullname;

Automation

And here are two selects that generate all those statements for a complete schema. The second select needs to run before the generated code from the first select is executed. Otherwise the columns in the dictionary will already have been changed.

Preparation Script SQL

select 'alter table "'||table_name||'" rename column "'||column_name||'" to "_'||column_name||'";'||chr(10)||
       'alter table "'||table_name||'" add "'||column_name||'" '||case when data_type in ('VARCHAR', 'VARCHAR2','CHAR','NCHAR','NVARCHAR2') then 'VARCHAR2(4000 byte)' else data_type end||';'
       as pre_insert_ddl
from user_tab_cols
where virtual_column='YES' 
--and user_generated='YES'
and hidden_column = 'NO';  
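For the person table from the example above, this generates something like:

alter table "PERSON" rename column "FULLNAME" to "_FULLNAME";
alter table "PERSON" add "FULLNAME" VARCHAR2(4000 byte);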

Restore Script SQL

select 'alter table "'||table_name||'" drop column "'||column_name||'";'||chr(10)||
'alter table "'||table_name||'" rename column "_'||column_name||'" to "'||column_name||'";'
as post_insert_ddl
from user_tab_cols
where virtual_column='YES'
--and user_generated='YES'
and hidden_column = 'NO';

Restrictions and side effects

The scripts are not perfect. For example, if you have a column name that is already at maximum identifier length (e.g. 30 or 128 characters long), then adding _ in front of the column name will produce an error. I think this situation is very rare and is best handled by manually correcting the scripts.

Varchar2 columns are expected to be 4k or less, which might not be the case when the extended string size (max_string_size=extended) is used in newer DB versions.
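To check whether the extended limit is active, you can query the parameter (assuming you have access to v$parameter):

select value
from v$parameter
where name = 'max_string_size';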

Some special datatypes might need extra consideration. In general the datatype of the new column does not matter as long as it doesn’t produce an error during insert.

SQL Quickie: How to reset an identity column

Introduction

Since Oracle 12 we can declare columns as identity columns. Other databases call this feature an “autoincrement” column, a term that can easily lead to some misunderstandings but is already well established. Some of those databases allow resetting the current value of such a column. There is no (documented) way to do that with an Oracle identity column. A recent discussion in the ODC forums led me to think about whether it is possible to set an identity column to a custom value. And yes, it is.

TL;DR;

  • Change the increment of the sequence that supports the identity column by modifying the column.
  • Increase the value by an insert+rollback.
  • Reset the sequence by modifying the column again.

or use the undocumented RESTART option (12.2 onwards)

alter table demo_identity_reset
modify id generated always as identity restart start with 60;

Showcase

demo setup

First we create a table with an identity column. We use default settings for the sequence, which for example means increment +1 and cache 20.

/* prepare demo table */
create table demo_identity_reset
(id number generated always as identity primary key
,name varchar2(500)
);

Then insert some demo values.

/* add some data */
insert into demo_identity_reset (name) values ('test1');
insert into demo_identity_reset (name) values ('test2');
commit;
select * from demo_identity_reset;
ID NAME
1 test1
2 test2

We now have a table with an identity column where some values are used and more values are currently cached in the SGA.

Set a custom value (12.1)

Gather info

First we need to find the name of the sequence. We could look at all sequences, but it is more correct to look at the column definitions. The name of the sequence can be seen in the default value of the identity column.

/* check data dictionary and find the sequence name*/
select * from user_sequences;

or

/* check data dictionary and find the sequence name*/
select data_default
from user_tab_columns
where table_name = 'DEMO_IDENTITY_RESET'
and identity_column = 'YES';
DATA_DEFAULT
"MYSCHEMA"."ISEQ$$_172206".nextval

In my case the sequence name is ISEQ$$_172206. The generated name will always start with ISEQ$$_.

Other information we need is the current value, the cache size and the increment. The only decent way to see the current value is to set the sequence to nocache and then check the data dictionary. We need to remember the original cache size setting to be able to restore it later.

Start the change

From this step onward no other session should insert into the table at the same time. It might be a good idea to lock the table in exclusive mode during those steps.
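For example (untested here, and keep in mind that the following alter table commands perform an implicit commit, which releases the lock again):

lock table demo_identity_reset in exclusive mode;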

The sequence also needs to be set to nocache, to prevent caching any values with the wrong increment during the next steps.

/* reset the cache so that we can see the next value */
select cache_size 
from user_sequences 
where sequence_name = 'ISEQ$$_172206';

alter table demo_identity_reset 
modify id generated always as identity nocache;

Cache size was 20. The sequence now is not caching anymore. This is shown in the user_sequences dictionary view.

Now we can read the next value from the data dictionary and use that information to set an increment that jumps to our target value.

Here I assume a target value of 60.

/* find the next value */
select last_number from user_sequences 
where sequence_name = 'ISEQ$$_172206';

/* calculate the increment: 57 = 60(target value) - 3(last number) */

/* change the increment so that it jumps over all the other values. */
alter table demo_identity_reset 
modify id generated always as identity increment by 57 nocache; 

The last number was 3. Last number will always hold the next value after the cached values.

-- this does a sequence.nextval, which we can not call directly otherwise
insert into demo_identity_reset (name) 
values ('dummy. rollback immediately.'); 
rollback;

It doesn’t matter much which value was inserted (59). In case you want to avoid any gaps, this insert needs to be one of the real inserts that has to be done anyway, and the increment needs to be one value more (58 instead of 57), so that instead of a rollback you can do a commit.
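As a sketch, the gap-free variant would look like this (using the same numbers as above; note that after the reset the next insert then gets 61, since 60 is already used):

/* increment by 58 = 60 (target value) - 2 (last used value) */
alter table demo_identity_reset 
modify id generated always as identity increment by 58 nocache;

/* a real insert, gets id 60 */
insert into demo_identity_reset (name) values ('real data');
commit;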

/* reset the changes */
alter table demo_identity_reset 
modify id generated always as identity increment by 1 cache 20;

Done. The next insert will now start with our target value 60.

Set a custom value (12.2)

I tested this in 19c on LiveSQL, but it should work in all versions that belong to the 12.2 database family (12.2.0.1, 18c, 19c).

Instead of the complicated series of steps in 12.1 we can do it all in a single command. Unfortunately this command is undocumented, so use it at your own risk.

alter table demo_identity_reset 
modify id generated always as identity restart start with 60;

The RESTART option was mentioned in one of the new feature guides, and Roger Troller made me aware of it in one of his blog posts. However, currently (19c) it is still missing in the SQL reference documentation.

Test the change

The following steps are not needed anymore. But they are proof that the concept works.

/* check the result in the dictionary*/
select last_number, cache_size 
from user_sequences 
where sequence_name = 'ISEQ$$_172206';
LAST_NUMBER    CACHE_SIZE
60             20

/* test the result using the table */
insert into demo_identity_reset (name) values ('test3');
commit;
select * from demo_identity_reset;

ID NAME
1 test1
2 test2
60 test3

Yes it works!

Cleanup code

Drop the table and the sequence. A simple drop table is not enough, because the sequence would continue to exist as long as the table is still in the recycle bin. That is why the purge option is used.

/* undo testcase */
drop table demo_identity_reset purge;

Further information

In general I would not recommend doing this in a production environment. But there are situations where you want to do it. For example, after copying a production schema to a test database and cleaning out many values, you might want to reset the sequences too.

Such situations are rare, probably once-in-a-while activities. That is also why I didn’t include the lock table command. A lock would only hold until the next alter table command anyway (DDL performs an implicit commit, which releases the lock), which makes it not really safe to use.

It is not possible to use an alter sequence command for sequences that are connected to an identity column. It results in an error message.

alter sequence "ISEQ$$_172206" restart start with 60;

ORA-32793: cannot alter a system-generated sequence

Here is an example of the definition of an autoincrement column in another database (MySQL).

https://dev.mysql.com/doc/refman/5.7/en/example-auto-increment.html

The AUTO_INCREMENT attribute can be used to generate a unique identity for new rows

No value was specified for the AUTO_INCREMENT column, so MySQL assigned sequence numbers automatically. You can also explicitly assign 0 to the column to generate sequence numbers, unless the NO_AUTO_VALUE_ON_ZERO SQL mode is enabled.

If the column is declared NOT NULL, it is also possible to assign NULL to the column to generate sequence numbers.

When you insert any other value into an AUTO_INCREMENT column, the column is set to that value and the sequence is reset so that the next automatically generated value follows sequentially from the largest column value.

Updating an existing AUTO_INCREMENT column value in an InnoDB table does not reset the AUTO_INCREMENT sequence as it does for MyISAM and NDB tables.

MySQL 5.7 Reference Manual – 3.6.9 Using AUTO_INCREMENT

APEX 5.1 quick tip: select * considerations for remote views

Many developers don’t like select *. I’m a big fan of it, but there are some edge cases where one needs to be careful. I don’t want to discuss pros and cons here in this post. This is just to point out a few areas where using select * does not give the advantage one might expect.

I like to use select * in multiple scenarios. Usually to express that I want to access (almost) every column from that table or view. I’m too lazy to write down every single column name.

A typical APEX example would be to edit a table via an Interactive Grid.

Scenario setup

Imagine a remote database with a table MYTAB. We access this remote database via a database link from a local database with APEX installed. In the SQL workshop we have a view that was originally created as

create view myTab_remote as
select * from mytab@dblinkname;

The advantage of that view is that it creates a layer where the name of the DB link is used. Using this layer we don’t need to know the DB link name inside our APEX application, which makes it easier to switch or change the link.
Also, the APEX framework is now able to use the Oracle data dictionary for that view. Newer APEX versions have improved capabilities to work with remote tables, but here I’m talking about version 5.1.

Our Interactive Grid uses a fairly simple query.

select * from myTab_remote
where parent_id = :P1_MASTER_ID

Modifications

So what happens when a column is added to our table on the remote database, for example the column price_unit?

alter table myTab
add price_unit varchar2(5);

The APEX application is still working, no issue happened there. But it does not automatically have the new column.

In general this is good. We have a robust application that does not break simply because a new column was added (it would break if a column was dropped, however).

Certainly the column was added with some intention in mind. We also want to show and edit this new column in our Interactive Grid.

How to add this new column to our application?

First step is to update the view. One might think this is not necessary, because we created the view using select *. Unfortunately that is not how Oracle works, especially not via a database link.
When investigating the view via the SQL tab in the SQL Workshop, we find that the DDL is different from what one would think.

CREATE OR REPLACE FORCE VIEW "MYTAB_REMOTE" ("ID", "PARENT_ID", "COL_A",  "COL_B",  "PRICE") AS 
   select  "ID", "PARENT_ID", "COL_A",  "COL_B",  "PRICE"  from MYTAB@DBLINKNAME
 /

The column list was expanded into the select statement and also captured as the column list name of the view itself. The new column price_unit is missing.

To add the new column we simply create the view again. A normal recompile is not enough. Btw. this also happens with views over local tables.
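For clarity, this is the recompile that does not refresh the stored column list. It revalidates the view, but the column list captured at creation time stays as it is.

alter view myTab_remote compile;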

create or replace view myTab_remote as
select * from mytab@dblinkname;

Hint: never use the FORCE keyword for development work. It may hide important error messages.

After that we can select from the view and will see the new column. However it is still missing from the Interactive Grid.

The second step is to push APEX into reevaluating the column list for this view.

Simply validating the SQL query will not be enough. I found that the following sequence of steps does work.
First add an alias and a new dummy column to the query.

select m.*, 'x' new_dummy_col
from myTab_remote m
where m.parent_id = :P1_MASTER_ID

Validate it and press OK. This results in two new columns being added to the grid column list. Then edit the SQL query again and remove the dummy column.

select m.*
from myTab_remote m
where m.parent_id = :P1_MASTER_ID

The result is that we now have the new column in our list.

The third and last step is to add this column to our default reports.

The new column is at the very end of the column list. On an IG it is shown by default, on an IR it is not. In any case we want to move this new column to the correct position via the Actions > Columns menu, and then save the public report via Actions > Save Report. This should be done for each public report in that grid.

Conclusion

Using select * in an APEX Interactive Grid scenario did not lower the maintenance needed when adding new columns to the base table. The second step might even be slightly confusing for many developers. So it might be a good idea to use a complete column list there instead of select *.

Side note: Using select * in other scenarios, like inside PLSQL, does indeed lower the maintenance effort when done right, compared to an explicit full column list.
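A small sketch of what “done right” can look like: combining %rowtype with select * keeps PLSQL code working when columns are added, since both adapt to the current view definition (stored code just needs a recompile; the example assumes a row with id = 1 exists).

declare
  l_row myTab_remote%rowtype;
begin
  select * into l_row
  from myTab_remote
  where id = 1;
  -- new columns are picked up automatically, no code change needed
  sys.dbms_output.put_line(l_row.price);
end;
/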

OGB Appreciation Day 2019: The Dark Mode

#ThanksOGB

This one goes out to the one I love
This one goes out to the one I left behind
A simple theme to occupy my time
This one goes out to the one I love

Dark
Mode

This one goes out to the one I love
This one goes out to the one I’ve left behind
Another theme has occupied my time
This one goes out to the one I love

Dark
Mode

original lyrics by R.E.M. – slightly modified by me [SUW]

Every developer knows that there is a dark mode. Here is why I like that and why you should use it too.

There are two simple reasons.

A) it is better for your eyes (not always)

B) it is better for the climate (always)

I’ll explain that a little bit later and also give some sources for further reading. First let’s see how dark mode looks in a typical developer environment.

Examples

Now, DBAs have had terminals using dark mode for ages. The typical putty session is a black background with a white font color.

There is a reason why so many DBAs still use putty in dark mode, even though it is possible to change the colors. If a pattern is so well established, there must be something to it.

Developers should think about that and consider using dark mode too!

Don’t let those pesky DBAs get the upper hand! Fight them with their own weapons!

Now here is an example from APEX. APEX in its newest versions added a dark theme to be used for applications (18.1), but also a dark mode for the development environment (19.2 ea).

First this is how we are used to the application screen in page builder (light mode)

And this is the same page using dark mode

You can change it with a small switch in your user settings.

Ok, dark mode is not perfect for every scenario.

I tried to read up a bit about it. Usually dark mode is recommended if there is lots of text on the screen or if there are graphical elements. It also depends on the surrounding light. In bright daylight, light mode is better suited, because the difference to the surrounding light puts some stress on your eyes. But do you program in bright daylight, in darker office light, or even at night?

I think dashboard pages profit most from dark mode. Here is an example from the Sample Demo packaged application.

This application works very well in dark mode. Charts do look so much better on a dark background!

But we also see that it was developed in light mode and that there are several elements that are not optimized yet. In the screenshot the legend text is not readable and the tooltip still has a light gray background. Other charts have similar issues. If we do development in “dark mode first” then those issues should disappear.

Why do I claim it is good for the climate?

Dark mode uses less brightness, which means it uses less power on laptops. The effect is especially noticeable on OLED displays.

A study by Google showed energy savings of up to 63%:
https://www.xda-developers.com/google-wants-developers-to-add-dark-themes-to-save-battery-life/

  • Brightness affects power usage, and battery life, in a mostly linear fashion.
  • A dark theme can reduce battery usage, even with max brightness, by up to 63% on AMOLED displays.
  • Pixel color value can also affect power usage quite a bit, with white being the most power-hungry color.

Recommendation

Give dark mode a try!

PS: For blog posts I still do prefer light mode.

Links about dark mode

https://blog.weekdone.com/why-you-should-switch-on-dark-mode/

scientific research about effect of contrast and colors on humans: https://lite.mst.edu/media/research/ctel/documents/LITE-2003-04.pdf

dark mode for google maps announced: https://www.googlewatchblog.de/2019/10/google-maps-der-dark/

https://www.zdnet.com/article/google-heres-why-dark-mode-massively-extends-your-oled-phones-battery-life/