10 little Oracle SQL features – that you are probably not using enough

This is a collection of some small features, patterns or snippets that I seem to use frequently and like to share. The features are listed in no specific order. However, we can group them into performance related [P], convenience [C] or enabler [E] features.

Enablers [E] are features that allow us to do something that is otherwise impossible or at least very hard to achieve. Sometimes they even help to avoid an error.


Convenience [C] features offer an alternative construct. Often they can be used instead of another option, sometimes making the code easier to change (maintainability).


Performance [P] features improve execution speed. Often they come with a drawback, or one should keep exceptions or rare race conditions in mind.

1. find multiple words using regexp_like [C]

Instead of writing multiple LIKE conditions (or repeating the same select using LIKE multiple times), we can write a single regexp_like expression.


Cumbersome:

where (str like '%word1%' OR str like '%word2%' OR ...)

Much easier:


where regexp_like(str,'(word1|word2|...)')

The pipe “|” inside the regular expression acts as the OR operator and the parentheses () form the subexpression that is needed for that OR.

A very typical case is to query the v$parameter table for a set of parameter names.
example

select name, value 
from v$parameter 
where regexp_like(name,'(listen|dispatch|service)');
NAME                    VALUE
service_names           feno1pod
enable_dnfs_dispatcher  FALSE
dispatchers             (PROTOCOL=TCP) (SERVICE=feno1podXDB)
max_dispatchers
query result

What I especially like about it is that the need for wildcards is gone and it is so easy to add more “words” to search for.

2. fetch first row only [C]

Do you still use rownum frequently? Why not use the row limiting clause instead?

I use it a lot, mostly for ad hoc queries. One advantage is that the need to create an inner query that does the correct ordering disappears.


example

-- show the top 3 tables with the most rows
select table_name, num_rows
from user_tables
order by num_rows desc
fetch first 3 rows only;
TABLE_NAME            NUM_ROWS
IMP_SD_2744-12_45_18     19696
IMP_SD_2822-14_28_53     19692
IMP_SD_194-09_40_50      19545
Query result

3. use expression lists (a,b) [E]

I use this frequently in update statements but also sometimes in joins. Instead of setting each column separately, we can set multiple columns at once, simply by using parentheses and a comma between the columns.

example

update persons u
set (u.first_name, u.last_name) = (select a.firstname, a.name 
                                   from applicants a 
                                   where a.person_id = u.id)
where u.status = 'NEEDS REFRESH';

Or we can filter on a combination of columns: (a,b) in ((c,d),(d,c),(e,f)). The need for this should be rare, but it happens. A strange edge case is when we do an equality comparison of two expression lists. For some reason the right side then needs an additional set of ().

(a,b) = ((c,d))
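Here is a minimal sketch of the IN variant. The table and column names (flights, origin, destination) are made up for illustration:

-- match any of the listed city pairs, in either direction
select *
from flights f
where (f.origin, f.destination) in (('FRA','JFK'), ('JFK','FRA'), ('MUC','LHR'));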

4. De Morgan's laws [C,E]

This might be the single most important math/logic rule that SQL developers should know. It is usually applied inside the where clause, and knowing it by heart will prevent a lot of easy mistakes.

$$\overline{A \cup B} = \overline{A} \cap \overline{B}$$
$$\overline{A \cap B} = \overline{A} \cup \overline{B}$$

De Morgan's laws

written as a SQL expression

not(A or B) = not(A) and not(B)

not(A and B) = not(A) or not(B)

It is easy to remember: OR changes to AND when the parentheses are resolved (and the other way round).
A and B here are full SQL expressions, for example x=y.


Why is that so important? SQL is a very logic based language. In daily speech we often use the logical operators (AND/OR/NOT) differently from how they need to be applied in SQL. This can become confusing very easily. Knowing De Morgan's law helps to quickly check if the logic is used correctly.

An example

Task: “Find all employees that are not working in sales and marketing.”

Converting this sentence 1:1 into SQL would result in this:

select * from emp
where not (department = 'SALES' and department = 'MARKETING'); 

Obviously what is meant is that we want to find those employees that are working in some other department.

Applying De Morgan's law, we can reformulate our select statement. Also we replace NOT (x=y) with x!=y.

select * from emp
where  department != 'SALES' 
    OR department != 'MARKETING'; 

A developer should understand now, that this condition will always be true. For each row the department is either sales or not sales. And if it is sales, then it is not marketing. So the combination is always true (excluding NULL value considerations). Which is probably not what is wanted.

Here the row logic also plays a part. The where clause is applied to a single row, but normal speech often uses the boolean operation to combine data sets.

The correct query would be

select * from emp
where  not (department = 'SALES' 
             OR department = 'MARKETING'); 


or even better, use IN instead of OR

select * from emp
where  department not in ( 'SALES' , 'MARKETING'); 

Historic anecdote: De Morgan was not the first one to discover this logic. Centuries before De Morgan, a man called William of Ockham already wrote about it. He probably had it from Aristotle. Occam (the spelling changed over the centuries) nowadays is more famous for his razor.

5. rollup and grouping sets [C]

To get a total row for a query that uses sum or count, simply add rollup.


example rollup

select tablespace_name, count(*)
from dba_tables 
group by rollup (tablespace_name);
TABLESPACE_NAME	COUNT(*)
DATA	        362
DBFS_DATA	2
SAMPLESCHEMA	14
SYSAUX	        1357
SYSTEM	        1056
	        322
	        3113

The last line is the total number of tables. One problem here is tables without a tablespace (null). The grouping() function helps to separate a normal row with a data value NULL from a superaggregate row (19c grouping).

If there are multiple columns then instead of rollup I use grouping sets. The full set (=total) in a grouping set expression can be expressed by ().

example grouping sets + grouping

select case when grouping(tablespace_name) = 1 then '-all-' else tablespace_name end as tablespace_name
     , case when grouping(cluster_name) = 1 then '-all-' else cluster_name end as cluster_name
     , count(*) 
from dba_tables 
group by grouping sets ((tablespace_name, cluster_name),());
TABLESPACE_NAME	CLUSTER_NAME	COUNT(*)
		                322
DATA		                362
SYSAUX		                1356
SYSAUX	SMON_SCN_TO_TIME_AUX	1
SYSTEM		                1020
SYSTEM	C_RG#	                2
SYSTEM	C_TS#	                2
SYSTEM	C_OBJ#	                17
SYSTEM	C_COBJ#	                2
SYSTEM	C_MLOG#	                2
SYSTEM	C_USER#	                2
SYSTEM	C_FILE#_BLOCK#	        2
SYSTEM	C_OBJ#_INTCOL#	        1
SYSTEM	C_TOID_VERSION#	        6
DBFS_DATA		        2
SAMPLESCHEMA		        14
-all-	-all-	                3113

Again the last line shows the total number of tables. Since I didn’t want to see subtotals for tablespace or cluster, grouping sets is the perfect solution to add this total row.

Also notice that the first line has all null values for the names, same as the total line would have. Using the grouping function allows us to find out which one is the total line and give it a meaningful text.

6. enhanced listagg [E]

In the more recent database versions, the very useful LISTAGG function got even better. For production code I nowadays always try to remember to add some safety protection in case the result of the aggregation becomes big. Otherwise we could get ORA-01489: result of string concatenation is too long.

Since 12.2 we can avoid the error by using the OVERFLOW clause

listagg (... on overflow truncate without count) ...

So when the maximum string size is reached (4000 bytes or 32k bytes depending on the max_string_size parameter), we get usable text instead of the statement raising an error.

example

select count(*)
, listagg(table_name,', ' on overflow truncate) within group (order by tablespace_name desc, table_name) all_tables
from dba_tables;
COUNT(*)  ALL_TABLES
3113      AQ$_ALERT_QT_G, AQ$_ALERT_QT_H, AQ$_ALERT_QT, … many many more tables …, SDO_CRS_GEOGRAPHIC_PLUS_HEIGHT, SDO_CS_SRS, SDO_DATUMS, …(1304)

The three dots “…” are called an ellipsis and can be configured. without count would avoid writing the total number of entries to the end of the list. with count is the default if truncate is specified.

Although the overflow clause is very useful, the ultimate goal would be to give the developer more influence over it. Recently there was an interesting twitter discussion around that topic.

Another useful enhancement (19c) is LISTAGG DISTINCT. example on LiveSQL
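A minimal sketch of LISTAGG DISTINCT (19c), reusing dba_tables from the example above; the column choice is just for illustration:

-- each owner appears only once per tablespace (19c and later)
select tablespace_name
     , listagg(distinct owner, ', ' on overflow truncate)
         within group (order by owner) as owners
from dba_tables
group by tablespace_name;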

7. Using sys_context [C,P]

A sys_context is something like a global variable in other languages. The normal context is for the session, but it is also possible to use application wide contexts.

Oracle provides several “preconfigured” contexts for us. The most common are ‘USERENV’ and ‘APEX$SESSION’ (for apex developers). Contexts are also used for security policies with VPD.

Here are the contexts that I frequently like to use

  • sys_context('userenv','client_identifier')
    value set by dbms_application_info.set_client_info
  • sys_context('userenv','current_edition_name')
    when using edition based redefinition (ebr), this shows which edition I’m in. Always good to double check!
  • sys_context('userenv','current_user')
    similar to pseudocolumn user. The schema name that we connected with.
    In some Oracle versions (12.1) this was much faster than user; my recent test shows that this performance difference is now gone (19c)
  • sys_context('userenv','proxy_user')
    When doing a proxy connect then this is the name of the authenticated user, not the target schema
  • sys_context('userenv','os_user')
    useful when client_identifier is empty, for example the name of the Windows login account when using SQL Developer under Windows.
  • sys_context('apex$session','app_user')
    apex authenticated user
  • sys_context('apex$session','app_id')
    id of the current apex app
  • sys_context('trigger_ctl','audit_on')
    Is auditing enabled/disabled? Part of my custom adaptation/enhancement of Connor McDonalds Audit-Utility package
  • sys_context('userenv','sid')
    the session ID of the current database session
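A quick way to inspect some of these values for the current session, for example in an ad hoc worksheet:

select sys_context('userenv','current_user') as schema_name
     , sys_context('userenv','sid')          as session_id
     , sys_context('userenv','os_user')      as os_username
from dual;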

Side note: UPPER/lower case does not make a difference for contexts. Only for the values of the context.

Warning: when you start using your own custom contexts, be careful with the grant create any context privilege. It can be a security risk! Revoke it after it was used.

8. hint /*+ driving_site */ [P]

I feel like this is a classic for me. In one of my longest running projects we had a lot of databases connected by database links. One of the most important tuning activities was to understand how to do remote joins in a performant way. The driving_site hint was extremely important for that.

General rule: When doing remote queries, avoid mixing local and remote tables in one select statement. If only remote tables are in the statement always use the driving_site hint.

Nowadays the need for this hint diminishes, mostly because databases are less connected by DB links and more by webservices (which does not really improve performance, but that is a different story).
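For reference, a minimal sketch of the hint; the table names and the DB link remote_db are made up. The hint names the alias of a remote table and tells the optimizer to execute the join on that remote site:

select /*+ driving_site(o) */ o.order_id, c.customer_name
from orders@remote_db o
join customers@remote_db c on c.customer_id = o.customer_id;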

9. join via using [C]

This only works reliably when the naming conventions of the data model support it.

example

We have a table PERSON and a table ADDRESS. The primary key in PERSON is PERS_ID. The relationship column (FK) in ADDRESS is also called PERS_ID. It wouldn’t work as well with ADR_PERS_ID for example.

Then we can do a quick join like this:

select * 
from person 
join address using (pers_id) 
where address.type='MAIN';

There are some slight differences compared to the normal way using ON. The main one is that you can no longer differentiate from which table the pers_id originates. For example you can not refer to person.pers_id in the where clause anymore.
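For example, qualifying the join column raises an error:

select person.pers_id
from person
join address using (pers_id);
-- ORA-25154: column part of USING clause cannot have qualifier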

Currently I use it exclusively for ad hoc queries, not in plsql code.

10. interval literals [C]

If I want to add some time to a date or a timestamp, I always use interval literals (unless it is a full day or a full month).

example
Let’s say we want to check tickets that were entered during the last hour. Using the date datatype we can easily calculate an hour by dividing a full day by 24. This is how many developers calculate time. And it is perfectly ok to do so.

select * from tickets 
where createddt >= sysdate - 1/24 ;

The term sysdate-1/24 represents one hour before “now”.

This is how I prefer to write it:

select * from tickets 
where createddt >= sysdate - interval '1' hour ;

I feel the code documents itself in a much better way.

Additionally, requirements like this change easily.
Let’s change both expressions to 15 minutes.

sysdate - 15/24/60

sysdate - interval '15' minute

Which one is better to read and was quicker to change?

Warning: Sayan Malakshinov in the comments mentions that we should not use interval literals (year-month intervals) to add full months. I agree with that. Calendar arithmetic is complex; there are specific functions like add_months for that. So do not use interval '1' month.
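A quick sketch that illustrates the month-end problem:

-- year-month interval arithmetic fails when the target month is shorter
select date '2021-01-31' + interval '1' month from dual;
-- ORA-01839: date not valid for month specified

-- add_months handles the month-end case
select add_months(date '2021-01-31', 1) from dual;
-- 28-FEB-21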

honorable mentions

Some more snippets or functions that didn’t quite make it into the list.

  • nvl2 [C]
  • the third parameter of to_char (nlsparams) [E]
  • query block hints [P]
  • hierarchical with clause [C,E]
  • lateral join [C]
  • alter sequence restart [C,E]
  • cross join [C]
  • lnnvl [C,E]
  • hex format “XXXX” [E]
  • sys.odcinumberlist [C] (see the sketch after this list)
  • analytic sum + correct window clause: rows between instead of range between [E]
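As a teaser for one of them: sys.odcinumberlist turns a short list of numbers into rows, which is handy for ad hoc filters. A minimal sketch:

select column_value as id
from table(sys.odcinumberlist(10, 20, 30));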

Summary

I hope I mentioned something that you didn’t know about before and liked reading about.

Outlook

Here are some features/patterns that I’m currently not using myself, but where I feel that they might become quite important in the future.

  • SQL Macros
  • boolean in SQL (not implemented yet, but Oracle seems to be working on it – I expect some news about this very soon)
  • median and other statistical functions like cume_dist, percentile_xxx, percent_rank
  • match_recognize
  • with clause udf functions (still think they are overrated, but I might be wrong)
  • approx_count_distinct
  • analytical views

Audit column performance retested in 19c: “user” is fast now!

Motivation

In an older and quite successful blog post of mine I tried to find the “perfect” solution for audit columns. See: sequence-and-audit-columns-with-apex-5-and-12c.

One finding then was that sys_context('userenv','current_user') is considerably faster than the user pseudocolumn.

I recently noticed that this seems to have changed and decided to retest the behavior.

The old test was done against a 12.1.0.1 standard edition database.

The new test is done against a 19.5 enterprise edition (on ATP, which features an Exadata).

Test setup

Mass inserting 100000 records into empty tables. A very fast select was used to generate 100k rows of null values.

3 columns were filled with data:

  • ID by a sequence using a cache of 10000
  • created_by filled with the schema name => sys_context or user
  • created_on filled with sysdate

Test 1 (T1) was using sys_context as default value.

Test 2 (T2) was using user as default value.

Test 3 (T3) was using a trigger with sys_context.

Test 4 (T4) was using a trigger with user.

Each test was run once to warm up the database (and the tablespace). This warm-up run didn’t count. Then each test was run 3 times. Only the execution speed of the inserts was measured.

Results

Test 1 – sys_context as default value

Elapsed:1.23 seconds
Elapsed:1.21 seconds
Elapsed:1.26 seconds

Average: 1.23 s

Test 2 – user as default value

Elapsed:1.32 seconds (This looks suspicious. Maybe the warm-up run didn’t warm up enough)
Elapsed:1.16 seconds
Elapsed:1.19 seconds

Average: 1.22 s

Test 3 – sys_context in trigger

Elapsed:17.33 seconds
Elapsed:17.08 seconds
Elapsed:17.05 seconds

Average: 17.15 s

Test 4 – user in trigger

Elapsed:17.03 seconds
Elapsed:16.71 seconds
Elapsed:16.97 seconds

Average: 16.90 s

Comparison

My test shows that user was even a tiny tiny bit faster than sys_context. This means the previous 12.1 recommendation not to use “user” is now outdated. The difference is so small that you can choose whatever you want.

User is fast now!

The main message still is: if you can get rid of the trigger, then do it. The difference between a default value logic and the trigger is huge! But even in the trigger logic there now seems to be a slight performance advantage for user.

If you are still on 12.1 or 12.2 you should run your own tests and compare the results. If you are on 19c or higher I would now use user again.

Test scripts

Create statements

-- cleanup
drop table test_insert_perf_t1_sys_context purge;
drop table test_insert_perf_t2_user purge;
drop table test_insert_perf_t3_trigger purge;
drop sequence test_insert_perf_t3_trigger_seq;
drop table test_insert_perf_t4_trigger purge;
drop sequence test_insert_perf_t4_trigger_seq;

-- create objects
create table test_insert_perf_t1_sys_context
             (id number generated by default on null as identity (cache 10000) primary key
             ,created_by varchar2(128) default on null sys_context('userenv','current_user') not null 
             ,created_on date default on null sysdate not null);

create table test_insert_perf_t2_user
             (id number generated by default on null as identity (cache 10000) primary key
             ,created_by varchar2(128) default on null user not null 
             ,created_on date default on null sysdate not null);

create table test_insert_perf_t3_trigger
             (id number not null primary key
             ,created_by varchar2(128) not null 
             ,created_on date not null);

create sequence test_insert_perf_t3_trigger_seq cache 10000;          

create or replace trigger test_insert_perf_t3_trigger_bri
before insert on test_insert_perf_t3_trigger 
for each row
begin
  :new.id := test_insert_perf_t3_trigger_seq.nextval;
  :new.created_by := sys_context('userenv','current_user');
  :new.created_on := sysdate;
end;
/

create table test_insert_perf_t4_trigger
             (id number not null primary key
             ,created_by varchar2(128) not null 
             ,created_on date not null);

create sequence test_insert_perf_t4_trigger_seq cache 10000;          

create or replace trigger test_insert_perf_t4_trigger_bri
before insert on test_insert_perf_t4_trigger 
for each row
begin
  :new.id := test_insert_perf_t4_trigger_seq.nextval;
  :new.created_by := user;
  :new.created_on := sysdate;
end;
/

Run statements

-------------------------------------------------------------------
-- run tests (insert 100000 rows)
-------------------------------------------------------------------

-- Test 1 --------------------------------------
set serveroutput on
set time on;
declare 
    v_time number;
begin
    v_time := dbms_utility.get_time;

    -- insert 100000 records
    insert into test_insert_perf_t1_sys_context(id)
    (select n1.nr
     from (select cast(null as number) nr from dual connect by level <=100) n1
     cross join (select cast(null as number) nr from dual connect by level <=1000) n2 
    );

    sys.dbms_output.put_line('Elapsed:'||to_char((dbms_utility.get_time - v_time) / 100)||' seconds');
    rollback;

end;
/

-- Test 2 --------------------------------------
set serveroutput on
set time on;
declare 
    v_time number;
begin
    v_time := dbms_utility.get_time;

    -- insert 100000 records
    insert into test_insert_perf_t2_user(id)
    (select n1.nr
     from (select cast(null as number) nr from dual connect by level <=100) n1
     cross join (select cast(null as number) nr from dual connect by level <=1000) n2 
    );

    sys.dbms_output.put_line('Elapsed:'||to_char((dbms_utility.get_time - v_time) / 100)||' seconds');
    rollback;

end;
/


-- Test 3 --------------------------------------
set serveroutput on
set time on;
declare 
    v_time number;
begin
    v_time := dbms_utility.get_time;

    -- insert 100000 records
    insert into test_insert_perf_t3_trigger(id)
    (select n1.nr
     from (select cast(null as number) nr from dual connect by level <=100) n1
     cross join (select cast(null as number) nr from dual connect by level <=1000) n2 
    );

    sys.dbms_output.put_line('Elapsed:'||to_char((dbms_utility.get_time - v_time) / 100)||' seconds');
    rollback;

end;
/


-- Test 4 --------------------------------------
set serveroutput on
set time on;
declare 
    v_time number;
begin
    v_time := dbms_utility.get_time;

    -- insert 100000 records
    insert into test_insert_perf_t4_trigger(id)
    (select n1.nr
     from (select cast(null as number) nr from dual connect by level <=100) n1
     cross join (select cast(null as number) nr from dual connect by level <=1000) n2 
    );

    sys.dbms_output.put_line('Elapsed:'||to_char((dbms_utility.get_time - v_time) / 100)||' seconds');
    rollback;

end;
/

How to setup APEX Developer Accounts on Oracle Cloud Infrastructure (free tier) – a suggestion

If you are an APEX developer who just wants to move to the cloud, there are some things that you should be aware of. Here is my proposal for the steps needed to “cloud enable” APEX developer accounts. This post is only about setting up an account, not about any nifty features to use.

Scenario: We are an admin who already created a new APEX workspace and now wants to give some developers access to that workspace.

The actions were confirmed on an Oracle Free Tier cloud featuring a 19c database and APEX 20.2. There is a good chance that some of the obstacles are removed in a newer version. At the time of writing it is possible to set up a 21c database on the Oracle free tier cloud.

Admin actions

1. create APEX developer account

Inside the workspace create a new APEX account.

Administration/Manage Users and Groups/Create User

Set it to developer. Set and remember the password. You will notice that when switching developer on, the “Require Change of Password on First Use” button will deactivate.

APEX developer and admin accounts in the cloud are created as database users (in uppercase). This has several major implications.

  • The developer can NOT log into the workspace and change the password. For admins it is possible to set a new password using the admin page. Trying to change the password using the normal way of “change my password” results in a success message, but the change is silently ignored. I assume this is the reason why the "Require Change of Password on First Use" button is deactivated.
  • The password rules of the database apply. On Autonomous Transaction Processing (ATP) the accounts use the default profile, which uses the CLOUD_VERIFY_FUNCTION
https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/manage-user-profiles.html#GUID-BD3C5573-DF0E-4E44-919A-08A471462B87
  • Using the same developer name over different workspaces means the same password is used for all. I think this is a welcomed feature, but it certainly is a change to what one is used on other platforms like on premises or on apex.oracle.com.
  • If the developer username has special characters in it, for example a dot or an @ as in “firstname.lastname@my.email”, then the database user later always needs to be put in double quotes: “firstname.lastname@my.email”. Although that works, it prevents certain other options, like rest enabling the account (ORA-20018: Schema or user name must be a simple unquoted literal).

    I strongly recommend using only letters, numbers and underscores for the name (simple literals). No dots, no @, no blanks or any other special chars. For the specific rules see the docs:

Nonquoted identifiers must begin with an alphabetic character from your database character set. …
Nonquoted identifiers can only contain alphanumeric characters from your database character set and the underscore (_), dollar sign ($), and pound sign (#). … Oracle strongly discourages you from using $ and # in nonquoted identifiers.

From https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/Database-Object-Names-and-Qualifiers.html#GUID-75337742-67FD-4EC0-985F-741C93D918DA
  • The option to create multiple users via one dialog does not work properly for developer accounts. Do not use this dialog! (It is possible to use it for end users.)

Side note: End users are not created as database users. This means we can change the end user password using the normal APEX logic. And the other implications do not apply to them either.

2. enable the developer to connect with SQL Developer

enable connect

By default the created DB user can not create a session. We need to grant that privilege, so he/she can connect and change the password.

grant create session to <developer>;

proxy connect

A developer usually needs full access to the application schema (with the tables in it) he/she is working with. To avoid distributing the application schema password, one can set up a proxy connection. This means the developer authenticates him/herself using the individual password, not the password of the application schema.

alter user <application_schema> grant connect through <developer>;

There are additional options possible like setting an extra password for the proxy connect or allowing only specific roles. For the typical developer scenario those extra settings seem not necessary.
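For completeness, here is a sketch of those variants; schema, user and role names are made up:

-- restrict the proxy session to a specific role
alter user application_schema grant connect through developer with role app_developer_role;

-- require an extra password for the proxy connect
alter user application_schema grant connect through developer authenticated using password;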

Side note: A proxy connection is what APEX also uses to connect to the schema using only the APEX_PUBLIC_USER.

send wallet

To access the cloud database from a local client a key is needed. Oracle provides this key in a PKCS#12 wallet inside a wallet.zip file. The admin can download this wallet from the database service console. This is a database instance wallet that is only valid for this specific database. There are also regional wallets that are valid for all databases inside a region.

Write down who you sent the zip to. This is important since the admin might want to rotate the wallet. I prefer long intervals for the wallet rotation. But it should be rotated from time to time (for more information see: Rotate Wallets for Autonomous Database ).

Make the developer understand that he/she is not allowed to give the wallet to other users/developers.

Side note: There is an api to handle wallets (https://docs.oracle.com/en-us/iaas/api/#/en/database/20160918/AutonomousDatabaseWallet/UpdateAutonomousDatabaseWallet). Using this api one could automatically rotate the wallet at defined intervals.

Developer actions

3. wallet handling

Store the wallet_INSTANCENAME.zip into some local folder.

In general there is no need to unzip the wallet file. A notable exception would be if you want to use edition based redefinition. I wrote about this before:

4. setup SQL developer

Two connections are recommended.

One for the developer account itself. This one is needed to easily change the password. Additionally most developers like to have an environment where they can try out stuff. Further privileges might be needed for that (create view, etc.).

And one connection for the application schema using a proxy connect.

Connecting to the cloud from a local (and recent) SQL Developer installation is simple. Choose “Cloud Wallet” as the connection type, point the developer to the wallet zip file, choose the tns alias (service) and you’re done. On the free tier you should choose the “xxx_low” alias.

To create a proxy connect the name of the target schema (in my example DEMODB) needs to be configured under the tab “Proxy User”.

I also suggest adding this schema to the name of the connection. I use [schemaname] for that, because that is the syntax for proxy connects in SQL*Plus.

SQL> connect "SVEN.WELLER"[DEMODB]/mysecretPwd1111

5. change the password

There are multiple ways to reset a password using SQL Developer. The good thing is, they also work for cloud connects (a recent JDBC driver is needed for way a)).

a) Right-click on the connection and choose “Reset Password” . This also works for proxy connects and does reset the password for the individual user.

b) Type password in a connected worksheet

This changes the password for the schema. For a normal connection without a proxy this changes the password of the connected user.

In case of a proxy connect it tries to change the password for the PROXY client.

Since the developer will not have this password, he/she will not be able to change it.

c) And it is possible to change the password using the ALTER USER command. Everybody can do this for their own password even without enhanced privileges.

If the username has special chars in it (like in my case the dot), then do not forget to use double quotes around the name. Regardless of the special chars, the name will be in UPPERCASE. And of course the password rules apply.

In addition, in the cloud environment the original password needs to be supplied in that alter user command too. Otherwise the command will result in the following error message:

ORA-28221: REPLACE not specified
*Cause: User is changing password but password verification function is turned on and the original password is not specified and the user does not have the alter user system privilege.
*Action: Supply the original password.

The solution is easy, just add the old password to the alter user command. The complete syntax is

alter user <developerschema> 
     identified by <new password> 
     replace <old password>;

Conclusion

In the cloud each APEX developer is a database user. Treat it like one.

Jeff Smith shows a brand new option to manage DB users using SQL Developer web: https://www.thatjeffsmith.com/archive/2021/03/creating-users-granting-storage-quota-using-database-actions/

How to connect SQLDeveloper to an edition (EBR) on Autonomous Database (ATP)

On the Oracle Autonomous Database there are some restrictions on what is possible and what is not. In general Edition Based Redefinition (EBR) can be used on Oracle Cloud Infrastructure. EBR is now part of Oracle's Maximum Availability Architecture (MAA). It is possible to enable schemas to use editions, to create editionable objects, editioning views and cross edition triggers.

One thing that doesn’t work on Autonomous Transaction Processing (ATP), and I assume on the other Autonomous DBs as well, is creating a custom service using DBMS_SERVICE. This is not a feature directly needed for EBR. But such a service can be connected to a specific edition. When doing so, this service acts as a separate layer for all kinds of connects from the application or even from development tools like SQL Developer. This is super helpful. I wrote about this a couple of years ago: https://svenweller.wordpress.com/2015/06/16/working-with-editions-part-2/

For most tools like SQL*Plus or JDBC there are alternatives to specify an edition at connect time. Unfortunately not for SQL Developer. At least not until I found out how to make it work there.

Installation steps

Here are the steps to work around that problem. This assumes a recent version of SQL Developer (I think at least 19.x is needed). I tested it with 20.4. More specifically, the JDBC driver 18.3 is needed.

  1. Download the wallet zip file with the credentials from your cloud service control
  2. Unzip the wallet. It includes several files, among others the tnsnames.ora that defines the cloud connection strings.
  3. Create a connection in SQL Developer, but instead of a cloud connect to the zip file, use a custom JDBC connection
  4. The connect string needs the name of the alias as defined in the tnsnames.ora of the zip file and two custom parameters TNS_ADMIN and oracle.jdbc.editionName.
    Here is a template for the full string:
    jdbc:oracle:thin:@<tnsalias>?TNS_ADMIN=<path to the unzipped wallet folder>&oracle.jdbc.editionName=<Edition Name>

    If the path is a windows path, then the “\” needs to be doubled (escaped). For example TNS_ADMIN=”C:\\Users\\MyAccount\\Documents\\Tools\\sqldev connections\\Wallet_DEMOEBR”
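    Putting it all together, a filled-in example (tns alias, path and edition name are made up):

    jdbc:oracle:thin:@demoebr_low?TNS_ADMIN=C:\\Users\\MyAccount\\Documents\\Wallet_DEMOEBR&oracle.jdbc.editionName=DEV_RELEASE_2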

Additional comments

The TNS_ADMIN parameter is a feature of the 18.3 JDBC thin driver (see What’s New in 19c and 18c JDBC and UCP). Using this parameter the sqlnet.ora file that otherwise defines the location of the wallet is not needed anymore. You also don’t need a TNS_ADMIN environment variable.

The new JDBC driver now has an ojdbc.properties file. Adding oracle.jdbc.editionName to that file does not work; the JDBC properties are needed before that file is read.

Using the same syntax (&parameter=value) you can add any jdbc property to the connection string.

We can also add custom tns aliases to the tnsnames.ora file in the wallet location. This could make sense, for example if we want to reduce the number of retries in case a connect is not possible.

Several alternative approaches did not work. For example using the ORA_EDITION environment variable. At least under Windows this was not considered as a local environment parameter in combination with a cloud connect. It would have been a pretty bad solution anyway, because all connections are then influenced by it.

One major difference compared to a service name: when using JDBC, if the target edition does not exist, we get an error message “edition does not exist”, whereas with a service name the connection silently falls back to the default edition. This can be a good or a bad thing, depending on your specific needs. Personally I prefer to get the error message.

Small demo about sequences and identity columns

A recent twitter discussion led me to (re)test the syntax a little more around auto-incrementing ID columns that work without triggers. I was especially curious about the LIMIT VALUE option, which I didn’t know before.

identity options

This option is documented (as opposed to RESTART, which still seems not to have made it into the docs)

Demo 1 – Attach a sequence to a table via the DEFAULT option

The comments explain what I’m trying to do/show.

-- Create some sequences to be used later
create sequence mySeq1;
Sequence MYSEQ1 created.

create sequence mySeq2;
Sequence MYSEQ2 created.

-- Table with PK not tied to a sequence
create table t1 (id  number not null primary key
               , txt varchar2(30)); 
Table T1 created.

-- fill PK using a Sequence
insert into t1 values (mySeq1.nextval, 'Test1');
1 row inserted.

-- set the sequence to a new start value 55
alter sequence mySeq1 restart start with 55;
Sequence MYSEQ1 altered.

-- insert more data
insert into t1 values (mySeq1.nextval, 'Test2');
1 row inserted.

-- check data
select * from t1;
ID TXT
1 Test1
55 Test2

Unfortunately we can not easily change our PK into an identity column.

alter table t1 modify id generated as identity;

ORA-30673: column to be modified is not an identity column

alter table t1 modify id generated by default on null as identity;

ORA-30673: column to be modified is not an identity column

Other syntax variants including using LIMIT VALUE also do not work.

However it is easily possible to use a default value for the column and populate it by an existing (user managed) sequence.

-- Use a default value column and a different sequence
alter table t1 modify id default on null mySeq2.nextval;
Table T1 altered.

-- Trying to insert a values give a UK error, 
-- because we used a fresh sequence which gave us value 1
-- which exists already.
insert into t1 (txt) values ('Test3');
ORA-00001: unique constraint (MYUSER.SYS_C00145531) violated

-- set the sequence to a different value using RESTART
alter sequence mySeq2 restart start with 70;
Sequence MYSEQ2 altered.

insert into t1 (txt) values ('Test3');
1 row inserted.

insert into t1 values (null, 'Test4');
1 row inserted.

insert into t1 values (default, 'Test5');
1 row inserted.

-- Check data
select * from t1;
ID TXT
1 Test1
55 Test2
70 Test3
71 Test4
72 Test5

So Demo 1 shows some commands around normal ID columns that are populated via a default value setting.

Demo 2 – Use identity column from the start

create table t2 (id  number generated always as identity primary key
               , txt varchar2(30)); 
Table T2 created.

-- Try to insert some rows
insert into t2 values (null, 'Test1');
ORA-32795: cannot insert into a generated always identity column

insert into t2 values (default, 'Test2');
1 row inserted.

insert into t2 (txt) values ('Test3');
1 row inserted.

-- Try to insert a record via plsql
declare 
  r t2%rowtype;
begin 
  r.txt := 'Test4';
  insert into t2 values r;
end;
/

ORA-32795: cannot insert into a generated always identity column

-- hide the ID column then try row insert by record again
alter table t2 modify id invisible;
Table T2 altered.

declare 
  r t2%rowtype;
begin 
  r.txt := 'Test5';
  insert into t2 values r;
end;
/
PL/SQL procedure successfully completed.

-- Check data
select * from t2;
TXT
Test2
Test3
Test5

The ID is missing? Well it is there.

select id, txt from t2;
ID TXT
1 Test2
2 Test3
3 Test5

Hiding the ID column is one possible way to make certain types of inserts work again. Not my preferred way though.

-- make column visible again
alter table t2 modify id visible;
Table T2 altered.

-- logical column order is now changed. This can impact DML that depends on column order!
select * from t2;
TXT ID
Test2 1
Test3 2
Test5 3
-- restore original column order by toggling other columns invisible
alter table t2 modify txt invisible;
Table T2 altered.

alter table t2 modify txt visible;
Table T2 altered.

-- modify to allow other data values (instead of generated ALWAYS)
alter table t2 modify id generated by default on null as identity ;
Table T2 altered.

-- Try inserts again that didn't work previously
insert into t2 values (null, 'Test1');
1 row inserted.

declare 
  r t2%rowtype;
begin 
  r.txt := 'Test4';
  insert into t2 values r;
end;
/
PL/SQL procedure successfully completed.

-- Check data
select * from t2;
ID TXT
1 Test2
2 Test3
3 Test5
4 Test1
5 Test4
-- add custom IDs
insert into t2 values (-1, 'Test6');
1 row inserted.

insert into t2 values (100, 'Test7');
1 row inserted.

-- Check data
select * from t2;
ID TXT
1 Test2
2 Test3
3 Test5
4 Test1
5 Test4
-1 Test6
100 Test7

So now we can insert other ID values into that column, even though we have an identity column. Eventually this will lead to a UK error when the value created by the Oracle managed sequence reaches 100.

This can be solved with the LIMIT VALUE clause

-- Check current high water mark (last value) of sequence identity column
select identity_column, data_default from user_tab_columns where table_name = 'T2' and column_name = 'ID';

IDENTITY_COLUMN  DATA_DEFAULT
YES              "MYUSER"."ISEQ$$_258212".nextval

-- using the sequence name from the previous select
select last_number, cache_size 
from user_sequences where sequence_name = 'ISEQ$$_258212';

LAST_NUMBER  CACHE_SIZE
         24          20

-- reset Identity column to the highest value that is currently in the table 
alter table t2 modify ID  generated by default on null as identity start with limit value;
Table T2 altered. 

-- Check high water mark again 
select last_number, cache_size 
from user_sequences where sequence_name = 'ISEQ$$_258212';

LAST_NUMBER  CACHE_SIZE
        101          20

-- insert 
insert into t2 values (default, 'Test8');
1 row inserted.

-- Check data
select * from t2;
ID TXT
1 Test2
2 Test3
3 Test5
4 Test1
5 Test4
-1 Test6
100 Test7
101 Test8

Success!

So Demo 2 showed some commands for working with an identity column. Including the possibility to switch it from generated always as identity to generated by default on null. Which opens up some ways to manipulate the data inside this managed ID column.

Remember: we can not directly influence (alter) the Oracle managed sequence that drives the identity column. But we can change its properties via ALTER TABLE MODIFY.
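For example, to change the cache size of that Oracle managed sequence (a sketch, reusing table t2 from above):

alter table t2 modify id generated by default on null as identity (cache 100);
Table T2 altered.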

cleanup code

-- cleanup
drop sequence mySeq1;
drop sequence mySeq2;
drop table t1 purge;
drop table t2 purge;

How to avoid misleading plsql warning PLW-07206: analysis suggests that the assignment to ‘V_DUMMY’ may be unnecessary

PLW-messages are warnings issued by the PL/SQL compiler.

The PLW-07206 warning tells us that we have some code (an assignment statement) that is not needed, usually because the target of the assignment is not used anywhere later.

PLW-07206: analysis suggests that the assignment to <something> may be unnecessary

Problem demonstration

Example 1)

Here is a very reduced demonstration of the issue.

First we enable the PL/SQL warnings. I assume this setting is still in place for all following examples.

-- set plsql warning on
alter session set PLSQL_WARNINGS = 'ENABLE:ALL';

In SQL Developer we can set the warning level in the preferences, but this setting is not applied to code running in a SQL worksheet as a script.

Then a function is created.

create or replace function dummy_existsCheck return boolean 
authid definer
is
      v_dummy number(1);
begin
      select 1
      into v_dummy
      from all_objects
      where 1=2;
      return true;  
      -- Do something here if a record was found
exception  
     when no_data_found then
      -- Do something else here if NO record was found
      return false;
end dummy_existsCheck;
/  

Compiling the function shows the warning.

Function DUMMY_EXISTSCHECK compiled
LINE/COL ERROR

6/7 PLW-07206: analysis suggests that the assignment to 'V_DUMMY' may be unnecessary
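Side note: compiler warnings can also be queried from the user_errors dictionary view, independent of any IDE settings. For example:

select line, position, text
from user_errors
where name = 'DUMMY_EXISTSCHECK'
and attribute = 'WARNING';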
Example 2)

Here is an example where the warning is correctly added.

create or replace function dummy_existsCheck return boolean 
authid definer
is
   v_dummy varchar2(1);
begin
   v_dummy := 'B';
     
   -- Do something here without using v_dummy
   return null; 
end dummy_existsCheck;
/ 
Function DUMMY_EXISTSCHECK compiled
LINE/COL ERROR

6/5 PLW-07206: analysis suggests that the assignment to 'V_DUMMY' may be unnecessary

The difference between the two examples is that a SELECT statement in PL/SQL needs an INTO clause, and because of that the dummy variable is needed. So the warning in the first example is wrong.

The compiler is already aware of this situation. For some strange reason the warning goes away when we change the datatype of the variable from number to varchar2. Which is also the suggested workaround.

Workaround for example 1)
create or replace function dummy_existsCheck return boolean 
authid definer
is
      v_dummy varchar(1);
begin
      select 'x'
      into v_dummy
      from all_objects
      where 1=2;
      return true;  
      -- Do something here if a record was found
exception  
     when no_data_found then
      -- Do something else here if NO record was found
      return false;
end dummy_existsCheck;
/  

Function DUMMY_EXISTSCHECK compiled

No warning issued.

Obviously this is a bug. Although a very minor one.

Example 3)

Here is another slightly more complex example. The same message appears when we assign the result of a function. Because a function always has a return value, we need to assign it to some variable, even if the variable is not used later in the code.

create or replace function dummy_getID(p_table in varchar2) return number
authid definer
is
      v_result number(1);
begin
      select 1
      into v_result
      from user_tables
      where table_name = p_table;
      
      return v_result;
end dummy_getID;
/  

create or replace function dummy_existsCheck return boolean
authid definer
is
  v_tab varchar2(30) := 'DUMMY';
  v_dummy number(1);
begin
  v_dummy := dummy_getID(v_tab);
  return true;
exception
  when no_data_found then
    dbms_output.put_line('Table '||v_tab||' does not exist.');
    return false;
end dummy_existsCheck;
/ 

Function DUMMY_GETID compiled
Function DUMMY_EXISTSCHECK compiled
LINE/COL ERROR

7/3 PLW-07206: analysis suggests that the assignment to 'V_DUMMY' may be unnecessary
Workaround for example 3)

Again, changing the variable to a varchar2 data type, avoids the warning.

create or replace function dummy_existsCheck return boolean
authid definer
is
  v_tab varchar2(30) := 'DUMMY';
  v_dummy varchar2(1);
begin
  v_dummy := dummy_getID(v_tab);
  return true;
exception
  when no_data_found then
    dbms_output.put_line('Table '||v_tab||' does not exist.');
    return false;
end dummy_existsCheck;
/ 
Function DUMMY_EXISTSCHECK compiled

Datatype check

Surprisingly, other numeric data types also avoid the warning. It seems to happen mostly for number(x).

The following data types did not produce the warning:

  • varchar2(1)
  • varchar(1)
  • char(1)
  • pls_integer
  • binary_integer
  • number
  • float

The following data types did produce the warning:

  • number(1)
  • number(38)
  • decimal
  • integer
  • integer(1)
  • simple_integer

Conclusion

Compiler warnings are a good thing. Wrong or misleading warnings are not.

PLW-07206 can (sometimes) be avoided by changing the data type of the assignment target from number(x) to some other data type.

Cleanup demo code
drop function dummy_existsCheck;
drop function dummy_getID;

Warning! Hidden traps – comments in SQL*Plus

I am a big fan of comments (see also CDD Manifesto a bit further down). Comments are simple ways to add information to objects/code without changing code.

Comments that work in SQL or PLSQL might behave differently in SQL*Plus. This post is to create awareness about such traps.

The basics

comments in SQL and PLSQL

Comments in SQL and PLSQL come in two flavors. The one line comment and the multi line comment.

-- this is a one line comment. 

The double hyphen “--” starts the one line comment. It ends at the end of the line.

/* this
is a
mutli-line
comment */

A slash “/” followed by an asterisk “*” starts a multi line comment. And it is ended in the opposite way.

Comments do not stack.

-- this /* will
not work */

This also will not work.

/* line 1
   /* line 2
      /* line 3
      line 4 */
   line 5 */
line 6 */

The multi-line comment ends after the first time “*/” is encountered. So lines 5 and 6 are not commented; instead line 5 will raise a syntax error.

In SQL a very special form of a comment is a hint. A hint tries to give the SQL optimizer additional instructions. A + indicates that this comment is a hint.

select /*+ warp_speed(5) */ * from big_table;

A hint could be compared to an annotation in other languages. The comment syntax is used to add some special meaning to the surrounding code. This post is not about such annotations.

comments in SQL*Plus

In general we run SQL statements, PLSQL blocks and SQL*Plus commands in a SQL*Plus script. Typical examples are SELECT statements, CREATE OR REPLACE PACKAGE commands and SET commands.

The normal SQL and PLSQL comments work in SQL*Plus as well. Additionally we have a REM(ARK) command to add comments.

SQL> REM this is a SQL*plus comment

We can also use PROMPT to have a kind of echo inside a SQL*Plus script. Although it is not a real comment, it is used in a similar way.

SQL> PROMPT *** heading ***
*** heading ***

Surprising differences

SQL*Plus interprets comments slightly differently than SQL or PL/SQL. Here are some examples. Try to guess what happens before reading the result. The examples do work in SQL Developer, but behave differently in SQL*Plus.

Example 1

generate some DDL commands…

select 'drop '||object_type||' '||object_name||';' 
from user_objects 
--where object_type = 'VIEW';
where object_type = 'TRIGGER';

Result 1 (real object names changed)
... many similar rows ...
'DROP'||OBJECT_TYPE||' '||OBJECT_NAME||';'
drop SYNONYM PLSQL_PROFILER_DATA;
drop SYNONYM PLSQL_PROFILER_RUNNUMBER;
drop SYNONYM PLSQL_PROFILER_RUNS;
drop SYNONYM PLSQL_PROFILER_UNITS;
drop INDEX ABC_UK01;
drop PACKAGE MYPK;
drop PACKAGE BODY MYPK;
drop TABLE MYTAB;
drop DATABASE LINK DISTANT.WORLD.LOCAL;
drop DATABASE LINK REMOTE.WORK.LOCAL;
2243 rows selected.
SQL> SP2-0734: unknown command beginning "where obje…" - rest of line ignored.
SQL>

2243 rows selected? Yes there were some triggers, but not that many. When SQL*Plus finds a semicolon at the end of a line, it interprets this as the end of the command, even if that semicolon is commented out. The commented-out where line ends with a semicolon, so the statement ran without the trigger filter and produced a drop command for each object in the schema; the second where line was then treated as a new (unknown) command. But the intention was to only drop all triggers.

Example 2

Add plsql doc information…

create or replace function convertNum(p1 in varchar2) return number
is
/***************************
@usage: converts a string into a number. 
        If conversion is not possible, 0 is returned.
@author: Sven
@param : p1 : input string
@return : converted number
****************************/
begin
  return to_number(p1 default '0' on conversion error);
end convertNum;
/

Result 2
SQL>
create or replace function convertNum(p1 in varchar2) return number
is
/***************************
@usage: converts a string into a number.
SP2-0310: unable to open file "usage:.sql"
        If conversion is not possible, 0 is returned.
@author: Sven
SP2-0310: unable to open file "author:.sql"
@param : p1 : input string
SP2-0310: unable to open file "param.sql"
@return : converted number
SP2-0310: unable to open file "return.sql"
****************************/
begin
  return to_number(p1 default '0' on conversion error);
end convertNum;
  9  /

Function created.

SQL>

An @ at the start of a line is always interpreted as a call to a script even if it is inside a multi line comment. This lowers the usefulness of the “generate DB doc” functionality in SQL Developer a lot. Putting additional blanks before the @ will not change this behaviour.

Example 3

comment out optional params…

SQL> set serveroutput on -- size unlimited
Result 3
SP2-0158: unknown SET option "--"

This means the SET option is ignored. There is an error message, but it is a SQL*Plus (SP2) error message. This would not stop a script that has WHENEVER SQLERROR EXIT set at the beginning.

Example 4

comment a statement…

select sysdate from dual; -- print current day

Result 4
SQL> select sysdate from dual; -- print current day
2
Do you notice the 2 at the beginning? The statement is not terminated. The buffer waits for the next line and the next command probably finishes the buffer and runs into a syntax error.

Warning

Personally I think that those examples are bugs. But it is documented behaviour, so Oracle probably disagrees. There is an extra section in the SQL*Plus documentation dedicated to comments in scripts, where these and more problems are described.

Remember

If you intend to run some plsql code or a SQL statement from a SQL plus script, then

  • do not have semicolons in a comment, especially not at the end of a line
  • do not use @ in a comment, especially not at the beginning of a line
  • do not use -- inside SELECTs (hints seem to work)
  • avoid using -- or /* */ to write SQL*Plus comments; instead use REMARK or PROMPT

More side effects

Comments near the beginning of a command can confuse SQL*Plus.

SQL> create /* testproc */ procedure test
2 is
3 begin null;
Warning: Procedure created with compilation errors.
SQL> end;
SP2-0042: unknown command "end" - rest of line ignored.
SQL> /

Blank lines in comments can break the code. This can be suppressed by using SET SQLBLANKLINES ON.

SQL> SET SQLBLANKLINES ON;
SQL> select sysdate from dual
2 /* this is a multi
3 line
4
5 comment */
6 ;
SYSDATE
25-AUG-20

SQL> SET SQLBLANKLINES OFF;
SQL> select sysdate from dual
2 /* this is a multi
3 line
4
SQL> comment */
2 ;
comment */
*
ERROR at line 1:
ORA-00969: missing ON keyword
SQL>

SQL*Plus tries to substitute an ampersand “&” in a comment with the value of a variable. This can be suppressed using SET DEFINE OFF. Not a big deal, unless we do not want to suppress it.

SQL> select sysdate from dual
2 -- current day at marks&spencers
3 ;
Enter value for spencers:
old 2: -- current day at marks&spencers
new 2: -- current day at marks
SYSDATE
25-AUG-20

A comment inside a PROMPT is not a comment, but will be printed to the screen.

SQL> prompt /* comment or not comment that is the question */
/* comment or not comment that is the question */

If you want a “;” to be printed to a spool file using PROMPT, use two “;;”.

SQL> prompt test;;
test;

Additional thoughts

Good developers can get a solid overview of unknown code just by reading the comments.

Set your syntax highlighting in such a way that it is easy to read comments. Unfortunately Oracle SQL Developer switched some time ago from green comments to grey comments. My recommendation is to change this setting.

Comment Driven Development (CDD) Manifesto

Comments are good!

Every code needs comments!

Comment first, code later!

Comments state code intention.

Comments describe why a technical decision was made

or not made.

Comments can give examples.

The world needs more comments!

APEX 19.2 quick tip: error messages for PL/SQL Dynamic Content regions

Sometimes I use a region of type PL/SQL Dynamic Content. It is rare and there are often better alternatives – like using a select over a pipelined table function – but it still happens.

If the plsql code errors out, then the error is not shown on the same page, but instead is presented as a page rendering error.

Here is a simplified example:

The PL/SQL Dynamic Content region looks like this

   sys.htp.p('Hello APEX world!');
   raise_application_error(-20201, 'Virus detected, world stopped!');

Running the page results in an ugly rendering error on a separate error page.

Remember the “Technical Info” part is not shown to the end user. So the error message is not shown.

Here is what I did to improve the error presentation. There might be better ways to do it, but it was a quick win and worked for me. So it might help you as well.

1. Capture the error message and write it to the page.
Here I used a hidden div. The class “sqlerror” is used to mark the div.

begin
  sys.htp.p('Hello APEX world!');
  raise_application_error(-20201, 'Virus detected, world stopped!');
exception
  when others then
    -- currently this does not show the error at the page. This would work in a page process.
    apex_error.add_error (
      p_message          => sqlerrm,
      p_display_location => apex_error.c_inline_in_notification 
    );     
    
    --write error into hidden page item
    sys.htp.p('<div class="sqlerror" style="display:none">'||sqlerrm||'</div>');
end;  

2. After page load, use a dynamic action to find this error and display it.

We can not use the “After Refresh” event since PLSQL Dynamic Content regions are not refreshable. So it must be “Page Load”.

// test if there was an error during rendering of any region on the page
if ($('.sqlerror').length) {

    //First clear the errors
    apex.message.clearErrors();

    // Now show new error
    apex.message.showErrors([
        {
            type:       "error",
            location:   [ "page" ],
            message:    $('.sqlerror').text(),
            unsafe:     false
        }]);    
}

Result:

A major difference is that everything that was already rendered at the time of the error is now shown in the region. This might be wanted or unwanted.

Also we essentially hide the error from the apex engine. So when inspecting /utilities/Debug Messages/ we don’t see that error anymore.

Before the change we would get a debug entry like this

Since we suppressed all errors, we need to call apex_debug.error inside the plsql exception handler.

    
    -- APEX debug  instrumentation call
    apex_debug.error ('Rendering of region failed! %s', sqlerrm);

And then we get such an entry in the APEX debug stack.

Tested in APEX 19.2 and 20.1. It should also work in older versions, although the javascript api apex.message.showErrors might need to be replaced with some custom logic.

conclusion

The region type “PL/SQL Dynamic Content” is very different to normal APEX regions. Do not forget to implement and test some basic error handling if you must use such a region type.

Side note: In theory you should also watch out for an XSS attack. If somebody manages to add malicious code into the error message, it might be shown on the page. I didn’t test for that however.
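To mitigate that, the error text could be escaped before it is written into the hidden div. A minimal sketch using apex_escape:

    --write escaped error into hidden page item
    sys.htp.p('<div class="sqlerror" style="display:none">'
              ||apex_escape.html(sqlerrm)
              ||'</div>');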


10 “secret” Oracle DB features you might not know

“Secret” here means either not documented or hidden so deep in the documentation that it is almost impossible to find.

1 – restart sequence

We can use a single ddl statement to reset a sequence. Works from 12.2 onwards.

alter sequence mySeq restart start with 115;

It is also possible to use this to reset an identity column!

alter table myTable
modify ID generated always as identity 
       restart start with 115;

This is not (yet) documented.
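A quick sketch of the effect (the starting value 200 is just an assumption for the demo):

create sequence mySeq start with 200;
select mySeq.nextval from dual;
-- 200

alter sequence mySeq restart start with 115;
select mySeq.nextval from dual;
-- 115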

2 – hidden format options

We can display a negative number in round brackets using the “pt” format option, which is not in the list of format options in the SQL Reference.

select to_char(-10,'fm9990pt') from dual;
(10)

documented is the “pr” option which uses angle brackets

select to_char(-10,'fm9990pr') from dual;
<10>

See also this ODC thread.

So the option is documented after all, although not in the SQL Reference but in some OBI Publisher documentation.

3 – sample (x,y)

The SAMPLE clause is documented. It is part of the query table expression clause. Not documented is the second secret parameter y.

Sample clause syntax diagram (Oracle docs): https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/img_text/sample_clause.html

The sample clause allows us to get a portion (a sample) of the table’s data.

select * 
from employees SAMPLE(20); 

This returns roughly 20 percent of the employees.

There is a ROW mode and a BLOCK mode. In BLOCK mode 20% of the blocks are returned, instead of 20% of the rows.
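Block sampling is requested with the BLOCK keyword. A small sketch (the documented SEED clause makes the sample repeatable; the seed value 42 is arbitrary):

-- roughly 20 percent of the blocks, repeatable because of the fixed seed
select count(*)
from employees SAMPLE BLOCK (20) SEED (42);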

The sample clause has a second undocumented parameter.

select *
from employees SAMPLE(20, 0.5);

This also returns 20%. So far no obvious difference…

Jonathan Lewis mentioned that this second parameter is the number of consecutive groups/rows during row or block sampling. It was introduced during changes for the SAMPLE clause in 10g.

My non-representative tests showed that setting this parameter to 1 during block sampling makes sure we get results. Otherwise we frequently get blocks from the table’s tablespace that are not filled with data. This seems to depend on the data distribution.

So… if a query like the following returns 0…

-- how many rows are in 1% of the blocks?
select count(*)
from employees SAMPLE BLOCK (1);
0

… then you can set the second parameter to 1 to be sure to get some data.

-- how many rows are in 1% of the blocks?
select count(*)
from employees SAMPLE BLOCK (1, 1);
110

Use at your own risk!

See also this ODC thread.

4 – Exadata test parameter

Warning! This is only for testing purposes! It can potentially destroy your database.

alter system set "_exadata_feature_on"=true scope=spfile;
shutdown immediate;
startup;

Using this parameter we can test some of the newest options that are available only on Exadata or in the cloud, like automatic indexing.

5 – sequence cache is not lost during graceful database shutdown

Shutdown normal|immediate|transactional will not lose the sequence cache. Instead the next value will be used and set as last_number. Only a shutdown abort will lose the cache.

This was documented somewhere, but I am not able to find it anymore. It might be that only older versions of the Oracle docs still have this information.

To confirm it, I tested it. It works in SE and EE. It will probably not work in the same way on a RAC – because there is a separate cache for each instance.

There is an indirect mention of the behaviour in the 11g admin guide.

The database might skip sequence numbers if you choose to cache a set of sequence numbers. For example, when an instance abnormally shuts down (for example, when an instance failure occurs or a SHUTDOWN ABORT statement is issued), sequence numbers that have been cached but not used are lost. Also, sequence numbers that have been used but not saved are lost as well. The database might also skip cached sequence numbers after an export and import. See Oracle Database Utilities for details.

So a shutdown abort will lose the sequence cache. All other “organized” shutdowns will keep the cache intact.

The problem is that most DBAs seem way too keen on doing a shutdown abort. So losing the sequence cache looks like the default behaviour, although it is not.
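Here is a small sketch to observe the behaviour yourself (the sequence name and the concrete numbers are assumptions based on the description above):

create sequence seq_test cache 20;
select seq_test.nextval from dual;
-- 1

-- last_number marks the end of the cached range
select last_number from user_sequences
where sequence_name = 'SEQ_TEST';
-- 21

-- after shutdown immediate + startup the next value continues seamlessly
select seq_test.nextval from dual;
-- 2

-- after a shutdown abort instead, the remaining cache would be lost
-- and the next value would jump to 21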

6 – reorder columns

We can change the logical column order by setting them invisible and making them visible again. This can be used to reorder columns in the column list. Important to understand is that the table is not physically changed, only the data dictionary entry is manipulated.

If we want to reorder columns physically, maybe to improve data compression, then this is not the way to do it.

example

create table t (col1 number, col2 number);
Table T created.

desc t;
Name Null? Type   
---- ----- ------ 
COL1       NUMBER 
COL2       NUMBER 

/* make the first column invisible */
alter table t modify col1 invisible;
Table T altered.

desc t;
Name Null? Type   
---- ----- ------ 
COL2       NUMBER 

/* make the first column visible again */
alter table t modify col1 visible;
Table T altered.

desc t;
Name Null? Type   
---- ----- ------ 
COL2       NUMBER 
COL1       NUMBER 

/* the column order is now reversed */

To physically reorder a table we need to export/import it, use dbms_redefinition or move the table.
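As an illustration, here is a minimal dbms_redefinition sketch (my own, heavily simplified: ROWID based because the demo table T has no primary key, and without copying any dependents):

-- interim table with the desired physical column order
create table t_interim (col2 number, col1 number);

begin
  dbms_redefinition.start_redef_table(
    uname        => user,
    orig_table   => 'T',
    int_table    => 'T_INTERIM',
    col_mapping  => 'col2 col2, col1 col1',
    options_flag => dbms_redefinition.cons_use_rowid);
  dbms_redefinition.finish_redef_table(user, 'T', 'T_INTERIM');
end;
/

drop table t_interim;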

Side note: The describe command in SQL*Plus (and SQL Developer) will not show invisible columns. You can change that by setting the documented SQL*Plus parameter COLINVISIBLE.

SET COLINVISIBLE ON

desc t;
Name             Null? Type   
---------------- ----- ------ 
COL2                   NUMBER 
COL1 (INVISIBLE)       NUMBER 

7 – object type “NON-EXISTENT”

Namespaces are used to avoid naming clashes between objects of different types. Among others, namespace 1 includes the object types table, view and function. So we cannot have a table with the same name as a view or a function.
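A quick demo of such a naming clash (my own example, object name demo_ns is made up):

create table demo_ns (id number);
-- a function cannot reuse the table name, both live in namespace 1
create function demo_ns return number is begin return 1; end;
/
-- ORA-00955: name is already used by an existing object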

There is a special object type in namespace 1 that is called “NON-EXISTENT”. This object type is used to mark dropped objects under edition based redefinition (EBR).

demo

Assuming 3 editions: ORA$BASE => CHILD1 => CHILD2

-- switch to base edition (ORA$BASE)
ALTER SESSION SET EDITION="ORA$BASE";

Adding editionable code in the parent edition ORA$BASE.

-- create a new function in base (EDITIONABLE is the default)
create function test_editions return varchar2
as 
begin
  return 'Base';
end;
/

Test the function in the child edition. It is usable, although we didn’t explicitly create it there.

-- switch to child edition (CHILD1)
ALTER SESSION SET EDITION="CHILD1";

-- test function
select test_editions from dual;
TEST_EDITIONS
Base
function is functioning…

Now remove the function from the child edition, but not from the parent.

-- drop code
drop function test_editions;

-- test function again
select test_editions from dual;

ORA-00904: “TEST_EDITIONS”: invalid identifier

This error message was expected. The function was removed. No real object with the name “TEST_EDITIONS” exists (in namespace 1) in the edition CHILD1.
Just to be on the safe side, double check in parent edition:

-- switch back to base and test function
ALTER SESSION SET EDITION="ORA$BASE";
select test_editions from dual;
TEST_EDITIONS
Base
function still functioning in BASE

This is also the result that was expected. The function still works in the parent edition.

What does the data dictionary say? A special view user_objects_ae shows the objects from all editions (_ae):

-- Check dictionary
select object_name, edition_name, object_type, namespace, editionable, status
from user_objects_ae
where object_name = 'TEST_EDITIONS';
OBJECT_NAME    EDITION_NAME  OBJECT_TYPE   NAMESPACE  EDITIONABLE  STATUS
TEST_EDITIONS  ORA$BASE      FUNCTION      1          Y            VALID
TEST_EDITIONS  CHILD1        NON-EXISTENT  1                       VALID
TEST_EDITIONS  CHILD2        FUNCTION      1          Y            INVALID

We see that in edition “CHILD1” the object type of the function now is “NON-EXISTENT”. This is important, otherwise the database would not know to return the ORA-00904 error message when it searches for this object.

If the row did not exist at all, then the object in edition CHILD1 would be inherited from the parent edition ORA$BASE. That was the situation before the function was actualized in CHILD1.

Even after we drop the function from all editions, the dictionary will still show the NON-EXISTENT entries.

A side note/warning:

The status of the function in the second child edition CHILD2 is invalid after running the sample code. This dictionary entry was created when the function was dropped in CHILD1.

This is because the function was never compiled in CHILD2. That means the code was never actualized there. And when we drop the code from all parent editions, any attempt to compile the code in CHILD2 will not give an error, but the status will remain INVALID.

This is why it is strongly recommended to recompile all editionable objects after some code changes, but especially after deletes/drops. Each recompile breaks the relationship to the parent and actualizes the code. So code changes in the parent will not propagate to the child anymore after it was recompiled at least once in the child.
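A pragmatic way to do this is to recompile the whole schema once per edition. A minimal sketch (my suggestion; compile_all => true also recompiles currently valid objects, which is what actualizes them):

-- run once in each edition, e.g. after switching with ALTER SESSION SET EDITION
begin
  dbms_utility.compile_schema(schema => user, compile_all => true);
end;
/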

8 – connect / as sysdba to PDB

Using a bequeath connection we can log in to an Oracle database without a username and a password. This only works directly on the server, using an OS account with enhanced privileges (a member of the OSDBA group, typically dba). To do so, the ORACLE_SID environment variable needs to be set. However, this connects only to the CDB.

Using the undocumented environment variable ORACLE_PDB_SID it is possible to do the same against a PDB.

Mike Dietrich has posted a nice blog post about it: https://mikedietrichde.com/2020/05/07/can-you-connect-as-sysdba-without-password-into-a-pdb-directly/

export ORACLE_SID=MYCDB
export ORACLE_PDB_SID=MYPDB

$ sqlplus / as sysdba
SQL> show con_name

CON_NAME
------------------------------
MYPDB

9 – print boolean values to serveroutput

There is an undocumented utility package sys.dbms_metadata_util that has a couple of interesting helper methods.

For example put_bool can be used to print boolean variables to serveroutput (using dbms_output).

-- print boolean values to dbms_output
set serveroutput on
begin
  sys.dbms_metadata_util.put_bool('This is ', TRUE);
  sys.dbms_metadata_util.put_bool('This is ', FALSE);
  sys.dbms_metadata_util.put_bool('This is ', NULL);
end;
/
META:13:17:57.489: This is TRUE
META:13:17:57.489: This is FALSE
META:13:17:57.489: This is FALSE

PL/SQL procedure successfully completed.

There are also several other interesting functions in this package. For example a way to convert dictionary column values of type LONG into a CLOB variable.

v_view_source := sys.dbms_metadata_util.long2clob(10000,'USER_SOURCE','TEXT',25) ;
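The snippet needs a surrounding PL/SQL block to run. A sketch (since the package is undocumented, my reading of the parameters (buffer length, dictionary view, LONG column, row position) is an assumption):

set serveroutput on
declare
  v_view_source clob;
begin
  -- read a LONG dictionary column into a CLOB variable
  v_view_source := sys.dbms_metadata_util.long2clob(10000,'USER_SOURCE','TEXT',25);
  sys.dbms_output.put_line(substr(v_view_source, 1, 200));
end;
/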

10 – undo_sql

There is a way to create the opposite of an SQL DML statement. For an insert we get a delete, for a delete we get an insert with the deleted column values and for an update we get an update with the previous values.

This is documented, but not many people know about it. It is part of the flashback logic and more specifically the flashback transaction query.

Using Oracle Flashback Transaction Query

Use Oracle Flashback Transaction Query to retrieve metadata and historical data for a given transaction or for all transactions in a given time interval. Oracle Flashback Transaction Query queries the static data dictionary view FLASHBACK_TRANSACTION_QUERY, whose columns are described in Oracle Database Reference.

The column UNDO_SQL shows the SQL code that is the logical opposite of the DML operation performed by the transaction. You can usually use this code to reverse the logical steps taken during the transaction. However, there are cases where the UNDO_SQL code is not the exact opposite of the original transaction. For example, a UNDO_SQL INSERT operation might not insert a row back in a table at the same ROWID from which it was deleted.

Some security researchers (Alexander Kornbrust at DOAG 2019 – page 74) believe this poses a security risk, especially because it would allow us to find out about column values that are redacted. There are several ways to circumvent data redaction, and Oracle itself states that this feature is not meant as a high-security implementation (see Security Guidelines). So I don’t consider it a high risk. Using undo_sql to circumvent data redaction is the most complex hack I have seen so far – there are way easier methods.


Still an interesting feature. Here is an example. It needs a lot of non-default settings to make it work.

example

Using the HR demo schema to test this.

First some steps to enable supplemental logging which is needed for undo_sql to work.

-- preparations as DBA in the CDB
select * from dba_pdbs;
alter session set container = CDB$ROOT;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
alter session set container = ORCL;
-- and as HR
ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Now we can do some changes.

-- as HR user
select * from hr.employees;

update hr.employees
set last_name = '--secret--'
where first_name = 'Randall';
2 rows updated.
delete from hr.employees
where first_name = 'Randall';
2 rows deleted.

Don’t commit or rollback yet. Simply change to another session/account.

-- as DBA
select undo_sql
from FLASHBACK_TRANSACTION_QUERY
where table_name='EMPLOYEES';

Result:

insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('191','Randall','--secret--','RPERKINS','650.505.4876',TO_DATE('19-DEC-99', 'DD-MON-RR'),'SH_CLERK','2500',NULL,'122','50');
insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('143','Randall','--secret--','RMATOS','650.121.2874',TO_DATE('15-MAR-98', 'DD-MON-RR'),'ST_CLERK','2600',NULL,'124','50');
update "HR"."EMPLOYEES" set "LAST_NAME" = 'Perkins' where ROWID = 'AAATiDAAMAAALKzABb';
update "HR"."EMPLOYEES" set "LAST_NAME" = 'Matos' where ROWID = 'AAATiDAAMAAALKzAAr';

-- as HR
rollback;

Alternatively run the 4 statements from undo_sql.

-- reset setup changes

-- as HR
ALTER TABLE HR.EMPLOYEES DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- as DBA
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

Conclusion

10 undocumented or little-known features. Some are useful, some are interesting, some are simply gaps in the documentation. I hope you found at least one that you didn’t know before and enjoyed reading about.

A quirk with object dependencies under EBR

TL;DR

USER_DEPENDENCIES does not show dependencies between objects in different editions. This means that even when no dependencies are shown, we can still get errors when trying to drop objects. This is especially relevant when trying to drop TYPEs.

The basics

Oracle tracks dependencies between objects, for example packages but also between types. It provides us a view USER_DEPENDENCIES (and the matching ALL_|DBA_|CDB_DEPENDENCIES views) where we can see how those objects are related (are calling each other).

Here are two small examples

Example 1) Table + View

create table myTab (col1 number);
create view myView as select * from myTab;
Table MYTAB created.
View MYVIEW created.

Example 2) Object Type + Collection Type

create type myOT as object (col1 number);
/
create type myCT as table of myOT;
/
Type MYOT compiled

Type MYCT compiled

Further code demos will all assume that those 4 objects are in place unless mentioned otherwise.

The data dictionary

select * from user_dependencies;
NAME    TYPE  REFERENCED_OWNER  REFERENCED_NAME  REFERENCED_TYPE  REFERENCED_LINK_NAME  SCHEMAID  DEPENDENCY_TYPE
MYOT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYCT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYVIEW  VIEW  SVEN              MYTAB            TABLE                                  122       HARD
MYCT    TYPE  SVEN              MYOT             TYPE                                   122       HARD

We can ignore the references to the sys.standard package. I think they come from references to datatypes that are defined there. Those dependencies do not affect the following quirks.

The data dictionary shows that the view MYVIEW depends on the table MYTAB (example 1) and the collection type MYCT depends on the object type MYOT (example 2).

The dependency type HARD tells us, that we are not allowed to drop the referenced objects without breaking|invalidating the dependent object.

drop table mytab;
Table MYTAB dropped.
select status from user_objects where object_name = 'MYVIEW';
INVALID

For types we get an error message when trying to drop it.

drop type myot;
ORA-02303: cannot drop or replace a type with type or table dependents

Side note: Another dependency type would be REF, which we encounter for example when using materialized views (MV). If a table is dropped that is used by a select in the materialized view, then we can still fetch data from the MV.

To successfully drop both types we first need to drop the collection type MYCT before we can drop the referenced object type MYOT.

drop type myct;
drop type myot;
Type MYCT dropped.

Type MYOT dropped.

Ok, this worked as expected: dropping the objects in the opposite order to the one in which we created them.

How does it work under edition based redefinition (EBR)?

Views and types are both editionable objects. That means we can have several “versions” of them in the same schema under different editions.

Assuming my typical 3 edition hierarchy

ORA$BASE ⇒ BETA ⇒ ALPHA 
(Parent ⇒ child1 ⇒ child2)  

I will only use ORA$BASE and ALPHA for this experiment. The 4 objects have all been created in ORA$BASE, which means they can also be accessed in the child editions. So let’s switch to ALPHA.

alter session set edition="ALPHA";
select sys_context('userenv','current_edition_name') from dual;
ALPHA

The data dictionary entries are the same as in the base edition.

select * from user_dependencies where name like 'MY%';
NAME    TYPE  REFERENCED_OWNER  REFERENCED_NAME  REFERENCED_TYPE  REFERENCED_LINK_NAME  SCHEMAID  DEPENDENCY_TYPE
MYOT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYCT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYVIEW  VIEW  SVEN              MYTAB            TABLE                                  122       HARD
MYCT    TYPE  SVEN              MYOT             TYPE                                   122       HARD

Now we compile one of the types, but NOT the other.

alter type myct compile;
Type MYCT altered.

This did actualize the type MYCT in the child edition. Actualization is a concept of EBR. When a parent object is (re)compiled in a child edition, the code afterwards exists twice: in the parent and in the child edition.

We go back to the parent edition and repeat our test now.

alter session set edition="ORA$BASE";
select sys_context('userenv','current_edition_name') from dual;
ORA$BASE

We try again to drop the types in the correct drop order.

drop type myct;
drop type myot;
Type MYCT dropped.

Error starting at line : 2 in command -
drop type myot
Error report -
ORA-02303: cannot drop or replace a type with type or table dependents

This time we cannot drop the object type. Unfortunately the data dictionary does not tell which object causes this dependency problem.

select * from user_dependencies where name like 'MY%';
NAME    TYPE  REFERENCED_OWNER  REFERENCED_NAME  REFERENCED_TYPE  REFERENCED_LINK_NAME  SCHEMAID  DEPENDENCY_TYPE
MYOT    TYPE  SYS               STANDARD         PACKAGE                                122       HARD
MYVIEW  VIEW  SVEN              MYTAB            TABLE                                  122       HARD

MYOT is not in the list of referenced names anymore. So we should be able to drop it.

The problem is that the view USER_DEPENDENCIES only shows objects that are in the current edition or that are not editionable (like sys.standard). But we have created a dependency between the object MYCT, which still exists in edition ALPHA, and the object MYOT from the parent edition ORA$BASE.

Which objects exist in which edition can be checked using the user_objects_AE view. The AE stands for all editions.

select edition_name, object_name, object_type, editionable
from user_objects_ae
where object_name like 'MY%';
EDITION_NAME  OBJECT_NAME  OBJECT_TYPE   EDITIONABLE
ORA$BASE      MYCT         NON-EXISTENT
ALPHA         MYCT         TYPE          Y
ORA$BASE      MYOT         TYPE          Y
              MYTAB        TABLE
ORA$BASE      MYVIEW       VIEW          Y

Because the type MYOT was never actualized in the edition ALPHA, we only have this object in the parent ORA$BASE. The dependency crosses from one edition to the other. In ALPHA we are still able to see this dependency in the dictionary.

So how can we drop the object in ORA$BASE?

There are three possible ways. Which way to choose depends on the reason why you dropped the object in the first place.

a) Use the force!

drop type myot force;

I do not recommend this, but there are situations where it is the easiest way. It also invalidates any usage of the dependent collection type in the child edition. It is very easy to overlook such an issue, therefore only use FORCE when you understand what the reason for the error is and why it is ok to use FORCE.

b) drop the collection type in the child edition first.

alter session set edition="ALPHA";
drop type myct;

alter session set edition="ORA$BASE";
drop type myot;

c) actualize the object type in the child edition
This is easiest when done before dropping the type.

alter session set edition="ALPHA";
alter type myot compile;

Compiling the type creates a copy of the code in the data dictionary. From that point on any drop will leave a proper reference in the data dictionary of each edition. Cross edition references will not happen when all objects are actualized always. That is also why I start creating new objects in ALPHA and then move them to BETA and later for final testing to ORA$BASE.

Further complications

This looks a bit complicated, but after all we knew what we were doing. So it shouldn’t pose too big of a problem. Right?

Let me introduce you to the 12.2 Oracle-maintained sys_plsql types.

Since 12.2, PL/SQL pipelined table functions do not need a dedicated SQL type anymore. If we create such a function, typically inside some package, we can create the needed structures (types) purely inside the package spec.

I also dropped all the previously created objects to avoid confusion.

Example

create or replace package myPkg
  authid definer
is 
  type ot_typ is record (col1 number, col2 varchar2(30));
  type ct_typ is table of ot_typ;
  
  function myFnc return ct_typ pipelined;
end;
/

create or replace package body myPkg 
is 
  
  function myFnc return ct_typ pipelined 
  is
    vRow ot_typ := ot_typ();
  begin
    for i in 1..5 loop
      vRow.col1 := i;
      vRow.col2 := chr(ascii('A')-1+i);
      pipe row(vRow);
    end loop;
  end myFnc;
  
end;
/

Now run the function. Since 12.2 there is no need to add the TABLE() operator anymore. However, because of name resolution issues, the function needs a trailing pair of parentheses.

-- table "operator" syntax
select * from table(myPkg.myFnc); 

-- syntax without table keyword
select * from myPkg.myFnc(); 
COL1 COL2
1    A
2    B
3    C
4    D
5    E

The function successfully returns five rows. No SQL type was actively created. This was needed in the past but is not needed anymore.

Also if we check the dictionary, we do not see any new types. But they secretly exist. If we check user_dependencies (or dba_objects) we can see them.

select * from user_types;
no data found
select * from user_dependencies;
NAME                        TYPE          REFERENCED_OWNER  REFERENCED_NAME         REFERENCED_TYPE  REFERENCED_LINK_NAME  SCHEMAID  DEPENDENCY_TYPE
MYPKG                       PACKAGE       SYS               STANDARD                PACKAGE                                122       HARD
MYPKG                       PACKAGE BODY  SYS               STANDARD                PACKAGE                                122       HARD
SYS_PLSQL_9C0C5336_24_1     TYPE          SYS               STANDARD                PACKAGE                                122       HARD
SYS_PLSQL_9C0C5336_DUMMY_1  TYPE          SYS               STANDARD                PACKAGE                                122       HARD
SYS_PLSQL_9C0C5336_9_1      TYPE          SYS               STANDARD                PACKAGE                                122       HARD
MYPKG                       PACKAGE BODY  SVEN              MYPKG                   PACKAGE                                122       HARD
SYS_PLSQL_9C0C5336_24_1     TYPE          SVEN              SYS_PLSQL_9C0C5336_9_1  TYPE                                   122       HARD

The view shows a set of types that all have almost identical names. Most importantly, type SYS_PLSQL_9C0C5336_24_1 depends on type SYS_PLSQL_9C0C5336_9_1.

This was the output in 18c (18.7). The result changed in a 19c database. More about that later.

These two types match the type definitions in the package spec.

type ot_typ is record (col1 number, col2 varchar2(30));
type ct_typ is table of ot_typ;

Depending on how the package is then changed in different editions, it can happen that a cross edition dependency is created for those sys_plsql types. This can then prevent further compilation of the package in one of the editions.

We encountered such a scenario during development in an 18.7 database. However the types involved were considerably more complex. At some point we were not able to compile a package body because of a missing spec. However the spec was there and could apparently be compiled (but secretly would not properly compile).

I was able to track down the problem to a cross edition type dependency for the generated sys_plsql types. Dropping those types (in the relevant editions) allowed us to recompile the package spec and the package bodies.

If you encounter such a case, dropping all the sys_plsql types in each edition and then recompiling the package solves the issue.
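A query like the following helps to locate the generated types and the editions they live in (a sketch of mine based on the user_objects_ae view shown earlier; the type and edition names in the comments are just the ones from this example):

-- list the generated sys_plsql types across all editions
select edition_name, object_name, object_type, status
from user_objects_ae
where object_name like 'SYS\_PLSQL\_%' escape '\'
order by object_name, edition_name;

-- then switch to each affected edition and drop them there, e.g.
-- alter session set edition="ALPHA";
-- drop type SYS_PLSQL_9C0C5336_24_1 force;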

Behaviour in 19c (19.3)

The general cross edition type dependency issue also exists in 19c.

In 19c the sys_plsql types are not shown in the dictionary views anymore – I am not sure whether the same mechanism is still in use. Using a simple test case I could not reproduce the package problem that the sys_plsql types caused in the 18c (18.7) database – neither in 18c nor in 19c. Which does not mean that the problem is gone, only that it is tricky to recreate.

The 18c and 19c patch release notes do not indicate any changes for the sys_plsql types – so it is not clear whether a bug was fixed. Maybe some internal behaviour was changed, but no note about it was added to the documentation.

However in the EBR guide the following is mentioned under Evolutionary capability improvements

The dependency model is now fine-grained: e.g. adding a new column to a table, or a new subprogram to a package spec, no longer invalidates the dependants.

Conclusion

Make sure to always compile all code in your specific edition. This actualizes all the objects, and the editions are then independent of each other.

Start by adding new code objects in the child edition first, before you move them to the parent.

Under EBR consider using dedicated SQL types for table functions, even if they are not strictly needed anymore. This gives you a tiny bit more control over the code and a better chance to react to problematic dependency issues.
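For example (a sketch of mine, reusing the record structure from the demo package): declare standalone SQL types and reference them in the function signature instead of package-level types.

create type ot_typ as object (col1 number, col2 varchar2(30));
/
create type ct_typ as table of ot_typ;
/
-- the pipelined function would then be declared as
-- function myFnc return ct_typ pipelined;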