10 little Oracle SQL features – that you are probably not using enough

This is a collection of small features, patterns and snippets that I use frequently and like to share. The features are listed in no specific order. However, we can group them into performance related [P], convenience [C] and enabler [E] features.

Enablers [E] are features that allow us to do something that is otherwise impossible or at least very hard to achieve. Sometimes they simply help to avoid an error.


Convenience [C] features offer an alternative construct that can be used instead of another option, sometimes making the code easier to change (maintainability).


Performance [P] features improve execution speed. Often they come with a drawback or one should keep exceptions or rare race conditions in mind.

1. find multiple words using regexp_like [C]

Instead of writing multiple LIKE conditions (or repeating the same select with LIKE multiple times), we can write a single regexp_like expression.


Cumbersome:

where (str like '%word1%' OR str like '%word2%' OR ...)

Much easier:


where regexp_like(str,'(word1|word2|...)')

The pipe “|” inside the regular expression acts as the OR operator and the parentheses () form the subexpression that is needed for that OR.

A very typical case is to query the v$parameter view for a set of parameter names.
example

select name, value 
from v$parameter 
where regexp_like(name,'(listen|dispatch|service)');
NAME                     VALUE
service_names            feno1pod
enable_dnfs_dispatcher   FALSE
dispatchers              (PROTOCOL=TCP) (SERVICE=feno1podXDB)
max_dispatchers
query result

What I especially like about it is that the need for wildcards is gone and it is very easy to add more “words” to search for.

2. fetch first row only [C]

Do you still use rownum frequently? Why not use the row limiting clause instead?

I use it a lot, mostly for ad hoc queries. One advantage is that the need to create an inner query that does the correct ordering disappears.


example

-- show the top 3 tables with the most rows
select table_name, num_rows
from user_tables
order by num_rows desc
fetch first 3 rows only;
TABLE_NAME              NUM_ROWS
IMP_SD_2744-12_45_18    19696
IMP_SD_2822-14_28_53    19692
IMP_SD_194-09_40_50     19545
Query result
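For comparison, here is a sketch of the pre-12c rownum version of the same query. The ordering has to happen inside an inline view before rownum can limit the result – exactly the extra step that the row limiting clause makes unnecessary.

-- old rownum approach: order first inside an inline view, then limit
select *
from (select table_name, num_rows
      from user_tables
      order by num_rows desc)
where rownum <= 3;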

3. use expression lists (a,b) [E]

I use this frequently in update statements but also sometimes in joins. Instead of setting each column separately we can set multiple columns at once, simply by using parentheses and a comma between the columns.

example

update persons u
set (u.first_name, u.last_name) = (select a.firstname, a.name 
                                   from applicants a 
                                   where a.person_id = u.id)
where u.status = 'NEEDS REFRESH';

Or we can filter on a combination of columns (a,b) in ((c,d),(d,c),(e,f)). The need for this should be rare, but it happens. A strange edge case is when we do an equality comparison of two expression lists. For some reason the right side needs an additional set of ().

(a,b) = ((c,d))
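Here is a small sketch of both variants (the table and column names are invented for illustration):

select *
from flight_connections c
where (c.from_station, c.to_station) in (('FRA','MUC'), ('MUC','FRA'), ('BER','HAM'));

-- equality comparison: the right side needs the extra set of parentheses
select *
from flight_connections c
where (c.from_station, c.to_station) = (('FRA','MUC'));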

4. De Morgan's law [C,E]

This might be the single most important math/logic rule that SQL developers should know. It is usually applied inside the where clause, and knowing it by heart will prevent a lot of easy mistakes.

\begin{aligned}
\overline{A \cup B} &= \overline{A} \cap \overline{B} \\
\overline{A \cap B} &= \overline{A} \cup \overline{B}
\end{aligned}

De Morgan's laws

written as a SQL expression

not(A or B) = not(A) and not(B)

not(A and B) = not(A) or not(B)

It is easy to remember. OR changes to AND when the parentheses are resolved (and the other way round).
A and B here are full SQL expressions, for example x=y.


Why is that so important? SQL is a very logic based language. In daily speech we often use the logical operators (AND/OR/NOT) differently from how they need to be applied in SQL. This can become confusing very easily. Knowing De Morgan's law helps to quickly check whether the logic is used correctly.

An example

Task: “Find all employees that are not working in sales and marketing.”

Converting this sentence 1:1 into SQL would result in this:

select * from emp
where not (department = 'SALES' and department = 'MARKETING'); 

Obviously what is meant is that we want to find those employees that work in some other department.

Applying De Morgan's law, we can reformulate our select statement. We also replace NOT (x=y) with x!=y.

select * from emp
where  department != 'SALES' 
    OR department != 'MARKETING'; 

A developer should now see that this condition is always true. For each row the department is either sales or not sales. And if it is sales, then it is not marketing. So the combination is always true (excluding NULL value considerations). Which is probably not what is wanted.

Here the row logic also plays a part. The where clause is applied to a single row, but normal speech often uses the boolean operation to combine data sets.

The correct query would be

select * from emp
where  not (department = 'SALES' 
             OR department = 'MARKETING'); 


Or even better, use IN instead of OR:

select * from emp
where  department not in ( 'SALES' , 'MARKETING'); 

Historic anecdote: De Morgan was not the first to discover this logic. Centuries before De Morgan, a guy called William of Ockham already wrote about it. He probably had it from Aristotle. Ockham (the spelling changed over the centuries, Occam is also common) is nowadays more famous for his razor.

5. rollup and grouping sets [C]

To get a total row for a query that uses sum or count, simply add rollup.


example rollup

select tablespace_name, count(*)
from dba_tables 
group by rollup (tablespace_name);
TABLESPACE_NAME	COUNT(*)
DATA	        362
DBFS_DATA	2
SAMPLESCHEMA	14
SYSAUX	        1357
SYSTEM	        1056
	        322
	        3113

The last line is the total number of tables. One problem here is that there are tables without a tablespace (null). The grouping() function helps to separate a normal row with a NULL data value from a superaggregate row (19c grouping).

If there are multiple columns then instead of rollup I use grouping sets. The full set (=total) in a grouping set expression can be expressed by ().

example grouping sets + grouping

select case when grouping(tablespace_name) = 1 then '-all-' else tablespace_name end as tablespace_name
     , case when grouping(cluster_name) = 1 then '-all-' else cluster_name end as cluster_name
     , count(*) 
from dba_tables 
group by grouping sets ((tablespace_name, cluster_name),());
TABLESPACE_NAME	CLUSTER_NAME	COUNT(*)
		                322
DATA		                362
SYSAUX		                1356
SYSAUX	SMON_SCN_TO_TIME_AUX	1
SYSTEM		                1020
SYSTEM	C_RG#	                2
SYSTEM	C_TS#	                2
SYSTEM	C_OBJ#	                17
SYSTEM	C_COBJ#	                2
SYSTEM	C_MLOG#	                2
SYSTEM	C_USER#	                2
SYSTEM	C_FILE#_BLOCK#	        2
SYSTEM	C_OBJ#_INTCOL#	        1
SYSTEM	C_TOID_VERSION#	        6
DBFS_DATA		        2
SAMPLESCHEMA		        14
-all-	-all-	                3113

Again the last line shows the total number of tables. Since I didn’t want to see subtotals for tablespace or cluster, grouping sets is the perfect solution to add this total row.

Also notice that the first line has all null values for the names, same as the total line would have. Using the grouping function allows us to find out which one is the total line and give it a meaningful label.

6. enhanced listagg [E]

In the more recent database versions the very useful LISTAGG function got even better. For production code I nowadays always try to remember to add some safety protection in case the result of the aggregation becomes too big. Otherwise we could get ORA-01489: result of string concatenation is too long.

Since 12.2 we can avoid the error by using the OVERFLOW clause

listagg (... on overflow truncate without count) ...

So when the maximum string size is reached (4000 bytes or 32k bytes, depending on the max_string_size parameter), we get usable text instead of the statement raising an error.

example

select count(*)
, listagg(table_name,', ' on overflow truncate) within group (order by tablespace_name desc, table_name) all_tables
from dba_tables;
COUNT(*)   ALL_TABLES
3113       AQ$_ALERT_QT_G, AQ$_ALERT_QT_H, AQ$_ALERT_QT, … many many more tables …, SDO_CRS_GEOGRAPHIC_PLUS_HEIGHT, SDO_CS_SRS, SDO_DATUMS, …(1304)

The three dots “…” are called an ellipsis and can be configured. without count avoids writing the total number of truncated entries at the end of the list; with count is the default when truncate is specified.
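For example, a custom truncation indicator combined with without count could look like this (just a sketch, not output from the query above):

select listagg(table_name, ', ' on overflow truncate '***' without count)
         within group (order by table_name) as all_tables
from dba_tables;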

Although the overflow clause is very useful, the ultimate goal would be to give the developer more influence over it. Recently there was an interesting Twitter discussion around that topic.

Another useful enhancement (19c) is LISTAGG DISTINCT. Example on LiveSQL

7. Using sys_context [C,P]

A sys_context is something like a global variable in other languages. The normal context scope is the session, but it is also possible to use application-wide contexts.

Oracle provides several “preconfigured” contexts for us. The most common are ‘USERENV’ and ‘APEX$SESSION’ (for apex developers). Contexts are also used for security policies with VPD.

Here are the contexts that I frequently like to use

  • sys_context('userenv','client_identifier')
    value set by dbms_application_info.set_client_info
  • sys_context('userenv','current_edition_name')
    when using edition based redefinition (ebr), this shows which edition I’m in. Always good to double check!
  • sys_context('userenv','current_user')
    similar to the pseudocolumn user: the schema name that we connected with.
    In some Oracle versions (12.1) it was much faster than user; my recent test shows that this performance difference is now gone (19c)
  • sys_context('userenv','proxy_user')
    When doing a proxy connect then this is the name of the authenticated user, not the target schema
  • sys_context('userenv','os_user')
    useful when client_identifier is empty, for example the name of the Windows login account when using SQL Developer under Windows.
  • sys_context('apex$session','app_user')
    apex authenticated user
  • sys_context('apex$session','app_id')
    id of the current apex app
  • sys_context('trigger_ctl','audit_on')
    Is auditing enabled/disabled? Part of my custom adaptation/enhancement of Connor McDonald's Audit-Utility package
  • sys_context('userenv','sid')
    the session ID of the current database session

Side note: UPPER/lower case does not make a difference for contexts. Only for the values of the context.

Warning: when you start using your own custom contexts, be careful with the create any context privilege. It can be a security risk! Revoke it after it has been used.
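As a sketch, creating and filling such a custom context could look like this (the package, namespace and attribute names are illustrative, loosely based on the trigger_ctl example above). Creating the context requires the create any context privilege; values can only be set through the package named in the USING clause.

create context trigger_ctl using audit_ctl;

create or replace package audit_ctl as
  procedure set_audit(p_value in varchar2);
end audit_ctl;
/

create or replace package body audit_ctl as
  procedure set_audit(p_value in varchar2) is
  begin
    -- only this package may write into the trigger_ctl namespace
    dbms_session.set_context('trigger_ctl', 'audit_on', p_value);
  end set_audit;
end audit_ctl;
/

-- usage
exec audit_ctl.set_audit('ON')

select sys_context('trigger_ctl','audit_on') from dual;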

8. hint /*+ driving_site */ [P]

I feel like this is a classic for me. In one of my longest running projects we had a lot of databases connected by database links. One of the most important tuning activities was to understand how to do remote joins in a performant way. The driving_site hint was extremely important for that.

General rule: When doing remote queries, avoid mixing local and remote tables in one select statement. If only remote tables are in the statement always use the driving_site hint.

Nowadays the need for this hint diminishes, mostly because databases are less often connected by DB links and more often by web services (which does not really improve performance, but that is a different story).
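A typical usage sketch (the table and db link names are invented): the hint tells the optimizer to execute the statement on the remote site, so only the final result set travels back over the db link.

select /*+ driving_site(o) */ o.order_id, o.amount, c.customer_name
from   orders@remote_db o
join   customers@remote_db c on c.customer_id = o.customer_id
where  o.order_date >= date '2021-01-01';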

9. join via using [C]

This only works reliably when the naming conventions of the data model fit to it.

example

We have a table PERSON and a table ADDRESS. The primary key in PERSON is PERS_ID. The relationship column (FK) in ADDRESS is also called PERS_ID. It wouldn’t work as well with ADR_PERS_ID for example.

Then we can do a quick join like this:

select * 
from person 
join address using (pers_id) 
where address.type='MAIN';

There are some slight differences compared to the normal way using ON. The main one is that you can no longer qualify the join column with a table name: for example, you cannot refer to person.pers_id in the where clause anymore.
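A quick illustration of that restriction, reusing the tables from the example above:

-- raises ORA-25154: column part of USING clause cannot have qualifier
select person.pers_id
from person
join address using (pers_id);

-- the join column must be referenced without a qualifier
select pers_id
from person
join address using (pers_id);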

Currently I use it exclusively for ad hoc queries, not in plsql code.

10. interval literals [C]

If I want to add some time to a date or a timestamp, I always use interval literals (unless it is a full day or a full month).

example
Let’s say we want to check tickets that were entered during the last hour. Using the date datatype we could easily calculate an hour by dividing a full day by 24. This is how many developers calculate time. And it is perfectly ok to do so.

select * from tickets 
where createddt >= sysdate - 1/24 ;

The term sysdate-1/24 represents one hour before “now”.

This is how I prefer to write it:

select * from tickets 
where createddt >= sysdate - interval '1' hour ;

I feel the code documents itself in a much better way.

Additionally, requirements like this change easily.
Let’s change both expressions to 15 minutes.

sysdate - 15/24/60

sysdate - interval '15' minute

Which one is better to read and was quicker to change?

Warning: Sayan Malakshinov mentions in the comments that we should not use interval literals (year-month intervals) to add full months. I agree with that. Calendar arithmetic is complex – there are specific functions like add_months for that. So do not use interval '1' month.
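A short demonstration of why (the dates are only an example):

-- fails with ORA-01839: date not valid for month specified
select date '2021-01-31' + interval '1' month from dual;

-- add_months handles the end of month correctly (result: 28-FEB-2021)
select add_months(date '2021-01-31', 1) from dual;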

honorable mentions

Some more snippets or functions that didn’t quite make it into the list.

  • nvl2 [C]
  • the third parameter of to_char (nlsparams) [E]
  • query block hints [P]
  • hierarchical with clause [C,E]
  • lateral join [C]
  • alter sequence restart [C,E]
  • cross join [C]
  • lnnvl [C,E]
  • hex format “XXXX” [E]
  • sys.odcinumberlist [C]
  • analytic sum + correct window clause: rows between instead of range between [E]

Summary

I hope I mentioned something that you didn’t know about before and liked reading about.

Outlook

Here are some features/patterns that I’m currently not using myself, but where I feel that they might become quite important in the future.

  • SQL Macros
  • boolean in SQL (not implemented yet, but Oracle seems to be working on it – I expect some news about this very soon)
  • median and other statistical functions like cume_dist, percentile_xxx, percent_rank
  • match_recognize
  • with clause udf functions (still think they are overrated, but I might be wrong)
  • approx_count_distinct
  • analytical views

Audit column performance retested in 19c: “user” is fast now!

Motivation

In an older and quite successful blog post of mine I tried to find the “perfect” solution for audit columns. See: sequence-and-audit-columns-with-apex-5-and-12c.

One finding then was that sys_context('userenv','current_user') is considerably faster than the user pseudocolumn.

I recently noticed that this seems to have changed and decided to retest the behavior.

The old test was done against a 12.1.0.1 standard edition database.

The new test is done against a 19.5 enterprise edition (on ATP, which runs on Exadata).

Test setup

Mass inserting 100000 records into empty tables. A very fast select was used to generate 100k rows of null values.

3 columns were filled with data:

  • ID by a sequence using a cache of 10000
  • created_by filled with the schema name => sys_context or user
  • created_on filled with sysdate

Test 1 (T1) was using sys_context as default value.

Test 2 (T2) was using user as default value.

Test 3 (T3) was using a trigger with sys_context.

Test 4 (T4) was using a trigger with user.

Each test was run once to warm up the database (and the tablespace). This warm-up run didn’t count. Then each test was run 3 times. Only the execution speed of the inserts was measured.

Results

Test 1 – sys_context as default value

Elapsed:1.23 seconds
Elapsed:1.21 seconds
Elapsed:1.26 seconds

Average: 1.23 s

Test 2 – user as default value

Elapsed:1.32 seconds (This looks suspicious. Maybe the warm-up run didn’t warm up enough)
Elapsed:1.16 seconds
Elapsed:1.19 seconds

Average: 1.22 s

Test 3 – sys_context in trigger

Elapsed:17.33 seconds
Elapsed:17.08 seconds
Elapsed:17.05 seconds

Average: 17.15 s

Test 4 – user in trigger

Elapsed:17.03 seconds
Elapsed:16.71 seconds
Elapsed:16.97 seconds

Average: 16.90 s

Comparison

My test shows that user was even a tiny bit faster than sys_context, which means the previous 12.1 recommendation to avoid “user” is now outdated. The difference is so small that you can choose whichever you want.

User is fast now!

The main message still is, if you can get rid of the trigger, then do it. The difference between a default value logic and the trigger is huge! But even in the trigger logic there seems to be a slight performance advantage for user now.

If you are still on 12.1 or 12.2 you should run your own tests and compare the results. If you are on 19c or higher I would now use user again.

Test scripts

Create statements

-- cleanup
drop table test_insert_perf_t1_sys_context purge;
drop table test_insert_perf_t2_user purge;
drop table test_insert_perf_t3_trigger purge;
drop sequence test_insert_perf_t3_trigger_seq;
drop table test_insert_perf_t4_trigger purge;
drop sequence test_insert_perf_t4_trigger_seq;

-- create objects
create table test_insert_perf_t1_sys_context
             (id number generated by default on null as identity (cache 10000) primary key
             ,created_by varchar2(128) default on null sys_context('userenv','current_user') not null 
             ,created_on date default on null sysdate not null);

create table test_insert_perf_t2_user
             (id number generated by default on null as identity (cache 10000) primary key
             ,created_by varchar2(128) default on null user not null 
             ,created_on date default on null sysdate not null);

create table test_insert_perf_t3_trigger
             (id number not null primary key
             ,created_by varchar2(128) not null 
             ,created_on date not null);

create sequence test_insert_perf_t3_trigger_seq cache 10000;          

create or replace trigger test_insert_perf_t3_trigger_bri
before insert on test_insert_perf_t3_trigger 
for each row
begin
  :new.id := test_insert_perf_t3_trigger_seq.nextval;
  :new.created_by := sys_context('userenv','current_user');
  :new.created_on := sysdate;
end;
/

create table test_insert_perf_t4_trigger
             (id number not null primary key
             ,created_by varchar2(128) not null 
             ,created_on date not null);

create sequence test_insert_perf_t4_trigger_seq cache 10000;          

create or replace trigger test_insert_perf_t4_trigger_bri
before insert on test_insert_perf_t4_trigger 
for each row
begin
  :new.id := test_insert_perf_t4_trigger_seq.nextval;
  :new.created_by := user;
  :new.created_on := sysdate;
end;
/

Run statements

-------------------------------------------------------------------
-- run tests (insert 100000 rows)
-------------------------------------------------------------------

-- Test 1 --------------------------------------
set serveroutput on
set time on;
declare 
    v_time number;
begin
    v_time := dbms_utility.get_time;

    -- insert 100000 records
    insert into test_insert_perf_t1_sys_context(id)
    (select n1.nr
     from (select cast(null as number) nr from dual connect by level <=100) n1
     cross join (select cast(null as number) nr from dual connect by level <=1000) n2 
    );

    sys.dbms_output.put_line('Elapsed:'||to_char((dbms_utility.get_time - v_time) / 100)||' seconds');
    rollback;

end;
/

-- Test 2 --------------------------------------
set serveroutput on
set time on;
declare 
    v_time number;
begin
    v_time := dbms_utility.get_time;

    -- insert 100000 records
    insert into test_insert_perf_t2_user(id)
    (select n1.nr
     from (select cast(null as number) nr from dual connect by level <=100) n1
     cross join (select cast(null as number) nr from dual connect by level <=1000) n2 
    );

    sys.dbms_output.put_line('Elapsed:'||to_char((dbms_utility.get_time - v_time) / 100)||' seconds');
    rollback;

end;
/


-- Test 3 --------------------------------------
set serveroutput on
set time on;
declare 
    v_time number;
begin
    v_time := dbms_utility.get_time;

    -- insert 100000 records
    insert into test_insert_perf_t3_trigger(id)
    (select n1.nr
     from (select cast(null as number) nr from dual connect by level <=100) n1
     cross join (select cast(null as number) nr from dual connect by level <=1000) n2 
    );

    sys.dbms_output.put_line('Elapsed:'||to_char((dbms_utility.get_time - v_time) / 100)||' seconds');
    rollback;

end;
/


-- Test 4 --------------------------------------
set serveroutput on
set time on;
declare 
    v_time number;
begin
    v_time := dbms_utility.get_time;

    -- insert 100000 records
    insert into test_insert_perf_t4_trigger(id)
    (select n1.nr
     from (select cast(null as number) nr from dual connect by level <=100) n1
     cross join (select cast(null as number) nr from dual connect by level <=1000) n2 
    );

    sys.dbms_output.put_line('Elapsed:'||to_char((dbms_utility.get_time - v_time) / 100)||' seconds');
    rollback;

end;
/

How to setup APEX Developer Accounts on Oracle Cloud Infrastructure (free tier) – a suggestion

If you are an APEX developer who just wants to move to the cloud, there are some things that you should be aware of. Here is my proposal for the steps needed to “cloud enable” APEX developer accounts. This post is only about setting up an account, not about any nifty features to use.

Scenario: We are an admin who already created a new APEX workspace and now wants to give some developers access to that workspace.

The actions were confirmed on the Oracle Free Tier cloud featuring a 19c database and APEX 20.2. There is a good chance that some of the obstacles are removed in a newer version. At the time of writing it is possible to set up a 21c database on the Oracle free tier cloud.

Admin actions

1. create APEX developer account

Inside the workspace create a new APEX account.

Administration/Manage Users and Groups/Create User

Set it to developer. Set and remember the password. You will notice that when switching developer on, the “Require Change of Password on First Use” button is deactivated.

APEX developer and admin accounts in the cloud are created as database users (in uppercase). This has several major implications.

  • The developer can NOT log in to the workspace and change the password there. For admins it is possible to set a new password using the admin page. Trying to change the password using the normal “change my password” dialog results in a success message, but the change is silently ignored. I assume this is the reason why the "Require Change of Password on First Use" button is deactivated.
  • The password rules of the database apply. On Autonomous Transaction Processing (ATP) the accounts use the default profile, which uses the CLOUD_VERIFY_FUNCTION
https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/manage-user-profiles.html#GUID-BD3C5573-DF0E-4E44-919A-08A471462B87
  • Using the same developer name over different workspaces means the same password is used for all of them. I think this is a welcome feature, but it certainly is a change from what one is used to on other platforms, like on premises or on apex.oracle.com.
  • If the developer username has special characters in it, for example a dot or an @ as in “firstname.lastname@my.email”, then the database user later always needs to be put in double quotes: “firstname.lastname@my.email”. Although that works, it prevents certain other options, like REST enabling the account (ORA-20018: Schema or user name must be a simple unquoted literal).

    I strongly recommend using only letters, numbers and underscores for the name (simple literals). No dots, no @, no blanks or any other special chars (see the short illustration after this list). For specific rules see the docs:

Nonquoted identifiers must begin with an alphabetic character from your database character set. …
Nonquoted identifiers can only contain alphanumeric characters from your database character set and the underscore (_), dollar sign ($), and pound sign (#). … Oracle strongly discourages you from using $ and # in nonquoted identifiers.

From https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/Database-Object-Names-and-Qualifiers.html#GUID-75337742-67FD-4EC0-985F-741C93D918DA
  • The option to create multiple users via one dialog does not work properly for developer accounts. Do not use this dialog! (It is possible to use it for end users.)
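To illustrate the quoting issue (the user names are invented):

-- a name with special characters always needs double quotes
grant create session to "FIRSTNAME.LASTNAME@MY.EMAIL";

-- a simple unquoted literal is much easier to handle
grant create session to firstname_lastname;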

Side note: End users are not created as database users. This means we can change an end user's password using the normal APEX logic. The other implications do not apply to them either.

2. enable the developer to connect with SQL Developer

enable connect

By default the created db user can not create a session. We need to grant that privilege, so he/she can connect and change the password.

grant create session to <developer>;

proxy connect

A developer usually needs full access to the application schema (with the tables in it) he/she is working with. To avoid distributing the application schema password, one can set up a proxy connection. This means the developer authenticates him/herself using the individual password, not the password of the application schema.

alter user <application_schema> grant connect through <developer>;

There are additional options possible, like setting an extra password for the proxy connect or allowing only specific roles. For the typical developer scenario those extra settings do not seem necessary.
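For illustration, the role-restricted variant could look like this (schema, user and role names are invented):

-- the proxy connection may only enable one specific role
alter user application_schema grant connect through developer_user with role app_dev_role;

-- remove the proxy connect again
alter user application_schema revoke connect through developer_user;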

Side note: A proxy connection is what APEX also uses to connect to the schema using only the APEX_PUBLIC_USER.

send wallet

To access the cloud database from a local client a key is needed. Oracle provides this key in a PKCS#12 wallet inside a wallet.zip file. The admin can download this wallet from the database service console. This is a database instance wallet that is only valid for this specific database. There are also regional wallets that are valid for all databases inside a region.

Write down who you sent the zip to. This is important since the admin might want to rotate the wallet. I prefer long intervals for the wallet rotation. But it should be rotated from time to time (for more information see: Rotate Wallets for Autonomous Database ).

Make the developer understand that he/she is not allowed to give the wallet to other users/developers.

Side note: There is an API to handle wallets (https://docs.oracle.com/en-us/iaas/api/#/en/database/20160918/AutonomousDatabaseWallet/UpdateAutonomousDatabaseWallet). Using this API one could automatically rotate the wallet at defined intervals.

Developer actions

3. wallet handling

Store the wallet_INSTANCENAME.zip into some local folder.

In general there is no need to unzip the wallet file. A notable exception would be if you want to use edition based redefinition. I wrote about this before.

4. set up SQL Developer

Two connections are recommended.

One for the developer account itself. This one is needed to easily change the password. Additionally most developers like to have an environment where they can try out stuff. Further privs might be needed for that (create view, etc.).

And one connection for the application schema using a proxy connect.

To connect to the cloud from a local (and recent) SQL Developer installation is simple. Choose “Cloud Wallet” as the connection type, point it to the wallet zip file, choose the tns alias (service) and you're done. On the free tier you should choose the “xxx_low” alias.

To create a proxy connect the name of the target schema (in my example DEMODB) needs to be configured under the tab “Proxy User”.

I also suggest adding this schema to the name of the connection. I use [schemaname] for that, because that is the syntax for proxy connects in SQL*Plus.

SQL> connect "SVEN.WELLER"[DEMODB]/mysecretPwd1111

5. change the password

There are multiple ways to reset a password using SQL Developer. The good thing is, they also work for cloud connects (a recent JDBC driver is needed for option a)).

a) Right-click on the connection and choose “Reset Password” . This also works for proxy connects and does reset the password for the individual user.

b) Type password in a connected worksheet

This changes the password for the schema. For a normal connection without a proxy this changes the password of the connected user.

In case of a proxy connect it tries to change the password for the PROXY client.

Since the developer will not have this password, he/she will not be able to change it.

c) And it is possible to change the password using the ALTER USER command. Everybody can do this for their own password even without enhanced privileges.

If the username has special chars in it – like in my case the dot – then do not forget to use double quotes (") around the name. Regardless of the special chars, the name will be in UPPERCASE. And of course the password rules apply.

In the cloud environment, a plain alter user … identified by … is not enough – the original password needs to be supplied as well. Otherwise the command will result in the following error message:

ORA-28221: REPLACE not specified
*Cause: User is changing password but password verification function is turned on and the original password is not specified and the user does not have the alter user system privilege.
*Action: Supply the original password.

The solution is easy: just add the old password to the alter user command. The complete syntax is

alter user <developerschema> 
     identified by <new password> 
     replace <old password>;

Conclusion

In the cloud each APEX developer is a database user. Treat it like one.

Jeff Smith shows a brand new option to manage DB users using SQL Developer web: https://www.thatjeffsmith.com/archive/2021/03/creating-users-granting-storage-quota-using-database-actions/