First thoughts about Oracle 18XE db

On October 20th the new free Oracle database version 18XE (Express Edition) was announced (https://blogs.oracle.com/oracle-database/oracle-database-18c-xe-now-available).

I had a first quick look at the database, but mostly at the documentation. Here are my thoughts about it.

Restrictions

  • Up to 12 GB of user data (previously 11 GB)
  • Up to 2 GB of database RAM (previously 1 GB) – that is SGA + PGA combined.
  • Up to 2 CPU threads (previously 1 CPU – but multiple threads?!)
  • Up to 3 Pluggable Databases (this is new)

Not much of an improvement.
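
To see where a given instance stands relative to these caps, we can check the usual parameters. A minimal sketch (standard dictionary views, nothing XE-specific about it):

-- compare the current memory and CPU settings against the XE caps
select name, display_value
from v$parameter
where name in ('sga_target','pga_aggregate_target','cpu_count');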

Previously there was a restriction of only one XE per computer (not per VM). The license guide does not mention this restriction anymore. I think it means that we can have multiple XEs on the same machine, but they need to be in different VMs. With cloud based installations, we might not even know if another XE is on the same server. So this is very good news.

missing and included features

The following list does not claim to be complete, but these are features that I find interesting to consider.

  • multi tenant (up to 3 PDBs)
  • no plug / unplug into all other databases
  • partitioning
  • no sharding
  • in-memory column store
  • compression
  • no active session history (ASH) and other performance related diagnostics
  • spatial & graph support
  • encryption and redaction
  • Database Vault, FGA
  • no RAC


In general everything that works is allowed! This is different from 11gXE!


Thoughts

Server/VM limitations

11g XE was limited to one installation per machine. This was specified in the 11gXE license agreement:

Any use of the Oracle Database Express Edition is subject to the following limitations;
1. Express Edition is limited to a single instance on any server;

18cXE is limited to one installation per VM (my interpretation).

The restriction that was previously in the license guide is gone; however, the install guide for Linux still has this passage:

4.2 Oracle Database XE Installation and Execution Restrictions

Only one installation of Oracle Database XE can be performed on a single environment. …
To run more than one Oracle Database instance or install more than one copy of the database software, upgrade to Oracle Database 18c Personal Edition, Oracle Database 18c Standard Edition 2, or Oracle Database 18c Enterprise Edition.

The word server was replaced by environment. I interpret this to mean that on a single server with multiple environments (e.g. VM images), we can have multiple XE installations.

The other restricting factor is that the Oracle SID is fixed to XE. I first thought it was hardcoded, but I have seen one trustworthy report claiming that it can be changed. But even if that is possible, it is not clear whether it is allowed.

This might be problematic for upgrade scenarios (from 18XE to 19XE). We cannot have two active DBs, the old 18XE and the new 19XE, on the same environment. Maybe not even if one is shut down (because of the "only one copy of the database software" issue).

There is a section in the installation guide that seems to support this impression: Exporting and Importing Data for non-APEX Users. It describes an upgrade scenario via Data Pump. It specifically mentions deinstalling 11g XE.

2. Deinstall Oracle Database XE 11.2 if installation of Oracle Database XE 18c is planned on the same system. See Deinstalling Oracle Database XE for more information

Some clarification with regards to possible upgrade scenarios is certainly needed in the future. For now: only one XE installation on a single virtual box/machine.

Resource restrictions

The 2 CPU threads might be one of the more limiting factors. It feels like a downgrade from the previous version. With hyperthreading a CPU can easily support up to 4 or even 8 threads; this is now limited to two. Oracle tries to make sure that we cannot circumvent the limitations by using advanced hardware.

This reminds me of the restriction introduced when switching to SE2 (Standard Edition 2). SE2 is limited to 16 (user) threads, which essentially means we should disable hyperthreading and just use normal CPU power.

Especially for web based applications the thread limit seems severe. In APEX we typically have connection pools that serve up to 10 concurrent connections, sometimes multiple connection pools (apex, apex_al, apex_pu, apex_rt). We need to rethink the default APEX and ORDS settings. A thread on a CPU usually does not map to a user process 1:1, but it can serve as a crude first estimation.
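
A crude way to do this estimation is to count the sessions each pool currently holds. A minimal sketch; the account names are an assumption based on a typical ORDS/APEX setup, so adjust them to your installation:

-- count open sessions per (assumed) ORDS/APEX pool account
select username, count(*) as session_count
from v$session
where username in ('APEX_PUBLIC_USER','APEX_LISTENER'
                  ,'APEX_REST_PUBLIC_USER','ORDS_PUBLIC_USER')
group by username;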

The 12 GB of user data means we cannot store many document files in the database. For most well structured data it should be reasonably enough, as long as no video, audio or very large photo collections are involved. I'm not sure if temp and undo space are included in that limit; the main question seems to be what exactly is measured. This still leaves the option to store documents as BFILEs directly in the file system, which would avoid them counting against the limited user tablespace.
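
A minimal BFILE sketch; the directory path and file name are made up:

-- the file stays in the file system, only a locator is stored in the table
create directory docs_dir as '/u01/docs';

create table document_refs
  (id  number not null primary key
  ,doc bfile);

insert into document_refs
values (1, bfilename('DOCS_DIR','contract.pdf'));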

Also note that the system tablespace is not included in that 12 GB limit. Unfortunately I can't find the source anymore where I read about this.

This query might help to measure how much space is used.

-- space used per tablespace, excluding the system tablespaces,
-- plus a grand total line from the rollup
select nvl(tablespace_name,' - Total - ') as tablespace_name,
       round(sum(bytes)/1024/1024/1024,1) as size_in_GB
from dba_segments
where tablespace_name not in ('SYSAUX','SYSTEM')
group by rollup(tablespace_name)
order by size_in_GB desc;

I think that restriction is quite OK, although the improvement over 11gXE is only marginal.

In-memory

Since we are only able to use 2 GB of RAM, the in-memory column store is not really useful. It can be nice to test out this feature or maybe use it on very tiny databases.
It might also be useful in combination with partitioning. I'm not sure if that is possible, but if we load only the relevant (e.g. current) partitions into memory, we might be able to stay within the 2 GB limit even when data sizes grow.
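
A minimal sketch of that partition idea; the table sales and the partition name are made up, and it assumes inmemory_size has been configured:

-- keep the table out of the column store by default ...
alter table sales no inmemory;

-- ... and populate only the current partition
alter table sales modify partition sales_2018 inmemory priority high;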

That the in-memory option is included is more of a marketing stunt.

Compression

There is basic and advanced compression. This could be highly useful, especially to save disk space. However, if you ever plan to upgrade to a Standard Edition, don't depend on compression: it is an EE feature.

On the other hand compression can be used to lower the pressure on the 12 GB user data limit. Using advanced compression it should easily be possible to double the amount of user data without reaching the limit.
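
A minimal sketch; the table names are made up, and note that a move rewrites the table and invalidates its indexes:

-- compress existing data (advanced compression, formerly known as OLTP compression)
alter table my_big_table move row store compress advanced;

-- new tables can be created compressed right away
create table my_archive
  (id      number
  ,payload varchar2(4000))
  row store compress advanced;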

APEX

APEX is not preinstalled. This is in line with the current 18c DB situation. In 11g/12cR1 APEX was preinstalled in the CDB, which was generally considered bad practice. This was changed with 18c (I believe) and XE behaves the same way. Still, it would be nice to have an option during install that gives us ORDS and APEX in a PDB.

In general APEX (and ORDS) update their versions more frequently than the database.
This might change with the new release cycles of the database, however. 19XE is supposed to come out with the 19.3 DB version.

upgrade thoughts

We cannot directly upgrade from 11gXE to 18cXE, nor will we be able to upgrade from 18cXE to 19XE. The way to upgrade is to export the data and import it into the new DB.
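
A minimal Data Pump sketch of that path; the schema name MYAPP and both connect strings are made up, and details like APEX application exports are left out (XEPDB1 is the default PDB service in 18XE):

expdp system@xe11 schemas=MYAPP directory=DATA_PUMP_DIR dumpfile=myapp.dmp

impdp system@localhost/XEPDB1 schemas=MYAPP directory=DATA_PUMP_DIR dumpfile=myapp.dmp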

multitenant

We get 3 PDBs to use. This is very nice. Remember, SE2 (Standard Edition 2) only allows one plugged PDB (+ the seed PDB). I still would put only one PDB into major use. But it becomes much easier to clone a PDB to try out a few things there, without the need to shut the original PDB down.
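
A minimal clone sketch, run as a common user in the root container; XEPDB1 is the default PDB in 18XE, while XEPDB2 and the file name mapping are made up. With local undo (the 18c default) the source PDB can stay open read write:

create pluggable database xepdb2 from xepdb1
  file_name_convert = ('xepdb1','xepdb2');

alter pluggable database xepdb2 open;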

Unfortunately it is not possible to plug the XE PDB into an SE2 database. It is possible to plug it into an EE database or even into an Oracle Cloud DB. I believe the SE restriction has to do with the problem that several EE features are available in XE which are not available in SE/SE2.

database links

So far I do not know whether it is possible to create a DB link on an XE. I totally expect that it will work; the whole CDB/PDB concept uses DB links.
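
The syntax should be the usual one; everything in this sketch (user, password, host, service) is made up:

create database link remote_db
  connect to app_user identified by "secret_pwd"
  using '//remotehost:1521/remoteservice';

-- quick smoke test
select sysdate from dual@remote_db;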

backup

It is now possible to use RMAN to back up an XE. Recommended!
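
A minimal RMAN sketch, assuming the database runs in archivelog mode (which is not the default). Start RMAN with rman target / and run:

backup database plus archivelog;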

Tuning and diagnostic tools

At first I heard that ASH and AWR reports would not be possible in XE. But in this thread from July 2018 Gerald Venzl explained otherwise:

Will it be legal to use all these options like DIAGNOSTIC+TUNING, ADDM, AWR, ASH in production systems?

–> Yes

Meanwhile Gerald has clarified that the feature did not make it into 18XE. It is now planned for 19XE.

The license guide has a detailed chapter showing which options are available and which are not.

It would be great to have it. Performance troubleshooting is a topic on any production system. I don’t expect XE to be an exception.


production ready and other usages

Can we use it for production environments?

From a pure license standpoint it is OK. See the quote from the FAQ:

Can I use Oracle 18c XE in production?

Oracle Database Express Edition does not restrict in which environment it can be deployed. However, Oracle Database Express Edition is not supported and does not receive any patches, including security patches. Oracle recommends to run production deployments on fully supported Oracle Database editions or Cloud Services.

So – yes, we can! But should we use it in production? Yes and no. For non-critical applications (like managing your local club data via XE + APEX) it is certainly fine, although the cloud based alternatives are interesting. However, we will not receive any security fixes and don't have Oracle Support (we are not paying for it, so that is fair). This might be a risk, and that's why I wouldn't recommend XE for mission critical stuff.

The other thing people have already speculated about is using an XE for testing purposes. If it is feature testing, we can already do that with an EE (Enterprise Edition) using the OTN Developer license. Just recently the Oracle prebuilt Developer VM was upgraded to 18c. If you just want the database, you can use a vagrant box to provision the installation: Oracle DB 18.3. Since XE will not receive any patches, we will not be able to test the newest version of a feature on it. So for pure testing purposes it would be a bad choice.

It might be an excellent tool to demonstrate the capabilities of an Oracle database to critical customers, however. If you are an ISV you could even use it to give your potential customer a trial version of your software running on an XE.

Conclusion

I’m very excited that 18cXE finally made it. I’m also happy that a lot of EE features are available. For demos, talks and presentations I will probably stay with the preconfigured Developer VM and an EE version of the database. For long-running small side projects, I am considering setting up one or two XEs. Additionally it is a great counter argument for all those “Oracle DB is (too) expensive” opinions.

useful links

XE Quick Start: https://www.oracle.com/database/technologies/appdev/xe/quickstart.html

XE licensing information: https://docs.oracle.com/en/database/oracle/oracle-database/18/xelic/licensing-information.html#GUID-0F2574A6-360F-4237-8098-17B02FFC3BB3

XE Forum: https://community.oracle.com/community/groundbreakers/database/developer-tools/oracle_database_express_edition_xe

Oracle Vagrant-Boxes: https://blogs.oracle.com/developers/announcing-the-oracle-vagrant-boxes-github-repository

Jonathan Lewis about compression: https://www.red-gate.com/simple-talk/sql/oracle/compression-oracle-basic-table-compression/

OTN/ODC Appreciation Day 2018 – Thanks for the mutating table error!

This is not sarcasm!

What is the mutating table error?

I firmly believe the mutating table error is a good thing. For anybody who does not know what I am talking about, here is an example.

There are various ways the error can happen. A before row trigger is probably the most typical scenario.

Since Halloween is getting close, I need a table to organize the personnel for my evil lab. I set up a table that holds each member, divided by departments. Only one labhead per department is allowed. This integrity rule is enforced by table triggers.

-- build demo
create table swe_test_mutating
  (id number not null primary key 
  ,name varchar2(1000) not null
  ,department varchar2(254)
  ,is_head_of_department varchar2(1) 
     constraint yn_ck check (is_head_of_department in ('Y','N'))
  );


-- trigger to make sure that there is only one head of department.
create or replace trigger swe_test_mutating_bri
  before insert or update on swe_test_mutating
  for each row 
declare
  v_dummy varchar2(1);
begin
  if :new.is_head_of_department = 'Y' then 
    -- check if there is at least one other person 
    -- for the same department
    -- which is also marked as HEAD
    begin
      select is_head_of_department
      into v_dummy
      from swe_test_mutating
      where id != :new.id -- must be different 
      and department = :new.department -- must be same department
      and is_head_of_department = 'Y' -- is head
      and rownum <= 1  -- enough if we find one
      ; 
      
      raise_application_error(-20000, 'There can only be one head of '||:new.department||'!');
    exception
      when no_data_found then
        -- ok. Single head of department only
        null;
    end;    
  end if;
end;
/

Trigger SWE_TEST_MUTATING_BRI compiled

add some data


-- add some personell to the evil lab
insert into swe_test_mutating
values (1, 'The master', 'LAB05', 'Y');

1 row inserted.

insert into swe_test_mutating
values (2, 'The apprentice', 'LAB05', 'N');

1 row inserted.

insert into swe_test_mutating
values (3, 'Doc Holiday', 'LAB05', 'Y');

ORA-20000: There can only be one head of LAB05!
ORA-06512: at "EVIL_LAB.SWE_TEST_MUTATING_BRI", line 17
ORA-04088: error during execution of trigger 'EVIL_LAB.SWE_TEST_MUTATING_BRI'

This shows that the trigger worked for single row inserts. The logic inside the trigger prevented us from adding a second head for one department (LAB05).

Now a test with multi-row inserts.

insert into swe_test_mutating
(
select 4, 'Mr. Sissorhands', 'LAB07', 'Y' from dual union all
select 5, 'Nagging Nanny', 'LAB07', 'N' from dual union all
select 6, 'Evil Clown', 'LAB07', 'Y' from dual
);

ORA-04091: table EVIL_LAB.SWE_TEST_MUTATING is mutating, trigger/function may not see it
ORA-06512: at "EVIL_LAB.SWE_TEST_MUTATING_BRI", line 8
ORA-04088: error during execution of trigger 'EVIL_LAB.SWE_TEST_MUTATING_BRI'

So what happened? We got an error (ORA-04091) saying we are not allowed to work with the same table that is currently being modified.

We would have gotten the same mutating table error even if the select part returned only one row. An insert..select is considered a multi-row insert, and then the mutating table error bites.

Why is the mutating table error a good thing?

There are multiple reasons

1) The error prevents developers from making stupid mistakes.

I like to compare it to a safety catch. On a pistol it prevents the developer from shooting their own foot. On a climbing rope it prevents the developer from plummeting to death if something breaks loose. If you remove the safety catch on purpose, then it is your own risk. Only remove it if you are absolutely sure what you are doing.

Without the error it would be easy to create endless loops. Think about what happens if we do an update on the same table in an after update row trigger. This update would fire the trigger again, causing another update, which would fire the trigger yet again, and so on. Eventually we would run out of disk space (archive/redo logs) or something else dramatic would happen.
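
Here is a sketch of exactly this forbidden pattern (the table t and its columns are made up). Thanks to the mutating table error it fails with ORA-04091 instead of looping forever:

create table t (id number primary key, cnt number);

create or replace trigger t_recursive_aur
  after update on t
  for each row
begin
  -- without ORA-04091 this update would fire the trigger again, endlessly
  update t set cnt = cnt + 1 where id = :new.id;
end;
/

insert into t values (1, 0);
update t set cnt = 1 where id = 1;  -- raises ORA-04091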

The mutating table error stops us (developers) from creating such an endless loop or making other mistakes (that mostly have to do with read consistency).

2) The Error and I have a little bit of history together.

In one of my first big projects almost all tables had validity columns (valid_from and valid_to). We used triggers to make sure that certain validity rules were also considered over relationships.

For example if the validity of a parent record was extended, then the validity of a detail record needed to be extended too – but only, if it didn’t clash with any of the other detail records. Very complex rules.

Those triggers always ended up with mutating or constraining table errors, so in the end the whole logic needed to be moved to after statement triggers. It was hell to maintain.

Nowadays I would not put all that logic into triggers; instead I would use packaged low level APIs that also make sure all the integrity rules are in place. The code would look so much cleaner that way.

I believe having gone through this experience made me a better developer.

3) The error is complex to understand.

I do understand it, which gives me the chance to educate others about it. Sharing and passing on knowledge is a fun and highly satisfying thing to do, as is writing such a blog post.

Error Darwinism

Interestingly, errors seem to comply with the laws of natural selection.

Here is what I noticed about how the mutating table error has evolved over time.

  • It now (since 10g, I believe) bites only if more than one row is inserted (or updated or deleted), which is a very hideous thing to do. Novice developers test using single row inserts. Experienced developers know they should use bulk inserts and test accordingly.
  • The little brother of the mutating table error was the constraining table error.
    AskTom: “A mutating table is a table that is being modified. A constraining table is one involved in declaritive referential integrity with the mutating table.”

    Evolution took its toll on it. The constraining table error is now rarely seen in the wild. I asked around a bit, and no developer I talked to had encountered it recently. Most didn't even know it existed. We might consider it to be extinct.

  • Compound triggers appeared in 11.1. They help to deal with the error in a quick and not so dirty way. So they are a kind of master predator for this error.
  • The recommendations about the errors have changed. Certain tools, like Oracle SQL Developer, show additional information if an error happens. In the case of ORA-04091 those additional notes included something about “pragma autonomous transaction”. This hint was very misleading. Nowadays the misinformation is gone.
    It was like a mosquito (bug) infestation in a swamp. The swamp is now dried up, so we see fewer mosquitos feeding upon that error.

How to avoid the error?

Anyone who tries to avoid the mutating table error by using PRAGMA AUTONOMOUS_TRANSACTION creates another bug. For a while the error message was misleading and trapped novice developers into thinking that this pragma is the right way to circumvent the issue. It is not. Bug count >= 2 after that pragma.

So how do we correct it? We move the logic to an after statement trigger. In the past we needed 3 different table triggers to do it correctly: a before statement trigger to initialize a package variable, a row trigger to store the IDs of all records that were touched by the DML, and an after statement trigger to do the actual work based upon those stored IDs. By using a compound trigger we can combine those three triggers and make the code look much cleaner.

Here is one way to do it.

-- compound trigger to make sure that there is only one head of each evil lab (department).

create or replace trigger swe_test_mutating_comptrg
  for insert or update on swe_test_mutating
  compound trigger 

  -- define collection type
  type ID_t is table of swe_test_mutating.id%type 
    index by binary_integer;
  -- define variable to hold a list of IDs
  IDs ID_t;
  
before statement is
begin
   -- no initialization needed for compound trigger
   null;  
end before statement;

after each row is
begin
  -- check if we have a new head of department
  if :new.is_head_of_department = 'Y'
  and (:old.is_head_of_department is null -- insert
       or :old.is_head_of_department != 'Y' ) then -- update from 'N' to 'Y'

     -- remember ID
     IDs(IDs.count + 1) := :new.id;
  end if;     
       
end after each row;

-- main logic
after statement is
  v_department swe_test_mutating.department%type;
begin
  if IDs.count > 0 then 
    -- check if there is at least one other person 
    -- for the same department
    -- which is also marked as HEAD
    for i in 1..IDs.count loop
      begin
        -- since we didn't store the department, we have to reread it from the table
        select t1.department
        into v_department
        from swe_test_mutating t1
        join swe_test_mutating t2 on t1.department = t2.department -- must be same department
        where t1.id = IDs(i)
        and t1.id != t2.id -- must be different id
        and t1.is_head_of_department = 'Y' -- remembered ID is head
        and t2.is_head_of_department = 'Y' -- second personell is also head
        and rownum <= 1  -- enough if we find one
        ; 
        
        raise_application_error(-20000, 'There can only be one head of '||v_department||'!');
      exception
        when no_data_found then
          -- ok. Single head of department only. 
          null;
      end;    
    end loop;
  end if;
end after statement;
end;
/

-- remove the old trigger
drop trigger swe_test_mutating_bri;

If we now do a multi-row insert we will not see the mutating table error, but instead the error message that we want to see (only one head allowed).

-- repeat the test!
insert into swe_test_mutating
(
select 4, 'Mr. Sissorhands', 'LAB07', 'Y' from dual union all
select 5, 'Nagging Nanny', 'LAB07', 'N' from dual union all
select 6, 'Evil Clown', 'LAB07', 'Y' from dual
);

ORA-20000: There can only be one head of LAB07!
ORA-06512: at "EVIL_LAB.SWE_TEST_MUTATING_COMPTRG", line 50
ORA-04088: error during execution of trigger 'EVIL_LAB.SWE_TEST_MUTATING_COMPTRG'

There are multiple possible optimisations for this code. We could also enforce the integrity rule with a function-based unique index instead of a trigger. The point however was to show code that demonstrates how to work around the mutating table error.
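
For comparison, here is a minimal sketch of that declarative alternative. The index maps every non-head row to null (and all-null keys are not indexed), so at most one 'Y' row per department can exist:

create unique index swe_test_mutating_one_head_uq
  on swe_test_mutating (
    case when is_head_of_department = 'Y' then department end
  );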

Also note that table triggers are not only used to create error messages, like I did in the example with the “there can only be one” check. Often they are used to store aggregated (= redundant) data in some header tables. Many other scenarios are possible as well.

Addendum – integrity not guaranteed

The trigger as I first wrote it does not guarantee the intended data integrity. It successfully avoids the mutating table error, but it does not ensure that there will only be one labhead.

Here is the scenario that shows the problem: we insert a single labhead from two separate sessions without committing the first session. We end up with two active labheads, which is not what our business rule says.

-- In session A
insert into swe_test_mutating
(
select 4, 'Mr. Sissorhands', 'LAB07', 'Y' from dual union all
select 5, 'Nagging Nanny', 'LAB07', 'N' from dual 
);

-- do not commit in session A yet!

Now add another head of lab in a different session.

-- session B
insert into evil_lab.swe_test_mutating
(select 6, 'Evil Clown', 'LAB07', 'Y' from dual);
commit;

After the commit in session A we can check the contents of the table.

-- session A
commit;

select * from swe_test_mutating;
ID  NAME             DEPARTMENT  IS_HEAD_OF_DEPARTMENT
6   Evil Clown       LAB07       Y
4   Mr. Sissorhands  LAB07       Y
5   Nagging Nanny    LAB07       N

Thanks to Alex Nuijten who pointed out this issue in the comments.

On a more abstract level: In a multi user environment, we need to make sure that all data that is needed for a decision is immutable until the transaction is finished. This also means we need to serialize the access to the same lab.

The solution is not so simple.

We could consider locking all the personnel, at least for the lab in question. But that would not prevent a rule violation via an insert.

The proper way is to lock the parent row (the lab) itself. My simplified data model does not have that part yet, so I first need to extend the data model and then do the row locking.

Here is the updated scenario

Improved datamodel with two tables and a FK

drop table swe_test_mutating;
drop table swe_test_mutating_personell;
drop table swe_test_mutating_labs;
-- build lab enhancement
create table swe_test_mutating_labs
   (name varchar2(254) not null primary key 
   ,description varchar2(4000));
   
create table swe_test_mutating_personell
  (id number not null primary key
  ,name varchar2(1000) not null
  ,department varchar2(254) references swe_test_mutating_labs(name)
  ,is_head_of_department varchar2(1) 
     constraint yn_ck check (is_head_of_department in ('Y','N'))
  );

-- We really should create an index on the FK to lessen the chance for a major deadlock scenario.
-- However dealing with the dead is daily business for the evil lab. 
-- So I don't care at the moment.

-- construct some labs
insert into swe_test_mutating_labs (name) 
select 'LAB'||to_char(level,'fm00') 
from dual connect by level <= 10;

commit;

The compound trigger also needed to be changed. The most important thing is that we need to lock the lab in question before the duplicate check is made.
Improved compound trigger

-- compound trigger to make sure that there is only one head of each evil lab (department).
 
create or replace trigger swe_test_mutating_personell_comptrg
  for insert or update on swe_test_mutating_personell
  compound trigger
 
  -- define collection type
  type ID_t is table of swe_test_mutating_personell.id%type 
    index by binary_integer;
  -- define variable to hold a list of IDs
  IDs ID_t;
   
before statement is
begin
   -- no initialization needed for compound trigger
   null;  
end before statement;
 
after each row is
begin
  -- check if we have a new head of department
  if :new.is_head_of_department = 'Y'
  and (:old.is_head_of_department is null -- insert
       or :old.is_head_of_department != 'Y' ) then -- update from 'N' to 'Y'
 
     -- remember ID
     IDs(IDs.count + 1) := :new.id;
  end if;     
        
end after each row;
 
-- main logic
after statement is
  v_department swe_test_mutating_personell.department%type;
begin
  if IDs.count > 0 then
    -- check if there is at least one other person 
    -- for the same department
    -- which is also marked as HEAD
    for i in 1..IDs.count loop
      begin
        -- since we didn't store the department, we have to reread it from the table
        select p1.department
        into v_department
        from swe_test_mutating_personell p1
        where p1.id = IDs(i)
        and p1.is_head_of_department = 'Y' -- inserted ID is a labhead
        ;
        
        -- lock the department
        select d.name 
        into v_department
        from swe_test_mutating_labs d
        where d.name = v_department
        for update of d.name;

        -- after we ensured exclusive access to this lab, 
        -- we can test if there is another labhead 
        select p2.department
        into v_department
        from swe_test_mutating_personell p2
        where p2.department = v_department -- must be same department
        and p2.id != IDs(i) -- must be different person
        and p2.is_head_of_department = 'Y' -- second personell is also head
        and rownum <= 1  -- enough if we find one
        ; 

        -- found one! 
        raise_application_error(-20000, 'There can only be one head of '||v_department||'!');
      exception
        when no_data_found then
          -- ok. Single head of department only. 
          null;
      end; 
    end loop;
  end if;
end after statement;
end;
/

We can now repeat the test using the new tables.

-- In session A
insert into swe_test_mutating_personell
(
select 4, 'Mr. Sissorhands', 'LAB07', 'Y' from dual union all
select 5, 'Nagging Nanny', 'LAB07', 'N' from dual
);

Before we commit in session A, we do an insert in session B

-- session B
insert into swe_test_mutating_personell
(select 6, 'Evil Clown', 'LAB07', 'Y' from dual);

This insert will now WAIT (it "hangs") until the first session finishes its transaction. Essentially we have serialized the access to each lab. Only one session at a time is allowed to change who is the head.

The result in session B depends on whether session A does a COMMIT or a ROLLBACK.

session A: commit;
session B: ORA-20000: There can only be one head of LAB07!

session A: rollback;
session B: 1 row inserted.

But this again shows how much trouble we have to go through, and how much we have to consider, if we decide to a) implement business rules inside database triggers and b) try to avoid the mutating table error.

Thank you mutating table error! We have come a good way since I first encountered you.