APEX 5.1 quick tip: select * considerations for remote views

Many developers don’t like select *. I’m a big fan of it, but there are some edge cases where one needs to be careful. I don’t want to discuss pros and cons here in this post. This is just to point out a few areas where using select * does not give the advantage one could expect.

I like to use select * in multiple scenarios. Usually to express that I want to access (almost) every column from that table or view. And I’m too lazy to write down every single column name.

A typical APEX example would be to edit a table via an Interactive Grid.

Scenario setup

Imagine a remote database with a table MYTAB. We access this remote database via a database link from a local database with APEX installed. In the SQL workshop we have a view that was originally created as

create view myTab_remote as
select * from mytab@dblinkname;

The advantage of that view is that it creates a layer which encapsulates the name of the DB link. Using this layer we don’t need to know the DB link name inside our APEX application, which makes it easier to switch or change the link.
Also the APEX framework is now able to use the Oracle data dictionary for that view. Newer APEX versions have improved capabilities to work with remote tables, but here I’m talking about version 5.1.

Our Interactive Grid uses a fairly simple query.

select * from myTab_remote
where parent_id = :P1_MASTER_ID

Modifications

So what happens when a column is added to our table on the remote database, for example the column price_unit?

alter table myTab
add price_unit varchar2(5);

The APEX application is still working, no issue happens there. But it does not automatically show the new column.

In general this is good. We have a robust application that does not break simply because a new column was added (it would break if a column were dropped, however).

Certainly the column was added with some intention in mind. We also want to show and edit this new column in our Interactive Grid.

How to add this new column to our application?

The first step is to update the view. One might think this is not necessary, because we created the view using select *. Unfortunately that is not how Oracle works, especially not via a database link.
When investigating the view via the SQL tab in the SQL workshop, we find that the DDL is different than what one would think.

CREATE OR REPLACE FORCE VIEW "MYTAB_REMOTE" ("ID", "PARENT_ID", "COL_A",  "COL_B",  "PRICE") AS 
   select  "ID", "PARENT_ID", "COL_A",  "COL_B",  "PRICE"  from MYTAB@DBLINKNAME
 /

The column list was expanded into the select statement and also captured as the column list of the view itself. The new column price_unit is missing.

To add the new column we simply create the view again. A normal recompile is not enough. By the way, this also happens with views over local tables.

create or replace view myTab_remote as
select * from mytab@dblinkname;

Hint: never use the FORCE keyword for development work. It may hide important error messages.

After that we can select from the view and will see the new column. However it is still missing from the Interactive Grid.

The second step is to push APEX into reevaluating the column list for this view.

Simply validating the SQL query will not be enough. I found that the following sequence of steps does work.
First add an alias and a new dummy column to the query.

select m.*, 'x' new_dummy_col
from myTab_remote m
where m.parent_id = :P1_MASTER_ID

Validate it and press OK. This results in two new columns added to the grid column list. Then enter the SQL query again and remove the dummy column.

select m.*
from myTab_remote m
where m.parent_id = :P1_MASTER_ID

The result is that we now have the new column in our list.

The third and last step is to add this column to our default reports.

The new column is at the very end of the column list. On an IG it is shown by default, on an IR it is not. In any case we want to move this new column to the correct position via the Actions > Columns menu. And then save the public report via Actions > Save Report. This should be done for each public report in that grid.

Conclusion

Using select * in an APEX Interactive Grid scenario did not lower the maintenance needed when adding new columns to the base table. The second step might even be slightly confusing for many developers. So it might be a good idea to use a complete column list there instead of select *.

Side note: Using select * in other scenarios, like inside PLSQL, does indeed lower the maintenance – when done right – compared to an explicit full column list.


OGB Appreciation Day 2019: The Dark Mode

#ThanksOGB

This one goes out to the one I love
This one goes out to the one I left behind
A simple theme to occupy my time
This one goes out to the one I love

Dark
Mode

This one goes out to the one I love
This one goes out to the one I’ve left behind
Another theme has occupied my time
This one goes out to the one I love

Dark
Mode

original lyrics by R.E.M. – slightly modified by me [SUW]

Every developer knows that there is a dark mode. Here is why I like that and why you should use it too.

There are two simple reasons.

A) it is better for your eyes (not always)

B) it is better for the climate (always)

I’ll explain that a little bit later and also give some sources for further reading. First let’s see how dark mode looks in a typical developer environment.

Examples

Now DBAs have had terminals using dark mode for ages. The typical putty session is a black background with a white font color.

There is a reason why so many DBAs still use putty in dark mode (it is possible to change the colors). If a pattern is so well established, there must be a good reason for it.

Developers should think about that and consider using dark mode too!

Don’t let those pesky DBAs get the upper hand! Fight them with their own weapons!

Now here is an example from APEX. APEX in its newest versions added a dark theme to be used for applications (18.1), but also a dark mode for the development environment (19.2 ea).

First this is how we are used to the application screen in page builder (light mode)

And this is the same page using dark mode

You can change it with a little switch in your user settings.

Ok, dark mode is not perfect for every scenario.

I tried to read up a bit about it. Usually it is recommended if there is lots of text on a screen or if there are graphical elements. It also depends on the surrounding light. In bright daylight, light mode is better suited – because the difference to the surrounding light seems to put some stress on your eyes. But do you program in bright daylight, in darker office light, or even at night?

I think dashboard pages profit most from dark mode. Here is an example from the Sample Demo packaged application.

This application works very well in dark mode. Charts do look so much better on a dark background!

But we also see that it was developed in light mode and that several elements are not optimized yet. In the screenshot the legend text is not readable and the tooltip still has a light gray background. Other charts have similar issues. If we develop “dark mode first”, those issues should disappear.

Why do I claim it is good for the climate?

Dark mode uses less brightness, which means it uses less power on laptops. The effect is especially noticeable on OLED displays.

A study by Google showed energy savings of up to 63%:
https://www.xda-developers.com/google-wants-developers-to-add-dark-themes-to-save-battery-life/

  • Brightness affects power usage, and battery life, in a mostly linear fashion.
  • A dark theme can reduce battery usage, even with max brightness, by up to 63% on AMOLED displays.
  • Pixel color value can also affect power usage quite a bit, with white being the most power-hungry color.

Recommendation

Give dark mode a try!

PS: For blog posts I still do prefer light mode.

Links about dark mode

https://blog.weekdone.com/why-you-should-switch-on-dark-mode/

scientific research about effect of contrast and colors on humans: https://lite.mst.edu/media/research/ctel/documents/LITE-2003-04.pdf

dark mode for google maps announced: https://www.googlewatchblog.de/2019/10/google-maps-der-dark/

https://www.zdnet.com/article/google-heres-why-dark-mode-massively-extends-your-oled-phones-battery-life/

APEX 19.2 ea – features I like

The following is an incomplete and unordered list of tiny and major features that I found interesting in the upcoming APEX 19.2 version. Since this is an early adopter version, be aware that some of those features might not make it into the final product.

I cannot guarantee that all features are exclusive to APEX 19.2. I might have overlooked one or the other in a previous very recent version like 19.1 or 18.3.

Here is a link to the new features page. Not everything I mention is on that new features page.

Star item type

There is a new item type “star”. Essentially the star rating plugin has been integrated directly.

The default setting is ok, but we can customize it.

It works even better with the dark theme.

I encountered a display bug when trying to use a large font with the “fa-lg” modifier. But when trying to reproduce it, it worked. It seems to help if there is another star item on the same page.

Debug level default

The “Debug” level setting now has new default options. We can customize them via the toolbar.

Additionally to the old YES/NO debug switch we can also choose “App Trace” and “Full Trace”.

The setting is then attached to the toolbar “Debug” button, so that it remembers which option was chosen.

  • Info = Level4 = YES
  • APP Trace = Level6
  • Full Trace = Level9
dev toolbar
c_log_level_error constant t_log_level := 1; -- critical error 
c_log_level_warn constant t_log_level := 2; -- less critical error 
c_log_level_info constant t_log_level := 4; -- default level if debugging is enabled (for example, used by apex_application.debug) 
c_log_level_app_enter constant t_log_level := 5; -- application: messages when procedures/functions are entered 
c_log_level_app_trace constant t_log_level := 6; -- application: other messages within procedures/functions 
c_log_level_engine_enter constant t_log_level := 8; -- Application Express engine: messages when procedures/functions are entered 
c_log_level_engine_trace constant t_log_level := 9; -- Application Express engine: other messages within procedures/functions 

https://docs.oracle.com/en/database/oracle/application-express/19.1/aeapi/Constants-2.html#GUID-412ED5E2-1739-4A9B-B214-38674B4A4BCD

I think my preferred debug default will be “App Trace”.

faceted search

When I saw the first versions of this during APEX Connect 2019 it was still called “report filter region”. And I was totally hyped about that. It seems to fulfill that promise.

general considerations

There are other products like QlikView/QlikSense that offer such a kind of user experience. And once the users are used to that, they miss it in APEX applications. APEX is not yet on par with those products. Faceted searches are a big step forward for an improved user experience.

Here is a promotion video, that shows some of those enhanced filter capabilities in QlikView.

implementation in APEX

So what does the APEX version of faceted search look like and how does it work?

  • First create a data region, like a classic report. I didn’t test other region types like Interactive Reports yet.
  • Then create a faceted search region that references the data region.
  • Then create facets that are connected to database columns from the data region.
APEX 19.2 – faceted search example

The user clicks on any of the filter options in the search (the facet region) and the report is filtered by those search options. The total counts for each option also react to the other filters and change accordingly.

In the facet search region we can create facets of different types (checkbox, radio, range, search, select list). A facet is always connected to a database column. Each facet has a large number of configuration options.

feature assessment

Here is a highly educating discussion with examples about how the GUI for faceted filters should work: https://www.uxmatters.com/mt/archives/2009/09/best-practices-for-designing-faceted-search-filters.php.

When we compare the criteria from that article to the options available in APEX we see that APEX does many things right. UX considerations like how to clear filters are supplied in a declarative way. Still the feature is not complete yet and slightly buggy (I had issues with the range type facet for example). But that is to be expected for such a complex enhancement. And we can expect more improvements in the future.

Meanwhile on twitter:

So one trick for range facets is to use a special syntax on LOVs. It works!

This is the best and most useful 19.2 feature!

enhanced LOV

I like icons. The new features page mentions that we can now add icons to LOV entries. Unfortunately this works only for multi-column popup LOVs, not for normal static lists.

An LOV can now be used to populate multiple items. This is a really nice enhancement (and long overdue).

Currently many options seem not to be balanced properly. For example, on a modal page layout – depending on some other settings – the shared component LOV uses the return value and does not show the display value.

Remember it is still an early adopter version.

improved LOV configuration

It is now possible to convert a locally defined LOV (one that is defined on a page) into a shared LOV.

Static LOV entries now have conditions, a sequence number and we can comment the LOV.

New design for the switch item

There are three different component settings that influence how a switch item looks.

APEX 19.2 display style = Switch

display style = Pill Button

display style = Select List

More options are better!

Unfortunately the new Switch style does not show the label value that is currently chosen. Nevertheless it looks very useful, especially for Interactive Grid columns.

Excel Upload for Data load wizard

I couldn’t test this yet, but it sounds highly useful.

Markdown textareas

Markdown is a very common shorthand syntax to format text. It is mostly used on GitHub, but many wikis use a similar syntax. Useful for developers, but also for end users. If the rich text editor offers too many options, the markdown editor is very nice for concentrating on basic styling.

APEX 19.2 markdown editor vs. richtext editor

For more information check out the many helpful articles about markdown on GitHub:

https://help.github.com/en/articles/basic-writing-and-formatting-syntax

github markdown cheatsheet

Other tiny changes

  • The hamburger menu that collapses the left sidebar now removes the sidebar completely, icons included. This is configurable (Collapse Mode in User Interface).
  • The page loading indicator is smaller and spins faster
  • The icon library can be “Font APEX” or “Font APEX – latest”
  • There is a compatibility mode 19.2
  • More options for the logo line: Like “IMAGE and TEXT”

Some quick facts about sequence caches and gaps in IDs

  1. Oracle sequences generate key values without gaps (minor exception to the rule: scalable sequences).
  2. Surrogate keys can and will have gaps.
  3. Gaps exist because it is not worth the huge effort to avoid them.

The biggest misunderstanding is probably the difference between the value coming from a sequence and the primary key value inside a database column (ID column). We can use a sequence to set a surrogate key. Typically we do this by a database trigger or an identity column or even as a value directly during the insert.

“The history of sequence caches is a history of misunderstandings.”

somebody on the internet, 1963

Essentially it is a two step process. The new ID value is created by sequence.nextval and then used during the insert. Oracle provided the logic to get a new sequence value in a fast, performant and multi-session friendly way. The developer is responsible to make sure this value is not wasted.

The sequence cache is a performance instrument. A lost cache value means we will see gaps in future ID values. There is only a very limited number of activities that make cached sequence values “disappear”. The majority of gaps originates from other activities, like failed inserts or deletes. Many developers seem to think that such a gap is caused by the sequence cache. I find it important to separate which reason ultimately led to a gap in an ID value.

A gap analysis can be the first step. The following select finds gaps and orders them by size. Put your own table and PK column into the “ids” subquery and find out how the gaps are distributed in that table.

-- gap analysis
with ids as (select region_id as id, lead(region_id) over (order by region_id) as nextid from regions)
   , gaps as (select id, nextid - id as gap_size from ids where nextid > id + 1)
select gap_size , count(*) gaps_found  
from gaps
group by gap_size
order by count(*) desc;

Example result

gap_size | gaps_found
20       | 5
3        | 2
2        | 2
19       | 1

Here the data shows a few small gaps. Either some insert didn’t make it or rows had been deleted. But there are also several gaps of size 19 and 20, which means the sequence cache was probably lost at least 6 times.

To make it clear, IDs are allowed to have gaps. However when an end user complains why he constantly sees gaps in some ID, then we should investigate what is going on. Frequent gaps should not be the expected normal behaviour – it is worth finding the root cause for such an issue.

So here is an (incomplete) list of activities that do or do not influence the caching of sequences and the existence of gaps in an ID column.

action | result
multiple sessions | cache used
multiple sessions with one session slow to commit | IDs not in order of commit => not a gap eventually, but looks like a gap until final commit
alter system flush shared_pool; | cache values lost => gap in ID
shutdown immediate; | cache values not lost
shutdown abort; | cache values lost => gap in ID
insert; commit; | no gap
insert; rollback; | gap in ID
failed insert | gap in ID
delete; commit; | gap in ID
switch sequence from cache to nocache | cache values not lost
kill session | possible gap in ID (values not used/committed are lost)
different RAC sessions | cache values not lost (but separate caches on each RAC node)
SGA size too small | cache values lost => gap in ID (sequence cache ages out of shared pool per LRU list)
session sequences | not useful for unique ID values => duplicate ID
scalable sequences | cache values not lost => planned large gap in ID (because of prefix)
alter system cancel SQL 'SID, serial'; (insert killed) | gap in ID
alter system disconnect session post_transaction; | no gap (unless sequence was fetched before the transaction started)
alter system disconnect session immediate; | gap in ID
sequence restart | possible duplicate IDs
sequence interval > 1 | planned gap in ID

In my experience, a sequence aging out of the shared pool is the most common cause of lost sequence values that get noticed. A strong indication is when frequent gaps of 20 values exist.

Possible workarounds are

a) set the sequence to nocache if the sequence is used rarely
alter sequence myseq nocache;
This might cost performance.

b) pin the sequence in the shared pool
execute dbms_shared_pool.keep('MYSCHEMA.MYSEQ','Q');
Pinning the sequence is useful when you have times with lots of concurrent activities followed by larger idle periods.
This costs a little memory.

c) increase SGA size – sometimes it helps to increase the restricted shared pool area.
This costs memory (and money) – not available for other non-database processes.

Conclusions

  • Small gaps in surrogate keys (ID columns) are natural, mostly because of delete statements or failed inserts.
  • The sequence object typically does not produce gaps.
  • Very frequent gaps of size 20 (default cache size) are not natural and might need investigation.

Oracle 18c quick tip: How to change INITRANS

TL;DR

-- increase INITRANS for the table
alter table myTable initrans 4;

-- Rebuild the table including indexes
alter table myTable move update indexes;

Why to change

The ITL (interested transaction list) is a list in the header of an Oracle block that is used during DML to organize which sessions currently make changes to that block.

INITRANS is the guaranteed minimum number of slots for concurrent transactions. The bigger INITRANS is, the more space is reserved in the header of an Oracle block. So do not set it to a high value and not for all tables. Usually setting it to 2 is enough.

It should only be increased for tables (blocks) that are very full and where the same block is accessed concurrently from different sessions. A strong indicator is the wait event “enq: TX – allocate ITL entry“. If this wait happens frequently, then you want to increase the available ITL slots by increasing INITRANS.

See this post by Arup Nanda for an excellent description about ITLs and ITL waits.
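
A quick way to check whether a segment actually suffered from ITL waits is v$segment_statistics. This is a sketch; it needs the appropriate dictionary privileges.

-- segments that encountered ITL waits since instance startup
select owner, object_name, object_type, value as itl_waits
from v$segment_statistics
where statistic_name = 'ITL waits'
and value > 0
order by value desc;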

How to change

3 things need to be done.

  1. The table parameter needs to be changed. This setting only influences new table extents; it will not modify any existing extents.
  2. The existing extents need to be changed by a MOVE operation (into the same tablespace) so that they pick up the new setting.
  3. During the MOVE all indexes on this table will become UNUSABLE, so they have to be rebuilt.
-- increase INITRANS for the table
alter table myTable initrans 4;

Since Oracle 12.2, steps 2 and 3 can be done with a single command.

-- Rebuild the table including indexes
alter table myTable move update indexes;

Note the UPDATE INDEXES addition to the alter table command.

In older DB versions it had to be done step by step

-- Rebuild the table including indexes
alter table myTable move;

-- check for unusable indexes
SELECT table_name, index_name, tablespace_name
FROM   user_indexes
WHERE  status = 'UNUSABLE'
order by table_name, index_name;

-- prepare a index rebuild statement
SELECT 'alter index '||index_name||' rebuild;'
FROM   user_indexes
WHERE  status = 'UNUSABLE'
AND    table_name = 'MYTABLE';

--> grab the result and run the index rebuild commands

-- rebuild indexes
alter index MYTABLE_FK04_IX rebuild;
alter index MYTABLE_FK05_IX rebuild;
alter index MYTABLES_PK rebuild;
alter index MYTABLE_UK01 rebuild;
alter index MYTABLE_UK02 rebuild;
alter index MYTABLE_FK01_IX rebuild;
alter index MYTABLE_FK02_IX rebuild;
alter index MYTABLE_FK03_IX rebuild;
alter index MYTABLE_FK06_IX rebuild;
alter index MYTABLE_FK07_IX rebuild;
alter index MYTABLE_FK08_IX rebuild;
alter index MYTABLE_FK09_IX rebuild;
alter index MYTABLE_FK10_IX rebuild;
alter index MYTABLE_FK11_IX rebuild;

Rebuilding tables in 18c is so easy now!

Sven Weller

I didn’t mention that you could MOVE the table as an online operation too. However I would avoid having other active sessions working with the table at the same time.
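
For reference, the online variant is a small syntax addition (12.2+; myTable is a placeholder as above):

-- rebuild the table online, keeping indexes usable and the table available for DML
alter table myTable move online update indexes;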

“greener” APEX apps

This started as a fun discussion in a syntegris-internal chat group. But it got me thinking:

Is there a way to make APEX applications more energy efficient?

I came up with a set of ideas and options that have the potential to save energy. Not all options are useful and not all options have a measurable effect. Some are very controversial. The ideas presented here are far from complete.

I added my thoughts to a kialo discussion. Feel free to enter your own arguments there. This is my first kialo discussion – I want to try it out. So please be kind. 😉

I will try to add any interesting considerations that are in the kialo discussion also to this post.

From a very high level view point, we have three areas where we can influence the amount of energy that is used: server – network – client.

server side considerations

Using less CPU means using less energy. APEX applications should avoid wasting CPU.

serve APEX files from the Oracle CDN

A content delivery network provides common files from a server that is geographically close to the client. This also means that the company webserver will not provide the file but the CDN server will. It is possible to configure APEX to get its static files (like jquery, core apex css, etc.) from the CDN.

https://blogs.oracle.com/apex/announcing-oracle-apex-static-resources-on-oracle-content-delivery-network

PRO: The company web/app server will need less energy when the static files are delivered by the CDN.

CON: For local clients the time to get the files from the CDN will be longer. Which also means that energy is wasted somewhere. Probably on the network. Maybe on the firewall.

PRO: Resources (images, css, js files) that are identical for multiple applications, even from different host URLs, will be cached only once in the client’s browser if they are served from the same CDN.

ORDS should run on the same machine as the database.

PRO: One less server is one less server.

CON: Servers nowadays are mostly virtualized containers running on the same hardware.

ORDS standalone is better than ORDS on Tomcat

A similar consideration could be made for Tomcat vs. GlassFish or Tomcat vs. WildFly (formerly known as JBoss).

PRO: ORDS standalone runs with the Jetty Webserver. Jetty is considered to be very lightweight. Which means it uses less RAM and probably less CPU.

So far I couldn’t test this theory, but there should be a way to measure the impact of ORDS on the different application servers.
Here is a very analytical paper that studies the implications of running java based applications on Jetty.
https://hal.inria.fr/hal-00652992v2/document

Using more disk space does not correlate to energy consumption the same way as CPU does. Disks are cheap and they don’t need more energy to store more data.

CON: More data that needs to be transferred and backed up will inevitably need more energy.

PRO: Powered down disks keep the data regardless of how much they store. 4 GB or 5 GB is not a difference to them.

CON: Even a powered down hard drive will still use a little energy. https://superuser.com/questions/565653/how-much-power-does-a-hard-drive-use

Avoid pagination schemes that include Z (x of y from z)

PRO: A pagination scheme with Z means the query is rewritten to include the analytical row_number() over () function. This also means that more data needs to be read from disk into memory, just to know how many potential rows there are.

CON: For smaller tables a multi-block read will load the whole table into memory anyway. Counting or numbering the rows will then have no noticeable impact on energy consumption.
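
To illustrate the PRO argument, here is a rough sketch of the kind of wrapping such a pagination scheme requires – not the exact SQL that APEX generates, and the table and bind names are made up:

-- every row must be numbered before the first page can be returned
select *
from (select t.*, row_number() over (order by id) apx_rn
      from some_table t)
where apx_rn between :first_row and :last_row;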

client side considerations

In general a tablet uses less energy than a laptop which uses less energy than a desktop PC. So should APEX applications be build for tablets first and foremost?

CON: It does not make sense to exclude users that prefer a certain screen size.

CON: We should make the whole screen available to the end user. More information on a single page will mean less need to navigate to other pages. Which potentially saves energy.

PRO: APEX applications should be carefully tested and optimized for usage on a tablet. If they work really well on a smaller screen, then users will not feel forced to switch to another device when using the app.

APEX 19 has a dark mode. Developers should use dark mode and dark themes to save energy.

The idea is that a monitor uses less energy to show “blackness” than to show “whiteness”.

PRO: That seems to be true. See this fairly recent recommendation by Google: https://www.xda-developers.com/google-wants-developers-to-add-dark-themes-to-save-battery-life/

  • Brightness affects power usage, and battery life, in a mostly linear fashion.
  • A dark theme can reduce battery usage, even with max brightness, by up to 63% on AMOLED displays.
  • Pixel color value can also affect power usage quite a bit, with white being the most power-hungry color.

Classic Reports use less energy than Interactive Reports (or Interactive Grids)

PRO: A classic report will be created on the webserver and sent as static html to the browser. An interactive report will add a lot of javascript functionality to the data. Which means all those JS files need to be loaded and some are executed during page load.

CON: Interactive Grids have this lazy loading pagination scheme. Data will only be fetched when the end user requests to see more data by scrolling down.

PRO: With APEX 18.2 we also get the option to create a column toggle report. This is slightly more flexible than a classic report without all the interactions made available by an IR.

network traffic

The computers that run the network are there anyway. However we can speculate that by sending less information over the internet, fewer power-hungry computers would be needed as internet nodes.

Decrease image size. Provide smaller images depending on screen resolution.

PRO: Smaller images allow faster loading times, which positively affects network traffic and also client rendering times. This is especially noticeable on low-bandwidth devices like mobile phones.

CON: There is a huge initial effort to create and provide images for all sizes. It is highly doubtful that this initial energy investment will ever be offset by the energy savings from smaller network traffic.

How do web applications build with APEX compare to other web applications?

APEX applications have a tendency to consume data in a very efficient way.

The overhead for communication with the database is less than for other types of web applications, like PHP.

CON: Not all applications are data centric.

PRO: There is no additional network between the application layer and the database layer.

CON: With APEX 18 and even more so with APEX 19, reports can be built on top of a web service. The web service provider would be the same no matter what kind of framework was used to build the application. In many cases a pure javascript framework (like Oracle JET) might be more efficient at consuming those REST based web services than an APEX application.

PRO: Because the APEX repository already resides in the database, all the optimizations for database logic are used while preparing a page for rendering.

Result?

What started out as a “Schnapsidee” (a crazy idea) quickly got me thinking. There are actually quite a few ways to influence the energy consumption of an APEX application. And this is true during development but also for running the application.

EBR – how to run a background job in the correct edition

Introduction

Edition based redefinition (EBR) is a no-extra-cost feature of the Oracle database. It allows having multiple versions of the same PL/SQL based code (packages, views, triggers, object types, synonyms, …) in the database at the same time.

Code that starts a scheduled background job should execute this background job in the same edition as the code itself.

Here is how to do it using job classes.

TL;DR.

First create a job class that connects to a specific edition via a service name

begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_DEV$ALPHA',
    service => 'DEV_ALPHA'
  );
end;
/

Then start the background job using that job class.

declare
  v_jobclass_prefix varchar2(10)  := 'JCED_';
  v_jobname         varchar2(100) := 'myJob';
begin
  dbms_scheduler.create_job(job_name            => v_jobname,
                            job_type            => 'STORED_PROCEDURE',
                            job_action          => 'myPkg.doSomething',
                            job_class           => v_jobclass_prefix ||sys_context('userenv','current_edition_name'),
                            number_of_arguments => 0,
                            start_date          => sysdate,
                            enabled             => true,
                            auto_drop           => true);
end;
/

Problem description

This article assumes some basic understanding of the edition based redefinition (EBR) feature of the oracle database.

Scheduled jobs do run in the background in a separate new session. Any new session runs in the default edition of the database, unless it specifies the edition at the time the session is created.

It is possible to switch the edition on demand during a session, but this is not recommended. Chris Saxon misused this possibility for one of his SQL Magic tricks. Not everything that is possible should be used.

Chris Saxon – SQL Magic

Solution

A scheduled job can use a job class. A job class can be set to use a database service. A database service can be set to connect to a specific edition.

When using EBR it is a recommended practice to create a service for each edition and possibly another one for the default edition. Information on how to create a service for an edition can be found in one of my older blog entries: working with editions – part 2. Also Oren Nakdimon recently published a very nice article about it: using services for exposing new editions.

The following assumes that we have three editions ORA$BASE, TST$BETA and DEV$ALPHA. It also assumes that we have created matching database services ORA_BASE, TST_BETA and DEV_ALPHA and that those services are running.
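
On a single-instance database, creating and starting such an edition-bound service could look like this minimal sketch using dbms_service (on RAC the services would typically be managed with srvctl instead):

-- run as DBA: bind service DEV_ALPHA to edition DEV$ALPHA and start it
begin
  dbms_service.create_service(
    service_name => 'DEV_ALPHA',
    network_name => 'DEV_ALPHA',
    edition      => 'DEV$ALPHA');
  dbms_service.start_service('DEV_ALPHA');
end;
/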

create the job classes

For each edition we create a job class JCED_<edition_name> (JCED_ORA$BASE, JCED_TST$BETA and JCED_DEV$ALPHA). This can be done with DBA or with the MANAGE SCHEDULER privilege.

Job classes also allow prioritizing between jobs and connecting them to a resource group.

I’m using a prefix JCED (Job Class EDitioned) to tag the classes. You can use any name you want, but it is advisable to have some common identification for the classes. The name also allows me later to find the correct job class, depending on the current edition of the session.

-- Run as DBA
begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_ORA$BASE',
    service => 'ORA_BASE'
  );
end;
/

begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_TST$BETA',
    service => 'TST_BETA'
  );
end;
/

begin
  dbms_scheduler.create_job_class (
    job_class_name => 'JCED_DEV$ALPHA',
    service => 'DEV_ALPHA'
  );
end;
/

Then we need to make those classes available to the user that will later create the scheduled jobs.

grant execute on sys.JCED_ORA$BASE to mySchema;
grant execute on sys.JCED_TST$BETA to mySchema;
grant execute on sys.JCED_DEV$ALPHA to mySchema;

It is also possible to allow a schema to use any class.

grant execute any class to mySchema;

Job classes do not belong to any schema. They are always created in SYS. Something to keep in mind, for example when exporting/importing schemas to another database.
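
A quick way to check which job classes exist and which service they use (assuming access to the all_* dictionary views):

select job_class_name, service, comments
from all_scheduler_job_classes
order by job_class_name;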

programmatically create a job

The following function creates a background job using the current edition. The job calls a procedure with 3 arguments.

The function is part of some larger package myPackage. It is assumed that the doSomething procedure is also part of the same package.

/* constant declaration section in package body */
  g_jobclass_prefix     constant varchar2(10) := 'JCED_';


/* modules */

function createBackgroundJob(p_id in number, p_starttime in timestamp) return varchar2
is
  co_modul_name CONSTANT VARCHAR2(96) := $$PLSQL_UNIT || '.createBackgroundJob';
  v_job_nr binary_Integer;
  v_jobname varchar2(100);
begin

  v_jobname := 'myJob_'||to_char(p_id);
  dbms_scheduler.create_job(job_name            => v_jobname,
                            job_type            => 'STORED_PROCEDURE',
                            job_action          => $$plsql_unit || '.doSomething',
                            job_class           => g_jobclass_Prefix||sys_context('userenv','current_edition_name'),
                            number_of_arguments => 3,
                            start_date          => p_starttime,
                            enabled             => FALSE,
                            auto_drop           => true);
  dbms_scheduler.set_job_argument_value(job_name=>v_jobname, argument_position=>1, argument_value=>p_id);
  dbms_scheduler.set_job_argument_value(job_name=>v_jobname, argument_position=>2, argument_value=>'RUN QUICK');
  dbms_scheduler.set_job_argument_value(job_name=>v_jobname, argument_position=>3, argument_value=>myPackage.g_trace_level);

  return v_jobname;
end createBackgroundJob;

The name of the job class is calculated using the name of the current edition by sys_context('userenv','current_edition_name').

start the background job

The createBackgroundJob function only prepares the background job. To run it, we need to call the create function and enable the job afterwards.

declare
  job varchar2(128);
begin  
  job := myPackage.createBackgroundJob(1, localtimestamp);
  dbms_scheduler.enable(job);
end;
/

Conclusion

Using job classes is an easy way to start a scheduled job in the same edition that the currently running session is in.

Further remarks

Justification

I used this logic to split a plsql heavy task into several worker tasks that could run in parallel. I wanted to make sure that the parallel execution was done using the same edition as the thread that started it.

Restrictions

I did not test if it is possible to run lightweight jobs in a specific edition. There seems to be no obvious restriction that prevents using a job class also for lightweight jobs.

Since job classes belong to SYS, sometimes they need to be referenced with the schema name in front: sys.JCED_DEV$ALPHA.

It is not recommended to switch the edition from inside some code, because that code itself is running in a specific edition (the default one). This is very hard to control and to do properly.

Jobs are created with job names in upper case. In some cases we need to make sure to use the upper-cased job name to find/handle the correct job.
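
For example, when checking the state of the job created earlier, use the upper-cased name:

select state
from user_scheduler_jobs
where job_name = upper('myJob_1');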

Error handling

Error handling needs special care when combining scheduled jobs and editions. For example, I make sure that proper instrumentation is in place and that the log entry also includes the edition in which the error happened. In some situations, for example if the service is dropped, the session is still created, but falls back to the default edition. We want to be sure that we notice any issues arising from such a situation.
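
A minimal sketch of such edition-aware instrumentation – log_error stands in for whatever logging framework is in place:

begin
  myPkg.doSomething;
exception
  when others then
    -- record the edition the job actually ran in
    log_error('background job failed in edition '
              || sys_context('userenv','current_edition_name')
              || ': ' || sqlerrm);
    raise;
end;
/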

18c scheduler EXTERNAL_SCRIPT with error ORA-27369: job of type EXECUTABLE failed with exit code: No child processes

After an upgrade to oracle database version 18.3 from 12.2 I encountered this error for a scheduled job of type EXTERNAL_SCRIPT. The job failed.

error code: 10 – Command not found

The view all_scheduler_job_details had additional information.

ORA-27369: job of type EXECUTABLE failed with exit code: No child processes

Last year I wrote about using scheduled job type EXTERNAL_SCRIPT instead of EXECUTABLE. Even the simplest demo code block from that time was not running anymore in 18c.

Here is a very simple shell script that needs to run

#!/bin/bash
echo "Job ok!";

I execute this script using a scheduled job. The job runs as user oracle (via credential ORACLE_OS_CREDS).

-- First test a script that should not produce an error
declare
v_jobname varchar2(200);
v_good_script clob;
begin
v_jobname := upper('DEMO_SCHEDULED_EXTERNAL_SCRIPT_NO_ERROR');
-- the following line breaks are important.
-- Do NOT remove them, they are part of the linux script.
v_good_script := '#!/bin/bash
echo "Job ok!"';

-- create and run the job
dbms_scheduler.create_job(job_name => v_jobname,
job_type => 'EXTERNAL_SCRIPT',
job_action => v_good_script,
credential_name => 'ORACLE_OS_CREDS',
enabled => true,
auto_drop => true
);
end;
/

After several failed trials that included checking credentials, OS-level privileges, extproc settings, etc. I found a simple reason.

We are not allowed anymore to set the shell in the first line. So the script needed to change to just:

echo "Job ok!";

In case you didn’t know, this first line is called a shebang.

I didn’t find any notice in the documentation about that. So it might be a bug or some intended stealth change.
Oracle 12c doc
Oracle 18c doc
Both say the same thing.

EXTERNAL_SCRIPT – This specifies that the job is an external script that uses the command shell of the computer running the job. For Windows this is cmd.exe and for UNIX based systems the sh shell, unless a different interpreter is specified by prefixing the first line of the script with #!.

Oracle Documentation

It is definitely not working in my environment. In case you encounter the same issue, here is my suggested solution.

To circumvent the issue I added some conditional compiling so that the same code runs in 12.2 and in 18.3.
In 12.2 it adds the shell call as the first line of the script – in 18c it avoids it.
I didn’t test which shell is really used to run the script. The difference between bash and sh is usually not relevant for my scripts.

-- First test a script that does not produce an error
declare
v_jobname varchar2(200);
v_good_script clob;
begin
v_jobname := upper('DEMO_SCHEDULED_EXTERNAL_SCRIPT_NO_ERROR');
-- conditional compile used because shebang not allowed anymore in 18c
v_good_script :=
$IF DBMS_DB_VERSION.VER_LE_12_2 $THEN
'#!/bin/bash
'||
$END
'echo "Job ok!"';
dbms_scheduler.create_job(job_name => v_jobname,
job_type => 'EXTERNAL_SCRIPT',
job_action => v_good_script,
credential_name => 'ORACLE_OS_CREDS',
enabled => false,
auto_drop => false
);
-- run the job
dbms_scheduler.enable(v_jobname);
end;
/

I also updated my older blog post “dbms_scheduler 12c – run EXTERNAL_SCRIPT” to consider those findings.

You might also be interested in a post by Markus Fletchner: File ownership after patching or relinking Oracle RDBMS software

He describes issues with scheduled jobs of type EXECUTABLE after patching the database to 18, because of changed permissions. I first suspected I had the same problem, but it turned out differently. Still an interesting read.

I hope this post will help some others to waste less time, when encountering this error.

APEX Instrumentation and the SQL Developer

In general I do instrument all my code, but usually I have plsql code that calls some framework like logger. Because of that I rarely add extra APEX instrumentation code. Any trace or debug information is already written into the logger table.

Sometimes it makes sense to add calls to apex_debug too. Especially if this part of the logic is checked frequently using the APEX built-in debug functionality.

APEX 19.1 developer toolbar

Debugging in APEX has a huge advantage. As a developer it is very easy to do and to access the output.

The apex_debug package is available at least since APEX 5. Among others it includes a procedure enter to store parameters. It is recommended to call this at the beginning of your modules and pass the combination of parameter names and values to the procedure.

Recently I added apex_debug.enter to several of my modules. Here are a few tricks I’d like to share.

Tip 1: Debug level 5

Apex_debug has several levels. The default level is 4 (info).

If you want to see the information stored with apex_debug.enter, you need to show at least level 5.

The level can be set in the url. Instead of YES, set it to LEVEL5.

f?p=App:Page:Session:Request:Debug:ClearCache:itemNames:itemValues:PrinterFriendly

In APEX I use the enter procedure to store page item values that are used inside plsql blocks but also to see what is going on at the database level. Here is an example

before header plsql process

The process first calls apex_debug.enter. Then the procedure pk_setze_defaults.prepare_topbar calls apex_debug.enter a second time and stores the supplied parameter values.
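
Such a process body could look roughly like this sketch – the parameter of pk_setze_defaults.prepare_topbar is made up for illustration:

begin
  -- record the page item state at the time of processing
  apex_debug.enter(p_routine_name => 'P1 before header process'
                  ,p_name01 => 'P1_MASTER_ID', p_value01 => :P1_MASTER_ID);
  pk_setze_defaults.prepare_topbar(p_master_id => :P1_MASTER_ID);
end;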

And this is how the View Debug output can look:

Show debug

Useful to see the state of page items at the time of processing. And even more interesting is to see which procedures were called and which parameters had been used.

Tip 2: avoid hardcoding the module name

The first parameter of the enter procedure is p_routine_name.

Instead of hardcoding the name we can call utl_call_stack to return the name of the module. You need to have at least database version 12c to use it.

utl_call_stack.concatenate_subprogram( 
    utl_call_stack.subprogram(1))

Utl_call_stack.subprogram gives us access to the module names inside the call stack. Number 1 in the stack is always the current module. Subprogram returns a collection which holds the package name and the submodule name – sometimes multiple submodule names. The concatenate_subprogram function translates this collection into a readable string (divided by dots).

Example: Instead of hardcoding the module name ‘post_authenticate’

apex_debug.enter(p_routine_name=>'post_authenticate'
                ,p_name01 =>'p_user', p_value01 => p_user);


I use utl_call_stack to have the database fetch the module name at runtime

apex_debug.enter(p_routine_name=>utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(1))
                ,p_name01 =>'p_user', p_value01 => p_user);

The result:

PK_APEX_UTIL.post_authenticate p_user=>Sven

The first part is the module name (incuding package name), the second part is a list of parameter=>value combinations.

There are some side effects to it. The name is fetched at runtime, instead of decided at compile time.¹ In some circumstances (module inlining) a procedure can be rewritten by the plsql optimizer, so that the name of the module disappears. Utl_call_stack would then return the name of the module into which the code was inlined.

The runtime call is also slightly slower than the literal value.

Inside a plsql process in APEX the name will be __anonymous_block. Which is correct. APEX executes those blocks using dbms_sql. The name of the process is not known inside the block. But it can be seen in the APEX debug view output one line before the anonymous block.

So the advantage of not hardcoding the module name must be weighted against the possible side effects.

If you want the procedure name, but not the package name, then the following code will help. It returns only the name of the current (innermost) submodule:

utl_call_stack.subprogram(1)(utl_call_stack.lexical_depth(1)+1)

Tip 3: use the newest SQL Developer version (18.4)

SQL Developer 18.4 has the built-in ability to grey out instrumentation code. I like this feature a lot. It allows the trained developer’s eye to quickly scan plsql code – without resting too much on the less important bits and pieces.

What surprised me is that this also includes apex_debug.

Here is an example screenshot. Notice how the whole call to apex_debug is greyed out.

SQL Developer 18.4 – PL/SQL code colors

Other packages that are greyed out are dbms_output, log and logger.

And we can add our own instrumentation framework to it. Which leads me to tipp 4.

Tip 4: configure SQL Developer – add your instrumentation framework

It is a bit hard to find, but the color rule PlSqlCustom2 is where we can add our own package. Search for color in the preferences to find the entry. In one of my projects the instrumentation package is called pk_logging. So I add it like the screenshot shows.


Tools / Preferences => Code Editor / PL/SQL Syntax colors

And this is how the sample result looks.

SQL Developer 18.4 – PL/SQL code colors – enhanced

Tip 5: use snippets

Snippets are a nice little feature in SQL Developer. And you can add your own useful code snippets to it. I added a snippet for apex_debug.enter

SQL Developer – snippets view

Once implemented I simply double click the snippet and it adds the following code block.

    apex_debug.enter(p_routine_name => utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(1))
                    ,p_name01=>'XXX',p_value01=>xxx
                    ,p_name02=>'YYY',p_value02=>yyy
                    ,p_name03=>'ZZZ',p_value03=>zzz
                    );

If you like you can download this snippet from the SQL Developer exchange platform (https://apex.oracle.com/pls/apex/f?p=43135:16:0::NO:RP,16:P16_ID:1141).

But it is easier just to create your own snippet and copy the code from here.

Conclusion

Make use of the developer tools you have. Use apex_debug, use SQLDeveloper. Think about your instrumentation framework. Make it as easy to use as possible.

Further readings

Footnotes

1. In APEX compile time and runtime of a PLSQL process are almost identical. The PLSQL process is dynamically compiled at runtime.

Basic SQL: All about sequences

Intro

There are still a lot of misunderstandings about Oracle sequences. Sometimes even experts tell you things about sequences that are easy to misunderstand, especially if we look into the details. The following post wants to give a detailed overview about what are sequences, why they work as they do, and how we should use them.

There are also a lot of parameters that the sequence object has and that you can use to tweak the behaviour. I will cover the most common things here.

Wording

Many of the misunderstandings come from how we use the word “sequence”. It can mean several slightly different things.

meaning a) The sequence object in the database aka the number generator
meaning b) the number value that is retrieved via mySeq.nextval
meaning c) an attribute for a list of numbers, stored typically in an ID column
“This list is in sequence” often means that we have an ordered list of numbers without gaps (math: monotonically increasing integer values).

For the remainder of the document I will try to always make clear which meaning I am referring to. The relevant words will be written in italics to hint at the specific interpretation in that sentence. In cases where I say “sequence” without additional specification details, I will mean the sequence object.

Purpose

The most common sequence usage is as technical values for ID columns. A typical ID column is a surrogate key. Opposed to a natural key, a surrogate key has no intrinsic meaning. Its only use is to identify (=ID) a database record in a table. No intrinsic meaning also implies that we can not use this ID value to make business decisions dependent on it.

For example the following sentence should be considered a wrong deduction.
“Employee ID=17 was hired before Employee ID=26 because he/she has a lower ID”.

If we want to make qualified statements, then we must add the needed information to the data. For example add a column “hire_date”. Then we can use it to deduce when an employee was hired and what the order among different employees is.

The main advantage of a surrogate (meaningless technical) key is that the database can use it to ensure referential integrity. And this integrity rule is ensured even if something changes with regards to the business key. Typically business keys do not change. But if it happens, then the relationship is ensured by the foreign key still pointing to the surrogate key. For example we might have an INVOICE table. The business key might be the invoice number. In general this number is immutable, however it could be that there was some typo or scanner fault while the invoice was registered into the system. Using a surrogate key it is possible to change this invoice number without having to change all dependent records (like invoice positions) as well.

One of the best ways to supply values for such a surrogate key column (ID) is to use a sequence object and call the NEXTVAL function (pseudocolumn) on it. We can do that with a database trigger, as an identity column or directly in an insert statement.

 

Usage

standard usage

The standard usage of a sequence simply is to provide values for an ID column in the most performant way.

If you are new to the concept of Oracle sequences, then I suggest to go to livesql.com and try out the next few examples there by yourself. Experienced developers might want to skip those basic examples.

A) sequence + nextval on insert

First create a sequence using all default settings. We then use this sequence to provide ID values for our super-employees.


create table super_emp
(id number primary key,
   first_name varchar2(100),
   last_name varchar2(100),
   hire_date date);

create sequence emp_seq;

Then call nextval directly in an insert statement


insert into super_emp (id, first_name, last_name, hire_date)
values (emp_seq.nextval, 'Peter', 'Parker', trunc(sysdate));

insert into super_emp (id, first_name, last_name, hire_date)
values (emp_seq.nextval, 'Clark', 'Kent', trunc(sysdate));

1 row inserted.

1 row inserted.

The NEXTVAL pseudocolumn was used directly in the values section of the insert statement.

B) before row insert table trigger

Create a table trigger that fires during insert (pre 12c solution)

create or replace trigger trg_emp_bri
  before insert on super_emp
  for each row
begin
  if inserting then
    if :NEW.ID is null then
      :NEW.ID := emp_seq.nextval;
    end if;
  end if;
end;
/

The Oracle SQL Developer has a very nice wizard that helps to quickly create such a trigger. The table context menu (right click) has an entry to create a PK trigger based on a sequence. It creates a trigger very similar to the one above (I removed a select from dual in favour of a direct assignment).

Then insert into the table using either a NULL value or without the ID column.


insert into super_emp (id, first_name, last_name, hire_date)
values (null, 'Tony', 'Stark', trunc(sysdate));

insert into super_emp (first_name, last_name, hire_date)
values ('Bruce', 'Wayne', trunc(sysdate));

1 row inserted.

1 row inserted.

This is very nice. The application code that does the insert does not need to bother with the name of the sequence.

The trigger fires once FOR EACH ROW that is inserted. The code executes slightly BEFORE the row data is inserted. Before row triggers are typically used to set default values for columns or do some more complicated checks. After row triggers also exist. They are usually used for monitoring purposes, like writing data into an audit trail.

C) Use the sequence in the column definition (since 12c)

Since 12c we have two new options. Create a column AS an IDENTITY column or set the default value for the column to sequence.NEXTVAL. Both options can be configured to work only ON NULL. In case of an identity column, Oracle will automatically create a sequence. More about this in the chapter “identity columns”. Here is an example using the default setting.

The table trigger from B) is not needed anymore, so we can drop it.

alter table super_emp modify id default on null emp_seq.nextval;

drop trigger trg_emp_bri;

Then run the inserts.


insert into super_emp (id, first_name, last_name, hire_date)
values (null, 'Diana', 'Prince', trunc(sysdate));

insert into super_emp (first_name, last_name, hire_date)
values ('Steve', 'Rogers', trunc(sysdate));

1 row inserted.

1 row inserted.

Before 12c it was not possible to use pseudocolumns or non-deterministic functions like sysdate as a default value for a column. With 12c this is possible now. The result is the same as with a before row trigger, but usually it is noticeably faster when we insert multiple rows.

 

Check the results

select id, first_name, last_name from super_emp;

ID FIRST_NAME LAST_NAME
1 Peter Parker
2 Clark Kent
3 Tony Stark
4 Bruce Wayne
5 Diana Prince
6 Steve Rogers

All inserts were done successfully. All three methods work.

 

18c create sequence parameters

create sequence syntax diagram (18c)

Some basic stuff first

Here we go through the different parameters. Behind some of those are very complex concepts. If so, those concepts are explained in a later section. This basic section tackles how to set a parameter and the immediate effects of setting or not setting it.

INCREMENT BY vs. START WITH

START WITH says what the very first value will be. It can be negative.

INCREMENT BY says how the next value will be calculated. It can also be negative but not 0.

The syntax diagram is slightly misleading. It gives the impression that we can only specify one of them during creation: either INCREMENT BY or START WITH, but not both. This is not true; we can create a sequence and specify both. The default for both is 1.


create sequence testseq increment by 10 start with 2;
select testseq.nextval from dual connect by level <= 3;

NEXTVAL
2
12
22

Other parameters like CYCLE and NOCYCLE cannot both be specified at the same time. The syntax diagram is correct for those.

For the remainder of this document, we assume the increment is always 1 (unless clearly mentioned otherwise).

Note that we can not alter the START WITH value, but we can alter the INCREMENT BY.

Hint: The undocumented RESTART clause allows to set a new START WITH value. See section about “How to reset a sequence”.
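
For example (the RESTART variant is undocumented, so use it with care):

-- supported: change the increment
alter sequence testseq increment by 5;

-- undocumented: reset the sequence to a new start value
alter sequence testseq restart start with 1;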

MAXVALUE and MINVALUE

Typically we don’t have the need to set those two parameters, the defaults are good.

Facts

  • MAXVALUE and MINVALUE specify the highest and the lowest possible value a sequence can have. MAXVALUE must be greater than MINVALUE.
  • NOMAXVALUE (10^28 - 1) is the highest possible value.
    NOMINVALUE (-(10^27 - 1)) is the lowest possible value.
  • The default for MAXVALUE is NOMAXVALUE.
    The default for MINVALUE is 1.
  • The MINVALUE cannot exceed the value defined by START WITH; otherwise we get an error:
    ORA-04006: START WITH cannot be less than MINVALUE

CYCLE vs. NOCYCLE (default)

CYCLE specifies that the sequence, after it reaches the MAXVALUE, will start again with the MINVALUE (not with the START WITH value). The theoretical maximum value of a sequence is 28 digits. It is a bit less with scalable sequences.

Nowadays there is no compelling reason to use CYCLE.

I believe in the old days (1990 – Oracle 7) disk space was still a premium commodity. Therefore number columns were often limited to a low number of digits (5 or 6). Under certain specific circumstances a cycling sequence then might have been useful to prevent numeric or value errors. Those days are gone.
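Still, here is a minimal sketch of the cycling behaviour (hypothetical sequence name; note that CYCLE requires a cache smaller than one full cycle, hence NOCACHE):

create sequence cycleseq maxvalue 3 cycle nocache;
select cycleseq.nextval from dual connect by level <= 4;

This should return 1, 2, 3 and then 1 again.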

CACHE (default) vs. NOCACHE

Caching a sequence is a huge performance feature. The default setting is CACHE 20, which is good for most scenarios. It means 20 sequence values are read from shared memory (SGA) instead of from disk, and only after those 20 values are used up is the dictionary updated once more.

See the section about caching considerations for more information about this very important parameter.

Demo:

create sequence mySeq cache 1000;
select sequence_name, cache_size, last_number
from user_sequences
where sequence_name ='MYSEQ';
SEQUENCE_NAME CACHE_SIZE LAST_NUMBER
MYSEQ         1000       1

select myseq.nextval from dual connect by level <= 3;
NEXTVAL
1
2
3

select sequence_name, cache_size, last_number
from user_sequences
where sequence_name ='MYSEQ';
SEQUENCE_NAME CACHE_SIZE LAST_NUMBER
MYSEQ         1000       1001

After this, we still have 997 cached sequence values.

The default value of CACHE 20 is a kind of sweet spot for OLTP purposes. Only when you need to create a very large number of sequence values in a short time should you consider increasing the cache. This typically happens during data load situations. Don’t forget to lower the cache value again after the data load is over.
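A minimal sketch of that approach (hypothetical sequence name and cache sizes):

-- before the data load
alter sequence myseq cache 10000;

-- ... run the data load ...

-- afterwards, back to the OLTP sweet spot
alter sequence myseq cache 20;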

 

ORDER vs. NOORDER (default)

It is a common misconception that we need to specify ORDER to get ordered values from a sequence object. The sequence object will always produce ordered values! Oracle did not implement some kind of random mechanism. sequence.nextval will always give you the last value + the increment. Any kind of “randomness” comes from other things: you have no control over who fetched the last value (multi user), over when that value was inserted (seq.nextval call < insert time < commit time), and sequence caches can be lost.

The ORDER setting is only relevant in a RAC (Real Application Clusters) environment. And even there it should always be NOORDER (the default). Read the chapter about the performance considerations for an explanation.

Remember

ORDER on RAC = slow

ORDER on non-RAC = no effect

easy.

 

KEEP vs. NOKEEP (default)

This is a switch that most database developers will never need. It might be more relevant for Java developers.

In 12.2 a new feature called Application Continuity was introduced. It allows a certain workload on the database to be captured and replayed. It comes with the license options for RAC or Active Data Guard.

The problem is that a call to sequence.nextval would deliver a new value. This is not wanted for replay purposes. Altering a sequence to KEEP makes it provide the same sequence value during the replay.

From the appendix of Oracle’s white paper about Application Continuity:

Mutable Functions
Mutable functions are functions that can change their results each time that they are called. Mutable functions can cause replay to be rejected because the results visible to the client can change at replay.
Consider sequence.NEXTVAL that is often used in key values. If a primary key is built with a sequence value and this is later used in foreign keys or other binds, the same function result must be returned at replay.
Application Continuity provides mutable value replacement at replay for Oracle function calls if GRANT KEEP or ALTER.. KEEP has been configured.
If the call uses database functions that support retaining original mutable values, including sequence.NEXTVAL, SYSDATE, SYSTIMESTAMP, and SYS_GUID, then the original values returned from the function execution can be saved and reapplied at replay. If an application decides not to grant mutable support and different results are returned to the client at replay, replay for these requests is rejected.
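As a sketch, these are the two ways to configure mutable support for a sequence (hypothetical names; the surrounding Application Continuity setup is not shown):

-- either alter the sequence itself
alter sequence mySeq keep;

-- or grant mutable support to a specific user
grant keep sequence on mySeq to app_user;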

Important to remember is that the KEEP parameter during creation has nothing to do with keeping the sequence pinned in the SGA. An example of how to do that is in the “discussion about gapless IDs” section.

SCALE vs. NOSCALE (default)

SCALE is a very interesting new setting. It allows us to improve the clustering factor of the index on the ID column. More details about that in the performance section.

Using SCALE adds the session ID (SID) to the beginning of the sequence value.

SCALE has two options, EXTEND and NOEXTEND (default). The following demo shows how they work and differ.

 

create sequence myseq;

Sequence MYSEQ created.

                               NEXTVAL
--------------------------------------
                                     1
alter sequence mySeq scale;
                               NEXTVAL
--------------------------------------
          1017670000000000000000000002

alter sequence mySeq scale extend;
                               NEXTVAL
--------------------------------------
    1017670000000000000000000000000003

For the sake of brevity I removed the “sequence altered.” messages and the “select mySeq.nextval from dual;” calls.

My session ID in this demo was 767. The 101 is the instance ID (1) + 100. So in a RAC environment, this will ensure that the values provided by different nodes will not clash. On non-RAC systems this leading part should always be 101.

NOSCALE gives us a normal sequence value of 1.
SCALE NOEXTEND gives us a sequence value of 28 digits (MAXVALUE) with a 2 at the end (last value + increment) and 101767 at the beginning.
SCALE EXTEND gives us a sequence value of 28+6 digits with a 3 at the end (last value + increment) and 101767 at the beginning.

So EXTEND adds the additional digits on top of the MAXVALUE setting, whereas NOEXTEND adds them inside the range defined by MAXVALUE.

In most circumstances – if we consider scalable sequences – we should use SCALE NOEXTEND, just to be sure that the generated value still fits into the table column. For very large tables, if there are already some extremely high values, we might need to use EXTEND, but I expect this situation to be very rare.

When is this useful? Only for cases where extreme performance matters: large or very large tables with a lot of inserts from multiple sessions (parallel inserts).

SESSION vs. GLOBAL (default)

Sequence values do not depend on a user session. Every call to sequence.nextval will give the next incremented value regardless of which session executed this. This feature ensures that nobody gets a duplicate key.

User/Session A calls mySeq.nextval 3 times and gets 1,2,3.
User/Session B calls mySeq.nextval 3 times and gets 4,5,6.
If both sessions fetch the values almost simultaneously, then A might get 1,3,4 and B might get 2,5,6. Notice that there might be gaps from the perspective of a single session, but the values are still ordered for each session.

With SESSION sequences this behaviour changes. Session A calls mySeq.nextval three times and gets 1,2,3. Session B calls mySeq.nextval three times and also gets 1,2,3. The values are not shared between sessions.

Where do we need this? Only for global temporary tables (GTT). The data in a GTT persists for the duration of a session (alternatively until commit) and then is gone. Same behaviour for the SESSION sequence – the generated sequence values only persist for the duration of the session.
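A minimal sketch of a SESSION sequence paired with a GTT (assuming 18c and hypothetical object names):

create global temporary table gtt_demo
( id  number
, txt varchar2(30)
) on commit delete rows;

create sequence gtt_seq session;

-- every session starts again at 1
insert into gtt_demo (id, txt)
values (gtt_seq.nextval, 'hello');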

 

Practical considerations

For most cases the default settings are perfect.

Only if you encounter issues (performance or an unusual number of gaps) or if your data has some special scenarios (batch ETL jobs, very large number of rows) should you start thinking about tinkering with the default settings.

The following sub chapters discuss common questions and show how to work with sequences to solve some typical tasks.

How to avoid reusing the same ID in different dbs

Sometimes we have a distributed database. Especially for global companies each region might have its own database. The data for the different regions still needs to be comparable. And sometimes the data will be consolidated or exchanged. In such cases it helps, if the ID values do not overlap.

One way to do that is to use the INCREMENT parameter. On database 1 we use a sequence object starting with 1 and an increment of 10. This will give IDs like 1, 11, 21, 31, 41, …

create sequence testseq increment by 10 start with 1;

On database 2 we use a sequence starting with 2 and an increment of 10. This will give IDs like 2, 12, 22, 32, 42, … This pattern works for up to 10 regions.

create sequence testseq increment by 10 start with 2;

The result is that those values do not overlap. There are other (and possibly better) ways to solve the situation, like sys_guid(). But this is a fairly easy and stable concept.
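For comparison, a minimal sketch of the sys_guid() alternative (hypothetical table name). Globally unique values need no coordination between the databases at all:

create table regional_data
( id      raw(16) default sys_guid() primary key
, payload varchar2(100)
);

insert into regional_data (payload) values ('some data');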

Caching considerations

If the sequence is used very infrequently, then you can set it to NOCACHE. For example if you have a staff table; I don’t expect that new personnel is hired every second. Typically it will be a few people per month (depending on the size of your company of course). For such low frequency inserts performance doesn’t matter. You can set the sequence object to NOCACHE or to a very low cache value. However if you do a large data import, consider increasing the cache size before running that data load.

Does setting a larger cache size need more SGA memory?

No.

Or to explain it in Tom Kyte’s words:

All we need to keep in the cache is:

the sequence on disk was N
the cache size is M
the current value is X

As long as X is less than N+M – we just increment X when someone calls NEXTVAL.

we do not need to keep in the cache “N, N+1, N+2, … N+M-1”, we just keep N, M and X and increment X when someone asks for a new sequence value. When X=M, we update SEQ$ and reset N in the cache.

So, cache 1000 and cache 20 take the same amount of space in the cache.

How to reset a sequence?

There are three general ways to set a sequence to a different value.

  1. Call sequence.NEXTVAL repeatedly until you reach the target value
  2. Manipulate the increment parameter using a negative increment. Call nextval once. Reset.
  3. Restart the sequence (new undocumented feature)

The first way usually is not practical. A notable exception might be if you manually added a few rows without using the sequence and you want to jump over those few values.

If you want way 1, then the CONNECT BY LEVEL clause helps to do it quickly.


select myseq.nextval from dual connect by level <= 996;

 

And here is a demo for way 2:
Preparation setup


drop sequence mySeq;
create sequence mySeq cache 1000;
set autotrace traceonly statistics
select myseq.nextval from dual connect by level <= 996;
set autotrace off
select myseq.nextval from dual;
   NEXTVAL
----------
       997

The “set autotrace traceonly” command works in SQL*Plus. I used it here to avoid printing 996 values onto the screen. It is not relevant for the demo itself.

The current value now is 997, but we want the next call to nextval to give us 1.

Now reset the sequence.

alter sequence mySeq increment by -996 nocache;
select myseq.nextval from dual;
alter sequence mySeq increment by 1 cache 1000;

After this code, the very first session that calls myseq.nextval will see 2 as the returned value.

If we really need to see 1 we also must lower the MINVALUE, because INCREMENT BY can not result in anything lower than the MINVALUE (ORA-08004: sequence MYSEQ.NEXTVAL goes below MINVALUE and cannot be instantiated).

alter sequence mySeq increment by -997 nocache minvalue 0;
select myseq.nextval from dual;
alter sequence mySeq increment by 1 cache 1000;
select myseq.nextval from dual; 

Notice that we now incremented by -997 instead of -996 and that we are calling nextval twice. We can not reset the MINVALUE to 1 during the second ALTER SEQUENCE command, because that would also violate the rules (ORA-04007: MINVALUE cannot be made to exceed the current value). The easiest solution is to let it stay at 0.

Using NOCACHE is important to avoid issues with the stored LAST_NUMBER value. Also check the INCREMENT BY and the CACHE settings before you alter the sequence. If the INCREMENT BY is different, then you need to change the above code and probably need to call nextval a second time.

In 18c we got a third option to reset a sequence – the RESTART option.


ALTER SEQUENCE mySeq RESTART;

This is currently undocumented.

Thanks to Roger Troller for making me aware of it (Blog).

I tested it a little bit further and found out two more things.

  • We can already use it in 12.2.0.1. Which makes sense, since 18c is really just 12.2.0.2.
  • And we can combine it with the START WITH clause.

So the following works!

alter sequence testseq_20 restart;
select testseq_20.nextval val1 from dual;
VAL1
1

alter sequence testseq_20 restart start with 15;
select testseq_20.nextval val1 from dual;
VAL1
15

Very convenient. This should be the preferred way to reset a sequence whenever you need to do that.

Not recommended is to drop and recreate the sequence. While this also allows us to set a new START WITH value, it has a major drawback. All references to the sequence are then broken. Especially all privileges are lost, like GRANT SELECT on #sequence to #schema.

 

Can we use an ID from a sequence to order by insert time?

Short answer: no. The order of inserts and the order of sequence values often match, but they are not guaranteed to match.

Detailed answer: Usually it works.

I very often use an ID column filled by a sequence as a second order criterion. For example I typically sort a logging table – where trace information is written – by the insert date and the LOG_ID (sequence-based PK).

order by insert_date desc, log_id desc

The insert date (if it is a DATE) is only accurate to the second. Even if it is a timestamp, there might have been multiple inserts at the same fraction of a second. The LOG_ID is a perfect second order criterion.

We can safely assume that the inserts done from the same session have ordered sequence values. There might be gaps, but the sequence values will be produced in the same order as we did the inserts. There can be ID values in between that are from different sessions. But for trace log information it usually does not matter if a different session is ordered before or after our session. However the data from one session should be correctly ordered. And this is guaranteed.

Is cycling useful?

I never had the need for cycling sequences. I firmly believe that if you think you need them, you have a much deeper problem somewhere else. It would probably be better to solve that problem instead of using a cycling sequence.

With 18c we get SESSION sequences. For some cases where CYCLE was considered in the past, a SESSION sequence might be the better choice.

Also ROWNUM and the analytic function ROW_NUMBER can be used to create consecutive values at time of select, instead of a sequence providing those values at time of insert.

Discussion of gapless IDs

This is a problem/question I often encountered: How to make a sequence gapless?

TL;DR: You don’t need to. The effort and the restrictions needed to make an ID column gapless are too high in (almost) all use cases.

The sequence object can and does provide gapless numbers. In a multi user environment we just can not reliably use the provided values to store them in a gapless way. Even in a single user environment, the stored IDs could be deleted. So one consequence of the gaplessness requirement would be to forbid delete operations.

The main point is that almost all the things that will create “holes” in an ID column are under our control. It is not the Oracle database that can not provide gapless sequences. It is the complexity of the business rules combined with performance requirements in a multi user environment, that make it almost impossible to have an ID column without potential gaps.

Performance + multi user + gapless IDs form a triangle of goals that exclude each other. We can not reach all three goals at the same time; one needs to suffer. However those goals do not react in the same way when we sacrifice one a tiny bit. So let’s investigate what happens then:

We still can not reach even good performance (instead of very high performance) if we need multi user capability and gaplessness at the same time. To ensure gaplessness we need to serialize access to the whole table. Which in turn means only one session can write into the table, and all other sessions will need to wait until that session finishes the whole transaction.

We can have very high performance and gaplessness if we only have a single user. But as soon as a second user wants to write at the same time, we need to introduce severe serialization of the whole transaction, just to ensure gaplessness. And this means performance drops immensely. Btw. this is how MS Access works. Only one user can write into the so-called database.

But we can get almost gapless IDs and still have very high performance for multiple users. Almost gapless means we sometimes might have gaps in our sequence, but this situation is rare. This is the default behaviour of Oracle sequences.

How do we get gaps in our IDs?

a) a record in our table was deleted.
b) the insert ran into an error
(remember sequence.nextval is called a tiny moment before the insert is executed).
c) Somebody called sequence.nextval but didn’t use the value.
d) The sequence cache was lost. One way this happens is if the database decides that other objects need to be in the SGA memory and the sequence wasn’t called for a longer time.

By pinning the sequence we can avoid situations where the sequence cache ages out of the shared pool. A better alternative is to size the shared pool appropriately, so that in general sequence caches will not age out of it.

execute sys.dbms_shared_pool.keep('OWNER.MYSEQUENCE','Q');

This still doesn’t guarantee gapless IDs, but for most use cases it would be good enough.
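To verify that the pin worked, we can check v$db_object_cache (assuming we have the privileges to query that view):

select owner, name, type, kept
from v$db_object_cache
where type = 'SEQUENCE'
and name = 'MYSEQUENCE';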

The Oracle docs about skipping cached numbers:

18.1 Database Admin Guide – Managing Sequences
The database might skip sequence numbers if you choose to cache a set of sequence numbers. For example, when an instance abnormally shuts down (for example, when an instance failure occurs or a SHUTDOWN ABORT statement is issued), sequence numbers that have been cached but not used are lost. Also, sequence numbers that have been used but not saved are lost as well. The database might also skip cached sequence numbers after an export and import. See Oracle Database Utilities for details.

A normal or immediate shutdown of the database will not lose sequence numbers. Instead the database will update the data dictionary (table sys.seq$) with the last used value. Unfortunately most DBAs prefer to shutdown a database using abort, since they don’t bother enough about user sessions.

Sequence Performance

Why is a sequence fast and how can we use it in the most performant way?

Oracle invented sequences with performance in mind. They provide a way to create surrogate keys in a multi user environment while minimizing serialization.

Basic working of an Oracle sequence

A sequence is just an entry in the dictionary table sys.seq$.

desc sys.seq$;

Name       Null?    Type 
---------- -------- ------------ 
OBJ#       NOT NULL NUMBER 
INCREMENT$ NOT NULL NUMBER 
MINVALUE            NUMBER 
MAXVALUE            NUMBER 
CYCLE#     NOT NULL NUMBER 
ORDER$     NOT NULL NUMBER 
CACHE      NOT NULL NUMBER 
HIGHWATER  NOT NULL NUMBER 
AUDIT$     NOT NULL VARCHAR2(38) 
FLAGS               NUMBER 
PARTCOUNT           NUMBER

The highwater column is the same as the last_number column in the view user_sequences.

When a sequence fetches a new sequence value (using .nextval) then the dictionary table needs to be read and the row needs to be updated with the new value. Now if multiple sessions do that, then one would have to wait for the other. This is called serialization. To avoid that issue, Oracle uses two clever mechanisms.

  1. The dictionary table is updated using an autonomous transaction. So the value is stored and other sessions can see it, even if the main transaction (from the user session) is not finished.
  2. The new highwater value that is stored is not the next value, but the value + the cache size. Any call to sequence.nextval will first read from the sequence cache, and only once the numbers there are exhausted will it read from the table and update it.

It is of course possible to write a similar mechanism ourselves with our own table. And I have seen projects where they did exactly that. But it is very hard to do properly, and even then it will not beat the performance of the original sequence. So you would need a very special business case to justify writing your own sequence mechanism.

Speed it up

If you aim for maximum performance there are some considerations to make.

  1. You must use a sequence cache. The cache size also plays an important role. For most OLTP tables the default setting of CACHE 20 is a very good choice. However when you do large data loads, a much larger cache size is advisable. There is a diminishing returns effect: doubling the cache does not double the performance.
  2. On a RAC you really should use NOORDER. The ORDER keyword is only relevant for Real Application Clusters. Using ORDER would try to synchronize the sequence caches over all cluster nodes. This is extremely bad for performance. Using NOORDER gives each RAC node a separate sequence cache. Which also means that an insert on node1 might get sequence value 1 and the next insert on node2 might get sequence value 1001. The third insert on node1 again would use value 2.
  3. Sequences should be used as late as possible. There is usually no need to fetch a sequence value first and then do the insert later. Use the sequence while doing the insert: either by adding it to the insert statement, or via a database trigger, or since 12c as an identity column or a DEFAULT ON NULL column setting. Using the 12c mechanics allows us to avoid the database trigger; this results in much better performance, as I have shown in a previous blog post. See the sketch after this list.
  4. Consider scalable sequences for large tables if you are on 18c already. The effect can not be seen immediately, but scalable sequences should give a better and more stable performance in the long run.
    For small and medium sized tables I expect scalable sequences to be slower than non-scalable sequences (because a bigger number needs to be stored). I didn’t test the effect, but a normal (small) sequence value only needs 2-6 bytes, whereas a scaled sequence value always needs 15 (NOEXTEND) or 18 (EXTEND) bytes. These bytes are used by the table column, the unique index that supports the primary key (PK), all foreign key (FK) columns pointing to the PK, and the indexes supporting those FKs.
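As a sketch for point 3, this is what using the sequence directly inside the insert statement looks like (hypothetical values, reusing the super_emp example):

insert into super_emp (id, first_name, last_name, hire_date)
values (emp_seq.nextval, 'Barry', 'Allen', trunc(sysdate));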

 

If you need the value of the sequence again later in your code, you can either use .currval (not recommended) or use the RETURNING clause to give you the generated ID.

best practice: returning clause

Several SQL and PL/SQL DML commands have a RETURNING clause. It allows us to get back data that is created or manipulated while the DML (insert or update) is running.

The most common usage is to return the ID value that is filled by a database trigger (or an identity column), so that this ID can be used later in the same session or transaction, or be returned to the client. For example to insert child records or to show the freshly generated record in a GUI report.


insert into super_emp (first_name, last_name, hire_date)
values ('Bruce', 'Banner', trunc(sysdate))
returning id into :ID;

print :ID;

ID
--------------------------------------------------------------------------------
9

The RETURNING clause is more typical in PL/SQL. Here is an example using a record of %rowtype. We can even return the generated ID value directly into the record.

declare
  r_super_emp super_emp%rowtype;
begin
  r_super_emp.first_name := 'Hal';
  r_super_emp.last_name  := 'Jordan';
  r_super_emp.hire_date  := trunc(sysdate);

  insert into super_emp
  values r_super_emp
  returning id into r_super_emp.id;

  sys.dbms_output.put_line('New ID = '||r_super_emp.id);
end;
/
New ID = 10

 

Identity columns

Identity columns and DEFAULT ON NULL are great enhancements in db version 12.1.

They allow us to use a sequence as late as possible (while inserting), but without the need for a before row insert table trigger. This improves insert performance dramatically. A trigger is PL/SQL based. It runs during the execution of a SQL DML statement (insert). Because of that, a context switch from the SQL to the PL/SQL engine (and back) is needed. If we can avoid the trigger completely, we avoid the context switch, and this will improve performance considerably.

I made some tests, and under very favourable circumstances (nothing else inserted but the ID) the insert performance was 900% faster using IDENTITY or DEFAULT columns instead of a trigger.

With DEFAULT ON NULL we would still create the sequence ourselves, which also means we know its name. With IDENTITY the sequence is automatically created and maintained by Oracle.

The name of the generated sequence will always begin with “ISEQ$$_”.

demo


create table test
( id number generated by default on null as identity (start with 20) primary key
);

select table_name, column_name, identity_column, default_on_null, data_default
from user_tab_columns
where table_name = 'TEST';

TABLE_NAME COLUMN_NAME IDENTITY_COLUMN DEFAULT_ON_NULL DATA_DEFAULT
TEST       ID          YES             YES             "SCHEMANAME"."ISEQ$$_10707661".nextval

 

Drawbacks

It can be problematic to use identity columns over a database link, especially if the ID value is needed. The main issue is that the RETURNING clause does not work over a db link and there are no good alternatives for identity columns. This works slightly better with DEFAULT ON NULL. There we know the sequence object and can use it to fetch the ID value over the DB link first and use it later for the insert. Not performant at all, but it works.
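A rough sketch of that workaround (hypothetical names; it assumes the sequence on the remote database is accessible over the link):

declare
  v_id number;
begin
  -- fetch the ID from the remote sequence first
  select emp_seq.nextval@dblinkname into v_id from dual;

  -- then use it explicitly in the remote insert
  insert into super_emp@dblinkname (id, first_name, last_name, hire_date)
  values (v_id, 'Carol', 'Danvers', trunc(sysdate));
end;
/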

We also can not directly alter an existing ID column into an IDENTITY column, although it is possible to modify an existing identity column (for example switching between GENERATED ALWAYS and GENERATED BY DEFAULT ON NULL).

Another minor inconvenience is that the system generated sequence will still be there when the table is dropped, at least as long as the table is still in the recycle bin.

There were also some other very special bugs using identity columns. All have workarounds, but my experience is that DEFAULT ON NULL is slightly less error prone.

 

Index contention and Scalable Sequences

Scalable sequences were secretly introduced in 12.1.0.1 but only documented in 18.1.

Richard Foote did a three part series about scalable sequences that covers all you need to know.

The basic problem has to do with index contention.

To give a very brief explanation: when we have an ID column that is filled using a sequence, the index will – over time – become unbalanced. Because new values are only added to the right side of the index, leaf block splits will happen there frequently. Sometimes it will be a 50-50 block split, and the space in those blocks usually will not be filled up. This eventually leads to a heavily right (un)balanced index tree.

Such a block split is a very resource intensive operation, and other sessions will need to wait for it. If you see a high number of “enq: TX – index contention” wait events (check MOS 873243.1), the reason could be those index block splits.

One workaround for the index contention problem in the past was to use a REVERSE KEY index. But this created other performance problems, for example the CBO will not do any range scans on such an index.

Scalable sequences are a slightly better solution to avoid those index contention issues (hot index blocks). Because they have the session ID in front of the number, values provided by a scalable sequence are distributed more evenly over the index. At least as long as multiple sessions do the inserts.

 

Export and Import

consistency issues

When you do an export of a database or a schema it is crucial to do a time consistent export using

exp ... consistent=Y ...

Why? Otherwise the sequence object, including its current value as the START WITH value, is exported first, and the tables with their data later. That means in between some session could call sequence.nextval and use up a value. You won’t notice the issue during import. But as soon as an insert in the imported schema happens, you will get a DUP_VAL_ON_INDEX error, because the table already holds an ID value that the sequence generator has just provided.

sys warning

CONSISTENT=Y does not work as SYS. So never export data as SYS! The reason is that SYS can not do read only transactions. Using SYSTEM is fine.

datapump

For datapump the equivalent to consistent=Y is the flashback parameter.

expdp ... flashback_time=systimestamp ...

There is also a flashback_scn parameter. Both do a time consistent export.

And since 11.2 there is a legacy mode for datapump, which allows the use of consistent=Y (it is rewritten into the flashback_time parameter).

 

Other ways to generate ordered numbers

Sometimes a sequence is not the best way to generate ordered numbers. For example when we want to sort entries from a detail table based upon their parent keys, and the numbering should start with 1 for each parent entry. A sequence is not the proper tool to get such values.

Alternatives are ROWNUM, the analytic function ROW_NUMBER(), and certain ways to create lists in SQL, for example by using hierarchical queries with CONNECT BY.
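A small sketch of the ROW_NUMBER() approach (hypothetical table and column names):

select parent_id
     , row_number() over (partition by parent_id order by created_at) as pos
     , detail_text
from detail_tab
order by parent_id, pos;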

 

 

Further reads