is {JSON}, or not is {JSON}, that is the question

Whether ’tis nobler in the mind to suffer
The [] and “” of outrageous json,
Or to take Arms against a Sea of troubles,
And by opposing {} end them: to select, to browse
No more; and by a browse, to say we end
the json array, and the thousand json objects
that Flesh is heir to? ’Tis a consummation
devoutly to be wished. To select, to browse…

Intro

So I have this logging table where sometimes the message can be a json document. Most of the time it is an error message or some tracing information. But sometimes I want to show what data currently is inside a plsql collection. I wrote a small conversion function that returns a clob which should contain JSON. This json ends up as a message in my logging table.

Inside an apex application I added the possibility to show this JSON on a modal page. I use the JSONView plugin; however, most browsers now have a default JSON display capability.

I detect if the message contains json data by using the 12.1 IS JSON operator.

In case it is json, I provide a link to the modal page and print the data there.

case 
  when LOG_MESSAGE is json 
  then
    '<a href="'||
    apex_util.prepare_url('f?p='||v('APP_ID')||':123:'||v('APP_SESSION')
    ||'::::P123_LOG_ID:'||to_char(LOG_ID))
    ||'" title="show JSON data">{JSON}</a>'
else 
  '<pre>'||LOG_MESSAGE||'</pre>'
end as log_message_enhanced

 
The modal page uses the application/json mime type.

The result looks similar to this:

 
modal_json_ok
 

How to build such an apex page or how to convert a plsql collection to json is not part of this blog post. If you are interested in that, please leave a comment and I might consider publishing how to do so.

Problem

For certain documents the browser was not able to show the json. Instead it returned an error message and the json in text format. Here is an example.
modal_json_error

Sorry for the German message, but I was not able to switch my Firefox to English easily. The error essentially says: “We could not parse the json, it seems the document does not comply to the specifications.”

So the database says it is json, but the browser says it is not!

What is going on? That is the question.
 

Solution

The reason for the behaviour is that Oracle’s IS JSON check uses the lax json syntax by default. Lax json allows several things; among others it allows a trailing comma after the last element of an array or the last member of an object. Exactly my issue.

Here is a basic SQL demonstration. Note the comma after “Larry” inside the json object.

select * from dual
where '[{"index":1,"name":"Larry",}]' 
      is json;
DUMMY
-----
X

So it is JSON. But only lax json.

Fortunately we can also do a check against the stricter json interpretation.

select * from dual
where '[{"index":1,"name":"Larry",}]' 
      is json (STRICT);
No rows selected.

Can you spot the difference? The STRICT keyword, including the parentheses, tells the database that the document needs to conform to the stricter specification.

This is of course documented: About Strict and Lax JSON Syntax

The main differences are

  • STRICT: each JSON field and each string value must be enclosed in double quotation marks (").
    LAX: An object literal can also be enclosed in single quotation marks (').
  • LAX: Case variations are allowed for the keywords true, false, and null (for example, TRUE, True, TrUe, fALSe, NulL).
  • LAX: An extra comma (,) after the last element of an array or the last member of an object (for example, [a, b, c,], {a:b, c:d,}).
  • LAX: Numerals
    • with one or more leading zeros (for example, 0042.3).
    • Fractional numerals that lack 0 before the decimal point (for example, .14 instead of 0.14).
    • Numerals with no fractional part after the decimal point (for example, 342. or 1.e27).
    • A plus sign (+) preceding a numeral, meaning that the number is non-negative (for example, +1.3).

But there are more differences.

Using the (STRICT) keyword solved my problem. Only those json documents that could be shown in the browser were linked. The others were rendered as normal text.
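The fix is small: only the STRICT keyword needs to be added to the check from above.

case 
  when LOG_MESSAGE is json (STRICT)
  then
    '<a href="'||
    apex_util.prepare_url('f?p='||v('APP_ID')||':123:'||v('APP_SESSION')
    ||'::::P123_LOG_ID:'||to_char(LOG_ID))
    ||'" title="show JSON data">{JSON}</a>'
else 
  '<pre>'||LOG_MESSAGE||'</pre>'
end as log_message_enhanced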

Conclusion

 
Be aware that there are slightly different JSON specifications available.

If you want to show json inside a browser, then make sure the json document conforms to the strict json specification.

If you want to use json inside javascript, then probably the lax version is better suited.
 

 

10 Oracle plsql things you probably didn’t know

Many people enjoyed reading my last blog post “10 Oracle SQL features you probably didn’t know”. So I decided to spice it up a little more and do something similar for plsql.

I hope you like that one too.

Without further ado, let’s get started with the list.

10. The first Oracle version to feature plsql was Oracle DB version 6 (1988)

And no. Steven Feuerstein did NOT invent it.

At that time PLSQL did not have stored procedures nor did it have proper exception handling. But it already had embedded SQL.

I learned that from the great Lewis Cunningham. One of the godfathers of development with SQL and PLSQL.

Stored Procedures were added in Oracle 7 (1992). 7.3 was the version when I started to work with an Oracle Database. At that point plsql was in version 2.x. However there never was a version 3. Plsql versioning jumped to 8 when Oracle DB version 8 was introduced and plsql versioning was aligned with the db versions. So there are no plsql versions 3-7. But honestly? Nobody cares anymore that plsql does have its own versioning.

9. labels do not need to match

We can use <<labels>> in plsql. Mostly to increase readability of code. This is especially useful for loop constructs, but it also works for normal begin..end blocks.

begin
 <<mainloop>>
 loop
   <<check_some>>
   begin 
     <<dummyloop>>
     for r in (select * from dual) loop
       -- do nothing
       null;
     end loop dummyloop;
   end check_some;
   exit when 1=1;
 end loop mainloop;
end; 
/

As we can see there are several <<labels>>. And the usage of those labels at the “end” helps to distinguish which code part we are looking at [1].

But this is only as good as the programmer is!

Unfortunately this works too:

begin
 <<mainloop>>
 loop
   <<check_some>>
   begin 
     <<dummyloop>>
     for r in (select * from dual) loop
       -- do nothing
       null;
     end loop mainloop;
   end check_sum;
   exit when 1=1;
 end loop dummyloop;
end; 
/

Here I mixed up the labels from the loops. And the “end check_sum” does not match the label at the beginning of the block. In fact the “label” at the end can be anything that is not a reserved word.

It runs identically to the previous code (still doing nothing). But it is way more confusing for the “future me” who has to maintain this mess.

8. pragma SERIALLY_REUSABLE

During the lifetime of a session, the package state (package variables, open cursors, etc.) is held in the UGA (user global area). Subsequent calls in the same session to the same package profit from that by not needing to reinitialize the package state.

The pragma SERIALLY_REUSABLE is able to change this behaviour.

serially_reusable packages

After the work unit (server call) of a SERIALLY_REUSABLE package completes, Oracle Database does the following:

  • Closes any open cursors.
  • Frees some nonreusable memory (for example, memory for collection and long VARCHAR2 variables)
  • Returns the package instantiation to the pool of reusable instantiations kept for this package.

Essentially this means that the package state exists only during the package call, not for the whole session.

So far I never had the need to use this pragma. But I can imagine some very special situations where this might become interesting.
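Here is a minimal sketch to see the effect (package and variable names are made up). The pragma needs to be in the spec and in the body. Since every server call gets a fresh instantiation, the counter never grows across calls:

create or replace package pkg_sr is
  pragma serially_reusable;
  g_counter number := 0;  -- package state
  procedure increment;
end pkg_sr;
/

create or replace package body pkg_sr is
  pragma serially_reusable;
  procedure increment is
  begin
    g_counter := g_counter + 1;
    dbms_output.put_line('counter='||g_counter);
  end increment;
end pkg_sr;
/

set serveroutput on
execute pkg_sr.increment;
execute pkg_sr.increment;

counter=1
counter=1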

7. You can compile a package even while another session is running it

In general this is not possible while a package is currently running. Or to say it in technical terms: we have an active session (Session A) executing a packaged procedure.

A second session (Session B) trying to do an “alter package … compile (body)” would wait until session A finishes and will then afterwards try to compile the package. Upon success, session A will get a “package state has been discarded” error message as soon as it tries to run the same package once again. The next try to run the package would succeed and will use the new package version. At that time the package state was discarded from the session and the new instantiation can be loaded.

This is all documented and well known behaviour.

We face three potential issues with this

  1. Session B needs to wait
  2. Session A might get an error
  3. Session B might wait so long that the developer decides to kill the client (closing SQL developer) thereby making everything 10 times worse, because the database compile call still is valid on the database session level – blocking all following attempts to run or compile the package.

EBR to the rescue!

Using Edition Based Redefinition (EBR) we can circumvent those issues.

Both sessions just need to use different editions. EBR allows us to store and run different code versions of the same plsql based object in the same database.

 

preparation

First create a package with a long running procedure. My example uses a procedure that runs for exactly 1 minute.


create or replace package myPck is
  procedure runMinute;
end myPck;
/

create or replace package body myPck is
   procedure runMinute is
   begin
     sys.dbms_lock.sleep(60);
   end runMinute;
end myPck;
/

 

Setup an edition DEV$ALPHA that is a child of the default edition (ORA$BASE).


create edition DEV$ALPHA;

 

You need an edition enabled schema to do this.

This is simple to do, but to explain EBR in more detail is beyond the scope of this blog post.
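Just as a hint, a schema is made edition enabled with a one time DDL command (user name made up):

alter user myUser enable editions;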

Example scenario

Let’s run a few commands in two different sessions.

Session A resembles a USER/TESTER who currently executes the packaged function.

Session B resembles a DEVELOPER who wants to deploy a new version of the package.

Session A does this

set time on

alter session set edition=ORA$BASE;

execute myPck.runMinute;

Session B was started already; after the execute in Session A, run the following script in Session B.

alter session set edition=DEV$ALPHA;

-- add a new procedure to the package

create or replace editionable package myPck …
/

create or replace editionable package body myPck …
/

-- run the new procedure

exec myPck.run5secs;

The result: Session B finishes way before Session A completes its 1 minute run.

See screenshot: ebr_run_sessions

q.e.d.

6. call a (pipelined) table function without the TABLE operator

It works only from 12.2 onwards. It is more of a SQL feature than a plsql one.

This will make a table function look indistinguishable from a parametrized view.

Example: split_string

First let’s create a simple little table function. This one just converts a delimited list into rows.

create or replace function split_string
 (p_str IN VARCHAR2
 ,p_delimiter IN VARCHAR2 default ','
 ) RETURN sys.odcivarchar2list PIPELINED 
IS
/** Function to split strings based upon delimiter
*
* @author Sven Weller
*
* @param p_str input string 
* @param p_delimiter delimiter string, default = ','. Delimiter should only be 1 char.
* @return list of strings
*
*/
 v_entry varchar2(4000);
 v_remaining_str varchar2(4000);
BEGIN
  -- input string needs to hold something to be able to split
  if p_str is not null then
     <<steps>>
    for i in 1..regexp_count(p_str,'\'||p_delimiter)+1 loop
      -- search + split
      v_entry := rtrim(regexp_substr(p_str,'[^\'||p_delimiter||']*('||p_delimiter||'|$)',1,i),p_delimiter);
      pipe row(v_entry);
    end loop steps; 
  else raise no_data_found; 
  end if; 

END split_string;
/

 

function created.

in 11g we call the function like this:

select * from TABLE(split_string('A:BB::CCC',':'));

in 12.2 we can now call it like that:

select * from split_string('A:BB::CCC',':');

As you can see the TABLE row source operator is gone. And it still works! The results of both statements are identical.

COLUMN_VALUE
A
BB
CCC

Want to test it? I made an example on livesql.com.

At the moment this is an undocumented 12.2 feature. So don’t use it for production code (yet). I quite like it. Less code is better! It might become some de-facto standard (similar to connect by level) and eventually will make it into the documentation.

5. dot notation for parameters

We can refer to parameters using the name of the module that declared them. This is useful when we need to distinguish a parameter from a column name.

example


create or replace function myFancyFunc (dummy in varchar2) return number
is
  ret number := 0;
begin
  begin
    select 1 into ret
    from dual
    where dummy = myFancyFunc.dummy
    and rownum = 1;
  exception
    when no_data_found then null;
  end;
  return ret;
end myFancyFunc;
/

Function MYFANCYFUNC compiled

select myFancyFunc('X') from dual;

1

select myFancyFunc('Y') from dual;

0

The function simply compares the value in the dummy column of the dual table to the value we input. If instead we just compared dummy=dummy, then we would always get 1 as a result, no matter what the input is. Even if we add an alias to the table and prefix the column with that alias, the non-aliased “dummy” will still be interpreted as a column.

This behaviour is documented: Oracle Doc 12.1 – plsql name resolution

If a SQL statement references a name that belongs to both a column
and either a local variable or formal parameter,
then the column name takes precedence.

Interestingly we can also use labels on block level to qualify variables that are declared in that block.

set serveroutput on
<<main>>
declare
  dummy varchar2(10) := 'Y';
begin
  <<block1>>
  declare
    dummy varchar2(10) := 'X';
  begin
    <<block2>>
    declare
      dummy varchar2(10) := 'A';
    begin
      select dummy
      into dummy
      from dual
      where dummy = block1.dummy;

      dbms_output.put_line('MainBlock:'||main.dummy);
      dbms_output.put_line('Block1:'||block1.dummy);
      dbms_output.put_line('Block2:'||block2.dummy);
    end block2;
  end block1;
end main;
/

PL/SQL procedure successfully completed.

MainBlock:Y
Block1:X
Block2:X

Without dot notation the innermost variable (block2) is used – as we can see in the INTO part. And we can reference a different variable with the same name from a “higher” declaration by using the dot notation.

 

4. variable names can be emojis

example

set serveroutput on
declare
  "💩" exception;
  pragma exception_init("💩", -20001);

  "⌚" timestamp := systimestamp;
  "🕑" interval day to second;
  "🎲" number;
  "💤" number := 2;
begin
  "🎲" := round(dbms_random.value(1,6));
  for "🔜" in 1.."🎲" loop
    dbms_lock.sleep("💤");
  end loop;
  "🕑" := systimestamp - "⌚";
  dbms_output.put_line('Slept for '|| "🕑");
exception
  when "💩" then
    dbms_output.put_line('Sorry something bad happened!');
    raise "💩";
end;
/

PL/SQL procedure successfully completed.

Slept for +00 00:00:08.049000

The source code looks a little bit different in SQL Developer. But trust me, I simply copy&pasted it from there to here.

emojicode

To make this work you need to use a font that supports emojis/symbols. I used the font “Segoe UI Symbol”. It is supposed to look better on Windows 10[3].

If you are a hard core emoji lover then I suggest having a look at emojicode.org

It is an emoji-based programming language, which did not make it into the esoteric programming languages list (yet). Ook? Ook!

 

3. variables can be made mandatory (NOT NULL)

Check out the NOT NULL keyword during the variable declaration.

declare
  v_index number not null := 0;
begin
  v_index := 1;
  v_index := null;
end;
/

Error report –
ORA-06550: line 5, column 14: PLS-00382: expression is of wrong type
ORA-06550: line 5, column 3: PL/SQL: Statement ignored

The error message is a bit vague about what happened, but it is very exact where it happened (line 5, column 14). And what do we see there? A NULL expression.

The expression is of the wrong type, because we added a NOT NULL constraint to the number type that was used. For more complex cases we can create our own subtypes and use them. But if we just want to make sure that we do not need to consider null cases when working with the variable later on, then this is a possible way.
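A minimal sketch of the subtype variant (subtype and variable names are made up). It fails with the same PLS-00382 error as above:

declare
  subtype number_nn is number not null; -- our own mandatory number type
  v_index number_nn := 0;
begin
  v_index := null; -- PLS-00382: expression is of wrong type
end;
/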

Link to plsql documentation

Currently there are no other such constraints that we can use. I could imagine that with the potential arrival of SQL assertions, this might become a hot topic in plsql too.

2. you can “hack” dbms_output

Warning! This is dangerous. It might break some (poorly written) code that resides in the same schema. Do it at your own risk! It is also hilariously funny to do on April Fools’ Day to your fellow coworkers. I mean, they shouldn’t use dbms_output anyway. That will teach them!

I start the example by showing the behaviour first. Then the code to produce this result.

behaviour


create or replace procedure doSomething is
  v_dummy dual.dummy%type;
begin
  select dummy into v_dummy from dual where 1=2;
exception
  when others then
    dbms_output.put_line(sqlerrm);
end doSomething;
/

Now we run the module a couple of times and want to see the output. We should expect a NO_DATA_FOUND error message.

set serveroutput on

execute doSomething;
execute doSomething;
execute doSomething;
execute doSomething;
execute doSomething;

Surprisingly instead of the error message we get something like this.

PL/SQL procedure successfully completed.

Wrong usage of DBMS_OUTPUT detected.

PL/SQL procedure successfully completed.

Make Databases Great Again!

PL/SQL procedure successfully completed.

ORA-01403: no data found

PL/SQL procedure successfully completed.

Make Databases Great Again!

PL/SQL procedure successfully completed.

So tell me what you want, what you really, really want

Omg! What is going on here?

Well, here is the catch. We can “overload” dbms_output in our own schema. Then our package is called and not the original package from sys.

source code

create or replace package dbms_output
as
  procedure enable(BUFFER_SIZE number default null);
  procedure put_line(A in varchar2);
  procedure GET_LINE(LINE out VARCHAR2,STATUS out integer);
  procedure GET_LINES(LINES out sys.dbms_output.CHARARR,NUMLINES in out NUMBER);
  procedure GET_LINES(LINES out sys.DBMSOUTPUT_LINESARRAY,NUMLINES in out NUMBER);
end dbms_output ;
/

create or replace package body dbms_output
as
procedure enable(BUFFER_SIZE number default null) is
begin
  sys.dbms_output.enable(BUFFER_SIZE);
end;

function getRandomQuote (A in varchar2) return varchar2
is
  type quotes_t is table of varchar2(4000) index by binary_integer;
  v_quotes quotes_t;
  v_random binary_integer;
begin
  v_quotes(1) := 'You are hacked by the Chinese';
  v_quotes(2) := 'Wrong usage of DBMS_OUTPUT detected.';
  v_quotes(3) := 'System failure. Get away from keyboard';
  v_quotes(4) := 'Close all windows! NOW!';
  v_quotes(5) := 'Make Databases Great Again!';
  v_quotes(6) := A; -- sometimes return the correct text
  v_quotes(7) := A; -- sometimes return the correct text
  v_quotes(8) := 'So tell me what you want, what you really, really want';
  v_quotes(9) := 'None but ourselves can free our minds.';
  v_quotes(10) := 'Let there be light!';
  v_random := round(dbms_random.value(1,v_quotes.last));

  return v_quotes(v_random);

end getRandomQuote;

procedure put_line(A in varchar2) is
begin
  sys.dbms_output.put_line(getRandomQuote(A));
end;

procedure GET_LINE(LINE out VARCHAR2,STATUS out integer)
is
begin
  sys.dbms_output.GET_LINE(LINE,STATUS);
end;

procedure GET_LINES(LINES out sys.dbms_output.CHARARR,NUMLINES in out NUMBER)
is
begin
  sys.dbms_output.GET_LINES(LINES,NUMLINES);
end;

procedure GET_LINES(LINES out sys.DBMSOUTPUT_LINESARRAY,NUMLINES in out NUMBER)
is
begin
  sys.dbms_output.GET_LINES(LINES,NUMLINES);
end;
end dbms_output ;
/

How does it work?

Because of how SQL name resolution works, the DBMS_OUTPUT package in our schema is used and not the public synonym that points to the DBMS_OUTPUT package in the sys schema.

The put_line procedure pushes the changed text into the normal buffer mechanism; the get_line and get_lines procedures simply pass the buffered lines through from the original package.

How can we avoid it?

Best is not to use DBMS_OUTPUT in real production code. It is a nice quick debugging tool. But not more than that.

Also, if you always prefix dbms_output with the SYS schema, then it will call the original logic.
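So a call like this will always reach the original package, no matter what lives in the local schema:

begin
  sys.dbms_output.put_line('This text is never replaced.');
end;
/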
 

1. when others does not catch all exceptions

example

set serveroutput on 
declare
  e_cancelled exception;
  pragma exception_init(e_cancelled, -1013);
begin
  begin
    raise e_cancelled;
  exception
    when others then
      dbms_output.put_line('EXCEPTION OTHERS');
  end;
end;
/

ORA-01013: user requested cancel of current operation
ORA-06512: at line 6

We still see an exception, but not the buffered dbms_output text!

This needs some explanation.

There is a very limited set of exceptions that will not be captured by the WHEN OTHERS handler. We need to look closely and understand the exception itself to comprehend why this is a good thing.

Here the ORA-01013 is the “user requested cancel of current operation” exception. Essentially it means somebody pressed “CTRL+C” while running the code. In almost all environments this means: Stop doing whatever you do immediately! Or in more technical terms: It is an interrupt to the os process running your command. Same as executing “kill -2” (kill -SIGINT) in a nix environment (the-3-most-important-kill-signals-on-the-linux-unix-command-line). Even if the process is allowed to ignore the command, it shouldn’t do so by default.

ORA-01013 can sometimes also be the result of a timeout, where the client is waiting for a response and after some time sends this signal to the database session.

We are allowed to capture this exception and write a special handler for it.

set serveroutput on
declare
  e_cancelled exception;
  pragma exception_init(e_cancelled, -1013);
begin
  begin
    raise e_cancelled;
  exception
    when others then
      dbms_output.put_line('EXCEPTION OTHERS');
  end;
exception 
  when e_cancelled then
    dbms_output.put_line('OPERATION CANCELED');
END;
/

PL/SQL procedure successfully completed.

OPERATION CANCELED

Nothing to worry about. Just nice to know.

Please note: This example will behave differently in older outdated db versions. I think it was introduced as a fix for bug#12838063 in 11.2.0.4.

Other exceptions that are not handled include “ORA-03113: end-of-file on communication channel”.

“ORA-06508: PL/SQL: could not find program unit being called”, however, is not among them. It was supposed to slip past “when others”, but testing on 12.2.0.1 revealed that it is captured.

 

 

Footnotes


1. This is probably the only bug free code I ever wrote. It was meant to do nothing and it does that exceptionally well![2]
2. There might be room for some performance improvement. Allowing us to do nothing even faster.
3. On windows 10 💩 is supposed to look like 🔝💩.

 

 

 

10 Oracle SQL features you probably didn’t know

I cannot guarantee that you have never heard of these. But the majority of developers don’t know or don’t seem to know. And honestly – most of this information is fairly useless – and mostly harmless.

10. Why we have the CONCAT function

There is a concat function that can be used instead of the concat operator ||.

Nobody uses that! But there is a reason why it exists.

This is from the 8.1.7 Oracle documentation:

On most platforms, the concatenation operator is two solid vertical bars, as shown in Table 3-3. However, some IBM platforms use broken vertical bars for this operator. When moving SQL script files between systems having different character sets, such as between ASCII and EBCDIC, vertical bars might not be translated into the vertical bar required by the target Oracle environment. Oracle provides the CONCAT character function as an alternative to the vertical bar operator for cases when it is difficult or impossible to control translation performed by operating system or network utilities. Use this function in applications that will be moved between environments with differing character sets.

So we do have this function, because in the past this || was not consistent over all the different platforms using different character sets. Since nobody seems to use IBM EBCDIC anymore, there is no real need to use the CONCAT function.
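For completeness, here are both variants side by side. Note that CONCAT accepts exactly two arguments, so longer concatenations need nesting:

select concat('Hello', concat(' ', 'World')) as with_concat
     , 'Hello' || ' ' || 'World'             as with_operator
from dual;

WITH_CONCAT WITH_OPERATOR
----------- -------------
Hello World Hello World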

9. select UNIQUE

Instead of doing a SELECT DISTINCT you can do SELECT UNIQUE.

But you shouldn’t. None of them. Forget I mentioned this.

Ok here is an example.

Example

select unique e.deptno from scott.emp e;

DEPTNO
-----
30
20
10

It is also possible to do SELECT COUNT(UNIQUE …). Not sure when that one was introduced, but it seems to work now.

Sven says: “SELECT DISTINCT|UNIQUE should be considered a bug in real production code.”

I have yet to find an example where SELECT DISTINCT is needed. More likely there is a bug in the data model or missing joins in the where clause. GROUP BY or sometimes EXISTS are the better long term alternatives.

Using SELECT DISTINCT is absolutely fine for developer ad-hoc queries.

I feel this whole concept is a bit inconsistent. 12c introduced the new approximation function APPROX_COUNT_DISTINCT, but there is no APPROX_COUNT_UNIQUE!

8. there is an ANSI datatype DATE

The ANSI datatype DATE does not hold any time information (no hours, minutes, seconds). That is what the ANSI datatype TIME is for. Oracle does not officially support these two datatypes.

However:

EXTRACT function

Purpose

EXTRACT extracts and returns the value of a specified datetime field from a datetime or interval expression. The expr can be any expression that evaluates to a datetime or interval data type compatible with the requested field:

  • If YEAR or MONTH is requested, then expr must evaluate to an expression of data type DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, or INTERVAL YEAR TO MONTH.
  • If DAY is requested, then expr must evaluate to an expression of data type DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, or INTERVAL DAY TO SECOND.
  • If HOUR, MINUTE, or SECOND is requested, then expr must evaluate to an expression of data type TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, or INTERVAL DAY TO SECOND. DATE is not valid here, because Oracle Database treats it as ANSI DATE data type, which has no time fields.
  • If TIMEZONE_HOUR, TIMEZONE_MINUTE, TIMEZONE_ABBR, TIMEZONE_REGION, or TIMEZONE_OFFSET is requested, then expr must evaluate to an expression of data type TIMESTAMP WITH TIME ZONE or TIMESTAMP WITH LOCAL TIME ZONE.

So essentially the doc says that for EXTRACT an Oracle DATE is treated like an ANSI DATE.

Proof

select extract(day from sysdate) from dual;
21

select extract(second from sysdate) from dual;
ORA-30076: invalid extract field for extract source

select extract(second from systimestamp) from dual;
4.823

The error appears because sysdate is treated like an ANSI DATE, which does not hold time information.

 

The problem is: we just can not use this ANSI DATE type for normal activities.

Alternative for the ANSI date: TRUNC(datetime). In 12c as a virtual column.
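Here is a sketch of the 12c virtual column variant (table and column names are made up):

alter table mytable add (
  order_day date generated always as (trunc(order_datetime)) virtual
);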

Problem solved. ✔

 

7. group by ()

You can group by an empty set of parentheses. Or let’s call it: group by the FULL SET. This is in fact useful in some very specific cases.

It gives us the ability to get a NO_DATA_FOUND error if there is no data at all. Oracle guarantees that an aggregation select without a group by will always return a record. GROUP BY () can be added if you want to change this default behaviour on purpose.

Example

Let’s say we want to count how many employees are in one specific department.

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'OPERATIONS';

EMP#
----
0

Since there are no employees in operations, this returns 0. This is correct and the information I want.

More specifically this query returns one row, but the e.empno is null and therefore is not counted. Count(*) would have returned 1!

But if we use a department name that does not even exist, then this will also return 0.

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'FANCY_CLOUD_SPACE';

EMP#
----
0

Not correct! In this case I prefer to get a no data found error.

Grouping on the full set gives us this option.

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'FANCY_CLOUD_SPACE'
group by ();

no rows found.

Btw: the SQL Developer syntax (pre-)parser flags that () as a syntax error. But it will execute successfully.

I have to admit that in this case it would be much better to group by the department. But maybe we would want the count for multiple departments using an IN list. Then the group on department level would not help much.

 

Btw: GROUP BY NULL does the same thing, and might be less confusing for some developers. I personally prefer GROUP BY () because the () is also used in GROUPING SETS.
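The same example written with GROUP BY NULL:

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'FANCY_CLOUD_SPACE'
group by null;

no rows found.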

 

6. + vs. –

In some edge cases using a different operator (+|-) leads to disturbing results.

Example: Subtracting a tiny number from 1 does not equal 1. But adding a tiny number to 1 does equal 1!

select * from dual
where 1 = 1-0.000000000000000000000000000000000000001;

No data found!

This is expected behaviour…

select * from dual
where 1 = 1+0.000000000000000000000000000000000000001;

DUMMY
-----
X

Wat? Why do we get a result?!

The solution is simple. We overstepped the maximum precision of the number datatype when doing the addition. Max precision is 38 digits. The addition would require a precision of 39. Instead the result is rounded (or truncated) to a precision of 38 digits, which happens to equal 1. The subtraction result still is in the 38 precision range and therefore exact (very slightly less than 1).

Don’t confuse precision with maximum value! Precision is the number of different digits that we can store. This can then be moved by an exponent to the left or right for very large or very small values.

 

To put this into perspective

That precision is so extremely high that we have major trouble visualizing it. So here is a comparison. The comparison data comes from this hugely interesting site: http://money.visualcapitalist.com/all-of-the-worlds-money-and-markets-in-one-visualization/

The complete monetary assets in the whole world are estimated at about 80 trillion $ (broad money = coins, banknotes, savings accounts, time deposits, …). That is 80,000,000,000,000, or an 8 with 13 zeros at the end, or 8e13 in scientific exponential notation.

select 8e13 from dual;

Considering all the national and private debts the number is even higher. Here we reach about 200 trillion $.

And if we consider derivatives (= high risk gambles) then the high estimations go as far as 1.2 quadrillion $. That is a 1 followed by 24 more digits (mostly zeros) or 1.2e24.

Since all this is expressed in dollars, we can increase the number even further by choosing a different currency. For example the Nepalese Rupee is worth a little less than 0.01 dollar. So all the world’s derivatives could be expressed in Nepalese Rupees using a number as big as 1.2e26.

That means using a number with a precision of 38 we can easily express all the money in the world including derivatives down to the very last Nepalese Rupee. And we still have only used two thirds of the maximum available precision. Or to say it differently: if we duplicated planet Earth 1,000,000,000,000 (1 trillion) times, we could still give the exact amount of money on all planets in Nepalese Rupees down to the very last Rupee using the number datatype.

That’s quite impressive!

Btw: the biggest number that can be expressed using the NUMBER datatype is 999…(38 nines) × 10^125, which is way, way bigger than anything we discussed so far.

 

5. instead of IN we can use = ANY

select 'TRUE'
  from dual
 where 1 = ANY (1,2,3,4,5);

TRUE

There are several similar unusual “replacement” options.

e.g.

  • instead of != we can also use ^=.
  • NOT IN is the same as  != ALL
  • SOME is a synonym for ANY – but only in sql!
  • x <= LEAST(…) | x >= GREATEST(…) can be replaced by x <= ALL (…) | x >= ALL (…)
  • x >= LEAST(…) | x <= GREATEST(…) can be replaced by x >= SOME (…) | x <= SOME (…)

I expect that SOME people do know about these operators, since questions about them are part of the SQL expert exam. So from ALL experts who took the exam at LEAST the few who passed should know SOMEthing about ANY.

The last replacement differs in how NULL values are treated. In theory there could be cases where this comes in handy. So far I never had such an encounter.

Example

with testdata
as (select 1 a, 2 b, 3 c, 4 d from dual union all
   select 5 a, 4 b, 3 c, 2 d from dual union all
   select 1 a, null b, 2 c, null d from dual union all
   select null a, null b, null c, null d from dual)
select * from testdata
where 3 >= LEAST(a,b,c,d);

 

A B C D
-------
1 2 3 4
5 4 3 2

 

with testdata
as (select 1 a, 2 b, 3 c, 4 d from dual union all
    select 5 a, 4 b, 3 c, 2 d from dual union all
    select 1 a, null b, 2 c, null d from dual union all
    select null a, null b, null c, null d from dual)
select * from testdata
where 3 >= ANY(a,b,c,d);

 

A B C D
-------
1 2 3 4
5 4 3 2
1 - 2 -

4. external table on multiple files

External tables can load multiple files at once! As long as they have the same structure, of course. So here is an example.

CREATE TABLE EXT_DUMMY
(
    "RECORDTYPE" VARCHAR2(100 BYTE), 
    "COL1" VARCHAR2(100 BYTE), 
    "COL2" VARCHAR2(100 BYTE) 
)
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_LOADER DEFAULT DIRECTORY "IMPORT_BAD_FILE" 
    ACCESS PARAMETERS ( 
             records delimited BY newline 
             FIELDS TERMINATED BY ';' 
             MISSING FIELD VALUES ARE NULL 
               ( RECORDTYPE CHAR
               , COL1 CHAR 
               , COL2 CHAR 
               ) 
    ) 
    LOCATION ( 'Testfile1.txt', 'Testfile2.txt' )
)
    reject limit 10
;

When we then do a select on the external table, we will see the data from both files.

We can also switch the external table to a different file.

alter table EXT_DUMMY location ('Testfile3.txt');
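To double check which files are currently attached, we can ask the data dictionary:

select table_name, location
from user_external_locations
where table_name = 'EXT_DUMMY';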

 

For the experts: We can use the ROWID to find out which file was used to load a specific record. Special thanks go to OTN-forum member Odie_63 who found this solution. More info in this old OTN forum thread.

with ext_loc as (
      select position-1 as pos
           , name as filename
      from sys.external_location$
      where obj# = ( select object_id
                     from user_objects
                     where object_name = 'EXT_DUMMY' )
    )
select x.filename,
       t.*
from EXT_DUMMY t
join ext_loc x 
on x.pos = to_number(regexp_substr(dump(t.rowid,10,9,1),'\d+$'))
;

It’s a very clever piece of SQL. Essentially it extracts the file number from the rowid, looks up that file number in the data dictionary and combines the result with our data set.

This select was done in a 10g database. In 12c we can probably use dbms_rowid to do the same, instead of regexp_substr(dump(rowid)).

3. insert default values

We can insert into a table using the “default” keyword to force default behaviour for this column.

Example

-- setup
create table swe_default_test 
(col1 varchar2(10) not null,
 col2 varchar2(10) default 'TEST' not null
 );
Table SWE_DEFAULT_TEST created.

-- test
insert into swe_default_test (col1) values ('X');
1 row inserted.

insert into swe_default_test (col1,col2) values ('X',null);
ORA-01400: cannot insert NULL into ("MYUSER"."SWE_DEFAULT_TEST"."COL2")

insert into swe_default_test (col1,col2) values ('X',default);
1 row inserted.

-- double check
select * from swe_default_test;
COL1 COL2 
---------- ----------
X TEST 
X TEST 

-- cleanup
drop table swe_default_test;
Table SWE_DEFAULT_TEST dropped.

 

In 12c we now have a new and, I think, better option to do the same. The default column can additionally be defined with “ON NULL”, which has the effect that inserting a NULL value will lead to using the default value instead. Of course the “default” keyword still works too.

-- setup
create table swe_default_test12c
(col1 varchar2(10) not null,
 col2 varchar2(10) default on null 'TEST' not null
 );
Table SWE_DEFAULT_TEST12C created.

-- test
insert into swe_default_test12c (col1,col2) values ('X',null);
1 row inserted.

insert into swe_default_test12c (col1,col2) values ('X',default);
1 row inserted.

-- doublecheck
select * from swe_default_test12c;
COL1 COL2 
---------- ----------
X TEST 
X TEST 

-- cleanup
drop table swe_default_test12c;
Table SWE_DEFAULT_TEST12C dropped.

As we can see, both cases now work: the one using a NULL value and also the one using the DEFAULT keyword.

This new 12c “default on null” feature can be used to replace the typical BEFORE ROW INSERT trigger. More info how to do this in Sequence and Audit columns with Apex 5 and 12c
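Here is a sketch how this replaces the classic ID trigger (table and sequence names are made up). A sequence as the ON NULL default makes the before row insert trigger unnecessary:

create sequence swe_demo_seq;

create table swe_demo
( id  number default on null swe_demo_seq.nextval primary key
, txt varchar2(30)
);

-- both inserts get their id from the sequence
insert into swe_demo (id, txt) values (null, 'id filled by sequence');
insert into swe_demo (txt) values ('id filled by sequence too');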

 

2. (1,2) = ((1,2))

We can compare expression lists using the = operator. But the right side of the comparison needs an extra set of parentheses.

select 'TRUE' checked from dual
 where (1,2,3,4,5) = ((1,2,3,4,5));
CHECKED
-------
TRUE

This is just a shorthand form of

1=1 and 2=2 and 3=3 and 4=4 and 5=5

If we do not use the second set of parentheses on the right hand side of the comparison, then we get an error.

select 'TRUE' checked from dual
 where (1,2,3,4,5) = (1,2,3,4,5) ;
ORA-00920: invalid relational operator

This is documented. Although a tiny bit difficult to read in the syntax diagrams.

See: SQL Reference – Simple Comparison

syntax_compare_lists_1 / syntax_compare_lists_2

 

1. DUAL is a real table

Although there are some special optimizations in various tools, dual is a real table residing in the sys schema.

select owner, table_name
 from dba_tables
 where table_name = 'DUAL';
OWNER TABLE_NAME
----------------
SYS DUAL

 

In older database versions it was possible to do an insert into the DUAL table.

Warning! This is extremely dangerous. It will probably break ALL applications in your database.

insert into dual values ('Y');
select mysequence.nextval into v_id from dual;

TOO_MANY_ROWS error!

And nobody captures the TOO_MANY_ROWS exception for selects from dual.

Btw: A working safety mechanism is to add rownum = 1 to your query.

select mysequence.nextval into v_id from dual where rownum = 1;

Fortunately manipulating DUAL does not work anymore in recent versions of the database; I think 12c introduced this restriction. Not even DBAs have the privs anymore to do this.

ORA-01031: insufficient privileges
*Cause: An attempt was made to perform a database operation without
the necessary privileges.

0. The Oracle SQL Syntax is documented!

Bam! There you have it! You wouldn’t have thought that. Well, you are not alone. So many developers seem to have no clue that this exists or where it exists.

When googling, put “oracle 12c doc” or “oracle 12.2 SQL ref” in front of your search term. It helps to find the proper pages.

12.2 doc portal

12.2 SQL introduction

12.2 SQL Language reference

 

 

Special thanks go out to Stefanie R., who gave me the idea for this blog post!

 


DBA/Dev quick tip: recover lost package body under EBR

Problem

It can happen that SQL Developer suggests to reload an old package body that you are currently working on. If you click the suggested “autoupdate”, then you will lose all current changes that you made to the package body. This just happened to a colleague of mine.

sqldev alert - overwrite package body modifications

It is possible to recover the old code using the flashback feature of the oracle database. But one has to be quicker than the undo retention period of the database.

If the schema was edition enabled this complicates things a tiny bit further. We need to make sure to fetch the plsql code from the correct edition that we are interested in.

When we are querying normal data dictionary views like DBA_SOURCE, this will always use the current edition. DBA_SOURCE_AE will show the code of all editions.

Flashback does not work well with data dictionary views. However we can use the base tables and run a flashback query there to see the old package code.

Action

Three simple statements will do it.

Set your dba session to use the correct edition (this might not be needed if the object id is found using a slightly different query). The edition needs to match the edition where the source code was originally compiled.

ALTER SESSION SET EDITION="DEV$ALPHA";

Find the object id of the package body

Use the correct name of the package and your edition name

select obj#
from sys."_CURRENT_EDITION_OBJ"
where name = 'MYPACKAGE'
and type# = 11 -- package body
and namespace = 2
and defining_edition = 'DEV$ALPHA';

Flashback query to show the source code for the identified object

select *
from sys.source$
--versions between timestamp (systimestamp - interval '25' minute) and systimestamp
as of timestamp (systimestamp - interval '25' minute)
where obj# = 1906090; -- the obj# found in the previous step

 

Done.

 

Aftermath

The starting situation can be reproduced by changing body and spec of the same package. Once the spec is recompiled, the body needs to be recompiled too. Then SQL Developer suggests to load the new code (which is in fact the old code). If you click yes, it will overwrite the current changes.

Apex 5 – show Y/N columns as font awesome icons

apex5_YN_icons

The database column value is Y or N (J or N for German databases). In the past I rendered such columns using images whose names ended with the column value, for example IconCheckbox_Y.png. In Apex 5 the same is possible using icon classes.

1) add an alias for Font-Awesome Icons

This little css will create two aliases for the fa-check and the fa-remove icons. And it will give them a default color.

.syn-check-J:before,.syn-check-Y:before {
  content: "\f00c";
  color: green;
}
.syn-check-N:before {
  content: "\f00d";
  color: #ff3a30; /* color: red; */
}

If you want to know the content code for a different icon you can search in the fontAwesome.min.css file or have a look at some “cheatsheet” pages like this one here: http://astronautweb.co/snippet/font-awesome/

If we added another attribute:

  font-family:fontAwesome;

then we could use the icon even without the “leading” fa class.
I do not recommend this! Remember in Apex 5.1 we have two icon fonts to choose from – FontAwesome OR Font-Apex. Adding the font-family would prevent switching the icons.

2) html expression to set the class

And then use a normal text field with a html expression.
Html expressions are available for classic reports, but also for interactive reports and interactive grids.

<span class="fa fa-lg syn-check-#COL1#" alt="#COL1#"></span>

Our new classes still can be combined with other font-awesome styles. Here I enlarged them a little bit by adding the “fa-lg” class.

The good thing about an html expression is that the field value is still the original value. So we can use the interactive report column filters on this value, and an export to csv will not add any html tags. This solution conforms to separation of concerns (SoC). The database query does not bother with the representation of the values.

apex5_YN_icons_filter

Quick. Easy. Useful.

How does it work in 5.1 with Interactive grids?

IGs are slightly different. Especially if they are editable. We can not simply add the icon to a text column. Instead we have a new column type HTML EXPRESSION. In that we can reference a different column.

Here is an example from the Interactive Grid Sample Application (Page 30, Base Edit). The emp table has a column ONLEAVE that holds Y and N values.

apex5_YN_icons_IG_column2

The substitution syntax for IG columns does not use # anymore. Instead the expression will look like below. The column value from ONLEAVE is substituted.

<span class="fa syn-check-&ONLEAVE.">&ONLEAVE.</span>

We now face an additional obstacle. The ONLEAVE column is a switch. The switch component will translate Y into Yes and N into No. Our new second column receives that translated value. We could either add an additional hidden column to the select and reference that one, or we can extend our css aliases. Because the switch component looks pretty useful, it seems better to include its outcome in the css.

.syn-check-J:before,.syn-check-Y:before,.syn-check-Yes:before {
  content: "\f00c";
  color: green;
}
.syn-check-N:before,.syn-check-No:before {
  content: "\f00d";
  color: #ff3a30; /* color: red; */
}

And here is how the result looks:
Apex5_YNIcon_IG_column

As you can see the icons look more delicate than before. This is because the sample application uses Font Apex and not Font Awesome anymore.

Example using images

And sometimes we still want an image. Maybe because there is no fitting icon in our font or maybe because the users insist on keeping the old images. This can also be done with css classes. However some more attributes are needed.

Here is an example without further explanation.


.syn-checkImg-J,.syn-checkImg-Y {
  background-image:url("https://upload.wikimedia.org/wikipedia/commons/thumb/0/03/Green_check.svg/240px-Green_check.svg.png");
  background-repeat: no-repeat;
  background-origin: padding-box;
  background-size: 20px 20px;
  padding:0 20px;
}
.syn-checkImg-N {
  background-image:url("https://upload.wikimedia.org/wikipedia/commons/thumb/a/a2/X_mark.svg/210px-X_mark.svg.png");
  background-repeat: no-repeat;
  background-origin: padding-box;
  background-size: 20px 20px;
  padding:0 20px;
}

It works with DIVs and SPAN tags.
And I created a jsfiddle where you can test it out for yourself.
https://jsfiddle.net/1n6s4y3j/

Apex 5.1 session cloning

Introduction

With Apex 5.1 we got a nifty new feature. It is now possible to clone an apex session, so that we can have two (or more) independent APEX sessions running in the same browser.

It only took me 2h to implement this from scratch. So it is really easy to do.

Joel Kallmann describes how to do it in this blog post http://joelkallman.blogspot.de/2016/07/apex-session-isolation-across-multiple.html

There are a few additional tweaks I would like to mention here.

Step 1) Enable session cloning on instance level

logon as sys (or any user with apex_administrator_role).

I first granted my normal DBA account the apex_administrator_role

grant apex_administrator_role to myDBA;

and then as DBA:

begin
apex_instance_admin.set_parameter(
p_parameter => 'CLONE_SESSION_ENABLED',
p_value     => 'Y');
end;
/

 

If you are in a CDB/PDB environment, with Apex installed in the PDB (recommended), then make sure to run this on the matching PDB (especially when working with sys).

e.g.

alter session set container = PDBAPEX

Joel’s article also explains how to enable this for a single workspace. But I got an error message when trying to do so.

"ORA-20987: APEX - Instance parameter not found - Contact your application administrator."

Step 2) Add a new navigation bar entry

Of course you are free to add this functionality anywhere you want. But you will need a link that the user has to click to get a new session. My preference was the navigation bar.

The url for the link is simple. Just add APEX_CLONE_SESSION as the request parameter.

f?p=&APP_ID.:&APP_PAGE_ID.:&APP_SESSION.:APEX_CLONE_SESSION

create a navigation bar list entry with this link

  • Go to “Shared Components/Navigation Bar Lists/navigation bar” (the name of my list).
  • Choose any icon (fa-clone) and label text (“Session klonen”) that you want.
  • The target page needs to be “&APP_PAGE_ID.”. This will use the current page and add it into the link.
  • And most importantly the request needs to be APEX_CLONE_SESSION.

apex51_clone_session_navbar_entry

The entry is now ready and working. However it will replace the session in the current browser tab. A right-click and “open in new tab” gives us two tabs with different apex sessions.

If that is not enough, then this next step is needed too.

Step 3) Open the link in a new page

The goal is to add a TARGET attribute to our link, so that a new tab is always opened when we clone the session. Unfortunately the navbar template has no option to include link attributes. We could do a kind of injection trick with the url, but I prefer a cleaner way.

To do so we need to modify three small parts.

First copy the navigation bar template as a new template “Navbar with Attributes”

And there we add the user defined property #A03# to our link. This needs to be done in all areas where we find an <a..> tag. At least for the “list template current” and “list template non-current”. I also added it for the sub list entries, even if my navbar doesn’t use sub lists.

apex51_clonesession_template

Don’t forget to add a description “Link Attributes” for the newly added attribute value in the “attribute description” section (scroll down a bit to see it).

Then enhance our existing navbar entry with target="_blank"

apex51_clonesession_linkattributes

There is a tiny difference between using target="_blank" and target="someothername".

target="_blank" will always create a new tab.

target="someothername" will open a new tab on the first click. Consecutive clicks however will reuse this same tab. This can be useful if you want to prevent your users from constantly cloning session after session after session.

 

And finally make sure that our application uses the new template for showing the navbar list

Shared Components/User Interface Attributes/Desktop/Navigation bar/List Template

apex51_clonesession_userinterface

 

Result

The navigation bar then could look similar to this here:

apex51_clonesession_navbar_de

Clicking on the “clone session” link will open a new tab in the same browser. In the url we will see that a different session id was created.

The new session will have all the same item values (page items, application items, etc.) as the previous session had. But from this point on, the two sessions will start to differ.

Both sessions will use the same browser cookie. For that reason if one session logs out, then the other session will be logged out too.

 

How to upgrade from Apex 5.0 to 5.1

Preparation

In preparation for the upgrade I recommend several steps and checks.

  • Make sure you have a working database backup and just in case a DBA at hand who would be able to restore your database, tablespace or schemas.
  • Workspace clean up – delete applications that are not needed anymore. Especially copies of other applications that were created to test a specific feature or do a proof of concept. Be careful about whether you also want to run any included deinstallation scripts. For an Apex update in general you don’t want to delete the connected tables.
  • Software Download – Apex 5.1 can be downloaded from the OTN download page: www.oracle.com/technetwork/developer-tools/apex/downloads/index.html
  • Backup Application – Export the application including all private saved reports
  • Export supporting objects like Theme, Static Application files
  • Backup Image folder – if your image folder is /i/ then make a copy and rename the copy to /i50/
  • Check the Apex 5.1 known issues page:  http://www.oracle.com/technetwork/developer-tools/apex/downloads/apex-51-known-issues-3408261.html

Apart from downloading the new Apex version, none of these steps is really required. But it gives a nice and cozy feeling to be sure you are able to go back.

In Apex 5.1 some features are deprecated and some options did change. It is possible to prepare your application to anticipate the effects of the upgrade. I will cover this in a separate blog post. More importantly read through the “changed behaviour”, “deprecated” and “desupported” sections of the installation manual (https://docs.oracle.com/database/apex-5.1/HTMRN/toc.htm#HTMRN-GUID-8BDFB50B-4EC6-4051-A9B6-7D5805E00C4E ).

Here are some things to consider already in Apex 5.0.

  • Apex.Server plugin
    • does not return an jqXHR object anymore
    • async option deprecated
    • htmldb_getobject desupported => replace with apex.server
  • old Apex themes deprecated
  • check for CANCEL or BACK or PREVIOUS buttons (page redirect) with execute validations = YES. These will do client side validation in Apex 5.1. If that is not wanted, change it to NO.
  • jsTree plugin deprecated
  • classic reports
    • hidden column type => hidden column
    • no enterable fields!
  • file browse storage => switch from WWV_FLOW_FILES to APEX_APPLICATION_TEMP_FILES
  • desupported attributes or types
    • page: body header, include standard javascript and css
    • region: svg charts, simple chart, classic tree
    • button: pre text, post text
    • item: start and stop grid layout, file browse storage type
  • Conditions deprecated: text= value, text != value, text is (not) contained in item
  • No more: Save state before branching
  • apex_plsql_job package desupported
  • check if you reference the internal hidden fields  (renamed in Apex 5.1): pPageChecksum => pPageItemsProtected, p_md5_checksum=>pPageItemsRowVersion
  • date picker (classic) deprecated
  • several updated javascript libraries

Decide about the upgrade path

Now consider whether you want to do a traditional upgrade (all steps in one go) or if you want to minimize the application downtime (several steps, not for CDB$ROOT installations). Or as oracle calls it: “maximize application uptime”.

To minimize downtime read this chapter in the documentation: https://docs.oracle.com/database/apex-5.1/HTMRN/toc.htm#HTMRN-GUID-411DE0D8-59E1-4267-8751-FD56305ABE4E

Upgrade in one go

Only one step is needed:

Create database schemas and database objects (tables, packages) and do the application migrations.

@apexins.sql tablespace_apex tablespace_files tablespace_temp images

Upgrade with maximum Uptime

Instead of running a single script, we can do the upgrade in several steps. Only during the 3rd step, the end users need to be disconnected. This third step took only 1.01 seconds on my system.

The upgrade of an Application Express instance runs in four phases:

  1. Create database schemas and database objects (tables, packages).
    This essentially creates the APEX_050100 schema.
    -> no influence on running sessions
  2. Migrate application metadata.
    This copies the repository application data from APEX_050000 into APEX_050100.
    To help with that some upgrade triggers were previously installed.
    ->  developers can’t work anymore
  3. Migrate data that runtime applications modify and switch to the new version.
    -> downtime for all (developers and end users)
  4. Migrate additional log and summary data (this step starts automatically)
    -> no influence on running sessions

But we need zero downtime – is it possible?

I’m convinced it is possible to achieve a downtimeless application upgrade using the EBR (edition based redefinition) feature of the oracle database. I have extensive knowledge using EBR, even inside Apex. However so far I didn’t have the time to do a proof of concept (POC) for the upgrade. Also this would (currently) be an unsupported action. The change would include tweaking several non editioned objects (public synonyms, session contexts, registry data) in such a way that they show up differently when used inside an edition.

If any Germany based customer or the Oracle Apex Team itself is interested how to do this and is willing to pay for the time I need to invest in this, then please contact me.

do the upgrade

Unzip the Apex_5.1.zip file into an appropriate folder and navigate to the apex_5.1/apex folder.

If you decide for the “Maximum Uptime” upgrade path, then three scripts need to run. And ORDS needs to be stopped for script 3. To run the scripts we need to know the tablespace names and the image path.

Find the tablespace

The documentation gives examples using the SYSAUX tablespace. I do not recommend that. Apex should have its own tablespace.

select username, default_tablespace, temporary_tablespace, profile, oracle_maintained
from dba_users
where regexp_like(username,'^(APEX_|ORDS_)');

This shows only the default setting. We can reuse the same tablespace. But it is also possible to install apex 5.1 into a new tablespace. If you want to do that, then this new tablespace needs to be created first.

Sometimes we want to see if the data is really in this default tablespace. Here is a select that will show the data distribution and also how much space is used.

select owner as schema, tablespace_name as data_tbs, nvl(segment_type,' - total -') segment_type, round(sum(bytes)/1024/1024,2) size_in_MB
from dba_extents
where regexp_like(owner,'^(APEX_|ORDS_)')
group by owner, tablespace_name, rollup(segment_type)
;

Example result

SCHEMA        DATA_TBS SEGMENT_TYPE SIZE_IN_MB
APEX_050000   APEX     INDEX        239,69
APEX_050000   APEX     TABLE        267,19
APEX_050000   APEX     LOBINDEX     12,75
APEX_050000   APEX     LOBSEGMENT   240,31
APEX_050000   APEX     - total -    759,94
ORDS_METADATA ORDS     INDEX        4,5
ORDS_METADATA ORDS     TABLE        1,63
ORDS_METADATA ORDS     LOBINDEX     0,19
ORDS_METADATA ORDS     LOBSEGMENT   0,38
ORDS_METADATA ORDS     - total -    6,69

This is an example from one of my apex environments. As you can see, only one Apex tablespace “APEX” is used. Approximately the same amount of data is in tables, in indexes and in LOBs. The LOBSEGMENT size indicates that there had been some wwv_flow_files activity going on.

Run the scripts

The scripts are located in the apex subfolder. For example D:/product/apex/apex_5.1/apex.

Navigate to that folder and start an sqlplus session with sys as sysdba. If you are in a CDB/PDB environment, connect to the PDB, not to CDB$ROOT. To connect to the PDB, the service name needs to be provided. Because of that, a running listener and a matching tnsnames.ora file are also needed.
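
For example (host, port and service name are placeholders for your environment):

sqlplus sys@//dbhost:1521/mypdb as sysdba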

Assuming the following settings:

  • tablespace for apex and apex files: APEX
  • temp tablespace: TEMP
  • image directory: /i/

Then those 3 scripts need to run.

@apexins1.sql APEX APEX TEMP /i/
@apexins2.sql APEX APEX TEMP /i/

stop ORDS

@apexins3.sql APEX APEX TEMP /i/

restart ORDS

Phase 4 is automatically started by running a dbms_scheduler job.
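
If you want to watch phase 4, here is a quick sketch against the scheduler views; the assumption that the job name contains APEX may need adjustment on your system:

select job_name, state, last_start_date
from dba_scheduler_jobs
where job_name like '%APEX%';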

Check privs and synonyms

Sometimes we grant extra access from the Apex schema to our own custom schema. For example, in one application I extended the Apex feedback functionality and used the view APEX_TEAM_FEEDBACK. Such changes need to be moved to the new APEX_050100 schema.

ACLs

Here is just how to check if the Apex 5.0 schema has any network ACLs set.

SELECT *
FROM DBA_HOST_ACES xe
where principal = 'APEX_050000';

I created a script to duplicate all ACEs that exist for APEX_050000 to APEX_050100.

The script is shown and explained in a separate blog post (see “copy ACLs during Upgrade to Apex 5.1” further below).


Grants

Here is just how to check if the Apex 5.0 schema has any objects granted to other schemas.


select *
from all_tab_privs
where grantor like 'APEX\_%' escape '\'
and grantee not in ('PUBLIC','SYS','SYSTEM');

It will show objects like tables, views and packages that have privileges granted directly.

To see only the missing grants you can run the following statement. If it returns no rows, then you are fine.


select GRANTEE , TABLE_SCHEMA , TABLE_NAME , PRIVILEGE , GRANTABLE ,HIERARCHY
from all_tab_privs
where grantor in ('APEX_050000','APEX_050100')
and not regexp_like (grantee,'^(APEX_|ORDS_|SYSTEM$|PUBLIC$|SYS$)')
group by GRANTEE , TABLE_SCHEMA , TABLE_NAME , PRIVILEGE , GRANTABLE ,HIERARCHY
having count(*) = 1 and min(grantor) = 'APEX_050000';

Those grants probably need to be copied to the new APEX_050100 schema.
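
As a starting point, the following sketch generates the missing grant statements. It assumes that each granted object exists under the same name in the new schema, so review the generated statements before running them:

select 'grant '||privilege||' on APEX_050100.'||table_name
       ||' to '||grantee
       ||case when grantable = 'YES' then ' with grant option' end
       ||';' as grant_stmt
from all_tab_privs
where grantor = 'APEX_050000'
and not regexp_like(grantee,'^(APEX_|ORDS_|SYSTEM$|PUBLIC$|SYS$)');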

Search Source code

You should also search the complete application source code for any references to APEX_050000. This should be done after the migration.
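
For the plsql code in the database, a simple sketch over dba_source can help; the schema name OUR_APP_SCHEMA is a placeholder:

select owner, name, type, line, text
from dba_source
where owner = 'OUR_APP_SCHEMA'
and upper(text) like '%APEX_050000%';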

downgrade to 5.0

The section on how to downgrade an application back to 5.0 is currently missing from the documentation. Here is an official blog post on how to do it: http://jastraub.blogspot.de/2017/01/ooops-i-did-it-again.html

This is the sql script that Jason Straub published to do the downgrade:

alter session set current_schema = SYS;

@wwv_flow_val.sql
@wwv_flow_val.plb 

begin
 dbms_utility.compile_schema('APEX_050000');
end;
/ 

set define '^'
@validate_apex x x APEX_050000

begin
 for i in ( select owner, trigger_name
 from sys.dba_triggers
 where owner = 'APEX_050000'
 and trigger_name like 'WWV_FLOW_UPGRADE_%'
 order by 1 )
 loop
 sys.dbms_output.put_line('Dropping trigger '||i.owner||'.'||i.trigger_name);
 execute immediate 'drop trigger '||i.owner||'.'||i.trigger_name;
 end loop;
end;
/

ALTER SESSION SET CURRENT_SCHEMA = APEX_050000;
exec apex_050000.wwv_flow_upgrade.switch_schemas('APEX_050100','APEX_050000');

ALTER SESSION SET CURRENT_SCHEMA = SYS;
drop context sys.APEX$SESSION;
create context sys.APEX$SESSION using APEX_050000.WWV_FLOW_SESSION_CONTEXT;
declare
 l_apex_version varchar2(30);
begin
 l_apex_version := apex_050000.wwv_flows_release;
 dbms_registry.downgrading('APEX','Oracle Application Express','validate_apex','APEX_050000');
 dbms_registry.downgraded('APEX',l_apex_version);
 validate_apex;
end;
/

After the downgrade, the same dictionary checks as during the upgrade can be used to verify the tablespaces, schemas and space usage.

select * from dba_tablespaces;

select username, default_tablespace, temporary_tablespace, profile, oracle_maintained
from dba_users
where regexp_like(username,'^(APEX_|ORDS_)');

select owner as schema, tablespace_name as data_tablespace, nvl(segment_type,' - total -') segment_type, round(sum(bytes)/1024/1024,2) size_in_MB
from dba_extents
where regexp_like(owner,'^(APEX_|ORDS_)')
group by owner, tablespace_name, rollup(segment_type)
;

copy ACLs during Upgrade to Apex 5.1

The following works in 12c only. In previous database versions the package to set ACLs had different modules; those are now deprecated. The script does not call any deprecated functions.

You can see the ACLs/ACEs by checking the data dictionary.

select * from DBA_HOST_ACES where principal like 'APEX_%';

This little script checks the network ACLs for the Apex 5.0 schema and copies them to the Apex 5.1 schema. It will not delete any ACLs, but use it at your own risk. It commits automatically.


declare
  /* Author:  Sven Weller
     Company: syntegris information solutions GmbH
     Purpose: Transfer Network ACLs from APEX_050000 to APEX_050100 schema
     Created: 11.01.2017
  */
  v_source_schema  varchar2(30) := 'APEX_050000';
  v_target_schema  varchar2(30) := 'APEX_050100';

  v_ace        xs$ace_type;
  v_host       DBA_HOST_ACES.host%type;
  v_lower_port DBA_HOST_ACES.lower_port%type;
  v_upper_port DBA_HOST_ACES.upper_port%type;
BEGIN
  for apex50acls in (
    SELECT xe.*
          ,row_number() over (partition by host, principal, lower_port, upper_port, start_date, end_date, grant_type, inverted_principal, principal_type
                              order by ace_order, privilege) privlist#
          ,dense_rank() over (order by host, principal, lower_port, upper_port, start_date, end_date, grant_type, inverted_principal, principal_type) group#
    FROM DBA_HOST_ACES xe
    where principal = v_source_schema
    and (xe.host, xe.lower_port, xe.upper_port, xe.start_date, xe.end_date, xe.grant_type, xe.inverted_principal, xe.principal_type)
        not in (select t.host, t.lower_port, t.upper_port, t.start_date, t.end_date, t.grant_type, t.inverted_principal, t.principal_type
                from DBA_HOST_ACES t
                where t.principal = v_target_schema)
    order by group#, privlist#
  ) loop

    if apex50acls.group# > 1 and apex50acls.privlist# = 1 then
      -- store the old acl
      DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
        host        => v_host,
        lower_port  => v_lower_port,
        upper_port  => v_upper_port,
        ace         => v_ace);
    end if;

    if apex50acls.privlist# = 1 then -- first time
      -- prepare the new acl
      v_ace := xs$ace_type(
                 privilege_list => xs$name_list(apex50acls.privilege),
                 principal_name => v_target_schema,
                 principal_type => case apex50acls.principal_type
                                     when 'APPLICATION' then xs_acl.ptype_xs
                                     when 'DATABASE'    then xs_acl.ptype_db
                                     when 'EXTERNAL'    then xs_acl.ptype_external
                                   end,
                 granted    => apex50acls.grant_type = 'GRANT',
                 inverted   => apex50acls.inverted_principal = 'YES',
                 start_date => case when apex50acls.start_date < systimestamp then systimestamp
                                    when apex50acls.start_date > systimestamp then apex50acls.start_date
                               end,
                 end_date   => apex50acls.end_date
               );
      v_host       := apex50acls.host;
      v_lower_port := apex50acls.lower_port;
      v_upper_port := apex50acls.upper_port;
    else
      -- add a new privilege
      v_ace.privilege_list.extend;
      v_ace.privilege_list(apex50acls.privlist#) := apex50acls.privilege;
    end if;

  end loop;

  if v_host is not null then
    -- store final ace
    DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
      host        => v_host,
      lower_port  => v_lower_port,
      upper_port  => v_upper_port,
      ace         => v_ace);
  end if;
END;
/


about JET Diagrams (JET v2.2.0) in Apex 5

This is a follow-up to my older blog post “Integrate Oracle JET into Apex 5“.

Oracle JET Diagrams are a new data visualization type in Oracle JET 2.1.0.

This post is organized into three mostly independent parts

  1. How to setup Oracle JET v2.2.0 for usage in Apex
  2. How to copy Oracle JET Container Diagrams from the cookbook into Apex
  3. Using Oracle JET Diagrams with container layout


How to setup Oracle JET v2.2.0 for usage in Apex

Step 1) Download the base distribution

From the download page (http://www.oracle.com/technetwork/developer-tools/jet/downloads/index.html) choose the base distribution and download this zip file.

Step 2) Unzip JET into the APEX image folder

Copy and unzip the file into a folder inside your apex image path. Where you put it is your own choice. I prefer to add it to the library path where Oracle JET will also be located in the Apex 5.1 distribution (/libraries/oraclejet/2.0.2).

You can choose a very similar path “/libraries/oraclejet/2.2.0”. Create this path and unzip the file there.

The next time apex is upgraded, remember not to move the image folder but simply to overwrite it (make a copy of the original before doing that).

Step 3) Create, manipulate and deploy the main.js file

The basis for this should always be the main-template.js file from the subfolder \js\libs\oj\v2.2.0. This template has the correct paths and versions for all sub modules that are included in the main.js.

Additionally we can add a base-url that points to the folder where we unzipped JET. If we put the main.js file in the js folder, then this is not needed. But we will come back to that base-url later. So for JET version 2.2.0 the complete main.js file will look like this.


/**
 * Example of Require.js bootstrap javascript
 */

requirejs.config({
  // Path mappings for the logical module names
  paths: {
    'knockout': 'libs/knockout/knockout-3.4.0',
    'jquery': 'libs/jquery/jquery-3.1.0.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.12.0.min',
    'ojs': 'libs/oj/v2.2.0/min',
    'ojL10n': 'libs/oj/v2.2.0/ojL10n',
    'ojtranslations': 'libs/oj/v2.2.0/resources',
    'text': 'libs/require/text',
    'promise': 'libs/es6-promise/es6-promise.min',
    'hammerjs': 'libs/hammer/hammer-2.0.8.min',
    'signals': 'libs/js-signals/signals.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min',
    'css': 'libs/require-css/css.min',
    'customElements': 'libs/webcomponents/CustomElements.min',
    'proj4': 'libs/proj4js/dist/proj4'
  },
  // Shim configurations for modules that do not expose AMD
  shim: {
    'jquery': {
      exports: ['jQuery', '$']
    }
  },

  // This section configures the i18n plugin. It is merging the Oracle JET built-in translation
  // resources with a custom translation file.
  // Any resource file added, must be placed under a directory named "nls". You can use a path mapping or you can define
  // a path that is relative to the location of this main.js file.
  config: {
    ojL10n: {
      merge: {
        //'ojtranslations/nls/ojtranslations': 'resources/nls/myTranslations'
      }
    },
    text: {
      // Override for the requirejs text plugin XHR call for loading text resources on CORS configured servers
      useXhr: function (url, protocol, hostname, port) {
        // Override function for determining if XHR should be used.
        // url: the URL being requested
        // protocol: protocol of page text.js is running on
        // hostname: hostname of page text.js is running on
        // port: port of page text.js is running on
        // Use protocol, hostname, and port to compare against the url being requested.
        // Return true or false. true means "use xhr", false means "fetch the .js version of this resource".
        return true;
      }
    }
  }
});

/**
 * A top-level require call executed by the Application.
 * Although 'ojcore' and 'knockout' would be loaded in any case (they are specified as dependencies
 * by the modules themselves), we are listing them explicitly to get the references to the 'oj' and 'ko'
 * objects in the callback.
 *
 * For a listing of which JET component modules are required for each component, see the specific component
 * demo pages in the JET cookbook.
 */
require(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojtoolbar', 'ojs/ojmenu'], // add additional JET component modules as needed
  function(oj, ko, $) // this callback gets executed when all required modules are loaded
  {
    // add any startup code that you want here
  }
);

Step 4) Reference the main.js file in the page template

Referencing the main.js file in the page template works the same way as described in the older blog post “Integrate Oracle JET into Apex 5“, so it is not repeated here.

How to copy Oracle JET Container Diagrams from the cookbook into Apex

The JET cookbook demo can be found here. The interactivity in this visualization is charming. We can organize nodes into containers and expand or collapse those containers.

Step 1) Copy the html and the js code from the cookbook to our page

Step 2) Add the require.config call

This time we add a base URL.

requirejs.config({
  baseUrl: '#IMAGE_PREFIX#libraries/oraclejet/js',
  // Path mappings for the logical module names
  paths: {
    'knockout': 'libs/knockout/knockout-3.4.0',
    'jquery': 'libs/jquery/jquery-3.1.0.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.12.0.min',
    'ojs': 'libs/oj/v2.2.0/min',
    'ojL10n': 'libs/oj/v2.2.0/ojL10n',
    'ojtranslations': 'libs/oj/v2.2.0/resources',
    'text': 'libs/require/text',
    'promise': 'libs/es6-promise/es6-promise.min',
    'hammerjs': 'libs/hammer/hammer-2.0.8.min',
    'signals': 'libs/js-signals/signals.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min',
    'css': 'libs/require-css/css.min',
    'customElements': 'libs/webcomponents/CustomElements.min',
    'proj4': 'libs/proj4js/dist/proj4'
  },
  // Shim configurations for modules that do not expose AMD
  shim: {
    'jquery': {
      exports: ['jQuery', '$']
     }
    },

// This section configures the i18n plugin. It is merging the Oracle JET built-in translation
// resources with a custom translation file.
// Any resource file added, must be placed under a directory named "nls". You can use a path mapping or you can define
// a path that is relative to the location of this main.js file.
    config: {
        ojL10n: {
            merge: {
                //'ojtranslations/nls/ojtranslations': 'resources/nls/myTranslations'
            }
        }
    }
});

Step 3) Find out why it is not working yet

The only remaining resource that could not be loaded should be the diagramLayouts/DemoContainerLayout.js file. The reason is simple: this file is not included in the base zip file. However, we can get it directly from the JET cookbook page.

First we open the cookbook in standalone mode. There is a button in the upper right corner that helps us to do so.

jet_diagram_launch_standalone

Then we inspect the network files again and locate the DemoContainerLayout.js. We can simply copy the address and save the file to our system.

jet_diagram_copy_layoutfile


Step 4) Copy and integrate the layout file

To integrate this layout file into our page I chose a slightly different approach. This is not a file that is relevant for a default Oracle JET installation. Instead I’d like to add it specifically to my application. This way I can modify the file and influence the behaviour of my diagram without changing anything for other applications.

So we upload it as a static application file (in my case with a directory “oraclejet”).

And we reference the file directly in the require call. Here the suffix “.js” is important. It tells require that this is a direct file reference and not an alias name for a previously defined resource.

require(['ojs/ojcore', 'knockout', 'jquery', '#APP_IMAGES#oraclejet/DemoContainerLayout.js',
         'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojdiagram'], function(oj, ko, $, layout) { …


Using Oracle JET Diagrams with container layout

An Oracle JET diagram is essentially a graph. It consists of nodes and links between the nodes. The container diagram has the additional possibility to organize nodes into a hierarchy. Other layouts have similar possibilities but choose to render it completely differently.

Which layout to use is configured in the attributes of the ojDiagram component (View) and inside the javascript Model.

ojet_diagram_layout1
ojet_diagram_layout2

The container layout has only very limited drawing possibilities. Nodes are rectangles and links are lines.

The main nodes (containers) are always drawn horizontally from left to right. Child nodes are always drawn vertically from top to bottom and inside their parent container. All nodes that have child nodes are considered containers and can potentially be expanded or collapsed.

Links that connect nodes that are side by side are attached to the left or right side of the nodes. Links that connect nodes that are above or below each other connect to the top and bottom part of a node.

This very simple drawing approach allows for some nice small visualizations. For example we can easily present process flows with that. If we want to draw huge networks, then another layout will be more appropriate.

How to change descriptions

Nodes have several properties that can be set. A complete list can be found in the ojDiagram doc.

  • id ==> will uniquely identify a node. It will also be used as StartNode and EndNode in the link properties.
  • label ==> the text that is printed inside the node.
  • shortDesc ==> a small description that is shown as a tooltip when hovering over a node

The cookbook uses a small function to simplify node creation. But we can also create a node using direct json syntax.

this.nodes.push({
  id: "id",
  label: "label",
  shortDesc: "shortDesc",
  nodes: null
});


How to color the nodes

All nodes have a default style. The default is a kind of greyish background. We can change the backgroundStyle property for our node.

this.nodes[0].nodes[0].nodes[0].backgroundStyle =
  'height:20px;width:60px;border-color:#444444;' +
  'background-color:#00FF80;border-width:.5px;border-radius:8px';

This colors the first child of the first child in the first container to green and rounds the corners.

We can also simply set the background color, without setting all the other properties. For example for the second child in the first container.

this.nodes[0].nodes[1].backgroundStyle = "background-color:red";

It is possible to add images or shapes to our diagram. We can position them in the middle, left or right inside a node. This line will put a small yellow “human” in node N1.

this.nodes[1].icon = {width: 10, height: 10, halign: "right", 
shape: "human", color:"yellow", borderColor:"grey"};

The following shapes are predefined.

square, plus, diamond, triangleUp, triangleDown, 
human, rectangle, star, circle

It is possible to create custom shapes by providing an SVG path. Or we can add images instead of a shape. However, this post is too small to explain that in more detail.

Next I show how to create a custom gradient fill. There are two steps to do so.
First create the SVG fill gradient:

<svg height="0" width="0">
    <defs>
      <linearGradient id="gradient" x1="0%" y1="100%">
        <stop offset="0%" style="stop-color: #66ccff"></stop>
        <stop offset="80%" style="stop-color: #0000FF"></stop>
      </linearGradient>
    </defs>
  </svg>

Then add this gradient to the node.

this.nodes[0].containerStyle = {fill: "url(#gradient)"};

And the combined result looks like this. It certainly is not pretty, but it shows what is possible using a little imagination.

ojet_diagram_colored

Further readings: JET custom shapes and image markers

The source code for this coloring example can be copied into the JET cookbook page.

The HTML part

<div id='diagram-container'>
  <svg height="0" width="0">
    <defs>
      <linearGradient id="gradient" x1="0%" y1="100%">
        <stop offset="0%" style="stop-color: #66ccff"></stop>
        <stop offset="80%" style="stop-color: #0000FF"></stop>
      </linearGradient>
    </defs>
  </svg>
  <div id="diagram" data-bind="ojComponent: {
         component: 'ojDiagram',
         layout: layoutFunc,
         animationOnDataChange: 'auto',
         animationOnDisplay: 'auto',
         maxZoom: 2.0,
         selectionMode: 'single',
         styleDefaults : styleDefaults,
         nodes : nodes,
         links : links,
         expanded: expanded
       }"
       style="max-width:800px;width:100%; height:600px;"></div>
</div>

The javascript part


require(['ojs/ojcore', 'knockout', 'jquery', 'diagramLayouts/DemoContainerLayout',
         'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojdiagram'], function(oj, ko, $, layout) {
  function model(data) {
    var self = this;
    self.layoutFunc = layout.containerLayout;
    function createNode(id, nodes) {
      return {
        id: id,
        label: id,
        shortDesc: "Node " + id,
        nodes: nodes ? nodes : null
      };
    }
    function createLink(id, startId, endId) {
      return {
        id: id,
        startNode: startId,
        endNode: endId,
        shortDesc: "Link " + id + ", connects " + startId + " to " + endId
      };
    }
    this.expanded = ['N0', 'N00'];
    this.nodes = [];
    this.links = [];
    var childNodesN00 = [createNode("N000"), createNode("N001")];
    var childNodesN0 = [createNode("N00", childNodesN00), createNode("N01"), createNode("N02")];
    var childNodesN2 = [createNode("N20"), createNode("N21"), createNode("N22")];
    this.nodes.push(createNode("N0", childNodesN0));
    this.nodes.push(createNode("N1"));
    this.nodes.push(createNode("N2", childNodesN2));
    this.nodes.push(createNode("N3"));

    this.nodes[0].nodes[0].nodes[0].backgroundStyle = 'height:20px;width:60px;border-color:#444444;background-color:#00FF80;border-width:.5px;border-radius:8px';
    this.nodes[0].nodes[1].backgroundStyle = "background-color:red";
    this.nodes[1].icon = {width: 10, height: 10, halign: "right", shape: "human", color: "yellow", borderColor: "grey"};
    this.nodes[0].containerStyle = {fill: "url(#gradient)"};

    // disable selection on some containers
    this.nodes[0].selectable = 'off';
    this.nodes[0].nodes[0].selectable = 'off';

    // create the links
    this.links.push(createLink("L0", "N2", "N3"));
    this.links.push(createLink("L1", "N1", "N21"));
    this.links.push(createLink("L2", "N1", "N22"));
    this.links.push(createLink("L3", "N000", "N1"));
    this.links.push(createLink("L4", "N001", "N1"));
    this.links.push(createLink("L5", "N02", "N1"));
    this.links.push(createLink("L6", "N000", "N001"));

    this.styleDefaults = {
      nodeDefaults: {
        containerStyle: "border-color:#abb3ba;background-color:#f9f9f9;border-width:.5px;border-radius:1px;padding-top:20px;padding-left:10px;padding-bottom:10px;padding-right:10px;",
        labelStyle: "color:#252525;font-size:8px;font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-weight:normal;font-style:italic",
        backgroundStyle: 'height:20px;width:60px;border-color:#444444;background-color:#f9f9f9;border-width:.5px;border-radius:1px',
        icon: null
      },
      linkDefaults: {startConnectorType: "circle", endConnectorType: "arrow"}
    };
  }
  $(document).ready(
    function() {
      ko.applyBindings(new model(),
        document.getElementById('diagram-container'));
    }
  );
});

How to modify links

Modifying links is very similar to modifying nodes. One main difference, however, is the definition of the “arrows” on each side of the link. Usually we want all links to look the same. So instead of changing the properties of each single link, we just switch the default behaviour.

The following line will make the links look like simple arrows.

linkDefaults: {startConnectorType: "none", endConnectorType: "arrow"}

Also for static diagrams I prefer to give each link a proper description (shortDesc).


How to add interactivity

Back to our apex application. The goal here is to click on a node (or a link) and to show a specific Apex region that corresponds to the selection.

First we allow a node to be selected. The diagram layout can do “single” or “multiple” selections. To allow this, we add the selectionMode: ‘single’ property to our view. And since we want to work with the selected parts later, we also add selection: selectedNodes.

This selectedNodes array then needs to be defined as a property in our model (see the javascript part below).

Then we prepare our apex page and put some “apex connector logic” in place.

We create a region for each node that we want to interact with.

The region gets a static id R_DETAILS_XXX where XXX is the ID of the node and it gets a custom attribute

style="display: none;"

As a result we know the ID of each region, and the region will be rendered but not displayed. With that in place we add a small function showDetails to the page. It will show one region and hide another (the previous) one.


function showDetails(showNodes, hideNodes) {
  console.log("ShowDetails=" + showNodes);
  if (hideNodes !== "") {
    $("#R_DETAILS_" + hideNodes).hide();
  }

  $("#R_DETAILS_" + showNodes).show();
}


The JET and knockout binding will then be done using the optionChange property.

We add a function to react to the change of a selection. The “value” and the “previousValue” will then hold the ID of the node (or link). If we choose to do multiple selections, it can be an array of nodes.

Html

<div id="diagram" data-bind="ojComponent: {
component: 'ojDiagram',
layout: layoutFunc,
selection: selectedNodes,
selectionMode: 'single',
styleDefaults : styleDefaults,
nodes : nodes,
links : links,
optionChange: diagramOptionChange
}"
style="max-width:800px;width:100%; height:600px;"></div>

Javascript


// set default selection
this.selectedNodes = ['N000'];

// disable selection on some containers
this.nodes[0].selectable = 'off';
this.nodes[0].nodes[0].selectable = 'off';

self.diagramOptionChange = function (event, data) {
  console.log("optionchanged=" + data.option);
  if (data['option'] == 'selection') {
    showDetails(data['value'], data['previousValue']);
  }
};


Further reading:

Data visualization blog: A guide to diagrams (part 9)


adaptive cursor sharing and DBMS_SQL

A recent post in the OTN forum mentioned that DBMS_SQL does not use bind peeking for bind variables. I couldn’t believe that, so I decided to do some tests myself. The findings are strange…

This is potentially relevant for Apex developers, since the Apex engine uses DBMS_SQL. I still have to do further testing to check the behaviour in Apex.

First I set up some tests to show bind peeking and adaptive cursor behaviour using normal statements in SQL*Plus or SQL Developer. After that we move to dynamic SQL, especially DBMS_SQL, and try the same again.

scenario setup

create skewed testdata

--drop table demo_big;
create table demo_big as
select level as id, 
       case when mod(level,10000)=0 
            then 'VALID' 
            else 'INVALID' 
       end as status
from dual
connect by level <= 1000000;

desc demo_big;

Name   Null Type
------ ---- -----------
ID          NUMBER
STATUS      VARCHAR2(7)

select status, count(*) 
from demo_big 
group by rollup(status);
STATUS     COUNT(*)
INVALID    999900
VALID      100
           1000000

So we have a few VALID values and a lot of INVALID ones.

Even though we have only two different values, an index will be useful on this column. The data distribution is so skewed that any access trying to read the VALID values will profit from the index. However, when we access the INVALID values we don’t want to use the index and instead want a full table scan.

-- create indexes on all the important columns
create unique index demo_big_id_ix on demo_big(id);
create index demo_big_status_ix on demo_big(status);

create statistical data(histograms)

First we create the statistics so that the optimizer knows what is in that table and what the data looks like.

-- create statistics and test histogram
execute dbms_stats.gather_table_stats(user, 'DEMO_BIG', method_opt=>'for all indexed columns size skewonly');

Then we check the data dictionary to see what has been created so far.
The hist_numtochar2 function is copied from Martin Widlake (source: https://mwidlake.wordpress.com/2009/08/11/). It just helps to do a crude translation of the numerical histogram bucket endpoints. The code of the function can be found at the end of this post.

I don’t show the results from all selects, only from the last one. The other selects are here just as references. They are helpful to see what kind of statistics are in place.

select table_name, num_rows, blocks, last_analyzed
from user_tables
where table_name = 'DEMO_BIG';

select table_name, column_name, num_distinct, histogram, num_buckets, sample_size 
from user_tab_columns
where table_name = 'DEMO_BIG';

select *
from user_histograms
where table_name = 'DEMO_BIG' and column_name = 'STATUS';

select table_name, column_name, endpoint_number, endpoint_value, hist_numtochar2(endpoint_value) as translated_value
from user_histograms
where table_name = 'DEMO_BIG' and column_name = 'STATUS';

Here we see a frequency histogram with two buckets for the column STATUS.

TABLE     COLUMN  ENDPOINT_NUMBER  ENDPOINT_VALUE                         TRANSLATED_VALUE
DEMO_BIG  STATUS  999900           380626532452853000000000000000000000  INVALJ*
DEMO_BIG  STATUS  1000000          447861930473196000000000000000000000  VALID

The first bucket holds 999900 values where status = INVALID.
The next bucket holds 1000000 - 999900 = 100 values where status = VALID.

This of course matches exactly what we created. So the statistical info in the dictionary is absolutely correct.

Tests

Now that our setup is in place, we can do some basic testing to see different plans.

check execution plan with LITERALS

-- test different cursor/execution plan using plain selects
select count(*) from demo_big where status = 'VALID';
select * from table(dbms_xplan.display_cursor);
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("STATUS"='VALID')
select count(*) from demo_big where status = 'INVALID';
select * from table(dbms_xplan.display_cursor);
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("STATUS"='INVALID')

Perfect! As expected one does an index access / index range scan, the other does a full table scan.

check execution plan with BIND parameters

select count(*) from demo_big where status = :P_ENTER_VALID;
select * from table(dbms_xplan.display_cursor);
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("STATUS"=:P_ENTER_VALID)
select count(*) from demo_big where status = :P_ENTER_INVALID;
select * from table(dbms_xplan.display_cursor);
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("STATUS"=:P_ENTER_INVALID)

The two statements are not identical because the names of the bind parameters differ. Because of that we get two different cursors, each with a different execution plan.
This test shows that bind peeking works. During the hard parse phase the value of the bind parameter was checked (peeked), so the correct estimates for the resulting rows/cardinalities were made, which in turn led to the correct plan for each of the two different statements. However, this first parameter value “freezes” the execution plan, so if we change the bind value, the same plan is reused.

This behaviour was enhanced in 11g with the introduction of adaptive cursor sharing and got steadily improved since then.

To test the adaptive behaviour we run the first query again a few times (at least 4 times). But this time we do not pass VALID, but instead INVALID as the parameter.
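
In SQL*Plus the rerun can look like this sketch:

variable P_ENTER_VALID varchar2(10)
exec :P_ENTER_VALID := 'INVALID'

select count(*) from demo_big where status = :P_ENTER_VALID;
select count(*) from demo_big where status = :P_ENTER_VALID;
select count(*) from demo_big where status = :P_ENTER_VALID;
select count(*) from demo_big where status = :P_ENTER_VALID;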

After that we can see a new child cursor 1 for the sql_id “7rjdcm7v7hfrs”.

select is_bind_sensitive, is_bind_aware, sql_id, child_number, sql_text
from v$sql
where upper(sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'  and sql_text not like '%v$sql%'
;
IS_BIND_SENSITIVE IS_BIND_AWARE SQL_ID        CHILD_NUMBER SQL_TEXT
Y                 N             7rjdcm7v7hfrs 0            select count(*) from demo_big where status = :P_ENTER_VALID
Y                 Y             7rjdcm7v7hfrs 1            select count(*) from demo_big where status = :P_ENTER_VALID
Y                 N             5zkmtfj331xmc 0            select count(*) from demo_big where status = :P_ENTER_INVALID
select * from table(dbms_xplan.display_cursor('7rjdcm7v7hfrs',0));
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("STATUS"=:P_ENTER_VALID)

select * from table(dbms_xplan.display_cursor('7rjdcm7v7hfrs',1));
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter("STATUS"=:P_ENTER_VALID)

This is adaptive behaviour. After a few bad tries a second execution plan is created for the same cursor and used from then on. How many tries are needed? Often the plan changes on the third try, but it can happen that more are needed.

Test with DBMS_SQL

Now comes the more difficult part. We set up a small plsql block that uses DBMS_SQL to run the same statement again with bind parameters.

-- testcase for BIND peeking/aware using DBMS_SQL
declare
  curid    NUMBER;
  ret      INTEGER;
  sql_stmt VARCHAR2(200);
begin
  sql_stmt := 'select count(*) from demo_big where status = :P_STATUS';

  -- get cursor handle
  curid := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(curid, sql_stmt, DBMS_SQL.NATIVE);

  DBMS_SQL.BIND_VARIABLE(curid, 'P_STATUS', 'VALID');
  ret := DBMS_SQL.EXECUTE_and_fetch(curid);

  DBMS_SQL.PARSE(curid, sql_stmt, DBMS_SQL.NATIVE);
  DBMS_SQL.BIND_VARIABLE(curid, 'P_STATUS', 'INVALID');
  for i in 1..5 loop
    ret := DBMS_SQL.EXECUTE_and_fetch(curid);
  end loop;

DBMS_SQL.close_cursor(curid);
end;
/

The v$sql view has two interesting columns:
IS_BIND_SENSITIVE shows cursors where the execution plan can evolve.
IS_BIND_AWARE shows child cursors where a new plan was created, meaning that the cursor has evolved.

select is_bind_sensitive, is_bind_aware, sql_id, child_number, sql_text
from v$sql
where upper(sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'
and sql_text not like '%v$sql%'
;
IS_BIND_SENSITIVE IS_BIND_AWARE SQL_ID        CHILD_NUMBER SQL_TEXT
Y                 N             7rjdcm7v7hfrs 0            select count(*) from demo_big where status = :P_ENTER_VALID
Y                 Y             7rjdcm7v7hfrs 1            select count(*) from demo_big where status = :P_ENTER_VALID
N                 N             3kpu54a461gkm 0            select count(*) from demo_big where status = :P_STATUS
N                 N             3kpu54a461gkm 1            select count(*) from demo_big where status = :P_STATUS
Y                 N             5zkmtfj331xmc 0            select count(*) from demo_big where status = :P_ENTER_INVALID
N                 N             fjjm63y7c6puq 0            select count(*) from demo_big where status = :P_STATUS2
N                 N             1qx03gdh8712m 0            select count(*) from demo_big where status = 'INVALID'
N                 N             2jm3371mug58t 0            select count(*) from demo_big where status = 'VALID'

Let’s have a closer look at the two child cursors of the DBMS_SQL statement.

-- find the cursor id
select sql_id, child_number, bucket_id, count, is_bind_sensitive, is_bind_aware, sql_text
from v$sql s
left join v$sql_cs_histogram h using (sql_id, child_number)
where upper(s.sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'
and s.sql_text not like '%v$sql%'
;

-- check the execution plan for both child cursors
select * from table(dbms_xplan.display_cursor('3kpu54a461gkm',0));
select * from table(dbms_xplan.display_cursor('3kpu54a461gkm',1));

-- see the plans in the SGA
select * from v$sql_plan where sql_id = '3kpu54a461gkm';
select * from v$sql_plan where sql_id = 'fjjm63y7c6puq';

Now the strange thing is: the first child cursor is using a FULL table scan. But the first execution was done using the VALID value and should have resulted in the index range scan. And the second child cursor does not even have an execution plan!

NOTE: cannot fetch plan for SQL_ID: 3kpu54a461gkm, CHILD_NUMBER: 1
      Please verify value of SQL_ID and CHILD_NUMBER; 
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)

What is going on here? v$sql has a column EXECUTIONS which tells us how often this child cursor was executed. It is always 0 for child 1 of the DBMS_SQL cursor!
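
A quick check for that:

select sql_id, child_number, executions, parse_calls, loads
from v$sql
where sql_id = '3kpu54a461gkm';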

I did several more tests using DBMS_SQL, even a case where the cursor was closed and opened several times. All with the same result.

Interpreting the results

I’m still not exactly sure what is going on there. It seems as if bind peeking and adaptive cursor sharing do not work with DBMS_SQL. But why do we then see two child cursors? It seems as if the different parameter values at least have the effect that a new child cursor is created. And this happens only when there is a need for a different execution plan. But where is the plan for that? I still have some doubts. Maybe the execution plan in v$sql is lying in this case? Since DBMS_SQL goes deep into the internals, it might be that some of the normal behaviours are not reflected in some of the views.

The cursor itself is in the private SQL workarea and I never checked that. Another approach would be to set up a scenario where we can measure the performance difference. The test case I used was too small to see a decisive difference between the two possible plans.

Also, we have to remember that the need for DBMS_SQL is rare. A normal select with bind parameters is certainly not a case where we need dynamic SQL. A more typical case would be a cursor/statement where we do not know at compile time which columns are returned. Then we can use DBMS_SQL to analyse the structure of such a cursor and react on that.
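
A minimal sketch of that use case, reading the column structure of a cursor at runtime with DBMS_SQL.DESCRIBE_COLUMNS:

declare
  c       integer := dbms_sql.open_cursor;
  colcnt  integer;
  desctab dbms_sql.desc_tab;
begin
  dbms_sql.parse(c, 'select * from demo_big', dbms_sql.native);
  -- describe the columns of the parsed statement
  dbms_sql.describe_columns(c, colcnt, desctab);
  for i in 1..colcnt loop
    dbms_output.put_line(desctab(i).col_name||' (type code '||desctab(i).col_type||')');
  end loop;
  dbms_sql.close_cursor(c);
end;
/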

However, if we build some kind of dynamic framework and think about using DBMS_SQL, we should rethink our strategy. Maybe it is easier to provide all the possible cases as plsql APIs, thereby compiling at creation time, instead of building the statement in a completely dynamic fashion but suffering some essential drawbacks.

Recommendations

1) Avoid DBMS_SQL; consider using native dynamic SQL (execute immediate) instead (see the sketch after this list).
2) If you have a skewed data distribution, make sure your plans are bind sensitive.
3) If you can guarantee an even data distribution, consider adding the NO_BIND_AWARE hint. This should be needed only in some extreme situations (very high performance requirements or cursor cache issues).
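
Recommendation 1 as a minimal sketch: the count query from the DBMS_SQL test, rewritten with native dynamic SQL.

declare
  v_cnt number;
begin
  -- native dynamic SQL with a bind parameter
  execute immediate
    'select count(*) from demo_big where status = :P_STATUS'
    into v_cnt using 'VALID';
  dbms_output.put_line(v_cnt);
end;
/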

Appendix

The function that I used previously:

create or replace function hist_numtochar2(p_num number
,p_trunc varchar2 :='Y') return varchar2
-- Author: Martin Widlake
-- Source: https://mwidlake.wordpress.com/2009/08/11/
is
  m_vc varchar2(15);
  m_n number :=0;
  m_n1 number;
  m_loop number :=7;
begin
  m_n :=p_num;
  if length(to_char(m_n))>36 then
    --dbms_output.put_line ('input too short');
    m_vc:='num format err';
  else
    if p_trunc !='Y' then
      m_loop :=15;
    else
      m_n:=m_n+power(256,9);
    end if;
    --dbms_output.put_line(to_char(m_N,'999,999,999,999,999,999,999,999,999,999,999,999'));
    for i in 1..m_loop loop
      m_n1:=trunc(m_n/(power(256,15-i)));
      --    dbms_output.put_line(to_char(m_n1));
      if m_n1!=0 then m_vc:=m_vc||chr(m_n1);
      end if;
      dbms_output.put_line(m_vc);
      m_n:=m_n-(m_n1*power(256,15-i));
    end loop;
  end if;
  return m_vc;
end;
/