is {JSON}, or not is {JSON}, that is the question

Whether ’tis nobler in the mind to suffer
The [] and “” of outrageous json,
Or to take Arms against a Sea of troubles,
And by opposing {} end them: to select, to browse
No more; and by a browse, to say we end
the json array, and the thousand json objects
that Flesh is heir to? ‘Tis a consummation
devoutly to be wished. To select, to browse…

Intro

So I have this logging table where sometimes the message can be a json document. Most of the time it is an error message or some tracing information. But sometimes I want to show what data is currently inside a plsql collection. I wrote a small conversion function that returns a clob which should contain JSON. This json ends up as a message in my logging table.

Inside an apex application I added the possibility to show this JSON on a modal page. I use the JSONView plugin; however, most browsers now have a default JSON display capability.

I detect if the message contains json data by using the 12.1 IS JSON operator.

In case it is json, I provide a link to the modal page and print the data there.

case 
  when LOG_MESSAGE is json 
  then
    '<a href="'||
    apex_util.prepare_url('f?p='||v('APP_ID')||':123:'||v('APP_SESSION')
    ||'::::P123_LOG_ID:'||to_char(LOG_ID))
    ||'" title="show JSON data">{JSON}</a>'
else 
  '<pre>'||LOG_MESSAGE||'</pre>'
end as log_message_enhanced

 
The modal page uses the application/json mime type.

The result looks similar to this:

 
[Screenshot: the json document rendered on the modal page]
 

How to build such an apex page or how to convert a plsql collection to json is not part of this blog post. If you are interested in that, please leave a comment and I might consider publishing it.

Problem

For certain documents the browser was not able to show the json. Instead it returned an error message and the json in text format. Here is an example.
[Screenshot: the browser’s json parse error on the modal page]

Sorry for the German message, but I was not able to switch my Firefox to English easily. The error essentially says: “We could not parse the json; it seems the document does not comply with the specification.”

So the database says it is json, but the browser says it is not!

What is going on? That is the question.
 

Solution

The reason for this behaviour is that Oracle’s IS JSON check uses the lax json syntax (by default). Lax json allows several things; among others it allows a trailing comma after the last element of an object or array. Exactly my issue.

Here is a basic SQL demonstration. Note the comma after “Larry” inside the json object.

select * from dual
where '[{"index":1,"name":"Larry",}]' 
      is json;
DUMMY
-----
X

So it is JSON. But only lax json.

Fortunately we can also check against the stricter json interpretation.

select * from dual
where '[{"index":1,"name":"Larry",}]' 
      is json (STRICT);
No rows selected.

Can you spot the difference? The STRICT keyword, including the parentheses, tells the database that the document needs to conform to the stricter specification.

This is of course documented: About Strict and Lax JSON Syntax

The main differences are

  • STRICT: each JSON field and each string value must be enclosed in double quotation marks (“).
    LAX: An object literal can also be enclosed in single quotation marks (‘).
  • LAX: Case variations for keywords true, false, and null (for example, TRUE, True, TrUe, fALSe, NulL).
  • LAX: An extra comma (,) after the last element of an array or the last member of an object (for example, [a, b, c,], {a:b, c:d,}).
  • LAX: Numerals
    • with one or more leading zeros (for example, 0042.3).
    • Fractional numerals that lack 0 before the decimal point (for example, .14 instead of 0.14).
    • Numerals with no fractional part after the decimal point (for example, 342. or 1.e27).
    • A plus sign (+) preceding a numeral, meaning that the number is non-negative (for example, +1.3).

But there are more differences.

Using the (STRICT) keyword solved my problem. Only those json documents that could be shown in the browser were linked; the others were rendered as normal text.
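Applied to the query from the intro, the condition simply becomes:

case 
  when LOG_MESSAGE is json (strict) 
  then
    '<a href="'||
    apex_util.prepare_url('f?p='||v('APP_ID')||':123:'||v('APP_SESSION')
    ||'::::P123_LOG_ID:'||to_char(LOG_ID))
    ||'" title="show JSON data">{JSON}</a>'
else 
  '<pre>'||LOG_MESSAGE||'</pre>'
end as log_message_enhanced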

Conclusion

 
Be aware that there are slightly different JSON specifications available.

If you want to show json inside a browser, then make sure the json document conforms to the strict json specification.

If you want to use json inside javascript, then probably the lax version is better suited.
 

 

10 Oracle plsql things you probably didn’t know

Many people enjoyed reading my last blog post “10 Oracle SQL features you probably didn’t know”. So I decided to spice it up a little more and do something similar for plsql.

I hope you like that one too.

Without further ado, let’s get started with the list.

10. The first Oracle version to feature plsql was Oracle DB version 6 (1988)

And no. Steven Feuerstein did NOT invent it.

At that time PLSQL had neither stored procedures nor proper exception handling. But it already had embedded SQL.

I learned that from the great Lewis Cunningham. One of the godfathers of development with SQL and PLSQL.

Stored procedures were added in Oracle 7 (1992). 7.3 was the version when I started to work with an Oracle database. At that point plsql was at version 2.x. However there never was a version 3: plsql versioning jumped to 8 when Oracle DB version 8 was introduced, and from then on it was aligned with the db version. So there are no plsql versions 3-7. But honestly? Nobody cares anymore that plsql has its own versioning.

9. labels do not need to match

We can use <<labels>> in plsql. Mostly to increase readability of code. This is especially useful for loop constructs, but it also works for normal begin..end blocks.

begin
 <<mainloop>>
 loop
   <<check_some>>
   begin 
     <<dummyloop>>
     for r in (select * from dual) loop
       -- do nothing
       null;
     end loop dummyloop;
   end check_some;
   exit when 1=1;
 end loop mainloop;
end; 
/

As we can see there are several <<labels>>. And the usage of those labels at the “end” helps to distinguish which code part we are looking at [1].

But this is only as good as the programmer is!

Unfortunately this works too:

begin
 <<mainloop>>
 loop
   <<check_some>>
   begin 
     <<dummyloop>>
     for r in (select * from dual) loop
       -- do nothing
       null;
     end loop mainloop;
   end check_sum;
   exit when 1=1;
 end loop dummyloop;
end; 
/

Here I mixed up the labels from the loops. And the “end check_sum” does not match the label at the beginning of the block. In fact the “label” at the end can be anything that is not a reserved word.

It runs identically to the previous code (still doing nothing). But it is way more confusing for the “future me” who has to maintain this mess.

8. pragma SERIALLY_REUSABLE

During the lifetime of a session, the package state (package variables, open cursors, etc.) is held in the UGA (user global area). Subsequent calls in the same session to the same package profit from that by not needing to reinitialize the package state.

The pragma SERIALLY_REUSABLE is able to change this behaviour.

serially_reusable packages

After the work unit (server call) of a SERIALLY_REUSABLE package completes, Oracle Database does the following:

  • Closes any open cursors.
  • Frees some nonreusable memory (for example, memory for collection and long VARCHAR2 variables)
  • Returns the package instantiation to the pool of reusable instantiations kept for this package.

Essentially this means, that the package state exists only during the package call. Not for the whole session.

So far I never had the need to use this pragma. But I can imagine some very, very special situations where this might become interesting.
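A minimal sketch to see the pragma in action (the package name is made up). The pragma has to appear in both spec and body; each server call then starts with a fresh package state:

create or replace package demo_sr is
  pragma serially_reusable;
  g_counter number := 0;  -- package state
  procedure increase;
end demo_sr;
/

create or replace package body demo_sr is
  pragma serially_reusable;
  procedure increase is
  begin
    g_counter := g_counter + 1;
    dbms_output.put_line('counter = '||g_counter);
  end increase;
end demo_sr;
/

set serveroutput on
execute demo_sr.increase;  -- counter = 1
execute demo_sr.increase;  -- counter = 1 again; without the pragma it would be 2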

7. You can compile a package even while another session is running it

In general this is not possible. Assume a package is currently running; or to say it in technical terms: we have an active session (Session A) executing a packaged procedure.

A second session (Session B) trying to do an “ALTER PACKAGE … COMPILE (BODY)” would wait until Session A finishes and will then try to compile the package. Upon success, Session A will get a “package state has been discarded” error message as soon as it tries to run the same package once again. The next try after that would succeed and use the new package version. At that time the package state was discarded from the session and the new instantiation can be loaded.

This is all documented and well known behaviour.

We face three potential issues with this

  1. Session B needs to wait
  2. Session A might get an error
  3. Session B might wait so long that the developer decides to kill the client (closing SQL Developer), thereby making everything 10 times worse, because the compile call is still active on the database session level – blocking all following attempts to run or compile the package.

EBR to the rescue!

Using Edition Based Redefinition (EBR) we can circumvent those issues.

Both sessions just need to use different editions. EBR allows us to store and run different code versions of the same plsql based object in the same database.

 

preparation

First create a package with a long running procedure. My example uses a procedure that runs for exactly 1 minute.


create or replace package myPck is
  procedure runMinute;
end myPck;
/

create or replace package body myPck is
   procedure runMinute is
   begin
     sys.dbms_lock.sleep(60);
   end runMinute;
end myPck;
/

 

Setup an edition DEV$ALPHA that is a child of the default edition (ORA$BASE).


create edition DEV$ALPHA;

 

You need an edition-enabled schema to do this.

This is simple to do, but to explain EBR in more detail is beyond the scope of this blog post.
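For completeness, a minimal sketch of the one-time setup (run as a DBA; the schema name is an assumption):

alter user demo_schema enable editions;
grant use on edition DEV$ALPHA to demo_schema;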

Example scenario

Let’s run a few commands in two different sessions.

Session A resembles a USER/TESTER who currently executes the packaged procedure.

Session B resembles a DEVELOPER who wants to deploy a new version of the package.

Session A does this

set time on

alter session set edition=ORA$BASE;

execute myPck.runMinute;

Session B was started already; after the execute in Session A, run the following script in Session B.

alter session set edition=DEV$ALPHA;

-- add a new procedure to the package

create or replace editionable package myPck …
/

create or replace editionable package body myPck …
/

-- run the new procedure

exec myPck.run5secs;
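The changed package source is elided above. A sketch of what the new version might look like (assuming run5secs simply sleeps for five seconds):

create or replace editionable package myPck is
  procedure runMinute;
  procedure run5secs;  -- new procedure, visible only in edition DEV$ALPHA
end myPck;
/

create or replace editionable package body myPck is
  procedure runMinute is
  begin
    sys.dbms_lock.sleep(60);
  end runMinute;

  procedure run5secs is
  begin
    sys.dbms_lock.sleep(5);
  end run5secs;
end myPck;
/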

The result: Session B finishes way before Session A completes its one-minute run.

[Screenshot: the two sessions running in different editions]

q.e.d.

6. call a (pipelined) table function without the TABLE operator

It works only from 12.2 onwards. It is more of a SQL feature than a plsql one.

This will make a table function look indistinguishable from a parametrized view.

Example: split_string

First let’s create a simple little table function. This one just converts a delimited list into rows.

create or replace function split_string
 (p_str IN VARCHAR2
 ,p_delimiter IN VARCHAR2 default ','
 ) RETURN sys.odcivarchar2list PIPELINED 
IS
/** Function to split strings based upon delimiter
*
* @author Sven Weller
*
* @param p_str input string 
* @param p_delimiter delimiter string, default =, Delimiter should only be 1 char.
* @return list of strings
*
*/
 v_entry varchar2(4000);
 v_remaining_str varchar2(4000);
BEGIN
  -- input string needs to hold something to be able to split
  if p_str is not null then
     <<steps>>
    for i in 1..regexp_count(p_str,'\'||p_delimiter)+1 loop
      -- search + split
      v_entry := rtrim(regexp_substr(p_str,'[^\'||p_delimiter||']*('||p_delimiter||'|$)',1,i),p_delimiter);
      pipe row(v_entry);
    end loop steps; 
  else raise no_data_found; 
  end if; 

END split_string;
/

 

function created.

In 11g we call the function like this:

select * from TABLE(split_string('A:BB::CCC',':'));

In 12.2 we can now call it like that:

select * from split_string('A:BB::CCC',':');

As you can see the TABLE row source operator is gone. And it still works! The results of both statements are identical.

COLUMN_VALUE
A
BB
CCC

Want to test it? I made an example on livesql.com.

At the moment this is an undocumented 12.2 feature. So don’t use it for production code (yet). I quite like it. Less code is better! It might become a de-facto standard (similar to connect by level) and eventually make it into the documentation.

5. dot notation for parameters

We can refer to parameters using the name of the module that declared them. This is useful when we need to distinguish a parameter from a column name.

example


create or replace function myFancyFunc (dummy in varchar2) return number
is
  ret number := 0;
begin
  begin
    select 1 into ret
    from dual
    where dummy = myFancyFunc.dummy
    and rownum = 1;
  exception
    when no_data_found then null;
  end;
  return ret;
end myFancyFunc;
/

Function MYFANCYFUNC compiled

select myFancyFunc('X') from dual;

1

select myFancyFunc('Y') from dual;

0

The function simply compares the value in the dummy column of the dual table to the value we input. If instead we just compared dummy=dummy, then we would always get 1 as a result, no matter what the input is. Even if we add an alias to the table and prefix the column with the alias, the non-aliased “dummy” will still be interpreted as a column.

This behaviour is documented: Oracle Doc 12.1 – plsql name resolution

If a SQL statement references a name that belongs to both a column
and either a local variable or formal parameter,
then the column name takes precedence.

Interestingly we can also use labels at block level to qualify variables that are declared in that block.

set serveroutput on
<<main>>
declare
  dummy varchar2(10) := 'Y';
begin
  <<block1>>
  declare
    dummy varchar2(10) := 'X';
  begin
  <<block2>>
    declare
    dummy varchar2(10) := 'A';
    begin
      select dummy
      into dummy
      from dual
      where dummy = block1.dummy;

      dbms_output.put_line('MainBlock:'||main.dummy);
      dbms_output.put_line('Block1:'||block1.dummy);
      dbms_output.put_line('Block2:'||block2.dummy);
    end block2;
  end block1;
end main;
/

PL/SQL procedure successfully completed.

MainBlock:Y
Block1:X
Block2:X

Without dot notation the innermost variable (block2) is used – as we can see in the INTO part. And we can reference a different variable with the same name from a “higher” declaration by using the dot notation.

 

4. variable names can be emojis

example

set serveroutput on
declare
  "💩"exception;
  pragma exception_init("💩",-20001);

  "⌚" timestamp := systimestamp;
  "🕑"interval day to second;
  "🎲"number;
  "💤"number := 2;
begin
  "🎲":= round(dbms_random.value(1,6));
  for "🔜"in 1.."🎲"loop
    dbms_lock.sleep("💤");
  end loop;
  "🕑":= systimestamp - "⌚" ;
  dbms_output.put_line('Slept for '|| "🕑");
exception
  when "💩"then
    dbms_output.put_line('Sorry something bad happend!');
    raise "💩";
end;
/

PL/SQL procedure successfully completed.

Slept for +00 00:00:08.049000

The source code looks a little bit different in SQL Developer. But trust me. I simply copy&pasted it from there to here.

[Screenshot: the emoji source code as displayed in SQL Developer]

To make this work you need to use a font that supports emojis/symbols. I used the font “Segoe UI Symbol”. It is supposed to look better on Windows 10[3].

If you are a hardcore emoji lover then I suggest having a look at emojicode.org

It is an emoji-based programming language. Which did not make it into the esoteric programming languages list (yet). Ook? Ook!

 

3. variables can be made mandatory (NOT NULL)

Check out the NOT NULL keyword during the variable declaration.

declare
  v_index number not null := 0;
begin
  v_index := 1;
  v_index := null;
end;
/

Error report –
ORA-06550: line 5, column 14: PLS-00382: expression is of wrong type
ORA-06550: line 5, column 3: PL/SQL: Statement ignored

The error message is a bit vague about what happened, but it is very exact where it happened (line 5, column 14). And what do we see there? A NULL expression.

The expression is of the wrong type, because we added a NOT NULL constraint to the number type that was used. For more complex cases we can create our own subtypes and use them, as sketched below. But if we just want to make sure that we do not need to consider null cases in further code, then this is a possible way.
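A minimal sketch of such a subtype (the names are made up):

declare
  subtype counter_t is number not null;  -- reusable NOT NULL constraint
  v_hits counter_t := 0;                 -- a NOT NULL variable must be initialized
begin
  v_hits := v_hits + 1;  -- fine
  -- v_hits := null;     -- would fail to compile with PLS-00382
end;
/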

Link to plsql documentation

Currently there are no other such constraints that we can use. I could imagine that with the potential arrival of SQL assertions, this might become a hot topic in plsql too.

2. you can “hack” dbms_output

Warning! This is dangerous. It might break some (poorly written) code that resides in the same schema. Do it at your own risk! It is also hilariously funny to do on April Fools’ Day to your fellow coworkers. I mean, they shouldn’t use dbms_output anyway. That will teach them!

I start the example by showing the behaviour first. Then the code to produce this result.

behaviour


create or replace procedure doSomething is
  v_dummy dual.dummy%type;
begin
  select dummy into v_dummy from dual where 1=2;
exception
  when others then
    dbms_output.put_line(sqlerrm);
end doSomething;
/

Now we run the module a couple of times and want to see the output. We should expect a NO_DATA_FOUND error message.

set serveroutput on

execute doSomething;
execute doSomething;
execute doSomething;
execute doSomething;
execute doSomething;

Surprisingly instead of the error message we get something like this.

PL/SQL procedure successfully completed.

Wrong usage of DBMS_OUTPUT detected.

PL/SQL procedure successfully completed.

Make Databases Great Again!

PL/SQL procedure successfully completed.

ORA-01403: no data found

PL/SQL procedure successfully completed.

Make Databases Great Again!

PL/SQL procedure successfully completed.

So tell me what you want, what you really, really want

Omg! What is going on here?

Well here is the catch. We can “overload” dbms_output in our own schema. Then our package is called and not the original package from sys.

source code

create or replace package dbms_output
as
  procedure enable(BUFFER_SIZE number default null);
  procedure put_line(A in varchar2);
  procedure GET_LINE(LINE out VARCHAR2,STATUS out integer);
  procedure GET_LINES(LINES out sys.dbms_output.CHARARR,NUMLINES in out NUMBER);
  procedure GET_LINES(LINES out sys.DBMSOUTPUT_LINESARRAY,NUMLINES in out NUMBER);
end dbms_output ;
/

create or replace package body dbms_output
as
procedure enable(BUFFER_SIZE number default null) is
begin
  sys.dbms_output.enable(BUFFER_SIZE);
end;

function getRandomQuote (A in varchar2) return varchar2
is
  type quotes_t is table of varchar2(4000) index by binary_integer;
  v_quotes quotes_t;
  v_random binary_integer;
begin
  v_quotes(1) := 'You are hacked by the Chinese';
  v_quotes(2) := 'Wrong usage of DBMS_OUTPUT detected.';
  v_quotes(3) := 'System failure. Get away from keyboard';
  v_quotes(4) := 'Close all windows! NOW!';
  v_quotes(5) := 'Make Databases Great Again!';
  v_quotes(6) := A; -- sometimes return the correct text
  v_quotes(7) := A; -- sometimes return the correct text
  v_quotes(8) := 'So tell me what you want, what you really, really want';
  v_quotes(9) := 'None but ourselves can free our minds.';
  v_quotes(10) := 'Let there be light!';
  v_random := round(dbms_random.value(1,v_quotes.last));

  return v_quotes(v_random);

end getRandomQuote;

procedure put_line(A in varchar2) is
begin
  sys.dbms_output.put_line(getRandomQuote(A));
end;

procedure GET_LINE(LINE out VARCHAR2,STATUS out integer)
is
begin
  sys.dbms_output.GET_LINE(LINE,STATUS);
end;

procedure GET_LINES(LINES out sys.dbms_output.CHARARR,NUMLINES in out NUMBER)
is
begin
  sys.dbms_output.GET_LINES(LINES,NUMLINES);
end;

procedure GET_LINES(LINES out sys.DBMSOUTPUT_LINESARRAY,NUMLINES in out NUMBER)
is
begin
  sys.dbms_output.GET_LINES(LINES,NUMLINES);
end;
end dbms_output ;
/

How does it work?

Because of how SQL name resolution works, the DBMS_OUTPUT package in our schema is used, and not the public synonym for the DBMS_OUTPUT package from the sys schema.

The put_line procedure pushes the changed text into the normal buffer mechanism; the get_line and get_lines procedures simply delegate to sys.dbms_output, so the client can fetch the buffer as usual.

How can we avoid it?

Best is not to use DBMS_OUTPUT in real production code. It is a nice quick debugging tool. But not more than that.

Also, if you always prefix dbms_output with the SYS schema, then the original logic is called.
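For example:

begin
  sys.dbms_output.put_line('this always reaches the original package');
end;
/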
 

1. when others does not catch all exceptions

example

set serveroutput on 
declare
  e_cancelled exception;
  pragma exception_init(e_cancelled, -1013);
begin
  begin
    raise e_cancelled;
  exception
    when others then
      dbms_output.put_line('EXCEPTION OTHERS');
  end;
end;
/

ORA-01013: user requested cancel of current operation
ORA-06512: at line 6

We still see an exception, but not the dbms buffer output!

This needs some explanation.

There is a very limited set of exceptions that will not be captured by the WHEN OTHERS handler. We need to look closely and understand the exception itself to comprehend why this is a good thing.

Here the ORA-01013 is the “user requested cancel of current operation” exception. Essentially it means somebody pressed “CTRL+C” while running the code. In almost all environments this means: stop doing whatever you do immediately! Or in more technical terms: it is an interrupt to the os process running your command. Same as executing “kill -2” (kill -SIGINT) in a *nix environment (the-3-most-important-kill-signals-on-the-linux-unix-command-line). Even if the process is allowed to ignore the command, it shouldn’t do so by default.

ORA-01013 can sometimes also be the result of a timeout. Where the client is waiting for a response and after some time sends this as a timeout signal to the database session.

We are allowed to capture this exception and write a special handler for it.

set serveroutput on
declare
  e_cancelled exception;
  pragma exception_init(e_cancelled, -1013);
begin
  begin
    raise e_cancelled;
  exception
    when others then
      dbms_output.put_line('EXCEPTION OTHERS');
  end;
exception 
  when e_cancelled then
    dbms_output.put_line('OPERATION CANCELED');
END;
/

PL/SQL procedure successfully completed.

OPERATION CANCELED

Nothing to worry about. Just nice to know.

Please note: This example will behave differently in older outdated db versions. I think it was introduced as a fix for bug#12838063 in 11.2.0.4.

Other exceptions that are not handled include “ORA-03113: end-of-file on communication channel”.

But not “ORA-06508: PL/SQL: could not find program unit being called”. This one was supposed to bypass “when others” as well, but testing on 12.2.0.1 revealed that it is captured.

 

 

Footnotes


1. This is probably the only bug free code I ever wrote. It was meant to do nothing and it does that exceptionally well![2]
2. There might be room for some performance improvement. Allowing us to do nothing even faster.
3. On windows 10 💩 is supposed to look like 🔝💩.

 

 

 

10 Oracle SQL features you probably didn’t know

I cannot guarantee that you have never heard of these. But the majority of developers doesn’t know or doesn’t seem to know. And honestly – most of this information is fairly useless – and mostly harmless.

10. Why we have the CONCAT function

There is a CONCAT function that can be used instead of the concatenation operator ||.

Nobody uses that! But there is a reason why it exists.

This is from the 8.1.7 Oracle documentation:

On most platforms, the concatenation operator is two solid vertical bars, as shown in Table 3-3. However, some IBM platforms use broken vertical bars for this operator. When moving SQL script files between systems having different character sets, such as between ASCII and EBCDIC, vertical bars might not be translated into the vertical bar required by the target Oracle environment. Oracle provides the CONCAT character function as an alternative to the vertical bar operator for cases when it is difficult or impossible to control translation performed by operating system or network utilities. Use this function in applications that will be moved between environments with differing character sets.

So we have this function because in the past || was not consistent across all the different platforms using different character sets. Since nobody seems to use IBM EBCDIC anymore, there is no real need to use the CONCAT function.
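For example, these two statements are equivalent (note that CONCAT takes exactly two arguments, so longer chains need nesting):

select concat('ABC','DEF') from dual;  -- ABCDEF
select 'ABC'||'DEF' from dual;         -- ABCDEF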

9. select UNIQUE

Instead of doing a SELECT DISTINCT you can do SELECT UNIQUE.

But you shouldn’t. None of them. Forget I mentioned this.

Ok here is an example.

Example

select unique e.deptno from scott.emp e;

DEPTNO
-----
30
20
10

It is also possible to do SELECT COUNT(UNIQUE …) . Not sure when that one was introduced, but it seems to work now.

Sven says: “SELECT DISTINCT|UNIQUE should be considered a bug in real production code.”

I have yet to find an example where SELECT DISTINCT is needed. More likely there is a bug in the data model or missing joins in the where clause. GROUP BY or sometimes EXISTS are the better long term alternatives.

Using SELECT DISTINCT is absolutely fine for developer ad-hoc queries.
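A sketch of the EXISTS alternative, using the scott demo schema that also appears in later examples. Instead of doing a distinct over a join, we ask the actual question:

-- which departments have at least one employee?
select d.deptno
from scott.dept d
where exists (select null
              from scott.emp e
              where e.deptno = d.deptno);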

I feel this whole concept is a bit inconsistent. 12c introduced the new approximation function APPROX_COUNT_DISTINCT, but there is no APPROX_COUNT_UNIQUE!

8. there is an ANSI datatype DATE

The ANSI datatype DATE does not hold any time information (no hours, minutes, seconds). That is what the ANSI datatype TIME is for. Oracle does not officially know these two datatypes.

However:

EXTRACT function

Purpose

EXTRACT extracts and returns the value of a specified datetime field from a datetime or interval expression. The expr can be any expression that evaluates to a datetime or interval data type compatible with the requested field:

  • If YEAR or MONTH is requested, then expr must evaluate to an expression of data type DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, or INTERVAL YEAR TO MONTH.
  • If DAY is requested, then expr must evaluate to an expression of data type DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, or INTERVAL DAY TO SECOND.
  • If HOUR, MINUTE, or SECOND is requested, then expr must evaluate to an expression of data type TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, or INTERVAL DAY TO SECOND. DATE is not valid here, because Oracle Database treats it as ANSI DATE data type, which has no time fields.
  • If TIMEZONE_HOUR, TIMEZONE_MINUTE, TIMEZONE_ABBR, TIMEZONE_REGION, or TIMEZONE_OFFSET is requested, then expr must evaluate to an expression of data type TIMESTAMP WITH TIME ZONE or TIMESTAMP WITH LOCAL TIME ZONE.

So essentially the doc says that EXTRACT treats its DATE input as an ANSI DATE.

Proof

select extract(day from sysdate) from dual;
21

select extract(second from sysdate) from dual;
ORA-30076: invalid extract field for extract source

select extract(second from systimestamp) from dual;
4.823

The error appears because sysdate is treated as an ANSI DATE, which does not hold time information.

 

The problem is: we just cannot use this ANSI date type for normal activities.

An alternative for the ANSI date: TRUNC(datetime). In 12c also as a virtual column.

Problem solved. ✔
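A small sketch of the virtual column variant (table and column names are made up):

create table meeting_demo
( start_time date
, start_day  date generated always as (trunc(start_time)) virtual
  -- behaves like an ANSI DATE: the time part is always 00:00:00
);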

 

7. group by ()

You can group by an empty parenthesis. Or let’s call it group by the FULL SET. This is in fact useful in some very specific cases.

It gives us the ability to get a NO_DATA_FOUND error if there is no data at all. Oracle guarantees that an aggregation select without a group by always returns exactly one record. GROUP BY () can be added if you want to change this default behaviour on purpose.

Example

Let’s say we want to count how many employees are in one specific department.

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'OPERATIONS';

EMP#
----
0

Since there are no employees in operations, this returns 0. This is correct and the information I want.

More specifically this query returns one row, but the e.empno is null and therefore is not counted. Count(*) would have returned 1!

But if we use a department name that does not even exist, then this will also return 0.

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'FANCY_CLOUD_SPACE';

EMP#
----
0

Not correct! In this case I prefer to get a no data found error.

Grouping on the full set gives us this option.

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'FANCY_CLOUD_SPACE'
group by ();

no rows selected.

Btw: the SQL Developer syntax (pre-)parser flags that () as a syntax error. But it will execute successfully.

I have to admit that in this case it would be much better to group by the department. But maybe we want the counts for multiple departments using an IN list; then grouping on department level would not help much.

 

Btw: GROUP BY NULL does the same thing, and might be less confusing for some developers. I personally prefer GROUP BY (), because the () is also used in GROUPING SETS.

 

6. + vs. -

In some edge cases using a different operator (+|-) leads to disturbing results.

Example: Subtracting a tiny number from 1 does not equal 1. But adding a tiny number to 1 does equal 1!

select * from dual
where 1 = 1-0.000000000000000000000000000000000000001;

No data found!

This is expected behaviour…

select * from dual
where 1 = 1+0.000000000000000000000000000000000000001;

DUMMY
-----
X

Wat? Why do we get a result?!

The solution is simple. We overstepped the maximum precision of the number datatype when doing the addition. Max precision is 38 digits. The addition would require a precision of 39. Instead the result is rounded (or truncated) to a precision of 38 digits, which happens to equal 1. The subtraction result is still within the 38-digit precision range and therefore exact (very slightly less than 1).

Don’t confuse precision with maximum value! Precision is the number of different digits that we can store. This can then be moved by an exponent to the left or right for very large or very small values.

 

To put this into perspective

That precision is so extremely high that we have major trouble visualizing it. So here is a comparison. The comparison data comes from this hugely interesting site: http://money.visualcapitalist.com/all-of-the-worlds-money-and-markets-in-one-visualization/

The complete monetary assets in the whole world are estimated at about 80 trillion $ (broad money = coins, banknotes, savings accounts, time deposits, …). That is 80,000,000,000,000, an 8 with 13 zeros at the end, or 8e13 in scientific exponential notation.

select 8e13 from dual;

Considering all the national and private debts the number is even higher. Here we reach about 200 trillion $.

And if we consider derivatives (= high-risk gambles) then the high estimations go as far as 1.2 quadrillion $. That is a 1 followed by 24 more digits (mostly zeros) or 1.2e24.

Since all this is expressed in dollars, we can increase the number even further by choosing a different currency. For example the Nepalese Rupee is worth a little less than 0.01 dollar. So all the world’s derivatives could be expressed in Nepalese Rupees using a number as big as 1.2e26.

That means using a number with a precision of 38 we can easily express all the money in the world, including derivatives, down to the very last Nepalese Rupee. And we still have only used two thirds of the maximum available precision. Or to say it differently: if we duplicated planet Earth 1,000,000,000,000 (1 trillion) times, we could still give the exact amount of money on all planets in Nepalese Rupees, down to the very last Rupee, using the number datatype.

That’s quite impressive!

Btw. the biggest number that can be expressed using the NUMBER datatype is 9.99…9 × 10^125 (38 nines). Which is way, way bigger than anything we discussed so far.

 

5. instead of IN we can use = ANY

select 'TRUE'
from dual
where 1 = ANY (1,2,3,4,5);

TRUE

There are several similar unusual “replacement” options.

e.g.

  • instead of != we can also use ^=.
  • NOT IN is the same as != ALL.
  • SOME is a synonym for ANY – but only in SQL!
  • x <= LEAST(…) and x >= GREATEST(…) can be replaced by x <= ALL (…) and x >= ALL (…).
  • x >= LEAST(…) and x <= GREATEST(…) can be replaced by x >= SOME (…) and x <= SOME (…).
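A few of these alternatives in action; each of the following queries returns TRUE:

select 'TRUE' from dual where 1 ^= 2;            -- same as 1 != 2
select 'TRUE' from dual where 1 != ALL (2,3,4);  -- same as 1 not in (2,3,4)
select 'TRUE' from dual where 1 = SOME (1,2,3);  -- same as 1 in (1,2,3)
select 'TRUE' from dual where 1 <= ALL (1,2,3);  -- same as 1 <= least(1,2,3)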

I expect that SOME people do know about these operators, since questions about them are part of the SQL expert exam. So from ALL experts who took the exam at LEAST the few who passed should know SOMEthing about ANY.

The last replacement differs in how NULL values are treated. In theory there could be cases where this comes in handy; I so far never had such an encounter.

Example

with testdata
as (select 1 a, 2 b, 3 c, 4 d from dual union all
   select 5 a, 4 b, 3 c, 2 d from dual union all
   select 1 a, null b, 2 c, null d from dual union all
   select null a, null b, null c, null d from dual)
select * from testdata
where 3 >= LEAST(a,b,c,d);

 

A B C D
-------
1 2 3 4
5 4 3 2

 

with testdata
as (select 1 a, 2 b, 3 c, 4 d from dual union all
    select 5 a, 4 b, 3 c, 2 d from dual union all
    select 1 a, null b, 2 c, null d from dual union all
    select null a, null b, null c, null d from dual)
select * from testdata
where 3 >= ANY(a,b,c,d);

 

A B C D
-------
1 2 3 4
5 4 3 2
1 - 2 -

4. external table on multiple files

External tables can load multiple files at once! As long as they have the same structure, of course. So here is an example.

CREATE TABLE EXT_DUMMY
(
    "RECORDTYPE" VARCHAR2(100 BYTE), 
    "COL1" VARCHAR2(100 BYTE), 
    "COL2" VARCHAR2(100 BYTE) 
)
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_LOADER DEFAULT DIRECTORY "IMPORT_BAD_FILE" 
    ACCESS PARAMETERS ( 
             records delimited BY newline 
             FIELDS TERMINATED BY ';' 
             MISSING FIELD VALUES ARE NULL 
               ( RECORDTYPE CHAR
               , COL1 CHAR 
               , COL2 CHAR 
               ) 
    ) 
    LOCATION ( 'Testfile1.txt', 'Testfile2.txt' )
)
    reject limit 10
;

When we then do a select on the external table, we will see the data from both files.

We can also switch the external table to a different file.

alter table EXT_DUMMY location ('Testfile3.txt');

 

For the experts: We can use the ROWID to find out which file was used to load a specific record. Special thanks go to OTN-forum member Odie_63 who found this solution. More info in this old OTN forum thread.

with ext_loc as (
      select position-1 as pos
           , name as filename
      from sys.external_location$
      where obj# = ( select object_id
                     from user_objects
                     where object_name = 'EXT_DUMMY' )
    )
select x.filename,
       t.*
from EXT_DUMMY t
join ext_loc x 
on x.pos = to_number(regexp_substr(dump(t.rowid,10,9,1),'\d+$'))
;

It’s a very clever piece of SQL. Essentially it extracts the file number from the rowid, looks up the corresponding file name in the data dictionary and combines that with our data set.

This select was done in a 10g database. In 12c we can probably use dbms_rowid to do the same, instead of regexp_substr(dump(rowid)).

3. insert default values

We can insert into a table using the “default” keyword to force default behaviour for this column.

Example

-- setup
create table swe_default_test 
(col1 varchar2(10) not null,
 col2 varchar2(10) default 'TEST' not null
 );
Table SWE_DEFAULT_TEST created.

-- test
insert into swe_default_test (col1) values ('X');
1 row inserted.

insert into swe_default_test (col1,col2) values ('X',null);
ORA-01400: cannot insert NULL into ("MYUSER"."SWE_DEFAULT_TEST"."COL2")

insert into swe_default_test (col1,col2) values ('X',default);
1 row inserted.

-- double check
select * from swe_default_test;
COL1 COL2 
---------- ----------
X TEST 
X TEST 

-- cleanup
drop table swe_default_test;
Table SWE_DEFAULT_TEST dropped.

 

In 12c we now have a new and, I think, better option to do the same. The default column can additionally be defined with “ON NULL”. This has the effect that inserting a NULL value will lead to using the default value instead. Of course the “default” keyword still works too.

-- setup
create table swe_default_test12c
(col1 varchar2(10) not null,
 col2 varchar2(10) default on null 'TEST' not null
 );
Table SWE_DEFAULT_TEST12C created.

-- test
insert into swe_default_test12c (col1,col2) values ('X',null);
1 row inserted.

insert into swe_default_test12c (col1,col2) values ('X',default);
1 row inserted.

-- doublecheck
select * from swe_default_test12c;
COL1 COL2 
---------- ----------
X TEST 
X TEST 

-- cleanup
drop table swe_default_test12c;
Table SWE_DEFAULT_TEST12C dropped.

As we can see, both cases now work: the one using a NULL value and also the one using the DEFAULT keyword.

This new 12c “default on null” feature can be used to replace the typical BEFORE ROW INSERT trigger, as sketched below. More info on how to do this in Sequence and Audit columns with Apex 5 and 12c
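A sketch of that trigger replacement (assuming a sequence demo_seq already exists):

create table swe_audit_demo
( id         number       default on null demo_seq.nextval primary key
, created_by varchar2(30) default on null user not null
);

-- both inserts get id and created_by filled automatically
insert into swe_audit_demo (id, created_by) values (null, null);
insert into swe_audit_demo (id, created_by) values (default, default);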

 

2. (1,2) = ((1,2))

We can compare expression lists using the = operator. But the right side of the comparison needs an extra set of parentheses.

select 'TRUE' checked from dual
 where (1,2,3,4,5) = ((1,2,3,4,5));
CHECKED
-------
TRUE

This is just a shorthand form of

1=1 and 2=2 and 3=3 and 4=4 and 5=5

If we do not use the second set of parentheses on the right-hand side of the comparison, then we get an error.

select 'TRUE' checked from dual
 where (1,2,3,4,5) = (1,2,3,4,5) ;
ORA-00920: invalid relational operator

This is documented. Although a tiny bit difficult to read in the syntax diagrams.

See: SQL Reference – Simple Comparison

[Syntax diagrams: simple comparison conditions with expression lists]

 

1. DUAL is a real table

Although there are some special optimizations in various tools, dual is a real table residing in the sys schema.

select owner, table_name
 from dba_tables
 where table_name = 'DUAL';
OWNER TABLE_NAME
----------------
SYS DUAL

 

In older database versions it was possible to do an insert into the DUAL table.

Warning! This is extremely dangerous. It will probably break ALL applications in your database.

insert into dual values ('Y');
select mysequence.nextval into v_id from dual;

TOO_MANY_ROWS error!

And nobody captures the TOO_MANY_ROWS exception for selects from dual.

Btw: A working safety mechanism is to add rownum = 1 to your query.

select mysequence.nextval into v_id from dual where rownum = 1;

Fortunately manipulating DUAL does not work anymore in recent versions of the database. I think 12c introduced the restriction. Not even DBAs have the privileges anymore to do this.

ORA-01031: insufficient privileges
*Cause: An attempt was made to perform a database operation without
the necessary privileges.

0. The Oracle SQL Syntax is documented!

Bam! There you have it! You wouldn’t have thought that. Well, you are not alone. So many developers seem to have no clue that this exists or where it exists.

When googling, use “oracle 12c doc” or “oracle 12.2 SQL ref” in front of your search term. It helps to find the proper pages.

12.2 doc portal

12.2 SQL introduction

12.2 SQL Language reference

 

 

Special thanks go out to Stefanie R., who gave me the idea for this blog post!

 


DBA/Dev quick tip: recover lost package body under EBR

Problem

It can happen that SQL Developer suggests reloading an old package body that you are currently working on. If you click the suggested “autoupdate” then you will lose all current changes that you made to the package body. This just happened to a colleague of mine.

[Screenshot: SQL Developer alert offering to overwrite the modified package body]

It is possible to recover the old code using the flashback feature of the oracle database. But one has to be quicker than the undo retention period of the database.

If the schema was edition enabled this complicates things a tiny bit further. We need to make sure to fetch the plsql code from the correct edition that we are interested in.

When we query normal data dictionary views like DBA_SOURCE, this will always use the current edition. DBA_SOURCE_AE will show the code of all editions.

Flashback does not work well with data dictionary views. However we can use the base tables and run a flashback query there to see the old package code.

Action

Three simple statements will do it.

Set your dba session to use the correct edition (this might not be needed if the object id is found using a slightly different query). The edition needs to match the edition where the source code was originally compiled.

ALTER SESSION SET EDITION="DEV$ALPHA";

Find the object id of the package body

Use the correct name of the package and your edition name

select obj#
from sys."_CURRENT_EDITION_OBJ"
where name = 'MYPACKAGE'
and type# = 11 -- package body
and namespace = 2
and defining_edition = 'DEV$ALPHA';

Flashback query to show the source code for the identified object

select *
from sys.source$
--versions between timestamp (systimestamp - interval '25' minute) and systimestamp
as of timestamp (systimestamp - interval '25' minute)
where obj# = 1906090; -- use the obj# found in the previous step

 

Done.

 

Aftermath

The starting situation can be reproduced by changing body and spec of the same package. Once the spec is recompiled, the body needs to be recompiled too. Then SQL Developer suggests loading the new code (which is in fact the old code). If you click yes, it will overwrite the current changes.

adaptive cursor sharing and DBMS_SQL

A recent post on the OTN forum mentioned that DBMS_SQL does not use bind peeking for bind variables. I couldn’t believe that, so I decided to do some tests for myself. The findings are strange…

This is potentially relevant for Apex developers, since the Apex engine uses DBMS_SQL. I still have to do further testing to check the behaviour in Apex.

First I set up some tests to show bind peeking and adaptive cursor behaviour using normal statements in SQL*Plus or SQL Developer. After that we move to dynamic SQL, especially DBMS_SQL, and try the same again.

scenario setup

create skewed testdata

--drop table demo_big;
create table demo_big as
select level as id, 
       case when mod(level,10000)=0 
            then 'VALID' 
            else 'INVALID' 
       end as status
from dual
connect by level <= 1000000;

desc demo_big;

Name   Null Type
------ ---- -----------
ID          NUMBER
STATUS      VARCHAR2(7)

select status, count(*) 
from demo_big 
group by rollup(status);
STATUS     COUNT(*)
INVALID    999900
VALID      100
           1000000

So we have a few VALID values and a lot of INVALID ones.

Even though we have only two different values, an index will be useful on this column. The data distribution is so skewed that any access reading the VALID values would profit from an index. However if we look for the INVALID values we don’t want to use the index and instead want a full table scan.

-- create indexes on all the important columns
create unique index demo_big_id_ix on demo_big(id);
create index demo_big_status_ix on demo_big(status);

create statistical data (histograms)

First we create the statistics so that the optimizer knows what is in that table and what the data looks like.

-- create statistics and test histogram
execute dbms_stats.gather_table_stats(user, 'DEMO_BIG', method_opt=>'for all indexed columns size skewonly');

Then we check the data dictionary to see what has been created so far.
The hist_numtochar2 function is copied from Martin Widlake (Source: https://mwidlake.wordpress.com/2009/08/11/). It just helps to do a crude translation of the numerical histogram bucket endpoints. The code of the function can be found at the end of this post.

I don’t show the results from all selects, only the last one. The other selects are here just as references. They are helpful to see what kind of statistics are in place.

select table_name, num_rows, blocks, last_analyzed
from user_tables
where table_name = 'DEMO_BIG';

select table_name, column_name, num_distinct, histogram, num_buckets, sample_size 
from user_tab_columns
where table_name = 'DEMO_BIG';

select *
from user_histograms
where table_name = 'DEMO_BIG' and column_name = 'STATUS';

select table_name, column_name, endpoint_number, endpoint_value, hist_numtochar2(endpoint_value) as translated_value
from user_histograms
where table_name = 'DEMO_BIG' and column_name = 'STATUS';

Here we see a frequency histogram with two buckets for the column STATUS.

TABLE     COLUMN  ENDPOINT_NUMBER    ENDPOINT_VALUE           TRANSLATED_VALUE
DEMO_BIG  STATUS  999900     380626532452853000000000000000000000    INVALJ*
DEMO_BIG  STATUS  1000000    447861930473196000000000000000000000    VALID

The first bucket holds 999900 values where status= INVALID.
The next bucket holds 1000000-999900 = 100 where status = VALID.

This of course matches exactly what we created. So the statistical info in the dictionary is absolutely correct.

Tests

Now that our setup is in place, we can do some basic testing to see different plans.

check execution plan with LITERALS

-- test different cursor/execution plan using plain selects
select count(*) from demo_big where status = 'VALID';
select * from table(dbms_xplan.display_cursor);
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("STATUS"='VALID')
select count(*) from demo_big where status = 'INVALID';
select * from table(dbms_xplan.display_cursor);
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("STATUS"='INVALID')

Perfect! As expected one does an index access / index range scan, the other does a full table scan.

check execution plan with BIND parameters

select count(*) from demo_big where status = :P_ENTER_VALID;
select * from table(dbms_xplan.display_cursor);
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("STATUS"=:P_ENTER_VALID)
select count(*) from demo_big where status = :P_ENTER_INVALID;
select * from table(dbms_xplan.display_cursor);
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("STATUS"=:P_ENTER_INVALID)

The two statements are not identical because the names of the bind parameters are different. Because of that we get two different cursors, each with a different execution plan.
This test shows that bind peeking works. During the hard parse phase the value of the bind parameter was checked (peeked), so that the correct estimations for the resulting rows/cardinalities were made. This in turn led to the correct plan for each of the two different statements. However this first parameter “freezes” the execution plan, so that if we change the bound value, the same plan is reused.

This behaviour was enhanced in 11g with the introduction of adaptive cursor sharing and got steadily improved since then.

To test adaptive behaviour we run the first query again a few times (at least 4 times). But this time we do not pass VALID, but instead INVALID as a parameter.

After that we can see a new child cursor 1 for the sql_id “7rjdcm7v7hfrs”.

select is_bind_sensitive, is_bind_aware, sql_id, child_number, sql_text
from v$sql
where upper(sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'  and sql_text not like '%v$sql%'
;
IS_BIND    IS_BIND SQL_ID        CHILD   SQL_TEXT
_SENSITIVE _AWARE                _NUMBER
Y          N       7rjdcm7v7hfrs 0       select count(*) from demo_big where status = :P_ENTER_VALID
Y          Y       7rjdcm7v7hfrs 1       select count(*) from demo_big where status = :P_ENTER_VALID
Y          N       5zkmtfj331xmc 0       select count(*) from demo_big where status = :P_ENTER_INVALID
select * from table(dbms_xplan.display_cursor('7rjdcm7v7hfrs',0));
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("STATUS"=:P_ENTER_VALID)

select * from table(dbms_xplan.display_cursor('7rjdcm7v7hfrs',1));
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter("STATUS"=:P_ENTER_VALID)

This is adaptive behaviour. After a few bad tries a second execution plan is created for the same cursor and used. How many tries are needed? Often it changes on the third try. But it can happen that more are needed.

Test with DBMS_SQL

Now comes the more difficult part. Set up a small plsql block that uses DBMS_SQL to run the same statement again using bind parameters.

-- testcase for BIND peeking/aware using DBMS_SQL
declare
  curid    NUMBER;
  ret      INTEGER;
  sql_stmt VARCHAR2(200);
begin
  sql_stmt := 'select count(*) from demo_big where status = :P_STATUS';

  -- get cursor handle
  curid := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(curid, sql_stmt, DBMS_SQL.NATIVE);

  DBMS_SQL.BIND_VARIABLE(curid, 'P_STATUS', 'VALID');
  ret := DBMS_SQL.EXECUTE_and_fetch(curid);

  DBMS_SQL.PARSE(curid, sql_stmt, DBMS_SQL.NATIVE);
  DBMS_SQL.BIND_VARIABLE(curid, 'P_STATUS', 'INVALID');
  for i in 1..5 loop
    ret := DBMS_SQL.EXECUTE_and_fetch(curid);
  end loop;

DBMS_SQL.close_cursor(curid);
end;
/

The v$sql view has two interesting columns.
IS_BIND_SENSITIVE shows cursors where the execution plan can evolve.
IS_BIND_AWARE shows child cursors where a new plan was created, meaning that the cursor was evolved.

select is_bind_sensitive, is_bind_aware, sql_id, child_number, sql_text
from v$sql
where upper(sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'
and sql_text not like '%v$sql%'
;
IS_BIND_SENSITIVE    IS_BIND_AWARE    SQL_ID    CHILD_NUMBER    SQL_TEXT
Y    N    7rjdcm7v7hfrs    0    select count(*) from demo_big where status = :P_ENTER_VALID
Y    Y    7rjdcm7v7hfrs    1    select count(*) from demo_big where status = :P_ENTER_VALID
N    N    3kpu54a461gkm    0    select count(*) from demo_big where status = :P_STATUS
N    N    3kpu54a461gkm    1    select count(*) from demo_big where status = :P_STATUS
Y    N    5zkmtfj331xmc    0    select count(*) from demo_big where status = :P_ENTER_INVALID
N    N    fjjm63y7c6puq    0    select count(*) from demo_big where status = :P_STATUS2
N    N    1qx03gdh8712m    0    select count(*) from demo_big where status = 'INVALID'
N    N    2jm3371mug58t    0    select count(*) from demo_big where status = 'VALID'

The two child cursors

-- find the cursor id
select sql_id, child_number, bucket_id, count, is_bind_sensitive, is_bind_aware, sql_text
from v$sql s
left join v$sql_cs_histogram h using (sql_id, child_number)
where upper(s.sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'
and s.sql_text not like '%v$sql%'
;

-- check the execution plan for both child cursors
select * from table(dbms_xplan.display_cursor('3kpu54a461gkm',0));
select * from table(dbms_xplan.display_cursor('3kpu54a461gkm',1));

-- see the plans in the SGA
select * from v$sql_plan where sql_id = '3kpu54a461gkm';
select * from v$sql_plan where sql_id = 'fjjm63y7c6puq';

Now the strange thing is: The first cursor is using a FULL table scan. But the first execution was done using the VALID value and should have resulted in the index range scan. The second child cursor does not even have an execution plan!

NOTE: cannot fetch plan for SQL_ID: 3kpu54a461gkm, CHILD_NUMBER: 1
      Please verify value of SQL_ID and CHILD_NUMBER; 
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)

What is going on here? v$sql has a column EXECUTIONS which tells us how often this child cursor was called. It is always 0 for the child 1 from the DBMS_SQL cursor!

I did several more tests using DBMS_SQL. Even a case where the cursor was closed and opened several times. All with the same result.

Interpreting the results

I’m still not exactly sure what is going on there. It seems as if bind peeking and adaptive cursor sharing do not work with DBMS_SQL. But why do we then see two child cursors? It seems as if the different parameter values at least have the effect that a new child is created. And this happens only when there is a need for a different execution plan. But where is the plan for that? I still have some doubts. Maybe the execution plan in v$sql is lying in this case? Since DBMS_SQL goes deep into the internals, it might be that some of the normal behaviours are not reflected in some of the views.

The cursor itself is in the private SQL workarea and I never checked that. Another approach would be to set up a scenario where we can measure the performance difference. The test case I used was too small to see a decisive difference between the two possible plans.

Also we have to remember that the need for DBMS_SQL is rare. A normal select with bind parameters is certainly not a case where we need dynamic SQL. A more typical case would be a cursor/statement where we do not know at compile time which columns are returned. Then we can use DBMS_SQL to analyse the structure of such a cursor and react on that.

However if we build some kind of dynamic framework and think about using DBMS_SQL, we should rethink our strategy. Maybe it is easier to provide all the possible cases as plsql APIs, thereby compiling during creation, instead of building the statement in a completely dynamic fashion but suffering some essential drawbacks.

Recommendations

1) Avoid DBMS_SQL, consider using native dynamic SQL (execute immediate) instead (see the sketch after this list)
2) If you have a skewed data distribution, make sure your plans are bind sensitive
3) If you can guarantee an even data distribution, consider adding the NO_BIND_AWARE hint. This should be needed only in some extreme situations (very high performance requirements or cursor cache issues)
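As an illustration for recommendation 1, here is a minimal sketch (using the demo_big table from the tests above) of how the same count can be done with native dynamic SQL, so that bind peeking and adaptive cursor sharing can kick in:

declare
  l_cnt number;
begin
  -- native dynamic SQL: the optimizer can peek at the bind value
  execute immediate 'select count(*) from demo_big where status = :1'
    into l_cnt
    using 'VALID';
  dbms_output.put_line('count = '||l_cnt);
end;
/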

Appendix

The function that I used previously:

create or replace function hist_numtochar2(p_num number
,p_trunc varchar2 :='Y') return varchar2
-- Author: Martin Widlake
-- Source: https://mwidlake.wordpress.com/2009/08/11/
is
  m_vc varchar2(15);
  m_n number :=0;
  m_n1 number;
  m_loop number :=7;
begin
  m_n :=p_num;
  if length(to_char(m_n))>36 then
    --dbms_output.put_line ('input too long');
    m_vc:='num format err';
  else
    if p_trunc !='Y' then
      m_loop :=15;
    else
      m_n:=m_n+power(256,9);
    end if;
    --dbms_output.put_line(to_char(m_N,'999,999,999,999,999,999,999,999,999,999,999,999'));
    for i in 1..m_loop loop
      m_n1:=trunc(m_n/(power(256,15-i)));
      --    dbms_output.put_line(to_char(m_n1));
      if m_n1!=0 then m_vc:=m_vc||chr(m_n1);
      end if;
      --dbms_output.put_line(m_vc);
      m_n:=m_n-(m_n1*power(256,15-i));
    end loop;
  end if;
  return m_vc;
end;
/

OTN Appreciation Day: the OTN forum

The SQL and PLSQL forum

Today is OTN Appreciation Day so I decided to write a short article about my favourite Oracle feature. It is the OTN SQL and PLSQL forum! Reading and posting on this forum made me a better developer. I also frequently visit other forums like Database General, Apex and lately the Oracle JET forum, but not as intensively as the SQL and PLSQL forum.

My OTN forum handle is Sven W.

screen-shot-2016-10-11-at-23-57-04

greetings/honorable mentions

BluShadow – for moderating the forum and creating the FAQ list

Frank Kulash – for always answering in a nice and calm way insistently leading the OP to the final solution.

Billy Verrennye – for making me rethink old habits (like naming conventions) and for providing excellent and well thought-out source code examples.

William Robertson – for always being spot on

Odie_63 – For answering some of my questions, for example by taking apart the internal meaning of ROWIDs for external tables.

Boneist and ApexBine – for making their statements in a male dominated industry

notable threads

About naming conventions in PLSQL

Coding Standards and Code Critique Request

features / ideas

PLSQL 101: Datatypes – DATE

About Ansi Joins

Introduction to regular expressions

SQL Assertions / declarative multi-row constraints

other

The 10 database commandments

Are databases still nice and quick and simple to use like they once were?

Fun stuff from the past

Developers sometimes can be funny and sometimes they just need to blow off some steam.

Here is a collection of thread snippets from the past of the forum. Some years ago I collected memorable posts, but I don’t do this anymore. So the collection is slightly outdated now, but I tried to add a few recent quotes as well.

“OP” is used when I cite the “Original Poster” without giving the real forum handle. Otherwise usually only the first name is used for some of the well known members. Comments on the original quote from myself are in italic. Different threads are separated by a line.

Best of Forum 2007


OP> want a procedure or a block which will give a tree like structure using loops and cursors
3360> Why? Do you have a requirement to make this as slow as possible?


Dave Hemming> select ‘don”t stop me now, I”m having such a good time’ from dual

Special thanks to Dave! This is one of my favourite Queen songs.


APC> Late breaking newsflash: users are not developers!


Damorgan> Writing “working on tables” is as informative as writing “using keyboard and mouse.” See your instructor.


OP> SO CAN YOU PLEASE ASSIST ME ON THAT .

APC> On most keyboards the CAPSLOCK key is halfway down the lefthand side. Please learn to use it.


Billy> The features and flexibility and power of Oracle is NO substitution for a solid relational design.


Sentinel>

insert into table (column) values ('I have John''s shoes');

Sentinel>Of course what I’m doing with his shoes is a completely different story.
John Spencer> Since I only have one pair, I had to go to work barefoot this morning 🙂
John Spencer> Shoeless John


Damorgan> Without a context your posting is just a waste of perfectly good electrons.


OP>Is it possible to do something like this from a running program ?

Billy>Is it possible to jump from an aircraft at 5000 feet? Yes.

Billy>Of course, this has to be questioned as when it is done without a parachute, the changes of survival are very very slim. Never mind that if you’re the pilot, you are sending that plane down.

Billy>Yes, columns can be renamed dynamically from a running program. But it makes as much sense as jumping from a perfectly capable plane without a parachute.


APC> In general computing is about precision and removing ambiguity. That’s why the industry is full of pedants. Maddeningly, there is a direct correlation between pedantry and good programming.


Best of forum 2008


Billy>Be careful about making conclusions using observation only. Simple example. Observe how the WARP_SPEED hint makes the SQL go faster:

SQL> set timing on
SQL> select count(*) from all_objects;
COUNT(*)
----------
10460
Elapsed: 00:00:09.60
SQL> select /*+ WARP_SPEED */ count(*) from all_objects;
COUNT(*)
----------
10460
Elapsed: 00:00:00.50
SQL>

The empirical conclusion is that the WARP_SPEED hint made the query faster by 90%.

This conclusion (based on observation) is incorrect. The real reason why the 2nd query is faster is that it made substantially less physical I/O than the 1st query. The 1st query loaded a lot of data needed into the buffer cache. The 2nd query found that data there and had no need to perform the same expensive and slow physical I/Os that the 1st query did. Nor is there a WARP_SPEED hint.

So be very careful on making assumptions and basing conclusions solely on observation.


Sven> If you provide some example data and your insert statement as everybody suggested, we could give much better solutions without guessing all around.

Hans> But then we would only average one reply per question. And we would not get to spend as much time on the forums, getting to know each other so well.


Billy>Users make incredibly poor Oracle gods. That is what the DBA role is – godlike in Oracle and should be treated with care and respect and given only to those persons responsible for actual database administration. And no, users cannot administrate an Oracle database either.


WhiteHat>Hi all,

The Powers that Be at my work have decided to cut back on the number of different systems we have by re-writing a lot of them from scratch and combining functionality in order to reduce downtime caused by ETL processes and the like. so as a result I’m trying to implement the unified theory of everything in my stored proc and I can’t get it to work. Specifically I’m having difficulties combining quantum mechanics and general relativity into a single SQL statement. I’m getting ORA-06502: numeric or value error: String theory conversion error at line 3523

Is this possible in oracle v150.2.0.5 or will I have to upgrade to 153gR2?

being friday afternoon, my brain isn’t really in gear so I’m certain I’ve overlooked something simple. I suspect my basic architecture assumptions are incorrect but not sure. any advice?

Cheers,
WH.
Dave>It’s possible you’re trying to imply a quantum function to a relativistic variable. You’ll need to explicitly CAST it first.

Of course, I definitely think you need to show us your code. 🙂
Leo>Thats not always necessary as especially the quantum function sometimes can be decrypted itself by Oracle.

But definitely we need the code from line 3517.3 until line 3527.8


Billy>If this was ancient times, and you wrote this code to run on any of my databases, I would have handed your over to the SQL Inquisition for showing you the error of your ways.


Sarma>OP already told it is just an exercise for him on PL/SQL. In short, home-work.
Billy>Oh… I see.. You mean like attending Police College and committing, as home work, crimes like armed robbery, assault with a dangerous weapon, vehicular manslaughter, arson, and so on.
Billy>Yeah, I can see how this can teach you how to enforce the law.. NOT!


Justin>It’s generally helpful to specify the actual exception you’re getting rather than just saying “raises an exception”. Oracle error numbers and error strings are exceptionally useful debugging tools.


Michael O>Isn’t this the Oracle Support Forum where all wishes are granted?


Unknown>It is a basic problem that we face too here in forums. How do we show The Good Stuff of Oracle to a SQL-Server fanboy that cannot bare to empty his cup of SQL-Server in order to taste some Oracle?

Or dealing with a Java zealot that has been bitten badly by the J2EE religion, and sees Oracle as a mere persistence layer.. and not good for anything else?

Some people are so convinced that they are so absolutely right, they cannot even entertain the idea of something alternative.. never mind the idea that they just may be horribly wrong.


WhiteHat>[clippy]

Hi! it looks like you’re trying to use Oracle!

Do you:
( ) want to INSERT data to a table
( ) want to UPDATE existing data in a table
( ) ALTER the structure of the table
( ) search the internet for other queries

[clippy]

It’s not clear what you’re trying to do:
as we understand it it seems like:
you have a newly created table and you want to make it so there’s data in it is this correct?


OP> what if i will have thousands of record.i cant write
them all.
Dave> BANGS HEAD ON DESK
Dave> Instead of the select … from dual union select … from dual… perhaps you could use YOUR OWN TABLE.
OP> yeah i told you that i got it already so i dont need to bang head on desk……thanx neways


OP> Can you just tell me which is the best oracle performance tuning tool in the market? It should be free download.
Guido> It’s name is BRAIN (Biological Resource for All Informations Needs). If you really need to download that you should opt for another career path, I guess. 😉


OP>PL DESCRIBE U’R TABLE WORD.
Billy>Use proper English and not IM SPEAK as this is a technical forum and not some SMS teenage chat room.
padders>Please note however that it is considered acceptable to refer to someone’s ‘leet SQL skillz’.
Dave>Although it’s worth first establishing a reputation that clearly indicates that you do not think “irony” means “similar to iron”.


OP>i have run the package and it will take execution time more than 1 hour, how can i redure the execution time? any one help on this issue.
Matt>Remove all the code from the package.


shoblock> I really wish people would read the responses before they complain that they
aren’t working as desired.
APC> Aw c’mon. Next you’ll be wishing people would look stuff up in the documentation instead of straightaway posting questions here.


Laurent> of course regexp could save ink when printed


OP>i’ve try to add commit; but doesn’t work.

Dave>Glad to see you picked up on the need for a more complete explanation than “doesn’t work”.
Dave>Oh wait, you didn’t.

Someoneelse>That’s a new error in 11g:
Someoneelse>ORA-00042 DOESN’T WORK


Someonelse>IF Using_SQL_Server THEN
Someonelse> EXIT Oracle_Forums;
Someonelse>END IF;


Keith>If thats not clear, I’ll join the hitting head against the brickwall gang.


Billy> Do you fix the symptoms? Or do you fix the problem?
Padders>Erm. The problem I think. Aren’t we supposed to hit the symptoms with the lead pipe?


OP>Thread with title like “urgent help in sql plz ”
Billy>STOP!!

For that you need to fill in the “It Is Truly Urgent” form via the request link on the Oracle Forum main page. In triplicate. Submit it to the moderator. Wait for an urgency verification key to be supplied by the moderator. And only then can you post your urgent posting by attaching the urgency key to it for verification purposes.

Since you did not do it, your account is being reviewed for a possible 6 month suspension. You will also be prohibited from practicing Oracle during that time as you have illustrated the lack of common sense by posting this totally uncalled for and unwarranted “urgent” posting in this forum. And not applying common sense when using Oracle can cause serious injury to your database, cause serious damage to the scalability and performance of your applications, and may just cheese off your Oracle DBA resulting in a lead pipe being taken to your knee caps.


William>A right outer join is just a normal outer join written backwards to confuse everyone


More forum fun


John Stegeman> Last time I checked (1 minute ago), there is no “PL/SQL for SIM cards”


John Stegeman> Or even a entry in the mystical magical caverns of the registry, if none of those are set.
Ed Stevens> Please!  I’ll do anything!  I wash your car!  I’ll mow your lawn! Just don’t send me to the registry!


Someoneelse> We are under attack!
The Database General forum is being flooded with spam!
Here are some of the userids:  …
What the hell, is this a new feature of Jive?

jgarry> You want Jive to pump up social media, Jive pumps up social media.

jgarry> On other places I’ve been surprised by being blocked for too much posting.
I’m really not a robot!  It’s hard to tune that limit right, and some people may compose things beforehand.
But worse, spammers would consider it damage and route around it, with whatever they need to do to have numerous logons.
Like when I knocked some fuzzy balls off the umbrella next to my pool and little black widow spiders scattered everywhere.

Dude!> There is already a feature in place that does not allow people to post one message right after another without waiting for a while; 5 min. if I remember correctly.

BluShadow> It’s 30 seconds Dude!, not 5 minutes.

Dude!> Ah well, time is relative

KayK> all you need is a DeLorean


Dude!> How long will it take until everything implodes?
Billy>  Everything? I assume you are limiting “everything” to our solar system?
In that case, around 5 billion years from now, our sun will run out if fuel, shed its outer layers, and implodes into a white dwarf. Unfortunately it is too small to become a black hole. Which would be a kewl thing. Size some time matters.
Everything as in the universe? Guestimate is a 100 or so trillion years – depending on the theory you deem most likely (of which there are more than a few) describes the end of the universe. Implosion is just one of the theories. Perhaps an Asimov’s The Last Question end and beginning?

Dude!> I don’t worry so much about 5 billion years from now — not even history of the past 100 years is correct.


Billy> Disk space is cheaper than the effort to rebuild tables and indexes in order to reclaim space – and to support this effort as SOP.


William> So ‘QTR’ means ‘Quarter’? What is this, Twitter?


Jonathan Lewis> I got to the end to the first line (after the Hi) and thought: “we’re going to see a match_recognize() solution from Stew Ashton here”.

He was right.


“Re: What is the difference between select count(1) from tab and select count(*) from tab;”

Well after some short ramblings about performance and table sizes the gurus discussion went on to the right track.

Dave> One press on the shift key on my keyboard
William> “count(1)” is a nonstandard variation that takes more keystrokes and requires the parser to substitute “*” in place of the “1”, while making the person who wrote the query look foolish.
If you want an approximate result for a large data set quickly, have a look at the SAMPLE clause, e.g.

select count(*) * 20 from somebigtable sample(5);

Frank> Actually, on my keyboard, ‘1’ takes fewer keystrokes (depending on how you count) than ‘*’.  To type ‘1’, I just press the ‘1’ key, but to type ‘*’ I have to hold down the SHIFT key and then press the ‘8’ key.
Even though it’s that much harder to type, “COUNT (*)” is still better than “COUNT (1)”, for the reasons you mentioned.
Jonathan> You may be taking too narrow a view on the problem – although the correct view may, of course, be keyboard-dependent.  You need to step back from the 1/* dichotomy and consider the effect of parentheses:on the problem.

On my keyboard (*) requires me to do:  {shift} 980 {release}   (a total of 4 keystrokes – or 5 finger movements)

but (1) requires me to do: {shift} 9 {release} 1 {shift} 0 {release}  (a total of 5 keystrokes – or 6 finger movements)

Note also that if you are a “classical typist” your are probably going to use {left shift}, which means a large movement to the 1, unless you use a numeric keypad – in which case the 9 requires you to make a large lateral movement with your right hand (which can then stay in place until after the 8 stroke, of course).

Youngsters these days! Just don’t think things through properly!   (;)
William> Perhaps the round bracket keys are not shifted on some keyboards? I don’t think I’ve ever seen that though.
rp0428>Can you provide a specific reference to ANY of your books or blogs that cover an advanced topic such as this?

Sometimes ‘youngsters’ can benefit from seeing the explanation in context with some example code, trace files and execution plans.
Ospin> Just to inform for people with spain keyboards, this keyboards has “(” in shift+8 and “)” in shift+9, so is quit bit easy type “(8)”, so less finger movements and same results
John> To really figure this out, we probably need sql_trace for brains and bodies – when is Oracle going to wake up and put a bunch of SQL coders under a functional MRI scanner and do metabolic analysis to determine the precise effort involved?
However, i’ll say this: even if select(1) was an order of magnitude easier to type than select (*) (which it’s not), the dissonance and mental stress caused by seeing select(1) is probably enough to kill a few million brain cells of my own (not to mention people who come after me and have to read my code)…


All time classics:
Frameworkia – the NEW PLSQL development standard

News for developers from #OOW16 about 12.2

The following information I obtained during various sessions at OOW16. Sometimes it is just an impression I got while a certain feature was mentioned during a talk. So use the information with caution.

new db version 12.2

So the 12.2 version is out there now, but most of us won’t be able to use it very soon. Why? Because Oracle chose to do a cloud first approach. That means 12.2 is available for the Oracle Cloud but not for other installations yet. And my guess would be that enterprise edition will get it first and standard edition (SE2) might even get it a bit later.

Still there are many exciting new features in it and it makes sense to get ourselves familiar with them. So here are my favourites with a slight focus on developer relevance.

There is documentation about 12.2 new features out already, but it is extremely hard to find. It is hidden in the docs for the new Exadata Express Cloud service. So here is the quick link to see the new features guide.

To summarise, many of the new features focus on improving availability of the database, essentially giving developers and DBAs more options to keep the applications running even when encountering errors or while changes take place.

The second set of enhancements seems to be things that further extend capabilities of features that were added in 12.1.

And of course a lot of performance improving possibilities have been added (this seems to be typical for an R2 version).

longer identifiers (128 chars)

Identifiers can now be up to 128 characters long. This applies to almost all objects including table, column, index and constraint names. Long awaited and finally here.
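Just to illustrate, a deliberately verbose (and made up) example that would have failed with ORA-00972 in 12.1 and earlier:

create table customer_order_line_item_processing_status_history (
  customer_order_line_item_processing_status_history_id number
);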

I suggest not to overdo it at the beginning; I suspect that many external tools that work with the Oracle database might not fully support such long identifiers yet. And of course some common sense when naming your tables and columns should be applied as well. For example please do not repeat the table name again in the column name. However it will help greatly when you apply naming conventions to foreign key constraints or indexes.

There seems to be a new plsql constant Ora_Max_Name_Len that holds the maximum possible value in your database. It can be used at compile time to set the sizes of variables. There is an example in the plsql section later.

SQL functions

improved LISTAGG

LISTAGG now has an option not to error out when the list gets too long. Instead the list will be cut off and a filler (ellipsis “…”) is appended. Additionally we can show the number of omitted entries.

To do so add the new overflow clause, for example:

ON OVERFLOW TRUNCATE WITH COUNT
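A minimal sketch of the full clause (assuming the classic emp table; the “…” filler is the default and can be replaced by any literal):

select listagg(ename, ', ' on overflow truncate '...' with count)
         within group (order by ename) as all_names
from emp;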

Unfortunately LISTAGG is still not able to build distinct lists. If you want that, vote for this feature in the OTN database ideas space: https://community.oracle.com/ideas/12533

conversions with error handling

All conversion functions now have a new clause that decides how an error is handled when the conversion fails because of a format error. CAST, TO_NUMBER, TO_DATE and all the other TO_xxx functions got this improvement. The only exception is TO_CHAR, because everything can be converted into char.

CAST_IN_DB12_2.gif

examples:

CAST(prod_id AS NUMBER DEFAULT 0 ON CONVERSION ERROR)
TO_DATE(last_processed_day DEFAULT date'2016-01-01' ON CONVERSION ERROR, 'DD-MON-RRRR','nls_date_language=AMERICAN')

So in the case where the conversion would result in an error, we can provide an alternative default value instead. This is highly useful!

Additionally there is a new function VALIDATE_CONVERSION which can be used to find values in a column where conversion would not be possible. It returns 0 for invalid values and 1 for correct ones.
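A small sketch of how that can be used to hunt down the offending rows (table and column are made up, the format mask mirrors the example above):

select *
from legacy_import
where validate_conversion(last_processed_day as date,
        'DD-MON-RRRR', 'nls_date_language=AMERICAN') = 0;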

new approximation functions

In 12.1 we already got APPROX_COUNT_DISTINCT.
New are APPROX_PERCENTILE and APPROX_MEDIAN.
These approximation functions can now be used in materialized views!
And there is a parameter that is able to switch the exact functions to the approximate versions.

alter session set approx_for_aggregation = 'TRUE';

Also new are APPROX_COUNT_DISTINCT_DETAIL, APPROX_COUNT_DISTINCT_AGG and TO_APPROX_COUNT_DISTINCT, allowing us to build hierarchies with those aggregation functions – something we were not able to do in the past without rerunning the function on each aggregation level.
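A sketch of that reusable-detail pattern (hypothetical orders table): the detail is computed once per month and then rolled up to the year without scanning the base table again.

with monthly as (
  select trunc(order_date, 'MM') as mth,
         approx_count_distinct_detail(customer_id) as acd_detail
  from orders
  group by trunc(order_date, 'MM')
)
select trunc(mth, 'YYYY') as yr,
       to_approx_count_distinct(approx_count_distinct_agg(acd_detail)) as approx_customers
from monthly
group by trunc(mth, 'YYYY');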

case insensitive linguistic queries

It is now possible to do searches using a case insensitive option, e.g.  BINARY_CI for various functions like LIKE. Those functions are able to use stemming. Not much detail about it yet, but it sounds like it can be used instead of putting UPPER around the columns.  At the moment I have no further information about possible performance impacts of that.

greatly enhanced JSON support

better simplified JSON

There is a complex JSON syntax using all the JSON functions and a simplified JSON syntax using dot notation. In general a lot of work was done to enhance the simplified JSON syntax. For example, elements of an array can now be accessed.

JSON_EXISTS with predicates

I have seen short examples where “and” filters using an && operator were applied inside some SQL statement using JSON_EXISTS. Essentially JSON_EXISTS allows us to add filters to the path expression using a solid set of operators.
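A short sketch against the j_purchaseorder table that is also used further below (treat the exact filter syntax as my note from the session, not as gospel):

select count(*)
from j_purchaseorder
where json_exists(po_document,
        '$.LineItems[*]?(@.Quantity > 3 && @.Part.UPCCode == 85391628927)');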

build JSON with SQL/JSON

Using a similar syntax as the older SQL/XML functions (XMLELEMENT, XMLAGG) we now have some new SQL functions that help us create a JSON document. The output of those functions is VARCHAR2.

JSON_OBJECT, JSON_OBJECTAGG, JSON_ARRAY, JSON_ARRAYAGG
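Two tiny sketches, assuming the classic emp table:

-- one JSON object per row
select json_object('empno' value empno, 'name' value ename) as emp_json
from emp;

-- one JSON array aggregated over all rows
select json_arrayagg(ename order by ename) as names_json
from emp;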


Views on JSON documents / Data Guide

The Data Guide allows to analyse JSON documents and automatically build views and virtual columns for elements in our JSON object.

The data dictionary has a list of all columns with an enabled data guide

USER|ALL|DBA_JSON_DATAGUIDE

We can access the dataguide in SQL with the functions JSON_DATAGUIDE or JSON_HIERDATAGUIDE or in PLSQL with the function DBMS_JSON.GET_INDEX_DATAGUIDE.

The plsql functions need a JSON search index to work. If it exists, those functions should be preferred, since they work on persisted index data. If the JSON documents in the column all have a completely different structure, then it might be better to use the SQL functions and not use a JSON search index.

See also: Multiple Data Guides per Document Set

Based upon the data guide more operations are possible. For example we can easily create views and/or virtual columns that extract information from the JSON document as relational data.

To create a virtual column that shows data from our json column in a relational way we can use  DBMS_JSON.addVC.

To create a view that exposes json document data in a table like structure, we can use DBMS_JSON.createViewonDemand. This is based upon a JSON_TABLE function.


JSON search index

example

CREATE SEARCH INDEX po_dg_only_idx ON j_purchaseorder (po_document) FOR JSON;


GeoJSON support

If the JSON includes coordinates that conform to the GeoJSON standard, then it is possible to do geolocation searches on that JSON document.

GeoJSON also is supported during Spatial Queries and can be converted directly into Spatial Geometry.

more JSON enhancements

  • the JSON sql functions are now available in plsql as well => especially IS JSON will be useful
  • highly improved JSON search index
  • new predefined plsql object types JSON_OBJECT_T, JSON_ARRAY_T, JSON_ELEMENT_T, JSON_KEYLIST_T, JSON_SCALAR_T

MV enhancements

ON STATEMENT refresh for MV

We had ON DEMAND and ON COMMIT before. Now materialised views can be refreshed after DML changes to the base tables, without the need to wait for the commit.
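A sketch of how the declaration could look (hypothetical tables; ON STATEMENT comes with restrictions similar to other fast refreshable join MVs, e.g. the base table rowids in the select list):

create materialized view mv_order_items
  refresh fast on statement
as
select o.rowid as o_rid, i.rowid as i_rid,
       o.order_no, i.item_no, i.amount
from orders o
join order_items i on i.order_id = o.order_id;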

refresh statistics view

This was long overdue! We are now able to see statistics when the materialised view was refreshed. The name of the data dictionary view is not completely clear now, but I suspect DBMS_MVREF_STATS.

There is also a new package DBMS_MVIEW_STATS that can be used to organise the collection and the cleanup of those statistics.

improvements for partitioned objects

  • MOVE TABLE, SPLIT PARTITION and some other partitioned operations can be done online. That means they will not disrupt ongoing DML operations. And this includes automatic index maintenance.
  • CREATE TABLE FOR EXCHANGE prepares a non partitioned table to be partitioned.


superfast analytics

Join Groups

It is now possible to define columns as join groups. A JOIN GROUP is a new database object and it greatly increases performance of Join and analytical queries over those columns.
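The DDL sketch looks like this (hypothetical sales/products tables, both of which would have to be INMEMORY for the join group to pay off):

create inmemory join group sales_products_jg (
  sales(prod_id),
  products(prod_id)
);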

In Memory expressions

An INMEMORY expression is a special kind of virtual column that has an additional INMEMORY attribute attached to it. By that attribute special optimisations kick in that speed up access to this expression. This method is also used by some of the JSON optimisations.

A data dictionary view is added to help find expressions that could profit from such virtual columns.

dba|all|user_expression_statistics

Analytical views

Analytical views are an easy way to define dimensional hierarchies and how they are rolled up. An Analytical View is a new object type.

This seems to be a completely new feature. Slightly based on the older dimensional cube possibilities. I’m not sure if this will be available for all editions later or if it will be an additional cost feature.


PL/SQL stuff

pragma deprecate

This feature was surprising to me, but I find it hugely interesting.

You can now declare plsql methods as deprecated by using this pragma. The “only” thing that happens is that you get a compiler warning (PLW-something) if such a function is used.

syntax example

function myFunc return varchar2
is
  pragma deprecate (myFunc, 'myFunc is deprecated. Use yourFunc instead!');
begin
  return 'x';
end myFunc;

So far I didn’t miss that feature, but I immediately have some projects in mind where I would use it.

static expressions instead of literals

All areas where a literal is to be used can now also accept a so-called “static expression”, e.g.

declare
 myTab varchar2( Ora_Max_Name_Len + 2);
 myObject varchar2(2* (Ora_Max_Name_Len + 2));
begin
 myTab := '"Table"';
 myObject := '"ThisSchema".' || myTab ;
 ...

This works only as long as the expression can be resolved at compilation time.

As such it will not improve or change writing dynamic queries, however there is some impact for deployment scripts. So whenever you needed to use manually written compiler directives, this might be a new alternative.

minor changes

  • ACCESSIBLE BY for sub modules
  • bind plsql only datatypes to dbms_sql => this essentially just finishes what 12.1.0.2 already allowed for anonymous blocks and native SQL (execute immediate).
  • some enhancements for PL/scope: SQL_ID for static sql statements, reports where native SQL is used
  • slight enhancements for dbms_hprof: can now include SQL_IDs,  improved reporting   for sub cursors

new code coverage tool

It splits code into blocks and tells the developer if and how often those blocks are used during some sample run. Code that is not reached is coloured differently (red).

Blocks can be marked with a pragma

pragma coverage('NOT_FEASIBLE');

This way the block is not flagged when it is not covered. The tool is currently only a new DBMS package, but it will be integrated into the next SQL Developer version.

dbms_plsql_coverage_….
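In the 12.2 documentation the package appears as DBMS_PLSQL_CODE_COVERAGE. A minimal sketch of a coverage run (my_pkg.my_proc is a hypothetical unit under test):

declare
  l_run number;
begin
  -- one time setup: creates the DBMSPCC_* coverage tables
  dbms_plsql_code_coverage.create_coverage_tables;
  l_run := dbms_plsql_code_coverage.start_coverage(run_comment => 'demo run');
  my_pkg.my_proc;
  dbms_plsql_code_coverage.stop_coverage;
  dbms_output.put_line('run id = '||l_run);
end;
/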


developer.oracle.com

It is Oracle’s new landing page for developers. Strongly influenced by Steven Feuerstein.

screen-shot-2016-09-20-at-12-21-18

debugger enhancements

SQL can now be executed when the debugger encounters a breakpoint. This is not yet implemented directly in SQL Developer, but it will be there in a future version. In general the new SQL Developer already supports the enhanced debugging capabilities of the database, but not all is possible yet.

The debugger is also not available for the cloud yet. The protocol used is not suited for cloud operations.

EBR (edition based redefinition)

Editioned objects no longer in use can be cleaned up automatically in the background.

All we have to do is drop the edition, even in cases where this was not possible before.

CBO

CBO improved adaptive handling

The adaptive features of the CBO that are already in the database can now be better controlled. And the defaults were set in a way that upgrades from 11g result in very similar behaviour.

OPTIMIZER_ADAPTIVE_FEATURES = TRUE or FALSE => deprecated

replaced by two new parameters

OPTIMIZER_ADAPTIVE_PLANS => default = TRUE

OPTIMIZER_ADAPTIVE_STATISTICS => default = FALSE

So if you switch from 12.1 to 12.2 and if you had OPTIMIZER_ADAPTIVE_FEATURES=TRUE then you might want to set the second parameter to TRUE again. If you switch from 11g to 12.2 you probably want the defaults.

The dbms_stats package got some improvements that go together with it.

Dbms_stats: auto_stat_extensions

At the moment not much is known about that yet.

Mview query rewrite

The CBO can now rewrite a query to a materialised view even if the view is stale at that moment. Materialized view logs are considered to return the correct results.

more CBO based things

  • new histogram types
  • GTT use session private statistics by default
  • automated SQL plan management
    Even after DB upgrade the capture of the old plan can be done by setting the parameter OPTIMIZER_FEATURES_ENABLED = ‘11.2.0.4’
    Plans can then evolve later
  • copy optimiser metadata from pre production to prod.
    EXPDP/IMPDP or dbms_stats.transfer_stats

application containers

Instead of just having a CDB and several PDBs in it, we can now define an Application Container. This serves as a kind of general repository for all PDBs that are part of that application container.

This is highly interesting.

The typical use case seems to be that you roll out your application to various customers (in the cloud) by using a PDB for each customer. All the common objects are added to the application container, so that changes/enhancements can be rolled out to all customers at once or one after the other. The application container keeps track of your upgrade scripts and is able to “replay” those scripts in the specific PDBs. Only it is not really a replay of the script; instead the objects are linked from the PDB to the Application Container. I think this is the same mechanism by which the CDB objects/metadata are currently made available to the PDBs.

Objects can be shared from the Application Container to the PDB by using the SHARING clause in the create statement.

SHARE NONE|METADATA|OBJECT

SHARE METADATA would only share the object definition, while SHARE OBJECT would also share the data.

It is possible to combine application containers with edition based redefinition. Essentially it seems as if the editions are copied from the Application Container to the PDB as all other Objects are copied/linked. The Application container just keeps track of what needs to be installed in a specific PDB.

shards

A new, very special way to have a horizontally distributed database. It seems to cover the same concept as shards in some NoSQL DBs. If the application is distributed over a set of databases, you can now declare them as sharded. And if you define a table as sharded, then its data will be split across the different databases. You use a shard key (very similar to a partition key) and that key defines where the data goes.

Exadata Express Cloud service

Exadata Express Cloud Service

Essentially you get a PDB with a few GB of space on an Exadata Machine running Oracle EE for only 175$ per month. Pricelist here.

SQL Developer version 4.1.5 supports the Exadata Express Cloud service. You can drag and drop objects (tables, etc.) directly onto the connection and get them moved there.

The Exadata Express Service includes Apex , ORDS and SODA. So it can also serve as a kind of JSON document storage using rest interface.


ORDS

Jetty is now supported for production mode. That means you can now run ORDS in standalone mode also for a production environment.

Since ORDS is not part of the database this does not depend on using database version 12.2.


SQL parse time issue – use “minified hinted outlines”

problem description

Recently I encountered a simple looking query that returned a very small result set, but took more than 5 seconds to execute. When executing it a second time it was fast (less than 0.01 seconds). After adding a comment the execution time was bad again (5 seconds). Adding a comment forces the optimizer to parse the query again, while cached blocks can still be reused. So this simple test gives a good indication of where the time is spent.

The query looked simple at first but turned out to be a complex set of views within views with nested subqueries, union all combinations and so on. However the execution plan was all nested loops and the filter criteria were correctly pushed down through all the sub-layers.

reason

The problem was that the complex SQL took a very long time to hard parse.
Executing the plan was fast, but creating the execution plan was eating up the time. The CBO had trouble finding the best execution plan. The more tables are joined, the more different join orders are possible and need to be considered. These join orders are called permutations (see the tuning section below). Considering them can take a considerably long time.


example

This was the query

SELECT r.labwarebarcode
      ,r.orderid
      ,r.orderlineitemid
      ,r.streamid
      ,r.status
      ,r.lineitemcomment
      ,r.bat_identifier
      ,r.labwareitemid
      ,r.xposition
      ,r.yposition
      ,r.layoutid
      ,case when
         count(*) over (partition by r.streamid )                                        -- Number of wells
         =                                                                               -- equals
         count(case when r.status = 'SUCCESS' then 1 end) over (partition by r.streamid) -- number of successfull wells
         then 'FULFILLED'
         else 'NOT'
         end  as platestatus
from v_lnk_check_part_order_results r
where orderid = :orderid
order by streamid, labwareitemid;

Now how can we demonstrate that the parsing time is the problem?

The way to get a clear picture is to trace your statement and run a TKPROF report for it. This is often very hard to do for developers: they usually do not have access to the OS, and without DBA privs they are not able to enable and disable the tracing, let alone run the TKPROF command. Here is a tutorial from John Watson that shows how to trace and use tkprof.

The way described at the beginning only gives us an indication. First run the statement several times until the timings are stable. This means the data blocks are cached in the buffer cache. Then slightly modify the statement, for example by adding a comment, adding a meaningless where condition (2=2) or removing one column from the output. This will lead to a new cursor and to a new hard parse. Run the modified statement a second time and it should be fast again. The difference in execution time gives an approximation of the time needed to hard parse.

It is not perfect, but it is a solid “educated guess”. I didn’t see an easy way in SQL Developer to show those statistics using typical developer privs only.
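As a sketch, the whole measurement could look like this (view name taken from the example above; the comment is the only difference between the two variants):

-- run this twice: 1st run = hard parse + execute, 2nd run = execute only
select count(*)
from v_lnk_check_part_order_results
where orderid = :orderid;

-- changed text => new cursor => new hard parse, but the blocks are cached now
select /* force reparse */ count(*)
from v_lnk_check_part_order_results
where orderid = :orderid;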

the solution

Adding the following hint to the select solved the issue

/*+
LEADING(OLIS@SEL$11 ERRORS@SEL$11 WFI@SEL$2 EXTRAINFO@SEL$11 ILI@SEL$19 PL@SEL$27 ICH@SEL$17 SOL@SEL$25 BAT@SEL$21 CPD@SEL$23
CWFI@SEL$3 SOA@SEL$4 LI@SEL$3
POS@SEL$8 SOA@SEL$7 CWFI@SEL$6 LI@SEL$6
SOA@SEL$13 CE@SEL$13
OSM@SEL$12 O@SEL$12 S@SEL$12 LI@SEL$12)
ALL_ROWS
*/

What this hint does and how to get it is explained below.

I call it the “minified hinted outline” approach, although “extreme lead hinting” was a close runner-up.

tuning effort

We already identified the hard parsing as the point where the CBO spends its time. And more specifically it is the point where the CBO creates multiple execution plans and compares them.


Source: Oracle Docs – Query Optimizer Concepts

To understand the problem that the CBO faces, here is a table that lists the number of potential table combinations that the CBO needs to consider. The permutations go up factorially with the number of tables.

permutation considerations

# tables   join permutations to consider
 2         2! = 2    (A=>B, B=>A)
 3         3! = 6    (A=>B=>C, A=>C=>B, B=>A=>C, B=>C=>A, C=>A=>B, C=>B=>A)
 4         4! = 24
 5         5! = 120
 6         6! = 720
 7         7! = 5,040
15        15! = 1,307,674,368,000

There is a maximum number of permutations that the CBO considers. This maximum number is 2000 (the default since 11g). It can be influenced by the hidden parameter _optimizer_max_permutations. In older database versions this was a normal init.ora parameter, but it was deprecated in 9i and replaced by the “hidden” parameter (http://oracleinaction.com/undocumented-params-11g/). It might be that this maximum number is per query block. The documentation is not perfectly clear about it. A query block can be an independent subquery, a non-merged/inline view or a WITH clause section. So for ten query blocks, it might be that the maximum number of permutations to consider is 20000. I didn’t test that hypothesis but there are some indications that this happened in this specific case.

As we can see from the list this magical number of 2000 permutations is already reached when 7 tables are joined. After that the optimizer will stop looking at other combinations. Btw: that is why the table order in the FROM clause can still make a difference! So put the main tables at the beginning of the FROM clause, so that the CBO considers them correctly in its access path permutations.

general approaches

  • Stored outlines and SQL plan management
  • global table hints

stored outlines

A stored outline will store the execution plan for one specific SQL statement. Stored outlines are mostly used for plan stability; however a nice side effect is that the hard parse phase is shortened when a stored outline is found.
A good introduction to stored outlines can be found in the 10g documentation: https://docs.oracle.com/cd/B19306_01/server.102/b14211/outlines.htm#i30836 . Stored outlines are available at least since 9i. The modern version of a stored outline is a SQL plan baseline and SQL plan management (SPM).

Further reading:
Oracle white paper: http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-sql-plan-mgmt-12c-1963237.pdf#22

Blog “Insight into the workings of the Optimizer”: https://blogs.oracle.com/optimizer/entry/how_to_use_sql_plan
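Just as a sketch, loading the plan of a cached cursor into a SQL plan baseline could look like this (needs the appropriate SPM privileges; the sql_id is only an example):

declare
  l_plans pls_integer;
begin
  l_plans := dbms_spm.load_plans_from_cursor_cache(sql_id => '3kpu54a461gkm');
  dbms_output.put_line('plans loaded: '||l_plans);
end;
/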

global table hints

Essentially what stored outlines do is to set a couple of hints.

We can do this directly ourselves by using global table hints.
The normal way to provide a hint is to add it to the query block it is relevant for. But when views are involved, you probably won’t want to change the view definitions and clutter them with hints. First of all the views might belong to some other schema. Also they are used at different points in the application, so a hint that may be good for one specific SQL might not be good for another SQL using the same view.

Global table hints come to the rescue! A global table hint is able to “inject” a hint into the deeper layers of a view.
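A small sketch of the syntax (view and inner aliases as in the example above): prefixing the inner alias with the view alias injects the hint into the view without touching its definition.

select /*+ leading(r.olis r.errors) */ count(*)
from v_lnk_check_part_order_results r
where orderid = :orderid;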

Quick and not so dirty solution

As already mentioned adding the following hint to the select solved the issue

/*+
LEADING(OLIS@SEL$11 ERRORS@SEL$11 WFI@SEL$2 EXTRAINFO@SEL$11 ILI@SEL$19 PL@SEL$27 ICH@SEL$17 SOL@SEL$25 BAT@SEL$21 CPD@SEL$23
CWFI@SEL$3 SOA@SEL$4 LI@SEL$3
POS@SEL$8 SOA@SEL$7 CWFI@SEL$6 LI@SEL$6
SOA@SEL$13 CE@SEL$13
OSM@SEL$12 O@SEL$12 S@SEL$12 LI@SEL$12)
ALL_ROWS
*/

All we need to get such a hint is clever usage of the Oracle SQL Developer.

how to

Using SQL Developer we can check the execution plan for our query (F10).
In the lower section of the plan there is a section “Other XML” and in this section is a long list of HINTs. This section can be exported.

OR we can use the explain plan command and after that do this:

select *
from table(dbms_xplan.display(
             format=>'BASIC ALIAS OUTLINE'
          ));

“OUTLINE” will show the complete sql outline. Which is the sum of all hints that need to be applied to get this specific plan.
“ALIAS” will show the query block names that are used.
“BASIC” is just to focus on the relevant parts for this example.

A solid first test strategy is to copy and paste the complete outline section into the sql statement and see if it is now fast when using all the hints.

However this list is way too long and we can eliminate most of the hints step by step. I would recommend keeping only the leading hints. Remember, we just want to reduce the complexity caused by the high number of table permutations. The leading hints (and a few others like merge/no_merge) are responsible for freezing the join orders. Make sure to get all leading hints. If one of them is missing, the whole order of execution might break.

If the query is still reasonably fast then we can work with them only. In my case the execution time slightly increased to 0.15 seconds, which is still way below the full 5 seconds when calling it without a hint.

The result may look like this:

LEADING(@SEL$6A1B73E7 CWFI@SEL$3 SOA@SEL$4 LI@SEL$3)
LEADING(@SEL$AB668856 POS@SEL$8 SOA@SEL$7 CWFI@SEL$6 LI@SEL$6)
LEADING(@SEL$5B04CACE SOA@SEL$13 CE@SEL$13)
LEADING(@SEL$FC92CB92 OSM@SEL$12 O@SEL$12 S@SEL$12 LI@SEL$12)
LEADING(@SEL$5461C9FE OLIS@SEL$11 ERRORS@SEL$11 WFI@SEL$2 EXTRAINFO@SEL$11 ILI@SEL$19 PL@SEL$27 ICH@SEL$17 SOL@SEL$25 BAT@SEL$21 CPD@SEL$23)
ALL_ROWS

Ok, let’s look more closely at one of the leading hints that we have:
LEADING(@SEL$5B04CACE SOA@SEL$13 CE@SEL$13)

@SEL$5B04CACE is a named query block, where Oracle itself has chosen the name. We can see those query block names when using the “ALIAS” setting during explain plan creation. I believe those query block names could potentially change when using the same statement on a different database (e.g. moving the code from DEV to TEST DB). Therefore the goal is to get rid of those query block aliases.

SOA@SEL$13 is the table (or view) alias “SOA”, and SEL$13 is the 13th SELECT command in our query, including the outermost select call that we issue ourselves. So the table order is SOA before CE inside the query block “SEL$5B04CACE”.

We can then combine all those leading hints into one single leading hint. The order is relevant. Sometimes it needs a little testing to find the proper order of the hints. This is because the starting points (the query block names) need to be removed when only one hint is used. In my case “@SEL$5461C9FE” was the outermost block (the select itself), so the leading tables that came after that block were the first ones to consider. This outermost block can be identified by studying the ALIAS section of the execution plan.

Conclusion

“Minified hinted outlines” are a quick solution for a very specific type of problem. Typically when using complex queries on views we might want to consider such a solution. It is a quick alternative to stored outlines or SPM.

drawbacks

This solution freezes parts of the execution plan, but in a less invasive way than a real stored outline would. For example we do not say which index to use; the CBO still decides about the best way to access a specific table. It is just the order of the tables and subquery blocks that is hinted and thereby frozen.

If the select or the views change, then the leading hint needs to be regenerated. However this concept was working successfully even when moving the query including the hint from a test to a production environment.

Integrate Oracle JET into Apex 5.0

Introduction

For Apex 5.1 it is announced that the new chart library will be Oracle JET. It is possible to use Oracle JET in Apex 5.0 already. How to integrate it is the scope of this article.

Rationale

I don’t see most of the projects switching to Apex 5.1 very soon. Many have just made the transition to Apex 5 and the Universal Theme or are about to do so, and this was usually quite some work. It is difficult to argue for making a new conversion to Apex 5.1 very soon, even if some of the reasons (Interactive Grids!) are extremely compelling. However since Oracle JET charts will be one of the major areas where we can expect some adaptation when migrating, it makes sense to start working with that technology as soon as possible. Using Oracle JET charts in Apex 5.0 would be a very good preparation and might smooth a future upgrade path.

In the scope of the Apex Dashboard Competition I tried to integrate Oracle JET into Apex 5.0. There are quite a few traps and surprises when trying to do so. I plan to do a series of blog posts that all deal with the combination of Apex 5 and Oracle JET.

This article will just concentrate on how to install Oracle Jet (OJ) and make it available in Apex 5.0.

License considerations

As of a few weeks ago, Oracle JET is open source.
See: http://www.oracle.com/technetwork/developer-tools/jet/downloads/jetlicense-2905156.html

“Anyone and Everyone can use Oracle JET. It’s an Open Source project sponsored by Oracle. It is distributed under the Universal Permissive License(UPL).”

Essentially it means we can use and distribute Oracle JET as long as the appropriate license document is included in the distribution. The license document is also part of the oraclejet.zip file that we need to download.

Download sources

To install Oracle JET, we first need to download the newest version. Since we are not creating an Oracle JET-only application, most of the “Getting Started” docs and templates are not really relevant. The oraclejet.zip file hides under the “Oracle JavaScript Extension Toolkit : Base Distribution” link on the download page.

Download page: http://www.oracle.com/technetwork/developer-tools/jet/downloads/index.html

General documentation links:

Installation

Preparation

Prerequisites: the oraclejet.zip file and an apex 5.0 database.

I tested the whole installation process in the vanilla oracle developer virtual machine. This VM includes an Oracle EE database 12.1.0.2, Apex 5.0, ORDS 3.0.x and SQL Developer 4.1.

http://www.oracle.com/technetwork/database/enterprise-edition/databaseappdev-vm-161299.html

Remember it is for evaluation purposes, but not for production development!

Server Installation Process

Unfortunately there is no CDN link that we could simply include in our apex page. Instead we need to reference the needed js and css files somehow in our Apex page.

To install Oracle JET we need access to the apex web server where the image files are located. Unzip the downloaded oraclejet.zip file into a usable folder there. I suggest using the normal apex/images/libraries folder.

e.g.

/apex/images/libraries/oraclejet

apex/images is typically mapped as the image path /i/. It can later be referenced with the substitution variable #IMAGE_PREFIX#. To access an oraclejet file, we could include it like this: #IMAGE_PREFIX#libraries/oraclejet/libs/oj/v2.0.0/ojs.js .

Unzipping the file into some accessible folder is all we need for the server installation part.

OJ configuration and integration into Universal Theme (UT)

The oj documentation states that the require.js framework is mandatory if data visualization components are used. Since the charts are the most interesting parts for Apex, we are forced to use require.js. Most of the installation problems I encountered had to do with using require.js. More details are in the “About requirejs” part. I hope that in the distribution for Apex 5.1 this dependency will be gone.

About requirejs

Require.js is a tool that is able to load needed resources dynamically. Typically those resources are other javascript libraries. If multiple parts of an application load the same library, require.js will only load it once, including dependencies.

The Oracle JET documentation tells us how to install requirejs.

The loader that makes it easier to manage library references and is designed to improve the speed and quality of your code. Oracle JET uses RequireJS by default for the Oracle JET QuickStart Template and is required if you plan to use Oracle JET’s internationalization, data visualization components, or the oj.OAuth plugin in your application.

To use RequireJS to manage references:

  1. Download Oracle JET as described in Download the Oracle JET Zip File.
  2. Copy js/libs/oj/version/main-template.js to the js folder.
  3. In the js folder, rename main-template.js to main.js.
  4. Add the following script reference to your index.html file:
  5. Update main.js as needed to reference Oracle JET modules or your own scripts.

    For a list of Oracle JET modules and additional details about using RequireJS in your Oracle JET application, see Using RequireJS for Modular Development.

For more information about RequireJS, see http://requirejs.org.

Ok, so let’s do this. Do not copy the code for the main.js from the documentation. This code is outdated and does not have all the correct paths. The reason is simple: some libraries, like oj itself, now have a newer version and the version is included in the path. If you copy the main-template.js as stated in the documentation, you should be safe. This is what I used for the main.js file. The paths are relative to the location of the main.js file itself; in this case setting the baseUrl is not needed.

requirejs.config({
  // baseUrl: '#IMAGE_PREFIX#libraries/oraclejet/js',
  // Path mappings for the logical module names
  paths: {
    'knockout': 'libs/knockout/knockout-3.4.0',
    'jquery': 'libs/jquery/jquery-2.1.3.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.11.4.min',
    'ojs': 'libs/oj/v2.0.0/min',
    'ojL10n': 'libs/oj/v2.0.0/ojL10n',
    'ojtranslations': 'libs/oj/v2.0.0/resources',
    'signals': 'libs/js-signals/signals.min',
    'text': 'libs/require/text',
    'promise': 'libs/es6-promise/promise-1.0.0.min',
    'hammerjs': 'libs/hammer/hammer-2.0.4.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min'
  },
  // Shim configurations for modules that do not expose AMD
  shim: {
    'jquery': {
      exports: ['jQuery', '$']
    }
  },

// This section configures the i18n plugin. It is merging the Oracle JET built-in translation
// resources with a custom translation file.
// Any resource file added, must be placed under a directory named "nls". You can use a path mapping or you can define
// a path that is relative to the location of this main.js file.
    config: {
        ojL10n: {
            merge: {
                //'ojtranslations/nls/ojtranslations': 'resources/nls/myTranslations'
            }
        }
    }
});

/**
 * A top-level require call executed by the Application.
 * Although 'ojcore' and 'knockout' would be loaded in any case (they are specified as dependencies
 * by the modules themselves), we are listing them explicitly to get the references to the 'oj' and 'ko'
 * objects in the callback.
 *
 * For a listing of which JET component modules are required for each component, see the specific component
 * demo pages in the JET cookbook.
 */

require(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojtoolbar','ojs/ojmenu','ojs/ojpictochart'], // add additional JET component modules as needed
  function(oj, ko, $) // this callback gets executed when all required modules are loaded
  {
      // add any startup code that you want here
  }
);

This file will be included in our Apex page using a js script inclusion.

<script data-main="#IMAGE_PREFIX#libraries/oraclejet/js/main" 
src="#IMAGE_PREFIX#libraries/oraclejet/js/libs/require/require.js" 
></script>

Since the script tag needs a special data-main attribute, we cannot use the normal Apex 5 mechanism of adding the js file name to the appropriate page section.

Instead we need to modify the page template. I copied the minimal one.
The result looks like this.

...
<head>
  <meta charset="utf-8">  
  <title>#TITLE#</title>
  #APEX_CSS#
  #THEME_CSS#
  #TEMPLATE_CSS#
  #THEME_STYLE_CSS#
  #APPLICATION_CSS#
  <!-- Oracle JET CSS files -->
  <link rel="stylesheet" href="#IMAGE_PREFIX#libraries/oraclejet/css/libs/oj/v2.0.0/alta/oj-alta-min.css" type="text/css"/>
  <script data-main="#IMAGE_PREFIX#libraries/oraclejet/js/main" src="#IMAGE_PREFIX#libraries/oraclejet/js/libs/require/require.js" ></script>
    
  #PAGE_CSS#  
  #FAVICONS#
  #HEAD#
  
  <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no"/>
</head>
...

There is one more problem. Require.js does not like it if a library with the same name was previously loaded without using require.js. (There are some notable exceptions that depend on using the Asynchronous Module Definition API (AMD).)

Apex 5.0 already loads the Hammer.js library in a different version than the one that is included with the base Oracle JET distribution. Hammer.js provides the possibility to use touch gestures, so it is especially useful for mobile applications.

Because Apex already loaded Hammer, require.js throws an error message. The only way I found to make it work was to remove Hammer.js from the theme. Unfortunately this can be done only by unsubscribing the theme. After unsubscribing, we can edit the theme and remove Hammer.js from the Javascript Files section.

Thanks to Roel Hartmann for the tip!

In any case you should check that the libraries were loaded successfully.

In Firefox open the console (SHIFT+CTRL+K) and switch on the network traffic.

apex5_log_network

If all files loaded successfully then we are up and running.

It is time to try our first chart from the cookbook.

Using a static Cookbook Demo

So let’s try to use the page and add a pictoChart to it.

Link: http://www.oracle.com/webfolder/technetwork/jet/uiComponents-pictoChart-default.html

First we create a static region and add the html from the cookbook.

<div  id="picto-container">
  <div id='pc1' data-bind="ojComponent:{
      component: 'ojPictoChart',
      items: pictoChartItems,
      animationOnDisplay: 'auto',
      columnCount: 5
    }"
    style="vertical-align:middle; margin-right:15px">
  </div>
  <div style="display:inline-block; vertical-align:middle; font-weight:bold">
    <span style="color:#333333; font-size:1.1em">7 out of 10 college students</span><br>
    <span style="color:#ed6647; font-size:1.3em">have sleep problems.</span>
  </div>
</div>

Then we add some javascript functions to the page itself (global functions section).

In the javascript section we need to repeat the require call including the configuration part. This is because the libs are loaded asynchronously. On each page where we want to add some js files from Oracle Jet we would do a require call immediately before the relevant section. This time the baseUrl setting is helpful.

To enable pictocharts we need to require the ojs/ojpictochart lib.

require.config({
  baseUrl: '#IMAGE_PREFIX#libraries/oraclejet/js',
  paths: {
    'knockout': 'libs/knockout/knockout-3.4.0',
    'jquery': 'libs/jquery/jquery-2.1.3.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.11.4.min',
    'ojs': 'libs/oj/v2.0.0/min',
    'ojL10n': 'libs/oj/v2.0.0/ojL10n',
    'ojtranslations': 'libs/oj/v2.0.0/resources',
    'signals': 'libs/js-signals/signals.min',
    'text': 'libs/require/text',
    'promise': 'libs/es6-promise/promise-1.0.0.min',
    'hammerjs': 'libs/hammer/hammer-2.0.4.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min'
  },
   // waitSeconds: 1,
  // Shim configurations for modules that do not expose AMD
  shim: {
    'jquery': {
      exports: ['jQuery', '$']
    }
  }
});      

require(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojknockout', 'ojs/ojpictochart'], function (oj, ko, $) {
    // knockout view model that holds the data items for the chart
    function PictoChartModel() {
      this.pictoChartItems = ko.observableArray([
        {name: 'Have Sleep Problems', shape: 'human', count: 7, color: '#ed6647'},
        {name: 'Sleep Well', shape: 'human', count: 3}
      ]);
    }
    var pictoChartModel = new PictoChartModel();

    // bind the model to the picto-container region once the DOM is ready
    $(document).ready(
      function() {
        ko.applyBindings(pictoChartModel, document.getElementById('picto-container'));
      }
    );
  });

Voilà, we are done!

This is how the result looks.
Screen Shot 2016-04-07 at 01.16.45

How to enhance the pictoCharts a bit, including adding other shapes, will be explained in another blog post.

Here are just two more screenshots to give you some ideas.

Screen Shot 2016-04-07 at 01.17.16Screen Shot 2016-04-07 at 00.28.13

SQL Quickie: Column usage

A question directed at me recently was whether we can somehow monitor how a table is accessed and which where conditions are used. The goal was to decide about index creation for such a table.

Fortunately Oracle already collects some relevant information for this.

The sys.col_usage$ table is used by the dbms_stats package to decide which columns might profit from a histogram. If we have DBA privileges, we can access this information:

-- column usage 
select o.owner, o.object_name,  c.column_name,
       u.*
from dba_objects o
join dba_tab_columns c on c.owner = o.owner and c.table_name = o.object_name 
left join SYS.COL_USAGE$ u  on o.object_id = u.obj# and u.intcol# = c.column_id
where o.object_type = 'TABLE'
and c.owner = 'LARRY' -- enter appropriate schema here
and c.table_name = 'CLOUDNO9' -- enter appropriate table here
order by c.table_name, c.column_id;

And this is how the result could look:

OWNER  OBJECT_NAME  COLUMN_NAME  OBJ#   INTCOL#  EQUALITY_PREDS  EQUIJOIN_PREDS  NONEQUIJOIN_PREDS  RANGE_PREDS  LIKE_PREDS  NULL_PREDS  TIMESTAMP
LARRY  CLOUDNO9     COL1         68200        1             725               5                  0            0           1           0  10.03.2016 15:42
LARRY  CLOUDNO9     COL2
LARRY  CLOUDNO9     COL3
LARRY  CLOUDNO9     COL4
LARRY  CLOUDNO9     COL5
LARRY  CLOUDNO9     COL6         68200        6               0               1                  0            0           0           0  19.04.2012 13:33
LARRY  CLOUDNO9     COL7         68200        7               0               1                  0            0           0           0  19.04.2012 13:33
LARRY  CLOUDNO9     COL8         68200        8               0               1                  0            0           0           2  25.06.2014 10:28
LARRY  CLOUDNO9     COL9         68200        9             326               3                  0            0           1           0  24.02.2016 09:13
LARRY  CLOUDNO9     COL10        68200       10             316               1                  0            0           0           0  24.02.2016 09:13
LARRY  CLOUDNO9     COL11        68200       11             315               1                  0            0           0           0  24.02.2016 09:13
LARRY  CLOUDNO9     ID           68200       12             891            1674                  0            0           0           0  10.03.2016 17:57
LARRY  CLOUDNO9     FK1_ID       68200       13            2236               3                  1            0           0           0  10.03.2016 19:43
LARRY  CLOUDNO9     FK2_ID       68200       14               1             508                  0            0           0           3  10.03.2016 17:57
LARRY  CLOUDNO9     CHANGED
LARRY  CLOUDNO9     CHANGED_BY
LARRY  CLOUDNO9     COL12        68200       17               0             161                  0            0           0         161  10.03.2016 14:57

The different columns are statistic counts with a fairly obvious meaning:

  • EQUALITY_PREDS = equality predicates
    for example a where condition like this

    col1 = 'ABC'
  • EQUIJOIN_PREDS = column used by an equi join
    for example a join like this

    ...from a join b on a.id = b.a_id

    IN / NOT IN comparisons often also count as equi joins

    ...from a 
    where a.id not in (select b.a_id from b)
  • NONEQUIJOIN_PREDS = column used by a non-equi join
    for example a join like this

    ...from a join b on a.col1 between b.col1 and b.col2
  • RANGE_PREDS = column used by a range comparison
    for example a where condition like this

    where a.col_date >= trunc(sysdate)-5
  • LIKE_PREDS = column used by a like comparison
    for example a where condition like this

    where a.col1 like 'AMORE%'

    Any kind of like expression counts.

  • NULL_PREDS = column compared with IS NULL
    for example a where condition like this

    where a.col1 is null

    IS NOT NULL is not tracked, and the count is maintained only for columns that are optional (nullable).

  • TIMESTAMP = tells us when the column was last used in one of those ways

When such statements are executed, the information is first collected somewhere in the SGA. We can force it into the col_usage$ table by flushing the monitoring info:

execute DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
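
To see the mechanism in action, here is a minimal sketch of the whole workflow. The table DEMO_T and its column STATUS are made-up names for this demonstration; the flush call and the dictionary joins are the same as shown above.

-- demo objects (made-up names)
create table demo_t (id number, status varchar2(10));

-- run a statement with an equality predicate on STATUS
select count(*) from demo_t where status = 'OPEN';

-- force the collected usage info from the SGA into sys.col_usage$
execute DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

-- EQUALITY_PREDS for STATUS should now show a count of 1
select c.column_name, u.equality_preds, u.timestamp
from dba_objects o
join dba_tab_columns c on c.owner = o.owner and c.table_name = o.object_name
join sys.col_usage$ u on u.obj# = o.object_id and u.intcol# = c.column_id
where o.object_type = 'TABLE'
and c.table_name = 'DEMO_T'
and c.column_name = 'STATUS';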

Recommendation

We can add a simple index check to the previous select.
This helps to quickly identify columns that might profit from an index.
Obvious candidates are columns where no index exists but with a high count of EQUALITY_PREDS+EQUIJOIN_PREDS, and also RANGE_PREDS.

-- column usage + index check
select o.owner, o.object_name,  c.column_name,
       case when exists (select null 
                          from dba_ind_columns i 
                          where i.table_owner = c.owner 
                          and i.table_name = c.table_name 
                          and i.column_name = c.column_name) 
        then 'YES' else 'NO' 
       end  as index_exists,
       u.*
from dba_objects o
join dba_tab_columns c on c.owner = o.owner and c.table_name = o.object_name 
left join SYS.COL_USAGE$ u  on o.object_id = u.obj# and u.intcol# = c.column_id
where o.object_type = 'TABLE'
and c.owner = 'LARRY' 
and c.table_name = 'CLOUDNO9'
order by c.table_name, c.column_id;

And if you want to quickly scan your whole schema, try this version.

-- column usage without index for the whole schema 
select * 
from (
  -- column usage + index check
  select o.owner, c.table_name,  c.column_name, 
         case when exists (select null 
                            from dba_ind_columns i 
                            where i.table_owner = c.owner 
                            and i.table_name = c.table_name 
                            and i.column_name = c.column_name) 
          then 'YES' else 'NO' 
         end  as index_exists,
         c.num_distinct, c.nullable, 
         u.*
  from dba_objects o
  join dba_tab_columns c on c.owner = o.owner and c.table_name = o.object_name 
  left join SYS.COL_USAGE$ u  on o.object_id = u.obj# and u.intcol# = c.column_id
  where o.object_type = 'TABLE'
  and c.owner = 'LARRY' 
  --and c.table_name = 'XXX'
  )
where index_exists = 'NO' 
and equality_preds+equijoin_preds+range_preds > 100 
and equality_preds+equijoin_preds+range_preds > like_preds+null_preds
and num_distinct > 1
order by equality_preds+equijoin_preds+range_preds desc, table_name, intcol#;
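
Once such a candidate shows up, acting on it is usually a one-liner. A purely hypothetical example, assuming the scan above had flagged the foreign key column FK2_ID as unindexed:

-- hypothetical follow-up: index the flagged foreign key column
create index cloudno9_fk2_ix on larry.cloudno9 (fk2_id);

Of course the counts alone do not prove that an index will help. Selectivity matters too, which is why the query also filters on num_distinct > 1; a column with only a single distinct value rarely profits from a standard index.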

References

Maria Colgan mentioned this table in her blog about column histograms:
https://blogs.oracle.com/optimizer/entry/how_does_the_method_opt

Ulrike Schwinn wrote a nice article for the German community about table and column monitoring:
https://apex.oracle.com/pls/apex/germancommunities/dbacommunity/tipp/1261/index.html