Apex 5.1 session cloning

Introduction

With Apex 5.1 we got a nifty new feature: it is now possible to clone an Apex session, so that we can have two (or more) independent Apex sessions running in the same browser.

It only took me 2h to implement this from scratch. So it is really easy to do.

Joel Kallman describes how to do it in this blog post: http://joelkallman.blogspot.de/2016/07/apex-session-isolation-across-multiple.html

There are a few additional tweaks I would like to mention here.

Step 1) Enable session cloning on instance level

Log on as SYS (or any user that has the apex_administrator_role).

I first granted my normal DBA account the apex_administrator_role

grant apex_administrator_role to myDBA;

and then as DBA:

begin
apex_instance_admin.set_parameter(
p_parameter => 'CLONE_SESSION_ENABLED',
p_value     => 'Y');
end;
/
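To verify the setting you can read the parameter back, for example like this (again as a user with the apex_administrator_role):

set serveroutput on
begin
  dbms_output.put_line('CLONE_SESSION_ENABLED = ' ||
    apex_instance_admin.get_parameter(p_parameter => 'CLONE_SESSION_ENABLED'));
end;
/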

 

If you are in a CDB/PDB environment with Apex installed in the PDB (recommended), then make sure to run this in the matching PDB (especially when working as SYS).

e.g.

alter session set container = PDBAPEX;

Joel's article also explains how to enable this for a single workspace, but I got an error message when trying to do so.
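The workspace-level call from his post looks roughly like this (quoted from memory, so double-check the exact procedure name and parameters in his post):

begin
  apex_instance_admin.set_workspace_parameter(
    p_workspace => 'MY_WORKSPACE',
    p_parameter => 'CLONE_SESSION_ENABLED',
    p_value     => 'Y');
end;
/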

"ORA-20987: APEX - Instance parameter not found - Contact your application administrator."

Step 2) Add a new navigation bar entry

Of course you are free to add this functionality wherever you want. But you will need a link that the user clicks on to get a new session. My preference was the navigation bar.

The url for the link is simple. Just add APEX_CLONE_SESSION as the request parameter.

f?p=&APP_ID.:&APP_PAGE_ID.:&APP_SESSION.:APEX_CLONE_SESSION

Create a navigation bar list entry with this link:

  • Go to “Shared Components/Navigation Bar Lists/navigation bar” (the name of my list).
  • Choose any icon (fa-clone) and label text (“Session klonen”, German for “clone session”) that you want.
  • The target page needs to be “&APP_PAGE_ID.”. This will use the current page and add it into the link.
  • And most importantly the request needs to be APEX_CLONE_SESSION.

(Screenshot: the new clone session navigation bar entry)

The entry is now ready and working. However, it will replace the session in the current browser tab. A right-click and “open in new tab” gives us two tabs with different Apex sessions.

If that is not enough, then the next step is for you.

Step 3) Open the link in a new page

The goal is to add a TARGET attribute to our link, so that a new tab is always opened when we clone the session. Unfortunately the navbar template has no option to include link attributes. We could do a kind of injection trick with the url, but I prefer a cleaner way.

To do so we need to modify three small parts.

First copy the navigation bar template as a new template “Navbar with Attributes”.

There we add the user-defined attribute #A03# to our link, i.e. inside every <a ...> tag of the template (e.g. <a href="#LINK#" #A03#>). This needs to be done at least for the “list template current” and “list template non-current”. I also added it for the sub list entries, even though my navbar doesn’t use sub lists.

(Screenshot: the copied list template with the #A03# link attribute)

Don’t forget to add a description “Link Attributes” for the newly added attribute value in the “attribute description” section (scroll down a bit to see it).

Then enhance our existing navbar entry with target="_blank".

(Screenshot: the navbar entry with the target="_blank" link attribute)

There is a small difference between using target="_blank" and target="someothername".

target="_blank" will always create a new tab.

target="someothername" will open a new tab on the first click. Consecutive clicks will reuse that same tab. This can be useful if you want to prevent your users from constantly cloning session after session after session.

 

And finally make sure that our application uses the new template for showing the navbar list

Shared Components/User Interface Attributes/Desktop/Navigation bar/List Template

(Screenshot: user interface attributes with the new navigation bar list template)

 

Result

The navigation bar could then look similar to this:

(Screenshot: navigation bar with the German “Session klonen” entry)

Clicking on the “clone session” link will open a new tab in the same browser. In the url we will see that a different session id was created.

The new session will have all the same item values (page items, application items, etc.) as the previous session had. But from this point on, the two sessions will start to differ.

Both sessions use the same browser cookie. For that reason, if one session logs out, the other session is logged out too.

 

How to upgrade from Apex 5.0 to 5.1

Preparation

As a preparation I recommend several steps and checks to prepare the upgrade.

  • Make sure you have a working database backup and just in case a DBA at hand who would be able to restore your database, tablespace or schemas.
  • Workspace clean up – delete applications that are not needed anymore, especially copies of other applications that were created to test a specific feature or to do a proof of concept. Be careful about whether you also want to run any included deinstallation scripts. For an Apex update you generally do not want to delete the connected tables.
  • Software Download – Apex 5.1 can be downloaded from the OTN download page: www.oracle.com/technetwork/developer-tools/apex/downloads/index.html
  • Backup Application – Export the application including all private saved reports
  • Export supporting objects like Theme, Static Application files
  • Backup Image folder – if your image folder is /i/ then make a copy and rename the copy to /i50/
  • Check the Apex 5.1 known issues page:  http://www.oracle.com/technetwork/developer-tools/apex/downloads/apex-51-known-issues-3408261.html

Apart from downloading the new Apex version, none of these steps is strictly required. But it gives a nice and cozy feeling to know that you are able to go back.

In Apex 5.1 some features are deprecated and some options changed. It is possible to prepare your application to anticipate the effects of the upgrade. I will cover this in a separate blog post. More importantly, read through the “changed behaviour”, “deprecated” and “desupported” sections of the installation manual (https://docs.oracle.com/database/apex-5.1/HTMRN/toc.htm#HTMRN-GUID-8BDFB50B-4EC6-4051-A9B6-7D5805E00C4E ).

Here are some things to consider already in Apex 5.0.

  • Apex.Server plugin
    • does not return a jqXHR object anymore
    • async option deprecated
    • htmldb_getobject desupported => replace with apex.server
  • old Apex themes deprecated
  • check for CANCEL or BACK or PREVIOUS buttons (page redirect) with execute validations = YES. These will do client side validation in Apex 5.1. If that is not wanted, change it to NO.
  • jsTree plugin deprecated
  • classic reports
    • hidden column type => hidden column
    • no enterable fields!
  • file browse storage => switch from WWV_FLOW_FILES to APEX_APPLICATION_TEMP_FILES
  • desupported attributes or types
    • page: body header, include standard javascript and css
    • region: svg charts, simple chart, classic tree
    • button: pre text, post text
    • item: start and stop grid layout, file browse storage type
  • Conditions deprecated: text= value, text != value, text is (not) contained in item
  • No more: Save state before branching
  • apex_plsql_job package desupported
  • check if you reference the internal hidden fields  (renamed in Apex 5.1): pPageChecksum => pPageItemsProtected, p_md5_checksum=>pPageItemsRowVersion
  • date picker (classic) deprecated
  • several updated javascript libraries

Decide about the upgrade path

Now consider whether you want to do a traditional upgrade (all steps in one go) or if you want to minimize the application downtime (several steps, not for CDB$ROOT installations). Or, as Oracle calls it, “maximize application uptime”.

To minimize downtime read this chapter in the documentation: https://docs.oracle.com/database/apex-5.1/HTMRN/toc.htm#HTMRN-GUID-411DE0D8-59E1-4267-8751-FD56305ABE4E

Upgrade in one go

Only one step is needed:

Create database schemas and database objects (tables, packages) and do the application migrations.

@apexins.sql tablespace_apex tablespace_files tablespace_temp images
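Using the example tablespaces and image path from the rest of this post, the call would look like this:

@apexins.sql APEX APEX TEMP /i/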

Upgrade with maximum Uptime

Instead of running a single script, we can do the upgrade in several steps. Only during the third step do the end users need to be disconnected. This third step took only 1.01 seconds on my system.

The upgrade of an Application Express instance runs in four phases:

  1. Create database schemas and database objects (tables, packages).
    This essentially creates the APEX_050100 schema.
    -> no influence on running sessions
  2. Migrate application metadata.
    This copies the repository application data from APEX_050000 into APEX_050100.
    To help with that some upgrade triggers were previously installed.
    ->  developers can’t work anymore
  3. Migrate data that runtime applications modify and switch to the new version.
    -> downtime for all (developers and end users)
  4. Migrate additional log and summary data (this step starts automatically)
    -> no influence on running sessions

But we need zero downtime – is it possible?

I’m convinced it is possible to reach a zero-downtime application upgrade using the EBR (edition based redefinition) feature of the Oracle database. I have extensive knowledge using EBR, even inside Apex. However so far I didn’t have the time to do a proof of concept (POC) for the upgrade. Also this would (currently) be an unsupported action. The change would include tweaking several non-editioned objects (public synonyms, session contexts, registry data) in such a way that they show up differently when used inside an edition.

If any Germany based customer or the Oracle Apex Team itself is interested how to do this and is willing to pay for the time I need to invest in this, then please contact me.

do the upgrade

Unzip the Apex_5.1.zip file into an appropriate folder and navigate to the apex_5.1/apex folder.

If you decide on the “Maximum Uptime” upgrade path, then three scripts need to run, and ORDS needs to be stopped for script 3. To run the scripts we need to know the tablespace names and the image path.

Find the tablespace

The documentation gives examples using the SYSAUX tablespace. I do not recommend that. Apex should have its own tablespace.

select username, default_tablespace, temporary_tablespace, profile, oracle_maintained
from dba_users
where regexp_like(username,'^(APEX_|ORDS_)');

This shows only the default setting. We can reuse the same tablespace. But it is also possible to install Apex 5.1 into a new tablespace. If you want to do that, then this new tablespace needs to be created first.
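If you go down that route, a minimal example could look like this (the tablespace name APEX51 is just an example, and this short syntax assumes Oracle Managed Files are configured):

create tablespace apex51 datafile size 500M autoextend on next 100M;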

Sometimes we want to see if the data is really in this default tablespace. Here is a select that will show the data distribution and also how much space is used.

select owner as schema, tablespace_name as data_tbs, nvl(segment_type,' - total -') segment_type, round(sum(bytes)/1024/1024,2) size_in_MB
from dba_extents
where regexp_like(owner,'^(APEX_|ORDS_)')
group by owner, tablespace_name, rollup(segment_type)
;

Example result

SCHEMA        DATA_TBS SEGMENT_TYPE SIZE_IN_MB
APEX_050000   APEX     INDEX        239,69
APEX_050000   APEX     TABLE        267,19
APEX_050000   APEX     LOBINDEX     12,75
APEX_050000   APEX     LOBSEGMENT   240,31
APEX_050000   APEX     - total -    759,94
ORDS_METADATA ORDS     INDEX        4,5
ORDS_METADATA ORDS     TABLE        1,63
ORDS_METADATA ORDS     LOBINDEX     0,19
ORDS_METADATA ORDS     LOBSEGMENT   0,38
ORDS_METADATA ORDS     - total -    6,69

This is an example from one of my Apex environments. As you can see, only one Apex tablespace “APEX” is used. Approximately the same amount of data is in tables, in indexes and in LOBs. The LOBSEGMENT sizes indicate that there has been some wwv_flow_files activity going on.

Run the scripts

The scripts are located in the apex subfolder. For example D:/product/apex/apex_5.1/apex.

Navigate to that folder and start a SQL*Plus session as SYS (as SYSDBA). If you are in a CDB/PDB environment, connect to the PDB, not to CDB$ROOT. To connect to the PDB the service name needs to be provided, which means that a running listener and a matching tnsnames.ora entry are also needed.
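For example, assuming a tnsnames.ora alias PDBAPEX for the pluggable database used earlier:

sqlplus sys@PDBAPEX as sysdba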

Assuming the following settings:

  • tablespace for apex and apex files: APEX
  • temp tablespace: TEMP
  • image directory: /i/

Then these three scripts need to run.

@apexins1.sql APEX APEX TEMP /i/
@apexins2.sql APEX APEX TEMP /i/

stop ORDS

@apexins3.sql APEX APEX TEMP /i/

restart ORDS

Phase 4 is automatically started by running a dbms_scheduler job.

Check privs and synonyms

Sometimes we give extra access from the Apex schema to our own custom schema. For example, in one application I extended the Apex Feedback functionality and used the view APEX_TEAM_FEEDBACK. Such grants need to be recreated in the new APEX_050100 schema.

ACLs

Here is just how to check if the Apex 5.0 schema has any network ACLs set.

SELECT *
FROM DBA_HOST_ACES xe
where principal = 'APEX_050000';

I created a script to duplicate all ACEs that exist for APEX_050000 to APEX_050100.

The script is shown and explained further below (see “copy ACLs during Upgrade to Apex 5.1”).

 

Grants

Here is just how to check if the Apex 5.0 schema has any objects granted to other schemas.


select *
from all_tab_privs
where grantor like 'APEX\_%' escape '\'
and grantee not in ('PUBLIC','SYS','SYSTEM');

It will show objects like tables, views and packages that have privileges granted directly.

To see only the missing grants you can run the following statement. If it returns no rows, then you are fine.


select GRANTEE , TABLE_SCHEMA , TABLE_NAME , PRIVILEGE , GRANTABLE ,HIERARCHY
from all_tab_privs
where grantor in ('APEX_050000','APEX_050100')
and not regexp_like (grantee,'^(APEX_|ORDS_|SYSTEM$|PUBLIC$|SYS$)')
group by GRANTEE , TABLE_SCHEMA , TABLE_NAME , PRIVILEGE , GRANTABLE ,HIERARCHY
having count(*) = 1 and min(grantor) = 'APEX_050000';

Those grants probably need to be created again for the new APEX_050100 schema.
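As a starting point, the missing grant statements can be generated from the dictionary. This is only a sketch; it assumes the same object names exist in APEX_050100 and that you review the generated output before running it:

select 'grant '||privilege||' on APEX_050100.'||table_name||' to '||grantee
       ||case when grantable = 'YES' then ' with grant option' end
       ||';' as grant_ddl
from all_tab_privs
where grantor = 'APEX_050000'
and not regexp_like(grantee,'^(APEX_|ORDS_|SYSTEM$|PUBLIC$|SYS$)');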

Search Source code

After the migration you should search the complete application and your own database code for any remaining references to APEX_050000.
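On the database side, two simple checks are your own PL/SQL source and the object dependencies (the application itself can be searched with the App Builder search):

select name, type, line, text
from user_source
where upper(text) like '%APEX_050000%';

select owner, name, type, referenced_owner, referenced_name
from dba_dependencies
where referenced_owner = 'APEX_050000';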

downgrade to 5.0

The section about how to downgrade an Apex installation back to 5.0 is currently missing from the documentation. Here is an official blog post that describes how to do it: http://jastraub.blogspot.de/2017/01/ooops-i-did-it-again.html

This is the sql script that Jason Straub published to do the downgrade:

alter session set current_schema = SYS;

@wwv_flow_val.sql
@wwv_flow_val.plb 

begin
 dbms_utility.compile_schema('APEX_050000');
end;
/ 

set define '^'
@validate_apex x x APEX_050000

begin
 for i in ( select owner, trigger_name
 from sys.dba_triggers
 where owner = 'APEX_050000'
 and trigger_name like 'WWV_FLOW_UPGRADE_%'
 order by 1 )
 loop
 sys.dbms_output.put_line('Dropping trigger '||i.owner||'.'||i.trigger_name);
 execute immediate 'drop trigger '||i.owner||'.'||i.trigger_name;
 end loop;
end;
/

ALTER SESSION SET CURRENT_SCHEMA = APEX_050000;
exec apex_050000.wwv_flow_upgrade.switch_schemas('APEX_050100','APEX_050000');

ALTER SESSION SET CURRENT_SCHEMA = SYS;
drop context sys.APEX$SESSION;
create context sys.APEX$SESSION using APEX_050000.WWV_FLOW_SESSION_CONTEXT;
declare
 l_apex_version varchar2(30);
begin
 l_apex_version := apex_050000.wwv_flows_release;
 dbms_registry.downgrading('APEX','Oracle Application Express','validate_apex','APEX_050000');
 dbms_registry.downgraded('APEX',l_apex_version);
 validate_apex;
end;
/


copy ACLs during Upgrade to Apex 5.1

The following works on 12c only. In previous database versions the package to set ACLs had different procedures; those are now deprecated. The script below does not call any deprecated functions.

You can see the ACLs/ACEs by checking the data dictionary.

select * from DBA_HOST_ACES where principal like 'APEX_%';

This little script checks the network ACLs of the Apex 5.0 schema and copies them to the Apex 5.1 schema. It will not delete any ACLs, but use it at your own risk. It commits automatically.


declare
  /* Author:  Sven Weller
     Company: syntegris information solutions GmbH
     Purpose: Transfer Network ACLs from APEX_050000 to APEX_050100 schema
     Created: 11.01.2017
  */
  v_source_schema varchar2(30) := 'APEX_050000';
  v_target_schema varchar2(30) := 'APEX_050100';

  v_ace        xs$ace_type;
  v_host       DBA_HOST_ACES.host%type;
  v_lower_port DBA_HOST_ACES.lower_port%type;
  v_upper_port DBA_HOST_ACES.upper_port%type;
BEGIN
  for apex50acls in (
    SELECT xe.*
          ,row_number() over (partition by host, principal, lower_port, upper_port, start_date, end_date, grant_type, inverted_principal, principal_type
                              order by ace_order, privilege) privlist#
          ,dense_rank() over (order by host, principal, lower_port, upper_port, start_date, end_date, grant_type, inverted_principal, principal_type) group#
    FROM DBA_HOST_ACES xe
    where principal = v_source_schema
    and (xe.host, xe.lower_port, xe.upper_port, xe.start_date, xe.end_date, xe.grant_type, xe.inverted_principal, xe.principal_type)
        not in (select t.host, t.lower_port, t.upper_port, t.start_date, t.end_date, t.grant_type, t.inverted_principal, t.principal_type
                from DBA_HOST_ACES t
                where t.principal = v_target_schema)
    order by group#, privlist#
  ) loop

    if apex50acls.group# > 1 and apex50acls.privlist# = 1 then
      -- store the previously built ace before starting a new one
      DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
        host        => v_host,
        lower_port  => v_lower_port,
        upper_port  => v_upper_port,
        ace         => v_ace);
    end if;

    if apex50acls.privlist# = 1 then -- first privilege of this ace
      -- prepare the new ace
      v_ace := xs$ace_type(
                 privilege_list => xs$name_list(apex50acls.privilege),
                 principal_name => v_target_schema,
                 principal_type => case apex50acls.principal_type
                                     when 'APPLICATION' then xs_acl.ptype_xs
                                     when 'DATABASE'    then xs_acl.ptype_db
                                     when 'EXTERNAL'    then xs_acl.ptype_external
                                   end,
                 granted    => apex50acls.grant_type = 'GRANT',
                 inverted   => apex50acls.inverted_principal = 'YES',
                 start_date => case when apex50acls.start_date < systimestamp then systimestamp
                                    when apex50acls.start_date > systimestamp then apex50acls.start_date
                               end,
                 end_date   => apex50acls.end_date
               );
      v_host       := apex50acls.host;
      v_lower_port := apex50acls.lower_port;
      v_upper_port := apex50acls.upper_port;
    else
      -- add a further privilege to the current ace
      v_ace.privilege_list.extend;
      v_ace.privilege_list(apex50acls.privlist#) := apex50acls.privilege;
    end if;

  end loop;

  if v_host is not null then
    -- store the final ace
    DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
      host        => v_host,
      lower_port  => v_lower_port,
      upper_port  => v_upper_port,
      ace         => v_ace);
  end if;
END;
/

 

 

about JET Diagrams (JET v2.2.0) in Apex 5

This is a follow-up to my older blog post “Integrate Oracle JET into Apex 5“.

Oracle JET Diagrams are a new data visualization type in Oracle JET 2.1.0.

This post is organized into three mostly independent parts

  1. How to setup Oracle JET v2.2.0 for usage in Apex
  2. How to copy Oracle JET Container Diagrams from the cookbook into Apex
  3. Using Oracle JET Diagrams with container layout

 

How to setup Oracle JET v2.2.0 for usage in Apex

Step 1) Download the base distribution

From the download page (http://www.oracle.com/technetwork/developer-tools/jet/downloads/index.html) choose the base distribution and download this zip file.

Step 2) Unzip JET into the APEX image folder

Copy and unzip the file into a folder inside your Apex image path. Where you put it is your own choice. I prefer to add it to the library path where Oracle JET will also live in the Apex 5.1 distribution (/libraries/oraclejet/2.0.2).

You can choose a very similar path “/libraries/oraclejet/2.2.0”. Create this path and unzip the file there.

The next time Apex is upgraded, remember not to delete or move the image folder but simply to overwrite it with the new images (and make a copy of the original beforehand).

Step 3) Create, manipulate and deploy the main.js file

The basis for this should always be the main-template.js file from the subfolder \js\libs\oj\v2.2.0. This template has all the correct paths and versions for all sub modules that are included in the main.js.

Additionally we can add a baseUrl that points to the folder where we unzipped JET. If we put the main.js file in the js folder, then this is not needed. We will come back to that baseUrl later. For JET version 2.2.0 the complete main.js file looks like this:


/**
 * Example of Require.js bootstrap javascript
 */

requirejs.config({
  // Path mappings for the logical module names
  paths: {
    'knockout': 'libs/knockout/knockout-3.4.0',
    'jquery': 'libs/jquery/jquery-3.1.0.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.12.0.min',
    'ojs': 'libs/oj/v2.2.0/min',
    'ojL10n': 'libs/oj/v2.2.0/ojL10n',
    'ojtranslations': 'libs/oj/v2.2.0/resources',
    'text': 'libs/require/text',
    'promise': 'libs/es6-promise/es6-promise.min',
    'hammerjs': 'libs/hammer/hammer-2.0.8.min',
    'signals': 'libs/js-signals/signals.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min',
    'css': 'libs/require-css/css.min',
    'customElements': 'libs/webcomponents/CustomElements.min',
    'proj4': 'libs/proj4js/dist/proj4'
  },
  // Shim configurations for modules that do not expose AMD
  shim: {
    'jquery': {
      exports: ['jQuery', '$']
    }
  },

  // This section configures the i18n plugin. It is merging the Oracle JET built-in translation
  // resources with a custom translation file.
  // Any resource file added, must be placed under a directory named "nls". You can use a path mapping or you can define
  // a path that is relative to the location of this main.js file.
  config: {
    ojL10n: {
      merge: {
        //'ojtranslations/nls/ojtranslations': 'resources/nls/myTranslations'
      }
    },
    text: {
      // Override for the requirejs text plugin XHR call for loading text resources on CORS configured servers
      useXhr: function (url, protocol, hostname, port) {
        // Override function for determining if XHR should be used.
        // url: the URL being requested
        // protocol: protocol of page text.js is running on
        // hostname: hostname of page text.js is running on
        // port: port of page text.js is running on
        // Use protocol, hostname, and port to compare against the url being requested.
        // Return true or false. true means "use xhr", false means "fetch the .js version of this resource".
        return true;
      }
    }
  }
});

/**
 * A top-level require call executed by the Application.
 * Although 'ojcore' and 'knockout' would be loaded in any case (they are specified as dependencies
 * by the modules themselves), we are listing them explicitly to get the references to the 'oj' and 'ko'
 * objects in the callback.
 *
 * For a listing of which JET component modules are required for each component, see the specific component
 * demo pages in the JET cookbook.
 */
require(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojtoolbar', 'ojs/ojmenu'], // add additional JET component modules as needed
  function(oj, ko, $) // this callback gets executed when all required modules are loaded
  {
    // add any startup code that you want here
  }
);

Step 4) Reference the main.js file in the page template
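In short: require.js and the main.js file from step 3 have to be loaded on every page that uses JET, for example by adding them to the JavaScript file URLs of the page or of the page template. The exact URLs depend on where the files were placed in steps 2 and 3, e.g. #IMAGE_PREFIX#libraries/oraclejet/2.2.0/js/main.js for the main.js file.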

 

How to copy Oracle JET Container Diagrams from the cookbook into Apex

The JET cookbook demo can be found here. The interactivity in this visualization is charming. We can organize nodes into containers and expand or collapse the containers.

Step 1) Copy the html and the js code from the cookbook to our page

Step 2) Add the require.config call

This time we add a base URL.

requirejs.config({
  baseUrl: '#IMAGE_PREFIX#libraries/oraclejet/js',
  // Path mappings for the logical module names
  paths: {
    'knockout': 'libs/knockout/knockout-3.4.0',
    'jquery': 'libs/jquery/jquery-3.1.0.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.12.0.min',
    'ojs': 'libs/oj/v2.2.0/min',
    'ojL10n': 'libs/oj/v2.2.0/ojL10n',
    'ojtranslations': 'libs/oj/v2.2.0/resources',
    'text': 'libs/require/text',
    'promise': 'libs/es6-promise/es6-promise.min',
    'hammerjs': 'libs/hammer/hammer-2.0.8.min',
    'signals': 'libs/js-signals/signals.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min',
    'css': 'libs/require-css/css.min',
    'customElements': 'libs/webcomponents/CustomElements.min',
    'proj4': 'libs/proj4js/dist/proj4'
  },
  // Shim configurations for modules that do not expose AMD
  shim: {
    'jquery': {
      exports: ['jQuery', '$']
     }
    },

// This section configures the i18n plugin. It is merging the Oracle JET built-in translation
// resources with a custom translation file.
// Any resource file added, must be placed under a directory named "nls". You can use a path mapping or you can define
// a path that is relative to the location of this main.js file.
    config: {
        ojL10n: {
            merge: {
                //'ojtranslations/nls/ojtranslations': 'resources/nls/myTranslations'
            }
        }
    }
});

Step 3) Find out why it is not working yet

The only remaining resource that could not be loaded should be the diagramLayouts/DemoContainerLayout.js file. The reason is simple: this file is not included in the base zip file. However we can get it directly from the JET cookbook page.

First we open the cookbook in standalone mode. There is a button in the upper right corner that helps us to do so. (Screenshot: launching the cookbook in standalone mode)

Then we inspect the network files again and locate the DemoContainerLayout.js. We can simply copy the address and save the file to our system.

(Screenshot: copying the layout file URL from the browser network tab)

 

Step 4) Copy and integrate the layout file

To integrate this layout file into our page I chose a slightly different approach. This is not a file that is relevant for a default Oracle JET installation. Instead I'd like to add it specifically to my application. This way I can modify the file and influence the behaviour of my diagram without changing anything for other applications.

So we upload it as a static application file (in my case with a directory “oraclejet”).

And we reference the file directly in the require call. Here the suffix ".js" is important. It tells require that this is a direct file reference and not an alias name for a previously defined resource.

require(['ojs/ojcore', 'knockout', 'jquery', '#APP_IMAGES#oraclejet/DemoContainerLayout.js',
'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojdiagram'], function(oj, ko, $, layout) { …

 

Using Oracle JET Diagrams with container layout

An Oracle JET diagram is essentially a graph. It consists of nodes and links between the nodes. The container diagram has the additional possibility to organize nodes into a hierarchy. Other layouts have similar possibilities but render it completely differently.

Which layout to use is configured in the attributes of the ojDiagram component (view) and inside the JavaScript model. (Screenshots: layout configuration in the view and in the model)

The container layout has only very limited drawing possibilities. Nodes are rectangles and links are lines.

The main nodes (containers) are always drawn horizontally from left to right. Child nodes are always drawn vertically from top to bottom and inside their parent container. All nodes that have child nodes are considered containers and can potentially be expanded or collapsed.

Links that connect nodes that are side by side are attached to the left or right side of the nodes. Links that connect nodes that are above or below each other connect to the top and bottom part of a node.

This very simple drawing approach allows for some nice small visualizations. For example we can easily present process flows with that. If we want to draw huge networks, then another layout will be more appropriate.

How to change descriptions

Nodes have several properties that can be set. A complete list can be found in the ojDiagram doc.

  • id ==> will uniquely identify a node. It will also be used as StartNode and EndNode in the link properties.
  • label ==> the text that is printed inside the node.
  • shortDesc ==> a small description that is shown as a tooltip when hovering over a node

The cookbook uses a small function to simplify node creation. But we can also create a node using direct json syntax.

this.nodes.push({
  id: "id",
  label: "label",
  shortDesc: "shortDesc",
  nodes: null
});

 

How to color the nodes

All nodes have a default style. The default is a kind of greyish background. We can change the backgroundStyle property for our node.

this.nodes[0].nodes[0].nodes[0].backgroundStyle = 'height:20px;width:60px;border-color:#444444;background-color:#00FF80;border-width:.5px;border-radius:8px';

This colors the first child of the first child in the first container to green and rounds the corners.

We can also simply set the background color, without setting all the other properties. For example for the second child in the first container.

this.nodes[0].nodes[1].backgroundStyle = "background-color:red";

It is possible to add images or shapes to our diagram. We can position them in the middle, left or right inside a node. This line will put a small yellow “human” in node N1.

this.nodes[1].icon = {width: 10, height: 10, halign: "right", 
shape: "human", color:"yellow", borderColor:"grey"};

The following shapes are predefined.

square, plus, diamond, triangleUp, triangleDown, 
human, rectangle, star, circle

It is possible to create custom shapes by providing an SVG path. Or we can add images instead of a shape. However this post is too small to explain that in more detail.

Next I show how to create a custom gradient fill. There are two steps to do so.
First create the SVG fill gradient:

<svg height="0" width="0">
    <defs>
      <linearGradient id="gradient" x1="0%" y1="100%">
        <stop offset="0%" style="stop-color: #66ccff"></stop>
        <stop offset="80%" style="stop-color: #0000FF"></stop>
      </linearGradient>
    </defs>
  </svg>

then add this gradient to the node.

this.nodes[0].containerStyle = {fill: "url(#gradient)"};

And the combined result looks like this. It certainly is not pretty, but it shows what is possible using a little imagination.

(Screenshot: the colored example diagram)

Further reading: JET custom shapes and image markers

The source code for this coloring example can be copied into the JET cookbook page.

The HTML part

<div id="diagram-container">
  <svg height="0" width="0">
    <defs>
      <linearGradient id="gradient" x1="0%" y1="100%">
        <stop offset="0%" style="stop-color: #66ccff"></stop>
        <stop offset="80%" style="stop-color: #0000FF"></stop>
      </linearGradient>
    </defs>
  </svg>
  <div id="diagram" data-bind="ojComponent: {
      component: 'ojDiagram',
      layout: layoutFunc,
      animationOnDataChange: 'auto',
      animationOnDisplay: 'auto',
      maxZoom: 2.0,
      selectionMode: 'single',
      styleDefaults : styleDefaults,
      nodes : nodes,
      links : links,
      expanded: expanded
    }"
    style="max-width:800px;width:100%; height:600px;"></div>
</div>

The javascript part


require(['ojs/ojcore', 'knockout', 'jquery', 'diagramLayouts/DemoContainerLayout',
         'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojdiagram'], function(oj, ko, $, layout) {

  function model(data) {
    var self = this;
    self.layoutFunc = layout.containerLayout;

    function createNode(id, nodes) {
      return {
        id: id,
        label: id,
        shortDesc: "Node " + id,
        nodes: nodes ? nodes : null
      };
    }

    function createLink(id, startId, endId) {
      return {
        id: id,
        startNode: startId,
        endNode: endId,
        shortDesc: "Link " + id + ", connects " + startId + " to " + endId
      };
    }

    this.expanded = ['N0', 'N00'];
    this.nodes = [], this.links = [];
    var childNodesN00 = [createNode("N000"), createNode("N001")];
    var childNodesN0 = [createNode("N00", childNodesN00), createNode("N01"), createNode("N02")];
    var childNodesN2 = [createNode("N20"), createNode("N21"), createNode("N22")];
    this.nodes.push(createNode("N0", childNodesN0));
    this.nodes.push(createNode("N1"));
    this.nodes.push(createNode("N2", childNodesN2));
    this.nodes.push(createNode("N3"));

    this.nodes[0].nodes[0].nodes[0].backgroundStyle = 'height:20px;width:60px;border-color:#444444;background-color:#00FF80;border-width:.5px;border-radius:8px';
    this.nodes[0].nodes[1].backgroundStyle = "background-color:red";
    this.nodes[1].icon = {width: 10, height: 10, halign: "right", shape: "human", color:"yellow", borderColor:"grey"};
    this.nodes[0].containerStyle = {fill: "url(#gradient)"};

    // disable selection on some containers
    this.nodes[0].selectable = 'off';
    this.nodes[0].nodes[0].selectable = 'off';

    // create the links
    this.links.push(createLink("L0", "N2", "N3"));
    this.links.push(createLink("L1", "N1", "N21"));
    this.links.push(createLink("L2", "N1", "N22"));
    this.links.push(createLink("L3", "N000", "N1"));
    this.links.push(createLink("L4", "N001", "N1"));
    this.links.push(createLink("L5", "N02", "N1"));
    this.links.push(createLink("L6", "N000", "N001"));

    this.styleDefaults = {
      nodeDefaults: {
        containerStyle: "border-color:#abb3ba;background-color:#f9f9f9;border-width:.5px;border-radius:1px;padding-top:20px;padding-left:10px;padding-bottom:10px;padding-right:10px;",
        labelStyle: "color:#252525;font-size:8px;font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-weight:normal;font-style:italic",
        backgroundStyle: 'height:20px;width:60px;border-color:#444444;background-color:#f9f9f9;border-width:.5px;border-radius:1px',
        icon: null
      },
      linkDefaults: {startConnectorType: "circle", endConnectorType: "arrow"}
    };
  }

  $(document).ready(
    function() {
      ko.applyBindings(new model(),
        document.getElementById('diagram-container'));
    }
  );
});

How to modify links

Modifying links is very similar to modifying nodes. One main difference however is the definition of the “arrows” on each side of the link. Usually we want all links to look the same. So instead of changing the properties of each single link, we just change the default behaviour.

The following line will make the links look like simple arrows.

linkDefaults: {startConnectorType: "none", endConnectorType: "arrow"}

Also for static diagrams I prefer to give each link a proper description (shortDesc).

 

How to add interactivity

Back to our Apex application. The goal here is to click on a node (or a link) and to show a specific Apex region that corresponds to the selection.

First we make nodes selectable. The diagram layout can do "single" or "multiple" selections. To allow this, we add the selectionMode: 'single' property to our view. And since we want to work with the selected parts later, we also add selection: selectedNodes.

This selectedNodes array needs to be defined in the model (see the JavaScript part below).

Then we prepare our apex page and put some “apex connector logic” in place.

We create a region for each node that we want to interact with.

The region gets a static id R_DETAILS_XXX where XXX is the ID of the node and it gets a custom attribute

style="display: none;"

As a result we know the ID of each region and the region will be rendered but not displayed. With that we add a small function showDetails to the page. It will show one region and hide another (the previous) one.


function showDetails(showNodes, hideNodes) {
  console.log("ShowDetails=" + showNodes);
  if (hideNodes !== "") {
    $("#R_DETAILS_" + hideNodes).hide();
  }

  $("#R_DETAILS_" + showNodes).show();
}

 

 

The JET and knockout binding will then be done using the optionChange property.

We add a function to react to the change of a selection. The “value” and the “previousValue” will then hold the ID of the node (or link). If we choose to allow multiple selections it can be an array of nodes.

Html

<div id="diagram" data-bind="ojComponent: {
component: 'ojDiagram',
layout: layoutFunc,
selection: selectedNodes,
selectionMode: 'single',
styleDefaults : styleDefaults,
nodes : nodes,
links : links,
optionChange: diagramOptionChange
}"
style="max-width:800px;width:100%; height:600px;"></div>

Javascript


// set default selection

this.selectedNodes = ['N000'];

// disable selection on some containers
 this.nodes[0].selectable = 'off';
 this.nodes[0].nodes[0].selectable = 'off';

self.diagramOptionChange = function (event, data) {
 console.log("optionchanged="+data.option);
 if (data['option'] == 'selection') {
   showDetails(data['value'], data['previousValue']);
 }};

 

 

 

Further reading:

Data visualization blog: A guide to diagrams (part 9)

 

 

 

adaptive cursor sharing and DBMS_SQL

A recent post in the OTN forum mentioned that DBMS_SQL does not use bind peeking for bind variables. I couldn’t believe that, so I decided to do some tests myself. The findings are strange…

This is potentially relevant for Apex developers, since the Apex engine uses DBMS_SQL. I still have to do further testing to check the behaviour in Apex.

First I set up some tests to show bind peeking and adaptive cursor behaviour using normal statements in SQL*Plus or SQL Developer. After that we move to dynamic SQL, especially DBMS_SQL, and try the same again.

scenario setup

create skewed testdata

--drop table demo_big;
create table demo_big as
select level as id, 
       case when mod(level,10000)=0 
            then 'VALID' 
            else 'INVALID' 
       end as status
from dual
connect by level <= 1000000;

desc demo_big;

Name   Null Type
------ ---- -----------
ID          NUMBER
STATUS      VARCHAR2(7)

select status, count(*) 
from demo_big 
group by rollup(status);
STATUS     COUNT(*)
INVALID    999900
VALID      100
           1000000

So we have a few VALID values and a lot of INVALID ones.

Even though we have only two different values, an index is useful on this column. The data distribution is so skewed that any access reading the VALID rows profits from an index. However if we access the INVALID rows we do not want to use the index and instead want a full table scan.

-- create indexes on all the important columns
create unique index demo_big_id_ix on demo_big(id);
create index demo_big_status_ix on demo_big(status);

create statistical data (histograms)

First we create the statistics so that the optimizer knows what is in the table and what the data looks like.

-- create statistics and test histogram
execute dbms_stats.gather_table_stats(user, 'DEMO_BIG', method_opt=>'for all indexed columns size skewonly');

Then we check the data dictionary to see what has been created so far.
The hist_numtochar2 function is copied from Martin Widlake (Source: https://mwidlake.wordpress.com/2009/08/11/). It just helps to do a crude translation of the numerical histogram bucket endpoints. The code of the function can be found at the end of this post.

I only show the result of the last select. The other selects are here just as references; they are helpful to see what kind of statistics are in place.

select table_name, num_rows, blocks, last_analyzed
from user_tables
where table_name = 'DEMO_BIG';

select table_name, column_name, num_distinct, histogram, num_buckets, sample_size 
from user_tab_columns
where table_name = 'DEMO_BIG';

select *
from user_histograms
where table_name = 'DEMO_BIG' and column_name = 'STATUS';

select table_name, column_name, endpoint_number, endpoint_value, hist_numtochar2(endpoint_value) as translated_value
from user_histograms
where table_name = 'DEMO_BIG' and column_name = 'STATUS';

Here we see a frequency histogram with two buckets for the column STATUS.

TABLE     COLUMN  ENDPOINT_NUMBER    ENDPOINT_VALUE           TRANSLATED_VALUE
DEMO_BIG  STATUS  999900     380626532452853000000000000000000000    INVALJ*
DEMO_BIG  STATUS  1000000    447861930473196000000000000000000000    VALID

The first bucket holds 999900 values where status= INVALID.
The next bucket holds 1000000-999900 = 100 where status = VALID.

This of course matches exactly what we created. So the statistical info in the dictionary is absolutely correct.

Tests

Now that our setup is in place, we can do some basic testing to see different plans.

check execution plan with LITERALS

-- test different cursor/execution plan using plain selects
select count(*) from demo_big where status = 'VALID';
select * from table(dbms_xplan.display_cursor);
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("STATUS"='VALID')
select count(*) from demo_big where status = 'INVALID';
select * from table(dbms_xplan.display_cursor);
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("STATUS"='INVALID')

Perfect! As expected one does an index access / index range scan, the other does a full table scan.

check execution plan with BIND parameters

select count(*) from demo_big where status = :P_ENTER_VALID;
select * from table(dbms_xplan.display_cursor);
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("STATUS"=:P_ENTER_VALID)
select count(*) from demo_big where status = :P_ENTER_INVALID;
select * from table(dbms_xplan.display_cursor);
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("STATUS"=:P_ENTER_INVALID)

The two statements are not identical because the name of the bind parameter is different. Because of that we get two different cursors, each with a different execution plan.
This test shows that bind peeking works. During the hard parse phase the value of the bind parameter was checked (peeked), so that the correct estimations for the resulting rows/cardinalities were made. This in turn led to the correct plan for each of the two different statements. However, the first peeked value “freezes” the execution plan, so that if we change the bind value, the same plan is reused.

This behaviour was enhanced in 11g with the introduction of adaptive cursor sharing and got steadily improved since then.

To test adaptive behaviour we run the first query again a few times (at least 4 times). But this time we do not pass VALID, but instead INVALID as a parameter.

After that we can see a new child cursor 1 for the sql_id “7rjdcm7v7hfrs”.

select is_bind_sensitive, is_bind_aware, sql_id, child_number, sql_text
from v$sql
where upper(sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'  and sql_text not like '%v$sql%'
;
IS_BIND    IS_BIND SQL_ID        CHILD   SQL_TEXT
_SENSITIVE _AWARE                _NUMBER
Y          N       7rjdcm7v7hfrs 0       select count(*) from demo_big where status = :P_ENTER_VALID
Y          Y       7rjdcm7v7hfrs 1       select count(*) from demo_big where status = :P_ENTER_VALID
Y          N       5zkmtfj331xmc 0       select count(*) from demo_big where status = :P_ENTER_INVALID
select * from table(dbms_xplan.display_cursor('7rjdcm7v7hfrs',0));
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("STATUS"=:P_ENTER_VALID)

select * from table(dbms_xplan.display_cursor('7rjdcm7v7hfrs',1));
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter("STATUS"=:P_ENTER_VALID)

This is adaptive behaviour. After a few bad tries a second execution plan is created for the same cursor and used. How many tries are needed? Often it changes on the third try. But it can happen that more are needed.

Test with DBMS_SQL

Now comes the more difficult part: set up a small PL/SQL block that uses DBMS_SQL to run the same statement again using bind parameters.

-- testcase for BIND peeking/aware using DBMS_SQL
declare
  curid    NUMBER;
  ret      INTEGER;
  sql_stmt VARCHAR2(200);
begin
  sql_stmt := 'select count(*) from demo_big where status = :P_STATUS';

  -- get cursor handle
  curid := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(curid, sql_stmt, DBMS_SQL.NATIVE);

  DBMS_SQL.BIND_VARIABLE(curid, 'P_STATUS', 'VALID');
  ret := DBMS_SQL.EXECUTE_and_fetch(curid);

  DBMS_SQL.PARSE(curid, sql_stmt, DBMS_SQL.NATIVE);
  DBMS_SQL.BIND_VARIABLE(curid, 'P_STATUS', 'INVALID');
  for i in 1..5 loop
    ret := DBMS_SQL.EXECUTE_and_fetch(curid);
  end loop;

DBMS_SQL.close_cursor(curid);
end;
/

The v$sql view has two interesting columns.
IS_BIND_SENSITIVE shows cursors where the execution plan can evolve.
IS_BIND_AWARE shows child cursors where a new plan was created, meaning that the cursor was evolved.

select is_bind_sensitive, is_bind_aware, sql_id, child_number, sql_text
from v$sql
where upper(sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'
and sql_text not like '%v$sql%'
;
IS_BIND_SENSITIVE IS_BIND_AWARE SQL_ID        CHILD_NUMBER SQL_TEXT
Y                 N             7rjdcm7v7hfrs 0            select count(*) from demo_big where status = :P_ENTER_VALID
Y                 Y             7rjdcm7v7hfrs 1            select count(*) from demo_big where status = :P_ENTER_VALID
N                 N             3kpu54a461gkm 0            select count(*) from demo_big where status = :P_STATUS
N                 N             3kpu54a461gkm 1            select count(*) from demo_big where status = :P_STATUS
Y                 N             5zkmtfj331xmc 0            select count(*) from demo_big where status = :P_ENTER_INVALID
N                 N             fjjm63y7c6puq 0            select count(*) from demo_big where status = :P_STATUS2
N                 N             1qx03gdh8712m 0            select count(*) from demo_big where status = 'INVALID'
N                 N             2jm3371mug58t 0            select count(*) from demo_big where status = 'VALID'

Let's have a closer look at the two child cursors.

-- find the cursor id
select sql_id, child_number, bucket_id, count, is_bind_sensitive, is_bind_aware, sql_text
from v$sql s
left join v$sql_cs_histogram h using (sql_id, child_number)
where upper(s.sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'
and s.sql_text not like '%v$sql%'
;

-- check the execution plan for both child cursors
select * from table(dbms_xplan.display_cursor('3kpu54a461gkm',0));
select * from table(dbms_xplan.display_cursor('3kpu54a461gkm',1));

-- see the plans in the SGA
select * from v$sql_plan where sql_id = '3kpu54a461gkm';
select * from v$sql_plan where sql_id = 'fjjm63y7c6puq';

Now the strange thing is: The first cursor is using a FULL table scan. But the first execution was done using the VALID value and should have resulted in the index range scan. The second child cursor does not even have an execution plan!

NOTE: cannot fetch plan for SQL_ID: 3kpu54a461gkm, CHILD_NUMBER: 1
      Please verify value of SQL_ID and CHILD_NUMBER; 
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)

What is going on here? v$sql has a column EXECUTIONS which tells us how often this child cursor was called. It is always 0 for the child 1 from the DBMS_SQL cursor!

I did several more tests using DBMS_SQL. Even a case where the cursor was closed and opened several times. All with the same result.

Interpreting the results

I’m still not exactly sure what is going on here. It seems as if bind peeking and adaptive cursor sharing do not work with DBMS_SQL. But why do we then see two child cursors? It seems as if the different parameter values at least have the effect that a new child is created. And this happens only when there is a need for a different execution plan. But where is the plan for that child? I still have some doubts. Maybe the execution plan in v$sql is lying in this case? Since DBMS_SQL goes deep into the internals, it might be that some of the normal behaviours are not reflected in some of the views.

The cursor itself is in the private SQL work area and I never checked that. Another approach would be to set up a scenario where we can measure the performance difference. The test case I used was too small to see a decisive difference between the two possible plans.

Also we have to remember that the need for DBMS_SQL is rare. A normal select with bind parameters is certainly not a case where we need dynamic SQL. A more typical case would be a cursor/statement where we do not know at compile time which columns are returned. Then we can use DBMS_SQL to analyse the structure of such a cursor and react to that.

However, if we build some kind of dynamic framework and think about using DBMS_SQL, we should rethink our strategy. Maybe it is easier to provide all the possible cases as PL/SQL APIs that are compiled at creation time, instead of building the statement in a completely dynamic fashion and suffering some essential drawbacks.

Recommendations

1) Avoid DBMS_SQL; consider using native dynamic SQL (execute immediate) instead, as sketched below
2) If you have a skewed data distribution, make sure your plans are bind_sensitive
3) If you can guarantee an even data distribution, consider adding the NO_BIND_AWARE hint. This should be needed only in some extreme situations (very high performance requirements or cursor cache issues)
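For recommendation 1, here is a minimal sketch of the same count query using native dynamic SQL with a bind variable (using the demo_big table from the test setup above):

declare
  v_cnt pls_integer;
begin
  -- native dynamic SQL: same statement text, different bind values
  execute immediate 'select count(*) from demo_big where status = :p_status'
    into v_cnt using 'VALID';
  dbms_output.put_line('VALID rows: ' || v_cnt);

  execute immediate 'select count(*) from demo_big where status = :p_status'
    into v_cnt using 'INVALID';
  dbms_output.put_line('INVALID rows: ' || v_cnt);
end;
/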

Appendix

The function that I used previously:

create or replace function hist_numtochar2(p_num number
,p_trunc varchar2 :='Y') return varchar2
-- Author: Martin Widlake
-- Source: https://mwidlake.wordpress.com/2009/08/11/
is
  m_vc varchar2(15);
  m_n number :=0;
  m_n1 number;
  m_loop number :=7;
begin
  m_n :=p_num;
  if length(to_char(m_n))>36 then
    --dbms_output.put_line ('input too short');
    m_vc:='num format err';
  else
    if p_trunc !='Y' then
      m_loop :=15;
    else
      m_n:=m_n+power(256,9);
    end if;
    --dbms_output.put_line(to_char(m_N,'999,999,999,999,999,999,999,999,999,999,999,999'));
    for i in 1..m_loop loop
      m_n1:=trunc(m_n/(power(256,15-i)));
      --    dbms_output.put_line(to_char(m_n1));
      if m_n1!=0 then m_vc:=m_vc||chr(m_n1);
      end if;
      dbms_output.put_line(m_vc);
      m_n:=m_n-(m_n1*power(256,15-i));
    end loop;
  end if;
  return m_vc;
end;
/

about IDLE session timers in Apex 5

Introduction

In a recent project I wanted to use the idle session timer. The final goal was to give the end user a way to see how soon his session will end. It can happen, when the user enters a lot of data in a tabular form and the work is interrupted, that all the effort is in vain because the idle timeout forces a new login.

Martin D’Souza wrote an article about that some time ago: http://www.talkapex.com/2009/09/enhanced-apex-session-timeouts.html. But it is a bit outdated now; he does not reuse the Apex settings and simply uses his own timer that happens to match the Apex timeout. A better approach would be to use the Apex settings.

Surprisingly it was not so easy to find where the idle session time is set and how to use it programmatically. Here are my findings.

How to set the idle timeout

See also this older blog post from Patrick Wolf about setting the timeout in Apex 3.2. Some of it is still true, however some things have changed since then. For example, we can now also set the timeouts at workspace level.

Is your Oracle APEX session timing out?

There are 3-4 levels where the session idle timeout can be set.
Each specific level overwrites the setting of the more generic level.

Instance Level

Login as instance administrator (INTERNAL workspace).

Manage instance / Security / Session Timeout

The default is 1 hour (=3600 seconds).

(Screenshot: instance-level session timeout setting)

Workspace Level

As an instance administrator (INTERNAL workspace) go to

Manage Workspaces / Existing Workspace / Click on Workspace Name / Session Timeout
 
(Screenshot: workspace-level session timeout setting)

Application Level

Application / Shared Components / Application Definition Attributes / Security / Session Management

 

(Screenshot: application-level session timeout setting)

Session Level

This can only be set programmatically.

http://docs.oracle.com/cd/E14373_01/apirefs.32/e13369/apex_util.htm#AEAPI355

BEGIN
APEX_UTIL.SET_SESSION_MAX_IDLE_SECONDS(p_seconds => 1200);
END;

How to read the settings

Here are some commands that help to find out the current session timeouts/idle timeouts. Note that to get the instance level settings (i.e. the default if nothing else was set), we could not use the official Apex APIs.
Update! This has been changed with version 5.0.4!


--- Find apex session idle times

-- Application level
select application_id, application_name, maximum_session_idle_Seconds, maximum_session_life_seconds, session_lifetime_exceeded_url, session_idle_time_exceeded_url
from apex_applications
--where application_id = :APP_ID
;

-- Application level - DBA access only
select security_group_id, display_id as application_id, name as application_name, max_session_length_sec, on_max_session_timeout_url, max_session_idle_sec, on_max_idle_timeout_url
from apex_050000.wwv_flows
--where display_id = :APP_ID
;

-- Workspace level
select workspace, workspace_display_name, maximum_session_idle_Seconds, maximum_session_life_seconds
from apex_workspaces;

-- Workspace level - DBA access only
select id, short_name, display_name, source_identifier, max_session_length_sec,max_session_idle_sec
from apex_050000.wwv_flow_companies;

-- Instance level - minimum Apex 5.0.4 needed + APEX_ADMINISTRATOR_READ_ROLE
select name, value
from APEX_INSTANCE_PARAMETERS
where name in ('MAX_SESSION_LENGTH_SEC','MAX_SESSION_IDLE_SEC');

-- Instance level - DBA access only
select name, value
from apex_050000.WWV_FLOW_PLATFORM_PREFS
where name in ('MAX_SESSION_LENGTH_SEC', 'MAX_SESSION_IDLE_SEC');

-- Instance level alternative - DBA access only
select apex_050000.wwv_flow_platform.get_preference('MAX_SESSION_LENGTH_SEC') as MAX_SESSION_LENGTH_SEC
      ,apex_050000.wwv_flow_platform.get_preference('MAX_SESSION_IDLE_SEC') as MAX_SESSION_IDLE_SEC
from dual;

-- Workspace settings including Instance default overwrites - DBA access only
alter session set current_schema = APEX_050000;

set serveroutput on
declare
  v_ws wwv_flow_security.t_workspace;
  v_security_group_id number;
begin
  wwv_flow_security.g_security_group_id := 10; -- Internal
  v_security_group_id := wwv_flow_security.find_security_group_id (p_company => 'MYWORKSPACE');
  v_ws := wwv_flow_security.get_workspace(v_security_group_id);
  dbms_output.put_line('ws max kill timeout='|| v_ws.qos_max_session_kill_timeout );
  dbms_output.put_line('ws max session time in sec='|| v_ws.max_session_length_sec );
  dbms_output.put_line('ws max idle time in sec='|| v_ws.max_session_idle_sec );
end;
/

Please note that since Apex 5.0.4 we are able to read the instance settings from an official Apex view.

Therefore the following little script will give us the session idle time, regardless of where it is set.

select coalesce(
   ( -- Application level
    select maximum_session_idle_Seconds
    from apex_applications
    where application_id = v('APP_ID'))
   ,( -- Workspace level
     select maximum_session_idle_Seconds
     from apex_workspaces
     where workspace = v('WORKSPACE_ID'))
   ,(-- Instance level
     select to_number(value)
     from APEX_INSTANCE_PARAMETERS
     where name ='MAX_SESSION_IDLE_SEC')
     ) max_idle_time     
from dual;

Unfortunately, to read from this APEX_INSTANCE_PARAMETERS view we need the new APEX_ADMINISTRATOR_READ_ROLE. The role is automatically granted to DBAs, Apex admins and some other privileged database accounts during the installation of Apex 5.0.4.

If you don’t have it already, then you can grant it like this

grant APEX_ADMINISTRATOR_READ_ROLE to mySchema;
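To make this reusable, for example as the source of an application item that a countdown timer on the page could read, the query can be wrapped in a small function. This is just a sketch that wraps the query from above unchanged; the function name is my own.

create or replace function get_max_idle_seconds return number
is
  v_seconds number;
begin
  select coalesce(
           ( -- Application level
            select maximum_session_idle_seconds
            from apex_applications
            where application_id = v('APP_ID'))
          ,( -- Workspace level
            select maximum_session_idle_seconds
            from apex_workspaces
            where workspace = v('WORKSPACE_ID'))
          ,( -- Instance level
            select to_number(value)
            from apex_instance_parameters
            where name = 'MAX_SESSION_IDLE_SEC')
         )
    into v_seconds
  from dual;
  return v_seconds;
end get_max_idle_seconds;
/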

Apex and Jet – a fairy tale

In the old times, when it was still of some use to wish for the thing one wanted, there lived a King named Joel R. whose daughters were all handsome, but the youngest was so beautiful that the sun himself, who has seen so much, wondered each time he shone over her because of her beauty. The name of the little girl was Apex.

Near the royal castle built with bricks of Forms there was a great dark wood where birds were twittering and many squirrels were running up the trees. In the bark of some trees there were mysterious inscriptions from the foreign country of Java. And in the wood under an old js-tree was a font; and when the day was hot, the King’s daughter used to go forth into the wood and sit by the brink of the cool font. The font was simply awesome and if the time seemed long, she would take out a golden chart, and throw it up and catch it again, and this was her favourite pastime.

On her 12th Birthday Apex looked into a codemirror and saw Apex 5.1 always wearing beautiful purple boots. Now it happened one day that her boots were strapped so tight that the chart, instead of falling back into the maiden’s little hand which had sent it aloft, dropped to the ground near the edge of the well and rolled in. The king’s daughter followed it with her mobile-UI as it sank, but the well was deep, so deep that the bottom could not be seen. Then she began to weep, and she wept and wept as if she could never be comforted. And in the midst of her weeping she heard a voice saying to her: “What ails thee, king’s daughter Apex? Thy tears would melt a heart of stone.” And when she looked to see where the voice came from, there was nothing but a Toad stretching his thick ugly head out of the water. – “I weep because my golden chart has fallen into the font.” – “Never mind, do not weep,” answered the Toad, “I can help you; but what will you give me if I fetch up your chart again?” – “Whatever you like, dear toad,” said she, “any of my wizards, dynamic actions and interactive reports, or even the golden cloud that I wear.” – “Thy wizards, thy dynamic actions and interactive reports, and thy golden cloud are not for me,” answered the Toad, “but if thou wouldst love me, and have me for thy companion and play-fellow, and let me sit by thee at the Universal Theme, and eat from thy tables, and drink from thy views, and sleep in thy little pages, if thou wouldst promise all this, then would I dive below the water and fetch thee thy golden chart again.” – “Oh yes,” Apex answered, “I will promise it all, whatever you want, if you will only get me my chart again.” But she thought to herself: What nonsense he talks! As if he could do anything but sit in the water and croak with the other frogs, or could possibly be any one’s companion.

But the Toad, as soon as he heard her promise, drew his head under the font and sank down out of sight, but after a while he came to the surface again with the chart in his mouth, and he threw it on a nearby Peake. The King’s daughter was overjoyed to see her pretty plaything again, and she required it up and ran off with it. “Stop, stop!” cried the Toad, “take me up too. I cannot run as fast as you!” But it was of no use, Apex had no listener for him, and made haste home, and very soon forgot all about the poor Toad.

The next day, when the King’s daughter was sitting at table with the King and all the court, and eating from her golden plate, there came something up the marble stairs, and then there came a knockout at the door, and a voice crying: “Apex, Apex, let me in!” And she got up and ran to see who it could be, but when she opened the door, there was the Toad sitting outside. Then she shut the door hastily and went back to her server, feeling very uneasy. King Joel noticed how quickly her heart was beating, and said: “My child, what are you afraid of? Is there a giant Page Designer standing at the door ready to carry you away?” – “Oh no,” answered she, “no Page Designer, but a horrid Toad.” – “And what does the Toad want?” asked the King. “O dear father,” answered she, “when I was sitting by the font yesterday, and playing with my golden chart, it fell into the water, and while I was crying for the loss of it, the Toad came and got it again for me on condition I would let him be my companion, but I never thought that he could leave the application server and come after me; but now there he is outside the door, and he wants to come in to me.” And then they all heard him hammering the second time and crying:

“Youngest King’s daughter,
Open to me!
By the fonts water
What promised you me?
Youngest King’s daughter
Now open to me!”

“That which thou hast promised must thou perform,” said King Joel, “so go now and require him in.” So she went and opened the door, and the Toad hopped in. Then he stopped and cried: “Lift me up to install beside you.” But she delayed doing so until the King ordered her. When once the Toad was on the chair, he wanted to get on the table, and there he sat and said: “Now push your page a little nearer, so that we may eat together.” And so she did, but everybody might see how unwilling she was, and the Toad feasted heartily, but every jquery seemed to stick in her throat. “I have had enough now,” said the Toad at last, “and as I am tired, you must deploy me onto your server, and make ready your image folder, and we will lie down and go to sleep.” Then the King’s daughter began to weep, and was afraid of the cold Toad, that nothing would satisfy him but he must sleep in her pretty clean workspace. Now the King grew angry with her, saying: “That which thou hast promised in thy time of necessity, must thou now perform.” So she picked up the Toad with her finger and thumb, carried him upstairs and put him in a corner, and when she had lain down to sleep, he came creeping up, saying: “I am tired and want sleep as much as you; take me up, or I will tell your father.” Then she felt beside herself with rage, and picking him up, she threw him with all her strength against the browser, crying: “Now will you be quiet, you horrid Toad!”

But as he fell, he ceased to be a Toad, and became all at once a prince with beautiful kind shapes. And he told her his name was Jet and how the wicked witch of ADF had bound him by her spells, and how no one but she alone could have released him. Apex soon forgot about her old pal AnyChart and only had an UI for the young and beautiful Jet. And they two would go together to his father’s kingdom. And there came to the door an interactive grid, and behind the grid was standing faithful John Snyders, the servant of the young prince Jet. Now, faithful John had suffered such care and pain when his master was turned into a Toad, that he had been obliged to wear three iron libraries over his heart, to keep it from breaking with trouble and anxiety. When the grid started to take prince Jet to his kingdom, and faithful John had helped them both in, he got up behind, and was full of joy at his master’s deliverance.

And when they had gone a part of the way, the prince heard a sound at the back of the interactive grid, as if something had broken, and he turned round and cried:

“John, the other real data service must be breaking!”

“The ORDS does not break,
‘Tis the library round my heart
That, to lessen its ache,
When I grieved for your sake,
I bound round my heart.”

Again, and yet once again there was the same sound, and the prince thought it must be some ORDS breaking, but it was the breaking of the other library from faithful John’s heart, because he was now so relieved and happy.

The End


Other fairy tales to come:

Patrick Wolf and the seven little Shakeebs

Align topbar divs horizontally in Apex 5 UT

Using the 12 column grid in Apex 5, we can easily position regions (=divs) as we want.

Unfortunately this is not possible in the title bar where the breadcrumbs reside. There simply is no column grid in the breadcrumb bar. But sometimes it is useful to add some regions or DIVs there.

Here is an example of how two regions look when they are added to the breadcrumb bar.

topbar_position_vertical
default vertical alignment

They are always aligned one below the other. To use the available space we often want to place the divs beside each other.

A tiny little bit of css can change this alignment. Add this on page level to the css section.

.syn-app--topbarcontainer, div#R_TOPBAR {
  border-spacing: 2px 2px;
  border: 0px solid;
  display: -webkit-flex;
  display: -ms-flexbox;
  display: flex;
  flex-direction: row;
}

The main trick here is to use a flex display with direction row. This will try to align all child divs in a horizontal row.

More info about the flex property: http://www.w3schools.com/cssref/css3_pr_flex-direction.asp

The css class “syn-app--topbarcontainer” then needs to be applied to a parent region. This region should have the “Blank with Attributes” template (Apex 5.1).

Alternatively the region can use R_TOPBAR as a region alias.

The two regions A and B are subregions. The subregions can use any template; the example uses the standard template with reduced settings.
 

topbar_position_PD02
region settings

And this is how it looks in the application:

topbar_position_horizontal
horizontal alignment

This works in Apex 5.0 and also in Apex 5.1 (early adopter).

Be aware that this is not a responsive layout. When the browser window gets smaller, the regions will not reposition below each other.

Integrate Oracle JET into Apex 5.0

Introduction

It has been announced that the new chart library in Apex 5.1 will be Oracle JET. It is already possible to use Oracle JET in Apex 5.0. How to integrate it is the scope of this article.

Rationale

I don’t expect most projects to switch to Apex 5.1 very soon. Many have just made the transition to Apex 5 and the Universal Theme, or are about to do so, and that was usually quite some work. It is difficult to argue for another migration to Apex 5.1 right away, even if some of the reasons (Interactive Grids!) are extremely compelling. However, since Oracle JET charts will be one of the major areas where we can expect some adaptation when migrating, it makes sense to start working with that technology as soon as possible. Using Oracle JET charts in Apex 5.0 is a very good preparation and might smooth a future upgrade path.

In the scope of the Apex Dashboard Competition I tried to integrate Oracle JET into Apex 5.0. There are quite a few traps and surprises when doing so. I plan to write a series of blog posts that all deal with the combination of Apex 5 and Oracle JET.

This article will just concentrate on how to install Oracle Jet (OJ) and make it available in Apex 5.0.

License considerations

Oracle JET has been open source for a few weeks now.
See : http://www.oracle.com/technetwork/developer-tools/jet/downloads/jetlicense-2905156.html

“Anyone and Everyone can use Oracle JET. It’s an Open Source project sponsored by Oracle. It is distributed under the Universal Permissive License(UPL).”

Essentially it means we can use and distribute Oracle JET as long as the appropriate license document is included in the distribution. The license document is also part of the oraclejet.zip file that we need to download.

Download sources

To install Oracle JET, we first need to download the newest version. Since we are not creating an Oracle JET-only application, most of the “Getting Started” docs and templates are not really relevant. The oraclejet.zip file hides behind the “Oracle JavaScript Extension Toolkit : Base Distribution” link on the download page.

Download page: http://www.oracle.com/technetwork/developer-tools/jet/downloads/index.html

General documentation links:

Installation

Preparation

Prerequisites: the oraclejet.zip file and an apex 5.0 database.

I tested the whole installation process in the vanilla oracle developer virtual machine. This VM includes an Oracle EE database 12.1.0.2, Apex 5.0, ORDS 3.0.x and SQL Developer 4.1.

http://www.oracle.com/technetwork/database/enterprise-edition/databaseappdev-vm-161299.html

Remember, it is meant for evaluation purposes, not for production development!

Server Installation Process

Unfortunately there is no CDN link that we could simply include in our Apex page. Instead we need to reference the needed JS and CSS files ourselves.

To install Oracle JET we need access to the Apex web server, where the image files are located. Unzip the downloaded oraclejet.zip file into a suitable folder there. I suggest using the normal apex/images/libraries folder.

e.g.

/apex/images/libraries/oraclejet

apex/images is typically mapped to the image path /i/. It can later be referenced with the substitution variable #IMAGE_PREFIX#. To access an Oracle JET file, we could include it like this: #IMAGE_PREFIX#libraries/oraclejet/libs/oj/v2.0.0/ojs.js .

Unzipping the file into some accessible folder is all we need for the server installation part.

OJ configuration and integration into Universal Theme (UT)

The OJ documentation states that the require.js framework is mandatory if data visualization components are used. Since the charts are the most interesting part for Apex, we are forced to use require.js. Most of the installation problems I encountered had to do with require.js. More details are in the “About requirejs” section. I hope that this dependency will be gone in the distribution for Apex 5.1.

About requirejs

Require.js is a tool that loads needed resources dynamically. Typically those resources are other JavaScript libraries. If multiple parts of an application load the same library, require.js will load it only once, including its dependencies.

The Oracle JET documentation tells us how to install requirejs.

The loader that makes it easier to manage library references and is designed to improve the speed and quality of your code. Oracle JET uses RequireJS by default for the Oracle JET QuickStart Template and is required if you plan to use Oracle JET’s internationalization, data visualization components, or the oj.OAuth plugin in your application.

To use RequireJS to manage references:

  1. Download Oracle JET as described in Download the Oracle JET Zip File.
  2. Copy js/libs/oj/version/main-template.js to the js folder.
  3. In the js folder, rename main-template.js to main.js.
  4. Add the following script reference to your index.html file:
  5. Update main.js as needed to reference Oracle JET modules or your own scripts.

    For a list of Oracle JET modules and additional details about using RequireJS in your Oracle JET application, see Using RequireJS for Modular Development.

For more information about RequireJS, see http://requirejs.org.

OK, so let’s do this. Do not copy the code for main.js from the documentation. That code is outdated and does not have all the correct paths. The reason is simple: some libraries, like OJ itself, now have a newer version, and the version is part of the path. If you copy the main-template.js as stated in the documentation, you should be safe. This is what I used for the main.js file. The paths are relative to the location of the main.js file itself, so in this case setting the baseUrl is not needed.

requirejs.config({
  // baseUrl: '#IMAGE_PREFIX#libraries/oraclejet/js',
  // Path mappings for the logical module names
  paths: {
    'knockout': 'libs/knockout/knockout-3.4.0',
    'jquery': 'libs/jquery/jquery-2.1.3.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.11.4.min',
    'ojs': 'libs/oj/v2.0.0/min',
    'ojL10n': 'libs/oj/v2.0.0/ojL10n',
    'ojtranslations': 'libs/oj/v2.0.0/resources',
    'signals': 'libs/js-signals/signals.min',
    'text': 'libs/require/text',
    'promise': 'libs/es6-promise/promise-1.0.0.min',
    'hammerjs': 'libs/hammer/hammer-2.0.4.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min',
  },
  // Shim configurations for modules that do not expose AMD
  shim: {
    'jquery': {
      exports: ['jQuery', '$']
    }
  },

// This section configures the i18n plugin. It is merging the Oracle JET built-in translation
// resources with a custom translation file.
// Any resource file added, must be placed under a directory named "nls". You can use a path mapping or you can define
// a path that is relative to the location of this main.js file.
    config: {
        ojL10n: {
            merge: {
                //'ojtranslations/nls/ojtranslations': 'resources/nls/myTranslations'
            }
        }
    }
});

/**
 * A top-level require call executed by the Application.
 * Although 'ojcore' and 'knockout' would be loaded in any case (they are specified as dependencies
 * by the modules themselves), we are listing them explicitly to get the references to the 'oj' and 'ko'
 * objects in the callback.
 *
 * For a listing of which JET component modules are required for each component, see the specific component
 * demo pages in the JET cookbook.
 */

require(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojtoolbar','ojs/ojmenu','ojs/ojpictochart'], // add additional JET component modules as needed
  function(oj, ko, $) // this callback gets executed when all required modules are loaded
  {
      // add any startup code that you want here
  }
);

This file is included in our Apex page via a script tag.

<script data-main="#IMAGE_PREFIX#libraries/oraclejet/js/main" 
src="#IMAGE_PREFIX#libraries/oraclejet/js/libs/require/require.js" 
></script>

Since the script tag needs a special data-main attribute, we cannot use the normal Apex 5 mechanism of adding the JS file name to the appropriate page attribute section.

Instead we need to modify the page template. I copied the minimal one.
The result looks like this.

...
<head>
  <meta charset="utf-8">  
  <title>#TITLE#</title>
  #APEX_CSS#
  #THEME_CSS#
  #TEMPLATE_CSS#
  #THEME_STYLE_CSS#
  #APPLICATION_CSS#
  <!-- Oracle JET CSS files -->
  <link rel="stylesheet" href="#IMAGE_PREFIX#libraries/oraclejet/css/libs/oj/v2.0.0/alta/oj-alta-min.css" type="text/css"/>
  <script data-main="#IMAGE_PREFIX#libraries/oraclejet/js/main" src="#IMAGE_PREFIX#libraries/oraclejet/js/libs/require/require.js" ></script>
    
  #PAGE_CSS#  
  #FAVICONS#
  #HEAD#
  
  <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no"/>
</head>
...

There is one more problem. Require.js does not like it if a library with the same name was previously loaded without using require.js. (There are some notable exceptions that depend on the Asynchronous Module Definition (AMD) API.)

Apex 5.0 already loads the Hammer.js library in a different version than the one that is included with the base Oracle JET distribution. Hammer.js adds support for touch gestures, so it is especially useful for mobile applications.

Because Apex has already loaded Hammer.js, require.js throws an error message. The only way I found to make it work was to remove Hammer.js from the theme. Unfortunately this can only be done by unsubscribing the theme. After unsubscribing, we can edit the theme and remove Hammer.js from the JavaScript Files section.

Thanks to Roel Hartmann for the tip!

In any case you should check that the libraries were loaded successfully.

In Firefox, open the console (SHIFT+CTRL+K) and enable the network log.

apex5_log_network

If all files loaded successfully then we are up and running.

It is time to try our first chart from the cookbook.

Using a static Cookbook Demo

So let’s add a pictoChart to the page.

Link: http://www.oracle.com/webfolder/technetwork/jet/uiComponents-pictoChart-default.html

First we create a static region and add the html from the cookbook.

<div  id="picto-container">
  <div id='pc1' data-bind="ojComponent:{
      component: 'ojPictoChart',
      items: pictoChartItems,
      animationOnDisplay: 'auto',
      columnCount: 5
    }"
    style="vertical-align:middle; margin-right:15px">
  </div>
  <div style="display:inline-block; vertical-align:middle; font-weight:bold">
    <span style="color:#333333; font-size:1.1em">7 out of 10 college students</span><br>
    <span style="color:#ed6647; font-size:1.3em">have sleep problems.</span>
  </div>
</div>

Then we add some javascript functions to the page itself (global functions section).

In the JavaScript section we need to repeat the require call including the configuration part, because the libraries are loaded asynchronously. On each page where we want to use JS files from Oracle JET we do a require call immediately before the relevant section. This time the baseUrl setting is helpful.

To enable pictocharts we need to require the ojs/ojpictochart lib.

require.config({
  baseUrl: '#IMAGE_PREFIX#libraries/oraclejet/js',
  paths: {
    'knockout': 'libs/knockout/knockout-3.4.0',
    'jquery': 'libs/jquery/jquery-2.1.3.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.11.4.min',
    'ojs': 'libs/oj/v2.0.0/min',
    'ojL10n': 'libs/oj/v2.0.0/ojL10n',
    'ojtranslations': 'libs/oj/v2.0.0/resources',
    'signals': 'libs/js-signals/signals.min',
    'text': 'libs/require/text',
    'promise': 'libs/es6-promise/promise-1.0.0.min',
    'hammerjs': 'libs/hammer/hammer-2.0.4.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min'
  },
   // waitSeconds: 1,
  // Shim configurations for modules that do not expose AMD
  shim: {
    'jquery': {
      exports: ['jQuery', '$']
    }
  }
});      

require(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojknockout', 'ojs/ojpictochart'], function (oj, ko, $) {
    function PictoChartModel() {
      this.pictoChartItems = ko.observableArray([
        {name: 'Have  Sleep Problems', shape: 'human', count:7, color: '#ed6647'},
        {name: 'Sleep Well', shape: 'human', count: 3}
      ]);
    }
    var pictoChartModel = new PictoChartModel();

    $(document).ready(
      function() {
        ko.applyBindings(pictoChartModel, document.getElementById('picto-container'));
      }
    );
  });

Voilà, we are done!

This is how the result looks.
Screen Shot 2016-04-07 at 01.16.45

How to enhance the pictoCharts a bit, including adding other shapes, will be explained in another blog post.

Here are just two more screenshots to give you some ideas.

Screen Shot 2016-04-07 at 01.17.16Screen Shot 2016-04-07 at 00.28.13

Sequence and Audit columns with Apex 5 and 12c

Introduction

In the good ol’ times database triggers were used to populate database columns like primary identifiers or audit columns.

Here is an example of such a trigger.
For the sake of simplicity I concentrate on the insert trigger.

create table swe_demo (id number primary key
                      ,col1 number
                      ,col2 varchar2(30)
                      ,inserted_date date not null
                      ,inserted_from varchar2(30) not null);

create sequence swe_demo_seq cache 10000;

create or replace trigger swe_demo_bri_trg
  BEFORE INSERT ON swe_demo
  FOR EACH ROW
BEGIN
   -- record needs always a key
   IF :new.id IS NULL
   THEN
      :new.id := swe_demo_seq.NEXTVAL;
   END IF;

   -- timestamp of the last changes
   :new.inserted_date := SYSDATE;
   :new.inserted_from := COALESCE(v('APP_USER'), user);
END swe_demo_bri_trg;
/

What does it do?

  • The ID column is filled with a sequence value.
  • The inserted_date column is filled with sysdate.
  • The inserted_from column is filled with the current Apex user, or if that is not set, with the connected database user.

It is time to revisit that functionality and see which new and maybe better concepts are available when the newest technologies are in place.

This was tested against Apex 5.0.1 and Oracle 12.1.0.1.
It should also work in Apex 5.0.3 and Oracle 12.1.0.2.

Solution

If you are just looking for the final code – here is my “perfect” solution.

create sequence swe_demo_seq cache 10000;
create table swe_demo
  (id number default swe_demo_seq.nextval primary key
   ,col1 number
   ,col2 varchar2(30)
   ,inserted_date date default sysdate not null
   ,inserted_from varchar2(30) default coalesce(
          sys_context('APEX$SESSION','app_user')
         ,regexp_substr(sys_context('userenv','client_identifier'),'^[^:]*')
         ,sys_context('userenv','session_user')
         )
      not null);

The inserted_from value is now more accurate.

You might have noticed that the need for an insert trigger is gone.
This increases performance considerably.

To insert 1 million rows the old version needed more than a minute. This can still be considered fast.
Elapsed: 00:01:12.904

The new version needs less than 6 seconds. This is blazingly fast!
Elapsed: 00:00:05.598

I admit that the test was specifically set up to show a major difference. Feel free to set up your own more realistic test cases.

Still, 12 times faster is really nice.

The test code to get this result can be seen in chapter TC4.

Considerations and test cases

How to find out about the current user

This line

COALESCE(v('APP_USER'), user)

from the example is really bad.

v('APP_USER') is a bit slow, although not as slow as I suspected, and it will not work on databases without an Apex installation. "v" is just a public synonym for an Apex function that looks up session-specific values in the Apex repository. Surprisingly, the "user" function is a much bigger drag. See TC1 below for performance comparisons.

In older Apex versions a good way to find out who was logged in was to read the client_identifier from the userenv context.

sys_context('userenv','client_identifier')

The structure of this value changed between the different Apex versions. In Apex 5 it now holds the APP_USER and the session ID, e.g. SVEN:0123456789.

A context is an extremely efficient way to read global session or application data.

Apex 5 introduced a new context with the namespace “APEX$SESSION”. This context is populated by the apex engine (APEX_050000.WWV_FLOW_SESSION_CONTEXT) and holds app_user, app_session and workspace_id. There is a good chance that future versions will add more parameters to this namespace.

See also: http://jeffkemponoracle.com/2015/11/24/apex-5-application-context/
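To see what the context currently holds for your session, a quick query from inside an Apex session (for example in a PL/SQL dynamic content region) is enough:

select sys_context('APEX$SESSION','APP_USER')     as app_user
      ,sys_context('APEX$SESSION','APP_SESSION')  as app_session
      ,sys_context('APEX$SESSION','WORKSPACE_ID') as workspace_id
from dual;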

To read the logged-in user we fetch SYS_CONTEXT('APEX$SESSION','APP_USER').
To get the user when something non-Apex is running, we should read the client_identifier. But there is more to consider.

Database links

In many Apex projects we do not insert into the local Apex database, but into some remote project schema. This is often the case when I just add some reporting or monitoring solution in Apex, without interfering with the real project database too much. So let’s compare how the different contexts behave locally vs. remotely (over a db link).

12c introduced some changes to the userenv namespace. In particular, “current_user” is now deprecated.
Here is a list of all parameters that belong to the userenv namespace:
https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions165.htm#g1513460

The interesting ones are:

  • client_identifier
    Returns an identifier that is set by the application through the DBMS_SESSION.SET_IDENTIFIER procedure, the OCI attribute OCI_ATTR_CLIENT_IDENTIFIER, or the Java class Oracle.jdbc.OracleConnection.setClientIdentifier. This attribute is used by various database components to identify lightweight application users who authenticate as the same database user.
  • current_schema
    Name of the default schema being used in the current session. This value can be changed during the session with an ALTER SESSION SET CURRENT_SCHEMA statement.
  • current_user
    Deprecated – Use the SESSION_USER parameter instead.
  • session_user
    For enterprise users, returns the schema. For other users, returns the database user name by which the current user is authenticated. This value remains the same throughout the duration of the session.
  • dblink_info
    Returns the source of a database link session. Specifically, it returns a string of the form:SOURCE_GLOBAL_NAME=dblink_src_global_name, DBLINK_NAME=dblink_name, SOURCE_AUDIT_SESSIONID=dblink_src_audit_sessionid
  • authenticated_identity
    Returns the identity used in authentication. In the list that follows, the type of user is followed by the value returned: …

For testing purposes I just made a loopback link to the same database but to a different schema. The test script can be found in section TC2.

The following tests were done (the numbers match the Id column in the result below):
1) An insert from an Apex page into a view that includes a db link to REMOTE_B
2) An insert from an Apex page directly into a local view (LOCAL_A) which is mapped to demo.swe_demo
5) A direct database insert into the view with the db link (remote insert)
6) A direct database insert into the view without the db link
7) An insert from a DBA “IAMDBA” using alter session set current_schema

This was the result

Id Some Text Inserted Date Inserted From App User Apex Session Client Identifier Current Schema Current User User Session User Authenticated Identity Dblink Info
1 apex remote insert 22-FEB-16 SVEN SVEN:4628609689353 DEMO DEMO REMOTE_B REMOTE_B REMOTE_B SOURCE_GLOBAL_NAME=DB_A.11623201
2 apex local insert 22-FEB-16 SVEN SVEN SVEN SVEN:4628609689353 DEMO DEMO APEX_PUBLIC_USER APEX_PUBLIC_USER APEX_PUBLIC_USER
5 direct remote insert 22-FEB-16 REMOTE_B DEMO DEMO REMOTE_B REMOTE_B REMOTE_B
6 direct local insert 22-FEB-16 LOCAL_A DEMO DEMO LOCAL_A LOCAL_A LOCAL_A
7 direct insert current_schema 22-FEB-16 IAMDBA DEMO DEMO IAMDBA IAMDBA IAMDBA

I did some more tests, for example using definer and invoker rights procedures, but they didn’t reveal any important differences.

The findings

APEX$SESSION is not populated via a db link, but CLIENT_IDENTIFIER is.
Apart from those two, SESSION_USER has the best information and is always populated.

Therefore, in an Apex-related environment, the best expression to find out who inserted a record is:

coalesce(
   sys_context('APEX$SESSION','app_user')
   ,regexp_substr(sys_context('userenv','client_identifier'),'^[^:]*')
   ,sys_context('userenv','session_user')
   )

One should also notice that current_user now returns the same values as current_schema. This is contrary to the documentation in older database versions. Oracle’s suggestion to use session_user instead is certainly correct, but you should be aware that a different user might then be stored than before.

Authenticated_Identity promises to be interesting in very specific security environments.

12c identity columns and default values

With 12c we can consider using an identity column as our primary key. It is also now possible to use a sequence as the default value for a column.

The base syntax for an identity column is

id number generated as identity

The base syntax for a default value column is

id number default swe_demo_seq.nextval

There are several differences between the two.
An identity column essentially prevents values from being inserted into this column, which ensures that the sequence is always used to populate the data. The default value column uses the sequence only if NULL was inserted. This is essentially the same functionality as the trigger provided.

It is possible to set up an identity column to behave almost the same as a default value column.

The extended syntax for such an identity column is

id number generated by default on null as identity (cache 10000)

I slightly prefer the default value column syntax for two reasons:

  1. The default value syntax is also used for the two audit columns, so it is consistent to use the same mechanism.
  2. Default value columns allow more control over the sequence object (see the sketch below).
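As a small illustration of point 2: the sequence behind a default value column is an ordinary, named sequence that we can tune or reuse independently of the table. The staging table in the second statement is purely hypothetical.

-- the sequence is a normal object under our control
alter sequence swe_demo_seq cache 50000;

-- hypothetical example: reuse the same sequence for the keys of a staging table
insert into swe_demo_stage (id, col1)
values (swe_demo_seq.nextval, 42);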

I plan to write an extra blog post to discuss the differences in more detail.

TC3 shows a performance comparison between trigger logic, default value and identity column. Default values and identity columns are equally fast, but the trigger loses by a large margin!

Various testcases

In most cases the results should be consistent in other environments too. However, always test it yourself. Here are the scripts that I used. They might help to set up your own test cases.

TC1) Compare performance of different expressions to read the client identifier

The following PL/SQL block was used, with ##EXPRESSION## replaced by the expression under test:

set time on
set timing on
declare
  v_result varchar2(100);
begin
  for i in 1..1000000 loop
  v_result := ##EXPRESSION##;
  end loop;
end;
/

This is the output for several tested expressions.
Note that some of the expressions return slightly different results.

I always did three runs to see whether the results were consistent and copied one of the elapsed times below.

sys_context('userenv','client_identifier')

00:00:02.436
This is the baseline. It should not be possible to go faster.

substr(sys_context('userenv','client_identifier'),1
            ,instr(sys_context('userenv','client_identifier'),':')-1)

00:00:04.288
Finds the colon and returns everything before it. But it returns NULL if there is no colon in the client_identifier.

substr(sys_context('userenv','client_identifier'),1, 
   coalesce(nullif(instr(sys_context('userenv','client_identifier'),':'),0)-1,
 length(sys_context('userenv','client_identifier'))))

00:00:06.311
A little complex to read, isn’t it…

regexp_substr(sys_context('userenv','client_identifier'),'^[^:]*')	

00:00:03.459
This is the winner!

translate(sys_context('userenv','client_identifier'),'A:0123456789','A')

00:00:05.663
Surprisingly slow

It surprised me that the REGEXP_SUBSTR expression performed best. Previous comparisons between regular expressions and substr/instr combinations always went in favor of substr. Either the internal code was optimized by Oracle, or, more likely, this specific task (a simple search from the beginning of the string) is well suited for a fast regexp search.

In general all expressions were fast. An interesting side note is that REGEXP_SUBSTR and TRANSLATE were a tiny bit slower for the second identifier, while the other expressions performed equally fast.

Some other expressions:

user 

Elapsed: 00:00:20.652
Really really slow!

sys_context('APEX$SESSION','app_user')

00:00:01.549
Blazingly fast!

v('APP_USER')

00:00:05.646
Not as slow as I thought!

TC2) Analyze the values of different contexts with regard to DB links

This script can be run as DBA to set up a demo user.
Make sure you do not already have a schema named “demo” in your system.


create user demo identified by demo;
grant unlimited tablespace to demo;

drop table demo.swe_demo ;
drop sequence demo.swe_demo_seq;

create table demo.swe_demo 
                      (id number primary key
                      ,some_text varchar2(30)
                      ,inserted_date           date not null
                      ,inserted_from           varchar2(30) not null
                      ,c_app_user                varchar2(30)
                      ,c_apex_session            varchar2(30)
                      ,c_client_identifier       varchar2(100)
                      ,c_current_schema          varchar2(30)
                      ,c_current_user            varchar2(30)
                      ,c_user                    varchar2(30)
                      ,c_session_user            varchar2(30)
                      ,c_authenticated_identity  varchar2(100)
                      ,c_dblink_info             varchar2(100)
                      ,c_external_name           varchar2(30)
                      );

create sequence demo.swe_demo_seq cache 10000;


create or replace trigger demo.swe_demo_bri_trg
  BEFORE INSERT ON demo.swe_demo
  FOR EACH ROW
BEGIN
   -- record needs a key
   IF :new.id IS NULL
   THEN
      :new.id := swe_demo_seq.NEXTVAL;
   END IF;

   -- timestamp of last changes 
   :new.inserted_date := SYSDATE;
   :new.c_app_user               := v('APP_USER');
   :new.c_apex_session           := sys_context('APEX$SESSION','app_user');
   :new.c_client_identifier      := sys_context('userenv','client_identifier');
   :new.c_current_schema         := sys_context('userenv','current_schema');
   :new.c_current_user           := sys_context('userenv','current_user');
   :new.c_user                   := user;
   :new.c_authenticated_identity := sys_context('userenv','authenticated_identity');
   :new.c_session_user           := sys_context('userenv','session_user');
   :new.c_dblink_info            := sys_context('userenv','dblink_info');
   :new.c_external_name          := sys_context('userenv','EXTERNAL_NAME');
   
   -- recommended:
   :new.inserted_from        := coalesce(sys_context('APEX$SESSION','app_user')
                                     ,regexp_substr(sys_context('userenv','client_identifier'),'^[^:]*')
                                     ,sys_context('userenv','session_user')
                                     );
   
END swe_demo_bri_trg;
/

show errors

grant select, insert, update, delete on demo.swe_demo to SchemaB;

TC3) Performance comparison for populating PK columns

--------------------------------------------------------------------------------
-- swe 12c demo identity columns
--------------------------------------------------------------------------------
set time on
set timing on

-- old logic using trigger
drop table swe_demo ;
drop sequence swe_demo_seq;
create table swe_demo (id number primary key, col1 number, col2 varchar2(30));
create sequence swe_demo_seq cache 10000;

create or replace trigger swe_demo_bri_trg
  BEFORE INSERT ON swe_demo
  FOR EACH ROW
BEGIN
   -- record needs a key
   IF :new.id IS NULL
   THEN
      :new.id := swe_demo_seq.NEXTVAL;
   END IF;

END swe_demo_bri_trg;
/

-- Performance test to insert 1 mill records
insert into swe_demo (col1)
select level from dual connect by level <= 1000000;
   
-- result
1,000,000 rows inserted.
Elapsed: 00:00:36.854

-- setup 12c logic using default value sequences
drop table swe_demo ;
drop sequence swe_demo_seq;
create sequence swe_demo_seq cache 10000;
create table swe_demo (id number default swe_demo_seq.nextval primary key, col1 number, col2 varchar2(30));


-- performance test to insert 1 mill records
insert into swe_demo (col1)
select level from dual connect by level <= 1000000;

-- result
1,000,000 rows inserted.
Elapsed: 00:00:04.068


-- 12c logic using identity column 
drop table swe_demo ;
drop sequence swe_demo_seq;
create table swe_demo (id number generated as identity primary key, col1 number, col2 varchar2(30));

-- Performance test to insert 1 mill records
insert into swe_demo (col1)
select level from dual connect by level <= 1000000;

--result 
1,000,000 rows inserted.
Elapsed: 00:00:08.626

There is a performance difference compared to the default value solution. This is purely based upon the sequence cache: the identity column was created with the default cache size of 20.
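To verify the cache size of the automatically created identity sequence, the data dictionary can be queried; a minimal sketch:

-- the system generated identity sequences are named ISEQ$$_<object id>
select sequence_name, cache_size
from user_sequences
where sequence_name like 'ISEQ$$%';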

-- Try to manually insert a row into the identity column
insert into swe_demo (id, col1)
values (null, -1);

SQL Error: ORA-32795: cannot insert into a generated always identity column
32795.0000 - "cannot insert into a generated always identity column"
*Cause:    An attempt was made to insert a value into an identity column
           created with GENERATED ALWAYS keywords.
*Action:   A generated always identity column cannot be directly inserted.
           Instead, the associated sequence generator must provide the value.
-- use non default settings for the identity column
-- 12c logic using identity column 
drop table swe_demo purge;
create table swe_demo (id number generated by default on null as identity (cache 10000) primary key , col1 number, col2 varchar2(30));

-- Performance test to insert 1 mill records
insert into swe_demo (col1)
select level from dual connect by level <= 1000000;

--result 
1,000,000 rows inserted.
Elapsed: 00:00:03.763    

-- test insert
insert into swe_demo (id, col1)
values (null, -1);

1 row inserted.

insert into swe_demo (id, col1)
values (-1, -1);

1 row inserted.

It seems as if this is even a tiny bit faster than the default value solution. The test results were heavily influenced by other network and database activities; both execution times are very close. The top score, however, was achieved with an identity column.

TC4) Extend performance test TC3 with audit columns

--------------------------------------------------------------------------------
-- Adding audit columns to the previous example
-- Final comparison
--------------------------------------------------------------------------------
-- old logic 
drop table swe_demo purge;
drop sequence swe_demo_seq ;

create sequence swe_demo_seq cache 10000;
create table swe_demo (id number primary key
                      ,col1 number
                      ,col2 varchar2(30)
                      ,inserted_date date not null
                      ,inserted_from varchar2(30) not null);

create or replace trigger swe_demo_bri_trg
  BEFORE INSERT ON swe_demo
  FOR EACH ROW
BEGIN
   -- record needs always a key
   IF :new.id IS NULL
   THEN
      :new.id := swe_demo_seq.NEXTVAL;
   END IF;

   -- timestamp of the last changes
   :new.inserted_date := SYSDATE;
   :new.inserted_from := COALESCE(v('APP_USER'), user);
END swe_demo_bri_trg;
/

-- Performance test to insert 1 mill rows
insert into swe_demo (col1)
select level from dual connect by level <= 1000000;

1,000,000 rows inserted.
Elapsed: 00:01:12.904

-- new logic 
drop table swe_demo purge;
drop sequence swe_demo_seq;
create sequence swe_demo_seq cache 10000;
create table swe_demo
  (id number default swe_demo_seq.nextval primary key
   ,col1 number
   ,col2 varchar2(30)
   ,inserted_date date default sysdate not null
   ,inserted_from varchar2(30) default coalesce(
          sys_context('APEX$SESSION','app_user')
         , regexp_substr(sys_context('userenv','client_identifier'),'^[^:]*')
         ,sys_context('userenv','session_user')
         )
      not null);

-- Performance test to insert 1 mill rows
insert into swe_demo (col1)
select level from dual connect by level <= 1000000;

1,000,000 rows inserted.
Elapsed: 00:00:05.598

Side notes

EBR considerations

A trigger is an editionable object. A default column value is not editionable. So if the default expression needs to change, then this will be done in all editions at the same time. With a trigger we are able to choose.

Possible issues

The db_link_info context did not return what the documentation said.
This might be an issue specific to the oracle db version (12.1.0.1).

Security considerations

Consider whether you want to trust the client_identifier. Any session can simply set this value and thereby identify itself as someone else. The APEX$SESSION context cannot be manipulated so easily.
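For illustration: the client identifier is set via DBMS_SESSION.SET_IDENTIFIER (see the userenv parameter list above), so any database session could run something like the following and thereby pose as an Apex user. The value is of course made up.

begin
  dbms_session.set_identifier('SOMEONEELSE:1234567890');
end;
/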

Fragments

When using identity columns, if you drop the table, the automatically generated sequence (ISEQ$…) will stay. This is needed in case the table is flashed back from the recycle bin. Therefore I always try to remember to purge the table.

drop table swe_demo purge;

Cleanup

Cleanup code for most of the generated objects:

drop user demo cascade;
drop table swe_demo purge;
drop sequence swe_demo_seq;