Apex 5.1 session cloning

Introduction

With Apex 5.1 we got a nifty new feature: it is now possible to clone an APEX session, so that we can have two (or more) independent APEX sessions running in the same browser.

It took me only two hours to implement this from scratch, so it is really easy to do.

Joel Kallman describes how to do it in this blog post: http://joelkallman.blogspot.de/2016/07/apex-session-isolation-across-multiple.html

There are a few additional tweaks I would like to mention here.

Step 1) Enable session cloning on instance level

Log on as SYS (or any user that has the APEX_ADMINISTRATOR_ROLE).

I first granted my normal DBA account the APEX_ADMINISTRATOR_ROLE

grant apex_administrator_role to myDBA;

and then ran as that DBA:

begin
apex_instance_admin.set_parameter(
p_parameter => 'CLONE_SESSION_ENABLED',
p_value     => 'Y');
end;
/
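
To verify the setting we can read the parameter back (a small check; I assume the documented apex_instance_admin.get_parameter function is available in your version):

select apex_instance_admin.get_parameter('CLONE_SESSION_ENABLED') as clone_session_enabled
from dual;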


If you are in a CDB/PDB environment with Apex installed in the PDB (recommended), then make sure to run this on the matching PDB (especially when working with SYS).

e.g.

alter session set container = PDBAPEX

Joel's article also explains how to enable this for a single workspace. But I got an error message when trying to do so.
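
The workspace-level call from Joel's post presumably looks roughly like this (a sketch; the exact parameter names of apex_instance_admin.set_workspace_parameter are my assumption):

begin
  apex_instance_admin.set_workspace_parameter(
    p_workspace => 'MYWORKSPACE',  -- hypothetical workspace name
    p_parameter => 'CLONE_SESSION_ENABLED',
    p_value     => 'Y');
end;
/

On my instance this attempt only produced: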

"ORA-20987: APEX - Instance parameter not found - Contact your application administrator."

Step 2) Add a new navigation bar entry

Of course you are free to add this functionality wherever you want. But you will need a link that the user clicks on to get a new session. My preference was the navigation bar.

The URL for the link is simple. Just add APEX_CLONE_SESSION as the request parameter.

f?p=&APP_ID.:&APP_PAGE_ID.:&APP_SESSION.:APEX_CLONE_SESSION
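
With substituted values such a link could look like f?p=100:1:9876543212345:APEX_CLONE_SESSION (application 100, page 1 and the session ID are of course just made-up examples).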

Create a navigation bar list entry with this link:

  • Go to “Shared Components/Navigation Bar Lists/navigation bar” (the name of my list).
  • Choose any icon (fa-clone) and label text (“Session klonen”, German for “clone session”) that you want.
  • The target page needs to be “&APP_PAGE_ID.”. This will use the current page and add it into the link.
  • And most importantly, the request needs to be APEX_CLONE_SESSION.

[Image: apex51_clone_session_navbar_entry]

The entry is now ready and working. However, it will replace the session in the current browser tab. A right-click and “Open in new tab” gives us two tabs with different APEX sessions.

If that is not enough, the next step helps.

Step 3) Open the link in a new page

The goal is to add a TARGET attribute to our link, so that a new tab is always opened when we clone the session. Unfortunately the navbar template has no option to include link attributes. We could do a kind of injection trick with the URL, but I prefer a cleaner way.

To do so we need to modify three small parts.

First copy the navigation bar template as a new template “Navbar with Attributes”.

There we add the user-defined property #A03# to our link. This needs to be done in all areas where we find an <a..> tag, at least for the “list template current” and “list template non-current”. I also added it for the sub list entries, even though my navbar doesn’t use sub lists.

[Image: apex51_clonesession_template]

Don’t forget to add a description “Link Attributes” for the newly added attribute value in the “attribute description” section (scroll down a bit to see it).

Then enhance our existing navbar entry with target="_blank".

[Image: apex51_clonesession_linkattributes]

There is a tiny difference between using target="_blank" and target="someothername".

target="_blank" will always create a new tab.

target="someothername" will open a new tab on the first click. Consecutive clicks however will reuse this same tab. This can be useful if you want to prevent your users from constantly cloning session after session after session.


And finally make sure that our application uses the new template for showing the navbar list:

Shared Components/User Interface Attributes/Desktop/Navigation bar/List Template

[Image: apex51_clonesession_userinterface]


Result

The navigation bar could then look similar to this:

[Image: apex51_clonesession_navbar_de]

Clicking on the “clone session” link will open a new tab in the same browser. In the URL we can see that a different session ID was created.

The new session will have all the same item values (page items, application items, etc.) as the previous session had. But from this point on, the two sessions will start to differ.

Both sessions use the same browser cookie. For that reason, if one session logs out, the other session will be logged out too.


How to upgrade from Apex 5.0 to 5.1

Preparation

As preparation, I recommend the following steps and checks.

  • Make sure you have a working database backup and, just in case, a DBA at hand who would be able to restore your database, tablespace or schemas.
  • Workspace clean-up – delete applications that are not needed anymore, especially copies of other applications that were created to test a specific feature or do a proof of concept. Be careful about whether you also want to run any included deinstallation scripts. For an APEX upgrade in general you don’t want to delete the connected tables.
  • Software Download – Apex 5.1 can be downloaded from the OTN download page: www.oracle.com/technetwork/developer-tools/apex/downloads/index.html
  • Backup Application – Export the application including all private saved reports (a command line sketch follows after this list).
  • Export supporting objects like Theme, Static Application files.
  • Backup Image folder – if your image folder is /i/ then make a copy and rename the copy to /i50/.
  • Check the Apex 5.1 known issues page: http://www.oracle.com/technetwork/developer-tools/apex/downloads/apex-51-known-issues-3408261.html
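
For the application export, the APEXExport utility from the apex/utilities folder can script this. A minimal sketch; the connection data and application ID are made up, and the flag names (-expSavedReports in particular) should be checked against the readme of your version:

java oracle.apex.APEXExport -db localhost:1521:orcl -user system -password secret -applicationid 100 -expSavedReports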

Apart from downloading the new APEX version, none of these steps is really required. But it gives a nice and cozy feeling to be sure you are able to go back.

In Apex 5.1 some features are deprecated and some options have changed. It is possible to prepare your application to anticipate the effects of the upgrade. I will cover this in a separate blog post. More importantly, read through the “changed behaviour”, “deprecated” and “desupported” sections of the installation manual (https://docs.oracle.com/database/apex-5.1/HTMRN/toc.htm#HTMRN-GUID-8BDFB50B-4EC6-4051-A9B6-7D5805E00C4E).

Here are some things to consider already in Apex 5.0.

  • Apex.Server plugin
    • does not return a jqXHR object anymore
    • async option deprecated
    • htmldb_getobject desupported => replace with apex.server
  • old Apex themes deprecated
  • check for CANCEL or BACK or PREVIOUS buttons (page redirect) with execute validations = YES. These will do client-side validation in Apex 5.1. If that is not wanted, change it to NO.
  • jsTree plugin deprecated
  • classic reports
    • hidden column type => hidden column
    • no enterable fields!
  • file browse storage => switch from WWV_FLOW_FILES to APEX_APPLICATION_TEMP_FILES
  • desupported attributes or types
    • page: body header, include standard javascript and css
    • region: svg charts, simple chart, classic tree
    • button: pre text, post text
    • item: start and stop grid layout, file browse storage type
  • Conditions deprecated: text= value, text != value, text is (not) contained in item
  • No more: Save state before branching
  • apex_plsql_job package desupported
  • check if you reference the internal hidden fields  (renamed in Apex 5.1): pPageChecksum => pPageItemsProtected, p_md5_checksum=>pPageItemsRowVersion
  • date picker (classic) deprecated
  • several updated javascript libraries

Decide about the upgrade path

Now consider whether you want to do a traditional upgrade (all steps in one go) or whether you want to minimize the application downtime (several steps, not for CDB$ROOT installations). Or as Oracle calls it: “maximize application uptime”.

To minimize downtime read this chapter in the documentation: https://docs.oracle.com/database/apex-5.1/HTMRN/toc.htm#HTMRN-GUID-411DE0D8-59E1-4267-8751-FD56305ABE4E

Upgrade in one go

The only step needed to do:

Create database schemas and database objects (tables, packages) and do the application migrations.

@apexins.sql tablespace_apex tablespace_files tablespace_temp images
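
With the tablespace names and image path that are used later in this post, the call would look like this:

@apexins.sql APEX APEX TEMP /i/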

Upgrade with maximum Uptime

Instead of running a single script, we can do the upgrade in several steps. Only during the third step do the end users need to be disconnected. This third step took only 1.01 seconds on my system.

The upgrade of an Application Express instance runs in four phases:

  1. Create database schemas and database objects (tables, packages).
    This essentially creates the APEX_050100 schema.
    -> no influence on running sessions
  2. Migrate application metadata.
    This copies the repository application data from APEX_050000 into APEX_050100.
    To help with that, some upgrade triggers were previously installed.
    -> developers can’t work anymore
  3. Migrate data that runtime applications modify and switch to the new version.
    -> downtime for all (developers and end users)
  4. Migrate additional log and summary data (this step starts automatically).
    -> no influence on running sessions

But we need zero downtime – is it possible?

I’m convinced it is possible to achieve a downtime-free application upgrade using the EBR (edition based redefinition) feature of the Oracle database. I have extensive knowledge of using EBR, even inside APEX. However, so far I didn’t have the time to do a proof of concept (POC) for the upgrade. Also, this would (currently) be an unsupported action. The change would include tweaking several non-editioned objects (public synonyms, session contexts, registry data) in such a way that they show up differently when used inside an edition.

If any Germany-based customer or the Oracle APEX team itself is interested in how to do this and is willing to pay for the time I need to invest, then please contact me.

do the upgrade

Unzip the apex_5.1.zip file into an appropriate folder and navigate to the apex_5.1/apex folder.

If you decide on the “Maximum Uptime” upgrade path, then three scripts need to run, and ORDS needs to be stopped for script 3. To run the scripts we need to know the tablespace names and the image path.

Find the tablespace

The documentation gives examples using the SYSAUX tablespace. I do not recommend that. APEX should have its own tablespace.

select username, default_tablespace, temporary_tablespace, profile, oracle_maintained
from dba_users
where regexp_like(username,'^(APEX_|ORDS_)');

This shows only the default setting. We can reuse the same tablespace. But it is also possible to install Apex 5.1 into a new tablespace. If you want to do that, then this new tablespace needs to be created first.
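
A minimal sketch of such a create statement (name, size and datafile clause are just examples; adjust them to your storage setup):

create tablespace apex51
  datafile '/u02/oradata/MYDB/apex51_01.dbf' size 500M
  autoextend on next 100M;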

Sometimes we want to see if the data is really in this default tablespace. Here is a select that will show the data distribution and also how much space is used.

select owner as schema, tablespace_name as data_tbs, nvl(segment_type,' - total -') segment_type, round(sum(bytes)/1024/1024,2) size_in_MB
from dba_extents
where regexp_like(owner,'^(APEX_|ORDS_)')
group by owner, tablespace_name, rollup(segment_type)
;

Example result

SCHEMA        DATA_TBS SEGMENT_TYPE SIZE_IN_MB
APEX_050000   APEX     INDEX        239,69
APEX_050000   APEX     TABLE        267,19
APEX_050000   APEX     LOBINDEX     12,75
APEX_050000   APEX     LOBSEGMENT   240,31
APEX_050000   APEX     - total -    759,94
ORDS_METADATA ORDS     INDEX        4,5
ORDS_METADATA ORDS     TABLE        1,63
ORDS_METADATA ORDS     LOBINDEX     0,19
ORDS_METADATA ORDS     LOBSEGMENT   0,38
ORDS_METADATA ORDS     - total -    6,69

This is an example from one of my APEX environments. As you can see, only one APEX tablespace “APEX” is used. Approximately the same amount of data is in tables, in indexes and in LOBs. The LOBSEGMENT size indicates that there had been some wwv_flow_files activity going on.

Run the scripts

The scripts are located in the apex subfolder. For example D:/product/apex/apex_5.1/apex.

Navigate to that folder and start a SQL*Plus session as SYS (as SYSDBA). If you are in a CDB/PDB environment, connect to the PDB, not to CDB$ROOT. To connect to the PDB, the service name needs to be provided. Because of that, a running listener and a matching tnsnames.ora file are also needed.

Assuming the following settings:

  • tablespace for apex and apex files: APEX
  • temp tablespace: TEMP
  • image directory: /i/

Then these three scripts need to run.

@apexins1.sql APEX APEX TEMP /i/
@apexins2.sql APEX APEX TEMP /i/

stop ORDS

@apexins3.sql APEX APEX TEMP /i/

restart ORDS

Phase 4 is automatically started by running a dbms_scheduler job.
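
To watch this last phase we can look at the scheduler, for example with a query like this (I assume the job runs under one of the APEX schemas; the exact job name varies):

select owner, job_name, state, last_start_date
from dba_scheduler_jobs
where owner like 'APEX%';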

Check privs and synonyms

Sometimes we give extra access from the APEX schema to our own custom schema. For example, in one application I extended the APEX feedback functionality and used the view APEX_TEAM_FEEDBACK. Such changes need to be moved to the new APEX_050100 schema.

ACLs

Here is how to check whether the Apex 5.0 schema has any network ACLs set.

SELECT *
FROM DBA_HOST_ACES xe
where principal = 'APEX_050000';

I created a script to duplicate all ACEs that exist for APEX_050000 to APEX_050100.

The script is shown and explained in the section “copy ACLs during Upgrade to Apex 5.1” below.


Grants

Here is how to check whether the Apex 5.0 schema has any objects granted to other schemas.


select *
from all_tab_privs
where grantor like 'APEX\_%' escape '\'
and grantee not in ('PUBLIC','SYS','SYSTEM');

It will show objects like tables, views and packages that have privileges granted directly.

To see only the missing grants you can run the following statement. If it returns no rows, then you are fine.


select GRANTEE , TABLE_SCHEMA , TABLE_NAME , PRIVILEGE , GRANTABLE ,HIERARCHY
from all_tab_privs
where grantor in ('APEX_050000','APEX_050100')
and not regexp_like (grantee,'^(APEX_|ORDS_|SYSTEM$|PUBLIC$|SYS$)')
group by GRANTEE , TABLE_SCHEMA , TABLE_NAME , PRIVILEGE , GRANTABLE ,HIERARCHY
having count(*) = 1 and min(grantor) = 'APEX_050000';

Those grants probably need to be copied to the new APEX_050100 schema.
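
The missing grant statements can also be generated from the dictionary. A sketch (it assumes the objects exist under the same name in APEX_050100; verify the generated statements before running them):

select 'grant '||privilege||' on APEX_050100.'||table_name||' to '||grantee
       ||case when grantable = 'YES' then ' with grant option' end||';' as ddl
from all_tab_privs
where grantor = 'APEX_050000'
and not regexp_like(grantee,'^(APEX_|ORDS_|SYSTEM$|PUBLIC$|SYS$)');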

Search Source code

You should search the complete application for any references to APEX_050000. This should be done after the migration.
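
A simple way to do this check for our own PL/SQL code is to scan the source in the data dictionary (MYSCHEMA is a placeholder; views, triggers and application metadata would need similar checks):

select name, type, line, text
from dba_source
where owner = 'MYSCHEMA'
and upper(text) like '%APEX_050000%';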

downgrade to 5.0

The section describing how to downgrade back to 5.0 is currently missing from the documentation. Here is an official blog post on how to do it: http://jastraub.blogspot.de/2017/01/ooops-i-did-it-again.html

This is the SQL script that Jason Straub published to do the downgrade:

alter session set current_schema = SYS;

@wwv_flow_val.sql
@wwv_flow_val.plb 

begin
 dbms_utility.compile_schema('APEX_050000');
end;
/ 

set define '^'
@validate_apex x x APEX_050000

begin
 for i in ( select owner, trigger_name
 from sys.dba_triggers
 where owner = 'APEX_050000'
 and trigger_name like 'WWV_FLOW_UPGRADE_%'
 order by 1 )
 loop
 sys.dbms_output.put_line('Dropping trigger '||i.owner||'.'||i.trigger_name);
 execute immediate 'drop trigger '||i.owner||'.'||i.trigger_name;
 end loop;
end;
/

ALTER SESSION SET CURRENT_SCHEMA = APEX_050000;
exec apex_050000.wwv_flow_upgrade.switch_schemas('APEX_050100','APEX_050000');

ALTER SESSION SET CURRENT_SCHEMA = SYS;
drop context sys.APEX$SESSION;
create context sys.APEX$SESSION using APEX_050000.WWV_FLOW_SESSION_CONTEXT;
declare
 l_apex_version varchar2(30);
begin
 l_apex_version := apex_050000.wwv_flows_release;
 dbms_registry.downgrading('APEX','Oracle Application Express','validate_apex','APEX_050000');
 dbms_registry.downgraded('APEX',l_apex_version);
 validate_apex;
end;
/


copy ACLs during Upgrade to Apex 5.1

The following works in 12c only. In previous database versions the package to set ACLs had different procedures; those are now deprecated. The script does not call any deprecated functions.

You can see the ACLs/ACEs by checking the data dictionary.

select * from DBA_HOST_ACES where principal like 'APEX_%';

This little script will check the network ACLs for the Apex 5.0 schema and copy them to the Apex 5.1 schema. It will not delete any ACLs. But use it at your own risk. It commits automatically.


declare
  /* Author: Sven Weller
     Company: syntegris information solutions GmbH
     Purpose: Transfer Network ACLs from APEX_050000 to APEX_050100 schema
     Created: 11.01.2017
  */
  v_source_schema varchar2(30) := 'APEX_050000';
  v_target_schema varchar2(30) := 'APEX_050100';

  v_ace        xs$ace_type;
  v_host       dba_host_aces.host%type;
  v_lower_port dba_host_aces.lower_port%type;
  v_upper_port dba_host_aces.upper_port%type;
begin
  -- read all ACEs of the source schema that are not yet present for the target schema;
  -- privlist# numbers the privileges within one ACE, group# numbers the ACEs
  for apex50acls in (
    select xe.*
          ,row_number() over (partition by host, principal, lower_port, upper_port, start_date, end_date, grant_type, inverted_principal, principal_type
                              order by ace_order, privilege) privlist#
          ,dense_rank() over (order by host, principal, lower_port, upper_port, start_date, end_date, grant_type, inverted_principal, principal_type) group#
    from dba_host_aces xe
    where principal = v_source_schema
    and (xe.host, xe.lower_port, xe.upper_port, xe.start_date, xe.end_date, xe.grant_type, xe.inverted_principal, xe.principal_type)
        not in (select t.host, t.lower_port, t.upper_port, t.start_date, t.end_date, t.grant_type, t.inverted_principal, t.principal_type
                from dba_host_aces t
                where t.principal = v_target_schema)
    order by group#, privlist#
    ) loop

    if apex50acls.group# > 1 and apex50acls.privlist# = 1 then
      -- store the previous ace before starting a new one
      dbms_network_acl_admin.append_host_ace(
        host       => v_host,
        lower_port => v_lower_port,
        upper_port => v_upper_port,
        ace        => v_ace);
    end if;

    if apex50acls.privlist# = 1 then -- first privilege of a new ace
      -- prepare the new ace
      v_ace := xs$ace_type(
                 privilege_list => xs$name_list(apex50acls.privilege),
                 principal_name => v_target_schema,
                 principal_type => case apex50acls.principal_type
                                     when 'APPLICATION' then xs_acl.ptype_xs
                                     when 'DATABASE'    then xs_acl.ptype_db
                                     when 'EXTERNAL'    then xs_acl.ptype_external
                                   end,
                 granted    => apex50acls.grant_type = 'GRANT',
                 inverted   => apex50acls.inverted_principal = 'YES',
                 start_date => case when apex50acls.start_date < systimestamp then systimestamp
                                    when apex50acls.start_date > systimestamp then apex50acls.start_date
                               end,
                 end_date   => apex50acls.end_date);
      v_host       := apex50acls.host;
      v_lower_port := apex50acls.lower_port;
      v_upper_port := apex50acls.upper_port;
    else
      -- add another privilege to the current ace
      v_ace.privilege_list.extend;
      v_ace.privilege_list(apex50acls.privlist#) := apex50acls.privilege;
    end if;

  end loop;

  if v_host is not null then
    -- store the final ace
    dbms_network_acl_admin.append_host_ace(
      host       => v_host,
      lower_port => v_lower_port,
      upper_port => v_upper_port,
      ace        => v_ace);
  end if;
end;
/


about JET Diagrams (JET v2.2.0) in Apex 5

This is a follow-up to my older blog post “Integrate Oracle JET into Apex 5”.

Oracle JET Diagrams are a new data visualization type in Oracle JET 2.1.0.

This post is organized into three mostly independent parts:

  1. How to setup Oracle JET v2.2.0 for usage in Apex
  2. How to copy Oracle JET Container Diagrams from the cookbook into Apex
  3. Using Oracle JET Diagrams with container layout


How to setup Oracle JET v2.2.0 for usage in Apex

Step 1) Download the base distribution

From the download page (http://www.oracle.com/technetwork/developer-tools/jet/downloads/index.html) choose the base distribution and download this zip file.

Step 2) Unzip JET into the APEX image folder

Copy and unzip the file into a folder inside your APEX image path. Where you put it is your own choice. I prefer to add it to the library path where Oracle JET will also be in the Apex 5.1 distribution (/libraries/oraclejet/2.0.2).

You can choose a very similar path “/libraries/oraclejet/2.2.0”. Create this path and unzip the file there.

The next time APEX is upgraded, remember not to move the image folder but simply to overwrite it (make a copy of the original before that).

Step 3) Create, manipulate and deploy the main.js file

The basis for this should always be the main-template.js file from the subfolder \js\libs\oj\v2.2.0. This template has all the correct paths and versions for all submodules that are included in the main.js.

Additionally we can add a baseUrl that points to the folder where we unzipped JET. If we put the main.js file in the js folder, then this is not needed. But we will come back to that baseUrl later. So for JET version 2.2.0 the complete main.js file will look like this.


/**
 * Example of Require.js bootstrap javascript
 */

requirejs.config({
  // Path mappings for the logical module names
  paths: {
    'knockout': 'libs/knockout/knockout-3.4.0',
    'jquery': 'libs/jquery/jquery-3.1.0.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.12.0.min',
    'ojs': 'libs/oj/v2.2.0/min',
    'ojL10n': 'libs/oj/v2.2.0/ojL10n',
    'ojtranslations': 'libs/oj/v2.2.0/resources',
    'text': 'libs/require/text',
    'promise': 'libs/es6-promise/es6-promise.min',
    'hammerjs': 'libs/hammer/hammer-2.0.8.min',
    'signals': 'libs/js-signals/signals.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min',
    'css': 'libs/require-css/css.min',
    'customElements': 'libs/webcomponents/CustomElements.min',
    'proj4': 'libs/proj4js/dist/proj4'
  },
  // Shim configurations for modules that do not expose AMD
  shim: {
    'jquery': {
      exports: ['jQuery', '$']
    }
  },

  // This section configures the i18n plugin. It is merging the Oracle JET built-in translation
  // resources with a custom translation file.
  // Any resource file added must be placed under a directory named "nls". You can use a path mapping or you can define
  // a path that is relative to the location of this main.js file.
  config: {
    ojL10n: {
      merge: {
        //'ojtranslations/nls/ojtranslations': 'resources/nls/myTranslations'
      }
    },
    text: {
      // Override for the requirejs text plugin XHR call for loading text resources on CORS configured servers
      useXhr: function (url, protocol, hostname, port) {
        // Override function for determining if XHR should be used.
        // url: the URL being requested
        // protocol: protocol of page text.js is running on
        // hostname: hostname of page text.js is running on
        // port: port of page text.js is running on
        // Use protocol, hostname, and port to compare against the url being requested.
        // Return true or false. true means "use xhr", false means "fetch the .js version of this resource".
        return true;
      }
    }
  }
});

/**
 * A top-level require call executed by the Application.
 * Although 'ojcore' and 'knockout' would be loaded in any case (they are specified as dependencies
 * by the modules themselves), we are listing them explicitly to get the references to the 'oj' and 'ko'
 * objects in the callback.
 *
 * For a listing of which JET component modules are required for each component, see the specific component
 * demo pages in the JET cookbook.
 */
require(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojtoolbar', 'ojs/ojmenu'], // add additional JET component modules as needed
  function(oj, ko, $) // this callback gets executed when all required modules are loaded
  {
    // add any startup code that you want here
  }
);

Step 4) Reference the main.js file in the page template
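
The page template must load require.js and point it to our main.js. In a copy of the page template this can be done with a script tag roughly like the following (a sketch; the exact path depends on where main.js was deployed in step 3):

<script data-main="#IMAGE_PREFIX#libraries/oraclejet/2.2.0/js/main"
        src="#IMAGE_PREFIX#libraries/oraclejet/2.2.0/js/libs/require/require.js"></script>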


How to copy Oracle JET Container Diagrams from the cookbook into Apex

The JET cookbook demo can be found here. The interactivity in this visualization is charming. We can organize nodes into containers and expand or collapse the containers.

Step 1) Copy the HTML and the JS code from the cookbook to our page

Step 2) Add the require.config call

This time we add a base URL.

requirejs.config({
  baseUrl: '#IMAGE_PREFIX#libraries/oraclejet/js',
  // Path mappings for the logical module names
  paths: {
    'knockout': 'libs/knockout/knockout-3.4.0',
    'jquery': 'libs/jquery/jquery-3.1.0.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.12.0.min',
    'ojs': 'libs/oj/v2.2.0/min',
    'ojL10n': 'libs/oj/v2.2.0/ojL10n',
    'ojtranslations': 'libs/oj/v2.2.0/resources',
    'text': 'libs/require/text',
    'promise': 'libs/es6-promise/es6-promise.min',
    'hammerjs': 'libs/hammer/hammer-2.0.8.min',
    'signals': 'libs/js-signals/signals.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min',
    'css': 'libs/require-css/css.min',
    'customElements': 'libs/webcomponents/CustomElements.min',
    'proj4': 'libs/proj4js/dist/proj4'
  },
  // Shim configurations for modules that do not expose AMD
  shim: {
    'jquery': {
      exports: ['jQuery', '$']
     }
    },

// This section configures the i18n plugin. It is merging the Oracle JET built-in translation
// resources with a custom translation file.
// Any resource file added, must be placed under a directory named "nls". You can use a path mapping or you can define
// a path that is relative to the location of this main.js file.
    config: {
        ojL10n: {
            merge: {
                //'ojtranslations/nls/ojtranslations': 'resources/nls/myTranslations'
            }
        }
    }
});

Step 3) Find out why it is not working yet

The only remaining resource that could not be loaded should be the diagramLayouts/DemoContainerLayout.js file. The reason is simple: this file is not included in the base zip file. However, we can get it directly from the JET cookbook page.

First we open the cookbook in standalone mode. There is a button in the upper right corner that helps us to do so.

[Image: jet_diagram_launch_standalone]

Then we inspect the network files again and locate the DemoContainerLayout.js. We can simply copy the address and save the file to our system.

[Image: jet_diagram_copy_layoutfile]


Step 4) Copy and integrate the layout file

To integrate this layout file into our page I chose a slightly different approach. This is not a file that would be relevant for a default Oracle JET installation. Instead I’d like to add it specifically to my application. This way I can modify the file and influence the behaviour of my diagram without changing anything for other applications.

So we upload it as a static application file (in my case with a directory “oraclejet”).

And we reference the file directly in the require call. Here the suffix “.js” is important. It tells require that this is a direct file reference and not an alias name for a previously defined resource.

require(['ojs/ojcore', 'knockout', 'jquery', '#APP_IMAGES#oraclejet/DemoContainerLayout.js',
         'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojdiagram'], function(oj, ko, $, layout) { ...


Using Oracle JET Diagrams with container layout

An Oracle JET diagram is essentially a graph. It consists of nodes and links between the nodes. The container diagram has the additional possibility to organize nodes into a hierarchy. Other layouts have similar possibilities but choose to render them completely differently.

Which layout to use is configured in the attributes of the ojDiagram component (View) and inside the JavaScript Model.

[Image: ojet_diagram_layout1] [Image: ojet_diagram_layout2]

The container layout has only very limited drawing possibilities. Nodes are rectangles and links are lines.

The main nodes (containers) are always drawn horizontally from left to right. Child nodes are always drawn vertically from top to bottom and inside their parent container. All nodes that have child nodes are considered containers and can potentially be expanded or collapsed.

Links that connect nodes that are side by side are attached to the left or right side of the nodes. Links that connect nodes that are above or below each other connect to the top and bottom part of a node.

This very simple drawing approach allows for some nice small visualizations. For example, we can easily present process flows with it. If we want to draw huge networks, then another layout will be more appropriate.

How to change descriptions

Nodes have several properties that can be set. A complete list can be found in the ojDiagram doc.

  • id ==> will uniquely identify a node. It is also used as startNode and endNode in the link properties.
  • label ==> the text that is printed inside the node.
  • shortDesc ==> a small description that is shown as a tooltip when hovering over a node

The cookbook uses a small function to simplify node creation. But we can also create a node using plain JSON syntax.

this.nodes.push({
  id: "id",
  label: "label",
  shortDesc: "shortDesc",
  nodes: null
});


How to color the nodes

All nodes have a default style. The default is a kind of greyish background. We can change the backgroundStyle property of our node.

this.nodes[0].nodes[0].nodes[0].backgroundStyle =
  'height:20px;width:60px;border-color:#444444;background-color:#00FF80;' +
  'border-width:.5px;border-radius:8px';

This colors the first child of the first child in the first container green and rounds the corners.

We can also simply set the background color, without setting all the other properties. For example for the second child in the first container.

this.nodes[0].nodes[1].backgroundStyle = "background-color:red";

It is possible to add images or shapes to our diagram. We can position them in the middle, left or right inside a node. This line will put a small yellow “human” in node N1.

this.nodes[1].icon = {width: 10, height: 10, halign: "right", 
shape: "human", color:"yellow", borderColor:"grey"};

The following shapes are predefined.

square, plus, diamond, triangleUp, triangleDown, 
human, rectangle, star, circle

It is possible to create custom shapes by providing an SVG path. Or we can add images instead of a shape. However, this post is too small to explain that in more detail.

Next I show how to create a custom gradient fill. There are two steps to do so.
First create the SVG fill gradient:

<svg height="0" width="0">
    <defs>
      <linearGradient id="gradient" x1="0%" y1="100%">
        <stop offset="0%" style="stop-color: #66ccff"></stop>
        <stop offset="80%" style="stop-color: #0000FF"></stop>
      </linearGradient>
    </defs>
  </svg>

Then add this gradient to the node:

this.nodes[0].containerStyle = {fill: "url(#gradient)"};

And the combined result looks like this. It certainly is not pretty, but it shows what is possible with a little imagination.

[Image: ojet_diagram_colored]

Further reading: JET custom shapes and image markers

The source code for this coloring example can be copied into the JET cookbook page.

The HTML part

<div id="diagram-container">
  <svg height="0" width="0">
    <defs>
      <linearGradient id="gradient" x1="0%" y1="100%">
        <stop offset="0%" style="stop-color: #66ccff"></stop>
        <stop offset="80%" style="stop-color: #0000FF"></stop>
      </linearGradient>
    </defs>
  </svg>
  <div id="diagram" data-bind="ojComponent: {
         component: 'ojDiagram',
         layout: layoutFunc,
         animationOnDataChange: 'auto',
         animationOnDisplay: 'auto',
         maxZoom: 2.0,
         selectionMode: 'single',
         styleDefaults: styleDefaults,
         nodes: nodes,
         links: links,
         expanded: expanded
       }"
       style="max-width:800px;width:100%; height:600px;"></div>
</div>

The JavaScript part


require(['ojs/ojcore', 'knockout', 'jquery', 'diagramLayouts/DemoContainerLayout',
         'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojdiagram'], function(oj, ko, $, layout) {

  function model(data) {
    var self = this;
    self.layoutFunc = layout.containerLayout;

    function createNode(id, nodes) {
      return {
        id: id,
        label: id,
        shortDesc: "Node " + id,
        nodes: nodes ? nodes : null
      };
    }

    function createLink(id, startId, endId) {
      return {
        id: id,
        startNode: startId,
        endNode: endId,
        shortDesc: "Link " + id + ", connects " + startId + " to " + endId
      };
    }

    this.expanded = ['N0', 'N00'];
    this.nodes = [], this.links = [];
    var childNodesN00 = [createNode("N000"), createNode("N001")];
    var childNodesN0 = [createNode("N00", childNodesN00), createNode("N01"), createNode("N02")];
    var childNodesN2 = [createNode("N20"), createNode("N21"), createNode("N22")];
    this.nodes.push(createNode("N0", childNodesN0));
    this.nodes.push(createNode("N1"));
    this.nodes.push(createNode("N2", childNodesN2));
    this.nodes.push(createNode("N3"));

    this.nodes[0].nodes[0].nodes[0].backgroundStyle = 'height:20px;width:60px;border-color:#444444;background-color:#00FF80;border-width:.5px;border-radius:8px';
    this.nodes[0].nodes[1].backgroundStyle = "background-color:red";
    this.nodes[1].icon = {width: 10, height: 10, halign: "right", shape: "human", color: "yellow", borderColor: "grey"};
    this.nodes[0].containerStyle = {fill: "url(#gradient)"};

    // disable selection on some containers
    this.nodes[0].selectable = 'off';
    this.nodes[0].nodes[0].selectable = 'off';

    // create the links
    this.links.push(createLink("L0", "N2", "N3"));
    this.links.push(createLink("L1", "N1", "N21"));
    this.links.push(createLink("L2", "N1", "N22"));
    this.links.push(createLink("L3", "N000", "N1"));
    this.links.push(createLink("L4", "N001", "N1"));
    this.links.push(createLink("L5", "N02", "N1"));
    this.links.push(createLink("L6", "N000", "N001"));

    this.styleDefaults = {
      nodeDefaults: {
        containerStyle: "border-color:#abb3ba;background-color:#f9f9f9;border-width:.5px;border-radius:1px;padding-top:20px;padding-left:10px;padding-bottom:10px;padding-right:10px;",
        labelStyle: "color:#252525;font-size:8px;font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-weight:normal;font-style:italic",
        backgroundStyle: 'height:20px;width:60px;border-color:#444444;background-color:#f9f9f9;border-width:.5px;border-radius:1px',
        icon: null
      },
      linkDefaults: {startConnectorType: "circle", endConnectorType: "arrow"}
    };
  }

  $(document).ready(
    function() {
      ko.applyBindings(new model(),
        document.getElementById('diagram-container'));
    }
  );
});

How to modify links

Modifying links is very similar to modifying nodes. One main difference however is the definition of the “arrows” on each side of the link. Usually we want all links to look the same. So instead of changing the properties of each single link, we just switch the default behaviour.

The following line will make the links look like simple arrows.

linkDefaults: {startConnectorType: "none", endConnectorType: "arrow"}

Also, for static diagrams I prefer to give each link a proper description (shortDesc).


How to add interactivity

Back to our APEX application. The goal here is to click on a node (or a link) and to show a specific APEX region that corresponds to the selection.

First we allow a node to be selected. The diagram layout can do 'single' or 'multiple' selections. To allow this, we add the selectionMode: 'single' property to our view. And since we want to work with the selected parts later, we also add selection: selectedNodes.

This selectedNodes needs to be defined in the nodeProperty.

Then we prepare our apex page and put some “apex connector logic” in place.

We create a region for each node that we want to interact with.

The region gets a static ID R_DETAILS_XXX, where XXX is the ID of the node, and it gets a custom attribute

style="display: none;"

As a result we know the ID of each region, and the region will be rendered but not displayed. With that in place, we add a small function showDetails to the page. It will show one region and hide another (the previous) one.


function showDetails(showNodes, hideNodes) {
  console.log("ShowDetails=" + showNodes);
  if (hideNodes !== "") {
    $("#R_DETAILS_" + hideNodes).hide();
  }
  $("#R_DETAILS_" + showNodes).show();
}


The JET and knockout binding will then be done using the optionChange property.

We add a function to react to the change of a selection. The “value” and the “previousValue” will then hold the ID of the node (or link). If we choose to do multiple selections it can be an array of nodes.

Html

<div id="diagram" data-bind="ojComponent: {
       component: 'ojDiagram',
       layout: layoutFunc,
       selection: selectedNodes,
       selectionMode: 'single',
       styleDefaults: styleDefaults,
       nodes: nodes,
       links: links,
       optionChange: diagramOptionChange
     }"
     style="max-width:800px;width:100%; height:600px;"></div>

Javascript


// set default selection
this.selectedNodes = ['N000'];

// disable selection on some containers
this.nodes[0].selectable = 'off';
this.nodes[0].nodes[0].selectable = 'off';

self.diagramOptionChange = function (event, data) {
  console.log("optionchanged=" + data.option);
  if (data['option'] == 'selection') {
    showDetails(data['value'], data['previousValue']);
  }
};


Further reading:

Data visualization blog: A guide to diagrams (part 9)


adaptive cursor sharing and DBMS_SQL

A recent post in the OTN forum mentioned that DBMS_SQL does not use bind peeking for bind variables. I couldn’t believe that, so I decided to do some tests for myself. The findings are strange…

This is potentially relevant for Apex developers, since the Apex engine uses DBMS_SQL. I still have to do further testing to check the behaviour in Apex.

First I set up some tests to show bind peeking and adaptive cursor behaviour using normal statements in SQL*Plus or SQL Developer. After that we move to dynamic SQL, especially DBMS_SQL, and try the same again.

scenario setup

create skewed test data

--drop table demo_big;
create table demo_big as
select level as id, 
       case when mod(level,10000)=0 
            then 'VALID' 
            else 'INVALID' 
       end as status
from dual
connect by level <= 1000000;

desc demo_big;

Name   Null Type
------ ---- -----------
ID          NUMBER
STATUS      VARCHAR2(7)

select status, count(*)
from demo_big
group by rollup(status);

STATUS     COUNT(*)
INVALID    999900
VALID      100
           1000000

So we have a few VALID values and a lot of INVALID ones.

Even though we have only two different values, an index will be useful on this column. The data distribution is so skewed that any access trying to read the VALID values would profit from an index. However, if we access the INVALID values we don’t want to use the index and instead want a full table scan.

-- create indexes on all the important columns
create unique index demo_big_id_ix on demo_big(id);
create index demo_big_status_ix on demo_big(status);

create statistical data (histograms)

First we create the statistics so that the optimizer knows what is in the table and what the data looks like.

-- create statistics and test histogram
execute dbms_stats.gather_table_stats(user, 'DEMO_BIG', method_opt=>'for all indexed columns size skewonly');

Then we check the data dictionary to see what has been created so far.
The hist_numtochar2 function is copied from Martin Widlake (source: https://mwidlake.wordpress.com/2009/08/11/). It just helps to do a crude translation of the numerical histogram bucket endpoints. The code of the function can be found at the end of this post.

I don’t show the results from all the selects, only from the last one. The other selects are here just as references. They are helpful to see what kind of statistics are in place.

select table_name, num_rows, blocks, last_analyzed
from user_tables
where table_name = 'DEMO_BIG';

select table_name, column_name, num_distinct, histogram, num_buckets, sample_size 
from user_tab_columns
where table_name = 'DEMO_BIG';

select *
from user_histograms
where table_name = 'DEMO_BIG' and column_name = 'STATUS';

select table_name, column_name, endpoint_number, endpoint_value, hist_numtochar2(endpoint_value) as translated_value
from user_histograms
where table_name = 'DEMO_BIG' and column_name = 'STATUS';

Here we see a frequency histogram with two buckets for the column STATUS.

TABLE     COLUMN  ENDPOINT_NUMBER  ENDPOINT_VALUE                         TRANSLATED_VALUE
DEMO_BIG  STATUS  999900           380626532452853000000000000000000000  INVALJ*
DEMO_BIG  STATUS  1000000          447861930473196000000000000000000000  VALID

The first bucket holds 999900 values where status = INVALID.
The next bucket holds 1000000 - 999900 = 100 values where status = VALID.

This matches exactly what we created. So the statistical info in the dictionary is absolutely correct.

Tests

Now that our setup is in place, we can do some basic testing to see different plans.

check execution plan with LITERALS

-- test different cursor/execution plan using plain selects
select count(*) from demo_big where status = 'VALID';
select * from table(dbms_xplan.display_cursor);
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("STATUS"='VALID')

select count(*) from demo_big where status = 'INVALID';
select * from table(dbms_xplan.display_cursor);
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("STATUS"='INVALID')

Perfect! As expected one does an index access / index range scan, the other does a full table scan.

check execution plan with BIND parameters

select count(*) from demo_big where status = :P_ENTER_VALID;
select * from table(dbms_xplan.display_cursor);
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("STATUS"=:P_ENTER_VALID)

select count(*) from demo_big where status = :P_ENTER_INVALID;
select * from table(dbms_xplan.display_cursor);
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("STATUS"=:P_ENTER_INVALID)

The two statements are not identical because the name of the bind parameter is different. Because of that we get two different cursors, each with a different execution plan.
This test shows that bind peeking works. During the hard parse phase the value of the bind parameter was checked (peeked), so that the correct estimates for the resulting rows/cardinalities were made. This in turn led to the correct plan for each of the two different statements. However this first parameter “freezes” the execution plan, so that if we change the bound value, the same plan is reused.

This behaviour was enhanced in 11g with the introduction of adaptive cursor sharing and has been steadily improved since then.

To test the adaptive behaviour we run the first query again a few times (at least 4 times). But this time we do not pass VALID, but INVALID as the parameter.
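
In SQL*Plus this re-run could be scripted like this (a sketch; repeat the select at least four times):

variable P_ENTER_VALID varchar2(10)
exec :P_ENTER_VALID := 'INVALID';

select count(*) from demo_big where status = :P_ENTER_VALID;
-- run the same select again at least three more times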

After that we can see a new child cursor 1 for the sql_id “7rjdcm7v7hfrs”.

select is_bind_sensitive, is_bind_aware, sql_id, child_number, sql_text
from v$sql
where upper(sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'  and sql_text not like '%v$sql%'
;
IS_BIND_SENSITIVE  IS_BIND_AWARE  SQL_ID         CHILD_NUMBER  SQL_TEXT
Y                  N              7rjdcm7v7hfrs  0             select count(*) from demo_big where status = :P_ENTER_VALID
Y                  Y              7rjdcm7v7hfrs  1             select count(*) from demo_big where status = :P_ENTER_VALID
Y                  N              5zkmtfj331xmc  0             select count(*) from demo_big where status = :P_ENTER_INVALID

select * from table(dbms_xplan.display_cursor('7rjdcm7v7hfrs',0));
----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE   |                    |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| DEMO_BIG_STATUS_IX |   100 |   800 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("STATUS"=:P_ENTER_VALID)

select * from table(dbms_xplan.display_cursor('7rjdcm7v7hfrs',1));
-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   701 (100)|          |
|   1 |  SORT AGGREGATE    |          |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DEMO_BIG |   999K|  7811K|   701   (2)| 00:00:01 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter("STATUS"=:P_ENTER_VALID)

This is adaptive behaviour. After a few bad tries a second execution plan is created for the same cursor and used from then on. How many tries are needed? Often the plan changes on the third try. But it can happen that more are needed.

Test with DBMS_SQL

Now comes the more difficult part. We set up a small PL/SQL block that uses DBMS_SQL to run the same statement again using bind parameters.

-- testcase for BIND peeking/aware using DBMS_SQL
declare
  curid    NUMBER;
  ret      INTEGER;
  sql_stmt VARCHAR2(200);
begin
  sql_stmt := 'select count(*) from demo_big where status = :P_STATUS';

  -- get cursor handle
  curid := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(curid, sql_stmt, DBMS_SQL.NATIVE);

  DBMS_SQL.BIND_VARIABLE(curid, 'P_STATUS', 'VALID');
  ret := DBMS_SQL.EXECUTE_and_fetch(curid);

  DBMS_SQL.PARSE(curid, sql_stmt, DBMS_SQL.NATIVE);
  DBMS_SQL.BIND_VARIABLE(curid, 'P_STATUS', 'INVALID');
  for i in 1..5 loop
    ret := DBMS_SQL.EXECUTE_and_fetch(curid);
  end loop;

DBMS_SQL.close_cursor(curid);
end;
/

The v$sql view has two interesting columns:
IS_BIND_SENSITIVE shows cursors where the execution plan can evolve.
IS_BIND_AWARE shows child cursors where a new plan was created, meaning that the cursor has evolved.

select is_bind_sensitive, is_bind_aware, sql_id, child_number, sql_text
from v$sql
where upper(sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'
and sql_text not like '%v$sql%'
;
IS_BIND_SENSITIVE  IS_BIND_AWARE  SQL_ID         CHILD_NUMBER  SQL_TEXT
Y                  N              7rjdcm7v7hfrs  0             select count(*) from demo_big where status = :P_ENTER_VALID
Y                  Y              7rjdcm7v7hfrs  1             select count(*) from demo_big where status = :P_ENTER_VALID
N                  N              3kpu54a461gkm  0             select count(*) from demo_big where status = :P_STATUS
N                  N              3kpu54a461gkm  1             select count(*) from demo_big where status = :P_STATUS
Y                  N              5zkmtfj331xmc  0             select count(*) from demo_big where status = :P_ENTER_INVALID
N                  N              fjjm63y7c6puq  0             select count(*) from demo_big where status = :P_STATUS2
N                  N              1qx03gdh8712m  0             select count(*) from demo_big where status = 'INVALID'
N                  N              2jm3371mug58t  0             select count(*) from demo_big where status = 'VALID'

The two child cursors:

-- find the cursor id
select sql_id, child_number, bucket_id, count, is_bind_sensitive, is_bind_aware, sql_text
from v$sql s
left join v$sql_cs_histogram h using (sql_id, child_number)
where upper(s.sql_text) like 'SELECT%FROM DEMO_BIG WHERE%'
and s.sql_text not like '%v$sql%'
;

-- check the execution plan for both child cursors
select * from table(dbms_xplan.display_cursor('3kpu54a461gkm',0));
select * from table(dbms_xplan.display_cursor('3kpu54a461gkm',1));

-- see the plans in the SGA
select * from v$sql_plan where sql_id = '3kpu54a461gkm';
select * from v$sql_plan where sql_id = 'fjjm63y7c6puq';

Now the strange thing is: the first child cursor is using a FULL table scan. But the first execution was done using the VALID value and should have resulted in the index range scan. And the second child cursor does not even have an execution plan!

NOTE: cannot fetch plan for SQL_ID: 3kpu54a461gkm, CHILD_NUMBER: 1
      Please verify value of SQL_ID and CHILD_NUMBER; 
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)

What is going on here? v$sql has a column EXECUTIONS which tells us how often a child cursor was executed. It is always 0 for child 1 of the DBMS_SQL cursor!

I did several more tests using DBMS_SQL, even a case where the cursor was closed and opened several times. All with the same result.

Interpreting the results

I’m still not exactly sure what is going on there. It seems as if bind peeking and adaptive cursor sharing do not work with DBMS_SQL. But why do we then see two child cursors? It seems as if the different parameter values at least have the effect that a new child is created. And this happens only when there is a need for a different execution plan. But where is the plan for that? I still have some doubts. Maybe the execution plan in v$sql is lying in this case? Since DBMS_SQL goes deep into the internals, it might be that some of the normal behaviours are not reflected in some of the views.

The cursor itself is in the private SQL work area and I never checked that. Another approach would be to set up a scenario where we can measure the performance difference. The test case I used was too small to see a decisive difference between the two possible plans.

Also, we have to remember that the need for DBMS_SQL is rare. A normal select with bind parameters is certainly not a case where we need dynamic SQL. A more typical case would be a cursor/statement where we do not know at compile time which columns are returned. Then we can use DBMS_SQL to analyse the structure of such a cursor and react to that.

However, if we build some kind of dynamic framework and think about using DBMS_SQL, we should rethink our strategy. Maybe it is easier to provide all the possible cases as PL/SQL APIs, thereby compiling at creation time, instead of building the statement in a completely dynamic fashion but suffering some essential drawbacks.

Recommendations

1) Avoid DBMS_SQL; consider using native dynamic SQL (execute immediate) instead (see the sketch below).
2) If you have a skewed data distribution, make sure your plans are bind sensitive.
3) If you can guarantee an even data distribution, consider adding the NO_BIND_AWARE hint. This should be needed only in some extreme situations (very high performance requirements or cursor cache issues).
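
As an illustration of recommendation 1, the DBMS_SQL test from above shrinks to a few lines of native dynamic SQL (a sketch using the demo_big table from this post):

declare
  l_cnt pls_integer;
begin
  -- native dynamic SQL with a real bind variable
  -- (set serveroutput on to see the result)
  execute immediate 'select count(*) from demo_big where status = :P_STATUS'
    into l_cnt
    using 'VALID';
  dbms_output.put_line('count = '||l_cnt);
end;
/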

Appendix

The function that I used previously:

create or replace function hist_numtochar2(p_num number
,p_trunc varchar2 :='Y') return varchar2
-- Author: Martin Widlake
-- Source: https://mwidlake.wordpress.com/2009/08/11/
is
  m_vc varchar2(15);
  m_n number :=0;
  m_n1 number;
  m_loop number :=7;
begin
  m_n :=p_num;
  if length(to_char(m_n))>36 then
    --dbms_output.put_line ('input too short');
    m_vc:='num format err';
  else
    if p_trunc !='Y' then
      m_loop :=15;
    else
      m_n:=m_n+power(256,9);
    end if;
    --dbms_output.put_line(to_char(m_N,'999,999,999,999,999,999,999,999,999,999,999,999'));
    for i in 1..m_loop loop
      m_n1:=trunc(m_n/(power(256,15-i)));
      --    dbms_output.put_line(to_char(m_n1));
      if m_n1!=0 then m_vc:=m_vc||chr(m_n1);
      end if;
      dbms_output.put_line(m_vc);
      m_n:=m_n-(m_n1*power(256,15-i));
    end loop;
  end if;
  return m_vc;
end;
/

OTN Appreciation Day: the OTN forum

The SQL and PLSQL forum

Today is OTN Appreciation Day, so I decided to write a short article about my favourite Oracle feature. It is the OTN SQL and PL/SQL forum! Reading and posting on this forum made me a better developer. I also frequently visit other forums like Database General, Apex and lately Oracle JET, but not as intensively as the SQL and PL/SQL forum.

My OTN forum handle is Sven W.

[Image: screen-shot-2016-10-11-at-23-57-04]

greetings/honorable mentions

BluShadow – for moderating the forum and creating the FAQ list

Frank Kulash – for always answering in a nice and calm way, insistently leading the OP to the final solution.

Billy Verreynne – for making me rethink old habits (like naming conventions) and for providing excellent and well thought-out source code examples.

William Robertson – for always being spot on.

Odie_63 – for answering some of my questions, for example by taking apart the internal meaning of ROWIDs for external tables.

Boneist and ApexBine – for making their statements in a male-dominated industry.

notable threads

About naming conventions in PLSQL

Coding Standards and Code Critique Request

features / ideas

PLSQL 101: Datatypes – DATE

About Ansi Joins

Introduction to regular expressions

SQL Assertions / declarative multi-row constraints

other

The 10 database commandments

Are databases still nice and quick and simple to use like they once were?

Fun stuff from the past

Developers can sometimes be funny, and sometimes they just need to blow off some steam.

Here is a collection of thread snippets from the past of the forum. Some years ago I collected memorable posts, but I don’t do this anymore. So the collection is slightly outdated now, but I tried to add a few recent quotes as well.

“OP” is used when I cite the “Original Poster” without giving the real forum handle. Otherwise usually only the first name of some of the well-known members is used. My own comments on the original quotes are in italic. Different threads are separated by a line.

Best of Forum 2007


OP> want a procedure or a block which will give a tree like structure using loops and cursors
3360> Why? Do you have a requirement to make this as slow as possible?


Dave Hemming> select ‘don”t stop me now, I”m having such a good time’ from dual

Special thanks to Dave! This is one of my favourite Queen songs.


APC> Late breaking newsflash: users are not developers!


Damorgan> Writing “working on tables” is as informative as writing “using keyboard and mouse.” See your instructor.


OP> SO CAN YOU PLEASE ASSIST ME ON THAT .

APC> On most keyboards the CAPSLOCK key is halfway down the lefthand side. Please learn to use it.


Billy> The features and flexibility and power of Oracle is NO substitution for a solid relational design.


Sentinel>

insert into table (column) values ('I have John''s shoes');

Sentinel>Of course what I’m doing with his shoes is a completely different story.
John Spencer> Since I only have one pair, I had to go to work barefoot this morning 🙂
John Spencer> Shoeless John


Damorgan> Without a context your posting is just a waste of perfectly good electrons.


OP>Is it possible to do something like this from a running program ?

Billy>Is it possible to jump from an aircraft at 5000 feet? Yes.

Billy>Of course, this has to be questioned as when it is done without a parachute, the changes of survival are very very slim. Never mind that if you’re the pilot, you are sending that plane down.

Billy>Yes, columns can be renamed dynamically from a running program. But it makes as much sense as jumping from a perfectly capable plane without a parachute.


APC> In general computing is about precision and removing ambiguity. That’s why the industry is full of pedants. Maddeningly, there is a direct correlation between pedantry and good programming.


Best of Forum 2008


Billy>Be careful about making conclusions using observation only. Simple example. Observe how the WARP_SPEED hint makes the SQL go faster:

SQL> set timing on
SQL> select count(*) from all_objects;
COUNT(*)
----------
10460
Elapsed: 00:00:09.60
SQL> select /*+ WARP_SPEED */ count(*) from all_objects;
COUNT(*)
----------
10460
Elapsed: 00:00:00.50
SQL>

The empirical conclusion is that the WARP_SPEED hint made the query faster by 90%.

This conclusion (based on observation) is incorrect. The real reason why the 2nd query is faster is that it made substantially less physical I/O than the 1st query. The 1st query loaded a lot of data needed into the buffer cache. The 2nd query found that data there and had no need to perform the same expensive and slow physical I/Os that the 1st query did. Nor is there a WARP_SPEED hint.

So be very careful on making assumptions and basing conclusions solely on observation.


Sven> If you provide some example data and your insert statement as everybody suggested, we could give much better solutions without guessing all around.

Hans> But then we would only average one reply per question. And we would not get to spend as much time on the forums, getting to know each other so well.


Billy>Users make incredibly poor Oracle gods. That is what the DBA role is – godlike in Oracle and should be treated with care and respect and given only to those persons responsible for actual database administration. And no, users cannot administrate an Oracle database either.


WhiteHat>Hi all,

The Powers that Be at my work have decided to cut back on the number of different systems we have by re-writing a lot of them from scratch and combining functionality in order to reduce downtime caused by ETL processes and the like. so as a result I’m trying to implement the unified theory of everything in my stored proc and I can’t get it to work. Specifically I’m having difficulties combining quantum mechanics and general relativity into a single SQL statement. I’m getting ORA-06502: numeric or value error: String theory conversion error at line 3523

Is this possible in oracle v150.2.0.5 or will I have to upgrade to 153gR2?

being friday afternoon, my brain isn’t really in gear so I’m certain I’ve overlooked something simple. I suspect my basic architecture assumptions are incorrect but not sure. any advice?

Cheers,
WH.
Dave>It’s possible you’re trying to imply a quantum function to a relativistic variable. You’ll need to explicitly CAST it first.

Of course, I definitely think you need to show us your code. 🙂
Leo>Thats not always necessary as especially the quantum function sometimes can be decrypted itself by Oracle.

But definitely we need the code from line 3517.3 until line 3527.8


Billy>If this was ancient times, and you wrote this code to run on any of my databases, I would have handed your over to the SQL Inquisition for showing you the error of your ways.


Sarma>OP already told it is just an exercise for him on PL/SQL. In short, home-work.
Billy>Oh… I see.. You mean like attending Police College and committing, as home work, crimes like armed robbery, assault with a dangerous weapon, vehicular manslaughter, arson, and so on.
Billy>Yeah, I can see how this can teach you how to enforce the law.. NOT!


Justin>It’s generally helpful to specify the actual exception you’re getting rather than just saying “raises an exception”. Oracle error numbers and error strings are exceptionally useful debugging tools.


Michael O>Isn’t this the Oracle Support Forum where all wishes are granted?


Unknown>It is a basic problem that we face too here in forums. How do we show The Good Stuff of Oracle to a SQL-Server fanboy that cannot bare to empty his cup of SQL-Server in order to taste some Oracle?

Or dealing with a Java zealot that has been bitten badly by the J2EE religion, and sees Oracle as a mere persistence layer.. and not good for anything else?

Some people are so convinced that they are so absolutely right, they cannot even entertain the idea of something alternative.. never mind the idea that they just may be horribly wrong.


WhiteHat>[clippy]

Hi! it looks like you’re trying to use Oracle!

Do you:
( ) want to INSERT data to a table
( ) want to UPDATE existing data in a table
( ) ALTER the structure of the table
( ) search the internet for other queries

[clippy]

It’s not clear what you’re trying to do:
as we understand it it seems like:
you have a newly created table and you want to make it so there’s data in it is this correct?


OP> what if i will have thousands of record.i cant write
them all.
Dave> BANGS HEAD ON DESK
Dave> Instead of the select … from dual union select … from dual… perhaps you could use YOUR OWN TABLE.
OP> yeah i told you that i got it already so i dont need to bang head on desk……thanx neways


OP> Can you just tell me which is the best oracle performance tuning tool in the market? It should be free download.
Guido> It’s name is BRAIN (Biological Resource for All Informations Needs). If you really need to download that you should opt for another career path, I guess. 😉


OP>PL DESCRIBE U’R TABLE WORD.
Billy>Use proper English and not IM SPEAK as this is a technical forum and not some SMS teenage chat room.
padders>Please note however that it is considered acceptable to refer to someone’s ‘leet SQL skillz’.
Dave>Although it’s worth first establishing a reputation that clearly indicates that you do not think “irony” means “similar to iron”.


OP>i have run the package and it will take execution time more than 1 hour, how can i redure the execution time? any one help on this issue.
Matt>Remove all the code from the package.


shoblock> I really wish people would read the responses before they complain that they
aren’t working as desired.
APC> Aw c’mon. Next you’ll be wishing people would look stuff up in the documentation instead of straightaway posting questions here.


Laurent> of course regexp could save ink when printed


OP>i’ve try to add commit; but doesn’t work.

Dave>Glad to see you picked up on the need for a more complete explanation than “doesn’t work”.
Dave>Oh wait, you didn’t.

Someoneelse>That’s a new error in 11g:
Someoneelse>ORA-00042 DOESN’T WORK


Someonelse>IF Using_SQL_Server THEN
Someonelse> EXIT Oracle_Forums;
Someonelse>END IF;


Keith>If thats not clear, I’ll join the hitting head against the brickwall gang.


Billy> Do you fix the symptoms? Or do you fix the problem?
Padders>Erm. The problem I think. Aren’t we supposed to hit the symptoms with the lead pipe?


OP>Thread with title like “urgent help in sql plz ”
Billy>STOP!!

For that you need to fill in the “It Is Truly Urgent” form via the request link on the Oracle Forum main page. In triplicate. Submit it to the moderator. Wait for an urgency verification key to be supplied by the moderator. And only then can you post your urgent posting by attaching the urgency key to it for verification purposes.

Since you did not do it, your account is being reviewed for a possible 6 month suspension. You will also be prohibited from practicing Oracle during that time as you have illustrated the lack of common sense by posting this totally uncalled for and unwarranted “urgent” posting in this forum. And not applying common sense when using Oracle can cause serious injury to your database, cause serious damage to the scalability and performance of your applications, and may just cheese off your Oracle DBA resulting in a lead pipe being taken to your knee caps.


William>A right outer join is just a normal outer join written backwards to confuse everyone


More forum fun


John Stegeman> Last time I checked (1 minute ago), there is no “PL/SQL for SIM cards”


John Stegeman> Or even a entry in the mystical magical caverns of the registry, if none of those are set.
Ed Stevens> Please!  I’ll do anything!  I wash your car!  I’ll mow your lawn! Just don’t send me to the registry!


Someoneelse> We are under attack!
The Database General forum is being flooded with spam!
Here are some of the userids:  …
What the hell, is this a new feature of Jive?

jgarry> You want Jive to pump up social media, Jive pumps up social media.

jgarry> On other places I’ve been surprised by being blocked for too much posting.
I’m really not a robot!  It’s hard to tune that limit right, and some people may compose things beforehand.
But worse, spammers would consider it damage and route around it, with whatever they need to do to have numerous logons.
Like when I knocked some fuzzy balls off the umbrella next to my pool and little black widow spiders scattered everywhere.

Dude!> There is already a feature in place that does not allow people to post one message right after another without waiting for a while; 5 min. if I remember correctly.

BluShadow> It’s 30 seconds Dude!, not 5 minutes.

Dude!> Ah well, time is relative

KayK> all you need is a DeLorean


Dude!> How long will it take until everything implodes?
Billy>  Everything? I assume you are limiting “everything” to our solar system?
In that case, around 5 billion years from now, our sun will run out if fuel, shed its outer layers, and implodes into a white dwarf. Unfortunately it is too small to become a black hole. Which would be a kewl thing. Size some time matters.
Everything as in the universe? Guestimate is a 100 or so trillion years – depending on the theory you deem most likely (of which there are more than a few) describes the end of the universe. Implosion is just one of the theories. Perhaps an Asimov’s The Last Question end and beginning?

Dude!> I don’t worry so much about 5 billion years from now — not even history of the past 100 years is correct.


Billy> Disk space is cheaper than the effort to rebuild tables and indexes in order to reclaim space – and to support this effort as SOP.


William> So ‘QTR’ means ‘Quarter’? What is this, Twitter?


Jonathan Lewis> I got to the end to the first line (after the Hi) and thought: “we’re going to see a match_recognize() solution from Stew Ashton here”.

He was right.


“Re: What is the difference between select count(1) from tab and select count(*) from tab;”

Well, after some short ramblings about performance and table sizes, the gurus’ discussion got onto the right track.

Dave> One press on the shift key on my keyboard
William> “count(1)” is a nonstandard variation that takes more keystrokes and requires the parser to substitute “*” in place of the “1”, while making the person who wrote the query look foolish.
If you want an approximate result for a large data set quickly, have a look at the SAMPLE clause, e.g.

select count(*) * 20 from somebigtable sample(5);

Frank> Actually, on my keyboard, ‘1’ takes fewer keystrokes (depending on how you count) than ‘*’.  To type ‘1’, I just press the ‘1’ key, but to type ‘*’ I have to hold down the SHIFT key and then press the ‘8’ key.
Even though it’s that much harder to type, “COUNT (*)” is still better than “COUNT (1)”, for the reasons you mentioned.
Jonathan> You may be taking too narrow a view on the problem – although the correct view may, of course, be keyboard-dependent.  You need to step back from the 1/* dichotomy and consider the effect of parentheses on the problem.

On my keyboard (*) requires me to do:  {shift} 980 {release}   (a total of 4 keystrokes – or 5 finger movements)

but (1) requires me to do: {shift} 9 {release} 1 {shift} 0 {release}  (a total of 5 keystrokes – or 6 finger movements)

Note also that if you are a “classical typist” your are probably going to use {left shift}, which means a large movement to the 1, unless you use a numeric keypad – in which case the 9 requires you to make a large lateral movement with your right hand (which can then stay in place until after the 8 stroke, of course).

Youngsters these days! Just don’t think things through properly!   (;)
William> Perhaps the round bracket keys are not shifted on some keyboards? I don’t think I’ve ever seen that though.
rp0428>Can you provide a specific reference to ANY of your books or blogs that cover an advanced topic such as this?

Sometimes ‘youngsters’ can benefit from seeing the explanation in context with some example code, trace files and execution plans.
Ospin> Just to inform for people with spain keyboards, this keyboards has “(” in shift+8 and “)” in shift+9, so is quit bit easy type “(8)”, so less finger movements and same results
John> To really figure this out, we probably need sql_trace for brains and bodies – when is Oracle going to wake up and put a bunch of SQL coders under a functional MRI scanner and do metabolic analysis to determine the precise effort involved?
However, i’ll say this: even if select(1) was an order of magnitude easier to type than select (*) (which it’s not), the dissonance and mental stress caused by seeing select(1) is probably enough to kill a few million brain cells of my own (not to mention people who come after me and have to read my code)…


All time classics:
Frameworkia – the NEW PLSQL development standard

News for developers from #OOW16 about 12.2

The following information I obtained during various sessions at OOW16. Sometimes it resembles just an impression I got while a certain feature was mentioned during a talk. So use the information with caution.

new db version 12.2

So the 12.2 version is out there now, but most of us won’t be able to use it very soon. Why? Because Oracle chose a cloud-first approach. That means 12.2 is available for the Oracle Cloud, but not for other installations yet. And my guess would be that enterprise edition will get it first and standard edition (SE2) might get it a bit later.

Still there are many exciting new features in it and it makes sense to get ourselves familiar with them. So here are my favourites with a slight focus on developer relevance.

There is documentation about 12.2 new features out already, but it is extremely hard to find. It is hidden in the docs for the new Exadata Express Cloud service. So here is the quick link to see the new features guide.

To summarise: many of the new features focus on improving availability of the database, essentially giving developers and DBAs more options to keep the applications running even when encountering errors or while changes take place.

The second set of enhancements seems to be things that further extend the capabilities of features that were added in 12.1.

And of course a lot of performance-improving possibilities have been added (this seems to be typical for an R2 version).

longer identifiers (128 chars)

Identifiers can now be up to 128 characters long. This applies to almost all objects including table, column, index and constraint names. Long awaited and finally there.

I suggest not to overdo it at the beginning; I suspect that many external tools that work with the Oracle database might not fully support such long identifiers yet. And of course some common sense when naming your tables and columns should be applied as well. For example, please do not repeat the table name again in the column name. However, it will help greatly when you apply naming conventions to foreign key constraints or indexes.

There seems to be a new plsql constant Ora_Max_Name_Len that holds the maximum possible identifier length in your database. It can be used at compile time to set the sizes of variables. There is an example in the plsql section later.

SQL functions

improved LISTAGG

LISTAGG now has an option not to error out when the list gets too long. Instead the list is cut off and a filler (an ellipsis “…” by default) is appended. Additionally we can append the number of trimmed values.

To do so add the new overflow clause, for example:

ON OVERFLOW TRUNCATE WITH COUNT
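
Here is a minimal sketch of the full clause in context (assuming the classic EMP demo table):

select deptno,
       listagg(ename, ', ' on overflow truncate '...' with count)
         within group (order by ename) as enames
from emp
group by deptno;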

Unfortunately LISTAGG is still not able to build distinct lists. If you want that, vote for this feature in the OTN database ideas space: https://community.oracle.com/ideas/12533

conversions with error handling

All conversion functions now have a new clause that decides how an error is handled when the conversion fails because of a formatting error. CAST, TO_NUMBER, TO_DATE and all the other TO_xxx functions got this improvement. The only exception is TO_CHAR, because everything can be converted into char.

CAST_IN_DB12_2.gif

examples:

CAST(prod_id AS NUMBER DEFAULT 0 ON CONVERSION ERROR)
TO_DATE(last_processed_day DEFAULT date'2016-01-01' ON CONVERSION ERROR, 'DD-MON-RRRR','nls_date_language=AMERICAN')

So in the case where the conversion would result in an error, we can provide an alternative default value instead. This is highly useful!

Additionally there is a new function VALIDATE_CONVERSION which can be used to find values in a column where conversion would not be possible. It returns 0 for invalid values and 1 for correct ones.
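
A small sketch of how this could be used to find the bad rows (table and column names are made up):

-- rows where the string would not convert to a date
select last_processed_day
from imported_data
where validate_conversion(last_processed_day as date, 'DD-MON-RRRR') = 0;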

new approximation functions

In 12c R1 we already got APPROX_COUNT_DISTINCT.
New are APPROX_PERCENTILE and APPROX_MEDIAN.
These approximation functions can now be used in materialized views!
And there is a parameter that is able to switch the exact functions to the approximate versions.

alter session set approx_for_aggregation = 'TRUE';

Also new are APPROX_COUNT_DISTINCT_DETAIL, APPROX_COUNT_DISTINCT_AGG and TO_APPROX_COUNT_DISTINCT, allowing us to build hierarchies with those aggregation functions. Something we were not able to do in the past without rerunning the function on each aggregation level.
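
A small sketch of the basic functions (assuming the HR demo schema):

select department_id,
       approx_median(salary) as median_salary,
       approx_count_distinct(job_id) as distinct_jobs
from employees
group by department_id;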

case insensitive linguistic queries

It is now possible to do searches using a case-insensitive option, e.g. BINARY_CI, for various functions like LIKE. Those functions are able to use stemming. There is not much detail about it yet, but it sounds like it can be used instead of putting UPPER around the columns. At the moment I have no further information about possible performance impacts of that.
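
A minimal sketch of how this could look with the new column-level collation (the table is a made-up example):

create table people (
  last_name varchar2(50 char) collate binary_ci
);

insert into people values ('Smith');

-- matches, because the column collation is case insensitive
select *
from people
where last_name = 'SMITH';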

greatly enhanced JSON support

better simplified JSON

There is a complex JSON syntax using all the JSON functions and a simplified JSON syntax using dot notation. In general a lot of work was done to enhance the simplified JSON syntax. For example, you can now access elements of an array.

JSON_EXISTS with predicates

I’ve seen short examples where “and” expressions/filters using an && operator were done inside some SQL statement using JSON_EXISTS. Essentially JSON_EXISTS allows us to add filters to the path expression using a solid set of operators.
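
A sketch of such a filter, borrowing the common purchase-order example (the element names are assumptions):

select count(*)
from j_purchaseorder
where json_exists(po_document,
        '$?(@.CostCenter == "A50" && @.User == "SBELL")');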

build JSON with SQL/JSON

Using a similar syntax as the older SQL/XML functions (XMLELEMENT, XMLAGG), we now have some new SQL functions to help us create a JSON document. The output of those functions is VARCHAR2.

JSON_OBJECT, JSON_OBJECTAGG, JSON_ARRAY, JSON_ARRAYAGG
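
A short sketch combining them (assuming the classic DEPT/EMP demo tables):

select json_object(
         'dept'  value d.dname,
         'staff' value json_arrayagg(
                         json_object('name' value e.ename,
                                     'sal'  value e.sal)
                         order by e.ename)
       ) as dept_json
from dept d
join emp e on e.deptno = d.deptno
group by d.dname;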


Views on JSON documents / Data Guide

The Data Guide allows us to analyse JSON documents and automatically build views and virtual columns for elements in our JSON object.

The data dictionary has a list of all columns with an enabled data guide

USER|ALL|DBA_JSON_DATAGUIDE

We can access the data guide in SQL with the functions JSON_DATAGUIDE or JSON_HIERDATAGUIDE, or in PLSQL with the function DBMS_JSON.GET_INDEX_DATAGUIDE.

The plsql function needs a JSON search index to work. If such an index exists, then this function should be preferred, since it works on persisted index data. If the JSON documents in the column all have a completely different structure, then it might be better to use the SQL functions and not use a JSON search index.

See also: Multiple Data Guides per Document Set

Based upon the data guide more operations are possible. For example we can easily create views and/or virtual columns that extract information from the JSON document as relational data.

To create a virtual column that shows data from our json column in a relational way we can use DBMS_JSON.add_virtual_columns.

To create a view that exposes json document data in a table-like structure, we can use DBMS_JSON.create_view. This is based upon a JSON_TABLE function.
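
A sketch of how this could look for the j_purchaseorder table (the view name is made up; a JSON search index is assumed to exist):

begin
  dbms_json.create_view(
    'PO_VIEW',
    'J_PURCHASEORDER',
    'PO_DOCUMENT',
    dbms_json.get_index_dataguide('J_PURCHASEORDER',
                                  'PO_DOCUMENT',
                                  dbms_json.format_hierarchical));
end;
/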


JSON search index

example

CREATE SEARCH INDEX po_dg_only_idx ON j_purchaseorder (po_document) FOR JSON;


GeoJSON support

If the JSON includes coordinates that conform to the GeoJSON standard, then it is possible to do geolocation searches on that JSON document.

GeoJSON also is supported during Spatial Queries and can be converted directly into Spatial Geometry.

more JSON enhancements

  • the JSON sql functions are now available in plsql as well => especially IS JSON will be useful
  • highly improved JSON search index
  • new predefined plsql object types JSON_OBJECT_T, JSON_ARRAY_T, JSON_ELEMENT_T, JSON_KEYLIST_T, JSON_SCALAR_T
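
A tiny sketch using one of the new object types:

declare
  v_obj json_object_t;
begin
  v_obj := json_object_t.parse('{"name":"Sven","posts":2}');
  v_obj.put('active', true);  -- add a boolean member
  dbms_output.put_line(v_obj.to_string);
end;
/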

MV enhancements

ON STATEMENT refresh for MV

We had ON DEMAND and ON COMMIT before. Now materialized views can also be refreshed after DML changes to the base tables, without the need to wait for the commit.
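
A sketch of the new refresh mode (the tables are made up; the usual fast-refresh restrictions, like rowids in the select list of a join MV, still apply):

create materialized view order_items_mv
  refresh fast on statement
  as
  select o.rowid as o_rid, i.rowid as i_rid,
         o.customer_id, i.product_id, i.amount
  from orders o, order_items i
  where i.order_id = o.order_id;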

refresh statistics view

This was long overdue! We are now able to see statistics on when the materialised view was refreshed. The name of the data dictionary view is not completely clear to me yet, but I suspect DBA_MVREF_STATS.

There is also a new package DBMS_MVIEW_STATS that can be used to organise the collection and the cleanup of those statistics.

improvements for partitioned objects

  • MOVE TABLE, SPLIT PARTITION and some other partition operations can be done online. That means they will not disrupt ongoing DML operations. And this includes automatic index maintenance (see the sketches after this list).
  • CREATE TABLE FOR EXCHANGE prepares a non-partitioned table to be exchanged with a partition.
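
A few sketches of these commands (table and partition names are made up):

-- reorganise a table online
alter table sales move online;

-- split a partition online
alter table sales split partition p_max
  at (date '2017-01-01')
  into (partition p2016, partition p_max) online;

-- create a matching empty table for a partition exchange
create table sales_stage for exchange with table sales;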


superfast analytics

Join Groups

It is now possible to declare columns as a join group. A JOIN GROUP is a new database object and it greatly increases the performance of joins and analytical queries over those columns.
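
A sketch (assuming both tables are populated INMEMORY; all names are made up):

create inmemory join group sales_products_jg (
  sales (product_id),
  products (product_id)
);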

In Memory expressions

An INMEMORY expression is a special kind of virtual column that has an additional INMEMORY attribute attached to it. Through that attribute special optimisations kick in that speed up access to this expression. This method is also used by some of the JSON optimisations.

A data dictionary view has been added to help find expressions that could profit from such virtual columns.

dba|all|user_expression_statistics

Analytical views

Analytical views are an easy way to define dimensional hierarchies and how they are rolled up. An Analytical View is a new object type.

This seems to be a completely new feature, slightly based on the older dimensional cube possibilities. I’m not sure if this will be available for all editions later or if it will be an additional-cost feature.


PL/SQL stuff

pragma deprecate

This feature was surprising to me, but I find it hugely interesting.

You can now declare plsql methods as deprecated by using this pragma. The “only” thing that happens is that if such a function is used, you will get a compiler warning (PLW-something).

syntax example

function myFunc return varchar2
is
   pragma deprecate(myFunc, 'myFunc is deprecated. Use yourFunc instead!');
begin
  return 'x';
end myFunc;

So far I didn’t miss that feature, but I immediately have some projects in mind where I would use it.

static expressions instead of literals

All areas where a literal is to be used can now be replaced by a so-called “static expression”, e.g.

declare
 myTab varchar2( Ora_Max_Name_Len + 2);
 myObject varchar2(2* (Ora_Max_Name_Len + 2));
begin
 myTab := '"Table"';
 myObject := '"ThisSchema".' || myTab ;
 ...

This works only as long as the expression can be resolved at compilation time.

As such it will not improve or change writing dynamic queries; however there is some impact for deployment scripts. So wherever you previously needed manually written compiler directives, this might be a new alternative.

minor changes

  • ACCESSIBLE BY for sub modules
  • bind plsql only datatypes to dbms_sql => this essentially just finishes what 12.1.0.2 already allowed for anonymous blocks and native SQL (execute immediate).
  • some enhancements for PL/scope: SQL_ID for static sql statements, reports where native SQL is used
  • slight enhancements for dbms_hprof: can now include SQL_IDs,  improved reporting   for sub cursors

new code coverage tool

It splits code into blocks and tells the developer if and how often those blocks are used during some sample run. Code that is not reached is coloured differently (red).

Blocks can be marked with a pragma

pragma coverage('NOT_FEASIBLE');

So that this block is not marked when it is not covered. The tool is currently only a new DBMS package, but it will be integrated into the next SQL Developer version.

dbms_plsql_coverage_….


developer.oracle.com

It is Oracle’s new landing page for developers. Strongly influenced by Steven Feuerstein.

screen-shot-2016-09-20-at-12-21-18

debugger enhancements

The debugger can now execute a SQL statement when it encounters a breakpoint. This is not yet implemented directly in SQL Developer, but it will be there in a future version. In general the new SQL Developer already supports the enhanced debugging capabilities of the database, but not everything is possible yet.

The debugger is also not available for the cloud yet. The protocol used is not suited for cloud operations.

EBR (edition based redefinition)

Editioned objects that are no longer in use can be cleaned up automatically in the background.

All we have to do is drop the edition, even in cases where this was not possible before.

CBO

CBO improved adaptive handling

The adaptive features of the CBO that are already in the database can now be controlled in a better way. And the defaults are set so that upgrades from 11g result in very similar behaviour.

OPTIMIZER_ADAPTIVE_FEATURES = TRUE or FALSE => deprecated

replaced by two new parameters

OPTIMIZER_ADAPTIVE_PLANS => default = TRUE

OPTIMIZER_ADAPTIVE_STATISTICS => default = FALSE

So if you switch from 12.1 to 12.2 and if you had OPTIMIZER_ADAPTIVE_FEATURES=TRUE then you might want to set the second parameter to TRUE again. If you switch from 11g to 12.2 you probably want the defaults.
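
For example, to restore the 12.1-like behaviour after an upgrade:

-- emulate the deprecated OPTIMIZER_ADAPTIVE_FEATURES = TRUE
alter system set optimizer_adaptive_plans = true scope=both;
alter system set optimizer_adaptive_statistics = true scope=both;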

The dbms_stats package got some improvements that go together with it.

Dbms_stats: auto_stat_extensions

At the moment not much is known about that yet.

Mview query rewrite

The CBO can now rewrite a query to use a materialised view even if the view is stale at that moment. The materialized view logs are taken into account, so that correct results are still returned.

more CBO based things

  • new histogram types
  • GTTs use session-private statistics by default
  • automated SQL plan management
    Even after a DB upgrade the capture of the old plans can be done by setting the parameter OPTIMIZER_FEATURES_ENABLE = ‘11.2.0.4’
    Plans can then evolve later
  • copy optimiser metadata from pre-production to prod:
    EXPDP/IMPDP or dbms_stats.transfer_stats

application containers

Instead of just having a CDB and several PDBs in it, we can now define an Application Container. This serves as a kind of general repository for all PDBs that are part of that application container.

This is highly interesting.

The typical use case seems to be that you roll out your application to various customers (in the cloud) by using a PDB for each customer. All the common objects go into the application container, so that changes/enhancements can be rolled out to all customers at once or one after the other. The application container keeps track of your upgrade scripts and is able to “replay” those scripts in the specific PDBs. Only that it is not really a replay of the script; instead the objects are linked from the PDB to the Application Container. I think this is the same mechanism by which the CDB objects/metadata are currently made available in the PDBs.

Objects can be shared from the Application Container to the PDBs by using the SHARING clause in the create statement.

SHARING = NONE|METADATA|DATA

SHARING = METADATA would only share the object definition, while SHARING = DATA would also share the data.
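
A sketch of how an application install with shared objects could look (all names are made up):

-- inside the application root
alter pluggable database application my_app begin install '1.0';

create table app_lookup sharing = data (
  lookup_key   varchar2(30),
  lookup_value varchar2(100)
);

alter pluggable database application my_app end install '1.0';

-- later, inside a customer PDB
alter pluggable database application my_app sync;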

It is possible to combine application containers with edition based redefinition. Essentially it seems as if the editions are copied from the Application Container to the PDB just as all other objects are copied/linked. The Application Container keeps track of what needs to be installed in a specific PDB.

shards

A new, very special way to have a horizontally distributed database. It seems to cover the same concept as shards in some noSQL DBs. If the application is distributed over a set of databases, you can now declare them as sharded. And if you define a table as sharded, then its data will be split across the different databases. You use a shard key (very similar to a partition key) and that key defines where the data goes.
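
A sketch of a sharded table definition (names and the tablespace set are made up):

create sharded table customers (
  cust_id number not null,
  name    varchar2(50),
  region  varchar2(20),
  constraint customers_pk primary key (cust_id)
)
partition by consistent hash (cust_id)
partitions auto
tablespace set ts1;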

Exadata Express Cloud service

Exadata Express Cloud Service

Essentially you get a PDB with a few GB of space on an Exadata machine running Oracle EE for only 175$ per month. Price list here.

SQL Developer version 4.1.5 supports the Exadata Express Cloud service. You can drag and drop objects (tables, etc.) directly onto the connection to get them moved there.

The Exadata Express Service includes Apex, ORDS and SODA. So it can also serve as a kind of JSON document storage using a REST interface.


ORDS

Jetty is now supported for production mode. That means you can now run ORDS in standalone mode even in a production environment.

Since ORDS is not part of the database, this does not depend on database version 12.2.


Patrick Wolf and the seven little Shakeebs – a fairy tale

There was once upon a time an old Enterprise Architect who had seven little designers, and loved them with all the love of a mother for her children. One day she wanted to go into the xmlforest and fetch some json. So she called all seven to her and said: ’Dear designers, I have to go into the xmlforest, be on your guard against the Wolf; if he comes in, he will consume you all–skin, pages, and everything. He often disguises himself with a different user interface, but you will know him at once by his rough certificate and his wrong password.’ The kids said: ’Dear mother, we will take good care of ourselves; you may go away without any anxiety.’ Then the old one bleated, and went on her way with an easy mind.

It was not long before someone knocked at the login screen and called: ’Open the door, dear children; your mother is here, and has brought something back with her for each of you.’ But the little kids knew that it was the wolf, by the rough certificate. ’We will not open the screen,’ cried they, ’you are not our mother. She has a soft, pleasant voice, but your certificate is rough; you are the wolf!’ Then the wolf went away to a hacker and bought himself a great lump of cross site scripts, ate this and made his certificate trusted with it. Then he came back, knocked at the login of the application, and called: ’Open the door, dear children, your mother is here and has brought something back with her for each of you.’ But the wolf had laid his black password against the window, and the children saw it and cried: ’We will not open the door, our mother has not black password like you: you are the wolf!’ Then the wolf ran to a developer and said: ’I have hurt my feet, rub some SQL over them for me.’ And when the developer had rubbed his feet over, he ran to the dba and said: ’Strew some white injection over my feet for me.’ The dba thought to himself: ’The wolf wants to deceive someone,’ and refused; but the wolf said: ’If you will not do it, I will devour you.’ Then the dba was afraid, and made his passwords white for him. Truly, this is the way of outsourcing database management.

So now the wretch went for the third time to the house-door, knocked at it and said: ’Open the door for me, designers, your dear little mother has come home, and has brought every one of you something back from the xmlforest with her.’ The little kids cried: ’First show us your password that we may know if you are our dear little mother.’ Then he put his SQL injected password in through the window and when the kids saw that it was white, they believed that all he said was true, and opened the backdoor. But who should come in but the wolf! They were terrified and wanted to hide themselves. One sprang under the table, the second into the Attribute Dictionary, the third into the Application Dashboard, the fourth into the SQL Workshop, the fifth into the Cross Page Utilities, the sixth under the Migration bench, and the seventh into the Apps Gallery. But the wolf found them all, and used no great ceremony; one after the other he devoured them down his throat. The youngest, who was in the Apps Gallery, was the only one he did not find. When the wolf had satisfied his universal theme appetite he took himself off, laid himself down under a jstree in the green field project outside, and began to sleep. Soon afterwards the old architect came home again from the xmlforest. Ah! what a sight she saw there! The back-door stood wide open. The table, dashboards, and benches were thrown down, the workshop lay broken to pieces, and the quilts and pillows were pulled off the websheets. She sought her designers, but they were nowhere to be found. She called them one after another by name, but no one answered. At last, when she came to the youngest, named Shakeeb, a soft voice cried: ’Dear mother, I am in the Apps gallery.’ She took the kid out, and it told her that the wolf had come and had eaten all the others. Then you may workspace-imagine how she wept over her poor designers.

At length in her grief she went out, and the youngest Shakeeb ran with her. When they came to the green field project, there lay the wolf by the jstree and hadooped so loud that the svn-branches shook. She looked at him on every side and saw that something was moving and struggling in his gorged belly. ’Ah, heavens,’ she said, ’is it possible that my poor designers whom he has swallowed down for his supper, can be still alive?’ Then the kid had to run home and fetch scissors, a theme roller, a needle and thread, and the architect cut open the monster’s bitbucket, and hardly had she made one cut, than one little kid thrust its head out, and when she had injected farther, all six sprang out one after another, and were all still alive, and had suffered no injury whatever, for in his greediness the monster had swallowed them down whole. What rejoicing there was! They dockered their dear mother, and jumpstarted like oracle after the sun acquisition. The mother, however, said: ’Now go and look for some big antiviruses, and we will fill the wicked beast’s bitbucket with them while he is still on a timeout.’ Then the seven designers dragged the antiviruses thither with all speed, and put as many of them into this bitbucket as they could get in; and the mother bootstrapped him up again in the greatest haste, so that he was not aware of anything and never once stirred.

When the wolf at length had had his fill of sleep, he got on his trunks, and as the antibodies in his stomach made him very thirsty, he wanted to go to a well to drink. But when he began to walk and to move about, the antiviruses in his stomach knocked against each other and rattled. Then cried he:

 ’What rumbles and tumbles
  Against my poor bones?
  I thought ’twas six kids,
  But it feels like big stones.’

And when he got to the well and stooped over the water to drink, the heavy antiviruses made him fall in, and he drowned miserably. When the seven designers saw that, they came running to the spot and cried aloud: ’The wolf is dead! The wolf is dead!’ and danced for joy round about the well with their mother.


What do we learn from it?

A) Designers are not good at preventing XSS attacks.

B) Better use Patrick Wolf’s Advisor to check your application against SQL injection than to wait for the Big Bad Wolf.

C) Remember to check the Application Gallery; little gems hide in there.


Other fairy tales

planned but not implemented yet:

  • SnowWhite and RoseRed Themes
  • Three little stickers


Apex page call stack

Page Call Stack Implementation

Purpose

The goal is to have a way to navigate back to the previous page. Typically this is done by a button, sometimes also by a page branch or a link column. A global application item A_LAST_PAGE is used to hold the number of the last page.

The link inside the “Back to previous page” button is then rendered using this item with a page target: &A_LAST_PAGE.

This post describes how to set up the logic for populating this item using a complete page call stack, thereby allowing the user to navigate several pages back and forth as wanted, always having some logical choice set as the last page.

Setup

Needed are two apex application items, one application process and a plsql module. The page stack is implemented using an apex collection.

The names of the items are declared as constants in the plsql module. So if you use a different item name, then you need to change the constant value there too.
Also if your homepage is not number 1 then you must change that constant value.

application items

A_LAST_PAGE => will hold the page number. This item can be referenced wherever you need to go back to the last page.

A_PAGESTACK_POINTER => the current position in the page stack. Entries are never deleted when we move back in the stack, just overwritten. So in theory the stack could also be used to “go forward” again.

application page process
The application process needs to run in the page head of each page. You can / should set a condition so that it doesn’t run for some special pages like Login, Help, Feedback or certain modal pages. Modal pages are taken into account anyway if you use the default Apex 5 mechanism.

Apex_pageStack

database code

The logic should work in all Apex versions except for rule 4. The isModalPage subfunction should only be used if you are on Apex 5.0 already. Older versions will raise an error, because the page_mode column does not exist in previous versions of the apex_application_pages view.

  -- adds the current apex page to the page stack
  procedure managePageStack
  is
    -- constants
    co_modul_name          constant varchar2(100) := $$PLSQL_UNIT||'.managePageStack';
    c_appItem_last_page    constant varchar2(30)  := 'A_LAST_PAGE';
    c_appItem_page_pointer constant varchar2(30)  := 'A_PAGESTACK_POINTER';
    c_PageStack            constant varchar2(30)  := 'PAGESTACK';
    c_PageStack_max_size   constant number        := 500;
    c_Homepage             constant varchar2(30)  := '1';

    -- types
    type pageStack_t      is table of apex_collections.c001%type index by binary_integer; 

    -- variables
    v_pageStack           pageStack_t;
    v_current_page        varchar2(30);
    v_ps_pointer          number;
    v_last_page           varchar2(30);
    v_new_last_page       varchar2(30);

    -- sub modules
    function isModalPage(p_page in varchar2) return boolean
    is
      v_page_mode APEX_APPLICATION_PAGES.page_mode%type;
    begin

      select page_mode
      into v_page_mode
      from APEX_APPLICATION_PAGES
      where application_id = v('APP_ID')
      and page_id = p_page;

      return (v_page_mode='Modal Page');

    exception
      when no_data_found then
        return null;
    end isModalPage;

  begin
    ----------------------------------------------------------------------------
    -- read the current application items
    ----------------------------------------------------------------------------
    v_current_page  := v('APP_PAGE_ID');
    v_ps_pointer    := coalesce(to_number(v(c_appItem_page_pointer)),0);
    v_last_page     := coalesce(v(c_appItem_last_page),c_Homepage);

    ----------------------------------------------------------------------------
    -- make sure the collection exists
    ----------------------------------------------------------------------------
    if v_ps_pointer <= 0 then
      -- create the collection
      -- first page is always the homepage
      --if not apex_collection.collection_exists(c_PageStack) then
      apex_collection.create_collection(c_PageStack);
      --end if;
      -- just in case add the homepage
      apex_collection.add_member(c_PageStack, p_c001 => c_Homepage);
      -- point to first page
      v_ps_pointer := 1;
    end if;  

    ----------------------------------------------------------------------------
    -- load the apex_collection into a plsql collection
    ----------------------------------------------------------------------------
    select c001
    bulk collect into v_pageStack
    from apex_collections
    where collection_name = c_PageStack;

    ----------------------------------------------------------------------------
    -- implement rules
    ----------------------------------------------------------------------------
    case 

    -- rule 0
    -- if something is wrong with the collection and we can not transfer it to a plsql collection
    when v_pageStack.count = 0
    then v_new_last_page := c_Homepage;
         if not apex_collection.collection_exists(c_PageStack) then
           apex_collection.create_collection(c_PageStack);
         end if;
         apex_collection.add_member(c_PageStack, p_c001 => c_Homepage);
         v_ps_pointer := 1;

    -- rule 1
    -- if the current page is the same as we have currently in stack, then do nothing
    -- this happens during a redirect to the same page
    when v_current_page = v_pageStack(v_ps_pointer)
    then v_new_last_page := v_last_page;

    -- rule 2
    -- if the new page is the same as the last page
    -- then go back in the stack one step. But never go below the first page
    -- we probably went back to the previous page, therefore last page needs to be even one more back in the stack
    when v_current_page = v_last_page
    then v_ps_pointer    := greatest(v_ps_pointer-1,1);
         -- the new last page is even one more page back
         v_new_last_page := v_pageStack(greatest(v_ps_pointer-1,1));

    -- rule 3
    -- if we are back to the home page reset everything!
    when v_current_page = c_homepage
    then v_ps_pointer := 1;
         v_new_last_page := v_pageStack(v_ps_pointer);
    -- rule 4
    -- ignore modal pages
    when isModalPage(v_current_page)
    then v_new_last_page := v_last_page;

    -- rule 5
    -- the new page is not under the current or last page, so we need to add it to the stack and increase pointer
    else
      v_new_last_page := v_pageStack(v_ps_pointer);
      v_ps_pointer:=v_ps_pointer+1;

      -- check to lessen the impact of endless loops and other nasty things
      if v_ps_pointer > c_PageStack_max_size then
        v_new_last_page := v_pageStack(1);
        v_ps_pointer := 2;
      end if;

      -- are we at the end of the stack already?
      if v_PageStack.count >= v_ps_pointer then
        -- change page in stack
        -- use the update_member_attribute function,
        -- because that is slightly faster than the update_member function
        apex_collection.update_member_attribute(c_PageStack, p_seq => v_ps_pointer, p_attr_number => 1, p_attr_value => v_current_page);
      else
        -- add page to stack
        apex_collection.add_member(c_PageStack, p_c001 => v_current_page);
        -- no need to add it to the plsql collection too!
      end if;
    end case;

    ----------------------------------------------------------------------------
    -- set the last page item
    ----------------------------------------------------------------------------
    apex_util.set_session_state(c_appItem_last_page    , coalesce(v_new_last_page,c_Homepage));
    apex_util.set_session_state(c_appItem_page_pointer ,v_ps_pointer);

  exception
     when others then
       -- add your own custom logging framework here
       logger.logError( co_modul_name, 'Problem during management of page call stack !'||'pointer='||v_ps_pointer||', current page='||v_current_page||', last page ='||v_last_page);
       raise;
  end managePageStack;

Check scripts

A DBA can execute the following statements to see what is happening.

alter session set current_schema = apex_050000;
execute wwv_flow_security.g_security_group_id := 10;
select * from wwv_flow_collections$;
select * from wwv_flow_collection_members$
where collection_id in (select id from wwv_flow_collections$ 
                       where collection_name = 'PAGESTACK');

Side Notes

Ideas

When we have such a call stack it opens up some other possibilities.
For example we can implement a dynamic breadcrumb bar that shows not only the static way to one page, but instead shows the way we used in our session. And if we go back in the call stack, we could even show the pages that we just left.

Tuning

While implementing the collection part I wondered which is better (= faster) to use: apex_collection.update_member or apex_collection.update_member_attribute.

They both work slightly differently, but for my purpose (only one column) they are identical.

Here is the performance test that I did. The result is that apex_collection.update_member_attribute is almost a second faster when calling it 10000 times. This matched my expectation.

-------------------------------
-- test member_attribute 
-- performance test
set serveroutput on
declare
  v_time timestamp := systimestamp;
begin 
 apex_collection.create_collection('PageStack');
 -- add new page 10 to the stack
 apex_collection.add_member('PageStack', p_c001 => '10');

 v_time := systimestamp;
 -- update the member 10000 times
 for i in 1..10000 loop
    apex_collection.update_member('PageStack', p_seq => 1, p_c001 => '20');
 end loop;    
 dbms_output.put_line('Member           updated 10000 times: '||to_char(systimestamp-v_time));

 -- update the member 10000 times using attribute
 v_time := systimestamp;
 for i in 1..10000 loop
    apex_collection.update_member_attribute('PageStack', p_seq => 1, p_attr_number => 1, p_attr_value => '10');
 end loop;    
 dbms_output.put_line('Member attribute updated 10000 times: '||to_char(systimestamp-v_time));

 apex_collection.delete_collection('PageStack');
end;
/
Elapsed: 00:00:05.048
Member           updated 10000 times: +000000000 00:00:02.980000000
Member attribute updated 10000 times: +000000000 00:00:02.043000000

To run this yourself, you need to create a valid apex session state.
For example using Martin D’Souza’s logic: http://www.talkapex.com/2012/08/how-to-create-apex-session-in-plsql.html

Nasty surprises

When you create an apex collection using the apex_collection package, the collection name will always be stored in UPPERCASE. This cost me some time to identify, because when reading from apex_collections I used a lowercase name, and so the collection was never found. Remember to always write collection names in UPPERCASE.
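
A small sketch of the pitfall (this needs to run inside a valid apex session state, see above):

begin
  apex_collection.create_collection('PageStack');  -- stored as PAGESTACK
  apex_collection.add_member('PageStack', p_c001 => '10');
end;
/

-- finds nothing, because the stored name is uppercase
select c001 from apex_collections where collection_name = 'PageStack';

-- finds the member
select c001 from apex_collections where collection_name = 'PAGESTACK';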

about IDLE session timers in Apex 5

Introduction

In a recent project I wanted to use the idle session timer. The final goal was to give the end user a way to see how soon his session will end. It happens that a user enters a lot of data in a tabular form, work is interrupted, and all the effort is in vain because the idle timeout forces a new login.

Martin D’Souza wrote an article about that some time ago: http://www.talkapex.com/2009/09/enhanced-apex-session-timeouts.html. But it is a bit outdated now, and he does not reuse the Apex settings; he simply uses his own timer that just happens to match the Apex timeout. A better approach would be to reuse the Apex settings.

Surprisingly it was not so easy to find where the idle session time is set and how to use it programmatically. Here are my findings.

How to set the idle timeout

See also this older blog post from Patrick Wolf about setting the timeout in Apex 3.2. Some of it is still true; however, some of it has changed since then. For example, we can now set the timeouts also on workspace level.

Is your Oracle APEX session timing out?

There are 3-4 levels where the session idle timeout can be set.
Each more specific level overrides the setting of the more generic level.

Instance Level

Login as instance administrator (INTERNAL workspace).

Manage instance / Security / Session Timeout

The default is 1 hour (=3600 seconds).

apex5_sessiontimer_instanceAdmin

Workspace Level

As an instance administrator (INTERNAL workspace) go to

Manage Workspaces / Existing Workspace / Click on Workspace Name / Session Timeout
apex5_sessiontimer_workspacesetting

Application Level

Application / Shared Components / Application Definition Attributes / Security / Session Management


apex5_sessiontimer_appsetting

Session Level

This can only be set programmatically.

http://docs.oracle.com/cd/E14373_01/apirefs.32/e13369/apex_util.htm#AEAPI355

BEGIN
APEX_UTIL.SET_SESSION_MAX_IDLE_SECONDS(p_seconds => 1200);
END;

How to read the settings

Here are some commands that help to find out about the current session timeouts/idle timeouts. Note that to get the instance level settings (i.e. the default if nothing else was set), we could not use the official apex APIs.
Update! This has changed with version 5.0.4!


--- Find apex session idle times

-- Application level
select application_id, application_name, maximum_session_idle_Seconds, maximum_session_life_seconds, session_lifetime_exceeded_url, session_idle_time_exceeded_url
from apex_applications
--where application_id = :APP_ID
;

-- Application level - DBA access only
select security_group_id, display_id as application_id, name as application_name, max_session_length_sec, on_max_session_timeout_url, max_session_idle_sec, on_max_idle_timeout_url
from apex_050000.wwv_flows
--where display_id = :APP_ID
;

-- Workspace level
select workspace, workspace_display_name, maximum_session_idle_Seconds, maximum_session_life_seconds
from apex_workspaces;

-- Workspace level - DBA access only
select id, short_name, display_name, source_identifier, max_session_length_sec,max_session_idle_sec
from apex_050000.wwv_flow_companies;

-- Instance level - minimum Apex 5.0.4 needed + APEX_ADMINISTRATOR_READ_ROLE
select name, value
from APEX_INSTANCE_PARAMETERS
where name in ('MAX_SESSION_LENGTH_SEC','MAX_SESSION_IDLE_SEC');

-- Instance level - DBA access only
select name, value
from apex_050000.WWV_FLOW_PLATFORM_PREFS
where name in ('MAX_SESSION_LENGTH_SEC', 'MAX_SESSION_IDLE_SEC');

-- Instance level alternative - DBA access only
select apex_050000.wwv_flow_platform.get_preference('MAX_SESSION_LENGTH_SEC') as MAX_SESSION_LENGTH_SEC
      ,apex_050000.wwv_flow_platform.get_preference('MAX_SESSION_IDLE_SEC') as MAX_SESSION_IDLE_SEC
from dual;

-- Workspace settings including Instance default overwrites - DBA access only
alter session set current_schema = APEX_050000;

set serveroutput on
declare
  v_ws wwv_flow_security.t_workspace;
  v_security_group_id number;
begin
  wwv_flow_security.g_security_group_id := 10; -- Internal
  v_security_group_id := wwv_flow_security.find_security_group_id (p_company => 'MYWORKSPACE');
  v_ws := wwv_flow_security.get_workspace(v_security_group_id);
  dbms_output.put_line('ws max kill timeout='|| v_ws.qos_max_session_kill_timeout );
  dbms_output.put_line('ws max session time in sec='|| v_ws.max_session_length_sec );
  dbms_output.put_line('ws max idle time in sec='|| v_ws.max_session_idle_sec );
end;
/

Please note that since Apex 5.0.4 we are able to read the instance settings from an official Apex view.

Therefore the following little script will give us the session idle time, regardless of where it is set.

select coalesce(
   ( -- Application level
    select maximum_session_idle_Seconds
    from apex_applications
    where application_id = v('APP_ID'))
   ,( -- Workspace level
     select maximum_session_idle_Seconds
     from apex_workspaces
     where workspace = v('WORKSPACE_ID'))
   ,(-- Instance level
     select to_number(value)
     from APEX_INSTANCE_PARAMETERS
     where name ='MAX_SESSION_IDLE_SEC')
     ) max_idle_time     
from dual;

Unfortunately, to read from this APEX_INSTANCE_PARAMETERS view we need the new APEX_ADMINISTRATOR_READ_ROLE. The role is automatically granted to DBAs, Apex admins and some other privileged database accounts during installation of Apex 5.0.4.

If you don’t have it already, then you can grant it like this:

grant APEX_ADMINISTRATOR_READ_ROLE to mySchema;