My favorite Apex 5.2 new features

Introduction

Apex 5.2 is still in early adopter phase 1. It looks as if there will be a second early adopter phase. And probably after that Apex 5.2 will be released.

The following statements are highly speculative. They are based upon my observations in the early adopter cloud version (apexea.oracle.com) and upon various talks from Apex team members.

Remember: it is possible that some of the features that are already there, or that are promised, will not make it into the final version.

New features doc/app

Create application features

The wizard to create a new application now has a blueprint functionality. This means that during application creation we can add frequently requested standard modules to the application.

Choose/Create Application Icon

This is part of the blueprint functionality. We can now choose from a very limited set of icons and adjust the color, and we get an SVG in the static application files (#APP_IMAGES#app-icon.svg) that can be used everywhere.

[Screenshot: Apex 5.2 blueprint dialog]

I blogged about this already and showed how to use the created icon inside the logo header as well. See Apex 5.0-5.2 Logo Text with Icon.

Quick SQL

Quick SQL is a shorthand syntax for quickly creating a data model.

It is not really an Apex 5.2 feature; it is a packaged application that was created for 5.2 but has been made available already. This is part of the low code movement.

Check out the shortlink to the public quicksql application: https://apex.oracle.com/quicksql/
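
To give an idea of the shorthand: indentation defines master-detail relationships (and generates the corresponding foreign keys), and directives like /nn set column attributes. A minimal sketch with made-up table names:

departments
  name /nn
  location
  employees
    name /nn
    email
    hiredate

Quick SQL turns this into the CREATE TABLE statements for a DEPARTMENTS table and an EMPLOYEES child table, with optional extras like audit columns or sample data.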

Interactive Grid (IG) features

As expected, IGs still get a lot of love: some bugfixes, but also several enhancements. Here are my favorite new features.

url based filtering

This cannot be tested yet. However, John Snyders mentioned it here.

With Interactive Reports we can set filters by adding some parameters to the URL for that page. The same is absolutely needed for grids.
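
For reference, this is how it works for an IR today: the filter column and value are passed in the item-names/item-values positions of the URL. A made-up example, filtering a report on page 3 by DEPTNO = 30:

f?p=&APP_ID.:3:&SESSION.::NO:RP:IR_DEPTNO:30

Something equivalent is what IGs need.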

copy down functionality

In the future there will be a way to copy data from one cell of an IG to all empty cells below it. Many customers have already asked me for this when I presented IGs.

[Screenshot: Apex 5.2 copy down]
I'm not sure how easy it will be to use. Currently it doesn't seem to be finalised yet. I hope the Apex development team finds a decent way to implement it (= easy for the end user).

As you can see in the screenshot there is also a copy to clipboard function (ctrl+C)!

components

Oracle JET upgraded to version 4.1

4.1 is a huge step forward. Apex 5.1 included Oracle JET 2.0.2 (if I remember correctly). The Oracle JET team pushes enhancements very rapidly and frequently, so it is good to have a fairly recent version included now. However, by the time Apex 5.2 goes public, Oracle JET might again be several versions ahead.

remote databases

Classic reports (and some other components too) can now be based upon a remote database source.

I wasn't able to test this feature thoroughly in the cloud version. I hope it means we can now use tables over database links (= remote databases) as a source for our wizards. This is a very typical scenario in company environments: use Apex to access a different Oracle database where the business data resides, but where no Apex is installed.
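
If it works the way I hope, the region source would simply be a query over a database link, something like this sketch (the link and table names are made up):

select empno, ename, sal
from employees@remote_hr
where deptno = :P1_DEPTNO;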

general stuff

Spotlight search

The search functionality inside page builder was hugely improved. Get familiar with it!

[Screenshot: Apex 5.2 spotlight search]

Sticky property filters

The property pane in the page designer can now be searched, and this search can be pinned, so that the same filter is applied to all items that we click on. This is very useful!

[Screenshot: pinned property filter]

Dev toolbar enhancements

Page timing info. Looks nice. I'm not sure how useful it is, but I sometimes refer to the page timings. Having a better graphical presentation is certainly interesting.

[Screenshot: page timing info]

JavaScript errors on the page are now marked on the developer toolbar.
[Screenshot: dev toolbar with JS error]

REST/JSON support

There are major enhancements with regard to REST modules and consuming REST web sources.

I didn't have time to test them yet, but it is definitely a step in the right direction.

Additional notes

compatibility mode

There is no compatibility mode 5.2. It seems as if this is intended. Somebody already raised this as an issue, and this was the response:

“Thank you for your feedback. Unbelievable, but APEX 5.2 doesn’t contain any change of behaviour …”

Quo Vadis IR?

IGs have an improved way to do conditional highlighting, already since Apex 5.1.
We can now highlight a column depending on the value of another column.

This was way overdue.

Unfortunately this is still not possible for Interactive Reports.

Here is how the highlighting dialog looks for IRs vs. IGs

I have the feeling that IRs are now the stepchild of the Apex team. It is the declared goal to eventually replace IRs with IGs, and I support that vision for the future. However, we are still far away from that (pivot functionality is missing, among other things). During that transition, IRs shouldn't be left out of enhancements.

Charts

There are some new charts included. Especially Gantt charts might be of some interest. However, I haven't particularly missed them in the past.

Oracle JET includes a few very unique chart types. I would like to see declarative support for all of them. But my favorite Oracle JET chart types are diagrams (container layout), NBox and the PictoChart.

Conclusion

Some small enhancements that really should have made it into the previous version already (URL-based IG filtering, for example). But also some major steps in the right direction (REST + JSON support).

I like!


The EBnoR Manifesto


about

Edition based not only redefinition

Author: Sven-Uwe Weller


ceo syntegris information solutions GmbH

Germany


Mission Statement

Edition based redefinition

EBR – is not well known.

It is barely used.

Edition based not only redefinition

EBnoR – will change that.

EBR is a tool with unique possibilities.

EBnoR deals with cases

beyond

the originally intended scope.

EBR is difficult to use.

EBnoR is easy to use.

EBnoR avoids complexities.

EBnoR is powered by the strengths of EBR.


EBR Basics

Edition based redefinition is a unique feature of the Oracle database that has been available since version 11.2. It can be used at no additional license cost in all database editions (EE, SE, SE1, SE2, XE).

The term “edition” is used to describe a set of plsql based objects. Among those object types are plsql packages, views and synonyms. A complete list is here in the Oracle documentation.

A different edition can hold different code for the same plsql object. In essence, each edition resembles a version of your database application code. EBR allows us to store and run different versions of our application at the same time; the different code versions exist inside the database simultaneously.

This opens up a whole set of very special possibilities that usually do not exist in other environments. EBR allows an upgrade to a new application release without shutting down the application. The new code version is installed in a new edition. Currently running sessions will still work with the old release, and deployment problems can be tested using the new release. Once those issues are solved, the new edition is made available (as the new default edition). Only when a user ends his (database) session and reconnects will the new code version from the new default edition be used.
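
The basic mechanics look like this; a minimal sketch with a made-up edition name:

-- create the new edition (as child of the current leaf edition)
create edition release_2;

-- install the new code version while connected to the new edition
alter session set edition = release_2;

-- once everything is verified, make it the default for new sessions
alter database default edition = release_2;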

distinction to VCS

EBR should not be confused with a version control system like SVN or Git. Although there are some similarities, it serves a different purpose. A version control system supports the development and deployment process for a team of programmers. They can store and merge code there. Forks and code branches can be used to support development that temporarily goes into different directions, and code merges allow combining branches again. This is something that EBR cannot and will not do. EBR is about running the different code versions in parallel. Application end users profit from EBR, not developers. At best an edition resembles a release in the version control system.

Sven says: A VCS saves and documents code, EBR executes code.

versions of data

Data resides in tables, and tables are not editionable objects. That means two things: changes to table structures are not directly editionable, and the data itself is not subject to code release changes. That is a good thing. The data and the code that works with the data are separated.

Creating a new edition means that code is duplicated. Code can be duplicated easily; duplicating data is a much bigger issue.

Sometimes a code change requires changing existing data. This increases the complexity of running the old and the new code version at the same time. EBR offers solutions using cross-edition triggers and editioning views. How to use cross-edition triggers is described in Bryn Llewellyn's excellent white paper from 2009 (http://www.oracle.com/technetwork/articles/grid/edition-based-redefinition-1-133045.pdf). Read and understand this paper first before you go on and read this manifesto.

“Executive Overview

Large, mission critical applications built on Oracle Database 11g Release 1 and earlier versions are often unavailable for tens of hours while the application’s database objects are patched or upgraded. Oracle Database 11g Release 2 introduces edition-based redefinition, a revolutionary new capability that allows online application upgrade with uninterrupted availability of the application. When the installation of the upgrade is complete, the pre-upgrade application and the post-upgrade application can be used at the same time. Therefore an existing session can continue to use the pre-upgrade application until its user decides to end it; and all new sessions can use the post-upgrade application. As soon as no sessions are any longer using the pre-upgrade application, it can be retired. In other words, the application as a whole enjoys hot rollover from the pre-upgrade version to the post-upgrade version.

This whitepaper explains how edition-based redefinition works, and how to write online application upgrade scripts using this capability, at the level of detail needed by engineers who will write such scripts.”


the current status

EBR has been on the market for a long time now. Still, many developers and DBAs do not know about it, and if they do, they are not using it actively. I identified a set of problem areas that are responsible for this current status of EBR, and I see possible solutions. With EBR the focus should not only be on redefining the code; EBR can open up a set of new and unique possibilities, of which zero downtime application upgrades are only one. I call the set of solutions "Edition based not only redefinition", or short EBnoR.

problem areas under the influence of intelligent people (developers and DBAs)

  • the deployment dilemma: shortly before go-live it is too late to think about EBR
  • the design change gamble: keep changes required by EBR at a minimum, design with EBR in mind
  • the zero downtime promise: perfection, if even possible, is expensive; don't go for perfect, almost perfect is good enough
  • the cross edition complexity: cross edition triggers are needed for extreme cases only; huge effort for only a tiny result

problem areas influencing decision makers (architects and project managers)

  • the cross platform independency nonsense: EBR is only available for Oracle databases
  • the marketing confusion: editions are not license editions

EBnoR offers solutions to each of those problem areas. Small sidesteps from Oracle's recommended path will lead to major improvements in development time and flexibility, and ultimately result in a quicker learning curve.

The “deployment dilemma”

The advantages of EBR shine during one of the later steps in the software development lifecycle – the deployment phase. The dilemma is that at such a late point the additional programming effort to use EBR competes with adding more features or fixing bugs in the application.

Oracle propagates that the advantage of using EBR to do application upgrades with minimal downtime is so big that it is worth the additional effort and time. This is true only in very rare cases, and mostly only for very large companies that can afford to spend money on this additional effort.

Problems

The problem is the decision point. Budget is reserved for a specific change in one application. There never is enough budget, and there never is additional budget. Especially not shortly before the next application upgrade goes live. At this point there is not enough budget and not enough time.

EBR is a tool, not an application itself. As such, most companies hesitate to set up an extra project or a major software change just for using a tool.

Solution

The decision needs to be moved from the budget owners to the developers, and to an early point in the software life cycle. Only then will EBR see more adoption.

EBnoR: Start with EBR from day 1 in the project!

Use editions already during development!

The “design change gamble”

EBR requires some changes to the data design and to the schema.

One requirement is to add a view layer on top of all tables. Among other things this allows using cross edition triggers to do data changes. This is certainly a solid approach; however, it is a major change to the data model, and as such it will face criticism from various other parties involved.

Typical arguments that are raised against such a change are:

  • Performance will suffer because of the view
  • The views do not conform to our naming conventions
  • The views hide complexity
  • Some tools work (better) with tables instead of views
  • Enabling EBR can not be undone

The gamble is that EBR will take the blame if it is implemented at a final stage of development. If any issue arises, e.g. performance in the production environment drops, some will argue that the reason is the additional view layer, just because that was the only thing that was changed. It might be hard to prove otherwise as long as comparable performance data is missing.

Editions can be dropped. Switching on editioning in the database by itself has no implications; it does not force you to use the feature, and editioning features can be removed again.

Solution

EBnoR: Avoid major design changes!

Use a view layer if possible. If not, still use EBR without this extra layer.

Test early while using editioning!

The “zero downtime promise”

Absolute zero downtime is very hard to reach. Even a simple ALTER TABLE statement will lock the full table. During that time the table is not available; sessions will wait (which is just a different term for downtime) or even break.

Focus on changes that have a minimal impact on downtime. For example, adding a column is usually no problem. Even if the column is added to the old application without adding the new code, it should not break anything. Write your application in such a way that added columns will not break existing code. This is easy to do: if the column is mandatory, always provide a default value. Updating the data in this column might need more time. Dropping a column should be avoided, because this usually breaks old application versions.
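
A sketch of such a low-impact change (table and column names are made up):

-- a mandatory column with a default value does not break old code
-- that does not know about the column yet
alter table orders add (delivery_note varchar2(200) default 'n/a' not null);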

If the normal downtime window is a few days, then using EBR can shrink that window to less than an hour by restricting the downtime to the table changes (structure and data). For a normal user such a small downtime window is the same as zero; he will often not even notice such a short break. This does not mean that the new application version needs to be available after an hour, just that after an hour the user is able to continue working with the old version. After that downtime window the new edition can be installed and tested, and then a gradual switch-over of user sessions can be done.

Zero downtime is expensive. Even with EBR.

Solution

EBnoR: Aim for very low but not for zero downtime.

Don’t promise zero to users!

The "cross edition complexity"

Bryn's paper describes how to use cross edition triggers to handle data changes between one application version and the next. There is a major intrinsic problem with that. Let us first investigate how cross edition triggers work.

Cross edition triggers allow manipulating data in one edition and adding transformation logic so that the same dataset is correctly represented in the next or previous edition.

  • “A forward crossedition trigger is fired by application DML issued by sessions using the pre-upgrade edition. Such a trigger is used to implement transformations from the old representation forwards into the new representation.“
  • “A reverse crossedition trigger is fired by application DML issued by sessions using the post-upgrade edition. Such a trigger is used to implement transformations from the new representation backwards into the old representation.”

The problem is not how to write a trigger, but the complexity of the forward and backward data manipulations.

Sometimes the data manipulation needed to go to the next application version is simple, but cannot be automated easily. Manual data cleansing actions can be such a case.

Some scenarios do not have a simple one-to-one transformation rule between data in edition A and data in edition B. The forward transformation is usually part of the project; very often it is much harder to provide the backward transformation. Sometimes this is only possible by keeping the old data structure, thereby duplicating and replicating data. This defeats the original reason for the change.

The EBR solution is to keep this change only temporarily. Therefore old editions should be dropped very soon after the application was switched to the new code (and data) version.

Problems

  • Often requires storing several versions of the same data, usually in different columns
  • Data transformation rules need to be automated
  • Huge additional effort not related to a better application
  • Multiple editions require lots of cross edition triggers

Solution

Start slow and avoid cross edition data transformations!

This needs some explanation and examples.

Example

Suppose a design decision was made to trim the time component from all date values in a column, so that an equality comparison can be made instead of a range comparison.

Old code:

dateCol >= trunc(:searchDate) and dateCol < trunc(:searchDate) + 1

New code:

dateCol = trunc(:searchDate)

The typical change includes a DML statement that updates the column using TRUNC.

update myTable
set dateCol = trunc(dateCol)
where dateCol != trunc(dateCol);

This change can not be undone because information (the time component) is lost.

So it is difficult to write a reverse crossedition trigger that reproduces the exact same source data.

One solution could involve keeping the old column including the time precision and adding a second column that holds only the truncated date value. Then add (or change) an editioning view in the old and the new edition that serves as an API to the relevant column. A forward crossedition trigger would truncate the data that is inserted using the old application version. But without knowing where the time component comes from, it would be impossible to write a reverse crossedition trigger that adds the time correctly. A default logic would be needed, maybe depending on some other values; for example, the time should be between the times of the previous and the following rows. So we need several new objects and fairly complex code to do the reverse change, the data model is more complex than with the simple update solution, and above all data would be duplicated.

But why care? In this specific example the old application code might still work, even if the time component is truncated. So there are two easy ways to avoid the complexity:

  1. run the update several times, for example after each batch load of new data
  2. add a normal DB trigger in the old edition that truncates all newly inserted date values:
:new.dateCol := trunc(:new.dateCol);
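
Spelled out, such a trigger could look like this minimal sketch (using the myTable/dateCol names from above):

create or replace trigger mytable_trunc_date
before insert or update of dateCol on myTable
for each row
begin
  -- normalize the value so that old and new code see the same data
  :new.dateCol := trunc(:new.dateCol);
end;
/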

Yes, this would require a change in the old edition and as such would change old application code. If this cannot be done because of organisational hurdles, then create an intermediate "preparation" edition and add the trigger there.

Consider a step-by-step approach. There are many situations out there where we can get away without implementing additional cross edition logic. Avoiding it helps to get EBR started and to become acquainted with the feature. Once we are used to it, we can add more, eventually even using cross edition logic for specific use cases.

EBnoR: Do not depend on editioning views, do not use cross edition triggers.

Avoid deleting objects, don’t hesitate to add objects (especially columns).

If you need to delete objects, this will influence all editions. After careful consideration just do it; after all, rules are there to be broken. Announce it as a major application version that takes a small downtime. Do not let the zero downtime paradigm make you write complex code. Complex code in the end costs more in terms of maintainability and testing effort.

learning curve considerations

A learning curve comparison shows how a reduced approach will ease the first steps into the feature.

  • plsql code versions: start using EBR already during development. The developers get used to working with editions; building a new release and deploying it in a specific edition is then nothing strange anymore. Do development and tests in the same database, but in different editions. Each step in the development lifecycle deserves a new edition.
  • design changes: avoid changes in the data model that would influence old editions in a negative way. For example, avoid dropping columns. Also write your code in a way that it does not break when a new column is added to a table. Model your data first, before you start to build a new application.
  • low or zero downtime: often a tiny downtime is ok. If the aim is to be fast, but not to be perfect, then the additional effort suddenly shrinks a lot. Do not aim for perfection!
    Example: adding a column to a table requires that the table is locked for a moment, especially if the new column is not null and is filled with some extra values. Other sessions might wait for this. If the table is very large, such a lock can take some time. If it takes a minute, that's fine. It will not be zero downtime, but almost every project can afford to wait a minute during a new deployment. If it takes half a day, then you must invest effort to optimise it.
  • cross edition data: avoid using cross edition triggers. They serve only a very specific, very special purpose. Build your data changes in a way that does not depend on cross edition triggers. The need for cross edition triggers is the major contributor to the increased programming effort when EBR is used.

The “cross platform independency nonsense”

“EBR can not be used in our company because we want to stay database vendor independent”

Such an argument can often be heard from java developers or other programmers outside of the database world.

An analogy

You bought a Porsche 911. This is an expensive but very fast car. However, you choose not to drive it in 5th and 6th gear, because there are other cars out there that do not have a 5th and 6th gear. Does this make sense?

In my opinion the claim for database independence is often made to hide the incompetence or inability of the developers to learn about the unique features that their specific database offers. In general it is easy to structure code in such a way that vendor specific features are encapsulated and properly modularized. This will make future changes to another database less problematic.

This is also true for EBR. If you decide to enable and use editions, you can do it in such a way that all the specific implementation details are hidden from the normal code. The developers and the DBAs should be "edition aware", but after some initial setup there is no extra effort that EBR requires. In general you would simply use a connect string that connects to the edition of your choice. There is almost no need to consider editions directly in the code.

There are a few exceptions. For example, if you start a scheduled job, then you probably want that job to run in the same edition as the session that creates it. This is possible using a job class with a service that is tied to an edition. Or more generally: every time you have some kind of client access to your database, you want to make sure that you can influence the edition that is used at connection time.
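
As a minimal sketch, assuming a database service EBR_V2 already exists and is tied to the new edition, a job class can route jobs to that service:

begin
  dbms_scheduler.create_job_class(
    job_class_name => 'JC_EBR_V2',
    service        => 'EBR_V2'); -- jobs in this class run on the edition-bound service
end;
/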

Solution

Modularize and structure your code well!

Immediately counter the argument by using strong analogies.

The “marketing confusion”

I think the EBR name is suboptimal (to put it nicely). First of all, the term "edition" is confusing, because it is also used as a name for database editions (EE, SE, XE). The R stands for "redefinition", which is a poor description of adding a new code release to your application.

I admit I’m not creative enough to suggest a better name, but as an analyst I’m able to pinpoint a flaw when I see one.

Nowadays every "thing" or "technology" has a catchy name and a funny logo. The name usually is also an abbreviation, and one could think this is only of minor importance. It is not. This is the name that the feature/tool goes by. This name and logo are used in slides, presentations, and social media channels.

some examples 

Node JS packages: https://www.npmjs.com

Here are just a few of them. Every one of those JavaScript packages has a small but recognisable logo.

[Screenshot: npm package logos]

Architecture components for Spark: https://www.linkedin.com/pulse/hadoop-summit-2015-takeaway-lambda-architecture-laurent-bride

[Screenshot: logos in a Hadoop summit architecture diagram]

Just look at all those small but important-looking pics in this simple architecture diagram. It makes you feel as if you missed something important if you don't know them all.

Problems

  • EBR is hard to pronounce and to spell out.
  • The term “edition” is already established in the Oracle namespace. And usually it stands close to “license costs”.
  • A nice enchanting logo is missing.

Suggestions

  • EBnoR / EbanoR = Edition based not only Redefinition
  • Edi
  • ebar
  • PRETI = plsql runtime environment and test integration
  • or something completely different = OSCD (could be in use already by some other organisation)

Solution

Oracle, get some logo wizards and marketing people to work on that!


beyond EBR

The EBR feature allows doing more than the standard model intended by the Oracle development team.

The standard model is

  1. install a new application version in a new edition in production
  2. test if the deployment went well
  3. switch the current edition to the new edition
  4. after some short grace period drop the old edition

During this whole time the impact on connected users is minimal.

This is great!

But there is more…

Using EBR in certain ways can open up new possibilities and ease the upgrade pain.

New possibilities include

restore and run old code

Assume a problem is reported where you suspect that the cause has meanwhile disappeared, but the problem existed in some older code version. Using EBR it is now possible to quickly prove that assumption. Create a new edition in the environment where you want to reproduce the issue (usually DEV, but it might be some integration or QA test database). Fetch the old plsql code from your versioning system and install it into this new edition.

Run the test in both editions to see if the problem occurs with the old code but not with the new code.

After that test, simply throw away the whole edition (drop edition cascade).
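
A minimal sketch of the round trip (the edition name is made up):

create edition bugcheck;
alter session set edition = bugcheck;
-- install the old package versions from the VCS here and run the test
alter session set edition = ora$base;
drop edition bugcheck cascade;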

develop and test in the same database

Build releases in ORA$BASE

=> test in edition TST$BETA

=> develop in edition DEV$ALPHA.

This is something I discovered when actively developing and working with EBR. It is one of the best things to do. It might even save license costs, because you can eliminate a complete test database (although some test databases do not need additional licenses). It does not mean you can skimp on testing, but certain types of tests can profit hugely from EBR. Testers use one edition, developers use another (child) edition. Developers can actively analyse bugs that testers found without the need to move data from the test to the dev database, and developers can fix bugs without disturbing tests running in the same database.

Build the test environment automatically from your versioning system.

provide backward compatibility

This is often useful for API development (see also ThickDB | SmartDB paradigm).

APIs provide a specific interface (usually views and plsql packages). APIs expose a limited set of objects and methods for a specific task. APIs do not expose data directly. This is where EBR shines: all those objects are editionable objects!

Often we have multiple different consumers using the same API. If the API functionality is enhanced, then we can provide the newer version of the API in a new edition. If we keep the old edition instead of dropping it almost immediately, consumers can still use their old code without the need to switch to the new version immediately.

As long as cross edition triggers can be avoided, there are no major problems with keeping multiple API versions for a longer period of time.

Tip: All editioned objects should be compiled in each edition, even if there was no change. This actualizes a separate code version in each new edition.
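
For example (package name made up): recompiling an inherited object while connected to the new edition actualizes it there.

alter session set edition = release_2;
alter package my_api_pkg compile; -- my_api_pkg now has its own copy in release_2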


Recommendation

the EBnoR (Edition Based not only Redefinition) approach

  • Start using EBR on day 1 in the project
  • Aim for low but not zero downtime
  • Avoid (complex) cross edition logic
  • Develop and test in the same database in different editions


Some final words

I had planned to publish this manifesto for quite a long time (several years now). Since I started to put down those thoughts, EBR was enhanced, and new recommendations have been made by Oracle plsql development. Other people, including the AskTom team, encountered similar things with EBR. Some of those ideas and recommendations match EBnoR. Still, I expect EBnoR to be very controversial. This is intended.

For the sake of brevity I skipped a larger part of the examples and direct code demos.

EBR also still has issues where I see no immediate solution. For example, editions are database level objects; as a developer I would prefer them to be schema level objects. The problems connected to this are beyond the scope of even EBnoR.

 

Exotic SQL: Hints that can change results

In general hints are suggestions for the Oracle cost based optimizer (CBO) that can influence performance but will never change the result. The CBO will comply with the hint at all costs – if it is possible.

True? Well not always!

There is a very small set of hints which can change the result of a select or DML statement.

The CBO is an extremely complicated piece of software, so it can happen that there are bugs in it which in turn lead to different results. This is not what I am considering here.

So here is my collection of hints that are able to change the output of a statement.

ignore_row_on_dupkey_index

This hint allows avoiding unique key violations during an insert: only those rows that do not violate the unique index are inserted. The other rows are silently ignored.
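
A minimal sketch (table and index names are made up):

insert /*+ ignore_row_on_dupkey_index(emp, emp_uk) */
into emp (empno, ename)
select empno, ename
from emp_stage;
-- rows that would violate the unique index EMP_UK are skipped
-- instead of raising ORA-00001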

See this example by Richard Foote about the hint:
https://richardfoote.wordpress.com/2010/12/20/oracle11g-ignore_row_on_dupkey_index-hint-micro-cuts/

Similar hints are
(CHANGE_DUPKEY_ERROR_INDEX and RETRY_ON_ROW_CHANGE)

They are all called semantic hints, which has the side effect that they are not disabled by setting _optimizer_ignore_hints=TRUE.

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/Comments.html#GUID-4664D3D8-6312-4C15-8E8F-4872DD7A44F8

For more examples see this older post from Rob van Wijk
http://rwijk.blogspot.de/2009/10/three-new-hints.html

driving_site

I consider the driving_site hint to be one of the "good" hints. There are many cases where we need it. It is good to set it, because it enables behaviour that the optimizer otherwise cannot choose.

However there is one strange special issue that I found.
I described this behaviour in an older blog post: https://wordpress.com/post/svenweller.wordpress.com/108

If we have a distributed query, then SYSTIMESTAMP and also DBTIMEZONE are affected by the driving_site hint: they will return the timezone from the local (no hint) or the remote (with hint) database. SYSDATE however was not affected.
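
A sketch of the effect (the database link and table names are made up):

-- without the hint the query runs locally; with it the remote site drives
-- and SYSTIMESTAMP then reports the remote database timezone (as observed above)
select /*+ driving_site(r) */ systimestamp, r.some_col
from some_table@remote_db r;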

This effect happened on an older 10g database; I wasn't able to test if the behaviour still exists in 12c.

A similar effect can be reached using the materialize hint.

opt_param('OPTIMIZER_SECURE_VIEW_MERGING','true|false')

The opt_param hint allows setting instance parameters for the duration of a single query. Some of those parameters can potentially influence the outcome of queries.

One of them is OPTIMIZER_SECURE_VIEW_MERGING
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/OPTIMIZER_SECURE_VIEW_MERGING.html
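
The hint form looks like this; it changes the parameter only for this one statement (reusing the objects from the example below):

select /*+ opt_param('optimizer_secure_view_merging','false') */ *
from a.v v1
where f(y) = 'ok to see';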

An example from Tom Kyte:

the goal of this parameter [OPTIMIZER_SECURE_VIEW_MERGING] is to prevent a function owned by some user B from seeing data of some other user A that it should not see. For example, …

and after some setup…

b%ORA11GR2> set autotrace on explain
b%ORA11GR2> alter system set optimizer_secure_view_merging = true;

System altered.

b%ORA11GR2> select * from a.v v1 where f(y) = 'ok to see';

X Y
———- ——————————
1 ok to see

I see: ok to see

Execution Plan
———————————————————-
Plan hash value: 1931062764

—————————————————————————
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————
| 0 | SELECT STATEMENT | | 10000 | 292K| 27206 (1)| 00:05:27 |
|* 1 | VIEW | V | 10000 | 292K| 27206 (1)| 00:05:27 |
|* 2 | TABLE ACCESS FULL| T | 10000 | 292K| 27206 (1)| 00:05:27 |
—————————————————————————

Predicate Information (identified by operation id):
—————————————————

1 - filter("F"("Y")='ok to see')
2 - filter("Y_F"("X")=1)

there it is clear that the Y_F(X)=1 is evaluated first and only then f(y) is evaluated

b%ORA11GR2> alter system set optimizer_secure_view_merging = false;

System altered.

b%ORA11GR2> select * from a.v v1 where f(y) = 'ok to see';

X Y
———- ——————————
1 ok to see

I see: ok to see
I see: NOT ok to see

whoops, my function saw data that it should not

Execution Plan
———————————————————-
Plan hash value: 1601196873

————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
————————————————————————–
| 0 | SELECT STATEMENT | | 100 | 3000 | 27207 (1)| 00:05:27 |
|* 1 | TABLE ACCESS FULL| T | 100 | 3000 | 27207 (1)| 00:05:27 |
————————————————————————–

Predicate Information (identified by operation id):
—————————————————

1 - filter("F"("Y")='ok to see' AND "Y_F"("X")=1)

and now we can see the converse is true

b%ORA11GR2> alter system set optimizer_secure_view_merging = true;

System altered.

b%ORA11GR2> set autotrace off

There are also some side effects with VPD. See https://antognini.ch/2011/09/optimizer_secure_view_merging-and-vpd/

Also in some cases this parameter can result in errors.
Here is a report of such a case: https://support.esri.com/en/technical-article/000010620

Another example for a parameter that can influence result sets would be RESULT_CACHE_REMOTE_EXPIRATION
https://docs.oracle.com/cd/B28359_01/server.111/b28320/initparams208.htm#REFRN10294

The default is 0. Changing it to a positive integer will allow us to get "stale" data from the cache instead of fetching a fresh value from the remote source.

fresh_mv

This 12.2 hint makes a query against a real-time materialized view compute the stale portion of the data on the fly. Without the hint we would still see old data, but with it we will see new data.

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/Comments.html#GUID-5EF4198B-50B3-40D8-B12A-3D3115C69D9B

The FRESH_MV hint applies when querying a real-time materialized view. This hint instructs the optimizer to use on-query computation to fetch up-to-date data from the materialized view, even if the materialized view is stale.
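
A minimal sketch (the materialized view name is made up; it must be a real-time MV, i.e. created with ENABLE ON QUERY COMPUTATION):

select /*+ fresh_mv */ *
from sales_summary_rtmv;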

Similar, connected hints are no_rewrite and rewrite_or_error. All of those hints have the ability to change the behaviour, especially the usage of stale materialized views, which then can give a different result.

Disclaimer

This list makes no claim to completeness. Also, some of the mentioned behaviours might change in different database versions, especially if it is not documented and not intended behaviour.

There are more such hints. I have recorded some more, but couldn't consistently reproduce the issue. It might have been related to some bug or some other obscure circumstance that I can't remember.

Here is the current list for those “watch out” candidates:

  • first_rows
  • (no_)query_transformation
  • (no_)result_cache

Result cache in combination with deterministic functions that are not truly deterministic can easily produce wrong/different results. However, the main culprit then is the wrong usage of the deterministic pragma.

And some more strange things

The SELECT clause can also influence the number of rows, and not only via DISTINCT.

    with tbl as (select 1 val from dual union all  
                select 2 val from dual union all  
                select 3 val from dual )  
        SELECT  CASE  0  
                    WHEN  0  
                        THEN  'Aardvark'  
                    WHEN  SUM (val)  
                        THEN  'Baracuda'  
                END  AS c  
        FROM    tbl;  

Aardvark
Aardvark
Aardvark
3 rows selected.

    with tbl as (select 1 val from dual union all  
                select 2 val from dual union all  
                select 3 val from dual )  
        SELECT  CASE  6  
                    WHEN  0  
                        THEN  'Aardvark'  
                    WHEN  SUM (val)  
                        THEN  'Baracuda'  
                END  AS c  
        FROM    tbl;  

Baracuda
1 row selected.

Apparently, when the first WHEN branch of the simple CASE already matches, the aggregate SUM(val) never needs to be evaluated and the query is not treated as an aggregate query, so it returns one row per input row; once the aggregate is evaluated, the result collapses to a single row. Discovered by Frank Kulash and discussed and tested in this OTN thread
https://community.oracle.com/thread/4097998?start=15&tstart=0

Apex 5.1: Interactive Grids – How to customize toolbar buttons

Introduction

I wanted to slightly change the toolbar buttons in some Interactive Grid. Since in my application all buttons have an icon, I wanted to use the same icon also for IG buttons, especially for actions like “Save” or “Add row”.

What helped me were John Snyders' excellent postings "How to hack Apex Interactive Grids". In this specific case I mostly referred to the second part of the series.

But there were some things that weren't explained, and I needed to figure them out myself.

Toolbars – before and after

The users were used to how the buttons looked in the previous tabular form.

Tabular Form
[Screenshot: tabular form toolbar]

There are 4 buttons

  • Cancel: go back to previous page
  • Delete: to delete the marked rows
  • Save: To save the current changes
  • Add Row: to insert a new record

The buttons were using the template type "Primary", that's why they are blue.

Default Interactive Grid
[Screenshot: default IG toolbar]

The "Save" button was already added declaratively, by setting the property in the Attributes/Toolbar section of the IG.

The "Add Row" button is created as soon as we allow editing:
Attributes/Edit/Allowed Operations/Add Row

As we can see, the “Cancel” button and the “Delete” button are missing. But more importantly, the buttons have no icons and are missing the color.

Customize Interactive Grid

Here is how to change that. I'm not showing how to add the cancel button; it can easily be positioned somewhere else, like in the region header. But the delete button needs to be added, and the visualization of the buttons should match the rest of the application.

First give the grid a static ID. Here I use “IG_MYGRID”.

Adding the delete button

In Apex 5.1 many objects now have an "advanced/javascript code" property. This is incredibly useful: it is a kind of hook to add additional features programmatically.

For an Interactive Grid we can use it to customize the grid further.

We first find our IG and its toolbar object. The toolbar consists of several toolbar groups, separated by thin vertical lines. The group "actions3" currently holds only the "Add Row" button; this is where we want to add the "Delete" button too.

function(config) {
    let $ = apex.jQuery,
        toolbarData = $.apex.interactiveGrid.copyDefaultToolbar(), // copy the whole toolbar
        toolbarGroup = toolbarData.toolbarFind("actions3"); // this is the group with the action=add row

    // add a new "delete" button
    toolbarGroup.controls.push({type: "BUTTON",
                                action: "selection-delete"
                               });

    //store the config
    config.toolbarData = toolbarData;
    return config;
}

This is the result:

[Screenshot: IG toolbar with added delete button]

The button is there – that’s good. But it does look kind of ugly.
Not quite what we want yet.

We have to understand that an IG provides several actions. Here we used the "selection-delete" action. The actions already have several properties, like an icon. We can overwrite those properties for each visual component that is used to represent that action, in our case the toolbar button.

The trashcan icon might look better in your case, it depends on template/theme styling. In my case it comes with a slightly darker background. The same happens for the normal save icon (icon-ig-save).

Add and change the icons

Let’s add an icon also for the “Save” and the “Add Row” buttons.

The toolbarFind method allows us to identify the two existing buttons. Check John Snyders' blog to learn more about the toolbarFind method. It is very useful.

function(config) {
    let $ = apex.jQuery,
        toolbarData = $.apex.interactiveGrid.copyDefaultToolbar(), // copy the whole toolbar
        toolbarGroup = toolbarData.toolbarFind("actions3"), // this is the group with the action=add row
        addrowAction = toolbarData.toolbarFind("selection-add-row"), //add row button
        saveAction = toolbarData.toolbarFind("save"); // Save button

    // add a new "delete" button
    toolbarGroup.controls.push({type: "BUTTON",
                                action: "selection-delete",
                                icon: "icon-ig-delete", // alternative FontAwesome icon: "fa fa-trash",
                                iconBeforeLabel: true,
                                hot: true
                               });

    
    // manipulate the buttons
    addrowAction.icon = "icon-ig-add-row"; // alternative font awesome icon: "fa fa-plus";
    addrowAction.iconBeforeLabel = true;
    addrowAction.hot = true;

    saveAction.iconBeforeLabel = true;
    saveAction.icon ="icon-ig-save-as"; // list of alternative grid icons (Font Apex):icon-ig-save,icon-irr-saved-report
    //saveAction.icon ="fa fa-save"; // list of alternative font awesome icons: fa-save,fa-check
    saveAction.hot = true;

    
    //store the config
    config.toolbarData = toolbarData;
    return config;
}

Result
[Screenshot: IG toolbar with custom icons]

It is still not perfect. The "Save" and the "Add Row" buttons now have an icon, and the icon is positioned before the label. But the icon for the "Delete" button still looks ugly.

I also set all buttons to "hot=true". The save button already has that as default. Why I did so is explained in the next "css" section.

[Screenshot: hot toolbar buttons styled as Primary]

Styling the toolbar buttons with css

Instead of the normal grid icons I wanted to use FontApex or FontAwesome icons.
You have to experiment a little what works in your specific case.

The application where I added this grid still uses FontAwesome. The users unfortunately decided against switching to FontApex; they were used to the older items and didn't like how the new font looks in the menu. I sometimes regret a little that I gave them this choice. FontApex is the much better option, especially because of the modifiers that can be added. Check the Universal Theme template application for examples of that.

The problem in the Interactive Grid is that the grid icons are added with a class a-Icon, which in turn is fixed to FontApex. This creates two additional problems. Since I didn't use FontApex, other icons were not included, only the ones that the grid provides. But I wanted the buttons to look like all the other buttons in the application, including the same icon font.

This little css needs to be added to the css inline section of the page.

#IG_MYGRID span.a-Icon.fa, #IG_MYGRID span.a-Icon.fa:before {
  font-family: FontAwesome !important; 
}
#IG_MYGRID_ig_toolbar .a-Button--hot {
    background-color: #0072b9;
    color: #ffffff;
    font-weight: inherit;
}

It overwrites the font-family, and it uses the class "a-Button--hot" (which is added for "hot" buttons) to style the toolbar buttons blue. Unfortunately I had to use !important for the font-family, because that was already done in the apex core.css.

I also changed the icons on the buttons to reuse the same icons as we had on the tabular form. Check the comments in the previous code to see which icons are good alternatives if the correct font is available.

Result

[Screenshot: IG with custom toolbar buttons]

And we did it!

I think there are a few other ways how to reach the same result, but this is what worked well for me.

Other stuff and some comments

Page attribute “reload on Submit”

When moving from a tabular form to an interactive grid we can copy the SELECT statement and create a new IG region with that.

However we must change one page setting: "Reload on Submit" must be changed from "Always" to "Only for Success".

[Screenshot: page attribute Reload on Submit]

Because of this setting we cannot use a tabular form and an IG on the same page. I would prefer to see both regions as long as the new IG is still in development. But the setting must be different when showing a tabular form compared to showing an interactive grid.

How to find out about all this

If we want to know which actions are available, we have two approaches. One is to find a button/menu entry that performs the action, inspect the element and look for the data-action attribute. For example this works for the add-row button.

data-action="selection-add-row"

The other way is to use the console and grab all actions.

We open the console on the page with the grid and run the following JavaScript commands.

to show grid actions

apex.region( "IG_MYGRID" ).widget().interactiveGrid( "getActions" ).list()

or alternatively

apex.region("IG_MYGRID").widget().interactiveGrid("getActions").list().forEach(function(a) { console.log("Action Label: " + a.label + ", Name: " + a.name + (a.choice !== undefined ? ", Choice: " + a.choice : "") ); });

to show row actions

apex.region("IG_MYGRID").widget().interactiveGrid("getViews").grid.rowActionMenu$.menu("option")

Then check the object in the console to see what actions are possible, how they are named and which properties they have.

Exotic SQL: Bubble Sort with Model Clause

Introduction

This is a “just for fun” experiment. But I demonstrate some model clause effects on the way.

Task: Sort a pipe separated list using SQL.

For example the string '29|1|3004|3|2|24' has 6 elements separated by |. The elements should be sorted in numerical order, so that the resulting string looks like this: '1|2|3|24|29|3004'.

Standard way

The “normal” way would be to tokenize the input string, put the tokens into rows, sort the rows and aggregate the sorted result to reassemble the string.

Here is one possible solution.


with inputdata(src)
     as (select '29|1|3004|3|2|24' from dual)
    ,num_rows (part, remains)
     as (select to_number(substr(i.src,1,coalesce(nullif(instr(i.src,'|'),0)-1,length(i.src)))) as part
               ,substr(src,instr(i.src,'|')+1) as remains
         from inputdata i
         UNION ALL
         select to_number(substr(r.remains,1,coalesce(nullif(instr(r.remains,'|'),0)-1,length(r.remains)))) as part
               ,substr(remains, nullif(instr(r.remains,'|'),0)+1) as remains
         from num_rows r
         where r.remains is not null
         )
    , combine (resultdata)
as (select listagg(to_char(part,'FM9999999999'),'|') within group (order by part) from num_rows)
select * from combine;

The num_rows subquery in this example is a recursive WITH clause that splits the input string into rows, one row for each token. The logic to split the string into tokens is based around substr. Essentially it just cuts off an element from the beginning of the string and keeps the remainder, until there are no elements left.
The combine subquery then uses listagg to sort the rows and convert them into a string again.

Standard SQL technologies that each decent developer should know about.

There are multiple ways to tokenize, to create the rows and to aggregate the rows again. But the general principle is still the same.

Here is a second example solution. This time using connect by to create rows and some regular expressions to count and split.

with inputdata(src) as (select '29|1|3004|3|2|24' from dual )
select listagg(token,'|') within group (order by to_number(token)) as res
from (
  select src, regexp_substr(src,'[^|]+', 1, level) token
  from inputdata
  connect by level <= regexp_count(src,'\|') + 1
  )
group by src;

Exotic way

I was wondering if we can use a totally different approach.

Instead of letting the database do the hard work with its built-in sort mechanisms, we can implement a sort ourselves. Here comes bubble sort. It is one of the most inefficient sort mechanisms you can think of (unless the list is already sorted), but it is fairly easy to implement.

Since bubble sort is an iterative approach with a simple set of rules, the model clause immediately jumped to my mind.

So here is a solution based around a slightly optimised bubble sort mechanism (gnome sort). We will take it apart afterwards.
The solution here creates several rows, but this is just to see and check each step of the iteration. It is possible to do this using one row (dim=0) only, although it is much harder to develop and to understand.

with inputdata(src) as (select '29|1|3004|3|2|24' from dual)
select *
from inputdata
model dimension by (0 as dim)
measures (  src
          , 1 ele_pos1
          , 2 ele_pos2
          , cast (regexp_substr(src,'[^\|]+',1,1) as varchar2(100)) token1
          , cast (regexp_substr(src,'[^\|]+',1,2) as varchar2(100)) token2
          , regexp_count(src,'\|')+1 max_element
          , 2 hwm
          , cast (src as varchar2(500)) as res
          )
rules iterate (500)
      until (-- we can stop if the high water mark is at the last position and no switch needed
             hwm[iteration_number] >= max_element[0]
             and (ele_pos1[iteration_number] = 1 or to_number(token1[iteration_number]) < to_number(token2[iteration_number])))
( -- switch tokens if left token > right token
  res[iteration_number+1]
    = case when to_number(token1[iteration_number]) > to_number(token2[iteration_number]) then
      -- do the switch
      regexp_replace(
             regexp_replace(res[iteration_number],'[^\|]+',token2[iteration_number],1,ele_pos1[iteration_number])
                            ,'[^\|]+',token1[iteration_number],1,ele_pos2[iteration_number])
      else res[iteration_number] -- no switch needed
      end ,
  -- calculate next position for token1
  ele_pos1[iteration_number+1]
  = case
      when ele_pos1[iteration_number] = 1 then
        -- we reached first position, so go back to hwm
        hwm[iteration_number]
      when to_number(token1[iteration_number]) > to_number(token2[iteration_number]) then
        -- after a switch, move one position the the left and check there
        ele_pos1[iteration_number] - 1
      when ele_pos1[iteration_number] + 1 <= max_element[0]-1 then
        -- no more switch, so go to hwm and look for next element
         hwm[iteration_number]
       else
         -- just in case, don't do anything
         ele_pos1[iteration_number]
       end,
   -- calculate next position for token2
   ele_pos2[iteration_number+1]
    = case
       when ele_pos2[iteration_number] = 2 then
         --ele_pos2[iteration_number] + 1
         hwm[iteration_number] + 1
       when to_number(token1[iteration_number]) > to_number(token2[iteration_number]) then
        ele_pos2[iteration_number] - 1
      when ele_pos2[iteration_number] + 1 <= max_element[0] then
        hwm[iteration_number] + 1
      else
        ele_pos2[iteration_number]
      end,
  -- calculate high water mark
  hwm[iteration_number+1]
  = greatest(hwm[iteration_number],ele_pos2[iteration_number+1]),
  -- get token1 for new calculated position1
  token1[iteration_number+1]
  = regexp_substr(res[iteration_number+1],'[^\|]+',1,ele_pos1[iteration_number+1]),
  -- get token2 for new calculated position2
  token2[iteration_number+1]
  = regexp_substr(res[iteration_number+1],'[^\|]+',1,ele_pos2[iteration_number+1])
 );

Here is the result, but I show only the important column. Feel free to run the statement yourself (it works on all supported database versions) and look at the other helper columns.

Result (only Res column)
29|1|3004|3|2|24
1|29|3004|3|2|24
1|29|3004|3|2|24
1|29|3|3004|2|24
1|3|29|3004|2|24
1|3|29|3004|2|24
1|3|29|2|3004|24
1|3|2|29|3004|24
1|2|3|29|3004|24
1|2|3|29|3004|24
1|2|3|29|24|3004
1|2|3|24|29|3004
1|2|3|24|29|3004

Fairly easy SQL isn’t it?

65 lines of code instead of 8. So if your aim is to write obfuscated SQL, then this certainly is the way to go.

If you want to understand it, skip to the “algorithm explained” chapter.

Rant

Now, what is wrong with the model clause!? How can a simple algorithm like bubble sort look this complex?

In recent months I have written a number of model clause solutions for different problems. However, none of them is easy. Apart from very specific cases, all model solutions are terrible to look at and terribly complex to build. In comparison it is much easier to do the same in Excel.

I attribute this to several things.

  1. The way to reference a cell is verbose. And when we look at the code, the brackets break the eye-scanning mechanism of a trained developer's eye. If more than one dimension is involved it is even worse.
    Here are some examples:
    measure1[cv(),3] or resultdata[iteration_number+1]
    It is hard to see this as one cell reference.
    In Excel this would simply be: B$3 or C2.
  2. The way to reference a cell from the previous row is complex. It requires a working calculation for the cell address. We can do this by applying the row_number() analytic function or using iteration_number (for iterative models). But this requires some extra logic.
  3. There is no simple way to set several different measures at once. A rule can only be applied to one measure. This means that sometimes we need to repeat the same logic for a different measure.
    In the bubble sort example this happened for the calculation of the position of token 1 and of the position of token 2. In this case it can be simplified a lot, because the position of token 2 is always exactly +1 from the position of token 1. In other cases it is not that simple. Additional measures can serve as a kind of variable to capture rules that then need to be applied to several other columns.

Excel Comparison

The model clause resembles Excel very much.

  • Dimensions are rows.
  • Measures are columns.
  • Rules are formulas

But why does the model clause feel so much more complex than some simple Excel cells?

Excel has a similar way to address cells, called the R1C1-style reference. It uses number coordinates to find a cell. But most of the time we use the A1-style reference, which is a little more intuitive and much shorter.

But more importantly the process to build formulas (=rules) is different. While building formulas we click on a source cell and its coordinates are used automatically. Excel then calculates the difference from the current cell to the referenced cell.

The main difference to Excel is that Excel has a separate formula for each cell, and we use a simple fill mechanism to copy the formulas to the other cells. We don't need that copy mechanism with the model clause, because the rules are automatically applied to all relevant cells. But with the model clause it is hard to give a direct reference to a different row.

Also the different styles and the different ways to use absolute and relative cell references in Excel (e.g. B$2) are more convenient than doing the same in the model clause.

Pro tip: A good way to develop model rules is to build the formulas in Excel first, then copy the logic to SQL.

The model clause also has several advantages over Excel. Among others:

  • Excel only has 2 dimensions and no decent concept of partitions.
  • Excel columns cannot be named (although single cells and cell ranges can).
  • Model allows returning only the updated rows.
  • Model has an easy way to work with invalid cell references (IGNORE NAV).

Algorithm explained

Before we look at the statement, let’s look at the bubble sort logic first.

bubble sort logic

Start: 29|1|3004|3|2|24

I start with the first two tokens. 29 and 1. Token1 is always the left and token2 is always the right token.

Rule 1: If token1 is bigger than token2 we need to switch them.
Rule 2: If we are at position 1, we can go one step to the right.
Rule 3: If we don’t need to switch, then we can also go one step to the right.
Rule 4: If we did switch then go one step to the left.

Now let’s see how the rules are applied to the string.

Step 1: 1|29|3004|3|2|24 – Rule 1
Step 2: 1|29|3004|3|2|24 – Rule 2
Step 3: 1|29|3004|3|2|24 – Rule 3
Step 4: 1|29|3|3004|2|24 – Rule 1
Step 5: 1|3|29|3004|2|24 – Rules 4+1
Step 6: 1|3|29|3004|2|24 – Rule 3 (here I used an additional optimisation, by storing a high water mark and jumping as far right as the HWM allows)
Step 7: 1|3|29|2|3004|24 – Rule 1

Stop: We can finish the iteration when the HWM is at the far right and no more switches are needed.
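Before diving into the model statement, here is the same logic as a rough procedural sketch in plsql (my own helper code, not part of the model solution). It uses the same regexp calls that the model statement will use:

set serveroutput on
declare
  v_str varchar2(100) := '29|1|3004|3|2|24';
  v_pos pls_integer := 1;  -- position of token1
  v_hwm pls_integer := 2;  -- high water mark
  v_max pls_integer := regexp_count(v_str,'\|')+1;
  v_t1  varchar2(100);
  v_t2  varchar2(100);
begin
  loop
    v_t1 := regexp_substr(v_str,'[^\|]+',1,v_pos);
    v_t2 := regexp_substr(v_str,'[^\|]+',1,v_pos+1);
    if to_number(v_t1) > to_number(v_t2) then
      -- Rule 1: switch the two tokens
      v_str := regexp_replace(v_str,'[^\|]+',v_t2,1,v_pos);
      v_str := regexp_replace(v_str,'[^\|]+',v_t1,1,v_pos+1);
      -- Rules 2+4: go one step to the left, but never beyond position 1
      v_pos := greatest(v_pos-1,1);
    else
      -- Rule 3 + HWM optimisation: jump to the high water mark
      v_hwm := greatest(v_hwm,v_pos+1);
      v_pos := v_hwm;
      v_hwm := v_hwm+1;
    end if;
    -- Stop: HWM is at the far right and no switch is needed anymore
    exit when v_pos >= v_max;
  end loop;
  dbms_output.put_line(v_str);
end;
/

1|2|3|24|29|3004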

model SQL explained in detail

So how is this logic implemented in the model clause?

A token is found using a regular expression with the position of the token.

regexp_substr(src,'[^\|]+',1,4)

This finds the fourth token.
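For example, applied to our start string:

select regexp_substr('29|1|3004|3|2|24','[^\|]+',1,4) as token4
from dual;

TOKEN4
------
3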

We start with a way to number our rows. The initial row is defined as 0.

dimension by (0 as dim)

0 is only used because later we address our rows using iteration_number. If we started with 1, we could potentially overwrite our data. Often 0 is a good starting row.

We then define several columns. And the initial value for those columns.

measures (  src
   , 1 ele_pos1
   , 2 ele_pos2
   , cast (regexp_substr(src,'[^\|]+',1,1) as varchar2(100)) token1
   , cast (regexp_substr(src,'[^\|]+',1,2) as varchar2(100)) token2
   , regexp_count(src,'\|')+1 max_element
   , 2 hwm
   , cast (src as varchar2(500)) as res
   )

The datatype and size of each column is automatically deduced from its initial value. This is why cast is sometimes needed, when the column size should be bigger than what the initial value indicates.

Now we define that an iterative model is to be used.

rules iterate (500) until (stop criteria)

Iterative models are good for implementing procedural logic. Or for tasks where it is unknown beforehand how large the calculation area needs to be.

It gives us a variable called iteration_number which can be used as a cell address.
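Here is a minimal sketch (independent of the bubble sort statement) that shows iteration_number in action. It sums the numbers 1 to 5 into a single cell:

select res
from dual
model
  dimension by (0 as dim)
  measures (0 as res)
  rules iterate (5)
  ( res[0] = res[0] + iteration_number + 1 );

RES
---
15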

Now lets look at a few rules.

-- switch tokens if left token > right token
res[iteration_number+1] 
    = case when to_number(token1[iteration_number]) > to_number(token2[iteration_number]) then
      -- do the switch
      regexp_replace(
             regexp_replace(res[iteration_number],'[^\|]+',token2[iteration_number],1,ele_pos1[iteration_number])
             ,'[^\|]+',token1[iteration_number],1,ele_pos2[iteration_number])
      else res[iteration_number] -- no switch needed
      end ,

The left hand side of the rule “res[iteration_number+1]” creates a new row for the res column. The right hand side of the rule references data from the previous row “token1[iteration_number]” to do the switch logic.

If data from the same row is needed, it is possible to use “token1[cv()]”. cv() stands for cell value of the current dimension. It is possible to calculate with those values. So instead of token1[iteration_number] we could also write “token1[cv()-1]” to fetch the value from the previous row. For iterative models I find it more convenient to stay consistent and use iteration_number instead.

This rule implements rule 1 (switch tokens) from the bubble sort logic. However it is not a 1:1 matching of rules. We can see this when looking at the next rule.

Instead the logic needs to be implemented for each column. To calculate the next position a similar case construct is needed.

-- calculate next position (for token1)
ele_pos1[iteration_number+1] 
       = case
         when ele_pos1[iteration_number] = 1 then
           -- we reached first position, so go back to hwm
           hwm[iteration_number]
         when to_number(token1[iteration_number]) > to_number(token2[iteration_number]) then
           -- after a switch, move one position to the left and check there
           ele_pos1[iteration_number] - 1
         when ele_pos1[iteration_number] + 1 <= max_element[0]-1 then
           -- no more switch, so go to hwm and look for next element
           hwm[iteration_number]
         else 
           -- just in case, don't do anything
           ele_pos1[iteration_number]
         end

Bubble sort rules 2-4 are implemented by a case construct.

As we can see in this example, the rules from the model clause do not match our business rules. Sometimes they do, but more often they do not.

conclusion

The Model clause can open up new ways to solve SQL puzzles. Bubble sort is a good candidate for an iterative model.

For real world cases the Model clause is hardly maintainable. Only if we find a solid way to match business rules to model clause rules do we have a good way to react to future changes of those business rules.

Sometimes the model clause can provide a performance advantage, because of the way the data is handled.

Model clause is a tool in our toolbox, although an extremely sophisticated and complex one. We need to train using this tool on a regular basis.

btw: Some of the constructs from the model clause can be rediscovered in other statements. For example the new analytic views in 12.2 also have a “dimension by” and a “measures” part. But the rules are missing (one could argue that attribute dimensions and hierarchies resemble the rules).

is {JSON}, or not is {JSON}, that is the question

Whether ’tis nobler in the mind to suffer
The [] and “” of outrageous json,
Or to take Arms against a Sea of troubles,
And by opposing {} end them: to select, to browse
No more; and by a browse, to say we end
the json array, and the thousand json objects
that Flesh is heir to? ‘Tis a consummation
devoutly to be wished. To select, to browse…

Intro

So I have this logging table where sometimes the message can be a json document. Most of the time it is an error message or some tracing information. But sometimes I want to show what data is currently inside a plsql collection. I wrote a small conversion function that returns a clob which should contain JSON. This json ends up as a message in my logging table.

Inside an apex application I added the possibility to show this JSON in a modal page. I use the JSONView plugin, although most browsers now have a built-in JSON display capability.

I detect if the message contains json data by using the 12.1 IS JSON operator.

In case it is json, I provide a link to the modal page and print the data there.

case 
  when LOG_MESSAGE is json 
  then
    '<a href="'||
    apex_util.prepare_url('f?p='||v('APP_ID')||':123:'||v('APP_SESSION')
    ||'::::P123_LOG_ID:'||to_char(LOG_ID))
    ||'" title="show JSON data">{JSON}</a>'
else 
  '<pre>'||LOG_MESSAGE||'</pre>'
end as log_message_enhanced

The modal page uses the application/json mime type.

The result looks similar to this:

modal_json_ok

How to build such an apex page or how to convert a plsql collection to json is not part of this blog post. If you are interested in that, please leave a comment and I might consider publishing how to do so.

Problem

For certain documents the browser was not able to show the json. Instead it returned an error message and the json in text format. Here is an example.
modal_json_error

Sorry for the German message, but I was not able to easily switch my Firefox to English. The error essentially says: “We could not parse the json, it seems the document does not comply with the specification.”

So the database says it is json, but the browser says it is not!

What is going on? That is the question.

Solution

The reason for this behaviour is that Oracle’s IS JSON check uses the lax json syntax (by default). Lax json allows several things; among others it allows a trailing comma after the last member of an object. Exactly my issue.

Here is a basic SQL demonstration. Note the comma after “Larry” inside the json object.

select * from dual
where '[{"index":1,"name":"Larry",}]' 
      is json;
DUMMY
-----
X

So it is JSON. But only lax json.

Fortunately we can also check against the more strict json interpretation.

select * from dual
where '[{"index":1,"name":"Larry",}]' 
      is json (STRICT);
No rows selected.

Can you spot the difference? The STRICT keyword, including the parentheses, tells the database that the document needs to conform to the stricter specification.

This is of course documented: About Strict and Lax JSON Syntax

The main differences are

  • STRICT: each JSON field and each string value must be enclosed in double quotation marks (“).
    LAX: An object literal can also be enclosed in single quotation marks (‘).
  • LAX: Case variations of the keywords true, false, and null are allowed (for example, TRUE, True, TrUe, fALSe, NulL).
  • LAX: An extra comma (,) after the last element of an array or the last member of an object (for example, [a, b, c,], {a:b, c:d,}).
  • LAX: Numerals
    • with one or more leading zeros (for example, 0042.3).
    • Fractional numerals that lack 0 before the decimal point (for example, .14 instead of 0.14).
    • Numerals with no fractional part after the decimal point (for example, 342. or 1.e27).
    • A plus sign (+) preceding a numeral, meaning that the number is non-negative (for example, +1.3).

But there are more differences.

Using the (STRICT) keyword solved my problem. Only those json documents that could be shown in the browser were linked. The others were rendered as normal text.
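Applied to the query from the intro, only the STRICT keyword needs to be added:

case 
  when LOG_MESSAGE is json (STRICT)
  then
    '<a href="'||
    apex_util.prepare_url('f?p='||v('APP_ID')||':123:'||v('APP_SESSION')
    ||'::::P123_LOG_ID:'||to_char(LOG_ID))
    ||'" title="show JSON data">{JSON}</a>'
else 
  '<pre>'||LOG_MESSAGE||'</pre>'
end as log_message_enhanced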

Conclusion

Be aware that there are slightly different JSON specifications available.

If you want to show json inside a browser, then make sure the json document conforms to the strict json specification.

If you want to use json inside javascript, then probably the lax version is better suited.

10 Oracle plsql things you probably didn’t know

Many people enjoyed reading my last blog post “10 Oracle SQL features you probably didn’t know”. So I decided to spice it up a little more and do something similar for plsql.

I hope you like that one too.

Without further ado, let’s get started with the list.

10. The first Oracle version to feature plsql was Oracle DB version 6 (1988)

And no. Steven Feuerstein did NOT invent it.

At that time PLSQL did not have stored procedures nor did it have proper exception handling. But it already had embedded SQL.

I learned that from the great Lewis Cunningham. One of the godfathers of development with SQL and PLSQL.

Stored procedures were added in Oracle 7 (1992). 7.3 was the version with which I started to work on an Oracle database. At that point plsql was in version 2.x. However there never was a version 3. Plsql versioning jumped to 8 when Oracle DB version 8 was introduced, and plsql versioning was aligned with the db versions. So there are no plsql versions 3-7. But honestly? Nobody cares anymore that plsql has its own versioning.

9. labels do not need to match

We can use <<labels>> in plsql. Mostly to increase readability of code. This is especially useful for loop constructs, but it also works for normal begin..end blocks.

begin
 <<mainloop>>
 loop
   <<check_some>>
   begin 
     <<dummyloop>>
     for r in (select * from dual) loop
       -- do nothing
       null;
     end loop dummyloop;
   end check_some;
   exit when 1=1;
 end loop mainloop;
end; 
/

As we can see there are several <<labels>>. And the usage of those labels at the “end” helps to distinguish which code part we are looking at [1].

But this is only as good as the programmer is!

Unfortunately this works too:

begin
 <<mainloop>>
 loop
   <<check_some>>
   begin 
     <<dummyloop>>
     for r in (select * from dual) loop
       -- do nothing
       null;
     end loop mainloop;
   end check_sum;
   exit when 1=1;
 end loop dummyloop;
end; 
/

Here I mixed up the labels of the two loops. And the “end check_sum” does not match the label at the beginning of the block. In fact the “label” at the end can be anything that is not a reserved word.

It runs identically to the previous code (still doing nothing). But it is way more confusing for the “future me” who has to maintain this mess.

8. pragma SERIALLY_REUSABLE

During the lifetime of a session, the package state (package variables, open cursors, etc.) is held in the UGA (user global area). Subsequent calls in the same session to the same package profit from that by not needing to reinitialize the package state.

The pragma SERIALLY_REUSABLE is able to change this behaviour.

serially_reusable packages

After the work unit (server call) of a SERIALLY_REUSABLE package completes, Oracle Database does the following:

  • Closes any open cursors.
  • Frees some nonreusable memory (for example, memory for collection and long VARCHAR2 variables)
  • Returns the package instantiation to the pool of reusable instantiations kept for this package.

Essentially this means, that the package state exists only during the package call. Not for the whole session.

So far I never had the need to use this pragma. But I can imagine some very very special situations, where this might become interesting.
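Here is a minimal sketch that shows the effect (package and variable names are my own). The pragma needs to be in the spec and in the body. The package variable does not keep its value between two server calls:

create or replace package pkg_serial is
  pragma serially_reusable;
  g_counter number := 0;
  procedure increment;
end pkg_serial;
/

create or replace package body pkg_serial is
  pragma serially_reusable;
  procedure increment is
  begin
    g_counter := g_counter + 1;
    dbms_output.put_line('counter = '||g_counter);
  end increment;
end pkg_serial;
/

set serveroutput on
execute pkg_serial.increment;
counter = 1
execute pkg_serial.increment;
counter = 1

Without the pragma the second call would print counter = 2.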

7. You can compile a package even while another session is running it

In general this is not possible. Assume a package is currently running. Or to say it in technical terms: we have an active session (Session A) executing a packaged procedure.

A second session (Session B) trying to do an “ALTER PACKAGE … COMPILE [BODY]” would wait until session A finishes and would then try to compile the package. Upon success, session A will get a “package state has been discarded” error message as soon as it tries to run the same package once again. The next try to run the package would then succeed and use the new package version. At that time the package state was discarded from the session and the new instantiation can be loaded.
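In Session A the second run then fails with an error stack similar to this one (exact messages vary by version; MYUSER is a placeholder):

execute myPck.runMinute;
ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package body "MYUSER.MYPCK" has been invalidated
ORA-04065: not executed, altered or dropped package body "MYUSER.MYPCK"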

This is all documented and well known behaviour.

We face three potential issues with this

  1. Session B needs to wait
  2. Session A might get an error
  3. Session B might wait so long that the developer decides to kill the client (closing SQL developer) thereby making everything 10 times worse, because the database compile call still is valid on the database session level – blocking all following attempts to run or compile the package.

EBR to the rescue!

Using Edition Based Redefinition (EBR) we can circumvent those issues.

Both sessions just need to use different editions. EBR allows us to store and run different code versions of the same plsql based object in the same database.


preparation

First create a package with a long running procedure. My example uses a procedure that runs for exactly 1 minute.


create or replace package myPck is
  procedure runMinute;
end myPck;
/

create or replace package body myPck is
   procedure runMinute is
   begin
     sys.dbms_lock.sleep(60);
   end runMinute;
end myPck;
/


Set up an edition DEV$ALPHA that is a child of the default edition (ORA$BASE).


create edition DEV$ALPHA;


You need an edition enabled schema to do this.

This is simple to do, but to explain EBR in more detail is beyond the scope of this blog post.
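As a rough sketch, the one-time setup looks like this (run as DBA; MYUSER is a placeholder for your schema):

-- allow the schema to own editioned objects (a one way trip!)
alter user myuser enable editions;

-- allow the schema to use the new edition
grant use on edition DEV$ALPHA to myuser;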

Example scenario

Let’s run a few commands in two different sessions.

Session A represents a USER/TESTER who currently executes the packaged procedure.

Session B represents a DEVELOPER who wants to deploy a new version of the package.

Session A does this

set time on

alter session set edition=ORA$BASE;

execute myPck.runMinute;

Session B was started already. After the execute in Session A, run the following script in Session B.

alter session set edition=DEV$ALPHA;

-- add a new procedure to the package

create or replace editionable package myPck …
/

create or replace editionable package body myPck …
/

-- run the new procedure

exec myPck.run5secs;

The result: Session B finishes way before Session A completes its 1 minute run.

See screenshot:

ebr_run_sessions

q.e.d.

6. call a (pipelined) table function without the TABLE operator

It works only from 12.2 onwards. It is more of a SQL feature than a plsql one.

This will make a table function look indistinguishable from a parametrized view.

Example: split_string

First let’s create a simple little table function. This one just converts a delimited list into rows.

create or replace function split_string
 (p_str IN VARCHAR2
 ,p_delimiter IN VARCHAR2 default ','
 ) RETURN sys.odcivarchar2list PIPELINED 
IS
/** Function to split strings based upon delimiter
*
* @author Sven Weller
*
* @param p_str input string 
* @param p_delimiter delimiter string, default = ','. The delimiter should only be 1 char.
* @return list of strings
*
*/
 v_entry varchar2(4000);
 v_remaining_str varchar2(4000);
BEGIN
  -- input string needs to hold something to be able to split
  if p_str is not null then
     <<steps>>
    for i in 1..regexp_count(p_str,'\'||p_delimiter)+1 loop
      -- search + split
      v_entry := rtrim(regexp_substr(p_str,'[^\'||p_delimiter||']*('||p_delimiter||'|$)',1,i),p_delimiter);
      pipe row(v_entry);
    end loop steps; 
  else raise no_data_found; 
  end if; 

END split_string;
/


function created.

In 11g we call the function like this:

select * from TABLE(split_string('A:BB::CCC',':'));

In 12.2 we can now call it like this:

select * from split_string('A:BB::CCC',':');

As you can see the TABLE row source operator is gone. And it still works! The results of both statements are identical.

COLUMN_VALUE
A
BB
CCC

Want to test it? I made an example on livesql.com.

At the moment this is an undocumented 12.2 feature. So don’t use it for production code (yet). I quite like it. Less code is better! It might become some de-facto standard (similar to connect by level) and eventually will make it into the documentation.

5. dot notation for parameters

We can refer to parameters using the name of the module that declared them. This is useful when we need to distinguish a parameter from a column name.

example


create or replace function myFancyFunc (dummy in varchar2) return number
is
  ret number := 0;
begin
  begin
    select 1 into ret
    from dual
    where dummy = myFancyFunc.dummy
    and rownum = 1;
  exception
    when no_data_found then null;
  end;
  return ret;
end myFancyFunc;
/

Function MYFANCYFUNC compiled

select myFancyFunc('X') from dual;

1

select myFancyFunc('Y') from dual;

0

The function simply compares the value in the dummy column of the dual table with the value we input. If instead we just compared dummy=dummy, we would always get 1 as a result, no matter what the input is. Even if we add an alias to the table and prefix the column with the alias, the non-aliased “dummy” will still be interpreted as a column.

This behaviour is documented: Oracle Doc 12.1 – plsql name resolution

If a SQL statement references a name that belongs to both a column
and either a local variable or formal parameter,
then the column name takes precedence.

Interestingly we can also use block-level labels to qualify variables that are declared in that block.

set serveroutput on
<<main>>
declare
  dummy varchar2(10) := 'Y';
begin
  <<block1>>
  declare
    dummy varchar2(10) := 'X';
  begin
  <<block2>>
    declare
    dummy varchar2(10) := 'A';
    begin
      select dummy
      into dummy
      from dual
      where dummy = block1.dummy;

      dbms_output.put_line('MainBlock:'||main.dummy);
      dbms_output.put_line('Block1:'||block1.dummy);
      dbms_output.put_line('Block2:'||block2.dummy);
    end block2;
  end block1;
end main;
/

PL/SQL procedure successfully completed.

MainBlock:Y
Block1:X
Block2:X

Without dot notation the innermost variable (block2) is used – as we can see in the INTO part. And with the dot notation we can reference a variable of the same name from a “higher” declaration.


4. variable names can be emojis

example

set serveroutput on
declare
  "💩" exception;
  pragma exception_init("💩", -20001);

  "⌚" timestamp := systimestamp;
  "🕑" interval day to second;
  "🎲" number;
  "💤" number := 2;
begin
  "🎲" := round(dbms_random.value(1,6));
  for "🔜" in 1.."🎲" loop
    dbms_lock.sleep("💤");
  end loop;
  "🕑" := systimestamp - "⌚";
  dbms_output.put_line('Slept for '|| "🕑");
exception
  when "💩" then
    dbms_output.put_line('Sorry something bad happened!');
    raise "💩";
end;
/

PL/SQL procedure successfully completed.

Slept for +00 00:00:08.049000

The source code looks a little different in SQL Developer. But trust me, I simply copy&pasted it from there to here.

emojicode

To make this work you need to use a font that supports emojis/symbols. I used the font “Segoe UI Symbol”. It is supposed to look better on Windows 10[3].

If you are a hard core emoji lover then I suggest to have a look at emojicode.org

It is an emoji based programming language. Which did not make it into the esoteric programming languages list (yet). Ook? Ook!


3. variables can be made mandatory (NOT NULL)

Check out the NOT NULL keyword during the variable declaration.

declare
  v_index number not null := 0;
begin
  v_index := 1;
  v_index := null;
end;
/

Error report –
ORA-06550: line 5, column 14: PLS-00382: expression is of wrong type
ORA-06550: line 5, column 3: PL/SQL: Statement ignored

The error message is a bit vague about what happened, but it is very exact where it happened (line 5, column 14). And what do we see there? A NULL expression.

The expression is of the wrong type, because we added a NOT NULL constraint to the number type that was used. For more complex cases we can create our own sub types and use them. But if we just want to make sure that we never have to consider null cases for this variable, then this is a possible way.
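A quick sketch of such a sub type (the name counter_t is my own):

declare
  subtype counter_t is number not null;
  v_index counter_t := 0;  -- an initial value is now mandatory
begin
  v_index := v_index + 1;
end;
/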

Link to plsql documentation

Currently there are no other such constraints that we can use. I could imagine that with the potential arrival of SQL assertions, this might become a hot topic in plsql too.

2. you can “hack” dbms_output

Warning! This is dangerous. It might break some (poorly written) code that resides in the same schema. Do it at your own risk! It is also hilariously funny to do to your fellow coworkers on April Fools’ Day. I mean, they shouldn’t use dbms_output anyway. That will teach them!

I start the example by showing the behaviour first, then the code that produces this result.

behaviour


create or replace procedure doSomething is
  v_dummy dual.dummy%type;
begin
  select dummy into v_dummy from dual where 1=2;
exception
  when others then
    dbms_output.put_line(sqlerrm);
end doSomething;
/

Now we run the module a couple of times and want to see the output. We should expect a NO_DATA_FOUND error message.

set serveroutput on

execute doSomething;
execute doSomething;
execute doSomething;
execute doSomething;
execute doSomething;

Surprisingly instead of the error message we get something like this.

PL/SQL procedure successfully completed.

Wrong usage of DBMS_OUTPUT detected.

PL/SQL procedure successfully completed.

Make Databases Great Again!

PL/SQL procedure successfully completed.

ORA-01403: no data found

PL/SQL procedure successfully completed.

Make Databases Great Again!

PL/SQL procedure successfully completed.

So tell me what you want, what you really, really want

Omg! What is going on here?

Well, here is the catch. We can “overload” dbms_output in our own schema. Then our package is called and not the original package from sys.

source code

create or replace package dbms_output
as
  procedure enable(BUFFER_SIZE number default null);
  procedure put_line(A in varchar2);
  procedure GET_LINE(LINE out VARCHAR2,STATUS out integer);
  procedure GET_LINES(LINES out sys.dbms_output.CHARARR,NUMLINES in out NUMBER);
  procedure GET_LINES(LINES out sys.DBMSOUTPUT_LINESARRAY,NUMLINES in out NUMBER);
end dbms_output ;
/

create or replace package body dbms_output
as
procedure enable(BUFFER_SIZE number default null) is
begin
  sys.dbms_output.enable(BUFFER_SIZE);
end;

function getRandomQuote (A in varchar2) return varchar2
is
  type quotes_t is table of varchar2(4000) index by binary_integer;
  v_quotes quotes_t;
  v_random binary_integer;
begin
  v_quotes(1) := 'You are hacked by the Chinese';
  v_quotes(2) := 'Wrong usage of DBMS_OUTPUT detected.';
  v_quotes(3) := 'System failure. Get away from keyboard';
  v_quotes(4) := 'Close all windows! NOW!';
  v_quotes(5) := 'Make Databases Great Again!';
  v_quotes(6) := A; -- sometimes return the correct text
  v_quotes(7) := A; -- sometimes return the correct text
  v_quotes(8) := 'So tell me what you want, what you really, really want';
  v_quotes(9) := 'None but ourselves can free our minds.';
  v_quotes(10) := 'Let there be light!';
  v_random := round(dbms_random.value(1,v_quotes.last));

  return v_quotes(v_random);

end getRandomQuote;

procedure put_line(A in varchar2) is
begin
  sys.dbms_output.put_line(getRandomQuote(A));
end;

procedure GET_LINE(LINE out VARCHAR2,STATUS out integer)
is
begin
  sys.dbms_output.GET_LINE(LINE,STATUS);
end;

procedure GET_LINES(LINES out sys.dbms_output.CHARARR,NUMLINES in out NUMBER)
is
begin
  sys.dbms_output.GET_LINES(LINES,NUMLINES);
end;

procedure GET_LINES(LINES out sys.DBMSOUTPUT_LINESARRAY,NUMLINES in out NUMBER)
is
begin
  sys.dbms_output.GET_LINES(LINES,NUMLINES);
end;
end dbms_output ;
/

How does it work?

Because of how SQL name resolution works, the DBMS_OUTPUT package in our schema takes precedence over the public synonym for the DBMS_OUTPUT package from the sys schema.

Our put_line pushes the changed text to the normal buffer mechanism. The get_line/get_lines procedures simply delegate to the original package, so the client can fetch the output as usual.

How can we avoid it?

Best is not to use DBMS_OUTPUT in real production code. It is a nice quick debugging tool. But not more than that.

Also, if you always prefix dbms_output with the SYS schema, then the original logic is called.
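For example this call always reaches the original package, no matter what resides in our schema:

begin
  sys.dbms_output.put_line('this always calls the original package');
end;
/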

1. when others does not catch all exceptions

example

set serveroutput on 
declare
  e_cancelled exception;
  pragma exception_init(e_cancelled, -1013);
begin
  begin
    raise e_cancelled;
  exception
    when others then
      dbms_output.put_line('EXCEPTION OTHERS');
  end;
end;
/

ORA-01013: user requested cancel of current operation
ORA-06512: at line 6

We still see an exception, but not the dbms_output buffer output!

This needs some explanation.

There is a very limited set of exceptions that will not be captured by the WHEN OTHERS handler. We need to look closely and understand the exception itself to comprehend why this is a good thing.

Here ORA-01013 is the “user requested cancel of current operation” exception. Essentially it means somebody pressed “CTRL+C” while running the code. In almost all environments this means: stop doing whatever you do, immediately! Or in more technical terms: it is an interrupt to the OS process running your command. Same as executing “kill -2” (kill -SIGINT) in a *nix environment (the-3-most-important-kill-signals-on-the-linux-unix-command-line). Even if the process is allowed to ignore the command, it shouldn’t do so by default.

ORA-01013 can sometimes also be the result of a timeout. Where the client is waiting for a response and after some time sends this as a timeout signal to the database session.

We are allowed to capture this exception and write a special handler for it.

set serveroutput on
declare
  e_cancelled exception;
  pragma exception_init(e_cancelled, -1013);
begin
  begin
    raise e_cancelled;
  exception
    when others then
      dbms_output.put_line('EXCEPTION OTHERS');
  end;
exception 
  when e_cancelled then
    dbms_output.put_line('OPERATION CANCELED');
END;
/

PL/SQL procedure successfully completed.

OPERATION CANCELED

Nothing to worry about. Just nice to know.

Please note: This example will behave differently in older outdated db versions. I think it was introduced as a fix for bug#12838063 in 11.2.0.4.

Other exceptions that are not handled include “ORA-03113: end-of-file on communication channel”.

But not “ORA-06508: PL/SQL: could not find program unit being called”. This one was supposed to go through “when others”, but testing on 12.2.0.1 revealed that it is captured.


Footnotes


1. This is probably the only bug free code I ever wrote. It was meant to do nothing and it does that exceptionally well![2]
2. There might be room for some performance improvement. Allowing us to do nothing even faster.
3. On windows 10 💩 is supposed to look like 🔝💩.


10 Oracle SQL features you probably didn’t know

I cannot guarantee that you have never heard of these. But the majority of developers don’t know or don’t seem to know. And honestly – most of this information is fairly useless – and mostly harmless.

10. Why we have the CONCAT function

There is a concat function that can be used instead of the concat operator ||.

Nobody uses that! But there is a reason why it exists.

This is from the 8.1.7 Oracle documentation:

On most platforms, the concatenation operator is two solid vertical bars, as shown in Table 3-3. However, some IBM platforms use broken vertical bars for this operator. When moving SQL script files between systems having different character sets, such as between ASCII and EBCDIC, vertical bars might not be translated into the vertical bar required by the target Oracle environment. Oracle provides the CONCAT character function as an alternative to the vertical bar operator for cases when it is difficult or impossible to control translation performed by operating system or network utilities. Use this function in applications that will be moved between environments with differing character sets.

So we have this function because, in the past, || was not consistent across platforms using different character sets. Since nobody seems to use IBM EBCDIC anymore, there is no real need to use the CONCAT function.
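For the record: CONCAT accepts exactly two arguments, so it needs to be nested for longer concatenations.

select concat(concat('Hello',' '),'World') as greeting
from dual;

GREETING
-----------
Hello World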

9. select UNIQUE

Instead of doing a SELECT DISTINCT you can do SELECT UNIQUE.

But you shouldn’t. None of them. Forget I mentioned this.

Ok here is an example.

Example

select unique e.deptno from scott.emp e;

DEPTNO
-----
30
20
10

It is also possible to do SELECT COUNT(UNIQUE …) . Not sure when that one was introduced, but it seems to work now.

Sven says: “SELECT DISTINCT|UNIQUE should be considered a bug in real production code.”

I have yet to find an example where SELECT DISTINCT is needed. More likely there is a bug in the data model or missing joins in the where clause. GROUP BY or sometimes EXISTS are the better long term alternatives.

Using SELECT DISTINCT is absolutely fine for developer ad-hoc queries.

I feel this whole concept is a bit inconsistent. 12c introduced the new approximation function APPROX_COUNT_DISTINCT, but there is no APPROX_COUNT_UNIQUE!

8. there is an ANSI datatype DATE

The ANSI datatype DATE does not hold any time information (no hours, minutes, seconds). That is what the ANSI datatype TIME is for. Oracle does not officially know these two datatypes.

However:

EXTRACT function

Purpose

EXTRACT extracts and returns the value of a specified datetime field from a datetime or interval expression. The expr can be any expression that evaluates to a datetime or interval data type compatible with the requested field:

  • If YEAR or MONTH is requested, then expr must evaluate to an expression of data type DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, or INTERVAL YEAR TO MONTH.
  • If DAY is requested, then expr must evaluate to an expression of data type DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, or INTERVAL DAY TO SECOND.
  • If HOUR, MINUTE, or SECOND is requested, then expr must evaluate to an expression of data type TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, or INTERVAL DAY TO SECOND. DATE is not valid here, because Oracle Database treats it as ANSI DATE data type, which has no time fields.
  • If TIMEZONE_HOUR, TIMEZONE_MINUTE, TIMEZONE_ABBR, TIMEZONE_REGION, or TIMEZONE_OFFSET is requested, then expr must evaluate to an expression of data type TIMESTAMP WITH TIME ZONE or TIMESTAMP WITH LOCAL TIME ZONE.

So essentially the doc says, that EXTRACT uses an ANSI DATE as an expression input.

Proof

select extract(day from sysdate) from dual;
21

select extract(second from sysdate) from dual;
ORA-30076: invalid extract field for extract source

select extract(second from systimestamp) from dual;
4.823

The error appears because sysdate is kind of converted into an ANSI DATE, which does not hold time information.


Problem is: we just cannot use this ANSI date type for normal activities.

Alternative for the ANSI date: TRUNC(datetime). In 12c this can even be a virtual column.
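A sketch of the 12c virtual column variant (table and column names are my own):

create table swe_ansi_date_demo
( dt       date
, day_only date generated always as (trunc(dt)) virtual
);

The column day_only then behaves like an ANSI DATE: it never carries time information.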

Problem solved. ✔


7. group by ()

You can group by an empty pair of parentheses. Or let’s call it grouping by the FULL SET. This is in fact useful in some very specific cases.

It gives us the ability to get a NO_DATA_FOUND error if there is no data at all. Oracle guarantees that an aggregation select without a group by will always return a record. GROUP BY () can be added if you want to change this default behaviour on purpose.

Example

Let’s say we want to count how many employees are in one specific department.

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'OPERATIONS';

EMP#
----
0

Since there are no employees in operations, this returns 0. This is correct and the information I want.

More specifically this query returns one row, but the e.empno is null and therefore is not counted. Count(*) would have returned 1!

But if we use a department name that does not even exists, then this will also return 0.

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'FANCY_CLOUD_SPACE';

EMP#
----
0

Not correct! In this case I prefer to get a no data found error.

Grouping on the full set gives us this option.

select count(e.empno) emp#
from scott.dept d
left join scott.emp e on e.deptno=d.deptno
where d.dname = 'FANCY_CLOUD_SPACE'
group by ();

no rows found.

Btw: the SQL Developer syntax (pre-)parser flags that () as a syntax error. But it executes successfully.

I have to admit that in this case it would be much better to group by the department. But maybe we want the count for multiple departments using an IN list; then grouping on department level would not help much.


Btw: GROUP BY NULL does the same thing, and might be less confusing for some developers. I personally prefer GROUP BY () because the () is also used in GROUPING SETS.


6. + vs. -

In some edge cases using a different operator (+|-) leads to disturbing results.

Example: Subtracting a tiny number from 1 does not equal 1. But adding a tiny number to 1 does equal 1!

select * from dual
where 1 = 1-0.000000000000000000000000000000000000001;

No data found!

This is expected behaviour…

select * from dual
where 1 = 1+0.000000000000000000000000000000000000001;

DUMMY
-----
X

Wat? Why do we get a result?!

The solution is simple. We overstepped the maximum precision of the number datatype when doing the addition. Max precision is 38 digits. The addition would require a precision of 39. Instead the result is rounded (or truncated) to a precision of 38 digits, which happens to equal 1. The subtraction result is still in the 38-digit precision range and therefore exact (very slightly less than 1).

Don’t confuse precision with maximum value! Precision is the number of significant digits that we can store. These can then be shifted by an exponent to the left or right for very large or very small values.


To put this into perspective

That precision is so extremely high that we have major trouble visualizing it. So here is a comparison. The comparison data comes from this hugely interesting site: http://money.visualcapitalist.com/all-of-the-worlds-money-and-markets-in-one-visualization/

The complete monetary assets in the whole world are estimated at about 80 trillion $ (broad money = coins, banknotes, savings accounts, time deposits, …). That is 80,000,000,000,000, or an 8 with 13 zeros at the end, or 8e13 in scientific exponential notation.

select 8e13 from dual;

Considering all the national and private debts we get even higher numbers. Here we reach about 200 trillion $.

And if we consider derivatives (= high risk gambles) then the high estimations go as far as 1.2 quadrillion $. That is a 1 followed by 24 more digits (mostly zeros), or 1.2e24.

Since all this is expressed in dollars, we can increase the number even further by choosing a different currency. For example the Nepalese Rupee is worth a little less than 0.01 dollar. So all the world’s derivatives could be expressed in Nepalese Rupees using a number as big as 1.2e26.

That means using a number with a precision of 38 we can easily express all the money in the world, including derivatives, down to the very last Nepalese Rupee. And we have still only used about 2/3 of the maximum available precision. Or to say it differently: if we duplicated planet Earth 1,000,000,000,000 (1 trillion) times, we could still give the exact amount of money on all planets in Nepalese Rupees, down to the very last Rupee, using the number datatype.

That’s quite impressive!

Btw. The biggest number that can be expressed using the NUMBER datatype is 999…(38 nines) x 10^125. Which is way, way bigger than anything we discussed so far.


5. instead of IN we can use = ANY

select 'TRUE'
 from dual
 where 1 = ANY (1,2,3,4,5)
TRUE

There are several similar unusual “replacement” options.

e.g.

  • instead of != we can also use ^=.
  • NOT IN is the same as  != ALL
  • SOME is a synonym for ANY – but only in sql!
  • <=LEAST | >=GREATEST can be replaced by <|>= ALL
  • >=LEAST | <=GREATEST can be replaced by >|<= SOME

I expect that SOME people do know about these operators, since questions about them are part of the SQL expert exam. So from ALL experts who took the exam at LEAST the few who passed should know SOMEthing about ANY.

The last replacement differs in how NULL values are treated. In theory there could be cases where this comes in handy. So far I never had such an encounter.

Example

with testdata
as (select 1 a, 2 b, 3 c, 4 d from dual union all
   select 5 a, 4 b, 3 c, 2 d from dual union all
   select 1 a, null b, 2 c, null d from dual union all
   select null a, null b, null c, null d from dual)
select * from testdata
where 3 >= LEAST(a,b,c,d);


A B C D
-------
1 2 3 4
5 4 3 2


with testdata
as (select 1 a, 2 b, 3 c, 4 d from dual union all
    select 5 a, 4 b, 3 c, 2 d from dual union all
    select 1 a, null b, 2 c, null d from dual union all
    select null a, null b, null c, null d from dual)
select * from testdata
where 3 >= ANY(a,b,c,d);


A B C D
-------
1 2 3 4
5 4 3 2
1 - 2 -

4. external table on multiple files

External tables can load multiple files at once! As long as they have the same structure, of course. So here is an example.

CREATE TABLE EXT_DUMMY
(
    "RECORDTYPE" VARCHAR2(100 BYTE), 
    "COL1" VARCHAR2(100 BYTE), 
    "COL2" VARCHAR2(100 BYTE) 
)
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_LOADER DEFAULT DIRECTORY "IMPORT_BAD_FILE" 
    ACCESS PARAMETERS ( 
             records delimited BY newline 
             FIELDS TERMINATED BY ';' 
             MISSING FIELD VALUES ARE NULL 
               ( RECORDTYPE CHAR
               , COL1 CHAR 
               , COL2 CHAR 
               ) 
    ) 
    LOCATION ( 'Testfile1.txt', 'Testfile2.txt' )
)
    reject limit 10
;

When we then do a select on the external table, we will see the data from both files.

We can also switch the external table to a different file.

alter table EXT_DUMMY location ('Testfile3.txt' )


For the experts: We can use the ROWID to find out which file was used to load a specific record. Special thanks go to OTN-forum member Odie_63 who found this solution. More info in this old OTN forum thread.

with ext_loc as (
      select position-1 as pos
           , name as filename
      from sys.external_location$
      where obj# = ( select object_id
                     from user_objects
                     where object_name = 'EXT_DUMMY' )
    )
select x.filename,
       t.*
from EXT_DUMMY t
join ext_loc x 
on x.pos = to_number(regexp_substr(dump(t.rowid,10,9,1),'\d+$'))
;

It’s a very clever piece of SQL. Essentially it extracts the file position from the rowid, looks up the matching file name in the data dictionary and combines that with our data set.

This select was done in a 10g database. In 12c we can probably use dbms_rowid to do the same, instead of regexp_substr(dump(rowid)).

3. insert default values

We can insert into a table using the “default” keyword to force default behaviour for this column.

Example

-- setup
create table swe_default_test 
(col1 varchar2(10) not null,
 col2 varchar2(10) default 'TEST' not null
 );
Table SWE_DEFAULT_TEST created.

-- test
insert into swe_default_test (col1) values ('X');
1 row inserted.

insert into swe_default_test (col1,col2) values ('X',null);
ORA-01400: cannot insert NULL into ("MYUSER"."SWE_DEFAULT_TEST"."COL2")

insert into swe_default_test (col1,col2) values ('X',default);
1 row inserted.

-- double check
select * from swe_default_test;
COL1 COL2 
---------- ----------
X TEST 
X TEST 

-- cleanup
drop table swe_default_test;
Table SWE_DEFAULT_TEST dropped.


In 12c we now have a new and, I think, better option to do the same. A default column can additionally be defined with “ON NULL”. This has the effect that inserting a NULL value leads to using the default value instead. Of course the “default” keyword still works too.

-- setup
create table swe_default_test12c
(col1 varchar2(10) not null,
 col2 varchar2(10) default on null 'TEST' not null
 );
Table SWE_DEFAULT_TEST12C created.

-- test
insert into swe_default_test12c (col1,col2) values ('X',null);
1 row inserted.

insert into swe_default_test12c (col1,col2) values ('X',default);
1 row inserted.

-- doublecheck
select * from swe_default_test12c;
COL1 COL2 
---------- ----------
X TEST 
X TEST 

-- cleanup
drop table swe_default_test12c;
Table SWE_DEFAULT_TEST12C dropped.

As we can see, both cases now work: the one using a NULL value and also the one using the DEFAULT keyword.

This new 12c “default on null” feature can be used to replace the typical BEFORE ROW INSERT trigger, as sketched below. More info on how to do this in Sequence and Audit columns with Apex 5 and 12c
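A sketch of how such an audit/sequence default could look in 12c (sequence and table names are my own):

create sequence swe_demo_seq;

create table swe_audit_demo
( id         number default on null swe_demo_seq.nextval not null
, created_on date   default on null sysdate not null
, col1       varchar2(10)
);

insert into swe_audit_demo (id, created_on, col1)
values (null, null, 'X');

-- id and created_on are filled automatically, no trigger needed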


2. (1,2) = ((1,2))

We can compare expression lists using the = operator. But the right side of the comparison needs an extra set of parentheses.

select 'TRUE' checked from dual
 where (1,2,3,4,5) = ((1,2,3,4,5));
CHECKED
-------
TRUE

This is just a shorthand form of

1=1 and 2=2 and 3=3 and 4=4 and 5=5

If we do not use the second set of parentheses on the right hand side of the comparison, then we get an error.

select 'TRUE' checked from dual
 where (1,2,3,4,5) = (1,2,3,4,5) ;
ORA-00920: invalid relational operator

This is documented, although a tiny bit difficult to read in the syntax diagrams.

See: SQL Reference – Simple Comparison

syntax_compare_lists_1
syntax_compare_lists_2


1. DUAL is a real table

Although there are some special optimizations in various tools, dual is a real table residing in the sys schema.

select owner, table_name
 from dba_tables
 where table_name = 'DUAL';
OWNER TABLE_NAME
----------------
SYS DUAL


In older database versions it was possible to do an insert into the DUAL table.

Warning! This is extremely dangerous. It will probably break ALL applications in your database.

insert into dual values ('Y');
select mysequence.nextval into v_id from dual;

TOO_MANY_ROWS error!

And nobody captures the TOO_MANY_ROWS exception for selects from dual.

Btw: A working safety mechanism is to add rownum = 1 to your query.

select mysequence.nextval into v_id from dual where rownum = 1;

Fortunately manipulating DUAL does not work anymore in recent versions of the database. I think 12c introduced this restriction. Not even DBAs have the privileges anymore to do this.

ORA-01031: insufficient privileges
*Cause: An attempt was made to perform a database operation without
the necessary privileges.

0. The Oracle SQL Syntax is documented!

Bam! There you have it! You wouldn’t have thought that. Well, you are not alone. So many developers seem to have no clue that this exists or where it exists.

When googling, put “oracle 12c doc” or “oracle 12.2 SQL ref” in front of your search term. It helps to find the proper pages.

12.2 doc portal

12.2 SQL introduction

12.2 SQL Language reference


Special thanks go out to Stefanie R., who gave me the idea for this blog post!



DBA/Dev quick tip: recover a lost package body under EBR

Problem

It can happen that SQL Developer suggests to reload an old package body that you are currently working on. If you click the suggested “autoupdate” then you will lose all current changes that you made to the package body. This just happened to a colleague of mine.

sqldev alert - overwrite package body modifications

It is possible to recover the old code using the flashback feature of the Oracle database. But one has to be quicker than the undo retention period of the database.

If the schema is edition enabled, this complicates things a tiny bit further. We need to make sure to fetch the plsql code from the correct edition.

When we query normal data dictionary views like DBA_SOURCE, we will always see the current edition. DBA_SOURCE_AE shows the code of all editions.

Flashback does not work well with data dictionary views. However we can use the base tables and run a flashback query there to see the old package code.

Action

Three simple statements will do it.

Set your DBA session to use the correct edition (this might not be needed if the object id is found using a slightly different query). The edition needs to match the edition where the source code was originally compiled.

ALTER SESSION SET EDITION="DEV$ALPHA";

Find the object id of the package body

Use the correct name of the package and your edition name

select obj#
from sys."_CURRENT_EDITION_OBJ"
where name = 'MYPACKAGE'
and type# = 11 -- package body
and namespace = 2
and defining_edition = 'DEV$ALPHA';

Flashback query to show the source code for the identified object

select *
from sys.source$
--versions between timestamp (systimestamp - interval '25' minute) and systimestamp
as of timestamp (systimestamp - interval '25' minute)
where obj# = 1906090;


Done.


Aftermath

The starting situation can be reproduced by changing the body and the spec of the same package. Once the spec is recompiled, the body needs to be recompiled too. Then SQL Developer suggests to load the new code (which is in fact the old code). If you click yes, it overwrites the current changes.