Since my previous blog posts "10 Oracle SQL features you might not know" and "10 Oracle plsql things you probably didn't know"
attracted quite some interest, I decided to add some more unknown features. Read carefully: I started to write this blog post on the 1st of April, so there is an easter egg hidden somewhere in this post. If you are not sure, always test and verify for yourself.
10. NVL can also handle unusual datatypes
NVL can also handle unusual datatypes like BOOLEAN, collections and abstract datatypes (ADTs).
set serveroutput on
declare
  b boolean;
begin
  if nvl(b,true) then
    dbms_output.put_line('TRUE');
  else
    dbms_output.put_line('FALSE');
  end if;
end;
/
This is part of the sys.standard implementation.
But since boolean is only supported in PLSQL, we can't do much with it in SQL.
9. secret column name “rowlimit_$$_rownumber”
We shouldn’t use “rowlimit_$$_rownumber” or “rowlimit_$$_total” as a column name.
Here is what could happen:
select dummy as "rowlimit_$$_rownumber" from dual fetch first 3 rows only;
ERROR at line 1:
ORA-00918: column ambiguously defined
The reason for this can be found when we use the new 12c functionality to expand a query. Typically this is used for views, but it can also be applied to some other features, in this case to the logic that implements the LIMIT action.
Special thanks to OTN forum members Solomon Yakobson and padders who pointed at the issue in this thread.
What happens behind the scenes is that the limit clause "fetch first 3 rows" is changed (expanded) into a subquery that adds a second column "rowlimit_$$_rownumber" to the query. This column uses the row_number analytic function and is later used to filter the relevant rows of the LIMIT clause. The error happens because we now have two columns with the same name.
And here is one way to see the expanded code.
set linesize 1000
set longc 1000
set long 1000
variable c clob
exec dbms_utility.expand_sql_text('select dummy from dual fetch first 3 rows only',:c)
print c
SELECT "A1"."DUMMY" "DUMMY"
FROM (SELECT "A2"."DUMMY" "DUMMY",
             ROW_NUMBER() OVER (ORDER BY NULL) "rowlimit_$$_rownumber"
      FROM "SYS"."DUAL" "A2") "A1"
WHERE "A1"."rowlimit_$$_rownumber" <= 3
“rowlimit_$$_total” has the same problem. It appears when we use PERCENT in the limit clause.
select dummy as "rowlimit_$$_total" from dual fetch first 3 percent rows only;
ORA-00918: column ambiguously defined
And if we expand the working query we see the reason.
SELECT "A1"."DUMMY" "DUMMY"
FROM (SELECT "A2"."DUMMY" "DUMMY",
             ROW_NUMBER() OVER (ORDER BY NULL) "rowlimit_$$_rownumber",
             COUNT(*) OVER () "rowlimit_$$_total"
      FROM "SYS"."DUAL" "A2") "A1"
WHERE "A1"."rowlimit_$$_rownumber" <= ceil("A1"."rowlimit_$$_total" * 3 / 100)
The PERCENT keyword requires a total row count, and this total count is then used in the filter.
Fortunately the chance that we name our columns like this by accident is very, very low.
8. Do you know all plsql pragmas?
Pragmas are instructions for the plsql compiler. There are many of them. Here is the list of pragmas I know or have heard about. Not all of them are documented. Not all of them can be used by developers. Several can only be used as SYS and come with additional restrictions, so they are only for Oracle-internal purposes. Still they pique our curiosity.
The documented and not deprecated pragmas are in bold. At least we should know all of those.
PRAGMA AUTONOMOUS_TRANSACTION
One of the most misunderstood things in plsql.
Defines that the plsql logic runs independently from the main transaction.
Typical use case: To log away an error, even if the main transaction is rolled back.
It is not a workaround for mutating table errors!
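A minimal sketch of that logging use case; the table error_log and its columns are made up for illustration:

```sql
create or replace procedure log_error(p_msg in varchar2)
as
  pragma autonomous_transaction;
begin
  -- this insert runs in its own transaction and survives
  -- a rollback of the calling transaction
  insert into error_log(log_time, message)
  values (systimestamp, p_msg);
  commit; -- mandatory: an autonomous block must end its own transaction
end log_error;
/
```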
PRAGMA BUILTIN
Defines SQL builtin functions and operators.
This is an internal pragma for usage in package sys.standard.
PRAGMA COVERAGE
This is a new pragma in 12.2.
The COVERAGE pragma marks PL/SQL source code to indicate that the code may not be feasibly tested for coverage. The pragma marks a specific code section. Marking infeasible code improves the quality of coverage metrics used to assess how much testing has been achieved.
PRAGMA DEPRECATE
Adds a compile-time warning if the object is referenced. The message of the warning can be influenced. This new pragma was introduced in 12.2. We can add it to code that should be replaced. Useful in environments where multiple teams of developers call/reference the same code.
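A sketch of how the pragma is declared; the package and the message text are invented:

```sql
create or replace package old_api
as
  pragma deprecate(old_api, 'old_api will be removed, please use new_api.');
  procedure do_stuff;
end old_api;
/
-- code that references old_api now gets a compiler warning
-- (provided plsql warnings are enabled for the session)
```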
PRAGMA EXCEPTION_INIT
Associates a plsql exception with an Oracle error number.
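A classic use is to give the deadlock error a readable name:

```sql
declare
  e_deadlock exception;
  pragma exception_init(e_deadlock, -60); -- maps ORA-00060 to e_deadlock
begin
  null; -- some DML that might run into a deadlock
exception
  when e_deadlock then
    dbms_output.put_line('deadlock detected, a retry might help');
end;
/
```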
PRAGMA FIPSFLAG
Another internal pragma that is used in package sys.standard.
I guess that the FIPSFLAG pragma has something to do with FIPS from NIST.
FIPS stands for "Federal Information Processing Standards." It's a set of government standards that define how certain things are used in the government; for example, encryption algorithms. FIPS defines certain specific encryption methods that can be used, as well as methods for generating encryption keys. It's published by the National Institute of Standards and Technology, or NIST.
It seems that US-government computers have a FIPSFLAG enabled. Applications that run on these machines need to be FISMA compliant in order to work there.
Also interesting in that context:
PRAGMA INLINE
Turns submodule inlining on or off. Submodule inlining is a plsql compiler feature implemented since 10g. The compiler can rewrite plsql code so that the resulting code runs faster. Among other options the compiler can copy the code from inside a submodule directly to the point where that submodule is called (optimization level 3). This is called inlining. The performance advantage is that the expensive submodule call is avoided. The disadvantage is that the same code is repeated wherever the submodule was originally called. But we do not have to program this ourselves.
So we as developers can follow the DRY (don't repeat yourself) paradigm, and the optimizer tunes this code for performance. The best of both worlds.
PRAGMA INTERFACE
Gateway for internal oracle functions to C libraries.
It is heavily used inside the sys.standard package spec.
--#### interface pragmas
--#### Note that for any ICD which maps directly to a PVM
--#### Opcode MUST be mapped to pes_dummy.
--#### An ICD which invokes another ICD by flipping operands is
--#### mapped to pes_flip, and an ICD whose result is the inverse of
--#### another ICD is mapped to pes_invert
--#### New ICDs should be placed at the end of this list, and a
--#### corresponding entry must be made in the ICD table in pdz7
PRAGMA interface(c,length,"pes_dummy",1);
PRAGMA interface(c,substr,"pes_dummy",1);
PRAGMA interface(c,instr,"pesist",1);
PRAGMA interface(c,UPPER,"pesupp",1);
PRAGMA interface(c,LOWER,"peslow",1);
PRAGMA interface(c,ASCII,"pesasc");
PRAGMA NEW_NAMES
This is an internal pragma that restricts the use of particular new entries in package standard. It is only valid in package standard.
This is an internal pragma that can be added by database machine learning code. So it might appear by random somewhere in your code. If the schema is pokemon enabled you can use this pragma to train your modules to react faster or to eliminate invalid input data. The pragma was introduced in 19.1.4 using the multi lingual engine (MLE). So far it is only available in autonomous databases (cloud first). If your modules have collected enough power they can be combined to overtake and replace other packages during recompilation. The ultimate goal is to remove all bad performing code from the database.
PRAGMA RESTRICT_REFERENCES (RNPS, WNPS, WNDS, RNDS, TRUST)
This is an outdated pragma. I can remember setting this in an Oracle 7.3 database.
It informs the database about the intended scope of the module. An error is raised if this pragma is violated.
RNPS = read no package state
WNPS = write no package state
RNDS = read no database state
WNDS = write no database state
TRUST = trust me, and don't double check whether all dependent objects also behave correctly.
This pragma shouldn’t be needed anymore. Instead make functions DETERMINISTIC.
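For historic completeness, a sketch of how such a declaration looked; package and function names are invented:

```sql
create or replace package finance
as
  function tax_rate(p_country in varchar2) return number;
  -- promise: writes no database state and no package state
  pragma restrict_references(tax_rate, WNDS, WNPS);
end finance;
/
```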
PRAGMA SERIALLY_REUSABLE
Lose all state when the call is finished. Package variables, open cursors and other plsql state are reset when the package is declared with this pragma.
PRAGMA TIMESTAMP
This pragma sets/modifies the timestamp value of a package. Valid only in SYS (and probably only for package standard).
PRAGMA UDF
This pragma can be used if a function is mostly referenced directly inside a SQL statement. It avoids some of the additional overhead during the switch from the SQL to the PLSQL engine. In particular a simplified (less expensive) datatype check is done.
While the udf pragma is really a great performance feature, it is currently very limited. For example the function can only have numeric parameters. If one parameter is a date, then the udf pragma will silently stop working, so we will not get the performance benefit. If you want to improve that behaviour, feel free to vote up this database enhancement idea by @LotharFlatz.
Btw: There are some indications that udf for functions with varchar2 parameters seem to be working in 12.1 but not anymore in 12.2. I didn’t verify this.
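A sketch with numeric parameters only; the function name and tax rate are invented:

```sql
create or replace function add_tax(p_value in number)
  return number
as
  pragma udf; -- optimized for being called from SQL
begin
  return p_value * 1.19;
end add_tax;
/

select add_tax(100) from dual;
```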
Because the compiler already does a good job, the INLINE pragma is usually not needed. In rare cases we might want to enforce inlining even when compiling with optimization level 2.
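A sketch of enforcing it; the pragma goes directly before the statement that contains the call (names are invented):

```sql
create or replace procedure inline_demo
as
  v number;
  function add_one(p number) return number
  is
  begin
    return p + 1;
  end add_one;
begin
  pragma inline(add_one, 'YES'); -- inline the call in the next statement
  v := add_one(41);
  dbms_output.put_line(v);
end inline_demo;
/
```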
How many of the documented pragmas did you know? And how many of the additional ones?
Did you catch them all?
7. LoC limit
There is a limit for how many lines of code (LoC) a plsql object can have.
The limit was increased to 2^26 DIANA (Descriptive Intermediate Attributed Notation for Ada) nodes (~6 million LoC) in Oracle 8i. Before that it was only about 3000 lines of code (2^15 DIANA nodes).
Nowadays there are other limits that are more likely to be encountered, before the LoC limit is reached. See also: https://docs.oracle.com/en/database/oracle/oracle-database/18/lnpls/plsql-program-limits.html#GUID-00966B4C-B9A5-47D4-94AA-54AEBCC07CE9
Remember: compiler optimizations like inlining might increase your lines of code quite a bit.
6. datatype signtype
There is a datatype signtype. It allows only the numbers -1, 0 and 1.
set serveroutput on
declare
  v_val  pls_integer;
  v_sign signtype;
begin
  for i in 1..10 loop
    v_val  := round(dbms_random.value(-5,5));
    v_sign := sign(v_val);
    dbms_output.put_line(v_sign);
  end loop;
end;
/
-1
1
-1
-1
-1
1
0
-1
-1
0
PL/SQL procedure successfully completed.
But this is PLSQL only. In SQL we cannot use this type.
create table test(id number, s signtype);
ORA-00902: invalid datatype
Interesting, but so far I never found a need to use it.
5. functions without begin..end
We can declare functions that do not have a function body (no begin..end block).
create or replace function kommaSepariert(ctx in varchar2)
  return varchar2
  deterministic
  parallel_enable
  aggregate using kommaSepariert_ot;
The secret here is that this function is a user-defined aggregate function that uses an object type, and the type body holds the function logic.
Here is the matching type definition
create or replace TYPE "KOMMASEPARIERT_OT" as object(
  str varchar2(4000),

  static function odciaggregateinitialize(
    sctx in out kommaSepariert_ot)
    return number,

  member function odciaggregateiterate(
    self in out kommaSepariert_ot,
    ctx  in varchar2)
    return number,

  member function odciaggregateterminate(
    self      in kommaSepariert_ot,
    returnval out varchar2,
    flags     in number)
    return number,

  member function odciaggregatemerge(
    self in out kommaSepariert_ot,
    ctx2 kommaSepariert_ot)
    return number);
/

create or replace TYPE BODY "KOMMASEPARIERT_OT" as

  static function odciaggregateinitialize(
    sctx in out kommaSepariert_ot)
    return number
  is
  begin
    sctx := kommaSepariert_ot(null);
    return odciconst.success;
  end;

  member function odciaggregateiterate(
    self in out kommaSepariert_ot,
    ctx  in varchar2)
    return number
  is
  begin
    if self.str is not null then
      self.str := self.str ||',';
    end if;
    self.str := self.str || ctx;
    return odciconst.success;
  end;

  member function odciaggregateterminate(
    self      in kommaSepariert_ot,
    returnval out varchar2,
    flags     in number)
    return number
  is
  begin
    returnval := self.str;
    return odciconst.success;
  end;

  member function odciaggregatemerge(
    self in out kommaSepariert_ot,
    ctx2 kommaSepariert_ot)
    return number
  is
  begin
    if self.str is not null then
      self.str := self.str ||',';
    end if;
    self.str := self.str || ctx2.str;
    return odciconst.success;
  end;

end;
/
Such functions have been used in the past to combine strings. Nowadays we can use LISTAGG.
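For comparison, the LISTAGG version of the same aggregation:

```sql
with testdata as (
  select 'abc' t from dual union all
  select 'def' t from dual union all
  select 'ghi' t from dual union all
  select 'jkl' t from dual)
select listagg(t, ',') within group (order by t) as csv
from testdata;

-- CSV
-- abc,def,ghi,jkl
```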
Here is a quick demonstration of how it works:
with testdata as (
  select 'abc' t from dual union all
  select 'def' t from dual union all
  select 'ghi' t from dual union all
  select 'jkl' t from dual)
select kommasepariert(t)
from testdata;
4. The select clause can influence the number of rows returned
I’m not talking about using select DISTINCT (this is another cruel way where the select clause can change the number of rows returned).
Here is a more surprising situation. Consider those two slightly different queries.
with tbl as (
  select 1 val from dual union all
  select 2 val from dual union all
  select 3 val from dual)
SELECT CASE 0
         WHEN 0 THEN 'A'
         WHEN SUM(val) THEN 'B'
       END AS c
FROM tbl;
Result (3 rows)
A
A
A
with tbl as (
  select 1 val from dual union all
  select 2 val from dual union all
  select 3 val from dual)
SELECT CASE 6
         WHEN 0 THEN 'A'
         WHEN SUM(val) THEN 'B'
       END AS c
FROM tbl;
Result (only 1 row)
B
So 3 rows are returned if we check against 0 and 1 row is returned if we check against 6.
This is a side effect of two rules.
Rule 1: A select with an aggregate function does not need a group by clause; it is then guaranteed to return a single row.
Rule 2: CASE expressions use short-circuit evaluation.
In the first example WHEN 0 matches immediately, so sum(val) is never evaluated and no aggregation takes place.
See also this otn thread where the situation was discussed.
I tested the behaviour in 220.127.116.11 SE and in 18.104.22.168 EE.
I also think this should be treated as a bug. Small changes as this to the select clause should not influence the number of rows returned.
3. Default behaviour for windowing clause in analytic functions
This is something I learned from the great Kim Berg Hansen (@Kibeha).
The default windowing clause is "RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW". This can sometimes lead to surprising results. In most cases we should switch to ROWS BETWEEN. It is something a developer needs to be aware of.
From the SQL Reference, Analytic Functions:
Whenever the order_by_clause results in identical values for multiple rows, the function behaves as follows:
CUME_DIST, DENSE_RANK, NTILE, PERCENT_RANK, and RANK return the same result for each of the rows.
ROW_NUMBER assigns each row a distinct value even if there is a tie based on the order_by_clause. The value is based on the order in which the row is processed, which may be nondeterministic if the ORDER BY does not guarantee a total ordering.
For all other analytic functions, the result depends on the window specification. If you specify a logical window with the RANGE keyword, then the function returns the same result for each of the rows. If you specify a physical window with the ROWS keyword, then the result is nondeterministic.
SUM is one of those “other” analytic functions.
Consider the following example. We have a table with a list of transactions. And we want to see the transaction value but also a cumulative sum for those values.
with testdata as (
  select 1 trans_id,  10 transaction_value, trunc(sysdate-10) transaction_day from dual union all
  select 2 trans_id,  20 transaction_value, trunc(sysdate-8)  transaction_day from dual union all
  select 3 trans_id, -10 transaction_value, trunc(sysdate-2)  transaction_day from dual union all
  select 4 trans_id,  30 transaction_value, trunc(sysdate-2)  transaction_day from dual union all
  select 5 trans_id, 100 transaction_value, trunc(sysdate)    transaction_day from dual
)
select trans_id,
       transaction_day   as trans_day,
       transaction_value as trans_value,
       sum(transaction_value) over (order by transaction_day) cumulative_sum
from testdata
order by trans_id;
TRANS_ID  TRANS_DAY  TRANS_VALUE  CUMULATIVE_SUM
       1  24.04.18            10              10
       2  26.04.18            20              30
       3  02.05.18           -10              50
       4  02.05.18            30              50
       5  04.05.18           100             150
As you can see, transactions 3 and 4 have the same cumulative sum. The reason is that our order criteria in the analytic window function does not separate those two rows.
There are two possible solutions. Either make sure that the order is not ambiguous, or use "rows between".
with testdata as (
  select 1 trans_id,  10 transaction_value, trunc(sysdate-10) transaction_day from dual union all
  select 2 trans_id,  20 transaction_value, trunc(sysdate-8)  transaction_day from dual union all
  select 3 trans_id, -10 transaction_value, trunc(sysdate-2)  transaction_day from dual union all
  select 4 trans_id,  30 transaction_value, trunc(sysdate-2)  transaction_day from dual union all
  select 5 trans_id, 100 transaction_value, trunc(sysdate)    transaction_day from dual
)
select trans_id,
       transaction_day   as trans_day,
       transaction_value as trans_value,
       sum(transaction_value) over (order by transaction_day
           rows between unbounded preceding and current row) cumulative_sum
from testdata
order by trans_id;
TRANS_ID  TRANS_DAY  TRANS_VALUE  CUMULATIVE_SUM
       1  24.04.18            10              10
       2  26.04.18            20              30
       3  02.05.18           -10              20
       4  02.05.18            30              50
       5  04.05.18           100             150
2. batched commits
The performance of many small commits can be improved when doing batched commits.
Instead of writing

commit;

we can do

commit work write batch;
Here are two real world examples where this was tested.
a) I recommended using batched commits to a colleague of mine, who tried to tune a set of java logic that ran in a highly parallel mode. The goal was to do 1 select + 2 inserts + 1 commit in 1000 parallel sessions per second.
Switching to batched commits was so hugely successful that they raised the performance requirement to 2500 concurrent sessions per second. Which also means the ball is now back with the java developers to come up with a better model for executing lots of small checks against the db.
b) I also tested batched commits in a different and more general context.
Most of our code has code instrumentation logic. That means we can turn on debugging with a certain trace level, and while the code is running a lot of tracing information is written into a logging table. The instrumentation call (like logger.log_trace) uses an autonomous transaction to do so. Essentially it is a single insert followed by a commit. That also means that a lot of commits are executed, which can put stress on the log writer and the storage system.
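Such a logging procedure could combine both techniques; the table trace_log and the procedure name are made up for illustration:

```sql
create or replace procedure log_trace(p_msg in varchar2)
as
  pragma autonomous_transaction;
begin
  insert into trace_log(log_time, message)
  values (systimestamp, p_msg);
  -- batched nowait commit: less pressure on the log writer,
  -- at the price of possibly losing this entry in a crash
  commit work write batch nowait;
end log_trace;
/
```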
So I compared what happens when we do a commit vs. a batched commit while writing lots of tracing data.
The batched commit was orders of magnitude faster than the normal commit.
I plan to write a separate article to show the exact measurements.
EDIT+UPDATE: I need to double check and retest this improvement. It is possible that other factors influenced the measurements. Like other processes that throttled the log writer or hardware changes to the underlying storage system. Also it seems as if plsql already does a batched nowait commit per default (https://docs.oracle.com/en/database/oracle/oracle-database/18/lnpls/static-sql.html#GUID-56EC1B31-CA06-4460-A098-49ABD4706B9C). It might depend on the database version, but since 12.2 it is now documented. I couldn’t confirm this, and my tests so far seem to indicate an improvement.
So what is the disadvantage? Why not always use batched commits?
To be clear: For normal situations stay with the normal commit. But if you run into issues where the log writer is not fast enough then this can be a possibility.
The drawback (as I understand it) is that in the case of a database crash you might not only lose the currently unfinished transactions, but also some transactions that were already committed but which the log writer didn't finalize yet. Typically all transactions from the last 3 seconds are at risk.
1. “CRASH” is a reserved plsql word
I have no idea why.