How to set up APEX Developer Accounts on Oracle Cloud Infrastructure (free tier) – a suggestion

If you are an APEX developer who just wants to move to the cloud, there are some things you should be aware of. Here is my suggestion for the steps needed to “cloud enable” APEX developer accounts. This post is only about setting up an account, not about any nifty features to use.

Scenario: We are an admin who has already created a new APEX workspace and now wants to give some developers access to that workspace.

The actions were confirmed on the Oracle Cloud free tier featuring a 19c database and APEX 20.2. There is a good chance that some of the obstacles are removed in a newer version. At the time of writing it is possible to set up a 21c database on the Oracle free tier cloud.

Admin actions

1. create APEX developer account

Inside the workspace create a new APEX account.

Administration/Manage Users and Groups/Create User

Set it to developer. Set and remember the password. You will notice that when switching developer on, the “Require Change of Password on First Use” button is deactivated.

APEX developer and admin accounts in the cloud are created as database users (in uppercase). This has several major implications.

  • The developer can NOT log into the workspace and change the password. Admins can set a new password using the admin page. Trying to change the password the normal way (“change my password”) results in a success message, but the change is silently ignored. I assume this is the reason why the “Require Change of Password on First Use” button is deactivated.
  • The password rules of the database apply. On Autonomous Transaction Processing (ATP) the accounts use the default profile, which uses the CLOUD_VERIFY_FUNCTION:
https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/manage-user-profiles.html#GUID-BD3C5573-DF0E-4E44-919A-08A471462B87
  • Using the same developer name in different workspaces means the same password is used for all of them. I think this is a welcome feature, but it certainly is a change compared to what one is used to on other platforms, like on premises or on apex.oracle.com.
  • If the developer username has special characters in it, for example a dot or an @ as in “firstname.lastname@my.email”, then the database user later always needs to be put in double quotes: "firstname.lastname@my.email". Although that works, it prevents certain other options, like REST-enabling the account (ORA-20018: Schema or user name must be a simple unquoted literal).

    I strongly recommend using only letters, numbers and underscores for the name (simple literals). No dots, no @, no blanks, nor any other special characters. For the specific rules see the docs:

Nonquoted identifiers must begin with an alphabetic character from your database character set. …
Nonquoted identifiers can only contain alphanumeric characters from your database character set and the underscore (_), dollar sign ($), and pound sign (#). … Oracle strongly discourages you from using $ and # in nonquoted identifiers.

From https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/Database-Object-Names-and-Qualifiers.html#GUID-75337742-67FD-4EC0-985F-741C93D918DA
  • The option to create multiple users via one dialog does not work properly for developer accounts. Do not use this dialog! (It is possible to use it for end users.)

Side note: End users are not created as database users. This means we can change an end user password using the normal APEX logic. The other implications do not apply to them either.

2. enable the developer to connect with SQL Developer

enable connect

By default the created DB user cannot create a session. We need to grant that privilege, so he/she can connect and change the password.

grant create session to <developer>;

proxy connect

A developer usually needs full access to the application schema (with the tables in it) he/she is working with. To avoid distributing the application schema password, one can set up a proxy connection. This means the developer authenticates using his/her individual password, not the password of the application schema.

alter user <application_schema> grant connect through <developer>;

There are additional options possible, like setting an extra password for the proxy connect or allowing only specific roles. For the typical developer scenario those extra settings do not seem necessary.
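For illustration, a sketch of those optional settings (user and role names are made up):

-- optional: require an extra password for the proxy connect
alter user demodb grant connect through sven_dev authenticated using password;

-- optional: limit the proxy connection to a specific role
alter user demodb grant connect through sven_dev with role app_read_only;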

Side note: A proxy connection is what APEX also uses to connect to the schema using only the APEX_PUBLIC_USER.

send wallet

To access the cloud database from a local client a key is needed. Oracle provides this key in a PKCS#12 wallet inside a wallet.zip file. The admin can download this wallet from the database service console. This is a database instance wallet that is only valid for this specific database. There are also regional wallets that are valid for all databases inside a region.

Write down who you sent the zip to. This is important since the admin might want to rotate the wallet. I prefer long intervals for wallet rotation, but it should be rotated from time to time (for more information see: Rotate Wallets for Autonomous Database).

Make the developer understand that he/she is not allowed to give the wallet to other users/developers.

Side note: There is an API to handle wallets (https://docs.oracle.com/en-us/iaas/api/#/en/database/20160918/AutonomousDatabaseWallet/UpdateAutonomousDatabaseWallet). Using this API one could automatically rotate the wallet at defined intervals.

Developer actions

3. wallet handling

Store the wallet_INSTANCENAME.zip into some local folder.

In general there is no need to unzip the wallet file. A noticeable exception would be if you want to use edition based redefinition. I wrote about this before – see the EBR section below.

4. set up SQL Developer

Two connections are recommended.

One for the developer account itself. This one is needed to easily change the password. Additionally, most developers like to have an environment where they can try out stuff. Further privileges might be needed for that (create view, etc.).
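As a sketch, such extra grants could look like this (the user name SVEN_DEV and the quota are assumptions, adjust to your needs):

grant create table, create view, create procedure, create sequence to sven_dev;
alter user sven_dev quota 100m on data;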

And one connection for the application schema using a proxy connect.

Connecting to the cloud from a local (and recent) SQL Developer installation is simple. Choose “Cloud Wallet” as the connection type, point SQL Developer to the wallet zip file, choose the TNS alias (service) and you're done. On the free tier you should choose the “xxx_low” alias.

To create a proxy connect the name of the target schema (in my example DEMODB) needs to be configured under the tab “Proxy User”.

I also suggest adding this schema to the name of the connection. I use [schemaname] for that, because that is the syntax for proxy connects in SQL*Plus.

SQL> connect "SVEN.WELLER"[DEMODB]/mysecretPwd1111

5. change the password

There are multiple ways to reset a password using SQL Developer. The good thing is, they also work for cloud connects (a recent JDBC driver is needed for way a)).

a) Right-click on the connection and choose “Reset Password”. This also works for proxy connects and resets the password of the individual user.

b) Type the command “password” in a connected worksheet

This changes the password of the connected schema. For a normal connection without a proxy, that is the password of the connected user.

In case of a proxy connect it tries to change the password of the proxied target schema (DEMODB in my example).

Since the developer will not have this password, he/she will not be able to change it.

c) And it is possible to change the password using the ALTER USER command. Everybody can do this for their own password, even without enhanced privileges.

If the username has special characters in it – like in my case the dot – then do not forget to put double quotes (") around the name. Regardless of the special characters, the name will be in UPPERCASE. And of course the password rules apply.

In addition to the usual ALTER USER syntax, in the cloud environment the original password needs to be supplied too. Otherwise the command will result in the following error message:

ORA-28221: REPLACE not specified
*Cause: User is changing password but password verification function is turned on and the original password is not specified and the user does not have the alter user system privilege.
*Action: Supply the original password.

The solution is easy, just add the old password to the ALTER USER command. The complete syntax is

alter user <developerschema> 
     identified by <new password> 
     replace <old password>;
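For example, for the quoted username from the proxy connect example above (the passwords are made up):

alter user "SVEN.WELLER" 
     identified by "newSecretPwd2222" 
     replace "mysecretPwd1111";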

Conclusion

In the cloud each APEX developer is a database user. Treat it like one.

Jeff Smith shows a brand new option to manage DB users using SQL Developer web: https://www.thatjeffsmith.com/archive/2021/03/creating-users-granting-storage-quota-using-database-actions/

How to connect SQL Developer to an edition (EBR) on Autonomous Database (ATP)

On the Oracle Autonomous Database there are some restrictions on what is possible and what is not. In general, Edition Based Redefinition (EBR) can be used on Oracle Cloud Infrastructure. EBR is now part of Oracle's Maximum Availability Architecture (MAA). It is possible to enable schemas to use editions, to create editionable objects, editioning views and cross edition triggers.

One thing that doesn’t work on Autonomous Transaction Processing (ATP) – and I assume on the other Autonomous DBs as well – is creating a custom service using DBMS_SERVICE. This is not a feature directly needed for EBR, but such a service can be connected to a specific edition. When doing so, the service acts as a separate layer for all kinds of connects from the application or even from development tools like SQL Developer. This is super helpful. I wrote about this a couple of years ago: https://svenweller.wordpress.com/2015/06/16/working-with-editions-part-2/

For most tools like SQL*Plus or JDBC there are alternatives to specify an edition at connect time. Unfortunately not so for SQL Developer – until I found out how to make it work there.

Installation steps

Here are the steps to work around that problem. This assumes a recent version of SQL Developer (I think at least 19.x is needed; I tested it with 20.4). More specifically, at least the 18.3 JDBC driver is needed.

  1. Download the wallet zip file with the credentials from your cloud service control
  2. Unzip the wallet. It includes several files, among others the tnsnames.ora that defines the cloud connection strings.
  3. Create a connection in SQL Developer, but instead of a cloud connect to the zip file, use a custom JDBC connection
  4. The connect string needs the name of the alias as defined in the tnsnames.ora of the zip file and two custom parameters TNS_ADMIN and oracle.jdbc.editionName.
    Here is a template for the full string:
    jdbc:oracle:thin:@<tnsalias>?TNS_ADMIN=<path to the unzipped wallet folder>&oracle.jdbc.editionName=<Edition Name>

    If the path is a windows path, then the “\” needs to be doubled (escaped). For example TNS_ADMIN=”C:\\Users\\MyAccount\\Documents\\Tools\\sqldev connections\\Wallet_DEMOEBR”
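Putting it together, a hypothetical example (alias, path and edition name are assumptions):

jdbc:oracle:thin:@demoebr_low?TNS_ADMIN=C:\\Users\\MyAccount\\Documents\\Tools\\sqldev connections\\Wallet_DEMOEBR&oracle.jdbc.editionName=RELEASE_2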

Additional comments

The TNS_ADMIN parameter is a feature of the 18.3 JDBC thin driver (see What’s New in 19c and 18c JDBC and UCP). Using this parameter the sqlnet.ora file that otherwise defines the location of the wallet is not needed anymore. You also don’t need a TNS_ADMIN environment variable.

The new JDBC driver now has an ojdbc.properties file. However, it does not work to add oracle.jdbc.editionName to that file. The JDBC properties are needed before that file is read.

Using the same syntax (&parameter=value) you can add any jdbc property to the connection string.

We can also add custom tns aliases to the tnsnames.ora file in the wallet location. This could make sense, for example if we like to lessen the number of retries in case a connect is not possible.
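A sketch of such a custom alias – retry_count and retry_delay are the relevant parameters; host, service name and certificate DN are placeholders to be copied from the existing aliases in your tnsnames.ora:

demoebr_fewer_retries = (description=
  (retry_count=2)(retry_delay=1)
  (address=(protocol=tcps)(port=1522)(host=<host from your tnsnames.ora>))
  (connect_data=(service_name=<service from your tnsnames.ora>))
  (security=(ssl_server_cert_dn="<DN from your tnsnames.ora>")))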

Several alternative approaches did not work. For example using the ORA_EDITION environment variable: at least under Windows this was not picked up as a local environment parameter in combination with a cloud connect. It would have been a pretty bad solution anyway, because all connections would then be influenced by it.

One major difference compared to a service name: when using JDBC and the target edition does not exist, we get the error message “edition does not exist”, whereas with a service name the connection silently falls back to the default edition. This can be a good or a bad thing, depending on your specific needs. Personally I prefer to get the error message.

SQL Developer export wizard

workaround: SQL Error: ORA-54013: INSERT operation disallowed on virtual columns

Problem

Unfortunately, when we export data using the SQL Developer export functionality, it also exports data for virtual columns. While this is certainly useful for spreadsheet exports, it is rarely sensible for insert statements.

Any insert into a virtual column fails with error ORA-54013.

SQL Error: ORA-54013: INSERT operation disallowed on virtual columns

54013. 0000 – “INSERT operation disallowed on virtual columns”
*Cause: Attempted to insert values into a virtual column
*Action: Re-issue the statment without providing values for a virtual column

One way to avoid this is to set the virtual column to invisible before the export is started – SQL Developer does not export the data from invisible columns. But changing the data model in the source just to get a little more convenience is usually not the way to go. Remember that setting columns to invisible and visible again will move these columns to the end of the logical column list, which could be a problem if code depends on the column order (which it shouldn't). Also, this requires exporting the data again.
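For completeness, this is how toggling the visibility would look, using the person table from the example further down:

alter table person modify fullname invisible;
-- run the export now
alter table person modify fullname visible;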

If you already have an export file with a lot of inserts, the following dirty little trick might help you.

Temp column trick

We create a new real column, insert the data into this column and throw it away afterwards. The virtual column is temporarily moved out of the way while doing so.

Let's assume we have a table person and a virtual column that concatenates the columns firstname and lastname into fullname.

create table person (id number, 
                     firstname varchar2(100), 
                     lastname varchar2(100), 
                     fullname as lastname||',  '||firstname);
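For illustration, an insert like the ones SQL Developer generates (the values are made up) now fails:

-- fails with ORA-54013 because FULLNAME is a virtual column
insert into person (id, firstname, lastname, fullname)
values (1, 'John', 'Doe', 'Doe, John');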

We rename the target column and temporarily add a new real column at the end of the table. The insert statements produced by SQL Developer include the column names, so they do not depend on the column order.

alter table person rename fullname to "_FULLNAME";
alter table person add fullname VARCHAR2(1000);

Then run the insert scripts that were created by SQL Developer.

SET DEFINE OFF;
@PERSON_DATA_TABLE.sql

After the data was inserted, restore the old columns.

alter table person drop column fullname;
alter table person rename "_FULLNAME" to fullname;

Automation

And here are two selects that generate all those statements for a complete schema. Note that the second select needs to run before the generated code from the first select is executed; otherwise the columns in the dictionary have already been changed.

Preparation Script SQL

select 'alter table "'||table_name||'" rename "'||column_name||'" to "_'||column_name||'";'||chr(10)||
       'alter table "'||table_name||'" add "'||column_name||'" '||case when data_type in ('VARCHAR', 'VARCHAR2','CHAR','NCHAR','NVARCHAR2') then 'VARCHAR2(4000 byte)' else data_type end||';'
       as pre_insert_ddl
from user_tab_cols
where virtual_column='YES' 
--and user_generated='YES'
and hidden_column = 'NO';  

Restore Script SQL

select 'alter table "'||table_name||'" drop column "'||column_name||'";'||chr(10)||
'alter table "'||table_name||'" rename "_'||column_name||'" to "'||column_name||'";'
as post_insert_ddl
from user_tab_cols
where virtual_column='YES'
--and user_generated='YES'
and hidden_column = 'NO';

Restrictions and side effects

The scripts are not perfect. For example, if you have a column name that is already at maximum identifier length (e.g. 30 or 128 characters long), then adding _ in front of the column will produce an error. I think this situation is very rare and is best handled by manually correcting the scripts.

Varchar2 columns are expected to be 4000 bytes or less, which might not be correct when the extended string size (MAX_STRING_SIZE=EXTENDED) is used in newer DB versions.

Some special datatypes might need extra consideration. In general the datatype of the new column does not matter as long as it doesn’t produce an error during insert.

APEX Instrumentation and SQL Developer

In general I instrument all my code, but usually I have PL/SQL code that calls some framework like Logger. Because of that I rarely add extra APEX instrumentation code. Any trace or debug information is already written into the logger table.

Sometimes it makes sense to add calls to apex_debug too. Especially if this part of the logic is checked frequently using the APEX built-in debug functionality.

APEX 19.1 developer toolbar

Debugging in APEX has a huge advantage. As a developer it is very easy to do and to access the output.

The apex_debug package is available at least since APEX 5. Among others it includes a procedure enter to store parameters. It is recommended to call it at the beginning of your modules and pass the combinations of parameter name and value to it.

Recently I added apex_debug.enter to several of my modules. Here are a few tricks I’d like to share.

Tip 1: Debug level 5

Apex_debug has several levels. The default level is 4 (info).

If you want to see the information stored with apex_debug.enter, you need to show at least level 5.

The level can be set in the URL. Instead of YES, set the Debug parameter to LEVEL5.

f?p=App:Page:Session:Request:Debug:ClearCache:itemNames:itemValues:PrinterFriendly
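A hypothetical example for application 100, page 1 – the Debug parameter is the fifth positional parameter:

f?p=100:1:&APP_SESSION.::LEVEL5:::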

In APEX I use the enter procedure to store page item values that are used inside PL/SQL blocks, but also to see what is going on at the database level. Here is an example.

before header plsql process

The process first calls apex_debug.enter. Then the procedure pk_setze_defaults.prepare_topbar calls apex_debug.enter a second time and stores the supplied parameter values.

And this is how the View Debug output can look:

Show debug

Useful to see the state of page items at the time of processing. And even more interesting is to see which procedures were called and which parameters had been used.

Tip 2: avoid hardcoding the module name

The first parameter of the enter procedure is p_routine_name.

Instead of hardcoding the name, we can call utl_call_stack to return the name of the module. You need at least database version 12c to use it.

utl_call_stack.concatenate_subprogram( 
    utl_call_stack.subprogram(1))

Utl_call_stack.subprogram gives us access to the module names inside the call stack. Number 1 in the stack is always the current module. Subprogram returns a collection which holds the package name and the submodule name – sometimes multiple submodule names. The concatenate_subprogram function translates this collection into a readable string (separated by dots).

Example: Instead of hardcoding the module name ‘post_authenticate’

apex_debug.enter(p_routine_name=>'post_authenticate'
                ,p_name01 =>'p_user', p_value01 => p_user);


I use utl_call_stack to have the database fetch the module name at runtime

apex_debug.enter(p_routine_name=>utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(1))
                ,p_name01 =>'p_user', p_value01 => p_user);

The result:

PK_APEX_UTIL.post_authenticate p_user=>Sven

The first part is the module name (including the package name), the second part is a list of parameter=>value combinations.

There are some side effects to it. The name is fetched at runtime instead of being decided at compile time.¹ In some circumstances (module inlining) a procedure can be rewritten by the PL/SQL optimizer, so that the name of the module disappears. Utl_call_stack would then return the name of the module that the code was inlined into.

The runtime call is also slightly slower than the literal value.

Inside a PL/SQL process in APEX the name will be __anonymous_block. Which is correct: APEX executes those blocks using dbms_sql. The name of the process is not known inside the block, but it can be seen in the APEX debug view output one line before the anonymous block.

So the advantage of not hardcoding the module name must be weighed against these possible side effects.

If you want the procedure name, but not the package name, then the following code will help. It returns only the name of the current (innermost) submodule:

utl_call_stack.subprogram(1)(utl_call_stack.lexical_depth(1)+1)
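A minimal sketch that shows both expressions in action (the procedure name is made up):

create or replace procedure demo_whoami is
begin
  -- full path, e.g. DEMO_WHOAMI
  dbms_output.put_line(
      utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(1)));
  -- innermost submodule name only
  dbms_output.put_line(
      utl_call_stack.subprogram(1)(utl_call_stack.lexical_depth(1)+1));
end demo_whoami;
/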

Tip 3: use the newest SQL Developer version (18.4)

SQL Developer 18.4 has the built-in ability to grey out instrumentation code. I like this feature a lot. It allows the trained developer's eye to quickly scan PL/SQL code – without resting too much on the less important bits and pieces.

What surprised me is that this also includes apex_debug.

Here is an example screenshot. Notice how the whole call to apex_debug is greyed out.

SQL Developer 18.4 – PL/SQL code colors

Other packages that are greyed out are dbms_output, log and logger.

And we can add our own instrumentation framework to it. Which leads me to tip 4.

Tip 4: configure SQL Developer – add your instrumentation framework

It is a bit hard to find, but the color rule PlSqlCustom2 is where we can add our own package. Search for “color” in the preferences to find the entry. In one of my projects the instrumentation package is called pk_logging, so I add it like the screenshot shows.


Tools / Preferences => Code Editor / PL/SQL Syntax colors

And this is how the sample result looks.

SQL Developer 18.4 – PL/SQL code colors – enhanced

Tip 5: use snippets

Snippets are a nice little feature in SQL Developer. And you can add your own useful code snippets to it. I added a snippet for apex_debug.enter.

SQL Developer – snippets view

Once implemented, I simply double-click the snippet and it adds the following code block.

    apex_debug.enter(p_routine_name => utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(1))
                    ,p_name01=>'XXX',p_value01=>xxx
                    ,p_name02=>'YYY',p_value02=>yyy
                    ,p_name03=>'ZZZ',p_value03=>zzz
                    );

If you like, you can download this snippet from the SQL Developer Exchange platform (https://apex.oracle.com/pls/apex/f?p=43135:16:0::NO:RP,16:P16_ID:1141).

But it is easier just to create your own snippet and copy the code from here.

Conclusion

Make use of the developer tools you have. Use apex_debug, use SQL Developer. Think about your instrumentation framework. Make it as easy to use as possible.


Footnotes

1. In APEX, compile time and runtime of a PL/SQL process are almost identical: the PL/SQL process is dynamically compiled at runtime.

Upgrade to SQL Developer 18.2?

Why upgrade to SQL Developer 18?

This post intends to help you with the decision whether it is about time to upgrade. And if you are on the new version already, then you might find out about some new features that you have not noticed so far.

I recently upgraded from version 17.3 to 18.2.

Here is a listing of my favourite changes. It includes new features, changed behaviours, but also fixed bugs in either version 18.1 or 18.2. It is totally subjective. The focus is on SQL Developer only; I didn't analyze changes regarding SQL Data Modeler, SQLcl or ORDS.

TL;DR:
Upgrade now? => Yes!

Links

If you want you can go through the various documents of new features and bugfixes yourself.

Top new features or bugfixes

New Welcome Page

It is very likely you disabled the welcome page in a previous version. Go to Help/Start Page to show it. Then you can decide to keep it around by checking the appropriate check box in the lower left corner.

In the past this welcome screen was essentially useless. Now it has a “RECENT” connections section. It holds the list of the five most recently opened connections, which I find myself using a lot.

The welcome screen still has a lot that can be improved (in red is what I dislike). But now it is a working and useful part of the tool.
sqldev_182_welcome_screen_b

Also interesting is the DEMO tab in the “Getting Started” section. Among others it links to videos about “Debugging PLSQL”, “Importing Data from Excel” and “Introduction to Oracle SQLcl”.

Improved Editor / Syntax Highlighting

Syntax highlighting is arguably one of the most crucial features of any IDE.
A lot of bugfixes have been done for it.
Here is a quick list of changes/bugfixes with regard to the editor or syntax highlighting. The list is not complete.

  • Improved Arbori Editor:
    • syntax error highlighting
    • catch syntax errors before Arbori execution
  • Change Case as You Type was removed from the Code Insight Preferences page. Use the Code Formatter to set your identifier and keyword preferences, and format your code on demand instead.
  • SQL Injection vulnerability warnings for your PL/SQL
  • 27678178 PARSER DOESN’T LIKE NEW FOR CALLING A USERDEFINED CONSTRUCTOR IN A SQLBLOCK
  • 27718434 FORUM: SOME KEYWORDS ARE HIGHLIGHTED ONLY WHILE TYPING
  • 27803006 FORUM: BRACES () COLORING IS INTERMITTENT AND DIFFERENT FROM PREFERENCES
  • 27939025 FORUMS: BETTER PL/SQL SYNTAX HIGHLIGHTING/FORMATTER FOR CONDITIONAL COMPILATION SYNTAX

Since I use conditional compilation a lot in PL/SQL, the last fix is an essential one for me.

SQL Injection detection is a very promising feature. Especially when working with not so experienced developers. They easily might overlook some SQL injection issue and now the editor is able to warn about such scenarios. Jeff explains more about it here.

Btw: “Arbori” refers to the language used to describe parse trees during syntax analysis. Want to know more?

DBA Users Editor – SQL page now generates complete DDL for user including GRANTS

Way overdue. It happens a lot that you want a script that creates the same schema that you have in some DEV environment, because you want to script the user creation for TEST and potentially PROD environments.

In the past the grants had been missing. A quick test of mine showed that all the grants that this schema needs seem to be there now: granted roles, object privileges, granted access to directory objects, granted system privileges. However, grants that are given to other users are not included! So it is still not complete.

And remember, there also is a “create like” feature in the DBA panel.

sqldev_182_create_like

This “create like” does not include all the grants that are on the DBA Users Editor SQL page.

27169586 ALTER SCRIPT SCROLLS OFF PAGE, IMPOSSIBLE TO READ WITHOUT MUCH SCROLLING/RESIZE

I'm not sure, but I think I hit this bug very often in the past, and then it was annoying as hell. However we can't access the bug numbers, so I'm not sure if that fix covers the problematic behaviour that I remember. At least it looks as if it is fixed now. I will monitor it closely.

Improved Code Formatter

I'm not a big fan of code formatters. But I know they matter to many developers. The bugfix lists are full of minor changes to the formatter.

Two things I would like to point out:

  • Formatter: New Basic Preference: Convert Case Only

    This new setting means the code will not be reorganized. Only the UPPER/lower case logic is applied. I personally like to have everything in lowercase (apart from some PL/SQL method names where I use lowerCamelCase). So this setting allows me to quickly format SQL statements in the worksheet.
    sqldev_182_format_case_only

  • FORUM – ADDED FORMATTER OPTION: RIGHT-ALIGN QUERY KEYWORDS

    I know that many developers like to format their SQL queries in such a way that the keywords are right-aligned. This is possible now.

    sqldev_182_formatter_right_align
    Example Result

    -- left-aligned keywords
    select containerid
          ,status
          ,reason
          ,additional_info
    from table1 t1
    left join table2 t2 on t1.id   = t2.id
    where barcode   = :barcode;
    
     -- right-aligned keywords
    select containerid
          ,status
          ,reason
          ,additional_info
      from table1 t1
      left join table2 t2 on t1.id   = t2.id
     where barcode   = :barcode;
    

Interesting new features or bugfixes

The following features are less decisive, so they might not be the sole reason why you want to upgrade. However they might be useful to know and to try out eventually.

  • Can now convert Oracle to ANSI Joins with context menu

    This feature sounds very promising. Unfortunately I didn't find the context menu yet that allows this. So far I tried it only on an 11.2.0.4 DB. Maybe the option uses functionality that requires a higher DB version.

  • DBMS_XPLAN Added to Explain Drop Down toolbar menus
    sqldev_182_DBMS_XPLAN
    The new option does not create a plan immediately. Instead it finds the sql_id and then produces a statement that uses this sql_id to read the execution plan via DBMS_XPLAN.

    select * 
    from table(dbms_xplan.display_cursor(
          sql_id=>'d7yr3jw0rj963', 
          format=>'ALLSTATS LAST'));
    
  • DBMS_XPLAN is the best way to get a nicely formatted execution plan.

  • Enhanced DB Search for dependencies and selected object types

    It is not clear what was enhanced. The search menu certainly looks the same. Also the search findings didn’t really seem to change.

    In case you don't know: the search does cover view sources. This was already implemented a few versions ago.

    I use the DB Search a lot. And I welcome any improvements in that area, even if I don’t see them at first glance.

  • ORDS 18.2 bundled

    Tools/REST Data Services

    sqldev_182_ords_version

  • Extract variable/function prototype

    Refactoring code is something that SQL Developer has supported for many versions now. It is surprising that almost no developers know about it or use it frequently.

    This new option allows us to quickly change expressions into local functions. This can save an awful lot of coding time!

    Example

    Somewhere inside my plsql code I have a concatenation.

    ...
    begin
    ...
        if v_json is not null then
          p_plate.additional_info := '{'||v_json||'}';
        end if;  
    ...
    

    I mark the '{'||v_json||'}' part, open the context menu (right-click), choose Refactoring / Extract Variable/Function/CTE and give the function a new name, for example make_json_object.

    sqldev_182_refactor_expression

    The result is that my procedure got a local function. And the marked expression is replaced with a call to that function.

    ...
      --refactored function
      function make_json_object(v_json clob) return varchar2 is
      begin
        return '{'||v_json||'}';
      end make_json_object;
    begin
    ...
        if v_json is not null then
          p_plate.additional_info := make_json_object(v_json);
        end if;  
    ...
    

    The code would still require some rework, but it is a nice quick starting point.
    As always you can undo that change using CTRL-Z.

  • Allow explain/autotrace on selected text

    This I like a lot. You can mark a part of a bigger SQL statement and get the explain plan for that part only. The marked code needs to be a valid SQL statement, of course.

    Highly useful for with clause statements. But also for expanded SQL queries.

  • 27962558 DRAG AND DROP FROM GRID TO EDITOR SHOULD QUOTE STRINGS

    The behaviour of drag and drop changed. I'm not so convinced about it, but I rarely use drag and drop from the grid.

    If we drag from a single varchar2 column, then all the values are copied, put in single quotes and separated by commas.
    However if we copy from multiple columns, then the strings are not quoted.
    The quoting only happens during drag and drop, not when copying with CTRL+C, CTRL+V.

    I think this feature can be highly helpful, but it needs some getting used to.

Conclusion

There are several more changes, but those are the ones I am most interested in. As with any new toy, we should check what is possible and how we can use it to improve our coding experience.

Several of the new features are so convincing to me that I installed SQL Developer 18.2 now in all projects that I supervise.

Feel free to share what new features or bugfixes you like most!

SQL parse time issue – use “minified hinted outlines”

problem description

Recently I encountered a simple looking query that returned a very small result set but took more than 5 seconds to execute. When executing it a second time it was fast (less than 0.01 seconds). After adding a comment the execution time was bad again (5 seconds). Adding a comment forces the optimizer to parse the query again, while cached data blocks can still be reused. So this simple test gives a good indication where the time is spent.

The query looked simple at first, but turned out to be a complex set of views within views, with nested subqueries, union all combinations and so on. However the execution plan was all nested loops, and the filter criteria were correctly pushed down through all the sub layers.

reason

The problem was that the complex SQL took a very long time to hard parse.
Executing the plan was fast, but creating the execution plan was eating up the time. The CBO had trouble finding the best execution plan. The more tables are joined, the more different join orders are possible and need to be considered. These are called permutations (see the tuning section below). Evaluating them can take a considerably long time.

 

example

This was the query

select r.labwarebarcode
      ,r.orderid
      ,r.orderlineitemid
      ,r.streamid
      ,r.status
      ,r.lineitemcomment
      ,r.bat_identifier
      ,r.labwareitemid
      ,r.xposition
      ,r.yposition
      ,r.layoutid
      ,case when
         count(*) over (partition by r.streamid )                                        -- Number of wells
         =                                                                               -- equals
         count(case when r.status = 'SUCCESS' then 1 end) over (partition by r.streamid) -- number of successfull wells
         then 'FULFILLED'
         else 'NOT'
         end  as platestatus
from v_lnk_check_part_order_results r
where orderid = :orderid
order by streamid, labwareitemid;

Now how can we demonstrate that the parsing time is the problem?

The way to get a clear picture is to trace your statement and run a TKPROF report for it. This is often very hard to do for developers, because they do not have access to the OS and, without DBA privileges, are not able to enable and disable tracing, let alone run the TKPROF command. Here is a tutorial from John Watson that shows how to trace and use tkprof.

The way described at the beginning only gives us an indication. First run the statement several times until the timings are stable. This means the data blocks are cached in the buffer cache. Then slightly modify the statement, for example by adding a comment, adding a meaningless where condition (2=2) or removing one column from the output. This leads to a new cursor and to a new hard parse. Run the same statement a second time and it should be fast again. The difference in execution time gives an approximation of the time needed to hard parse.

It is not perfect, but it is a solid “educated guess”. I didn't see an easy way in SQL Developer to show those statistics using typical developer privileges only.
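If you do happen to have select privileges on the v$ views – an assumption that is often not true for developers – the session time model gives a more direct look at the parse times:

select stat_name, round(value/1e6, 2) as seconds
from v$sess_time_model
where sid = sys_context('userenv', 'sid')
and stat_name in ('parse time elapsed', 'hard parse elapsed time');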

the solution

Adding the following hint to the select solved the issue

/*+
LEADING(OLIS@SEL$11 ERRORS@SEL$11 WFI@SEL$2 EXTRAINFO@SEL$11 ILI@SEL$19 PL@SEL$27 ICH@SEL$17 SOL@SEL$25 BAT@SEL$21 CPD@SEL$23
CWFI@SEL$3 SOA@SEL$4 LI@SEL$3
POS@SEL$8 SOA@SEL$7 CWFI@SEL$6 LI@SEL$6
SOA@SEL$13 CE@SEL$13
OSM@SEL$12 O@SEL$12 S@SEL$12 LI@SEL$12)
ALL_ROWS
*/

What this hint does and how to get it is explained below.

I call it the “minified hinted outline” approach. Although “extreme lead hinting” was a close runner-up.

tuning effort

We already identified hard parsing as the point where the CBO spends its time. More specifically, it is the point where the CBO creates multiple execution plans and compares them.


Source: Oracle Docs – Query Optimizer Concepts

To understand the problem that the CBO faces, here is a table that lists the number of potential table combinations that the CBO needs to consider. The number of permutations goes up factorially with the number of tables – even faster than exponentially.

permutation considerations

# tables   join permutations to consider
 2         2! = 2 (A=>B, B=>A)
 3         3! = 6 (A=>B=>C, A=>C=>B, B=>A=>C, B=>C=>A, C=>A=>B, C=>B=>A)
 4         4! = 24
 5         5! = 120
 6         6! = 720
 7         7! = 5040
15         15! = 1,307,674,368,000

There is a maximum number of permutations that the CBO considers.
This maximum is 2000 (the default since 11g). It can be influenced by the hidden parameter _optimizer_max_permutations. In older database versions this was a normal init.ora parameter, but it was deprecated in 9i and replaced by the “hidden” parameter (http://oracleinaction.com/undocumented-params-11g/). It might be that this maximum number applies per query block; the documentation is not perfectly clear about it. A query block can be an independent subquery, a non-merged/inline view or a WITH clause section. So for ten query blocks, the maximum number of permutations to consider might be 20000. I didn't test that hypothesis, but there are some indications that this happened in this specific case.

As we can see from the list, this magical number of 2000 permutations is already reached when 7 tables are joined. After that the optimizer stops looking at other combinations. Btw: that is why the table order in the FROM clause can still make a difference! So put the main tables at the beginning of the FROM clause, so that the CBO considers them correctly in its access path permutations.

general approaches

  • Stored outlines and SQL plan management
  • global table hints

stored outlines

A stored outline stores the execution plan for one specific SQL statement. Stored outlines are mostly used for plan stability; a nice side effect is that the hard parse phase is shortened when a stored outline is found.
A good introduction about stored outlines can be found in the 10g documentation: https://docs.oracle.com/cd/B19306_01/server.102/b14211/outlines.htm#i30836. Stored outlines are available since at least 9i. The modern version of a stored outline is a SQL plan baseline together with SQL plan management (SPM).

Further reading:
Oracle white paper: http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-sql-plan-mgmt-12c-1963237.pdf#22

Blog “Insight into the workings of the Optimizer”: https://blogs.oracle.com/optimizer/entry/how_to_use_sql_plan

global table hints

Essentially what stored outlines do is to set a couple of hints.

We can do this directly ourselves by using global table hints.
The normal way to provide a hint is to add it to the query block where it is relevant. But when views are involved, you probably won't want to change the view definitions and clutter them with hints. First of all, the views might belong to some other schema. Also they are used at different points in the application, so a hint that may be good for one specific SQL might not be good for another SQL using the same view.

Global table hints come to the rescue! A global table hint is able to “inject” a hint into the deeper layers of a view.
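A minimal sketch of the dot notation (view and alias names are made up): the hint in the outermost query addresses the table alias oli inside the view v_orders.

select /*+ full(o.oli) */ *
from v_orders o
where o.orderid = :orderid;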

Quick and not so dirty solution

As already mentioned adding the following hint to the select solved the issue

/*+
LEADING(OLIS@SEL$11 ERRORS@SEL$11 WFI@SEL$2 EXTRAINFO@SEL$11 ILI@SEL$19 PL@SEL$27 ICH@SEL$17 SOL@SEL$25 BAT@SEL$21 CPD@SEL$23
CWFI@SEL$3 SOA@SEL$4 LI@SEL$3
POS@SEL$8 SOA@SEL$7 CWFI@SEL$6 LI@SEL$6
SOA@SEL$13 CE@SEL$13
OSM@SEL$12 O@SEL$12 S@SEL$12 LI@SEL$12)
ALL_ROWS
*/

All we need to get such a hint is clever usage of Oracle SQL Developer.

how to

Using SQL Developer we can check the execution plan for our query (F10).
In the lower section of the plan there is a section “Other XML”, and in this section is a long list of hints. This section can be exported.

Or we can use the explain plan command and after that do this:

select *
from table(dbms_xplan.display(
             format=>'BASIC ALIAS OUTLINE'
          ));

“OUTLINE” will show the complete sql outline. Which is the sum of all hints that need to be applied to get this specific plan.
“ALIAS” will show the query block names that are used.
“BASIC” is just to focus on the relevant parts for this example.

A solid first test strategy is to copy and paste the complete outline section into the sql statement and see if it is now fast when using all the hints.

However this list is way too long, and we can eliminate most hints step by step. I would recommend keeping only the LEADING hints. Remember, we just want to reduce the complexity caused by the high number of table permutations. The LEADING hints (and a few others like MERGE/NO_MERGE) are responsible for freezing the join orders. Make sure to get all LEADING hints – if one of them is missing, the whole order of execution might break.

If the query is still reasonably fast, then we can work with them only. In my case the execution time slightly increased to 0.15 seconds. Which is still way below the full 5 seconds when calling it without a hint.

The result may look like this:

LEADING(@SEL$6A1B73E7 CWFI@SEL$3 SOA@SEL$4 LI@SEL$3)
LEADING(@SEL$AB668856 POS@SEL$8 SOA@SEL$7 CWFI@SEL$6 LI@SEL$6)
LEADING(@SEL$5B04CACE SOA@SEL$13 CE@SEL$13)
LEADING(@SEL$FC92CB92 OSM@SEL$12 O@SEL$12 S@SEL$12 LI@SEL$12)
LEADING(@SEL$5461C9FE OLIS@SEL$11 ERRORS@SEL$11 WFI@SEL$2 EXTRAINFO@SEL$11 ILI@SEL$19 PL@SEL$27 ICH@SEL$17 SOL@SEL$25 BAT@SEL$21 CPD@SEL$23)
ALL_ROWS

Ok, let's look more closely at one of the leading hints that we have:
LEADING(@SEL$5B04CACE SOA@SEL$13 CE@SEL$13)

@SEL$5B04CACE is a named query block, where Oracle itself has chosen the name. We can see those query block names when using the “ALIAS” setting during explain plan creation. I believe those query block names could potentially change when using the same statement on a different database (e.g. moving the code from the DEV to the TEST DB). Therefore the goal is to get rid of those aliases for query blocks.

SOA@SEL$13 is the table (or view) alias “SOA”, and SEL$13 is the 13th SELECT command in our query, counting the outermost select call that we issue ourselves. So the table order is SOA before CE inside the query block “SEL$5B04CACE”.

We can then combine all those LEADING hints into one single LEADING hint. The order is relevant. Sometimes it needs a little testing to find the proper order of the hints. This is because the starting points, the query block names, need to be removed when only one hint is used. In my case “@SEL$5461C9FE” was the outermost block (the select itself), so the leading tables that came after that block were the first ones to consider. This outermost block can be identified by studying the ALIAS section of the execution plan.

Conclusion

“Minified hinted outlines” are a quick solution for a very specific type of problem. Typically when using complex queries on views we might want to consider such a solution. It is a quick alternative to stored outlines or SPM.

drawbacks

This solution freezes parts of the execution plan, but in a less invasive way than a real stored outline would. For example, we do not say which index to use; instead the CBO still decides about the best way to access a specific table. It is just the order of the tables and subquery blocks that is hinted and thereby frozen.

If the select or the views change, then the leading hint needs to be regenerated. However this concept worked successfully even when moving the query including the hint from a test to a production environment.

scheduled job on procedure with DATE parameters

Setting up a scheduled job in an Oracle database is really simple. Oracle SQL Developer helps a lot with this. It has a nice configuration wizard that creates all the needed calls to DBMS_SCHEDULER for you.
scheduler_date_1

But certain small issues can still be a hassle. For example if you want to run a procedure with parameters. Number and string parameters can be passed using DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE. If you use SQL Developer to create the job for you, then it will use this procedure for all types of parameters.

That means you can't easily pass sysdate, because it is simply passed as a string.

This is the code that SQL Developer generates.

SYS.DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(
    job_name          => '"MYSCHEMA"."myScheduledJob"',
    argument_position => 1,
    argument_value    => '10');
SYS.DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(
    job_name          => '"MYSCHEMA"."myScheduledJob"',
    argument_position => 2,
    argument_value    => 'sysdate');
SYS.DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(
    job_name          => '"MYSCHEMA"."myScheduledJob"',
    argument_position => 3,
    argument_value    => 'false');

It is less known that you can also pass other types of parameters using DBMS_SCHEDULER.SET_JOB_ANYDATA_VALUE. It accepts an ANYDATA value. Anydata is like a placeholder for any scalar data type: you put a certain value or expression in, and it “stores” the value together with its data type.

I always change the job_name parameter that SQL Developer generates and replace it with a variable.
Here is an example of a real job that I set up using a date parameter value (value = “in 10 minutes”).


-- create job
declare
  v_jobname varchar2(30) := 'myScheduledJob';
begin
  sys.dbms_scheduler.create_job (
    job_name            => v_jobname,
    job_type            => 'STORED_PROCEDURE',
    job_action          => 'MYSCHEMA.MYPACKAGE.RUN_BATCH',
    number_of_arguments => 2,
    start_date          => to_timestamp_tz('2015-06-03 19:05:00 Europe/Berlin','YYYY-MM-DD HH24:MI:SS TZR'),
    --start_date          => systimestamp + interval '1' minute,
    repeat_interval     => null,
    --end_date            => to_timestamp_tz('2015-07-29 05:30:00 Europe/Berlin','YYYY-MM-DD HH24:MI:SS TZR'),
    job_class           => '"SYS"."DEFAULT_JOB_CLASS"',
    enabled             => false,
    auto_drop           => false,
    comments            => 'Do run something longrunning');

  -- number parameter
  sys.dbms_scheduler.set_job_argument_value(
    job_name          => v_jobname,
    argument_position => 1,
    argument_value    => 10);

  -- date parameter ("in 10 minutes"), passed as anydata
  sys.dbms_scheduler.set_job_anydata_value(
    job_name          => v_jobname,
    argument_position => 2,
    argument_value    => sys.anydata.convertDate(sysdate + interval '10' minute));

  sys.dbms_scheduler.set_attribute(
    name      => v_jobname,
    attribute => 'logging_level',
    value     => dbms_scheduler.logging_runs);

  sys.dbms_scheduler.set_attribute(
    name      => v_jobname,
    attribute => 'max_run_duration',
    value     => interval '1' day);

  sys.dbms_scheduler.set_attribute(
    name      => v_jobname,
    attribute => 'schedule_limit',
    value     => interval '1' day);

  sys.dbms_scheduler.enable(name => v_jobname);
  commit;
end;
/

The expression “sys.anydata.convertDate(sysdate + interval '10' minute)” was used to set the parameter value.
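For completeness, a sketch of a matching procedure specification (the parameter names are assumptions) – the second argument simply arrives as a normal DATE:

create or replace package mypackage as
  procedure run_batch(p_batch_size in number, p_run_date in date);
end mypackage;
/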

Feel free to use that as a template for your own scheduled jobs.