SQL Developer export wizard

Workaround for SQL Error: ORA-54013: INSERT operation disallowed on virtual columns


Unfortunately, when we export data using the SQL Developer export functionality, it also exports data for virtual columns. While this is certainly useful for spreadsheet exports, it is rarely sensible for insert statements.

Any insert into a virtual column fails with error ORA-54013.

SQL Error: ORA-54013: INSERT operation disallowed on virtual columns

54013. 0000 – “INSERT operation disallowed on virtual columns”
*Cause: Attempted to insert values into a virtual column
*Action: Re-issue the statement without providing values for a virtual column

One way to avoid this is to set the virtual column to invisible before the export is started. SQL Developer will not export data from invisible columns. But changing the data model in the source just to gain a little convenience is usually not the way to go. Remember that setting columns to invisible and visible again will move these columns to the end of the logical column list, which could be a problem if code depends on the column order (which it shouldn’t). Also, this requires exporting the data again.
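For reference, such a toggle would look like this for the person example table used below (invisible columns require Oracle 12c or later):

alter table person modify fullname invisible;
-- run the SQL Developer export now
alter table person modify fullname visible;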

If you already have an export file with a lot of inserts, the following dirty little trick might help you.

Temp column trick

We create a new real column, insert the data into this column and throw it away afterwards. The virtual column is temporarily moved out of the way while doing so.

Let’s assume we have a table person and a virtual column that concatenates the columns firstname and lastname into fullname.

create table person (id number, 
                     firstname varchar2(100), 
                     lastname varchar2(100), 
                     fullname as lastname||',  '||firstname);
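With this table in place, any insert that lists the virtual column reproduces the error (sample values are made up):

insert into person (id, firstname, lastname, fullname)
values (1, 'John', 'Doe', 'Doe,  John');
-- ORA-54013: INSERT operation disallowed on virtual columns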

We rename the target column and temporarily add a new column at the end of the table. The insert statements produced by SQL Developer include the column names, so they do not depend on column order.

alter table person rename fullname to "_FULLNAME";
alter table person add fullname VARCHAR2(1000);

Then run the insert scripts that were created by SQL Developer.


After the data has been inserted, restore the original column setup.

alter table person drop column fullname;
alter table person rename "_FULLNAME" to fullname;


And here are two selects that generate all those statements for a complete schema. The second select needs to run before the generated code from the first select is executed; otherwise the columns in the dictionary will already have been changed and the restore script would be wrong.

Preparation Script SQL

select 'alter table "'||table_name||'" rename "'||column_name||'" to "_'||column_name||'";'||chr(10)||
       'alter table "'||table_name||'" add "'||column_name||'" '||case when data_type in ('VARCHAR', 'VARCHAR2','CHAR','NCHAR','NVARCHAR2') then 'VARCHAR2(4000 byte)' else data_type end||';'
       as pre_insert_ddl
from user_tab_cols
where virtual_column='YES' 
--and user_generated='YES'
and hidden_column = 'NO';  

Restore Script SQL

select 'alter table "'||table_name||'" drop column "'||column_name||'";'||chr(10)||
'alter table "'||table_name||'" rename "_'||column_name||'" to "'||column_name||'";'
as post_insert_ddl
from user_tab_cols
where virtual_column='YES'
--and user_generated='YES'
and hidden_column = 'NO';

Restrictions and side effects

The scripts are not perfect. For example, if a column name is already at the maximum identifier length (e.g. 30 or 128 characters), then adding _ in front of the name will produce an error. This situation is very rare and is best handled by manually correcting the scripts.

Varchar2 columns are expected to be 4000 bytes or less, which might not be correct when the extended string size (max_string_size=EXTENDED) is used in newer DB versions.

Some special datatypes might need extra consideration. In general the datatype of the new column does not matter as long as it doesn’t produce an error during insert.

APEX Instrumentation and the SQL Developer

In general I instrument all my code, but usually my plsql code calls some framework like Logger. Because of that I rarely add extra APEX instrumentation code; any trace or debug information is already written into the logger table.

Sometimes it makes sense to add calls to apex_debug too. Especially if this part of the logic is checked frequently using the APEX built-in debug functionality.

APEX 19.1 developer toolbar

Debugging in APEX has a huge advantage. As a developer it is very easy to do and to access the output.

The apex_debug package has been available at least since APEX 5. Among other things it includes a procedure enter to store parameters. It is recommended to call it at the beginning of your modules and pass the combination of parameter name and value to the procedure.

Recently I added apex_debug.enter to several of my modules. Here are a few tricks I’d like to share.

Tip 1: Debug level 5

Apex_debug has several levels. The default level is 4 (info).

If you want to see the information stored with apex_debug.enter, you need to show at least level 5.

The level can be set in the URL. Instead of YES, set the debug flag to LEVEL5.
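The debug flag is the fifth element of the f?p URL (f?p=App:Page:Session:Request:Debug:…). A sketch with made-up application and page values:

f?p=100:1:&APP_SESSION.::LEVEL5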


In APEX I use the enter procedure to store page item values that are used inside plsql blocks, but also to see what is going on at the database level. Here is an example.

before header plsql process

The process first calls apex_debug.enter. Then the procedure pk_setze_defaults.prepare_topbar calls apex_debug.enter a second time and stores the supplied parameter values.

And this is what the debug output can look like:

Show debug

It is useful to see the state of page items at the time of processing. And it is even more interesting to see which procedures were called and which parameters were used.

Tip 2: avoid hardcoding the module name

The first parameter of the enter procedure is p_routine_name.

Instead of hardcoding the name we can call utl_call_stack to return the name of the module. You need at least database version 12c to use it.


Utl_call_stack.subprogram gives us access to the module names inside the call stack. Number 1 in the stack is always the current module. Subprogram returns a collection which holds the package name and the submodule name (sometimes multiple submodule names). The concatenate_subprogram function translates this collection into a readable string (divided by dots).
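A minimal sketch of how these two calls behave (the little helper procedure is made up for illustration):

declare
  procedure my_proc is
  begin
    -- subprogram(1) = qualified name of the current module
    dbms_output.put_line(
      utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(1)));
  end my_proc;
begin
  my_proc; -- prints something like __anonymous_block.MY_PROC
end;
/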

Example: Instead of hardcoding the module name ‘post_authenticate’

    apex_debug.enter(p_routine_name => 'post_authenticate'
                    ,p_name01 =>'p_user', p_value01 => p_user);

I use utl_call_stack to have the database fetch the module name at runtime

    apex_debug.enter(p_routine_name => utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(1))
                    ,p_name01 =>'p_user', p_value01 => p_user);

The result:

PK_APEX_UTIL.post_authenticate p_user=>Sven

The first part is the module name (including the package name), the second part is a list of parameter=>value combinations.

There are some side effects. The name is fetched at runtime instead of being decided at compile time [1]. In some circumstances (module inlining) a procedure can be rewritten by the plsql optimizer so that the name of the module disappears. Utl_call_stack would then return the name of the module that the code was inlined into.

The runtime call is also slightly slower than the literal value.

Inside a plsql process in APEX the name will be __anonymous_block. Which is correct: APEX executes those blocks using dbms_sql. The name of the process is not known inside the block. But it can be seen in the APEX debug view output one line before the anonymous block.

So the advantage of not hardcoding the module name must be weighed against these possible side effects.

If you want the procedure name, but not the package name, then the following code will help. It returns only the name of the current (innermost) submodule:
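A sketch of such a call: the last element of the qualified name collection is the innermost unit.

declare
  v_name utl_call_stack.unit_qualified_name;
begin
  v_name := utl_call_stack.subprogram(1);
  -- the last element of the qualified name is the innermost submodule
  dbms_output.put_line(v_name(v_name.count));
end;
/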


Tip 3: use the newest SQL Developer version (18.4)

SQL Developer 18.4 has the built-in ability to grey out instrumentation code. I like this feature a lot. It allows the trained developer’s eye to quickly scan plsql code, without resting too much on the less important bits and pieces.

What surprised me is that this also includes apex_debug.

Here is an example screenshot. Notice how the whole call to apex_debug is greyed out.

SQL Developer 18.4 – PL/SQL code colors

Other packages that are greyed out are dbms_output, log and logger.

And we can add our own instrumentation framework to it. Which leads me to tip 4.

Tip 4: configure SQL Developer – add your instrumentation framework

It is a bit hard to find, but the color rule PlSqlCustom2 is where we can add our own package. Search for “color” in the preferences to find the entry. In one of my projects the instrumentation package is called pk_logging, so I add it as the screenshot shows.

Tools / Preferences => Code Editor / PL/SQL Syntax colors

And this is how a sample result looks.

SQL Developer 18.4 – PL/SQL code colors – enhanced

Tip 5: use snippets

Snippets are a nice little feature in SQL Developer, and you can add your own useful code snippets. I added a snippet for apex_debug.enter.

SQL Developer – snippets view

Once implemented, I simply double-click the snippet and it adds the following code block.

    apex_debug.enter(p_routine_name => utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(1))

If you like, you can download this snippet from the SQL Developer Exchange platform (https://apex.oracle.com/pls/apex/f?p=43135:16:0::NO:RP,16:P16_ID:1141)

But it is easier just to create your own snippet and copy the code from here.


Make use of the developer tools you have. Use apex_debug, use SQL Developer. Think about your instrumentation framework and make it as easy to use as possible.

Further readings


1. In APEX compile time and runtime of a PLSQL process are almost identical. The PLSQL process is dynamically compiled at runtime.

Upgrade to SQL Developer 18.2 ?

Why upgrade to SQL Developer 18?

This post intends to help you decide whether it is about time to upgrade. And if you are on the new version already, you might find out about some new features that you haven’t noticed so far.

I recently upgraded from version 17.3 to 18.2.

Here is a list of my favourite changes. It includes new features, changed behaviours and fixed bugs in either version 18.1 or 18.2. It is totally subjective. The focus is on SQL Developer only; I didn’t analyze changes regarding SQL Data Modeler, sqlcl or ORDS.

Upgrade Now? => Yes!


If you want you can go through the various documents of new features and bugfixes yourself.

Top new features or bugfixes

New Welcome Page

It is very likely you disabled the welcome page in a previous version. Go to Help/Start Page to show it. Then you can decide to keep it around by checking the appropriate check box in the lower left corner.

In the past this welcome screen was essentially useless. Now it has a “RECENT” connections section. It holds the list of the five most recently opened connections, which I find myself using a lot.

The welcome screen still has a lot that can be improved (in red is what I dislike). But now it is a working and useful part of the tool.

Also interesting is the DEMO tab in the “Getting Started” section. Among other things it links to videos about “Debugging PLSQL”, “Importing Data from Excel” and “Introduction to Oracle SQLcl”.

Improved Editor / Syntax Highlighting

Syntax Highlighting is arguably one of the most crucial features of any IDE.
A lot of bugfixes have been done for that.
Here is a quick list of changes/bugfixes with regards to the editor or syntax highlighting. The list is not complete.

  • Improved Arbori Editor:
    • syntax error highlighting
    • catch syntax errors before Arbori execution
  • Change Case as You Type was removed from the Code Insight Preferences page. Use the Code Formatter to set your identifier and keyword preferences, and format your code on demand instead.
  • SQL Injection vulnerability warnings for your PL/SQL

Since I use conditional compiling a lot in PLSQL, the last fix is an essential one for me.

SQL Injection detection is a very promising feature. Especially when working with not so experienced developers. They easily might overlook some SQL injection issue and now the editor is able to warn about such scenarios. Jeff explains more about it here.

Btw: “Arbori” refers to the language used to describe parse trees during syntax analysis. Want to know more?

DBA Users Editor – SQL page now generates complete DDL for user including GRANTS

Way overdue. It happens a lot that you want a script to create the same schema that you have in some DEV environment, and now you want to script the user creation for TEST and potentially PROD environments.

In the past the grants were missing. A quick test of mine showed that all the grants that this schema needs seem to be there now: granted roles, object privileges, granted access to directory objects, granted system privileges. However, grants that are given to other users are not included! So it is still not complete.

And remember, there also is a “create like” feature in the DBA panel.


This “create like” does not include all the grants that are on the DBA Users Editor SQL page.


I’m not sure, but I think I hit this bug very often in the past. And then it was annoying as hell. However we can’t access the bug numbers, so I’m not sure if that fix covers the problematic behaviour that I remember. At least it looks as if it is fixed now. Will monitor it closely.

Improved Code Formatter

I’m not a big fan of code formatters, but I know they matter to many developers. The bugfix lists are full of minor changes to the formatter.

Two things I would like to point out:

  • Formatter: New Basic Preference: Convert Case Only

    This new setting means the code will not be reorganized; only the UPPER/lower case logic is applied. I personally like to have everything in lowercase (apart from some plsql method names where I use lowerCamelCase). So this setting allows me to quickly format SQL statements in the worksheet.


    I know that many developers like to format their SQL queries in such a way that the keywords are right-aligned. This is possible now.

    Example Result

    -- left aligned keywords
    select containerid
    from table1 t1
    left join table2 t2 on t1.id   = t2.id
    where barcode   = :barcode;

    -- right aligned keywords
    select containerid
      from table1 t1
      left join table2 t2 on t1.id   = t2.id
     where barcode   = :barcode;

Interesting new features or bugfixes

The following features are less decisive, so they might not be the sole reason why you want to upgrade. However, they might be useful to know and to eventually try out.

  • Can now convert Oracle to ANSI Joins with context menu

    This feature sounds very promising. Unfortunately I didn’t find the context menu yet that allows to do so. So far I tried it only on one database. Maybe the option uses functionality that requires a higher db version.

  • DBMS_XPLAN Added to Explain Drop Down toolbar menus
    The new option does not create a plan immediately. Instead it finds the sql_id and then produces a statement that reads the execution plan for that sql_id using DBMS_XPLAN.

    select * 
    from table(dbms_xplan.display_cursor(
          format=>'ALLSTATS LAST'));
  • DBMS_XPLAN is the best way to get a nicely formatted execution plan.

  • Enhanced DB Search for dependencies and selected object types

    It is not clear what was enhanced. The search menu certainly looks the same, and the search findings didn’t really seem to change either.

    In case you don’t know: the search does cover view sources. This was already implemented a few versions ago.

    I use the DB Search a lot. And I welcome any improvements in that area, even if I don’t see them at first glance.

  • ORDS 18.2 bundled

    Tools/REST Data Services


  • Extract variable/function prototype

    Refactoring code is something that SQL Developer has supported for many versions now. It is surprising that almost no developers know about it or use it frequently.

    This new option allows you to quickly change expressions into local functions. This can save an awful lot of coding time!


    Somewhere inside my plsql code I have a concatenation.

        if v_json is not null then
          p_plate.additional_info := '{'||v_json||'}';
        end if;  

    I mark the '{'||v_json||'}' part, open the right context menu, choose Refactoring / Extract Variable/Function/CTE and give the function a new name, for example make_json_object.


    The result is that my procedure got a local function. And the marked expression is replaced with a call to that function.

      --refactored function
      function make_json_object(v_json clob) return varchar2 is
      begin
        return '{'||v_json||'}';
      end make_json_object;

        if v_json is not null then
          p_plate.additional_info := make_json_object(v_json);
        end if;  

    The code would still require some rework, but it is a nice quick starting point.
    As always you can undo that change using CTRL-Z.

  • Allow explain/autotrace on selected text

    This one I like a lot. You can mark part of a bigger sql statement and get the explain plan for that part only. The marked code needs to be a valid SQL statement of course.

    Highly useful for with clause statements. But also for expanded SQL queries.


    The behaviour for drag and drop changed. I’m not so convinced about it, but I rarely use drag and drop from the grid.

    If we drag from a single varchar2 column, then all the values are copied, put in single quotes and separated by commas. However, if we copy from multiple columns, then the strings are not quoted. The quoting only happens during drag and drop, not when copying with CTRL+C, CTRL+V.

    I think this feature can be highly helpful, but it needs some getting used to.


There are several more changes, but those are the ones I am most interested in. As with any new toy we should check what is possible and how we can use it to improve our coding experience.

Several of the new features are so convincing to me that I installed SQL Developer 18.2 now in all projects that I supervise.

Feel free to share what new features or bugfixes you like most!

SQL parse time issue – use “minified hinted outlines”

problem description

Recently I encountered a simple looking query that returned a very small result set, but took more than 5 seconds to execute. When executing it a second time it was fast (less than 0.01 seconds). When adding a comment, the execution time was bad again (5 seconds). Adding a comment forces the optimizer to parse the query again; however, cached blocks can be reused. So this simple test gives a good indication of where the time is spent.

The query looked simple at first, but turned out to be a complex set of views within views with nested subqueries, union all combinations and so on. However the execution plan was all nested loops, and the filter criteria were correctly pushed down through all the sub layers.


The problem was that the complex sql took a very long time to hard parse.
Executing the plan was fast, but creating the execution plan was eating up the time. The CBO had trouble finding the best execution plan. The more tables are joined, the more different join orders are possible and need to be considered. These are called permutations (see tuning section below). Evaluating them can take a considerably long time.



This was the query

      ,case when
         count(*) over (partition by r.streamid )                                        -- Number of wells
         =                                                                               -- equals
         count(case when r.status = 'SUCCESS' then 1 end) over (partition by r.streamid) -- number of successfull wells
         then 'FULFILLED'
         else 'NOT'
         end  as platestatus
from v_lnk_check_part_order_results r
where orderid = :orderid
order by streamid, labwareitemid;

Now how can we demonstrate that the parsing time is the problem?

The way to get a clear picture is to trace the statement and run a TKPROF report for it. This is often very hard to do for developers, because they do not have access to the OS and are not able, without DBA privileges, to enable and disable tracing and, above all, to run the TKPROF command. Here is a tutorial from John Watson that shows how to trace and use tkprof.

The way described at the beginning only gives us an indication. First run the statement several times until the timings are stable. This means the data blocks are cached in the buffer cache. Then slightly modify the statement, for example by adding a comment, adding a meaningless where condition (2=2) or removing one column from the output. This will lead to a new cursor and to a new hard parse. Run the same statement a second time and it should be fast again. The difference in execution time gives an approximation of the time needed to hard parse.
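As a sketch (v_complex and the bind are placeholders for your own statement):

-- 1st run: hard parse + execution
select count(*) from v_complex where orderid = :orderid;
-- identical text: the cursor is reused, no hard parse
select count(*) from v_complex where orderid = :orderid;
-- changed comment: new cursor, hard parse happens again
select /* force reparse */ count(*) from v_complex where orderid = :orderid;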

It is not perfect, but it is a solid “educated guess”. I didn’t see an easy way in SQL Developer to show those statistics using typical developer privileges only.

the solution

Adding the following hint to the select solved the issue

OSM@SEL$12 O@SEL$12 S@SEL$12 LI@SEL$12)

What this hint does and how to get it is explained below.

I call it the “minified hinted outline” approach. Although “extreme leading hinting” was a close runner-up.

tuning effort

We already identified hard parsing as the point where the CBO spends its time. More specifically, it is the point where the CBO creates multiple execution plans and compares them.

Source: Oracle Docs – Query Optimizer Concepts

To understand the problem that the CBO faces, here is a table that lists the number of potential table combinations the CBO needs to consider. The number of permutations grows factorially with the number of tables.

permutation considerations

# tables   join permutations to consider
 2         2! = 2 (table A => table B & table B => table A)
 3         3! = 6 (A=>B=>C & A=>C=>B & B=>A=>C & B=>C=>A & C=>A=>B & C=>B=>A)
 4         4! = 24
 5         5! = 120
 6         6! = 720
 7         7! = 5040
15         15! = 1,307,674,368,000

There is a maximum number of permutations that the CBO considers.
This maximum number is 2000 (the default since 11g). It can be influenced by the hidden parameter _optimizer_max_permutations. In older database versions this was a normal init.ora parameter, but it was deprecated in 9i and replaced by the “hidden” parameter (http://oracleinaction.com/undocumented-params-11g/). It might be that this maximum number applies per query block. The documentation is not perfectly clear about it. A query block can be an independent subquery, a non-merged/inline view or a WITH clause section. So for ten query blocks, the maximum number of permutations to consider might be 20000. I didn’t test that hypothesis, but there are some indications that this happened in this specific case.

As we can see from the list, this magical number of 2000 permutations is already exceeded when 7 tables are joined. After that the optimizer will stop looking at other combinations. Btw: that is why the table order in the FROM clause can still make a difference! So put the main tables at the beginning of the FROM clause, so that the CBO considers them correctly in its access path permutations.

general approaches

  • Stored outlines and SQL plan management
  • global table hints

stored outlines

A stored outline will store the execution plan for one specific SQL statement. Stored outlines are mostly used for plan stability; however, a nice side effect is that the hard parse phase is shortened when a stored outline is found.
A good introduction to stored outlines can be found in the 10g documentation: https://docs.oracle.com/cd/B19306_01/server.102/b14211/outlines.htm#i30836 . Stored outlines have been available since at least 9i. The modern replacement for a stored outline is a SQL plan baseline managed via SQL plan management (SPM).
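For reference, creating a stored outline for a single statement looks roughly like this (outline name and statement are made up; it requires the CREATE ANY OUTLINE privilege):

create or replace outline person_outline
for category default
on select * from person where id = 101;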

Further reading:
Oracle white paper: http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-sql-plan-mgmt-12c-1963237.pdf#22

Blog “Insight into the workings of the Optimizer”: https://blogs.oracle.com/optimizer/entry/how_to_use_sql_plan

global table hints

Essentially what stored outlines do is to set a couple of hints.

We can do this directly ourselves by using global table hints.
The normal way to provide a hint is to add it to the query block where it is relevant. But when views are involved, you probably won’t want to change the view definitions and clutter them with hints. First of all, the views might belong to some other schema. Also, they are used at different points in the application, so a hint that may be good for one specific SQL might not be good for another SQL using the same view.

Global table hints come to the rescue! A global table hint is able to “inject” a hint into the deeper layers of a view.
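A sketch of the syntax (view v_emp_dept and alias e are made up): the hint names the path view_alias.table_alias to reach inside the view.

-- force a full table scan on the table with alias e inside the view
select /*+ full(v.e) */ *
from v_emp_dept v
where v.deptno = 10;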

Quick and not so dirty solution

As already mentioned adding the following hint to the select solved the issue

OSM@SEL$12 O@SEL$12 S@SEL$12 LI@SEL$12)

All we need to get such a hint is clever usage of the Oracle SQL Developer.

how to

Using SQL Developer we can check the execution plan for our query (F10).
In the lower section of the plan there is a section “Other XML” and in this section is a long list of HINTs. This section can be exported.

OR we can use the explain plan command and after that do this:

select *
from table(dbms_xplan.display(
             format=>'BASIC ALIAS OUTLINE'));
“OUTLINE” will show the complete sql outline. Which is the sum of all hints that need to be applied to get this specific plan.
“ALIAS” will show the query block names that are used.
“BASIC” is just to focus on the relevant parts for this example.

A solid first test strategy is to copy and paste the complete outline section into the sql statement and see if it is now fast when using all the hints.

However this list is way too long, and we can eliminate most of the hints step by step. I would recommend keeping only the leading hints. Remember, we just want to reduce the complexity caused by the high number of table permutations. The leading hints (and a few others like merge/no_merge) are responsible for freezing the join orders. Make sure to get all leading hints. If one of them is missing, the whole order of execution might break.

If the query is still reasonably fast, then we can work with those hints only. In my case the execution time slightly increased to 0.15 seconds, which is still way below the full 5 seconds when calling it without a hint.

The result may look like this:


Ok, let’s look more closely at one of the leading hints that we have:

@SEL$5B04CACE is a named query block, where Oracle itself has chosen the name. We can see those query block names when using the “ALIAS” setting during explain plan creation. I believe those query block names could potentially change when using the same statement on a different database (e.g. moving the code from the DEV to the TEST DB). Therefore the goal is to get rid of those generated query block aliases.

SOA@SEL$13 is the table (or view) alias “SOA”, and SEL$13 is the 13th SELECT command in our query, including the outermost select call that we issue ourselves. So the table order is SOA before CE inside the query block “SEL$5B04CACE”.
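As an aside, if you control the statement you can also give query blocks stable names yourself with the qb_name hint, so a leading hint does not depend on generated names. A sketch (table names reused from the formatter example above):

select /*+ leading(@subq t2 t1) */ *
from (select /*+ qb_name(subq) */ t1.id
        from table1 t1
        join table2 t2 on t2.id = t1.id
     ) v;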

We can then combine all those leading hints into one single leading hint. The order is relevant. Sometimes it needs a little testing to find the proper order of the hints, because the query block names that served as starting points need to be removed when only one hint is used. In my case “@SEL$5461C9FE” was the outermost block (the select itself), so the leading tables that came after that block were the first ones to consider. This outermost block can be identified by studying the ALIAS section of the execution plan.


“Minified hinted outlines” are a quick solution for a very specific type of problem, typically when using complex queries on views. They are a quick alternative to stored outlines or SPM.


This solution freezes parts of the execution plan, but in a less invasive way than a real stored outline would. For example, we do not say which index to use; instead the CBO still decides about the best way to access a specific table. It is just the order of the tables and subquery blocks that is hinted, and thereby frozen.

If the select or the views change, then the leading hint needs to be regenerated. However this concept worked successfully even when moving the query including the hint from a test to a production environment.

scheduled job on procedure with DATE parameters

Setting up a scheduled job in an Oracle database is really simple. Oracle SQL Developer helps a lot with this. It has a nice configuration wizard that creates all the needed calls to DBMS_SCHEDULER for you.

But certain small issues can still be a hassle, for example if you want to run a procedure with parameters. Number and string parameters can be passed using DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE. If you use SQL Developer to create the job for you, then it will use this procedure for all types of parameters.

That means you can’t easily pass sysdate, because it would simply be passed as a string.

This is the code that SQL Developer generates:

dbms_scheduler.set_job_argument_value(
  job_name => '"MYSCHEMA"."myScheduledJob"',
  argument_position => 1,
  argument_value => '10');

dbms_scheduler.set_job_argument_value(
  job_name => '"MYSCHEMA"."myScheduledJob"',
  argument_position => 2,
  argument_value => 'sysdate');

dbms_scheduler.set_job_argument_value(
  job_name => '"MYSCHEMA"."myScheduledJob"',
  argument_position => 3,
  argument_value => 'false');

It is less well known that you can also pass other types of parameters using DBMS_SCHEDULER.SET_JOB_ANYDATA_VALUE, which accepts an anydata value. Anydata is like a placeholder for any scalar data type: you put in a value or expression and it stores both the value and the data type.

I always change the job_name parameter from the SQL Developer output and replace it with a variable.
Here is an example of a real job that I set up using a date parameter value (“in 10 minutes”).

-- create job
declare
  v_jobname     varchar2(30) := 'myScheduledJob';
  v_date_format varchar2(100);
begin
  select value
    into v_date_format
    from v$nls_parameters
   where parameter = 'NLS_DATE_FORMAT';

  dbms_scheduler.create_job(
    job_name            => v_jobname,
    job_type            => 'STORED_PROCEDURE',
    job_action          => 'MYSCHEMA.MY_LONGRUNNING_PROC', -- placeholder: the procedure to run
    number_of_arguments => 2,
    start_date          => TO_TIMESTAMP_TZ('2015-06-03 19:05:00 Europe/Berlin','YYYY-MM-DD HH24.MI.SS TZR'),
    --start_date        => systimestamp + interval '1' minute,
    repeat_interval     => NULL,
    --end_date          => TO_TIMESTAMP_TZ('2015-07-29 05:30:00 Europe/Berlin','YYYY-MM-DD HH24.MI.SS TZR'),
    job_class           => '"SYS"."DEFAULT_JOB_CLASS"',
    enabled             => FALSE,
    auto_drop           => FALSE,
    comments            => 'Do run something longrunning');

  dbms_scheduler.set_job_argument_value(
    job_name          => v_jobname,
    argument_position => 1,
    argument_value    => 10);

  dbms_scheduler.set_job_anydata_value(
    job_name          => v_jobname,
    argument_position => 2,
    argument_value    => sys.anydata.convertDate(sysdate + interval '10' min));

  dbms_scheduler.set_attribute(
    name      => v_jobname,
    attribute => 'logging_level',
    value     => DBMS_SCHEDULER.LOGGING_RUNS);

  dbms_scheduler.set_attribute(
    name      => v_jobname,
    attribute => 'max_run_duration',
    value     => INTERVAL '1' DAY);

  dbms_scheduler.set_attribute(
    name      => v_jobname,
    attribute => 'schedule_limit',
    value     => INTERVAL '1' DAY);

  dbms_scheduler.enable(name => v_jobname);
end;
/

The expression “anydata.convertDate(sysdate + interval '10' min)” was used to set the date parameter value.

Feel free to use that as a template for your own scheduled jobs.