The “EBnoR” Manifesto
Edition based not only redefinition
Author: Sven-Uwe Weller
ceo syntegris information solutions GmbH
Edition based redefinition
EBR – is not well known.
It is barely used.
Edition based not only redefinition
EBnoR – will change that.
EBR is a tool with unique possibilities.
EBnoR deals with cases beyond
the originally intended scope.
EBR is difficult to use.
EBnoR is easy to use.
EBnoR avoids complexities.
EBnoR is powered by the strengths of EBR.
Edition based redefinition is a unique feature of the Oracle database that has been available since version 11.2. It can be used at no additional license cost in all database editions (EE, SE, SE1, SE2, XE).
The term “edition” is used to describe a set of plsql based objects. Among those object types are plsql packages, views and synonyms. A complete list can be found in the Oracle documentation.
A different edition can hold different code for the same plsql object. In essence each edition resembles a version of your database application code. EBR allows you to store and run different versions of your application inside the database at the same time.
This opens up a set of very special possibilities that usually do not exist in other environments. EBR allows an upgrade to a new application release without shutting down the application. The new code version is installed in a new edition. Currently running sessions will still work with the old release. Deployment problems can be tested using the new release. Once those issues are solved, the new edition is made available (as the new default edition). Only when a user ends his (database) session and reconnects will the new code version from the new default edition be used.
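The basic mechanics can be sketched in a few statements. This is a minimal sketch; the edition, user and script names (release_1, release_2, app_user, install_release_2.sql) are illustrative:

```sql
-- Create a new edition (a child of the current default edition)
CREATE EDITION release_2;
GRANT USE ON EDITION release_2 TO app_user;

-- Install the new code version inside the new edition
ALTER SESSION SET EDITION = release_2;
@install_release_2.sql

-- After testing, make it the default for all new sessions
ALTER DATABASE DEFAULT EDITION = release_2;

-- After a grace period, retire the previous edition
-- (here assumed to be release_1)
DROP EDITION release_1 CASCADE;
```

Existing sessions keep running in the edition they connected with; only new sessions pick up the new default.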
distinction to VCS
EBR should not be confused with a version control system like SVN or GIT. Although there are some similarities, it serves a different purpose. A version control system supports the development and deployment process for a team of programmers. They can store and merge code there. Forks and code branches can be used to support development that temporarily goes into different directions. Code merges allow branches to be combined again. This is something that EBR cannot and will not do. EBR is about running the different code versions in parallel. Application end users profit from EBR, not developers. At best an edition resembles a release in the version control system.
Sven says: A VCS saves and documents code, EBR executes code.
versions of data
Data resides in tables. Tables are not editionable objects. That means two things: changes to table structures are not editionable directly, and the data itself is not subject to code release changes. That is a good thing. The data and the code that works with the data are separated.
Creating a new edition means that code is duplicated. Code can be duplicated easily, duplicating data is a much bigger issue.
Sometimes a code change requires a change to existing data. This increases the complexity of running the old and the new code version at the same time. EBR offers solutions using cross-edition triggers and editioning views. How to use cross edition triggers is described in Bryn Llewellyn’s excellent white paper from 2009 (http://www.oracle.com/technetwork/articles/grid/edition-based-redefinition-1-133045.pdf). Read and understand this paper first before you go on and read this manifesto.
“Large, mission critical applications built on Oracle Database 11g Release 1 and earlier versions are often unavailable for tens of hours while the application’s database objects are patched or upgraded. Oracle Database 11g Release 2 introduces edition-based redefinition, a revolutionary new capability that allows online application upgrade with uninterrupted availability of the application. When the installation of the upgrade is complete, the pre-upgrade application and the post-upgrade application can be used at the same time. Therefore an existing session can continue to use the pre-upgrade application until its user decides to end it; and all new sessions can use the post-upgrade application. As soon as no sessions are any longer using the pre-upgrade application, it can be retired. In other words, the application as a whole enjoys hot rollover from the pre-upgrade version to the post-upgrade version.
This whitepaper explains how edition-based redefinition works, and how to write online application upgrade scripts using this capability, at the level of detail needed by engineers who will write such scripts.”
the current status
EBR has been on the market for a long time now. Still, many developers and DBAs do not know about it. Those who do know about it are often not using it actively. I identified a set of problem areas that are responsible for this current status of EBR. I also see possible solutions. Using EBR, the focus should not only be on redefining the code; instead EBR can open up a set of new and unique possibilities. Zero downtime application upgrades are only one of them. I call the set of solutions “Edition based not only redefinition”, or EBnoR for short.
problem areas under the influence of intelligent people (developers and DBAs)
- the deployment dilemma
  - shortly before go-live it is too late to think about EBR
- the design change gamble
  - keep changes required by EBR at a minimum, design with EBR in mind
- the zero downtime promise
  - perfection – if even possible – is expensive; don’t go for perfection, almost perfect is good enough
- the cross edition complexity
  - cross edition triggers are needed for extreme cases only; huge effort for a tiny result
problem areas influencing decision makers (architects and project managers)
- the cross platform independency nonsense
  - EBR is only available for Oracle databases
- the marketing confusion
  - editions are not license editions
EBnoR offers solutions to each of these problem areas. Small sidesteps from Oracle’s recommended path lead to major improvements in development time and flexibility, and ultimately result in a quicker learning curve.
The “deployment dilemma”
The advantages of EBR shine during one of the later steps in the software development lifecycle – the deployment phase. The dilemma is that at such a late point the additional programming effort needed to use EBR competes with adding more features to or fixing bugs in the application.
Oracle propagates that the advantage of using EBR to do application upgrades with minimal downtime is so big that it is worth the additional effort and time. This is true in very rare cases only, and mostly only for very large companies that can afford to spend money on this additional effort.
The problem is the decision point. Budget is reserved for a specific change in one application. There never is enough budget. There never is additional budget. Especially not shortly before the next application upgrade goes live. At this point there is not enough budget and not enough time.
EBR is a tool, not an application itself. As such most companies hesitate to set up an extra project or a major software change just for using a tool.
The decision needs to be moved from the budget owners to the developers, and to an early point in the software life cycle. Only then will EBR find more adoption.
EBnoR: Start with EBR from day 1 in the project!
Use editions already during development!
The “design change gamble”
EBR requires some changes to the data design and to the schema.
One requirement is to add a view layer on top of all tables. Among other things this allows cross edition triggers to be used for data changes. This is certainly a solid approach; however, it is a major change to the data model. As such it will face criticism from various other parties involved.
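The view layer can be sketched as follows; table and column names are illustrative:

```sql
-- Rename the real table and put an editioning view with the old
-- name on top of it. Application code keeps using the old name.
ALTER TABLE employees RENAME TO employees_t;

CREATE OR REPLACE EDITIONING VIEW employees AS
SELECT emp_id, emp_name, hire_date
FROM   employees_t;
```

Each edition can then hold its own version of the view, exposing only the columns that code version knows about.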
Typical arguments that are raised against such a change are:
- Performance will suffer because of the view
- The views do not conform to our naming conventions
- The views hide complexity
- Some tools work (better) with tables instead of views
- Enabling EBR cannot be undone
The gamble is that EBR will take the blame if it is implemented at a final stage during development. If any issue arises, e.g. performance in the production environment drops, some will argue that the reason is the additional view layer – just because that was the only thing that was changed. It might be hard to prove otherwise as long as comparable performance data is missing.
Editions can be dropped. Switching on editioning in the database has no implications for how you use the feature, or whether you use it at all.
EBnoR: Avoid major design changes!
Use a view layer if possible. If not, still use EBR without this extra layer.
Test early while using editioning!
The “zero downtime promise”
Absolute zero downtime is very hard to reach. Even a simple ALTER TABLE statement will lock the full table. During that time the table is not available. Sessions will wait (which is just a different term for downtime) or even break.
Focus on changes that have a minimal impact on downtime. For example, adding a column is usually no problem. Even if the column is added to the old application without adding the new code, it should not break anything. Write your application in such a way that added columns will not break existing code. This is easy to do: if the column is mandatory, always provide a default value. Updating the data in this column might need more time. Dropping a column should be avoided, because this usually breaks old application versions.
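A minimal sketch of such a low-impact change; table and column names are illustrative (and as far as I know, since 11g adding a NOT NULL column with a default is a fast, metadata-only operation in Enterprise Edition):

```sql
-- Adding a mandatory column with a default does not break old code,
-- as long as existing INSERT statements name their columns.
ALTER TABLE orders ADD (
  delivery_status VARCHAR2(10) DEFAULT 'UNKNOWN' NOT NULL
);
```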
If the normal downtime window is a few days, then using EBR can shrink that window to less than an hour by restricting it to the table and data changes. For a normal user such a small downtime window is the same as zero; he will often not even notice such a short break. This does not mean that the new application version needs to be available after an hour, just that after an hour the user is able to continue working with the old version. After that downtime window the new edition can be installed and tested, and then a gradual switch-over of user sessions can be done.
Zero downtime is expensive. Even with EBR.
EBnoR: Aim for very low but not for zero downtime.
Don’t promise zero to users!
The “cross edition complexity”
Bryn’s paper describes how to use cross edition triggers to handle data changes between one application version and the next. There is a major intrinsic problem with that. Let us first investigate how cross edition triggers work.
Cross edition triggers allow data to be manipulated in one edition while transformation logic ensures that the same dataset is correctly represented in the next or previous edition.
- “A forward crossedition trigger is fired by application DML issued by sessions using the pre-upgrade edition. Such a trigger is used to implement transformations from the old representation forwards into the new representation.“
- “A reverse crossedition trigger is fired by application DML issued by sessions using the post-upgrade edition. Such a trigger is used to implement transformations from the new representation backwards into the old representation.”
The problem is not how to write a trigger, but in the complexity of forward and backward data manipulations.
Sometimes the data manipulation needed to go to the next application version is simple but cannot be automated easily. Manual data cleansing actions can be such a case.
Some scenarios do not have a simple 1:1 transformation rule between data in edition A and data in edition B. The forward transformation is usually part of the project. Very often it is much harder to provide the backward transformation. Sometimes this is only possible by keeping the old data structure, thereby duplicating and replicating data. This defies the original reason for the change.
The EBR solution is to keep this change only temporarily. Therefore old editions should be dropped very soon after the application has switched to the new code (and data) version.
To summarize the problems with cross edition triggers:
- They often require storing several versions of the same data, usually in different columns
- Data transformation rules need to be automated
- Huge additional effort that is not related to a better application
- Multiple editions require lots of cross edition triggers
EBnoR: Start slow and avoid cross edition data transformations!
This needs some explanation and examples.
For example, assume a design decision was made to trim the time component from all date values in a column, so that instead of a range comparison an equal comparison can be made.
Before:
dateCol >= trunc(:searchDate) and dateCol < trunc(:searchDate) + 1
After:
dateCol = trunc(:searchDate)
The typical change includes a DML statement that updates the column using TRUNC:
set dateCol = trunc(dateCol)
where dateCol != trunc(dateCol);
This change cannot be undone, because information (the time component) is lost.
So writing a reverse crossedition trigger that reproduces the exact same source data is difficult.
One solution could involve keeping the old column, including the time precision, and adding a second column that holds only the truncated date value. Then add (or change) an editioning view in the old and in the new edition that serves as an API to the relevant column. A forward crossedition trigger would truncate the data that is inserted using the old application version. But without knowing where the time component comes from, it would be impossible to write a reverse crossedition trigger that adds the time correctly. A default logic would be needed, maybe depending on some other values – like requiring the time to lie between the times of the previous and the following rows. So we would need several new objects and fairly complex code to do the reverse change, the data model would be more complex than with the simple update solution, and above all data would be duplicated.
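The forward direction of this solution could be sketched like this; all object names are illustrative, and the reverse trigger is deliberately left out because that is exactly the hard part:

```sql
-- Forward crossedition trigger: DML issued from sessions in the old
-- (pre-upgrade) edition keeps the new truncated column in sync.
CREATE OR REPLACE TRIGGER dates_fwd_xed
  BEFORE INSERT OR UPDATE ON dates_tab
  FOR EACH ROW
  FORWARD CROSSEDITION
BEGIN
  :new.date_col_trunc := trunc(:new.date_col);
END;
/
```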
But why care? In this specific example the old application code might still work, even if the time component is truncated. So there are two possible easy ways to avoid the complexity:
- run the update several times, for example after each batch load of new data
- add a normal db trigger in the old edition that truncates all newly inserted date values:
:new.dateCol := trunc(:new.dateCol);
Yes, this would require a change in the old edition and as such would change old application code. If this cannot be done because of organisational hurdles, then create an intermediate “preparation” edition and add the trigger there.
Consider a step by step approach. There are many situations out there where we can get away without implementing additional cross edition logic. Avoiding it helps to get EBR started and to become acquainted with the feature. Once we are used to it, we can add more – eventually even using cross edition logic for specific use cases.
EBnoR: Do not depend on editioning views, do not use cross edition triggers.
Avoid deleting objects, don’t hesitate to add objects (especially columns).
If you need to delete objects, this will influence all editions. After careful consideration just do it. After all, rules are there to be broken. Announce it as a major application version that will take a small downtime. Do not let the zero downtime paradigm make you write complex code. Complex code in the end costs more in terms of maintainability and testing effort.
learning curve considerations
A learning curve comparison shows how a reduced approach eases the first steps into the feature.
- plsql code versions: start using EBR during development. That way the developers get used to working with editions. Building a new release and deploying it in a specific edition is then nothing strange anymore. Do development and tests in the same database but in different editions. Each step in the development lifecycle deserves its own edition.
- design changes: avoid changes in the data model that would influence old editions in a negative way. For example avoid dropping columns. Also write your code in a way that it does not break when a new column is added to a table. Model your data first, before you start to build a new application.
- low or zero downtime: often a tiny downtime is ok. If the aim is to be fast, but not to be perfect, then the additional effort is suddenly much reduced. Do not aim for perfection!
Example: Adding a column to a table requires that the table is locked for a moment, especially if the new column is not null and is filled with some extra values. Other sessions might wait for this. If the table is very large, such a lock can take some time. If it takes a minute, that’s fine. It will not be zero downtime, but almost every project can afford to wait a minute during a new deployment. If it takes half a day, then you must invest effort to optimise it.
- cross edition data: avoid using cross edition triggers. They serve only a very specific, very special purpose. Build your data changes in a way that you do not depend on cross edition triggers. The need for cross edition triggers is the major contributor to the increased programming effort when EBR is used.
The “cross platform independency nonsense”
“EBR can not be used in our company because we want to stay database vendor independent”
Such an argument can often be heard from Java developers or other programmers outside of the database world.
You bought a Porsche 911 – an expensive but very fast car. However, you choose not to drive it in 5th and 6th gear, because there are other cars out there that do not have a 5th and 6th gear. Does this make sense?
In my opinion the claim for database independency is often made to hide the incompetence or inability of the developer to learn about the unique features of their specific database. In general it is easy to structure code in such a way that vendor specific features are encapsulated and properly modularized. This will make a future change to another database less problematic.
This is also true for EBR. If you decide to enable and use editions, you can do it in such a way that all the specific implementation details are hidden from the normal code. The developers and the DBAs should be “edition aware”. But after some initial setup there is no extra effort that EBR requires. In general you would simply use a connect string that connects to the edition of your choice. There is almost no need to consider editions directly in the code.
There are a few exceptions. For example, if you want to start a scheduled job, then you probably want to start that job in the same edition as the session that is currently running and creates that job. This is possible using a job class with a service that is tied to an edition. Or more generally: every time you have some kind of client access to your database, you want to make sure that you can influence the edition that is used at connection time.
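The job class case could be sketched like this; the service, edition and job class names are illustrative:

```sql
-- A service tied to an edition; jobs assigned to the job class
-- below will run their sessions in that edition.
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(
    service_name => 'app_r2_svc',
    network_name => 'app_r2_svc',
    edition      => 'RELEASE_2');
  DBMS_SERVICE.START_SERVICE('app_r2_svc');

  DBMS_SCHEDULER.CREATE_JOB_CLASS(
    job_class_name => 'R2_JOBS',
    service        => 'app_r2_svc');
END;
/
```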
EBnoR: Modularize and structure your code well!
Immediately counter the argument by using strong analogies.
The “marketing confusion”
I think the EBR name is suboptimal (to put it nicely). First of all the term “edition” is confusing, because it is also used as a name for database editions (EE, SE, XE). The R stands for “redefinition”. That term is a poor description of adding a new code release to your application.
I admit I’m not creative enough to suggest a better name, but as an analyst I’m able to pinpoint a flaw when I see one.
Nowadays every “thing” or “technology” has a catchy name and a funny logo. The name usually is also an abbreviation, and one could think this is only of minor importance. It is not. This is the name the feature/tool goes by. This name and logo are used in slides, presentations and social media channels.
Node JS packages: https://www.npmjs.com
Architecture components for Spark: https://www.linkedin.com/pulse/hadoop-summit-2015-takeaway-lambda-architecture-laurent-bride
Just look at all those small but important looking pics in this simple architecture diagram. It makes you feel as if you missed something important if you don’t know them all.
- EBR is hard to pronounce and to spell out.
- The term “edition” is already established in the Oracle namespace. And usually it stands close to “license costs”.
- A nice enchanting logo is missing.
- EBnoR / EbanoR = Edition based not only Redefinition
- PRETI = plsql runtime environment and test integration
- or something completely different = OSCD (could be in use already by some other organisation)
Oracle, get some logo wizards and marketing people to work on that!
The EBR feature allows you to do more than the standard model intended by the Oracle development team.
The standard model is
- install a new application version in a new edition in production
- test if the deployment went well
- switch the current edition to the new edition
- after a short grace period drop the old edition
During this whole time the impact on connected users is minimal.
This is great!
But there is more…
Using EBR in certain ways can open up new possibilities and ease the upgrade pain.
New possibilities include
restore and run old code
Assume a problem is reported where you suspect that the cause has meanwhile disappeared, but the problem existed in some older code version. Using EBR it is now possible to quickly prove that assumption. Create a new edition in the environment where you want to reproduce the issue (usually DEV, but it might be some integration or QA test database). Fetch the old plsql code from your versioning system and add it into this new edition.
Run the test in both editions to see if the problem occurs with the old code but not with the new code.
After that test, simply throw away the whole edition (drop edition cascade).
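A sketch of this workflow; the edition and script names are illustrative:

```sql
-- Reproduce an issue against an old code version
CREATE EDITION bug_repro;
ALTER SESSION SET EDITION = bug_repro;
@old_release_from_vcs.sql   -- old plsql code fetched from the VCS

-- run the test in this edition, compare with the current edition,
-- then clean up:
ALTER SESSION SET EDITION = ora$base;
DROP EDITION bug_repro CASCADE;
```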
develop and test in the same database
Build releases in ORA$BASE
=> test in edition TST$BETA
=> develop in edition DEV$ALPHA.
This is something I discovered while actively developing and working with EBR. It is one of the best things to do. It might even save license costs, because you can eliminate a complete test database (although some test databases do not need additional licenses). It does not mean you can skimp on tests. But certain types of tests can profit hugely from EBR. Testers use one edition, developers use another (child) edition. Developers can actively analyse bugs that testers found, without the need to move data from the test to the dev database. And developers can fix bugs without disturbing tests running in the same database.
Build the test environment automatically from your versioning system.
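The edition chain above could be set up like this; note that in 11.2 an edition has at most one child, so this forms a simple chain:

```sql
-- ORA$BASE -> TST$BETA -> DEV$ALPHA
CREATE EDITION tst$beta  AS CHILD OF ora$base;
CREATE EDITION dev$alpha AS CHILD OF tst$beta;
```

Code that is not changed in a child edition is inherited from its parent, so developers automatically see the tested release plus their own changes.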
provide backward compatibility
This is often useful for API development (see also ThickDB | SmartDB paradigm).
APIs provide a specific interface (usually views and plsql packages). APIs expose a limited set of objects and methods for a specific task. APIs do not expose data directly. This is where EBR shines: all those objects are editionable objects!
Often we have multiple different consumers using the same API. If the API functionality is enhanced, then we can provide the newer version of the API in a new edition. If we keep the old edition, instead of dropping it almost immediately, consumers can still use their old code without the need to immediately switch to the new version.
As long as cross edition triggers can be avoided, there are no major problems with keeping multiple API versions for a longer period of time.
Tip: All editioned objects should be compiled in each new edition, even if there was no change. This actualizes a separate copy of the code in each new edition.
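One way to do this is to recompile the whole schema in the new edition, since recompiling an inherited object is what actualizes it there. A sketch; the edition and schema names are illustrative:

```sql
ALTER SESSION SET EDITION = release_2;

-- Recompiling all objects of the schema actualizes the inherited
-- editioned objects in the current edition.
BEGIN
  DBMS_UTILITY.COMPILE_SCHEMA(
    schema      => 'APP_OWNER',
    compile_all => TRUE);
END;
/
```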
the EBnoR (Edition Based not only Redefinition) approach
- Start using EBR on day 1 in the project
- Aim for low but not zero downtime
- Avoid (complex) cross edition logic
- Develop and test in the same database in different editions
Some final words
I had planned to publish this manifesto for quite a long time (several years now). Since I started to put down those thoughts, EBR was enhanced. New recommendations have been made by Oracle plsql development. Other people, including the AskTom team, encountered similar things with EBR. Some of those ideas and recommendations match with EBnoR. Still I expect EBnoR to be very controversial. This is intended.
For the sake of brevity I skipped a larger part of the examples and direct code demos.
EBR also still has issues where I see no immediate solution. For example, editions are database level objects. As a developer I would prefer them to be schema level objects. The problems connected to this are beyond the scope of even EBnoR.