During an ODA X7-2S upgrade from 19.9 to 19.13 we encountered the following issue. The prepatch report mentioned that the check “Validate kernel log level” failed with the message
OS kernel log level is set to debug, this may result in a failure when patching Clusterware
If kernel OS log level is more than KERN_ERR(3) then GI patching may fail
This problem also seems to exist in versions 19.10 and later. It is a problem that cannot be ignored: trying to update the server anyway will lead to an error.
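For completeness: a prepatch report like the one below is created and displayed with odacli. The commands shown here are only a sketch and the exact syntax may differ slightly between ODA releases; the job ID is the one from our report.
# create the prepatch report for the server components
[root@ODA01 ~]# odacli create-prepatchreport -s -v 19.13.0.0.0
# display it, using the job ID returned by the previous command
[root@ODA01 ~]# odacli describe-prepatchreport -i d94c910d-5ed5-4b02-9c65-9e525c176817
Here is an example of what such a prepatch report might look like: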
Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  d94c910d-5ed5-4b02-9c65-9e525c176817
            Description:  Patch pre-checks for [OS, ILOM, GI, ORACHKSERVER]
                 Status:  FAILED
                Created:  March 14, 2023 11:52:10 AM CET
                 Result:  One or more pre-checks failed for [GI]

Node Name
---------------
ODA01

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions    Success  Validated minimum supported versions.
Validate patching tag          Success  Validated patching tag: 19.13.0.0.0.
Is patch location available    Success  Patch location is available.
Verify OS patch                Success  Verified OS patch
Validate command execution     Success  Validated command execution

__ILOM__
Validate supported versions    Success  Validated minimum supported versions.
Validate patching tag          Success  Validated patching tag: 19.13.0.0.0.
Is patch location available    Success  Patch location is available.
Checking Ilom patch Version    Success  Successfully verified the versions
Patch location validation      Success  Successfully validated location
Validate command execution     Success  Validated command execution

__GI__
Validate GI metadata           Success  Successfully validated GI metadata
Validate supported GI versions Success  Validated minimum supported versions.
Validate available space       Success  Validated free space under /u01
Is clusterware running         Success  Clusterware is running
Validate patching tag          Success  Validated patching tag: 19.13.0.0.0.
Is system provisioned          Success  Verified system is provisioned
Validate ASM in online         Success  ASM is online
Validate kernel log level      Failed   OS kernel log level is set to debug, this
                                        may result in a failure when patching
                                        Clusterware
                                        If kernel OS log level is more than
                                        KERN_ERR(3) then GI patching may fail
Validate minimum agent version Success  GI patching enabled in current
                                        DCSAGENT version
Validate Central Inventory     Success  oraInventory validation passed
Validate patching locks        Success  Validated patching locks
Validate clones location exist Success  Validated clones location
Validate DB start dependencies Success  DBs START dependency check passed
Validate DB stop dependencies  Success  DBs STOP dependency check passed
Evaluate GI patching           Success  Successfully validated GI patching
Validate command execution     Success  Validated command execution

__ORACHK__
Running orachk                 Success  Successfully ran Orachk
Validate command execution     Success  Validated command execution
You can check the setting of the kernel log level like this:
[root@ODA01 ~]# cat /proc/sys/kernel/printk
10 4 1 7
The first entry, “10”, is the console log level and means it is set to debug. It should be set to “3” (= KERN_ERR).
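For background: the four values in /proc/sys/kernel/printk are the current console log level, the default level for messages without an explicit level, the minimum allowed console level, and the boot-time default; only the first one matters for this check. In our case it was raised to 10 because the kernel had been booted with the “debug” parameter, which you can verify like this:
# "debug" in the boot parameters raises the console log level to 10
[root@ODA01 ~]# cat /proc/cmdline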
However, changing the /proc/sys/kernel/printk file is not the correct way to solve the issue.
Instead, you must edit the file /etc/default/grub and remove the “debug” entry there.
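To illustrate, the relevant line in /etc/default/grub is GRUB_CMDLINE_LINUX; simply drop the debug keyword from it. The other parameters below are just an example and will look different on your system.
# before (example only)
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=VolGroupSys/LogVolRoot rhgb quiet debug"
# after
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=VolGroupSys/LogVolRoot rhgb quiet"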
Then run
grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
GRUB is the “GRand Unified Bootloader” that runs when the ODA (or a VM) is started. The above command takes the settings from /etc/default/grub and regenerates the boot configuration from them.
On some ODAs (I believe on the older ODA X6-2) you might need to apply the change to a different configuration file instead. On our ODA X7-2S this file did not exist, so we did not change it:
grub2-mkconfig -o /boot/grub2/grub.cfg
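Either way, before rebooting you can double-check that the regenerated configuration no longer passes the debug flag to the kernel, for example:
# the kernel boot lines should no longer contain the debug parameter
[root@ODA01 ~]# grep -w debug /boot/efi/EFI/redhat/grub.cfg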
After this the server needs to be rebooted so that the new setting takes effect.
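After the reboot you can confirm the new log level; the first value in /proc/sys/kernel/printk should now be 3 (KERN_ERR) or lower, and the prepatch report should pass the kernel log level check when you run it again.
[root@ODA01 ~]# cat /proc/sys/kernel/printk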
And here is a link to a MOSC thread that helped solve the issue:
https://community.oracle.com/mosc/discussion/comment/16906486#Comment_16906486
I hope this saves you some time, in case you encounter the same problem.