ODA Quickie – How to solve ODABR Error: Dirty bit is set.

The problem

A little while ago, during an ODA X7-2S upgrade from 19.6 to 19.9, the following error was encountered.

SUCCESS: 2021-06-04 10:02:05: ...EFI device backup saved as '/opt/odabr/out/hbi/efi.img'
INFO: 2021-06-04 10:02:05: ...step3 - checking EFI device backup
ERROR: 2021-06-04 10:02:05: Error running fsck over /opt/odabr/out/hbi/efi.img
ERROR: 2021-06-04 10:02:05: Command: 'fsck -a /opt/odabr/out/hbi/efi.img' failed as fsck from util-linux 2.23.2 fsck.fat 3.0.20 (12 Jun 2013) 0x25: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.  Automatically removing dirty bit. Performing changes. /opt/odabr/out/hbi/efi.img: 23 files, 1245/63965 clusters
INFO: 2021-06-04 10:02:05: Mounting EFI back
ERROR: 2021-06-04 10:02:06: Backup not completed, exiting...

This seems to be a known issue for Bare Metal ODAs, but the way to solve it is poorly documented.

The MOS notes

The Oracle ODABR support document mentions the problem twice and gives slightly different solutions.

Check the “ODABR – Use Case” and the “Known Issues” sections.

https://support.oracle.com/epmos/faces/DocumentDisplay?id=2466177.1

The document also mentions internal bug 31435951 (ODABR FAILS IN FSCK WITH “DIRTY BIT IS SET”).

From the public ODABR document

This is not an ODABR issue. ODABR is signalling a fsck error because your (in this case) efi partition is not in expected status… 
To fix this:

unmount efi
fsck.vfat -v -a -w <efidevice>
mount efi

Unfortunately the workaround is a bit vague and hard to understand. The EFI partition is mounted at /boot/efi. The “EFI device” is not the same as the mount point, but it can be derived from it, as shown below.


Here are the exact commands that helped me to solve the issue.

The solution

First, check your filesystems. The output below was taken after we had already repaired the issue, so your mileage may vary.

[root@ODA01 odabr]# df -h
Filesystem                          Size  Used Avail Use% Mounted on
devtmpfs                             94G   24K   94G   1% /dev
tmpfs                                94G  1.4G   93G   2% /dev/shm
tmpfs                                94G  4.0G   90G   5% /run
tmpfs                                94G     0   94G   0% /sys/fs/cgroup
/dev/mapper/VolGroupSys-LogVolRoot   30G   11G   17G  40% /
/dev/mapper/VolGroupSys-LogVolU01   148G   92G   49G  66% /u01
/dev/mapper/VolGroupSys-LogVolOpt    59G   43G   14G  77% /opt
tmpfs                                19G     0   19G   0% /run/user/1001
tmpfs                                19G     0   19G   0% /run/user/0
/dev/asm/commonstore-13             5.0G  367M  4.7G   8% /opt/oracle/dcs/commonstore
/dev/asm/reco-215                   497G  260G  238G  53% /u03/app/oracle
/dev/asm/datredacted-13             100G   28G   73G  28% /u02/app/oracle/oradata/redacted
/dev/asm/datredacted2-13            100G   74G   27G  74% /u02/app/oracle/oradata/redacted2
/dev/md0                            477M  208M  244M  47% /boot
/dev/sda1                           500M  9.8M  490M   2% /boot/efi

This shows that the “EFI device” is /dev/sda1.
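
If you prefer not to scan the whole df listing, the backing device can also be queried directly from the mount point. This is a small alternative, assuming the EFI partition is mounted at /boot/efi as it is on this ODA (findmnt is part of util-linux and available on the system):

[root@ODA01 odabr]# findmnt -n -o SOURCE /boot/efi
/dev/sda1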

Then we followed the steps described in the documentation:

[root@ODA01 odabr]# umount /boot/efi

[root@ODA01 odabr]# fsck.vfat -v -a -w /dev/sda1
fsck.fat 3.0.20 (12 Jun 2013)
fsck.fat 3.0.20 (12 Jun 2013)
Checking we can access the last sector of the filesystem
0x25: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
 Automatically removing dirty bit.
Boot sector contents:
System ID "mkdosfs"
Media byte 0xf8 (hard disk)
       512 bytes per logical sector
      8192 bytes per cluster
        16 reserved sectors
First FAT starts at byte 8192 (sector 16)
         2 FATs, 16 bit entries
    131072 bytes per FAT (= 256 sectors)
Root directory starts at byte 270336 (sector 528)
       512 root directory entries
Data area starts at byte 286720 (sector 560)
     63965 data clusters (524001280 bytes)
63 sectors/track, 255 heads
         0 hidden sectors
   1024000 sectors total
Reclaiming unconnected clusters.
Performing changes.
/dev/sda1: 23 files, 1245/63965 clusters

[root@ODA01 odabr]# mount /boot/efi
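
Optionally, you can confirm that the dirty bit is really gone before re-running the backup. This is just a sanity check and not part of the documented workaround; the -n flag makes fsck.vfat report only and never write, and the device must be unmounted for a clean check:

[root@ODA01 odabr]# umount /boot/efi
[root@ODA01 odabr]# fsck.vfat -n /dev/sda1
[root@ODA01 odabr]# mount /boot/efi

The output should no longer contain the “Dirty bit is set” line.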

After this, we could successfully create an ODABR snapshot:

[root@ODA01 odabr]# ./odabr backup -snap -osize 50 -usize 80
INFO: 2021-06-04 12:14:49: Please check the logfile '/opt/odabr/out/log/odabr_87615.log' for more details


│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
 odabr - ODA node Backup Restore - Version: 2.0.1-62
 Copyright Oracle, Inc. 2013, 2020
 --------------------------------------------------------
 Author: Ruggero Citton <ruggero.citton@oracle.com>
 RAC Pack, Cloud Innovation and Solution Engineering Team
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│

INFO: 2021-06-04 12:14:49: Checking superuser
INFO: 2021-06-04 12:14:49: Checking Bare Metal
INFO: 2021-06-04 12:14:49: Removing existing LVM snapshots
WARNING: 2021-06-04 12:14:49: LVM snapshot for 'opt' does not exist
WARNING: 2021-06-04 12:14:49: LVM snapshot for 'u01' does not exist
WARNING: 2021-06-04 12:14:49: LVM snapshot for 'root' does not exist
INFO: 2021-06-04 12:14:49: Checking LVM size
INFO: 2021-06-04 12:14:49: Boot device backup
INFO: 2021-06-04 12:14:49: Getting EFI device
INFO: 2021-06-04 12:14:49: ...step1 - unmounting EFI
INFO: 2021-06-04 12:14:50: ...step2 - making efi device backup
SUCCESS: 2021-06-04 12:14:54: ...EFI device backup saved as '/opt/odabr/out/hbi/efi.img'
INFO: 2021-06-04 12:14:54: ...step3 - checking EFI device backup
INFO: 2021-06-04 12:14:54: Getting boot device
INFO: 2021-06-04 12:14:54: ...step1 - making boot device backup using tar
SUCCESS: 2021-06-04 12:15:05: ...boot content saved as '/opt/odabr/out/hbi/boot.tar.gz'
INFO: 2021-06-04 12:15:05: ...step2 - unmounting boot
INFO: 2021-06-04 12:15:05: ...step3 - making boot device backup using dd
SUCCESS: 2021-06-04 12:15:10: ...boot device backup saved as '/opt/odabr/out/hbi/boot.img'
INFO: 2021-06-04 12:15:10: ...step4 - mounting boot
INFO: 2021-06-04 12:15:10: ...step5 - mounting EFI
INFO: 2021-06-04 12:15:11: ...step6 - checking boot device backup
INFO: 2021-06-04 12:15:12: OCR backup
INFO: 2021-06-04 12:15:13: ...ocr backup saved as '/opt/odabr/out/hbi/ocrbackup_87615.bck'
INFO: 2021-06-04 12:15:13: Making LVM snapshot backup
SUCCESS: 2021-06-04 12:15:13: ...snapshot backup for 'opt' created successfully
SUCCESS: 2021-06-04 12:15:15: ...snapshot backup for 'u01' created successfully
SUCCESS: 2021-06-04 12:15:15: ...snapshot backup for 'root' created successfully
SUCCESS: 2021-06-04 12:15:15: LVM snapshots backup done successfully

Side note: We passed smaller backup sizes to work around odabr complaining that there was not enough space for the snapshots, even though there actually was enough. That issue is unrelated to the “dirty bit” problem.
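For comparison, the size flags are optional. To my understanding, -osize and -usize set the LVM snapshot sizes (in GB) for the /opt and /u01 volumes; without them odabr picks its own defaults:

./odabr backup -snap
./odabr backup -snap -osize 50 -usize 80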

I hope this helps others to troubleshoot their ODA.