Btrfsmaintenance
Scripts for btrfs maintenance tasks like periodic scrub, balance, trim or defrag on selected mountpoints or directories.
Btrfs maintenance toolbox
This is a set of scripts supplementing the btrfs filesystem and aims to automate a few maintenance tasks: scrub, balance, trim and defragmentation.
Each of the tasks can be turned on/off and configured independently. The default config values were selected to fit the default installation profile with btrfs on the root filesystem.
Overall tuning of the default values should give a good balance between effects of the tasks and low impact of other work on the system. If this does not fit your needs, please adjust the settings.
Tasks
The following sections describe the tasks in detail. There's one config
option that affects task concurrency, `BTRFS_ALLOW_CONCURRENCY`. This is
to avoid extra high resource consumption or unexpected interaction among the
tasks, and it serializes them in the order they're started by the timers.
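As a sketch, the option lives in `/etc/sysconfig/btrfsmaintenance` with the other settings (the value shown is illustrative; check your installed config for the actual default):

```sh
# /etc/sysconfig/btrfsmaintenance (excerpt)
# "false" serializes the maintenance tasks in the order their timers
# start them; "true" allows them to run in parallel.
BTRFS_ALLOW_CONCURRENCY="false"
```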
scrub
Description: Scrub operation reads all data and metadata from the devices and verifies the checksums. It's not mandatory, but may point out problems with faulty hardware early, as it touches data that might not otherwise be in use and could be affected by bit rot.
If there's a redundancy of data/metadata, i.e. the DUP or RAID1/5/6 profiles, scrub is able to repair the data automatically if there's a good copy available.
Impact when active: Intense read operations take place and may slow down or block other filesystem activities, possibly only for short periods.
Tuning:
- the recommended period is once a month, but a weekly period is also acceptable
- you can turn off the automatic repair (`BTRFS_SCRUB_READ_ONLY`)
- the default IO priority is set to idle, but scrub may take long to finish; you can change the priority to normal (`BTRFS_SCRUB_PRIORITY`)
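Taken together, the scrub-related settings in `/etc/sysconfig/btrfsmaintenance` look roughly like this (values are illustrative, not necessarily the shipped defaults):

```sh
# Scrub task settings (illustrative values)
BTRFS_SCRUB_PERIOD="monthly"      # how often the scrub task runs
BTRFS_SCRUB_MOUNTPOINTS="/"       # colon-separated list of mountpoints to scrub
BTRFS_SCRUB_PRIORITY="idle"       # "idle" (default) or "normal"
BTRFS_SCRUB_READ_ONLY="false"     # "true" turns off automatic repair
```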
Related commands:
- you can check the status of the last scrub run (either manual or through the cron job) with `btrfs scrub status /path`
- you can cancel a running scrub anytime if you find it inconvenient (`btrfs scrub cancel /path`); the progress state is saved every 5 seconds, and the next scrub will resume from that point
balance
Description: The balance command can do a lot of things, in general moves data around in big chunks. Here we use it to reclaim back the space of the underused chunks so it can be allocated again according to current needs.
The point is to prevent some corner cases where it's not possible to e.g. allocate new metadata chunks because the whole device space is reserved for all the chunks, although the total space occupied is smaller and the allocation should succeed.
The balance operation needs enough workspace so it can shuffle data around. By
workspace we mean device space that has no filesystem chunks on it, not to be
confused with the free space reported e.g. by `df`.
Impact when active: Possibly big. There's a mix of read and write operations, and it is seek-heavy on rotational devices. This can interfere with other work if the same set of blocks is affected.
The balance command uses filters to do the work in smaller batches.
Before kernel version 5.2, the impact with quota groups enabled can be extreme. The balance operation performs quota group accounting for every extent being relocated, which can have the impact of stalling the file system for an extended period of time.
Expected result: If possible all the underused chunks are removed, the
value of `total` in the output of `btrfs fi df /path` should be lower than before.
Check the logs.
The balance command may fail with an out-of-space error, but this is considered a minor fault, as the internal filesystem layout may prevent the command from finding enough workspace. This might be a good time for manual inspection of the space.
Tuning:
- you can make the space reclaim more aggressive by setting a higher percentage in `BTRFS_BALANCE_DUSAGE` or `BTRFS_BALANCE_MUSAGE`. A higher value means a bigger impact on your system and becomes very noticeable.
- the metadata chunk usage pattern is different from data, and it's not necessary to reclaim metadata block groups that are more than 30% full. The default maximum is 10%, which should not degrade performance too much but may be suboptimal if the metadata usage varies wildly over time. The assumption is that underused metadata chunks will get used at some point, so it's not absolutely required to reclaim them.
- the useful period highly depends on the overall data change pattern on the filesystem
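The tuning knobs above map to config variables like the following (a sketch; the usage values are example filter steps, not the shipped defaults):

```sh
# Balance task settings (illustrative values)
BTRFS_BALANCE_PERIOD="weekly"       # depends on your data change pattern
BTRFS_BALANCE_MOUNTPOINTS="/"       # colon-separated list of mountpoints
BTRFS_BALANCE_DUSAGE="1 5 10"       # usage filter steps for data chunks, percent
BTRFS_BALANCE_MUSAGE="1 5"          # usage filter steps for metadata chunks, percent
```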
Changed defaults since 0.5:
Versions up to 0.4.2 had usage filter set up to 50% for data and up to 30% for metadata. Based on user feedback, the numbers have been reduced to 10% (data) and 5% (metadata). The system load during the balance service will be smaller and the result of space compaction still reasonable. Multiple data chunks filled to less than 10% can be merged into fewer chunks. The file data can change in large volumes, e.g. deleting a big file can free a lot of space. If the space is left unused for the given period, it's desirable to make it more compact. Metadata consumption follows a different pattern and reclaiming only the almost unused chunks makes more sense, otherwise there's enough reserved metadata space for operations like reflink or snapshotting.
A convenience script is provided to update the unchanged defaults,
/usr/share/btrfsmaintenance/update-balance-usage-defaults.sh .
trim
Description: The TRIM operation (aka. discard) can instruct the underlying device to optimize blocks that are not used by the filesystem. This task is performed on-demand by the fstrim utility.
This makes sense for SSD devices or other type of storage that can translate the TRIM action to something useful (e.g. thin-provisioned storage).
Impact when active: Should be low, but depends on the amount of blocks being trimmed.
Tuning:
- the recommended period is weekly, but monthly is also fine
- the trim commands might not have any effect; this is up to the device, e.g. a block range may be too small, or other constraints may apply that differ by device type/vendor/firmware
- the default configuration is off, because of the system-provided `fstrim.timer`
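If you do want this task instead of `fstrim.timer`, the relevant settings look like this (illustrative values; the period is off by default):

```sh
# Trim task settings (illustrative values)
BTRFS_TRIM_PERIOD="none"     # "none" = off; set e.g. "weekly" or "monthly" to enable
BTRFS_TRIM_MOUNTPOINTS="/"   # colon-separated list of mountpoints to trim
```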
defrag
Description: Run defragmentation on configured directories. This is for convenience and not necessary as defragmentation needs are usually different for various types of data.
Please note that the defragmentation process does not descend to other mount
points and nested subvolumes or snapshots. All nested paths would need to be
enumerated in the respective config variable. The command utilizes `find -xdev`;
you can use that to verify in advance which paths the defragmentation will affect.
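Since the traversal is based on `find -xdev`, you can preview the affected files with the same construct (a sketch; `DEFRAG_PATH` is a stand-in for one of your configured directories):

```sh
# List the regular files a defrag pass would visit under DEFRAG_PATH.
# -xdev keeps find on one filesystem, so other mount points and nested
# subvolumes (which have their own device numbers) are skipped, mirroring
# the behavior of the maintenance script.
DEFRAG_PATH="${DEFRAG_PATH:-/var}"
find "$DEFRAG_PATH" -xdev -type f | head -n 20
```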
Special case:
There's a separate defragmentation task that happens automatically and defragments only the RPM database files. This is done via a zypper plugin and the defrag pass triggers at the end of the installation.
This improves reading the RPM databases later, but the installation process fragments the files very quickly so it's not likely to bring a significant speedup here.
Periodic scheduling
There are two ways to schedule and run the periodic tasks: cron and systemd timers. Only one can be active on a system, and this should be decided at installation time.
Cron
Cron takes care of periodic execution of the scripts, but they can be run any
time directly from /usr/share/btrfsmaintenance/, respecting the configured
values in /etc/sysconfig/btrfsmaintenance.
The changes to configuration file need to be reflected in the /etc/cron
directories where the scripts are linked for the given period.
If the period is changed, the cron symlinks have to be refreshed:
- manually -- use `systemctl restart btrfsmaintenance-refresh` (or the `rcbtrfsmaintenance-refresh` shortcut)
- in yast2 -- the sysconfig editor triggers the refresh automatically
- using a file watcher -- if you install `btrfsmaintenance-refresh.path`, this will utilize the file monitor to detect changes and will run the refresh
Systemd timers
There's a set of timer units that run the respective task script. The periods
are configured in the /etc/sysconfig/btrfsmaintenance file as well. The
timers have to be installed in a similar way as the cron scripts. Please note
that both the '.timer' and the respective '.service' files have to be installed
so the timers work properly.
Some package managers (e.g. apt) will configure the timers automatically at
install time - you can check with ls /usr/lib/systemd/system/btrfs*.
To install the timers manually, run `btrfsmaintenance-refresh-cron.sh timer`.
Quick start
The tasks' periods and other parameters should fit most use cases and do not
need to be touched. Review the mount points (variables ending with
_MOUNTPOINTS) whether you want to run the tasks there or not.
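A minimal review of those variables might look like this (paths are examples; the colon-separated list format is an assumption based on the shipped config template):

```sh
# Per-task mountpoint/path selection (example values)
BTRFS_SCRUB_MOUNTPOINTS="/"
BTRFS_BALANCE_MOUNTPOINTS="/"
BTRFS_TRIM_MOUNTPOINTS="/"
BTRFS_DEFRAG_PATHS=""    # defrag takes directories; empty disables the task
```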
Distro integration
Support for widely used distros is currently present, and more can be added. This section describes how the pieces are put together and should give some overview.
Installation
For debian based systems, run dist-install.sh as root.
For non-debian based systems, check for distro provided package or do manual installation of files as described below.
- `btrfs-*.sh` task scripts are expected at `/usr/share/btrfsmaintenance`
- the `sysconfig.btrfsmaintenance` configuration template is put to:
  - `/etc/sysconfig/btrfsmaintenance` on SUSE and RedHat based systems or derivatives
  - `/etc/default/btrfsmaintenance` on Debian and derivatives
- `/usr/lib/zypp/plugins/commit/btrfs-defrag-plugin.sh` or `/usr/lib/zypp/plugins/commit/btrfs-defrag-plugin.py` -- post-update script for zypper (the package manager), applies to SUSE-based distros for now
- cron refresh scripts are installed (see below)
The defrag plugin has a shell and a python implementation.
