Btrfs maintenance toolbox


This is a set of scripts supplementing the btrfs filesystem and aims to automate a few maintenance tasks: scrub, balance, trim and defragmentation.

Each of the tasks can be turned on/off and configured independently. The default config values were selected to fit the default installation profile with btrfs on the root filesystem.

Overall, the default values should give a good balance between the effects of the tasks and a low impact on other work on the system. If this does not fit your needs, please adjust the settings.

Tasks

The following sections describe the tasks in detail. There's one config option that affects task concurrency, BTRFS_ALLOW_CONCURRENCY. To avoid excessively high resource consumption or unexpected interactions among the tasks, it serializes them in the order they're started by timers.
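For illustration, the corresponding sysconfig line looks like this (a sketch; the value shown is an assumption, check the template's comments for the authoritative default):

BTRFS_ALLOW_CONCURRENCY="false"   # "false" serializes the tasks started by timers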

scrub

Description: The scrub operation reads all data and metadata from the devices and verifies the checksums. It's not mandatory, but it may point out problems with faulty hardware early, as it also touches data that might otherwise not be in use and silently bit-rot.

If there's a redundancy of data/metadata, i.e. the DUP or RAID1/5/6 profiles, scrub is able to repair the data automatically if there's a good copy available.

Impact when active: Intense read operations take place and may slow down or block other filesystem activities, possibly only for short periods.

Tuning:

  • the recommended period is once in a month but a weekly period is also acceptable
  • you can turn off the automatic repair (BTRFS_SCRUB_READ_ONLY)
  • the default IO priority is set to idle, but scrub may take a long time to finish; you can change the priority to normal (BTRFS_SCRUB_PRIORITY)
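As an example, these knobs map to the following sysconfig variables (values shown are illustrative, not recommendations):

BTRFS_SCRUB_PERIOD="monthly"
BTRFS_SCRUB_PRIORITY="normal"    # default is "idle"; "normal" finishes sooner
BTRFS_SCRUB_READ_ONLY="true"     # only verify, do not attempt repairs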

Related commands:

  • you can check the status of the last scrub run (either manual or through the cron job) with btrfs scrub status /path
  • you can cancel a running scrub at any time if you find it inconvenient (btrfs scrub cancel /path); the progress state is saved every 5 seconds and the next scrub will resume from that point
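A quick illustration of the commands above, using / as the mountpoint:

# check the result of the last scrub (manual or scheduled)
btrfs scrub status /
# cancel a running scrub; 'btrfs scrub resume' continues from the last
# saved checkpoint
btrfs scrub cancel /
btrfs scrub resume /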

balance

Description: The balance command can do a lot of things; in general it moves data around in big chunks. Here we use it to reclaim the space of underused chunks so it can be allocated again according to current needs.

The point is to prevent corner cases where it's not possible to e.g. allocate new metadata chunks because the whole device space is already reserved by existing chunks, even though the total space occupied is smaller and the allocation should succeed.

The balance operation needs enough workspace so it can shuffle data around. By workspace we mean device space that has no filesystem chunks on it; this is not to be confused with the free space reported e.g. by df.

Impact when active: Possibly big. There's a mix of read and write operations, and it is seek-heavy on rotational devices. This can interfere with other work if the same set of blocks is affected.

The balance command uses filters to do the work in smaller batches.
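As a sketch, the batched work boils down to invocations like these (the script iterates over the configured usage values; /path is a placeholder):

# reclaim data block groups that are at most 10% full
btrfs balance start -dusage=10 /path
# reclaim metadata block groups that are at most 5% full
btrfs balance start -musage=5 /path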

Before kernel version 5.2, the impact with quota groups enabled can be extreme. The balance operation performs quota group accounting for every extent being relocated, which can have the impact of stalling the file system for an extended period of time.

Expected result: If possible, all the underused chunks are removed; the value of total in the output of btrfs fi df /path should be lower than before. Check the logs.

The balance command may fail with a "no space" error, but this is considered a minor fault, as the internal filesystem layout may prevent the command from finding enough workspace. This might be the time for a manual inspection of space.
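For a manual inspection, the usual starting points are:

btrfs filesystem df /path       # per-type overview: total vs. used
btrfs filesystem usage /path    # detailed view including unallocated space (newer btrfs-progs)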

Tuning:

  • you can make the space reclaim more aggressive by adding higher percentages to BTRFS_BALANCE_DUSAGE or BTRFS_BALANCE_MUSAGE. A higher value means a bigger impact on your system and becomes very noticeable.
  • the metadata chunk usage pattern is different from data, and it's not necessary to reclaim metadata block groups that are more than 30% full. The default maximum is 10%, which should not degrade performance too much but may be suboptimal if the metadata usage varies wildly over time. The assumption is that underused metadata chunks will get used at some point, so it's not absolutely required to reclaim them.
  • the useful period highly depends on the overall data change pattern on the filesystem

Changed defaults since 0.5:

Versions up to 0.4.2 had the usage filter set to up to 50% for data and up to 30% for metadata. Based on user feedback, the numbers have been reduced to 10% (data) and 5% (metadata). The system load during the balance service is smaller and the result of the space compaction is still reasonable. Multiple data chunks filled to less than 10% can be merged into fewer chunks. File data can change in large volumes, e.g. deleting a big file can free a lot of space; if the space is left unused for the given period, it's desirable to make it more compact. Metadata consumption follows a different pattern, and reclaiming only the almost-unused chunks makes more sense; otherwise there's enough reserved metadata space for operations like reflink or snapshotting.

A convenience script is provided to update the unchanged defaults: /usr/share/btrfsmaintenance/update-balance-usage-defaults.sh.

trim

Description: The TRIM operation (also known as discard) can instruct the underlying device to optimize blocks that are not used by the filesystem. This task is performed on demand by the fstrim utility.

This makes sense for SSD devices or other types of storage that can translate the TRIM action into something useful (e.g. thin-provisioned storage).

Impact when active: Should be low, but depends on the amount of blocks being trimmed.

Tuning:

  • the recommended period is weekly, but monthly is also fine
  • whether a trim command has any effect is up to the device; e.g. a block range may be too small, or other constraints may apply that differ by device type/vendor/firmware
  • the default configuration is off so it does not collide with the system-provided fstrim.timer
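For a one-off manual trim (roughly what the periodic task does per configured mountpoint):

fstrim -v /    # -v reports how many bytes were trimmed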

defrag

Description: Run defragmentation on the configured directories. This is for convenience and is not necessary, as defragmentation needs usually differ for various types of data.

Please note that the defragmentation process does not descend into other mount points or nested subvolumes and snapshots. All nested paths would need to be enumerated in the respective config variable. The command utilizes find -xdev; you can use that to verify in advance which paths the defragmentation will affect.
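To preview the affected files, here is a sketch using the same find options as the script (the path is an example; +1M is the default BTRFS_DEFRAG_MIN_SIZE):

find /home -xdev -size +1M -type f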

Special case:

There's a separate defragmentation task that happens automatically and defragments only the RPM database files. This is done via a zypper plugin and the defrag pass triggers at the end of the installation.

This improves reading the RPM databases later, but the installation process fragments the files very quickly so it's not likely to bring a significant speedup here.

Periodic scheduling

There are now two ways to schedule and run the periodic tasks: cron and systemd timers. Only one can be active on a system, and this should be decided at installation time.

Cron

Cron takes care of the periodic execution of the scripts, but they can be run at any time directly from /usr/share/btrfsmaintenance/, respecting the configured values in /etc/sysconfig/btrfsmaintenance.

Changes to the configuration file need to be reflected in the /etc/cron.<PERIOD> directories where the scripts are linked for the given period.

If the period is changed, the cron symlinks have to be refreshed:

  • manually -- use systemctl restart btrfsmaintenance-refresh (or the rcbtrfsmaintenance-refresh shortcut)
  • in yast2 -- sysconfig editor triggers the refresh automatically
  • using a file watcher -- if you install btrfsmaintenance-refresh.path, this will utilize the file monitor to detect changes and will run the refresh

Systemd timers

There's a set of timer units that run the respective task scripts. The periods are configured in the /etc/sysconfig/btrfsmaintenance file as well. The timers have to be installed in a similar way as the cron symlinks. Please note that both the '.timer' and the respective '.service' files have to be installed for the timers to work properly.

Some package managers (e.g. apt) will configure the timers automatically at install time - you can check with ls /usr/lib/systemd/system/btrfs*.

To install the timers manually, run btrfsmaintenance-refresh-cron.sh timer.
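A hedged sketch of such a manual installation, assuming the unit files are available in the package's source directory:

cp btrfs-*.timer btrfs-*.service /etc/systemd/system/
systemctl daemon-reload
/usr/share/btrfsmaintenance/btrfsmaintenance-refresh-cron.sh timer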

Quick start

The tasks' periods and other parameters should fit most use cases and do not need to be touched. Review the mount points (variables ending with _MOUNTPOINTS) to decide whether you want to run the tasks there or not.

Distro integration

Support for widely used distros is currently present, and more distros can be added. This section describes how the pieces are put together and should give some overview.

Installation

For Debian-based systems, run dist-install.sh as root.

For non-Debian-based systems, check for a distro-provided package or install the files manually as described below.

  • btrfs-*.sh task scripts are expected at /usr/share/btrfsmaintenance
  • the sysconfig.btrfsmaintenance configuration template is put to:
      • /etc/sysconfig/btrfsmaintenance on SUSE and RedHat based systems or derivatives
      • /etc/default/btrfsmaintenance on Debian and derivatives
  • /usr/lib/zypp/plugins/commit/btrfs-defrag-plugin.sh or /usr/lib/zypp/plugins/commit/btrfs-defrag-plugin.py is the post-update script for zypper (the package manager); applies to SUSE-based distros for now
  • cron refresh scripts are installed (see below)

The defrag plugin has both a shell and a python implementation; choose whichever suits the installation better.

cron jobs

The periodic execution of the tasks is done by the 'cron' service. Symlinks to the task scripts are located in the respective directories in /etc/cron.<PERIOD>.

The script btrfsmaintenance-refresh-cron.sh will synchronize the symlinks according to the configuration files. This can be called automatically by a GUI configuration tool if it's capable of running post-change scripts or services. In that case there's the btrfsmaintenance-refresh.service systemd service.

This service can also be automatically started upon any modification of the configuration file in /etc/sysconfig/btrfsmaintenance by installing the btrfsmaintenance-refresh.path systemd watcher.

Post-update defragmentation

The package database files tend to be updated in a random way and get fragmented, which particularly hurts on btrfs. For rpm-based distros this means files in /var/lib/rpm. The script or plugin simply runs a defragmentation on the affected files. See btrfs-defrag-plugin.sh or btrfs-defrag-plugin.py for more details.

At the moment only the 'zypper' package manager plugin exists. As package managers differ significantly, there's no single plugin/script that covers them all.

Settings

The settings are copied to the expected system location from the template (sysconfig.btrfsmaintenance). This is a shell script and can be sourced to obtain values of the variables.

The template contains descriptions of the variables, their default and possible values, and can be deployed without changes (assuming the root filesystem is btrfs).
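For example, since the file is plain shell, it can be sourced directly:

. /etc/sysconfig/btrfsmaintenance
echo "$BTRFS_SCRUB_MOUNTPOINTS"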

Tuning periodic snapshotting

There are various tools and hand-written scripts to manage periodic snapshots and their cleanup. The common problem is tuning the retention policy under the constraint of the filesystem size, without running out of space.

This section describes the factors that affect this, using snapper as an example; adapting to other tools should be straightforward.

Intro

Snapper is a tool to manage snapshots of btrfs subvolumes. It can create snapshots of a given subvolume manually, periodically, or in a pre/post way for a given command. It can be configured to retain existing snapshots according to time-based settings. As the retention policy can be very different for various use cases, we need to be able to find matching settings.

The settings should satisfy the user's expectations about storing previous copies of the subvolume without taking too much space; in the extreme case, snapshots can consume the whole filesystem space and prevent some operations from finishing.

In order to avoid such situations, the snapper settings should be tuned according to the expected use case and filesystem size.

Sample problem

The default settings of snapper (all TIMELINE values set to 10) on a default-sized root partition can easily lead to no-space conditions. Frequent system updates make it happen earlier, but this also affects long-term use.

Factors affecting space consumption

  1. frequency of snapshotting
  2. amount of data changes between snapshots (delta)
  3. snapshot retention settings
  4. size of the filesystem

Each will be explained below.

The way the files are changed affects space consumption. When new data overwrites existing data, the new data is pinned by the following snapshot, while the original data still belongs to the previous snapshots. This means the allocated file blocks are freed only after the last snapshot pointing to them is gone.

Tuning

The administrator/user is supposed to know the approximate use of the partition with snapshots enabled.

The decision criterion for tuning is space consumption; we're optimizing to maximize retention without running out of space.

All the factors are intertwined, so we cannot give definitive answers, but rather describe the tendencies.

Snapshotting frequency

  • automatic: if turned on with the TIMELINE config option, the periodic snapshots are taken hourly. The daily/weekly/monthly/yearly periods will keep the first hourly snapshot in the given period.

  • at package update: package manager with snapper support will create pre/post snapshots before/after an update happens.

  • manual: the user can create a snapshot manually with snapper create, with a given snapshot type (i.e. single, pre, post).
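For instance, for the manual case (the description is arbitrary):

snapper create --type single --description "before config change"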

Amount of data change

This parameter is hard to predict and calculate. We work with rough estimates, e.g. megabytes, gigabytes, etc.

Retention settings

The user is supposed to know the possible needs for recovery or examination of previous file copies stored in snapshots.

It's not recommended to keep very old snapshots, e.g. monthly or even yearly, if there's no apparent need for them. The yearly snapshots should not substitute for backups, as they reside on the same partition and cannot be used for recovery.

Filesystem size

A bigger filesystem allows for longer retention, a higher snapshotting frequency and a larger amount of data changes.

As an example, for a system root partition the recommended size is 30 GiB, but 50 GiB is selected by the installer if snapshots are turned on.

For non-system partitions it is recommended to watch the remaining free space. Although getting an accurate value on btrfs is tricky due to shared extents and snapshots, the output of df gives a rough idea. Low space, e.g. under a few gigabytes, is more likely to lead to no-space conditions, so that is a good time to delete old snapshots or review the snapper settings.
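A quick check, with /data as a placeholder mountpoint; btrfs filesystem usage gives the more faithful picture:

df -h /data
btrfs filesystem usage /data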

Typical use cases

A rolling distro

  • frequency of updates: high, multiple times per week
  • amount of data changed between updates: high

Suggested values:

TIMELINE_LIMIT_HOURLY="12"
TIMELINE_LIMIT_DAILY="5"
TIMELINE_LIMIT_WEEKLY="2"
TIMELINE_LIMIT_MONTHLY="1"
TIMELINE_LIMIT_YEARLY="0"

The size of the root partition should be at least 30 GiB, but more is better.

Regular/enterprise distro

  • frequency of updates: low, a few times per month
  • amount of data changed between updates: low to moderate

Most of the data changes probably come from package updates, in the range of hundreds of megabytes per update.

Suggested values:

TIMELINE_LIMIT_HOURLY="12"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="4"
TIMELINE_LIMIT_MONTHLY="6"
TIMELINE_LIMIT_YEARLY="1"

Big file storage

  • frequency of updates: moderate to high
  • amount of data changed between updates: existing files do not change; new files are added and old ones deleted

Suggested values:

TIMELINE_LIMIT_HOURLY="12"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="4"
TIMELINE_LIMIT_MONTHLY="6"
TIMELINE_LIMIT_YEARLY="0"

Note that deleting a big file that has been snapshotted will not free the space until all relevant snapshots are deleted.

Mixed

  • frequency of updates: unpredictable
  • amount of data changed between updates: unpredictable

Examples:

  • home directory with small files (in the range of kilobytes to megabytes) and large files (hundreds of megabytes to gigabytes)
  • git trees, bare and checked out repositories

It's not possible to suggest config values here, as it really depends on user expectations. Keeping a few hourly snapshots should not consume too much space and provides copies of files, e.g. to restore after accidental deletion.

Starting point:

TIMELINE_LIMIT_HOURLY="12"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="1"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"

Summary

Type       Hourly  Daily  Weekly  Monthly  Yearly
Rolling        12      5       2        1       0
Regular        12      7       4        6       1
Big files      12      7       4        6       0
Mixed          12      7       1        0       0

About

The goal of this project is to help with administering btrfs filesystems. It is not supposed to be distribution-specific. Common scripts/configs are preferred, but per-distro exceptions will be added when necessary.

License: GPL 2

Contributing guide.


btrfsmaintenance's Issues

Unexpected operator in bash

when I run the scripts directly in bash I get the following warning:

sudo sh /usr/share/btrfsmaintenance/btrfs-scrub.sh
/usr/share/btrfsmaintenance/btrfs-scrub.sh: 14: [: /home:/var/www:/var/opt/gitlab/git-data: unexpected operator

The script then continues fine. Just wondered if my config was ok?

BTRFS_BALANCE_MOUNTPOINTS="/home":"/var/www":"/var/opt/gitlab/git-data"

blackout period for workday

If there were a global lock that kept new btrfsmaintenance tasks from running, thus preventing them from stepping on each others' toes (see issue #15 and discussion therein), this could be used for another purpose. It is a pain to have btrfs scrub or balance start while I'm trying to use my desktop computer. I'd rather these do their work when I'm not trying to do mine! How about another task, called "btrfs-blackout" or something like that, which would do nothing but hold a btrfsmaintenance lock file during specified periods. E.g., Mon-Fri 08:00-16:00 would be nice on my work desktop, so my computer doesn't decide it has more important things to do than respond to my inputs while I'm in my office trying to get work done.

This could also be used as a place to centralize policy decisions about this: grab the lock when on battery power, for example, or grab the lock for 10 minutes after power on to give other things time to settle down, or grab it if there has been interactive usage within the last few minutes, or grab it if the load factor is high.

(I know these tasks are supposed to be running at a very low priority and not interfere with anything else. But I suspect they evict things from the buffer cache, and also somehow manage to hog the disk anyway. This is a btrfs performance issue, and ideally would ultimately be addressed there. But in the meantime, it would be best if my desktop did not become unusable for an hour following its first reboot after two weeks turned off. It's even worse if my laptop goes south during a lecture to 350 students.)

This also relates to a few other issues: #15, #42, #53.

Allow configurable compression algorithm for btrfs-defrag

From the btrfs-filesystem man page, defrag section:

-c[<algo>]
    compress file contents while defragmenting. Optional argument selects the compression algorithm, zlib (default), lzo or zstd. Currently it’s not possible to select no compression. See also section EXAMPLES.

I read this as meaning that, when defragging, compression will always be applied. Since there is currently no way in btrfsmaintenance to specify this option or a corresponding value, zlib is used, which may not be desirable. It would be great if this were a configurable option in the defaults file, so you could select e.g. zstd.

Improvement from using asynchronous --background|--bg for balance operation?

I recently read about btrfs-balance's --background|--bg option. Would desktop users of btrfsmaintenance benefit from increased system responsiveness due to its asynchronous nature? I wonder because asynchronous fs operations don't block the gui like synchronous ones tend to. Or does an asynchronous balance introduce the possibility of filesystem inconsistency if an unexpected interruption occurs?

Question: Why different default musage/dusage defaults?

First, thanks for these scripts! I'd cobbled something together that didn't work nearly as well.

Why do the default values for dusage and musage not match? I would have assumed that making them both the same would yield optimal balancing, but musage only goes to 30 while dusage goes to 50.

## Path: System/File systems/btrfs
## Type: string
## Default: "1 5 10 20 30 40 50"
#
# The usage percent for balancing data block groups.
#
# Note: default values should not disturb normal work but may not reclaim
# enough block groups. If you observe that, add higher values but beware that
# this will increase IO load on the system.
BTRFS_BALANCE_DUSAGE="1 5 10 20 30 40 50"
## Path: System/File systems/btrfs
## Type: string
## Default: "1 5 10 20 30"
#
# The usage percent for balancing metadata block groups. The values are also
# used in case the filesystem has mixed blockgroups.
#
# Note: default values should not disturb normal work but may not reclaim
# enough block groups. If you observe that, add higher values but beware that
# this will increase IO load on the system.
BTRFS_BALANCE_MUSAGE="1 5 10 20 30"

btrfs-balance freezes desktop for up to 30 minutes

I've recently upgraded to Leap 15.0 and after two days of usage, my desktop suddenly froze for about 30 minutes. Login took 2-3 minutes. Starting a root shell took a minute. Starting atop took a minute. btrfs was at 100% CPU.

It turned out that btrfs-balance.sh was the culprit. I figured that this was because the script had never run before, so I rolled my eyes and sat it out.

Yesterday, the script ran again, hogging the computer for a couple of minutes.

I have two major issues with this:

  • It happened both times after resuming from hibernate to disk. Since hibernate has been unstable in the past, this is a "oh, come on, not again!" for me.
  • It takes too long without any indication what is going on.

My main disk is an SSD. Hogging the computer for several minutes when I want to do work is too much.

For the time being, I've disabled the script with systemctl disable btrfs-balance.timer

Please either make it run faster (less than 5 seconds) with a clearly visible indication that this job has started (so I know why my computer isn't responding anymore) or turn it into a background job which doesn't freeze the whole computer.

Multiple runs of scheduled jobs when period differs from default

I noticed this past month that the scrub job happened twice:

Jun 01 00:00:03 dave-lab systemd[1]: Started Scrub btrfs filesystem, verify block checksums.
...
Jun 01 15:14:17 dave-lab systemd[1]: btrfs-scrub.service: Succeeded.
Jun 01 15:14:17 dave-lab systemd[1]: btrfs-scrub.service: Consumed 2h 46min 28.412s CPU time.
Jun 01 15:14:17 dave-lab systemd[1]: Started Scrub btrfs filesystem, verify block checksums.

I had changed my scrub period to start at 4AM every 1st of the month prior to this. However, when I check via systemctl what the next trigger time is, it still reports that it's scheduled to start at midnight on July 1:

[~]$ cat /etc/sysconfig/btrfsmaintenance | grep "BTRFS_SCRUB_PERIOD"
BTRFS_SCRUB_PERIOD="*-*-01 04:00:00"
[~]$ systemctl status btrfs-scrub.timer                             
● btrfs-scrub.timer - Scrub btrfs filesystem, verify block checksums
     Loaded: loaded (/usr/lib/systemd/system/btrfs-scrub.timer; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/btrfs-scrub.timer.d
             └─schedule.conf
     Active: active (waiting) since Thu 2021-06-10 12:40:12 EDT; 58min ago
    Trigger: Thu 2021-07-01 00:00:00 EDT; 2 weeks 6 days left
   Triggers: ● btrfs-scrub.service
       Docs: man:btrfs-scrub

Jun 10 12:40:12 dave-lab systemd[1]: Started Scrub btrfs filesystem, verify block checksums.

systemctl shows 2 entries for TimersCalendar when investigating:

[~]$ systemctl show btrfs-scrub.timer | head -n 4
Unit=btrfs-scrub.service
TimersCalendar={ OnCalendar=*-*-01 04:00:00 ; next_elapse=Thu 2021-07-01 04:00:00 EDT }
TimersCalendar={ OnCalendar=*-*-01 00:00:00 ; next_elapse=Thu 2021-07-01 00:00:00 EDT }
OnClockChange=no

Upon inspection, this same thing occurs with the other periods I'd overridden:

[~]$ systemctl show btrfs-balance.timer | head -n 4
Unit=btrfs-balance.service
TimersCalendar={ OnCalendar=*-*-* 03:30:00 ; next_elapse=Fri 2021-06-11 03:30:00 EDT }
TimersCalendar={ OnCalendar=*-*-01 00:00:00 ; next_elapse=Thu 2021-07-01 00:00:00 EDT }
OnClockChange=no
[~]$ systemctl show btrfs-defrag.timer | head -n 4
Unit=btrfs-defrag.service
TimersCalendar={ OnCalendar=*-*-* 03:15:00 ; next_elapse=Fri 2021-06-11 03:15:00 EDT }
TimersCalendar={ OnCalendar=*-*-01 00:00:00 ; next_elapse=Thu 2021-07-01 00:00:00 EDT }
OnClockChange=no

But not with trim, which I had left disabled with the 'none' period:

[~]$ systemctl show btrfs-trim.timer | head -n 4
Unit=btrfs-trim.service
TimersCalendar={ OnCalendar=*-*-01 00:00:00 ; next_elapse=n/a }
OnClockChange=no
OnTimezoneChange=no

After some digging, I found this:

OnCalendar=
Defines realtime (i.e. wallclock) timers with calendar event
expressions. See systemd.time(7) for more information on the
syntax of calendar event expressions. Otherwise, the
semantics are similar to OnActiveSec= and related settings.

Note that timers do not necessarily expire at the precise
time configured with this setting, as it is subject to the
AccuracySec= setting below.

May be specified more than once, in which case the timer unit
will trigger whenever any of the specified expressions
elapse.
Moreover calendar timers and monotonic timers (see
above) may be combined within the same timer unit.

If the empty string is assigned to any of these options, the
list of timers is reset (both OnCalendar= timers and
monotonic timers, see above), and all prior assignments will
have no effect.

The last paragraph of the section describes the change I'm proposing. Changing the drop-in files for the timers to this:

[Timer]
OnCalendar=
OnCalendar=$PERIOD

will reset the list of timers and respect only the period specified in the /etc/sysconfig/btrfsmaintenance file. Without this change, the default period is always used in addition to the values explicitly specified.

I've forked and made a commit for what I believe the change should be here: davidmwilkinson@aa7cbc3

I can open a PR if desired, but since it's a one-line change, it's up to you.

"echo -n" bashism in /bin/sh scripts

The scripts use echo -n to parse the colon-separated paths in the config file. This is a problem if /bin/sh is dash, since -n is taken as part of the string to be echoed rather than as an option, and ends up getting prepended to the first path in the list.

As a solution, either the more portable printf should be used in place of echo, or the shebangs should be changed from /bin/sh to /bin/bash.
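A minimal illustration of the printf variant, using the config value from the report above:

# 'echo -n "$paths"' may print a literal '-n' under dash; printf is portable
paths="/home:/var/www:/var/opt/gitlab/git-data"
printf "%s" "$paths" | tr ':' '\n'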

daily/weekly/monthly tasks run in parallel, causing slowdown.

For example, in the default configuration, the weekly btrfs balance will run at the same time as the monthly btrfs scrub if the balance takes longer than 7 minutes to run on Debian. This should be avoided, or at least there should be an option to avoid it. On Debian, with anacron or cron, the daily/weekly/monthly runs start 5-22 minutes after each other. Each of daily/weekly/monthly runs its own tasks in sequence, not in parallel.

sytemd-cat should be syStemd-cat

diff --git a/btrfs-defrag.sh b/btrfs-defrag.sh
index 3c03c23..f1169c1 100755
--- a/btrfs-defrag.sh
+++ b/btrfs-defrag.sh
@@ -37,7 +37,7 @@ done
 } | \
 case "$BTRFS_LOG_OUTPUT" in
        stdout) cat;;
-       journal) sytemd-cat -t "$LOGIDENTIFIER";;
+       journal) systemd-cat -t "$LOGIDENTIFIER";;
        *) cat;;
 esac

diff --git a/btrfs-trim.sh b/btrfs-trim.sh
index 9bbbe09..55d3065 100755
--- a/btrfs-trim.sh
+++ b/btrfs-trim.sh
@@ -37,7 +37,7 @@ done
 } | \
 case "$BTRFS_LOG_OUTPUT" in
        stdout) cat;;
-       journal) sytemd-cat -t "$LOGIDENTIFIER";;
+       journal) systemd-cat -t "$LOGIDENTIFIER";;
        *) cat;;
 esac

Call /bin/bash instead of /bin/sh

btrfs-balance etc. call /bin/sh, therefore, [ "$FS" == "btrfs" ] in btrfsmaintenance-functions line 76 will not work.

Fix: call /bin/bash

Feature Request for more versatile periods

I wish to have the scrub run on the 1st Saturday of the month at 9am. Midnight doesn't make much sense for me (I'll likely shut down the PC before it has a chance to finish), nor does just doing it on whatever happens to be the first day.

I set BTRFS_SCRUB_PERIOD="Saturday *-*-1..7 9:00" in the config, but the next run is at midnight on the 1st. After this run it will be on the first Saturday of the month at 9:00. So it runs twice a month.

I realise that the valid options are 'Type: string(none,daily,weekly,monthly)', so this is probably not a bug, but a feature request to support the full set of https://jlk.fjfi.cvut.cz/arch/manpages/man/systemd.time.7#CALENDAR_EVENTS

incomplete systemd setup instructions in dist-install.sh

Hi,
I installed btrfsmaintenance as outlined in README.md and followed the instructions given by dist-install.sh, specifically this one:

  • copy *.timer files to the systemd.unit path (eg. /usr/lib/systemd/system/ or /etc/systemd/system)

However it's also necessary to copy the *.service files to the same location, else systemd gives somewhat cryptic error output and the maintenance scripts won't be executed. So I'd like to suggest a slight change to the displayed instructions: "- copy *.timer and *.service files [...]".

start process to send email notifications upon btrfs problems?

I am wondering how btrfs users can ensure to automatically get notified (only) upon errors and warnings (that usually happen in the background and are very likely to remain unnoticed).

Btrfsmaintenance seemed like a good find for this, as it covers the necessary background tasks.

But couldn't it also run something like a watchdog task (i.e. a bash pipe)?
It could filter the log for any btrfs warnings and errors that may occur during daily usage, and send out emails as soon as they occur.

dist-install.sh broken

  1. rename $prefix to $sysconfigdir.
  2. btrfsmaintenance-functions is not copied to installation dir.

Error handling

These scripts don't handle errors. Please consider set -e at the top of each script so that error conditions are trapped. The exit 0 at the bottom of each script can also be removed, as it's redundant.

Use case / justification:

We monitor btrfs-balance.service; however, when an error condition is encountered the script returns with an exit code of 0 even though the balance failed: ERROR: error during balancing '/backup': No space left on device There may be more info in syslog - try dmesg | tail
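A sketch of the proposal (not the project's actual code):

#!/bin/sh
set -e    # abort on the first failing command so the unit reports failure
btrfs balance start -dusage=10 /backup
# no trailing 'exit 0' -- the script's exit status is that of the last command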

BTRFS_VERBOSITY missing

In btrfs-defrag.sh there is a reference to $BTRFS_VERBOSITY in the execution of the find command.

Specifically,

find "$P" -size "$BTRFS_DEFRAG_MIN_SIZE" -type f -xdev -exec /sbin/btrfs filesystem defrag -t 32m -f "$BTRFS_VERBOSITY" '{}' \;

This causes an execution error, as "$BTRFS_VERBOSITY" expands to "", which causes an error with find (at least find v4.4.2).

systemd timers don't work properly with non-default time periods

I changed the scrub period in /etc/sysconfig/btrfsmaintenance from monthly to weekly, and it seems that when using systemd timers, btrfsmaintenance-refresh.service implements this change by creating a file /etc/systemd/system/btrfs-scrub.timer.d/schedule.conf to make the timer trigger weekly. However, the effect seems to be to make the timer start both monthly and weekly. I noticed that it activated today on the 1st of the month, even though it had just activated two days ago at the beginning of the week. I don't believe that is the intended behavior, and it only happens with the systemd timers, not if you use cron.

btrfs-defrag-plugin: switch to python3 by default

The btrfs-defrag plugin used for post-installation can be run from python version 2 and 3 environments and the runtime detection will continue to work. The default interpreter will be changed to python3.

dist-install.sh > TimerUnits

Hi!
First: tx for the good work.
Second:
Implement in dist-install.sh a log message, something like:
"Now copy the Service.Units and Timer.Units to /etc/systemd/system and do a btrfsmaintenance-refresh-cron.sh timer"

[wishlist] Please consider enabling systemd hardening features in provided service files

Please consider enabling systemd hardening features in provided service files. See http://0pointer.de/blog/projects/security.html and systemd.service(5) for available facilities. Examples of these are blocking network access, private /tmp directories, making directories read-only, or hiding directories.

Other than the premise of declaring minimal required dependencies, it seems like it might be useful to enforce best practices such as never defragmenting snapshots, e.g. specify a list of directories that hold snapshots in the config file, and then block access and/or writes to them. Granted, I'm not convinced this is the best approach, and am merely providing it as an example.

Switching to type=simple broke the after= effect

@fcrozat

after= was used in the service file in order to prevent e.g. btrfs-balance and btrfs-trim from running at the same time because this leads to performance issues (see boo#1063638).

This worked well because services were oneshot, i.e. with no real active state: inactive → activating → inactive.

Now, since commit 64be9c7: "*.service switch to Type=simple systemd service", services are simple, which gives something like: inactive → activating → active → inactive.

So now the after= field, which only prevents a unit from starting until the listed services have finished starting up, does not ensure that the btrfs operations are not running in parallel.

Unless someone has a better solution, I propose to revert commit 64be9c7.

Better support for scheduling wrt AC power

This is forked from discussion in issue #29, cc @sten0

  • ConditionACPower=yes in the timer units, because maintenance operations will needlessly deplete battery life on laptops, and servers that are on emergency UPS power should probably defer as well
  • Combined with anacron-like behaviour so that these tasks will run when AC power is restored
  • Support for running these tasks in one-shot mode. I believe this is useful for the following situation: a company usually scrubs on the 1st of the month, but an imminent product launch will ensure that the servers will be very busy for at least the first week of the next month. I'm not sure if systemd supports deferring scheduled tasks, but it definitely has a mechanism for checking if a job has run in the last 'now -30 days'. One-shot mode means the scrub can be manually triggered at a time before the surge in traffic to best assure quality of service. The unit for the check on the 1st of next month exits with "scrub unneeded". The unit for the check on the 1st of the month after that finds that a scrub has been completed in the last ~37 days or so and executes as expected.

Add support for custom MAILTO

Output of the jobs is mailed to root@localhost by default. It would be nice if one could specify an email address where the output of the jobs should be sent.

Safely stopping a maintenance script?

Is there a way to safely stop a running maintenance script? Sometimes they are long-running, especially when combined with other I/O. Is there a way to tell systemd to stop the scripts after a specific amount of time, or if system utilisation becomes too high?

Hard-coded path for fstrim breaks in ubuntu

In btrfs-trim.sh there is a hard-coded path reference to /usr/sbin/fstrim. In at least Ubuntu distributions, fstrim is installed by default to /sbin.

Recommend removing hard-coded paths to commands.
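One way to implement that recommendation (a sketch, not the project's actual fix):

# resolve fstrim via PATH instead of hard-coding /usr/sbin/fstrim
FSTRIM=$(command -v fstrim) || { echo "fstrim not found" >&2; exit 1; }
"$FSTRIM" -v /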

defrag does not descend into subvolumes

Hi,
btrfs-defrag.sh makes use of find -xdev, which treats a subvolume boundary as a device boundary and stops descending. If this is the intended behavior, it should be documented.
If it is not the intended behavior, the script needs to build a list of subvolumes that are part of the BTRFS_DEFRAG_PATHS and invoke find for each of them.
Greetings
Marc

possible confusion in documentation regarding paths/mountpoints

First of all - thanks for this project :)

I've just set it up, and for an initial baseline I ran the defrag, balance and scrub scripts. In the config file I've set / and /home to be scrubbed, which are subvolumes on the same btrfs partition.

sudo btrfs scrub status / now returns the exact same output as sudo btrfs scrub status /home - thus giving the impression that scrub is run across all subvolumes and not just the provided mountpoint. I couldn't find anything about that in the btrfs documentation though, so I thought I'd post it here.

For BTRFS_DEFRAG_PATHS it is explicitly mentioned that it doesn't cross mountpoints; for BTRFS_BALANCE_MOUNTPOINTS and BTRFS_SCRUB_MOUNTPOINTS it isn't 100% clear to me whether it does or not.
As I assumed the "does not cross" rule applies to all settings, balance and scrub are run multiple times.

Now that I look at it, the differentiation between _PATHS and _MOUNTPOINTS kind of indicates this, but maybe it would be good to include a hint in the comments for the respective settings? Or am I totally wrong with my conclusion that scrub and balance cross subvolumes?

Provide auto argument for BTRFS_DEFRAG_PATHS

Like for other path specifications, it'd be nice to have the option to specify auto as the value for BTRFS_DEFRAG_PATHS and let the paths considered for defragmentation be detected automatically.

Analyze/propose snapper snapshot TIMELINE settings

Implement a script that will analyze or propose settings for the snapper TIMELINE values according to the recommendations in "Tuning periodic snapshotting".

analyze:

  • scan all snapper configs on the system
  • try to match to the closest preset
  • present results in a human readable form

propose:

  • print values of the proposed settings for a given profile
  • allow dry-run or apply the settings

btrfs-defrag-plugin: Check fragmentation level before filling up disk

The defragmentation is run unconditionally, even if the majority of extents is still large, e.g.:

/usr/sbin/filefrag -v /usr/lib/sysimage/rpm/Packages

Filesystem type is: 9123683e
File size of /usr/lib/sysimage/rpm/Packages is 129085440 (31515 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..       0:   47234564..  47234564:      1:             shared
   1:        1..       1:   47234562..  47234562:      1:   47234565: shared
   2:        2..    2140:   47243301..  47245439:   2139:   47234563: shared
   3:     2141..    4031:   47291194..  47293084:   1891:   47245440: shared
   4:     4032..    4369:   47340585..  47340922:    338:   47293085: shared
   5:     4370..    4370:   47332867..  47332867:      1:   47340923: shared
   6:     4371..    5895:   47340924..  47342448:   1525:   47332868: shared
   7:     5896..    7626:   47527000..  47528730:   1731:   47342449: shared
   8:     7627..    9317:   47759384..  47761074:   1691:   47528731: shared
   9:     9318..   10923:   47809669..  47811274:   1606:   47761075: shared
  10:    10924..   12474:   48474629..  48476179:   1551:   47811275: shared
  11:    12475..   14655:   41523831..  41526011:   2181:   48476180: shared
  12:    14656..   14658:   41509577..  41509579:      3:   41526012: shared
  13:    14659..   14800:   41526015..  41526156:    142:   41509580: shared
  14:    14801..   17127:   44118314..  44120640:   2327:   41526157: shared
  15:    17130..   18564:   14822118..  14823552:   1435:   44120643: shared
  16:    18565..   19997:   15854971..  15856403:   1433:   14823553: shared
  17:    19998..   21397:   35984576..  35985975:   1400:   15856404: shared
  18:    21398..   22713:   36143758..  36145073:   1316:   35985976: shared
  19:    22714..   24330:   44172104..  44173720:   1617:   36145074: shared
  20:    24331..   24331:   44122071..  44122071:      1:   44173721: shared
  21:    24332..   25342:   44173722..  44174732:   1011:   44122072: shared
  22:    25343..   26656:   44174906..  44176219:   1314:   44174733: shared
  23:    26657..   27870:   46696520..  46697733:   1214:   44176220: shared
  24:    27871..   29692:   47173682..  47175503:   1822:   46697734: shared
  25:    29693..   31514:   47203363..  47205184:   1822:   47175504: last,shared,eof
/usr/lib/sysimage/rpm/Packages: 26 extents found

This is after installing a single package, with the defrag plugin deactivated.

Defragmentation causes high disk usage (man btrfs-filesystem):

Warning
Defragmenting with Linux kernel versions < 3.9 or ≥ 3.14-rc2 as well as with Linux stable
kernel versions ≥ 3.10.31, ≥ 3.12.12 or ≥ 3.13.4 will break up the reflinks of COW data
(for example files copied with cp --reflink, snapshots or de-duplicated data). This may
cause considerable increase of space usage depending on the broken up reflinks.

Raid5 scrub really slow.

Hi, I'm trying out raid5 with the "new" raid1c3/4 profiles. It's slow by its nature, but..

Noticed scrub crawls at 12 MiB/s when I usually get 40 MiB/s; also I'd rather scrub one disk at a time.
It seems to be a known thing.

I added BTRFS_SCRUB_MOUNTPOINTS="/dev/sda1:/dev/sdb1:/dev/sdc1:/dev/sdd1" in /etc/default/btrfsmaintenance

But it does not seem to catch on when I run sudo systemctl start btrfs-scrub:

btrfs-scrub.service - Scrub btrfs filesystem, verify block checksums
     Loaded: loaded (/usr/lib/systemd/system/btrfs-scrub.service; static)
     Active: inactive (dead) since Tue 2020-09-15 17:57:46 CEST; 2s ago
TriggeredBy: ● btrfs-scrub.timer
       Docs: man:fstrim
    Process: 63436 ExecStart=/usr/share/btrfsmaintenance/btrfs-scrub.sh (code=exited, status=0/SUCCESS)
   Main PID: 63436 (code=exited, status=0/SUCCESS)

Sep 15 17:57:46 host btrfs-scrub[63442]: Path /dev/sda1 is not btrfs, skipping
Sep 15 17:57:46 host btrfs-scrub[63442]: Running scrub on /dev/sdb1
Sep 15 17:57:46 host btrfs-scrub[63442]: Path /dev/sdb1 is not btrfs, skipping
Sep 15 17:57:46 host btrfs-scrub[63442]: Running scrub on /dev/sdc1
Sep 15 17:57:46 host btrfs-scrub[63442]: Path /dev/sdc1 is not btrfs, skipping
Sep 15 17:57:46 host btrfs-scrub[63442]: Running scrub on /dev/sdd1
Sep 15 17:57:46 host btrfs-scrub[63442]: Path /dev/sdd1 is not btrfs, skipping
Sep 15 17:57:46 host systemd[1]: btrfs-scrub.service: Succeeded.

Add support for .timer units

Currently, the scripts are started by the cron daemon. There's some user interest in using the systemd .timer units.

The scripts are independent of the executing daemon. What's needed:

  • enhance configuration of sysconfig file to point to the timer-defined period (eg. value timer instead of daily/..., so we don't have to sync 2 files), the periods are defined in the .timer files
  • document the changes
  • enhance installers to select which type of executing daemon is wanted

Disable scrub by default?

I would suggest disabling scrub by default. It loads the CPU really severely, causes a very high load average, and is not really needed when there is no RAID, I think.

btrfs-defrag.sh complained about a missing function is_btrfs

Looks like you have omitted the line that sources your common functions.

This can be fixed by editing /usr/share/btrfsmaintenance/btrfs-defrag.sh (or whatever its path is on your system) and adding this line:
. $(dirname $(realpath "$0"))/btrfsmaintenance-functions
after line 17 (in the current version, the line saying LOGIDENTIFIER='btrfs-defrag').

Rgds,
Andy

Run or document about `systemctl daemon-reload`

When we run systemctl restart btrfsmaintenance-refresh.service and then systemctl status btrfs-scrub.timer (for instance), there is a warning:

Warning: The unit file, source configuration file or drop-ins of btrfs-scrub.timer changed on disk. Run 'systemctl daemon-reload' to reload units.

Consequently, systemctl daemon-reload should either be run at the end of btrfsmaintenance-refresh-cron.sh or documented as a manual step. (I'm not sure if it is good practice to run this command automatically - I don't know if there could be side effects on other systemd units.)

All systemd timers not set to monthly run 1st of month at 0:00 in addition to PERIOD setting

Affects all timers used in btrfsmaintenance.

Using btrfs-trim as an example:

I set the trim to run every Monday at 15:45 with the following setting.
/etc/default/btrfsmaintenance
BTRFS_TRIM_PERIOD="Monday 15:45"

Ran the refresh script with systemctl restart btrfsmaintenance-refresh

Ran fine at the allotted time, but then when I check the status of the timer.

systemctl status btrfs-trim.timer
Loaded: loaded (/lib/systemd/system/btrfs-trim.timer; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/btrfs-trim.timer.d
└─schedule.conf
Trigger: Fri 2019-02-01 00:00:00 GMT; 3 days left

It will run next on the 1st (Friday) of the month at 0:00.

The timer is sourcing both OnCalendar=monthly from the timer section of /lib/systemd/system/btrfs-trim.timer and OnCalendar=Monday 15:45 from the timer section of /etc/systemd/system/btrfs-trim.timer.d/schedule.conf

Removing the OnCalendar=monthly from the timer section of /lib/systemd/system/btrfs-trim.timer and re-running the refresh script fixes the issue.

That won't persist after the package is updated though, so the workaround I'm using is to copy /lib/systemd/system/btrfs-trim.timer to /etc/systemd/system/btrfs-trim.timer and remove the OnCalendar=monthly from its timer section, as systemd will read that file instead of the /lib version.

balance task will fail on mixed mode filesystems

I have btrfs on /boot; it's 2.4 gigs, and mkfs.btrfs set it to mixed mode by default. btrfs-balance.sh runs this:

btrfs balance start -dusage=0 /boot

This fails and tells me to look at syslog or dmesg; syslog says:

kernel: BTRFS error (device sda3): with mixed groups data and metadata balance options must be the same

Tried this instead, which worked:

btrfs balance start -dusage=0 -musage=0 /boot

btrfs-balance.sh: line 27: [: !=: unary operator expected

I see
/usr/share/btrfsmaintenance/btrfs-balance.sh: line 27: [: !=: unary operator expected

sys-fs/btrfsmaintenance-0.4 on Gentoo.

# cat /etc/default/btrfsmaintenance 
## Path:        System/File systems/btrfs
## Type:        string(none,stdout,journal,syslog)
## Default:     "stdout"
#
# Output target for messages. Journal and syslog messages are tagged by the task name like
# 'btrfs-scrub' etc.
BTRFS_LOG_OUTPUT="stdout"

## Path:        System/File systems/btrfs
## Type:        string
## Default:     ""
#
# Run periodic defrag on selected paths. The files from a given path do not
# cross mount points or other subvolumes/snapshots. If you want to defragment
# nested subvolumes, all have to be listed in this variable.
# (Colon separated paths)
BTRFS_DEFRAG_PATHS="/:/mnt/btrfs-top-lvl/root/usr/portage:/mnt/btrfs-top-lvl/root/var/cache:/mnt/btrfs-top-lvl/root/var/tmp:/mnt/btrfs-top-lvl/root/var/tmp.notmpfs:/mnt/btrfs-top-lvl/root/var/lib/layman:/mnt/btrfs-top-lvl/root/usr/src:/mnt/btrfs-top-lvl/data/VirtualMachines"

## Path:           System/File systems/btrfs
## Type:           string(none,daily,weekly,monthly)
## Default:        "none"
## ServiceRestart: btrfsmaintenance-refresh
#
# Frequency of defrag.
BTRFS_DEFRAG_PERIOD="none"

## Path:        System/File systems/btrfs
## Type:        string
## Default:     "+1M"
#
# Minimal file size to consider for defragmentation
BTRFS_DEFRAG_MIN_SIZE="+1M"

## Path:        System/File systems/btrfs
## Type:        string
## Default:     "/"
#
# Which mountpoints/filesystems to balance periodically. This may reclaim unused
# portions of the filesystem and make the rest more compact.
# (Colon separated paths)
# The special word/mountpoint "auto" will evaluate all mounted btrfs filesystems at runtime
BTRFS_BALANCE_MOUNTPOINTS="/:/mnt/usb/data:/mnt/usb/mobiledata"

## Path:           System/File systems/btrfs
## Type:           string(none,daily,weekly,monthly)
## Default:        "weekly"
## ServiceRestart: btrfsmaintenance-refresh
#
# Frequency of periodic balance.
BTRFS_BALANCE_PERIOD="monthly"

## Path:        System/File systems/btrfs
## Type:        string
## Default:     "1 5 10 20 30 40 50"
#
# The usage percent for balancing data block groups.
#
# Note: default values should not disturb normal work but may not reclaim
# enough block groups. If you observe that, add higher values but beware that
# this will increase IO load on the system.
BTRFS_BALANCE_DUSAGE="1 5 10 20 30 40 50"

## Path:        System/File systems/btrfs
## Type:        string
## Default:     "1 5 10 20 30"
#
# The usage percent for balancing metadata block groups. The values are also
# used in case the filesystem has mixed blockgroups.
#
# Note: default values should not disturb normal work but may not reclaim
# enough block groups. If you observe that, add higher values but beware that
# this will increase IO load on the system.
BTRFS_BALANCE_MUSAGE="1 5 10 20 30"

## Path:        System/File systems/btrfs
## Type:        string
## Default:     "/"
#
# Which mountpoints/filesystems to scrub periodically.
# (Colon separated paths)
# The special word/mountpoint "auto" will evaluate all mounted btrfs filesystems at runtime
BTRFS_SCRUB_MOUNTPOINTS="/:/mnt/usb/data:/mnt/usb/mobiledata"

## Path:        System/File systems/btrfs
## Type:        string(none,weekly,monthly)
## Default:     "monthly"
## ServiceRestart: btrfsmaintenance-refresh
#
# Frequency of periodic scrub.
BTRFS_SCRUB_PERIOD="monthly"

## Path:        System/File systems/btrfs
## Type:        string(idle,normal)
## Default:     "idle"
#
# Priority of IO at which the scrub process will run. Idle should not degrade
# performance but may take longer to finish.
BTRFS_SCRUB_PRIORITY="idle"

## Path:        System/File systems/btrfs
## Type:        boolean
## Default:     "false"
#
# Do read-only scrub and don't try to repair anything.
BTRFS_SCRUB_READ_ONLY="false"

## Path:           System/File systems/btrfs
## Description:    Configuration for periodic fstrim
## Type:           string(none,daily,weekly,monthly)
## Default:        "none"
## ServiceRestart: btrfsmaintenance-refresh
#
# Frequency of periodic trim. Off by default so it does not collide with
# fstrim.timer . If you do not use the timer, turn it on here. The recommended
# period is 'weekly'.
BTRFS_TRIM_PERIOD="none"

## Path:        System/File systems/btrfs
## Description: Configuration for periodic fstrim - mountpoints
## Type:        string
## Default:     "/"
#
# Which mountpoints/filesystems to trim periodically.
# (Colon separated paths)
# The special word/mountpoint "auto" will evaluate all mounted btrfs filesystems at runtime
BTRFS_TRIM_MOUNTPOINTS="/:/mnt/usb/data:/mnt/usb/mobiledata"

Provide man pages for commands

A lot of Unix distributions recommend (or even require) man pages for commands. Debian is one of them.
The man page format is not very friendly to write, but there are tools to convert from friendlier formats to man pages.

One of these is asciidoc. It has a man page mode that can create man pages. It can also create html pages from those documents, and Github supports asciidoc.

My recommendation is to split the README.md into one asciidoc file per command and link them back.

Since the a2x command is available on most distributions, this should be ok.

[1] http://www.methods.co.nz/asciidoc/chunked/ch24.html
[2] http://www.methods.co.nz/asciidoc/asciidoc.1.txt

[trivial] Check for # of subvolumes

Hi Dave,

It just occurred to me that btrfsmaintenance might be a good place to globally check whether the number of subvolumes is getting out of control. At the very least, this proposal will produce an alert if a periodic job fails to maintain a sane snapshot count. I believe it will increase the likelihood of positive outcomes for those who try out btrfs while ignorant of the inverse correlation between too many snapshots and performance, and the likelihood of ENOSPC. The method I thought of is simple, but might be too simplistic. Something like:

SUBVOLUME_LIMIT=300
for i in $BTRFS_DEFRAG_PATHS; do
    # count the subvolumes below each configured path
    sub_count=$(btrfs sub list -a "$i" | wc -l)
    if [ "$sub_count" -gt "$SUBVOLUME_LIMIT" ]; then
        exceeded=$(( sub_count - SUBVOLUME_LIMIT ))
        echo "Warning: Subvolume limit for $i exceeds $SUBVOLUME_LIMIT by $exceeded subvolumes" | mail -s "Btrfs warning for $i" root@localhost
    fi
done

To avoid adding another variable to sysconfig.btrfsmaintenance, $i should be associated with the operation most likely to cause an ENOSPC or performance issue... but it might merit its own configuration knob. If you answer these questions I'd be happy to prepare a formal patch.

Sincerely,
Nicholas

unexpected operator with dash

hi there,

I have setup the config with:

BTRFS_BALANCE_MOUNTPOINTS="/mnt/btrfs/root:/mnt/btrfs/media:/mnt/backup"

for balance and for scrub, and if I run the scripts, I get the following error messages:

/etc/cron.monthly//btrfs-scrub: 14: [: /mnt/btrfs/root:/mnt/btrfs/media:/mnt/backup: unexpected operator
/etc/cron.weekly//btrfs-balance: 14: [: /mnt/btrfs/root:/mnt/btrfs/media:/mnt/backup: unexpected operator

I'm using Debian Sid:

ll /bin/sh
lrwxrwxrwx 1 root root 4 Jan 24 06:16 /bin/sh -> dash
dpkg -l|grep dash
ii dash 0.5.8-2.4 amd64 POSIX-compliant shell

All scripts run successfully on the mount points. If I run the scripts with bash there is no error - maybe point /bin/sh to /bin/bash?
I will add an additional question about trim. I don't use the systemd fstrim timer.
But there is a lot of information on the net and most of it is quite old. Should I run trim on a LUKS-encrypted volume with btrfs on an SSD? As far as I understand the btrfs wiki, this is not needed because the filesystem itself takes care of it.

thanks Micha
