lxd-snapper's Introduction

lxd-snapper

LXD snapshots, automated.

lxd-snapper automates creating & removing LXD snapshots - just prepare a snapshotting policy, set up a cronjob, and enjoy your containers.

tl;dr it's a fancy wrapper for lxc snapshot & lxc delete; like LXD's built-in snapshots.schedule, but more powerful.
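
For reference, the manual equivalents it wraps look roughly like this (the instance and snapshot names below are just examples):

$ lxc snapshot hello auto-20221105-130019
$ lxc delete hello/auto-20221105-130019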

Requirements

  • LXD 4 / 5
  • Linux (x86_64)

Plus, if you plan on building lxd-snapper locally:

  • Cargo & Rust 1.63.0
  • Nix (for running integration tests)

Getting started

Downloading

You can either download pre-built binaries:

# x86_64
$ wget https://github.com/Patryk27/lxd-snapper/releases/download/v1.3.0/lxd-snapper-linux64 -O lxd-snapper
$ chmod u+x lxd-snapper

... or build lxd-snapper on your own:

$ git clone https://github.com/Patryk27/lxd-snapper
$ cd lxd-snapper

# Using Cargo
$ cargo build --release
$ ./target/release/lxd-snapper

# (or) Using Nix v3
$ nix build
$ ./result/bin/lxd-snapper
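
Whichever route you take, you may want to copy the resulting binary somewhere on your $PATH; the destination below is just a common choice, not a requirement (adjust the source path to wherever your binary ended up):

$ sudo cp ./target/release/lxd-snapper /usr/local/bin/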

Configuring

Setting up lxd-snapper is easy: you just need to prepare a configuration file describing which LXD instances (i.e. containers and/or virtual machines) you want to get snapshotted and for how long those snapshots should be kept around.

We can start with the most basic configuration:

policies:
  my-first-policy:
    keep-last: 2

... which defines a single policy called my-first-policy that will snapshot all of your instances, keeping around the latest two snapshots per instance.

To check how it works, let's go ahead and create some containers:

$ lxc launch ubuntu: hello
$ lxc launch ubuntu: world
# (the container's OS doesn't matter - Ubuntu is just an example)

$ lxc ls
+-------+---------+------+------+-----------+-----------+
| NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-------+---------+------+------+-----------+-----------+
| hello | RUNNING | ...  | ...  | CONTAINER | 0         |
+-------+---------+------+------+-----------+-----------+
| world | RUNNING | ...  | ...  | CONTAINER | 0         |
+-------+---------+------+------+-----------+-----------+

Now, to snapshot those containers, first we have to store that configuration from before into a file - say, config.yaml - and then run lxd-snapper backup:

$ lxd-snapper --dry-run -c config.yaml backup
(--dry-run is active, no changes will be applied)

hello
  - creating snapshot: auto-20221105-130019 [ OK ]

world
  - creating snapshot: auto-20221105-130019 [ OK ]

Summary
-------
  processed instances: 2
  created snapshots: 2

As you can see, there's a detailed output of everything that happened - or rather, of everything that would happen: the --dry-run switch tells lxd-snapper that you only want to preview the changes, without actually creating or removing any snapshots.

We can confirm that nothing's changed by re-running lxc ls and seeing that we've still got zero snapshots:

$ lxc ls
+-------+---------+------+------+-----------+-----------+
| NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-------+---------+------+------+-----------+-----------+
| hello | RUNNING | ...  | ...  | CONTAINER | 0         |
+-------+---------+------+------+-----------+-----------+
| world | RUNNING | ...  | ...  | CONTAINER | 0         |
+-------+---------+------+------+-----------+-----------+

--dry-run is useful after you've made some changes to the configuration and want to confirm that everything is working as intended - since that's the case with us, we can now re-run lxd-snapper backup without --dry-run:

$ lxd-snapper -c config.yaml backup

/* ... */

Summary
-------
  processed instances: 2
  created snapshots: 2

... and voilà:

$ lxc ls
+-------+---------+------+------+-----------+-----------+
| NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-------+---------+------+------+-----------+-----------+
| hello | RUNNING | ...  | ...  | CONTAINER | 1         |
+-------+---------+------+------+-----------+-----------+
| world | RUNNING | ...  | ...  | CONTAINER | 1         |
+-------+---------+------+------+-----------+-----------+

Our policy says keep-last: 2, so let's go ahead and run lxd-snapper backup twice more, to trigger this limit:

$ lxd-snapper -c config.yaml backup
$ lxd-snapper -c config.yaml backup

$ lxc ls
+-------+---------+------+------+-----------+-----------+
| NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-------+---------+------+------+-----------+-----------+
| hello | RUNNING | ...  | ...  | CONTAINER | 3         |
+-------+---------+------+------+-----------+-----------+
| world | RUNNING | ...  | ...  | CONTAINER | 3         |
+-------+---------+------+------+-----------+-----------+

Now we've got three snapshots per container - why not two? Because, as a safety measure, the backup command only ever creates snapshots - it never deletes them.

To remove stale snapshots, we have to run prune:

$ lxd-snapper --dry-run -c config.yaml prune
(--dry-run is active, no changes will be applied)

hello
  - keeping snapshot: auto-20221105-130214
  - keeping snapshot: auto-20221105-130213
  - deleting snapshot: auto-20221105-130157 [ OK ]

world
  - keeping snapshot: auto-20221105-130214
  - keeping snapshot: auto-20221105-130213
  - deleting snapshot: auto-20221105-130157 [ OK ]

Summary
-------
  processed instances: 2
  deleted snapshots: 2
  kept snapshots: 4

As before, we started with --dry-run to see if everything looks alright - and since it does, it's time to kick those stale snapshots out of our filesystem for good:

$ lxd-snapper -c config.yaml prune

/* ... */

Summary
-------
  processed instances: 2
  deleted snapshots: 2
  kept snapshots: 4

$ lxc ls
+-------+---------+------+------+-----------+-----------+
| NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-------+---------+------+------+-----------+-----------+
| hello | RUNNING | ...  | ...  | CONTAINER | 2         |
+-------+---------+------+------+-----------+-----------+
| world | RUNNING | ...  | ...  | CONTAINER | 2         |
+-------+---------+------+------+-----------+-----------+

Re-running prune will now do nothing, since all of the containers have the correct number of snapshots:

$ lxd-snapper -c config.yaml prune
hello
  - keeping snapshot: auto-20221105-130214
  - keeping snapshot: auto-20221105-130213

world
  - keeping snapshot: auto-20221105-130214
  - keeping snapshot: auto-20221105-130213

Summary
-------
  processed instances: 2
  deleted snapshots: 0
  kept snapshots: 4

(there's also a command called backup-and-prune that runs backup and prune one after another, which is what you'll usually want to do.)
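
For example, the backup and prune invocations from before collapse into:

$ lxd-snapper -c config.yaml backup-and-prune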

And that's basically it - that's how lxd-snapper works; now let's see what makes it unique!

Filtering instances

By default, lxd-snapper snapshots all of the instances it can find on the local machine - you can narrow that down with the various included- and excluded- options:

policies:
  # Matches all instances inside the `important-client` project and keeps the
  # last 20 snapshots for each of them:
  a:
    included-projects: ['important-client']
    keep-last: 20
    
  # Matches all instances _outside_ the `important-client` project and keeps the
  # last 5 snapshots for each of them:
  b:
    excluded-projects: ['important-client']
    keep-last: 5
    
  # Matches all instances named `important-container` (across all projects) and
  # keeps the last 20 snapshots for each of them:
  c:
    included-instances: ['important-container']
    keep-last: 20
    
  # Matches all instances _not_ named `important-container` (across all
  # projects) and keeps the last 5 snapshots for each of them:
  d:
    excluded-instances: ['important-container']
    keep-last: 5
    
  # Matches all instances that are running at the time of performing `backup` /
  # `prune`.
  #
  # Possible values: Aborting, Running, Ready, Starting, Stopped, and Stopping.
  e:
    included-statuses: ['Running']
    
  # Matches all instances that are _not_ running at the time of performing
  # `backup` / `prune`.
  f:
    excluded-statuses: ['Running']
    
  # Matches all instances named `php` or `nginx` that belong to project
  # `client-a` or `client-b`.
  #
  # For an instance to match this policy, it has to match all `included-*`
  # rules, so e.g.:
  #
  # - an instance named `php` for `client-c` will be skipped, since `client-c`
  #   doesn't match `included-projects`,
  #
  # - an instance named `nextcloud` for `client-a` will be skipped, since
  #   `nextcloud` doesn't match `included-instances`.
  #
  # In SQL, this would be:
  #
  # SELECT *
  #   FROM instances
  #  WHERE (project = "client-a" OR project = "client-b")
  #    AND (name = "php" OR name = "nginx")
  #    AND (status = "Running")
  g:
    included-projects: ['client-a', 'client-b']
    included-instances: ['php', 'nginx']
    included-statuses: ['Running']
 
  # Similarly as above (notice the reversed operator for `excluded-*`):
  #
  # SELECT *
  #   FROM instances
  #  WHERE (project = "client-a" OR project = "client-b")
  #    AND (name != "php" AND name != "nginx")
  h:
    included-projects: ['client-a', 'client-b']
    excluded-instances: ['php', 'nginx']
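
After tweaking the filters, a --dry-run pass (like in the getting-started section above) is a convenient way to double-check which instances your configuration actually picks up before any snapshots get created or removed:

$ lxd-snapper --dry-run -c config.yaml backup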

Retention strategies

lxd-snapper supports Borg-style retention strategies; each policy must specify at least one keep- option that says for how long its snapshots should be kept around.

The most straightforward setting is keep-last - e.g.:

policies:
  my-policy:
    keep-last: 5

... would keep the five newest snapshots for each container.

(i.e. if you ran backup-and-prune once a day, that would effectively keep five days' worth of snapshots around)

Being versatile, lxd-snapper also supports keep-hourly, keep-daily etc., allowing you to create fancier policies such as:

policies:
  my-policy:
    keep-hourly: 6
    keep-daily: 5
    keep-weekly: 4
    keep-monthly: 3
    keep-yearly: 2

This would keep snapshots from 6 latest hours + 5 latest days + 4 latest weeks + 3 latest months + 2 latest years = 20 snapshots per instance.

Or, rephrasing:

  • we'd have a snapshot for each past hour, up to 6 of them (e.g. 15:00, 14:00, 13:00, 12:00, 11:00 & 10:00),
  • we'd have a snapshot for each past day, up to 5 of them (e.g. today, yesterday, the day before yesterday, 3 days ago & 4 days ago),
  • we'd have a snapshot for each past week, up to 4 of them (e.g. this week, the past week, two weeks ago & three weeks ago),
  • et cetera, et cetera.

This system takes a while to get used to, but it's also extremely versatile; you can find more examples inside the docs/example-configs directory and inside Borg's documentation.
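
For instance, a policy meant to keep roughly a week of daily snapshots plus a month of weekly ones (assuming backup-and-prune runs at least once a day) could look like this:

policies:
  my-policy:
    keep-daily: 7
    keep-weekly: 4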

Of course, you don't have to get fancy - keep-last should get the job done most of the time.

Cascading

Say, you're using LXD projects and you've got a few containers:

$ lxc ls --project client-a
+-------+---------+------+------+-----------+-----------+
| NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-------+---------+------+------+-----------+-----------+
| mysql | RUNNING | ...  | ...  | CONTAINER | 0         |
+-------+---------+------+------+-----------+-----------+
| php   | RUNNING | ...  | ...  | CONTAINER | 0         |
+-------+---------+------+------+-----------+-----------+

$ lxc ls --project client-b
+-------+---------+------+------+-----------+-----------+
| NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-------+---------+------+------+-----------+-----------+
| mysql | RUNNING | ...  | ...  | CONTAINER | 0         |
+-------+---------+------+------+-----------+-----------+
| php   | RUNNING | ...  | ...  | CONTAINER | 0         |
+-------+---------+------+------+-----------+-----------+

$ lxc ls --project client-c
+-------+---------+------+------+-----------+-----------+
| NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-------+---------+------+------+-----------+-----------+
| mysql | RUNNING | ...  | ...  | CONTAINER | 0         |
+-------+---------+------+------+-----------+-----------+
| php   | RUNNING | ...  | ...  | CONTAINER | 0         |
+-------+---------+------+------+-----------+-----------+

And, for the sake of argument, let's say that you want to create the following configuration:

  • every mysql should keep its 5 latest snapshots,
  • every php should keep its 2 latest snapshots,
  • except for client-c, which is important and should get 10 snapshots per instance.

That's what cascading is for - when multiple policies match a single container:

policies:
  # Matches: client-a/mysql, client-b/mysql, client-c/mysql
  all-mysqls:
    included-instances: ['mysql']
    keep-last: 5
    
  # Matches: client-a/php, client-b/php, client-c/php
  all-phps:
    included-instances: ['php']
    keep-last: 2
    
  # Matches: client-c/mysql, client-c/php
  important-clients:
    included-projects: ['client-c']
    keep-last: 10

... lxd-snapper will combine them, top to bottom, into a single policy - separately for each container.

In practice this means that when a few policies match a single instance, the policies below have a higher priority than the ones above them: important-clients is below all-mysqls and all-phps, so its keep-last wins for client-c/mysql and client-c/php.

This merging happens on a per-retention-strategy basis, so if we had:

policies:
  # Matches: client-a/mysql, client-b/mysql, client-c/mysql
  all-mysqls:
    included-instances: ['mysql']
    keep-daily: 2
    
  # Matches: client-a/php, client-b/php, client-c/php
  all-phps:
    included-instances: ['php']
    keep-hourly: 8
    
  # Matches: client-c/mysql, client-c/php
  important-clients:
    included-projects: ['client-c']
    keep-last: 20

... then our effective configuration would be:

client-a/mysql + client-b/mysql
  keep-daily = 2
  
client-a/php + client-b/php
  keep-hourly = 8

client-c/mysql
  keep-daily = 2
  keep-last = 20
  (= 22 snapshots)

client-c/php
  keep-hourly = 8
  keep-last = 20
  (= 28 snapshots)

Other possible use cases for this feature include creating a global "catch-all" policy and then carving out exceptions to it:

policies:
  all:
    keep-last: 10
    
  storages:
    included-instances: ['nextcloud', 'minio']
    keep-last: 20

This would keep 10 snapshots for all of the containers, with the exception of nextcloud and minio, which would keep 20 snapshots each.

Hooks

Hooks are small shell commands executed when lxd-snapper performs a certain action; you can configure them by creating a hooks: section inside the configuration:

hooks:
  on-backup-started: 'echo "on-backup-started" >> /tmp/log.txt'
  on-snapshot-created: 'echo "on-snapshot-created: {{ remoteName }}, {{ projectName }}, {{ instanceName }}, {{snapshotName}}" >> /tmp/log.txt'
  on-instance-backed-up: 'echo "on-instance-backed-up: {{ remoteName }}, {{ projectName }}, {{ instanceName }}" >> /tmp/log.txt'
  on-backup-completed: 'echo "on-backup-completed" >> /tmp/log.txt'

  on-prune-started: 'echo "on-prune-started" >> /tmp/log.txt'
  on-snapshot-deleted: 'echo "on-snapshot-deleted: {{ remoteName }}, {{ projectName }}, {{ instanceName }}, {{ snapshotName }}" >> /tmp/log.txt'
  on-instance-pruned: 'echo "on-instance-pruned: {{ remoteName }}, {{ projectName }}, {{ instanceName }}" >> /tmp/log.txt'
  on-prune-completed: 'echo "on-prune-completed" >> /tmp/log.txt'

policies:
  # ...

They come in handy e.g. for synchronizing snapshots to external storage:

hooks:
  on-snapshot-created: 'zfs send ... | ssh zfs recv ...'
  on-snapshot-deleted: 'zfs send ... | ssh zfs recv ...'

policies:
  # ...

Most of the hooks support variable interpolation - placeholders that lxd-snapper replaces with concrete values before the hook is run:

  • on-snapshot-created has {{ remoteName }}, {{ projectName }}, {{ instanceName }} and {{ snapshotName }},
  • on-instance-backed-up has {{ remoteName }}, {{ projectName }} and {{ instanceName }},
  • on-snapshot-deleted has {{ remoteName }}, {{ projectName }}, {{ instanceName }} and {{ snapshotName }},
  • on-instance-pruned has {{ remoteName }}, {{ projectName }} and {{ instanceName }}.

... where:

  • {{ remoteName }} corresponds to NAME as visible in lxc remote ls (local by default),
  • {{ projectName }} corresponds to NAME as visible in lxc project ls (default by default),
  • {{ instanceName }} corresponds to NAME as visible in lxc ls,
  • {{ snapshotName }} corresponds to NAME as visible in lxc info instance-name.
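
To make this concrete, a hook configured like the one below (the log path is made up) would be run with those placeholders already substituted - e.g. for the hello container from earlier, living in the default project:

hooks:
  on-snapshot-created: 'echo "created {{ projectName }}/{{ instanceName }}/{{ snapshotName }}" >> /tmp/log.txt'

# ... which would effectively execute something like:
#
# echo "created default/hello/auto-20221105-130019" >> /tmp/log.txt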

Caveats & Tips:

  • hooks are skipped during --dry-run,

  • you can provide at most one script per hook (e.g. you can't have on-backup-started defined twice),

  • you don't have to provide scripts for hooks you're not interested in (e.g. specifying just on-backup-started is alright),

  • hooks are run only from inside lxd-snapper (e.g. on-snapshot-created will not be run for a manual lxc snapshot performed from the command line),

  • hooks are launched as soon as the event happens and block lxd-snapper until the hook completes - e.g.

    hooks:
      on-snapshot-created: 'sleep 10'

    ... will delay creating each snapshot by 10 seconds; if that's problematic for your use case, you might want to buffer the changes like so (a sketch of what such a sync script could look like follows this list):

    hooks:
      on-backup-started: 'rm /tmp/created-snapshots.txt'
      on-snapshot-created: 'echo "{{ instanceName }},{{ snapshotName }}" >> /tmp/created-snapshots.txt'
      on-backup-completed: './sync-snapshots.sh /tmp/created-snapshots.txt'
  • when a hook returns a non-zero exit code, it will be treated as an error,

  • a hook's stdout and stderr are not displayed unless the hook returns a non-zero exit code (in which case both will be visible in the error message),

  • variables can be written {{likeThat}} or {{ likeThat }}, whichever way you prefer.
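
The sync-snapshots.sh referenced in the buffering example above isn't part of lxd-snapper - it stands for whatever transfer logic you need. A minimal sketch, assuming the "instanceName,snapshotName" lines buffered above and a hypothetical remote called backup-server to copy the instances to, might look like this:

#!/usr/bin/env bash
# sync-snapshots.sh <file> - hypothetical helper: for every instance listed in
# the buffered file, refresh its copy on the `backup-server` remote.
set -euo pipefail

while IFS=',' read -r instance snapshot; do
    echo "syncing ${instance} (new snapshot: ${snapshot})"
    lxc copy --refresh "${instance}" "backup-server:${instance}"
done < "$1"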

Remotes

By default, lxd-snapper sees containers & virtual machines only from the local LXD instance (i.e. as if you run lxc ls).

If you're using LXD remotes and you'd like lxd-snapper to snapshot them too, you have to list their names in the configuration file:

remotes:
  - server-a
  - server-b
  - server-c

If you'd like to snapshot both the local LXD and the remote ones, include a remote called local:

remotes:
  - local
  - server-a
  - server-b
  - server-c

(those labels correspond to NAME as visible in lxc remote ls)

By default, each policy will match all of the specified remotes - if you want to narrow that down, you can use included-remotes and excluded-remotes:

remotes:
  - unimportant-server-A
  - unimportant-server-B
  - important-server-A

policies:
  all-servers:
    keep-last: 10
  
  important-servers:
    included-remotes: ['important-server-A']
    keep-last: 25 

If you're going for a centralized backup solution, you can pair this feature with hooks to pull the newly created snapshots onto your coordinator machine:

hooks:
  on-instance-backed-up: 'lxc copy --refresh {{ remoteName }}:{{ instanceName }} {{ instanceName }}'
  on-instance-pruned: 'lxc copy --refresh {{ remoteName }}:{{ instanceName }} {{ instanceName }}'

remotes:
  - server-A
  - server-B
  - server-C

policies:
  all-servers:
    keep-last: 10

Scheduling

Finally, lxd-snapper is a fire-and-forget application - it doesn't daemonize itself; to keep instances backed up & pruned on time, you'll want to create a systemd timer or a cronjob for it:

5 * * * * /usr/bin/lxd-snapper -c /etc/lxd-snapper.yaml backup-and-prune
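
If you'd rather use a systemd timer, a minimal sketch could look like the units below - the unit names are arbitrary, and the binary & config paths mirror the cron example above (adjust them to your setup):

# /etc/systemd/system/lxd-snapper.service
[Unit]
Description=Create and prune LXD snapshots

[Service]
Type=oneshot
ExecStart=/usr/bin/lxd-snapper -c /etc/lxd-snapper.yaml backup-and-prune

# /etc/systemd/system/lxd-snapper.timer
[Unit]
Description=Run lxd-snapper every hour

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target

... and then:

$ sudo systemctl enable --now lxd-snapper.timer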

Configuration syntax reference

# (optional, defaults to 'auto-')
#
# Prefix used to distinguish between snapshots created by lxd-snapper and 
# everything else (e.g. a manual `lxc snapshot`).
#
# `lxd-snapper backup` will create snapshots with this prefix and
# `lxd-snapper prune` will only ever remove snapshots that match this prefix.
snapshot-name-prefix: '...'

# (optional, defaults to '%Y%m%d-%H%M%S')
#
# Formatting string used to build the rest of the snapshot name.
# 
# Format:
# https://docs.rs/chrono/0.4.22/chrono/format/strftime/index.html
snapshot-name-format: '...'

# (optional, defaults to '10m')
#
# Timeout for each call to lxc; prevents lxd-snapper from running forever if lxc
# happens to hang.
#
# If you've got a (very) slow storage, you might want to increase this limit, 
# but the default should be enough for a typical setup.
#
# Format:
# https://docs.rs/humantime/latest/humantime/
# (e.g. '30s', '5m', '1h' etc.)
lxc-timeout: '...'

# (optional)
hooks:
  on-backup-started: '...'
  on-instance-backed-up: '...'
  on-snapshot-created: '...'
  on-backup-completed: '...'
  
  on-prune-started: '...'
  on-snapshot-deleted: '...'
  on-instance-pruned: '...'
  on-prune-completed: '...'

# (optional, defaults to `local`)
remotes:
  - local
  - server-A
  - server-B

# (at least one required)
policies:
  policy-name:
    included-remotes: ['...', '...']
    excluded-remotes: ['...', '...']
    included-projects: ['...', '...']
    excluded-projects: ['...', '...']
    included-instances: ['...', '...']
    excluded-instances: ['...', '...']
    included-statuses: ['...', '...']
    excluded-statuses: ['...', '...']

    keep-hourly: 1
    keep-daily: 1
    keep-weekly: 1
    keep-monthly: 1
    keep-yearly: 1
    keep-last: 1
    keep-limit: 1
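
Putting a few of those options together, a small but complete configuration might look like this (all values are purely illustrative):

snapshot-name-prefix: 'auto-'
lxc-timeout: '30m'

hooks:
  on-backup-completed: 'echo "backup completed" >> /tmp/lxd-snapper.log'

remotes:
  - local

policies:
  everything:
    keep-last: 10

  important-clients:
    included-projects: ['client-c']
    included-statuses: ['Running']
    keep-daily: 7
    keep-weekly: 4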

Contributing

Merge requests are very much welcome! :-)

lxd-snapper is a pretty standard Rust project, so cargo & rustc should be enough to get you going.

There are also end-to-end tests written using the NixOS Testing Framework, which you can run with nix flake check -j4.

Disclaimer

Snapshots are not a replacement for backups - to keep your data safe, use snapshots and backups together, wisely.

License

Copyright (c) 2019 Patryk Wychowaniec [email protected].
Licensed under the MIT license.


lxd-snapper's Issues

Volume selection

Is there any way to include or exclude a specific volume from being snapshotted, please?

For example, I don't want /tmp to be snapshotted.

Thanks.

Last version

Hello.

I don't understand your versioning in the tags. What is the latest version, please?

I would like to add a package to the Gentoo repo and I need to know which version to use.

Thanks.
Přemysl

Feature request: Copy snapshots to remote server

I understand that this might be outside the scope of the project, but it would be amazing if copying snapshots to a remote server was supported. The current tools for doing this with LXD alone are very lackluster!

wish: implement a retry policy

Hi,

Thanks for a wonderful tool, it saved my life a couple of times already :)

I have a large(ish) cluster - 6 nodes, 150+ containers - and there is always something going on: a backup, devs playing around and overloading individual nodes, upgrades, maintenance, etc. So more often than not lxc times out and then the complete service "fails", like this:

Dec 06 09:02:27 lxd10.2e-systems.com lxd-snapper[30497]: -> deleting snapshot: auto-20221206-040026
Dec 06 09:02:27 lxd10.2e-systems.com lxd-snapper[30497]: error: lxc returned a non-zero status code and said:
Dec 06 09:02:27 lxd10.2e-systems.com lxd-snapper[30497]: -> [ FAILED ]
...
Dec 06 09:03:37 lxd10.2e-systems.com lxd-snapper[30497]: Error: Some instances couldn't be pruned

In some cases this is a problem, as a snapshot that is not deleted on time uses disk space, which is sometimes scarce. So: would it be possible to implement some kind of retry policy, preferably a configurable one, like:

retry: 5
retry-interval: 30s

Build problem

Hello.

I can't build your project. Here is the build output:

p-dev@velvet ~/lxd-snapper $ cargo build
   Compiling proc-macro2 v1.0.24
   Compiling unicode-xid v0.2.1
   Compiling syn v1.0.45
   Compiling serde_derive v1.0.117
   Compiling libc v0.2.79
   Compiling autocfg v1.0.1
   Compiling serde v1.0.117
   Compiling version_check v0.9.2
   Compiling byteorder v1.3.4
   Compiling memchr v2.3.3
   Compiling ryu v1.0.5
   Compiling lazy_static v1.4.0
   Compiling anyhow v1.0.33
   Compiling itoa v0.4.6
   Compiling unicode-width v0.1.8
   Compiling bitflags v1.2.1
   Compiling unicode-segmentation v1.6.0
   Compiling serde_json v1.0.59
   Compiling hashbrown v0.9.1
   Compiling linked-hash-map v0.5.3
   Compiling termcolor v1.1.0
   Compiling dtoa v0.4.6
   Compiling strsim v0.10.0
   Compiling encode_unicode v0.3.6
   Compiling os_str_bytes v2.3.2
   Compiling vec_map v0.8.2
   Compiling textwrap v0.12.1
   Compiling proc-macro-error-attr v1.0.4
   Compiling proc-macro-error v1.0.4
   Compiling num-traits v0.2.12
   Compiling num-integer v0.1.43
   Compiling indexmap v1.6.0
   Compiling yaml-rust v0.4.4
   Compiling heck v0.3.1
   Compiling csv-core v0.1.10
   Compiling regex-automata v0.1.9
   Compiling quote v1.0.7
   Compiling atty v0.2.14
   Compiling time v0.1.44
   Compiling dirs v1.0.5
   Compiling pathsearch v0.2.0
   Compiling colored v1.9.3
   Compiling term v0.5.2
   Compiling thiserror-impl v1.0.21
   Compiling clap_derive v3.0.0-beta.2
   Compiling thiserror v1.0.21
   Compiling bstr v0.2.13
   Compiling chrono v0.4.19
   Compiling serde_yaml v0.8.13
   Compiling clap v3.0.0-beta.2
   Compiling csv v1.1.3
   Compiling lib_lxd v0.0.0 (/home/p/p-dev/lxd-snapper/libs/lxd)
error[E0554]: `#![feature]` may not be used on the stable release channel
 --> libs/lxd/src/lib.rs:3:1
  |
3 | #![feature(try_blocks)]
  | ^^^^^^^^^^^^^^^^^^^^^^^

error: aborting due to previous error

For more information about this error, try `rustc --explain E0554`.
error: could not compile `lib_lxd`.

To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed

I am using cargo and rustc version 1.47.0.

Thanks for your help.

P.

Support remote instances

If I understand it correctly, lxd-snapper currently supports only local instances.
I currently have a backup process set up where my backup server interacts with another LXD remote instance. It manually creates snapshots and pulls them afterwards with lxc copy --refresh. I'd love to replace the manual snapshot process with lxd-snapper, and I'd rather not move this bit to the production server but instead keep it in one place.

Would it be possible to implement a config parameter that allows specifying a remote that a specific policy is applied to?
I imagine that there are many use cases for this, e.g., if people manage multiple different lxd instances.

Set Maximum Size of Snapshots?

In normal snapper, the maximum snapshot size can be set as a fraction of the maximum size of the filesystem. Would you be willing to provide an example in the example config in the readme about how to set this up in lxd-snapper?

Volume snapshots

LXD allows managing volume snapshots using the lxc storage volume snapshot command - lxd-snapper could be extended to support them, too.
