
Comments (4)

redmop commented on May 27, 2016

Any snapshot that has been completely sent/received can be resumed from.
Syncoid does this properly.


from sanoid.

jimsalterjrs commented on May 26, 2016

It'll start over from the beginning. When ZFS send/resume makes it into
production I'll support it with syncoid, but right now there's no resume
capability available.

Note that this is typically only a potential issue for initial replication.
Later replication is incremental and generally FAR less data, therefore
less potential for interrupted transfer.


(Sent from my tablet - please blame any weird errors on autocorrect)

On May 26, 2016 15:10:55 flo82 [email protected] wrote:

I want to transfer terabytes of data with syncoid.
What will happen if the transfer gets interrupted due to a technical /
network connection loss?

If I'm starting the sync again - where will it start?
Thanks for the answer - BTW: I'm using the OpenZFS implementation of ZFS in
the latest version.



jimsalterjrs commented on May 27, 2016

That's a good point. To clarify that:

When Syncoid is doing a full (initial) replication, there is only one
snapshot being sent, which is likely a lot of data because it's full, not
incremental. If this is interrupted, it will need to start over from
scratch. For example, if a 1TB snapshot is interrupted at 900GB sent,
it's going to have to start over at 0 the next time.

However, for subsequent replication - including the follow-on
incrementals on a first Syncoid of a dataset which already had snapshots
- the incrementals are pretty small, and even if the Syncoid process
is interrupted, any snapshots which fully replicated remain present on
the target.

For example, let's say you do syncoid sourcepool/sourceset
root@target:targetpool/targetset. If sourcepool/sourceset has ten
snapshots and seven of them have fully replicated when the Syncoid
process is interrupted, the next Syncoid attempt will pick up from
there, and begin an incremental replication of snapshots 8, 9, and 10.
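To illustrate that resume logic with a minimal sketch (hypothetical, not Syncoid's actual code): the newest snapshot present on both source and target becomes the base for the next incremental send. Here plain word lists stand in for `zfs list -t snapshot` output, with seven of ten snapshots already replicated:

```shell
# Sketch of how the resume point is determined: the newest snapshot
# present on BOTH sides is the incremental base for the next run.
src="@0 @1 @2 @3 @4 @5 @6 @7 @8 @9"   # snapshots on the source
tgt="@0 @1 @2 @3 @4 @5 @6"            # what survived the interrupted run
base=""
for s in $src; do
  case " $tgt " in
    *" $s "*) base=$s ;;              # keep updating: ends as newest common
  esac
done
echo "next run sends incremental from $base to @9"
```

The next run then sends only the snapshots newer than that base, rather than restarting the whole transfer.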


jimsalterjrs commented on May 27, 2016

> the follow-on incrementals on a first Syncoid of a dataset which already had snapshots

One last comment, to explain/demonstrate what I'm talking about there.

We have a dataset, demo, that contains ten snapshots @0-@9, each of which contains 50M of data new to the dataset as of that snapshot.

We'll interrupt Syncoid several times while transferring it to a remote system:

root@banshee:~/git/sanoid# syncoid banshee/demo root@target:target/demo
INFO: Sending oldest full snapshot banshee/demo@autosnap_2016-05-27_18:30:01_hourly (~ 9 KB) to new target filesystem:
40.7kB 0:00:00 [9.53MB/s] [===================================] 412%            
INFO: Updating new target filesystem with incremental banshee/demo@autosnap_2016-05-27_18:30:01_hourly ... syncoid_banshee_2016-05-27:18:33:04 (~ 337.3 MB):
^C40MB 0:00:02 [37.2MB/s] [=============>                      ] 41% ETA 0:00:02

mbuffer: warning: error during output to <stdout>: Broken pipe
CRITICAL ERROR:  /sbin/zfs send -I banshee/demo@autosnap_2016-05-27_18:30:01_hourly banshee/demo@syncoid_banshee_2016-05-27:18:33:04 | /usr/bin/pv -s 353668616 | /usr/bin/lzop  | /usr/bin/mbuffer  -q -s 128k -m 16M 2>/dev/null | /usr/bin/ssh -c [email protected],arcfour -S /tmp/[email protected] [email protected] ' /usr/bin/mbuffer  -q -s 128k -m 16M 2>/dev/null | /usr/bin/lzop -dfc |  /sbin/zfs receive -F target/demo' failed: 2 at /usr/local/bin/syncoid line 196.

OK, the majority of what we're looking at here is a hellaciously obnoxious (but informative!) error message that happens when we kill Syncoid with a ^C. But from the top, Syncoid does a FULL replication of demo@0 - then once that's done, the same Syncoid run starts an incremental replication from demo@0-demo@9. We interrupted the incremental 41% of the way through.

What happens when we run Syncoid again?

root@banshee:~/git/sanoid# syncoid banshee/demo root@target:target/demo
Sending incremental banshee/demo@2 ... syncoid_banshee_2016-05-27:18:33:12 (~ 236.1 MB):
^Cmbuffer: warning: error during output to <stdout>: canceled  ] 43% ETA 0:00:01
 138MB 0:00:01 [81.6MB/s] [===================>                ] 58%            

(Super obnoxious error message elided!) We are replicating incremental demo@2-demo@9 this time - meaning that when we interrupted Syncoid the first time, we'd successfully replicated demo@0 and demo@1 and demo@2.

We interrupted Syncoid again, this time at 58% through, and then restart for a third run:

root@banshee:~/git/sanoid# syncoid banshee/demo root@target:target/demo
Sending incremental banshee/demo@5 ... syncoid_banshee_2016-05-27:18:33:17 (~ 134.9 MB):
 138MB 0:00:01 [69.8MB/s] [==================================>] 102%            
^C

This time Syncoid began replicating incremental demo@5-demo@9, meaning we'd successfully gotten demo@3,4,5 on the second run. How far'd we get this time? The progress bar claimed 102%, but the progress bar is obviously not perfectly accurate, and I did kill it inside a single second of runtime. Let's see:

root@banshee:~/git/sanoid# syncoid banshee/demo root@target:target/demo
Sending incremental banshee/demo@syncoid_banshee_2016-05-27:18:33:17 ... syncoid_banshee_2016-05-27:18:33:22 (~ 4 KB):
1.52kB 0:00:00 [27.7kB/s] [============>                       ] 38%            

Yeah, we actually had finished, as you can see by the fact that we're replicating from one demo@syncoid snapshot to another demo@syncoid snapshot in this final run.
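Since every fully replicated snapshot survives an interruption, a blunt but effective workaround for a flaky link is simply re-running Syncoid until it exits cleanly. A hypothetical retry wrapper, assuming syncoid returns nonzero on an interrupted transfer (the `sync_cmd` stub below stands in for the real syncoid invocation; here it "fails" twice to exercise the loop):

```shell
# Hypothetical retry wrapper. Each re-run resumes from the newest
# fully-replicated snapshot, so repeated attempts always make progress.
attempt=0
# Stub standing in for: syncoid banshee/demo root@target:target/demo
sync_cmd() { attempt=$((attempt + 1)); [ "$attempt" -ge 3 ]; }

until sync_cmd; do
  echo "transfer interrupted; retrying (attempt $attempt)"
  sleep 0   # in real use, back off here before retrying
done
echo "replication finished after $attempt attempt(s)"
```

In real use you would replace the stub with the actual syncoid command and a sensible back-off delay.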

Hope this helped, instead of just confusing things worse. =)

