mutagen-io / mutagen

Fast file synchronization and network forwarding for remote development

Home Page: https://mutagen.io

License: Other

Go 98.62% Shell 0.94% Assembly 0.33% Dockerfile 0.11%

mutagen's People

Contributors

andrewmed, dependabot[bot], dj95, djs55, leoluk, rfay, syrusakbary, xenoscopic


mutagen's Issues

ControlMaster socket names can exceed system limits for very long host names

On macOS, I tried creating a new sync session and got this message:

unix_listener: "/Users/username11/.mutagen/connections/username11_myreallyextraalonghostname.example.net:_22.PKkx13g7IH0gvjis" too long for Unix domain socket

The length of the socket path above was 109 characters. If I use the IPv4 address rather than the "myreallylonghostname..." FQDN, the sync session is created as expected.

After a little experimentation, I think the limit is 108.
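For reference, a minimal Go sketch (not Mutagen's actual code; the exact limits are platform-dependent, with sun_path roughly 104 bytes on macOS and 108 on Linux) of the kind of guard that would avoid this, falling back to a short hashed socket name when the host-derived name is too long:

package main

import (
	"crypto/sha1"
	"fmt"
	"path/filepath"
)

// maxSocketPathLength is a conservative limit for Unix domain socket paths.
const maxSocketPathLength = 103

// socketPath returns a ControlMaster-style socket path, falling back to a
// short hashed name when the host-derived name would exceed the limit.
func socketPath(dir, hostIdentifier string) string {
	candidate := filepath.Join(dir, hostIdentifier)
	if len(candidate) <= maxSocketPathLength {
		return candidate
	}
	// Hash the identifier to produce a short, stable name instead.
	digest := sha1.Sum([]byte(hostIdentifier))
	return filepath.Join(dir, fmt.Sprintf("%x", digest[:8]))
}

func main() {
	fmt.Println(socketPath("/Users/username11/.mutagen/connections",
		"username11_myreallyextraalonghostname.example.net:_22.PKkx13g7IH0gvjis"))
}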

Clarification on interaction with Git (and other VCS) repositories

I really like that a stated goal of the project is "remote development", as that is exactly the problem I'm currently trying to solve. My goal is for our devs (including me) to continue to be able to use our preferred local editors on macOS, like Sublime Text and VS Code, but move our Docker-based development stack off to more powerful remote VPS servers.

I've been playing with Mutagen for this use case and enjoyed immediate success with two-way sync, which was great! As I spent a little more time working, though, I hit some rough edges around synchronizing a Git-based project, and I'm curious whether you've developed any best practices or have suggestions on how best to handle this.

At this point I'm not exactly sure what series of events would trigger this, but I was frequently able to get my Git tree into weird states where a ton of files would show as modified. I think this was occurring while I was showing off how cool it was that I could modify a source file in any number of ways and have the changes immediately propagate between my laptop and EC2 instance. It also could have happened when I switched back and forth between Git branches on both my local and remote.

I can certainly gather better diagnostic information, but I was wondering if you have any general advice. For example, do you advise syncing the .git folder in a git project? I like the idea of being able to do git operations on the machines on either side of the sync but I could limit git operations to only one end if that makes things less error prone, etc. Have you run into issues with this or did I trip myself up on something else? Thanks!

Command for one shot sync and include ignored files

In some projects, I have some big ignored files (SQL dumps).

I still need them to be transferred initially on the project's first sync, or to manually trigger their transfer with a command, but I don't want them to be synced automatically like other files, for performance reasons.

In other cases, I would like to sync the .git directory before performing a build, because the application's build script grabs the version through git.

I'm thinking of a new command like this one:

mutagen sync --disable-ignore <session-id>

mutagen sync could perform a "one shot" sync, and --disable-ignore would force inclusion of ignored files.

The mutagen sync command would return only when the one-shot sync is complete (i.e. all files are transferred).

I'm still not sure it's worth ignoring those big files. Does it have a big impact on performance when those files almost never change?

Add additional conflict resolution and synchronization modes

Currently, conflicts need to be resolved manually, which is probably the best option for the majority of scenarios, but it's a big problem when using Mutagen in situations where the agent is not guaranteed to be running all the time.

One specific example of this is when using Mutagen to synchronize volumes between a Docker container and your host file system. Specifically, if a container that was previously running is stopped and code changes are made while it's stopped, restarting the container causes a conflict because the file in the container differs from the one on the host machine.

I'd like to propose a way to specify which of the mount points should be used as the canonical source when a conflict happens. For example, I'd like to be able to specify that, in case of conflict, any source file changes on the host (alpha) should take precedence over the same file inside the container (beta).

Without this, the list of conflicts (and therefore of files that don't get propagated) can grow really fast, especially in large codebases with a lot of contributions. Doing a git pull while the container was stopped would cause tens or hundreds of conflicts.

This also causes files that have been deleted from the host to reappear after the container/agent starts again, since they still exist inside the container.

Mutagen docker stack command

Hey @havoc-io I've created this:
https://github.com/matthew-gill/mutagen-stack

It creates a command to handle Mutagen stacks, including cleanup of both docker-compose and Mutagen sessions.

It's designed to only work with one mutagen session, but should be relatively easy to extend.

This is just for your perusal; I don't claim to be an expert in bash, etc., but I've been using this for a few days and have found it useful...

Error not helpful when Mutagen daemon is offline

mutagen list
error: unable to invoke session listing: unable to open connection to server: unable to read pipe name: open D:\Users\FRUP67305\.mutagen\daemon\daemon.pipe: Le fichier spécifié est introuvable.

(The trailing Windows error translates to "The specified file cannot be found.")

Handle directory symlinks appropriately on Windows

There is a fix coming in Go 1.9 that causes the os package to no longer set the os.ModeDir bit for symlinks on Windows that point to directories (in addition to the os.ModeSymlink bit). At the moment, our scanning routines only check for the os.ModeDir bit and don't ensure that the os.ModeSymlink bit isn't set. For the current algorithm, this needs to change, and we need to explicitly ignore entries that are symlinks. Once we support symlinks in the future, we can change this, but we'll need to be cognizant of this bug in Go 1.8 and always check for symlinks first.
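A rough sketch (assuming the scan boils down to inspecting os.FileInfo modes, which is not necessarily how the real scanner is structured) of checking the symlink bit before the directory bit:

package scan

import "os"

// classify decides how a scanned entry should be treated. The symlink bit is
// checked first, because on Windows with Go 1.8 a symlink that points to a
// directory can carry both os.ModeSymlink and os.ModeDir.
func classify(info os.FileInfo) string {
	mode := info.Mode()
	switch {
	case mode&os.ModeSymlink != 0:
		return "symlink" // explicitly ignored by the current algorithm
	case mode.IsDir():
		return "directory"
	case mode.IsRegular():
		return "file"
	default:
		return "other" // devices, pipes, sockets, etc.
	}
}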

Consider supporting a watchman-based watch backend

I already have a watchman instance set up on my machine for the working directories I would be interested in syncing. It would be nice to avoid spending extra handles etc. by sharing the watch backend with watchman.

It could also be a nice option on remote machines to use watchman if it is already found on the system, again to avoid spending extra resources, although I find that this will be less common for me.

Unable to identify SSH executable (Windows)

I'm having a hard time executing this on Windows.

> C:\Apps\mutagen.exe create C:\Apps\project [email protected]:~/Documents/project
Connecting to agent (POSIX)...
Error: create failed: unable to connect to beta: unable to connect to SSH remote: unable to connect to agent: unable to create SSH command: unable to identify SSH executable: unable to locate command

The SSH executable is in the path. I'm not much of a Windows guy so maybe I'm missing something obvious.

Mutagen doesn't work if destination is a mounted volume

It seems that mutagen doesn't allow one to sync files between the host and a volume mounted through sshfs, in my case.

I can't sync files directly to the remote host because of paranoid security settings on it, which only allow access via one-time passwords. So developers use sshfs (or Samba) to mount remote projects.

When I instruct the daemon to sync my local project with the mounted destination, mutagen seems to only sync the project's directory hierarchy, without any files.

I can provide additional details or daemon logs if you point me to the place where I can get them.

Fish shell (and possibly others) don't support && command chaining

Laptop and desktop both running Arch Linux with openssh installed, both machines are able to SSH into the other.

I'm trying to create a session from the desktop to the laptop and it's failing during the connect phase.

error: unable to handle prompt requests: unable to receive authentication challenge: remote error: unable to connect to beta: unable to connect to SSH remote: unable to install agent: unable to invoke agent installation: exit status 127

First I was thinking this was maybe a custom port issue, so I tried disabling that and restarting the daemon on the laptop (I confirmed I was able to SSH into the laptop on port 22 after this). I deleted the connection profile data from ~/.mutagen/connections and attempted to connect again, but received the same error.

I tried enabling password authentication on the ssh daemon and restarted it, cleared the connection profile again and tried connecting, same error (no password prompt).

I tried disabling pub key authentication on the daemon, restarted, cleared the connection profile, tried connecting, same error (still no password prompt).

Finally, I tried moving ~/.ssh/authorized_keys on the laptop to ~/.ssh/authorized_keys.bak and restarted the daemon again (confirmed I could connect to the laptop from the desktop using a password), cleared the connection profile again, and tried creating a session from the desktop. This time it finally did prompt for a password, but upon entering it the same error as above was received.

Edit:

After posting this I went to shut down my laptop, and it's showing 5 remote sessions running under my user (I am not currently connected via normal SSH).

Support Symbolic Links on Windows

Right now we support synchronization of symbolic links if both endpoints are POSIX systems, but symbolic links are not propagated to or from Windows machines. There are a variety of reasons behind this:

  1. Symbolic links on Windows behave significantly differently than those on POSIX systems. Some information on the exact differences can be found here. The typed nature of symbolic links on Windows will complicate things.
  2. Symbolic links aren't supported before Windows Vista (though this isn't a huge issue since we're dropping support for Windows XP soon anyway).
  3. Symbolic links require administrator permissions to create (or they require that the SeCreateSymbolicLinkPrivilege be given to the user, but in either case they don't work out of the box). As of Windows 10, symbolic links can be created without these elevated permissions (more information available here), but this requires that Developer Mode be enabled (which, again, isn't the case by default), and it requires that the SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE flag be provided to CreateSymbolicLinkW (a change which isn't going to land until Go 1.11).
  4. Perhaps most importantly, symbolic link targets can't be round-tripped to disk on Windows systems. The Go os package (necessarily) modifies target paths to adhere to the requirements of CreateSymbolicLinkW, which causes the replacement of / with \, potential conversion to a UNC path, etc. The complete ritual can be found here. We need to be able to generate a reasonable equivalence check between Windows and POSIX symbolic links to ensure that they haven't changed, which is impossible to do in the general case, and will be very hard to do even if we restrict ourselves to supporting a subset of symbolic link targets.

Given the complexities of symbolic links on Windows, the permissions required to create them, their relatively small user base, and the current lack of a safe and sound mechanism for checking their equality, I'm inclined to punt on this until further research and design can be performed.
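One possible ingredient, sketched here purely as an illustration (the function and its rules are assumptions, not a settled design): normalize link targets to a portable slash-separated form before comparison, and refuse to propagate anything that can't be round-tripped.

package links

import (
	"errors"
	"strings"
)

// normalizeSymlinkTarget converts a symbolic link target into a portable,
// comparable form. Targets that can't be represented identically on both
// POSIX and Windows endpoints are rejected rather than silently rewritten.
func normalizeSymlinkTarget(target string) (string, error) {
	// Reject UNC targets and anything containing a colon (drive-absolute
	// paths on Windows, or characters Windows filenames can't hold).
	if strings.HasPrefix(target, `\\`) || strings.Contains(target, ":") {
		return "", errors.New("symlink target is not portable")
	}
	// Compare in slash-separated form.
	normalized := strings.ReplaceAll(target, `\`, "/")
	if strings.HasPrefix(normalized, "/") {
		return "", errors.New("absolute symlink targets are not portable")
	}
	return normalized, nil
}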

Any comments on potential designs are welcome.

Scans for very large directories create perceptible synchronization latency

We (currently 3 guys at the office) are now using Mutagen 0.4.1 to sync files from Windows (alpha) to a Linux VM (beta), and it behaves nicely. Applications under development are running more than 3 times faster than with an NFS share (winnfsd), and installation of dependencies is much faster (composer, npm, yarn, ...).

I still don't know which change detection mode is actually in effect, but I assume it's based on file system notifications, as the directory under watch is not in the root of the drive.

The only issue is that when a change is made on the Windows side during development, we still have to wait a few seconds before the file is actually synced on the Linux VM.

For our use case, Mutagen would be perfect if changes to a single file were transferred almost instantly after the file system notification, or at least within a second. What's the expected behavior and latency for a change to a single file? Is there an internal temporal buffer before a sync is run? If such a buffer exists, could it be made configurable?
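For context, and without claiming this is how Mutagen's internals work: watch-based tools usually coalesce bursts of filesystem events before kicking off a cycle, and that quiet-period window is typically the source of this kind of delay. A generic Go sketch of the pattern:

package watch

import "time"

// coalesce batches raw filesystem events and invokes sync once events have
// been quiet for the given window. A shorter window lowers latency but
// triggers more synchronization cycles.
func coalesce(events <-chan string, window time.Duration, sync func()) {
	var deadline <-chan time.Time
	for {
		select {
		case _, ok := <-events:
			if !ok {
				return
			}
			// Restart the quiet-period timer on every event.
			deadline = time.After(window)
		case <-deadline:
			deadline = nil
			sync()
		}
	}
}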

.mutagenignore support

Is there a way to ignore files based on a file in the root of the synchronization directory? Something like .mutagenignore?

High agent CPU usage in Docker container via HyperKit

I had some unexplained long periods of high CPU usage when running an agent inside a docker container. The CPU usage on the host was fine but the CPU usage of the agent process was very high during a period where no file change was happening.

Is there a good way to debug those issues? I'm happy to provide more details.

Windows file watches can't be re-established.

I've had to temporarily disable file monitoring on Windows (and fall back to polling at a shorter interval) because the github.com/rjeczalik/notify package can't seem to re-establish watches on the same path, even if they are stopped cleanly. It reports that they are correctly re-established, but no events ever arrive. This is summarized in rjeczalik/notify#69, but that issue was closed as a caveat, so we're probably going to have to just write a simple ReadDirectoryChangesW wrapper ourselves. This isn't a big deal though, and it will certainly benefit us in terms of binary size and complexity.

Evaluate the potential of UPX

The agent binaries are still about 5x bigger than I'd like them to be. UPX can squash them down, but it comes with some caveats (e.g. the entire binary being decompressed into memory or, in some cases, a temporary directory). We should determine whether the benefits outweigh the costs. Hopefully Go will continue on its march towards smaller binaries (golang/go#6853) via compiler and linker optimizations, though I don't think the Go toolchain will ever reach anything close to UPX's potential.
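For anyone who wants to quantify the trade-off locally, an informal experiment is simply to run UPX against a built agent and compare sizes, e.g. upx --best --lzma mutagen-agent, while keeping the in-memory decompression caveat above in mind.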

Sync inconsistency

There seems to be a sync inconsistency for certain workflows. I can reproduce the problem by installing an npm package (react) on an OSX machine, then deleting the folder I installed it into on the target machine. The folder on the source machine is not completely deleted. There's a leftover symlink.

Reproduction:

Alpha (OSX)

mkdir test
cd test
npm install react

Beta (Linux)

rm -rf test

Alpha (OSX):

ls -alR
total 0
drwxr-xr-x   3 tristanz  staff  102 Jan  6 22:12 .
drwxr-xr-x  12 tristanz  staff  408 Jan  6 22:12 ..
drwxr-xr-x   3 tristanz  staff  102 Jan  6 22:13 node_modules

./node_modules:
total 0
drwxr-xr-x  3 tristanz  staff  102 Jan  6 22:13 .
drwxr-xr-x  3 tristanz  staff  102 Jan  6 22:12 ..
drwxr-xr-x  3 tristanz  staff  102 Jan  6 22:12 .bin

./node_modules/.bin:
total 8
drwxr-xr-x  3 tristanz  staff  102 Jan  6 22:12 .
drwxr-xr-x  3 tristanz  staff  102 Jan  6 22:13 ..
lrwxr-xr-x  1 tristanz  staff   22 Jan  6 22:12 loose-envify -> ../loose-envify/cli.js

Expand testing

Testing on Mutagen has so far been manual, and the assortment of tests even in core packages leaves a lot to be desired. We also need full end-to-end tests.

Create session with username containing @

I wish to create a session on an SSH server that's behind a Wallix frontend (a kind of proxy for SSH servers), where the username contains @ to point to the effective SSH server behind Wallix.

> mutagen create C:\devel\mutagen-sessions\oison-for.****-rec.fr root@VMRD10:alvergnat@wab.****.fr:/applications/test
Connecting to agent (POSIX)...
Error: create failed: unable to connect to beta: unable to connect to endpoint: agent handshake failed with error output:
ssh: Could not resolve hostname vmrd10: Name or service not known

It doesn't work, because it seems to use root as username, and VMRD10:alvergnat@wab.****.fr as host, but it should use root@VMRD10:alvergnat as username and wab.****.fr as host.

When using the ssh client or FileZilla SFTP, it works properly with the same string.
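For reference, a minimal sketch (a hypothetical helper, not Mutagen's actual URL parser) of splitting the user/host portion at the last @ so that usernames containing @ survive, assuming the path component has already been separated off:

package remote

import "strings"

// splitUserHost splits an SCP-style "user@host" prefix at the *last* @, so
// that a username like "root@VMRD10:alvergnat" with host "wab.example.fr"
// parses the way ssh and SFTP clients treat the same string.
func splitUserHost(spec string) (user, host string) {
	if i := strings.LastIndex(spec, "@"); i >= 0 {
		return spec[:i], spec[i+1:]
	}
	return "", spec
}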

Docstrings need to be added to all code

Since none of our packages are re-used outside of Mutagen, we weren't as rigorous as we should have been about documenting all exported package members, but we should do this now.

Unable to relocate staged file across filesystem boundaries

When the home directory and the sync target directory are not on the same filesystem, I get the following error:

logs/.gitkeep: unable to create file: unable to relocate staged file: rename /home/app/.mutagen/staging/ea88434f-e8f5-43de-990f-a56e096829dd_beta/da/04c7f074f9aa5225293b4e32936e01d38b50eb90_false_da39a3ee5e6b4b0d3255bfef95601890afd80709 /data/logs/.gitkeep: invalid cross-device link
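A hedged sketch of the usual fallback for this class of error (assuming the relocation step boils down to os.Rename, which may not match the actual staging code): detect EXDEV and fall back to copying the staged file into place and removing the original.

package staging

import (
	"io"
	"os"
	"syscall"
)

// relocate moves a staged file into place. If the rename fails because the
// source and destination are on different filesystems (EXDEV), it falls back
// to copying the contents and removing the staged file.
func relocate(source, destination string) error {
	err := os.Rename(source, destination)
	if err == nil {
		return nil
	}
	linkErr, ok := err.(*os.LinkError)
	if !ok || linkErr.Err != syscall.EXDEV {
		return err
	}
	in, err := os.Open(source)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(destination, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	if _, err := io.Copy(out, in); err != nil {
		out.Close()
		return err
	}
	if err := out.Close(); err != nil {
		return err
	}
	return os.Remove(source)
}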

Permission changes not detected by FSEvents on macOS

For some reason the github.com/rjeczalik/notify package isn't detecting file permission changes on macOS. The github.com/fsnotify/fsevents package does seem to work in this regard. I think we'll need to swap out the former for the latter.

Both of these packages have their little quirks, and it's mostly the fault of FSEvents. We may want to investigate a more holistic refactoring of recursive watching.

Use in Docker

Hi,

Nice project. Will definitely try it soon.

Have you already tried to integrate or use it with Docker, as a replacement for native file sharing?

I'm thinking of something like what is done with unison and rsync via the docker-sync tools.

Various issues associated with Windows/POSIX synchronization

It seems mutagen doesn't support syncing from Windows to Linux, because chmod is not supported on Windows.

It also causes problems with symbolic links.

See the following status; Alpha is running Windows and Beta is running Linux.

Session: 4cbdd611-a26a-4a46-9623-badffbac15d6
Status: Scanning files on beta
Alpha:
        URL: C:\devel\projects\AFB-oison
        Status: Connected
        Problems:
                mapserver/mapserver.map: unable to ensure path does not exist: path exists
                oison-backend/liquibase.properties: unable to ensure path does not exist: path exists
                oison-backend/phpunit.xml: unable to ensure path does not exist: path exists
                abc: unable to create file: unable to relocate staged file: unable to set file mode: chmod C:\devel\projects\AFB-oison\abc252047704: not supported by windows
                def: unable to create file: unable to relocate staged file: unable to set file mode: chmod C:\devel\projects\AFB-oison\def766032087: not supported by windows
                docker-compose.override.yml: unable to ensure path does not exist: path exists
                .docker/.ca-certificates/GFI_Informatique_Orléans_Root_CA.crt: unable to create file: unable to relocate staged file: unable to set file mode: chmod C:\devel\projects\AFB-oison\.docker\.ca-certificates\GFI_Informatique_Orléans_Root_CA.crt941617226: not supported by windows
Beta:
        URL: [email protected]:/home/vagrant/test
        Status: Connected

"unable to ensure path does not exist: path exists" => Symbolic links
"unable to set file mode: chmod C:\devel\projects\AFB-oison\abc252047704: not supported by windows" => New file created on Beta (linux). chmod should be ignored on windows, or only set readonly if "w" flag is missing on the linux counterpart.

It also creates a bunch of files when it fails; it seems the temporary file is not removed when those failures occur.

(screenshot of the leftover temporary files omitted)

I'll be glad to contribute a pull request for this issue, but I've never written anything in Go.
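As a possible starting point, a minimal sketch (assuming the mode-setting step boils down to a plain os.Chmod, which may not match the actual transition code) of the platform guard suggested above:

package transition

import (
	"os"
	"runtime"
)

// setFileMode applies POSIX permissions where they're meaningful. On Windows,
// full mode propagation is skipped and only the owner-writable bit is
// honored, by toggling the read-only attribute via os.Chmod.
func setFileMode(path string, mode os.FileMode) error {
	if runtime.GOOS == "windows" {
		if mode&0200 == 0 {
			// Owner write bit missing: mark the file read-only.
			return os.Chmod(path, 0400)
		}
		return nil
	}
	return os.Chmod(path, mode)
}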

cmd.exe consoles don't generate newline on Ctrl-C

This is a very minor annoyance, but these consoles don't generate a newline on Ctrl-C interrupts (perhaps because of how SIGINT is implemented on Windows). Thus, when doing session monitoring, an interrupt won't cause a newline to be printed and the user has to manually hit return before being taken back to the prompt. We can probably detect our console environment and manually print this, but we'll also have to switch to manually watching for SIGINT in that case.
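A small sketch (not the actual monitor code) of catching the interrupt explicitly and emitting the missing newline on Windows consoles:

package main

import (
	"fmt"
	"os"
	"os/signal"
	"runtime"
	"time"
)

func main() {
	interrupts := make(chan os.Signal, 1)
	signal.Notify(interrupts, os.Interrupt)

	for {
		select {
		case <-interrupts:
			if runtime.GOOS == "windows" {
				// cmd.exe doesn't echo a newline on Ctrl-C, so print one
				// before returning control to the prompt.
				fmt.Println()
			}
			return
		case <-time.After(time.Second):
			// Placeholder for the status line that monitoring would redraw.
			fmt.Print("\rmonitoring...")
		}
	}
}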

Windows-side modification loses executability bit on POSIX remote

I'm running mutagen between Windows and Linux.

When I chmod +x a script file on the Linux side, it's applied properly, but if I change something in the content of the script from the Windows side, the +x flag is then lost on synchronization.

I think permissions should always be kept from the Linux side in this case.

Document Mutagen internals

I'd like to do a full write-up of Mutagen's watching, reconciliation, staging, and transition algorithms.

Some questions

Hi @havoc-io I've a few questions about the project:

  1. How does the backup system work? Is it incremental? If I make a small change to a GB-sized file, do I need to re-upload the whole file, or just the changes?
  2. Do you have plans to support Amazon S3?
  3. What is the roadmap? Are any custom features planned for the near future?

Thanks so much @havoc-io!! Hope you're doing great.

Delete file before sync completed but sync continues

Hi.
I am new at Mutagen and was playing around with some scenarios where I would like to implement it. I noticed the following:

I have a slow connection from my house to my server in Europe. I created a sync point between a local folder and one on the server. The files within weren't too big, and the sync point was created with all files synced between the two points.

I then copied a big file into the local folder. While the file was being copied, I noticed a error from the mutagen monitor, waiting for a rescan. All good.
When the local copy was done, mutagen started to sync the file across to the other side.
Mutagen monitor gave me a status of "Staging files on beta: 0% (0/1)".
No other movement. In the beginning I thought the process had hung, as the % never climbed...
Some investigation and I found, on the server, ~/.mutagen/staging/xxxxxxxxx.
There I noticed that the file was indeed being synced over. (I guess I can ask here why the process will not show this? Why do we stay at 0%?)

As I could see that the file was indeed being synced over, I wanted to stop this process (as it was just a test), so I deleted the file locally.
This made no difference to the mutagen process.

I could see on the server that the staging file was still growing in size. I could see locally mutagen monitor still sitting at 0%, still trying to sync the file across.
All new changes and new files were now not being synced over, as mutagen was still trying to sync this file that I had deleted.

As my link is slow and the file was big, this took literal hours to complete; mutagen then realized that the file had already been deleted, synced the remaining files, and settled down.

I tried this test again, with the same result. This time I did a pause/resume after a few minutes. The pause/resume actually made mutagen re-scan the local files, notice that the big file was gone, and immediately stop the staging of this file, and it was done.

As my use case would be to have mutagen run unattended (I would not want to pause/resume), it would be nice if mutagen could somehow check, or be told, from time to time, that there have been changes, and act accordingly.

I'm happy to supply more detail on this. Do let me know.

Vissie

Docker design feedback

This issue is for users to provide feedback on their experiences with integrating Mutagen's Docker support into their Docker workflows. Mutagen itself provides the minimal set of tools necessary to synchronize with Docker container filesystems, but it will be most useful when coupled with orchestration tools and other higher-level container management workflows that can automatically set up synchronization.

In order for Mutagen to provide the best API and configuration options to support these tools, users are encouraged to experiment with integrating Mutagen into their workflows and reporting any issues that they run into or ideas that they might have about how to make things easier. Everybody's setup and use case is different, so it's difficult to predict how exactly Mutagen will be used. Ideas about additional commands, flags, URL formats, environment variables, or other behavioral control mechanisms to allow for easier embedding would be helpful. The more concrete and specific the feedback, the more useful it will be. For example, feedback like "it would be helpful if the create command had a flag to do X (because Y)" is more helpful than "Make it work with Kubernetes!"

The feedback provided here will be instrumental in evolving and expanding Mutagen's Docker support.

Investigate using QEMU on Travis CI

At the moment, we can only easily test with GOOS={linux, darwin, windows} and GOARCH={386, amd64} because we don't have CI infrastructure for more exotic operating systems and architectures. In theory, though, we could use QEMU on Travis. Some brilliant/crazy person has already done this here, though, as expected, it's quite slow and nearly hits the Travis time limit. But things have gotten faster since 2013, so it's worth a look.

Daemon launches in mintty consoles don't disconnect from tty.

When the daemon is launched in the background using mutagen daemon in a mintty console, it has an event attached to the process that allows mintty (or Cygwin or someone) to track that it is still alive (even when we use CREATE_NEW_PROCESS_GROUP and DETACHED_PROCESS), and this can either prevent the mintty console from closing (it will just hang until the daemon is terminated) or from reopening properly (it will say there are no available ttys until the daemon is terminated). I think the solution here is to manually trigger this event (it's either cygtty.input.avail.0 or cygtty.slave_alive.0). There's a Windows API call for this called PulseEvent, but its documentation says it is unreliable.

In the meantime, there's a workaround: nohup mutagen daemon -r &. Or just launch the daemon from a cmd.exe console.

But anyway, if nohup can get around this problem then we probably can too, but whatever tty disconnection nohup is doing might rely on Cygwin C APIs that we can't get at from Go (or at least wouldn't want to), so we may have to hack this somehow.

An alternative, and perhaps better approach, would be simply developing a tray icon to host the daemon on Windows.

Compare file timestamps with cache timestamps as well

This is essentially the same issue as the Racy Git problem. It has a small but non-zero cross-section, and it could technically cause changes to be missed on disk during transition checks (which means those changes could potentially be lost).

What we probably ought to be doing is additionally comparing cache timestamps with entry timestamps to watch for racily clean entries and invalidating the cache entry if we find any.

Perhaps the most elegant solution for this is adding a timestamp field to the cache itself that is populated on load, but that'll require some special work in our unmarshalling code (plus, we don't reload the cache in the endpoint each time we save it - we keep it in memory).

In the meantime, racily clean entries can simply be touched to have their changes detected.
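A sketch of the racily-clean test being described (the type and field names are illustrative, not the actual cache structures): record when the cache snapshot was taken, and treat any entry whose modification time is not strictly older than that moment as suspect, forcing a re-hash on the next scan.

package cache

import "time"

// Entry is an illustrative cache entry; the real cache stores more metadata.
type Entry struct {
	ModificationTime time.Time
}

// racilyClean reports whether an entry could be "racily clean": its file was
// modified at or after the moment the cache snapshot was taken, so a matching
// timestamp no longer proves unchanged contents and the entry should be
// invalidated.
func racilyClean(entry Entry, snapshotTime time.Time) bool {
	return !entry.ModificationTime.Before(snapshotTime)
}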

Stabilize the Mutagen daemon API for third-party usage via a more portable transport

Would you consider adding an API to the mutagen daemon? Maybe with WebSocket support to get notifications from the daemon?

My idea is to build a web UI on top of this API to display synchronization session information, create/terminate/pause sessions, configure ignore lists, and so on.

Using the command line to start/stop sessions is not that easy when you are running multiple synchronization sessions.

"unable to invoke agent installation" on Raspberry Pi B+

I get the following error when attempting to create a synchronization session between a CentOS 7 computer and a Raspberry Pi B+ running Raspbian:

$ mutagen create data-dir user@hostname:~/data-dir
Installing agent...                                                             
Error: create failed: unable to connect to beta: unable to connect to SSH remote: unable to install agent: unable to invoke agent installation: exit status 255

When I try to run the mutagen-agent binary on the Raspberry Pi B+, it fails:

user@hostname:~ $ file .mutagen-agent44a72715-c57b-4c0b-8da1-50059dcc81b9 
.mutagen-agent44a72715-c57b-4c0b-8da1-50059dcc81b9: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, stripped
user@hostname:~ $ ./.mutagen-agent44a72715-c57b-4c0b-8da1-50059dcc81b9 
Illegal instruction

Processor info for the Raspberry Pi B+

user@hostname:~ $ cat /proc/cpuinfo 
processor	: 0
model name	: ARMv6-compatible processor rev 7 (v6l)
BogoMIPS	: 697.95
Features	: half thumb fastmult vfp edsp java tls 
CPU implementer	: 0x41
CPU architecture: 7
CPU variant	: 0x0
CPU part	: 0xb76
CPU revision	: 7

Hardware	: BCM2835
Revision	: 0010

Is it possible to compile and manually install the mutagen-agent on the B+?
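For what it's worth, the B+ is an ARMv6 device, so an agent built for ARMv7 will die with an illegal instruction. Building the agent yourself with something like GOOS=linux GOARCH=arm GOARM=6 go build ./cmd/mutagen-agent (the package path is a guess; adjust it to wherever the agent's main package lives in your checkout) should produce an ARMv6-compatible binary.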

Docker issue - Error: unable to connect to daemon: context deadline exceeded

I am experimenting with mutagen and docker.

I have this container running:

> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
0814c5b056b4        alpine              "sleep 999d"        10 minutes ago      Up 10 minutes                           nucleus_sync_1

With ./mutagen create ~/delmoi/ docker://nucleus_sync_1/src I get:

Error: unable to connect to daemon: context deadline exceeded

I am using Docker version 18.06.1-ce, build e68fc7a215 on Linux.

Am I doing something wrong? If not, how can I help debug this?

Support for overlong filenames

If a file name is overly long (sometimes unavoidable when working in collaborations with loose or undefined standards), it is not possible to synchronize this file.

When calling mutagen list, the following error message is presented:

Problems:
                <super long path + filename > : file name too long

I know that the problem is a symptom of bad file name conventions, but is there a way to also support these overlong names?

Docker: Improve documentation regarding home directory permissions

I can start a mutagen session using root, but not a different user, for example www-data.

Please can you let me know if I have missed some prerequisite on my docker container.

Situation:
$ mutagen create app/www docker://www-data@docker-dev-environment_php_1/var/www/

Expected outcome:
mutagen session starts

Actual outcome:
Error: create failed: unable to connect to beta: unable to connect to endpoint: unable to install agent: unable to invoke agent installation: exit status 1

Mutagen sets permissions to 600 both ways

I've been testing with Mutagen recently, and have noticed that every time I edit a file locally, when Mutagen puts the file up to the remote system, it sets the permissions on the file to 600. If I edit the file on the remote system, Mutagen pulls the file down to my local machine and sets the permissions on the file to 600.

If I force the permissions on the file in both locations to something like 664, for example, then edit the file, mutagen again changes the permissions of the file back to 600 (at the location that the file was synced to).

The remote machine is a Linux system, the local machine is a Mac, if that helps at all.

How can I prevent mutagen from modifying a file's permissions?

Thanks!

Add a --wait flag to the flush command to wait for a synchronization cycle to complete

Often when I am working with synchronized directories, I have a local copy which I am editing, and I run commands on a remote machine. Usually, when I want to run a command on that machine, I expect it to have an up-to-date copy of the local file structure.

Unfortunately, with high latency connections or larger change-sets, it can sometimes happen that my build command makes it to the remote before the changes do, meaning that I get odd errors due to running a build on an out-of-date tree.

It would be nice to support a mutagen flush command which would print a message (like "waiting on active syncs") and block if an ongoing sync is in progress, only exiting when active syncs are complete. With a hybrid/polling-based system it may also make sense to force a poll when invoking mutagen flush, to ensure the remote is up to date. I could then call that subcommand in my build wrapper before sending the build command.
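A hypothetical usage example, assuming the proposed command and a session identifier: a build wrapper could run mutagen flush <session-id> && ssh build-host 'make' so that the build only starts once the remote tree is current.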

Extend file permission propagation modes

I'm using mutagen to sync from Windows to Linux, and I would like created files/directories to be 666/777 instead of 600/700. Would you consider adding this feature?
