wiserain / docker-rclone

Docker image for rclone mount with scripts

Home Page: https://hub.docker.com/r/wiserain/rclone

Dockerfile 26.76% Shell 73.24%

docker-rclone's Issues

Container keeps restarting before I can finish config

I'm trying to make a setup like the one in this post, and I started the config, but I keep getting kicked out of the terminal because the container keeps restarting.

the docker log:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.

[s6-init] ensuring user provided files have correct perms...exited 0.

[fix-attrs.d] applying ownership & permissions fixes...

[fix-attrs.d] done.

[cont-init.d] executing container initialization scripts...

[cont-init.d] 10-adduser: executing... 


GID/UID

-------------------------------------

User uid:    1000

User gid:    1000

-------------------------------------


[cont-init.d] 10-adduser: exited 0.

[cont-init.d] 40-config: executing... 

[cont-init.d] 40-config: exited 0.

[cont-init.d] 50-rclone: executing... 

*** checking rclone.conf

Waiting for rclone configuration file in /config/rclone.conf. Retrying in 30s ...

RUN: docker exec -it <DOCKER_CONTAINER> rclone_setup

Waiting for rclone configuration file in /config/rclone.conf. Retrying in 30s ...

RUN: docker exec -it <DOCKER_CONTAINER> rclone_setup

It loops through the last two lines over and over.
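
For reference, the command the log asks for can be run from a second shell while the container sits in that wait loop. A minimal sketch, using the container name from the compose file below; the second variant assumes rclone is also installed on the host:

# Run the interactive setup while the container waits for /config/rclone.conf:
docker exec -it rclone rclone_setup

# Or create the config on the host beforehand, so the file already exists
# on first start:
rclone config --config "$ROOT/config/rclone/rclone.conf"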

Here is my setup in the docker-compose file:

  rclone:
    container_name: rclone
    image: wiserain/rclone
    restart: always
    network_mode: "bridge"
    privileged: true
    environment:
      - PUID=$PUID
      - PGID=$PGID
      - TZ=$TZ
#      - RCLONE_REMOTE_PATH=gmedia-crypt:/cloud
    volumes:
      - $ROOT/config/rclone:/config
      - $ROOT/config/rclone/logs:/log
      - $ROOT/cache/rclone:/cache
#      - $ROOT/config/rclone/mounts/gmedia-crypt:/cloud:shared
      - $ROOT/config/rclone/mounts/gmedia-local:/local:shared     # Optional: a local folder to merge via mergerfs/unionfs
      - $ROOT/media:/data:shared
    devices:
      - /dev/fuse
    cap_add:
      - MKNOD
      - SYS_ADMIN

Is it a bug in the docker image or in my setup?

Files in rclone mount not showing on host

I've never experienced an issue like this before. I've been through hours of troubleshooting with users and mount points and haven't worked out the cause yet, so I thought I'd post an issue in case someone else is experiencing the same thing.

All of the files that rclone is syncing from my Google Drive are visible in the container folders (/cloud, /data). However, they're not coming through to the host folders where I've bound them (/mergerfs:/data, for example).

I've got a user on my host called abc with GID and UID 911 to match the container, just in case it was a permissions issue.

If I manually create a file in any of the container folders, it's immediately visible on the host.
Only files created by rclone or the mergerfs/unionfs aren't coming through to the host, and are therefore not visible to other containers either.
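
One thing worth checking here is mount propagation on the host: FUSE mounts created inside a container only appear on the host side when the bind mount is shared. A minimal check sketch, assuming the /mergerfs:/data bind quoted above:

# On the host, the PROPAGATION column should read "shared" (or "rshared"):
findmnt -T /mergerfs -o TARGET,PROPAGATION

# Inside the container, confirm the fuse mounts actually exist:
docker exec <DOCKER_CONTAINER> mount | grep fuse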

Improvements for the readme

This is really neat. It's a bit janky having to call the rclone move from a cron job on the host, so this is a much nicer solution.
But I've found that a few things in the readme appear to have changed over time and no longer match.

This line:
Along with the rclone folder, you can specify one local directory to be mergerfs with by POOLING_FS=mergerfs.

It should state:
"...you have to specify one local directory"
Otherwise the script doesn't appear to call mergerfs; both variables have to have values.

And this seems to be incorrect too:
RCLONE_REMOTE_PATH=remote_name:path/to/mount

Should be:
RCLONE_REMOTE_PATH=remote_name:
The path to mount inside the container is hardcoded as /cloud; the host path can be bound in the docker-compose file as - /path/to/mount:/cloud

And these two:
COPY_LOCAL_SCHEDULE
MOVE_LOCAL_SCHEDULE

Should be:
COPY_LOCAL_CRON
MOVE_LOCAL_CRON

I also found that MOVE_LOCAL_AFTER_DAYS has to be set to a number for the move script to work. I don't understand the code well, but I think MOVE_LOCAL_EXCEEDS_GB has a null-value check while MOVE_LOCAL_AFTER_DAYS doesn't, so it errors. Alternatively, it could default to a sensible number like 5, the same way PUID and PGID do.
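
Pulling these corrections together: a hypothetical docker run sketch. Only the variable names (RCLONE_REMOTE_PATH, POOLING_FS, COPY_LOCAL_CRON, MOVE_LOCAL_CRON, MOVE_LOCAL_AFTER_DAYS) and the /cloud mount point come from this issue; every value and host path is a placeholder:

docker run -d --name rclone \
    -e PUID=1000 -e PGID=1000 \
    -e RCLONE_REMOTE_PATH=remote_name: \
    -e POOLING_FS=mergerfs \
    -e COPY_LOCAL_CRON='0 * * * *' \
    -e MOVE_LOCAL_CRON='30 5 * * *' \
    -e MOVE_LOCAL_AFTER_DAYS=5 \
    -v /path/to/config:/config \
    -v /path/to/local:/local \
    -v /path/to/mount:/cloud \
    --device /dev/fuse --cap-add SYS_ADMIN \
    wiserain/rclone

As for the missing null check, a shell default along the lines of MOVE_LOCAL_AFTER_DAYS=${MOVE_LOCAL_AFTER_DAYS:-5} would mirror how the mergerfs command further down this page defaults PUID and PGID to 911.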

Purpose of --rc?

Why is the rclone remote control API turned on, and what is it used for? I understand the ports are exposed to the container itself only.
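
For context, one concrete use of the remote control endpoint is forcing a VFS cache refresh on a live mount without remounting. A hedged example; the 127.0.0.1:5574 address is taken from the RCLONE_REFRESH_ON_MOUNT issue further down this page, not from the image's documentation:

# Ask the running mount to re-read the remote's directory listings:
docker exec <DOCKER_CONTAINER> \
    rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5574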

mergerfs ignoring defined PUID & PGID

When using the mergerfs option, it ignores user-provided PUID & PGID environment variables.

Using the following:

environment:
  - PUID=1000
  - PGID=1000
volumes:
  - ${USERDIR}/test-rclone:/cloud:shared
  - ${USERDIR}/test-local:/local:shared
  - ${USERDIR}/test-merged:/data:shared

This creates the following folder permissions:

drwxr-xr-x  2 root     root     4.0K May  6 15:14 test-local
drwxr-xr-x  2 user     user     4.0K May  6 15:14 test-merged
drwxr-xr-x  2 user     user     4.0K May  6 15:14 test-rclone

My current workaround is to create the local folder with user permissions before starting the container, but this is not ideal for portability reasons.
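
Spelled out, that workaround is one host-side step before bringing the container up; a sketch using the paths and IDs from the snippet above:

# Pre-create the local branch with the desired owner so mergerfs
# does not end up creating it as root:
mkdir -p "${USERDIR}/test-local"
chown 1000:1000 "${USERDIR}/test-local"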

RCLONE_REFRESH_ON_MOUNT

Any chance you could provide some examples for this?
I basically want to refresh the file list at startup, similar to what the command below does:
/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5574 _async=true
but I have been unable to work out how to use the ENV variable above to trigger a command like that for the whole mount.

I have tried / and . but both result in:
2022/05/05 12:13:13 REFRES: >>> refreshing "."
2022/05/05 12:13:16 REFRES: ".": file does not exist

Since rclone does not keep the file list over restarts, this would let it use fast-list to quickly cache the remote filesystem structure and reduce the overall number of calls needed when an app then scans the filesystem. This is useful for remote filesystems that deliver changes to the filesystem structure as updates.
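
As a stopgap, the refresh can be fired manually once the mount is up and the async job polled for completion. A sketch; the address comes from the command above, and jobid=1 is illustrative (the real id is returned in the _async response):

# Kick off a recursive refresh of the whole mount:
rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5574 _async=true

# Poll the returned job id until it reports finished:
rclone rc job/status jobid=1 --rc-addr 127.0.0.1:5574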

mergerfs

mergerfs \
    -o uid=${PUID:-911},gid=${PGID:-911},umask=022,allow_other \
    -o ${MFS_USER_OPTS} \
    /local=RW:/cloud=NC /data

If that is executed, it means mergerfs looks for /local and /cloud in the container.
But the compose file I copied from the GitHub readme says:

- /volume1/mediaserver/remote:/data:shared
- /volume1/mediaserver/local:/local

That is, /data:shared and /local are defined, but nothing is mapped to /cloud.

Checking inside the container, /cloud is indeed empty and the cloud files are in /data.

I don't get it?

Is that correct?
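
A hedged reading of the path layout, inferred only from the mergerfs command and compose lines quoted in this issue (not verified against the image's init scripts):

# /local - local read-write branch (host: /volume1/mediaserver/local)
# /cloud - rclone mount, no-create branch
# /data  - merged pool (host: /volume1/mediaserver/remote)
# If /cloud is empty and the remote's files appear directly under /data,
# pooling may not have been active, with rclone mounted straight to /data.
# Check what is actually mounted where inside the container:
docker exec <DOCKER_CONTAINER> mount | grep -E '/cloud|/data'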
