docker-rclone

Home Page: https://hub.docker.com/r/wiserain/rclone

Docker image for rclone mount with

  • Ubuntu 24.04
  • pooling filesystem (a choice of mergerfs or unionfs)
  • some useful scripts

Usage

version: '3'

services:
  rclone:
    container_name: rclone
    image: wiserain/rclone                      # or ghcr.io/wiserain/rclone
    restart: always
    network_mode: "bridge"
    volumes:
      - ${DOCKER_ROOT}/rclone/config:/config
      - ${DOCKER_ROOT}/rclone/log:/log
      - ${DOCKER_ROOT}/rclone/cache:/cache
      - /your/mounting/point:/data:shared
      - /local/dir/to/be/merged/with:/local     # Optional: if you have a folder to be mergerfs/unionfs with
    devices:
      - /dev/fuse
    cap_add:
      - SYS_ADMIN
    security_opt:
      - apparmor:unconfined
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=Asia/Seoul
      - RCLONE_REMOTE_PATH=remote_name:path/to/mount

Equivalently,

docker run -d \
    --name=rclone \
    --cap-add SYS_ADMIN \
    --device /dev/fuse \
    --security-opt apparmor=unconfined \
    -v ${DOCKER_ROOT}/rclone/config:/config \
    -v ${DOCKER_ROOT}/rclone/log:/log \
    -v ${DOCKER_ROOT}/rclone/cache:/cache \
    -v /your/mounting/point:/data:shared \
    -v /local/dir/to/be/merged/with:/local \
    -e PUID=${PUID} \
    -e PGID=${PGID} \
    -e TZ=Asia/Seoul \
    -e RCLONE_REMOTE_PATH=remote_name:path/to/mount \
    wiserain/rclone

First, you need to prepare an rclone configuration file at /config/rclone.conf. You can do this manually (copy a config file there yourself) or by running the built-in script below:

docker-compose exec <service_name> rclone_setup
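Alternatively, a minimal sketch of the manual route, assuming your host keeps its config in rclone's default location:

# copy an existing rclone config into the host directory mounted at /config
cp ~/.config/rclone/rclone.conf ${DOCKER_ROOT}/rclone/config/rclone.conf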

Then start the container with the environment variable RCLONE_REMOTE_PATH set to the rclone remote path you want to mount. During initialization on every container start, it checks 1) the existence of rclone.conf and 2) that RCLONE_REMOTE_PATH actually exists in rclone.conf. If there is any problem, check the container log with

docker logs <container name or sha1, e.g. rclone>

rclone mount

Here is the internal command for rclone mount.

rclone mount ${RCLONE_REMOTE_PATH} ${rclone_mountpoint} \
    --uid=${PUID:-911} \
    --gid=${PGID:-911} \
    --cache-dir=/cache \
    --use-mmap \
    --allow-other \
    --umask=002 \
    --rc \
    --rc-no-auth \
    --rc-addr=:5574 \
    ${RCLONE_MOUNT_USER_OPTS}

Please note that only the variables written in capital letters are configurable via environment variables. Also, be aware that the remote control server (rc) is enabled by default.
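Since rc is on, you can query the mount's remote control API from inside the container. For example, a sketch using rclone's standard rc commands and the port from the command above:

# print transfer statistics from the running mount via the rc API
docker exec rclone rclone rc core/stats --rc-addr=127.0.0.1:5574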

ENV                     Description                                                              Default
PUID / PGID             uid and gid for running the app                                          911 / 911
TZ                      timezone, required for correct timestamps in the log
RCLONE_REMOTE_PATH      rclone remote path to mount; must exist in rclone.conf
RCLONE_CONFIG           path to rclone.conf                                                      /config/rclone.conf
RCLONE_LOG_LEVEL        log level for the rclone runtime                                         NOTICE
RCLONE_LOG_FILE         redirect logging to a file
RCLONE_MOUNT_USER_OPTS  additional arguments appended to the basic options in the command above
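For instance, to enable rclone's VFS file caching on the mount, you could append standard rclone flags through RCLONE_MOUNT_USER_OPTS (a sketch; the values are illustrative, not defaults of this image):

environment:
  - RCLONE_MOUNT_USER_OPTS=--vfs-cache-mode=full --vfs-cache-max-size=10G --dir-cache-time=24h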

rclone serve

You can serve a folder tree mounted by rclone mount using the RCLONE_SERVE_MODE environment variable. This auxiliary service allows you to take advantage of cached contents without having to redefine the --vfs-* flags.

Here is the internal command for rclone serve:

rclone serve ${RCLONE_SERVE_MODE} ${rclone_mountpoint} \
    ${RCLONE_SERVE_ARGS:---use-mmap --addr=:5678}

where the related environment variables are

ENV                Description                                           Default
RCLONE_SERVE_MODE  one of dlna, docker, ftp, http, restic, sftp, webdav
RCLONE_SERVE_ARGS  arguments for rclone serve                            --use-mmap --addr=:5678
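For example, a sketch serving the mounted tree over WebDAV, read-only (both flags are standard rclone serve options; the address matches the image default):

environment:
  - RCLONE_SERVE_MODE=webdav
  - RCLONE_SERVE_ARGS=--use-mmap --addr=:5678 --read-only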

mergerfs or unionfs (optional)

Along with the rclone folder, you can specify one local directory to be pooled with it by setting POOLING_FS=mergerfs. Internally, it will execute the following command

mergerfs \
    -o uid=${PUID:-911},gid=${PGID:-911},umask=022,allow_other \
    -o ${MFS_USER_OPTS} \
    /local=RW:/cloud=NC /data

where the default value of MFS_USER_OPTS is

MFS_USER_OPTS="rw,use_ino,func.getattr=newest,category.action=all,category.create=ff,cache.files=auto-full,dropcacheonclose=true"

If you want unionfs instead of mergerfs, set POOLING_FS=unionfs, which will apply

unionfs \
    -o uid=${PUID:-911},gid=${PGID:-911},umask=022,allow_other \
    -o ${UFS_USER_OPTS} \
    /local=RW:/cloud=RO /data

where the default value of UFS_USER_OPTS is

UFS_USER_OPTS="cow,direct_io,nonempty,auto_cache,sync_read"
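Both option strings can be overridden through their environment variables. For example, a hedged sketch switching the mergerfs create policy to "most free space" (category.create=mfs and cache.files=partial are standard mergerfs options; whether they suit your workload is setup-dependent):

environment:
  - POOLING_FS=mergerfs
  - MFS_USER_OPTS=rw,use_ino,func.getattr=newest,category.action=all,category.create=mfs,cache.files=partial,dropcacheonclose=true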

Built-in scripts

Two scripts performing basic rclone operations, copy and move between /local and /cloud, are provided for your convenience. Since they operate from the local to the cloud directory, they are meaningful only when you mount an additional /local directory.

copy_local

You can make a copy of files in /local to /cloud by

docker exec -it <container name or sha1, e.g. rclone> copy_local

If you want to exclude a certain folder from the copy, just put an empty .nocopy file in the folder root; the script will then leave that sub-tree out of the operation.
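A quick sketch (the folder path is hypothetical):

# exclude /local/incomplete and everything under it from copy_local
touch /local/incomplete/.nocopy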

move_local

In contrast to copy_local, move_local consists of three consecutive sub-operations. First, it moves old files: if MOVE_LOCAL_AFTER_DAYS is set, files older than that many days are moved. Then, if the files in /local exceed MOVE_LOCAL_EXCEEDS_GB in total size, it moves files amounting to MOVE_LOCAL_FREEUP_GB. Finally, it moves the rest of the files in /local only if MOVE_LOCAL_ALL=true. As a cleanup, it removes empty directories in /local unless KEEP_EMPTY_DIRS=1 is set.
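For example, a sketch of a size-capped setup (the numbers are illustrative):

environment:
  - MOVE_LOCAL_AFTER_DAYS=7     # move files older than 7 days
  - MOVE_LOCAL_EXCEEDS_GB=100   # when /local grows past 100 GB in total ...
  - MOVE_LOCAL_FREEUP_GB=20     # ... move files until 20 GB has been freed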

The command and the way to exclude subfolders are the same as for copy_local, except that the marker file is .nomove in this case.

cron - disabled by default

After making sure that a single manual run of the scripts works, you can schedule them as cron jobs by setting environment variables.

ENV              Description                   Default
COPY_LOCAL_CRON  cron schedule for copy_local
MOVE_LOCAL_CRON  cron schedule for move_local
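For example (a sketch; schedules use standard five-field cron syntax):

environment:
  - COPY_LOCAL_CRON=30 */6 * * *   # run copy_local every 6 hours at :30
  - MOVE_LOCAL_CRON=0 4 * * *      # run move_local daily at 04:00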


docker-rclone's Issues

Container keeps restarting before I can finish config

I'm trying to make a setup like the one in this post, and started the config, but I keep getting kicked out of the terminal because the container keeps restarting.

The docker log:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...

GID/UID
-------------------------------------
User uid:    1000
User gid:    1000
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 40-config: executing...
[cont-init.d] 40-config: exited 0.
[cont-init.d] 50-rclone: executing...
*** checking rclone.conf
Waiting for rclone configuration file in /config/rclone.conf. Retrying in 30s ...
RUN: docker exec -it <DOCKER_CONTAINER> rclone_setup
Waiting for rclone configuration file in /config/rclone.conf. Retrying in 30s ...
RUN: docker exec -it <DOCKER_CONTAINER> rclone_setup

It loops through the last two lines over and over.

Here is my setup in the docker-compose

  rclone:
    container_name: rclone
    image: wiserain/rclone
    restart: always
    network_mode: "bridge"
    privileged: true
    environment:
      - PUID=$PUID
      - PGID=$PGID
      - TZ=$TZ
#      - RCLONE_REMOTE_PATH=gmedia-crypt:/cloud
    volumes:
      - $ROOT/config/rclone:/config
      - $ROOT/config/rclone/logs:/log
      - $ROOT/cache/rclone:/cache
#      - $ROOT/config/rclone/mounts/gmedia-crypt:/cloud:shared
      - $ROOT/config/rclone/mounts/gmedia-local:/local:shared     # Optional: if you have a folder to be mergerfs/unionfs with
      - $ROOT/media:/data:shared
    devices:
      - /dev/fuse
    cap_add:
      - MKNOD
      - SYS_ADMIN

Is it a bug in the docker image or in my setup?

mergerfs

mergerfs \
    -o uid=${PUID:-911},gid=${PGID:-911},umask=022,allow_other \
    -o ${MFS_USER_OPTS} \
    /local=RW:/cloud=NC /data

If that is executed, it means mergerfs looks for /local and /cloud inside the container, but the compose file I copied from the GitHub readme says:

- /volume1/mediaserver/remote:/data:shared
- /volume1/mediaserver/local:/local

i.e. /data:shared and /local.

Checking inside the container, /cloud is indeed empty and the cloud files are under /data.

I don't get it. Is that correct?

RCLONE_REFRESH_ON_MOUNT

Any chance you could provide some examples for this?
I basically want to refresh the file list at startup, similar to what the command below does:
/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5574 _async=true
but I have been unable to work out how to use the ENV variable above to trigger a command like that for the whole mount.

I have tried / and . but both result in
2022/05/05 12:13:13 REFRES: >>> refreshing "."
2022/05/05 12:13:16 REFRES: ".": file does not exist

Since rclone does not keep the file list over restarts, this would allow vfs/refresh to quickly cache the remote filesystem structure and reduce the overall number of calls needed when an app then scans the filesystem. This is useful for remote filesystems that deliver changes to the filesystem structure as updates.
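In the meantime, the refresh can be triggered manually from the host (a sketch; the rc port 5574 matches the image's mount command, and the container name is assumed to be rclone):

# refresh the whole mount's directory cache via the rc API
docker exec rclone rclone rc vfs/refresh recursive=true --rc-addr=127.0.0.1:5574 _async=true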

mergerfs ignoring defined PUID & PGID

When using the mergerfs option, it ignores user-provided PUID & PGID environment variables.

Using the following:

environment:
  - PUID=1000
  - PGID=1000
volumes:
  - ${USERDIR}/test-rclone:/cloud:shared
  - ${USERDIR}/test-local:/local:shared
  - ${USERDIR}/test-merged:/data:shared

Creates the following output folder permissions:

drwxr-xr-x  2 root     root     4.0K May  6 15:14 test-local
drwxr-xr-x  2 user     user     4.0K May  6 15:14 test-merged
drwxr-xr-x  2 user     user     4.0K May  6 15:14 test-rclone

My current workaround is to create the local folder with the right ownership before starting the container, but this is not ideal for portability reasons.
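A sketch of that workaround, using the paths and ids from the compose snippet above:

# pre-create the /local host folder with the intended owner before starting the container
mkdir -p ${USERDIR}/test-local
chown 1000:1000 ${USERDIR}/test-local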

Files in rclone mount not showing on host

I've never experienced an issue like this before. I've been through hours of troubleshooting with users and mount points and have not worked out the cause yet, so I thought I'd post an issue in case someone else is experiencing the same.

All of the files that rclone is syncing from my Google Drive are visible in the container folders (/cloud, /data) - However they're not being sent through to the docker host folders where I've bound them (/mergerfs:/data) for example.

I've got a user on my host called abc with GID and UID 911 to match the container, just in case it was a permissions issue.

If I manually create a file in any of the container folders, it's immediately visible on the host.
Only files created by rclone or by mergerfs/unionfs aren't coming through to the host, and they are therefore not visible to other containers either.

Purpose of --rc?

Why is the rclone remote control API turned on, and what is it used for? I understand the ports are exposed to the container itself only.

Improvements for the readme

This is really neat; it's a bit janky having to call the rclone move from a cron job on the host, so this is a much nicer solution.
But I've found that a few things in the readme appear to have changed over time and no longer match.

This line:
Along with the rclone folder, you can specify one local directory to be mergerfs with by POOLING_FS=mergerfs.

It should state:
"...you have to specify one local directory"
Otherwise the script doesn't appear to call mergerfs; both have to have values.

And this seems to be incorrect too:
RCLONE_REMOTE_PATH=remote_name:path/to/mount

Should be:
RCLONE_REMOTE_PATH=remote_name:
The /path/to/mount part is hardcoded as /cloud, which can be defined in the docker-compose as - /path/to/mount:/cloud

And these two:
COPY_LOCAL_SCHEDULE
MOVE_LOCAL_SCHEDULE

Should be:
COPY_LOCAL_CRON
MOVE_LOCAL_CRON

I also found that MOVE_LOCAL_AFTER_DAYS has to be set to a number for the move script to work. I don't understand the code well, but I think MOVE_LOCAL_EXCEEDS_GB has a null-value check while MOVE_LOCAL_AFTER_DAYS doesn't, so it errors. Or it could default to a sensible number like 5, the same way PUID and PGID do.
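A sketch of the guard this suggests, using a standard shell parameter default (hypothetical; not the image's actual code):

# fall back to 5 days when MOVE_LOCAL_AFTER_DAYS is unset or empty
: "${MOVE_LOCAL_AFTER_DAYS:=5}"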
