This repository provides a configuration set for installing a fresh server for Docker hosting. It’s specialized for my personal usage, but if it fits your needs, feel free to use it and give your feedback.
- Base path
- Base installation
- Some of the docker images I use in this environment
- Using docker-compose
- Using Traefik v2.x as main front-end
- Back-up containers
- Clean-up FTP backups
- Rolling backups
- Using custom Docker images for Roadiz
- Rotating logs
## Base path

All scripts and configuration files are written to run from the `~/docker-server-env` folder.
Please adapt them if you want to clone this git repository elsewhere.
## Base installation

Skip this part if your hosting provider has already provisioned your server with the latest docker and docker-compose services.
```bash
#
# Base apps
#
sudo apt update;
sudo apt install sudo curl nano git zsh;
#
# Install oh-my-zsh
#
sh -c "$(curl -fsSL https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"
#
# If you don’t have any password (public key only),
# change your shell manually…
#
sudo chsh -s /bin/zsh
#
# Clone this repository in root’s home
#
git clone https://github.com/ambroisemaupate/docker-server-env.git ~/docker-server-env;
#
# Execute base installation.
# It will install more libs, secure postfix and pull base docker images.
#
cd ~/docker-server-env
#
# Pass the DISTRIB env var to install [ubuntu/debian]
# sudo DISTRIB="debian" bash ./install.sh if not root
#
sudo DISTRIB="debian" bash ./install.sh
```
If you are not the root user, do not forget to add your user to the docker group:

```bash
sudo usermod -aG docker myuser
```
## Some of the docker images I use in this environment

- traefik: the main front proxy. It handles Let’s Encrypt certificates too.
- solr (I limit heap size to 256m because we don’t usually use big document data, and it can be painful on a small VPS server)
- ambroisemaupate/ftp-backup: smart FTP/SFTP backup image
- ambroisemaupate/ftp-cleanup: smart FTP/SFTP backup clean-up image that deletes files older than your defined limit. It won’t delete older backup files if they are the only ones available.
- ambroisemaupate/light-ssh: for SSH access directly inside your container, with some useful commands such as `mysqldump`, `git` and `composer`.
- mariadb: for the latest php72-alpine-nginx images and all official docker images
- gitlab-ce: if you want to set up your own Gitlab instance with a dedicated registry, all running on docker
## Using docker-compose

This server environment is optimized to work with docker-compose for declaring your services.
You’ll find examples to launch front-proxy and Roadiz based containers with docker-compose
in the `compose/` folder. Just copy the sample `example-se/` folder, naming it with your website reference:

```bash
cp -a ./compose/example-se ./compose/mywebsite.tld
```

Then, use `docker-compose up -d --force-recreate` to create all your website containers in the background.
We need to use the same network with docker-compose to be able
to discover your containers from other global containers, such as the front-proxy
and your daily backups.
See https://docs.docker.com/compose/networking/#configure-the-default-network for further details. Here are the additional lines to append to your custom docker-compose applications:
```yaml
networks:
  frontproxynet:
    external: true
```
Then add `frontproxynet` to the backend containers that you want to expose to your front-proxy (traefik or nginx-proxy):
```yaml
services:
  app:
    image: nginx:latest
    networks:
      - default
      - frontproxynet
  db:
    image: mariadb:latest
    networks:
      - default
```
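Since `frontproxynet` is declared as external, docker-compose will not create it for you. A small hedged helper (assumption: `install.sh` may already create this network, so the check is idempotent):

```bash
# Ensure the external "frontproxynet" network exists before starting stacks.
# Inspect first, create only when missing, so re-running is harmless.
ensure_network() {
  command -v docker >/dev/null 2>&1 || { echo "docker not installed"; return 0; }
  docker network inspect frontproxynet >/dev/null 2>&1 \
    || docker network create frontproxynet \
    || echo "could not create network (is the docker daemon running?)"
}
ensure_network
```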
## Using Traefik v2.x as main front-end

See https://docs.traefik.io/providers/docker/
If the `install.sh` script did not set up the traefik conf automatically, do:

```bash
cp ./compose/traefik/traefik.sample.toml ./compose/traefik/traefik.toml;
cp ./compose/traefik/.env.dist ./compose/traefik/.env;
touch ./compose/traefik/acme.json;
chmod 0600 ./compose/traefik/acme.json;
```
Then you can start the traefik service with docker-compose:

```bash
cd ./compose/traefik;
docker-compose pull && docker-compose up -d --force-recreate;
```
The Traefik dashboard will be available on a dedicated domain name: edit the `./compose/traefik/.env` file to choose a monitoring host and password. We strongly encourage you to change the default user and password using `htpasswd -n`.

Warning: IP whitelisting won’t work correctly if you enabled an AAAA (IPv6) record for your domains: Traefik won’t see `X-Real-IP`. For the moment, if you need to get the correct IP address, just use IPv4.
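For reference, a backend container is exposed through Traefik v2 with Docker labels on its service. A minimal sketch — the router name, host, entrypoint and certificate resolver names below are assumptions and must match your `traefik.toml`:

```yaml
services:
  app:
    image: nginx:latest
    networks:
      - default
      - frontproxynet
    labels:
      - "traefik.enable=true"
      # Hypothetical router name and host, adapt to your site
      - "traefik.http.routers.mywebsite.rule=Host(`mywebsite.tld`)"
      - "traefik.http.routers.mywebsite.entrypoints=https"
      # Resolver name must match the one declared in traefik.toml
      - "traefik.http.routers.mywebsite.tls.certresolver=letsencrypt"
      - "traefik.http.services.mywebsite.loadbalancer.server.port=80"
```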
## Back-up containers

Add backup and backup_cleanup services to your `docker-compose.yml` file:
```yaml
services:
  #
  # AFTER your app main services (web, db, solr…)
  #
  backup:
    image: ambroisemaupate/ftp-backup
    networks:
      # Container should be on the same network as the database
      - default
    depends_on:
      # List here your database service
      - db
    environment:
      LOCAL_PATH: /var/www/html
      DB_USER: example
      DB_HOST: db
      DB_PASS: password
      DB_NAME: example
      FTP_PROTO: ftp
      FTP_PORT: 21
      FTP_HOST: ftp.server
      FTP_USER: example
      FTP_PASS: example
      REMOTE_PATH: /home/example/backups/site
    volumes:
      # Populate your local path with your app service volumes:
      # this will back up ONLY your critical data, not your app
      # code and vendors.
      - private_files:/var/www/html/files:ro
      - public_files:/var/www/html/web/files:ro
      - gen_src:/var/www/html/app/gen-src:ro
  backup_cleanup:
    image: ambroisemaupate/ftp-cleanup
    networks:
      - default
    environment:
      # Make sure to use the same credentials
      # as the backup service
      FTP_PROTO: ftp
      FTP_PORT: 21
      FTP_HOST: ftp.server
      FTP_USER: example
      FTP_PASS: example
      STORE_DAYS: 5
      # This path MUST exist on the remote server
      FTP_PATH: /home/example/backups/site
```
Test if your credentials are valid: `docker-compose up --no-deps --force-recreate backup backup_cleanup`. This should launch the two services, cleaning up older backups and
creating new ones: one for your files stored in `/var/www/html`
(check that you are using your main service volumes here), and a second one for your database dump.

ℹ️ You can use a `.env` file in your project path to avoid typing FTP and DB credentials twice.
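As a sketch, such a `.env` file could hold the shared credentials once (the values below are placeholders; your `docker-compose.yml` would then reference them with `${FTP_HOST}`-style substitution instead of literal values):

```
# .env — hypothetical values, adapt to your hosting
FTP_PROTO=ftp
FTP_PORT=21
FTP_HOST=ftp.server
FTP_USER=example
FTP_PASS=example
DB_USER=example
DB_PASS=password
DB_NAME=example
```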
Then add docker-compose lines to your host `crontab -e` (do not forget to specify your `docker-compose.yml` path):

```bash
# crontab
# You must change directory in order to access the .env file
# Clean and backup "site_a" files and database at midnight
0 0 * * * cd /root/docker-server-env/compose/site_a && /usr/local/bin/docker-compose up --no-deps --force-recreate backup backup_cleanup
# Clean and backup "site_b" files and database 15 minutes later
15 0 * * * cd /root/docker-server-env/compose/site_b && /usr/local/bin/docker-compose up --no-deps --force-recreate backup backup_cleanup
```
## Clean-up FTP backups

The backup_cleanup service uses an FTP/SFTP script that checks for files older than `$STORE_DAYS` and deletes them. It will do nothing if there is only one file backup and one database backup available: this is useful to prevent deleting the last backups of non-running services. backup_cleanup does not use a sshftpfs volume to perform file listing, so you can use it with every FTP/SFTP account.
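The retention rule can be sketched locally with plain `find`. This is a hypothetical illustration of the logic, not the image’s actual script — it deletes stale files only when a newer one would remain:

```bash
# Sketch of the retention rule: delete files older than STORE_DAYS,
# but never delete the last remaining backup.
STORE_DAYS=5
BACKUP_DIR=$(mktemp -d)

# Demo data: one stale dump and one fresh dump
touch -d "10 days ago" "$BACKUP_DIR/site_old.sql.gz"
touch "$BACKUP_DIR/site_new.sql.gz"

total=$(find "$BACKUP_DIR" -type f | wc -l)
stale=$(find "$BACKUP_DIR" -type f -mtime +"$STORE_DAYS" | wc -l)

# Only delete when at least one newer backup would remain
if [ "$stale" -gt 0 ] && [ "$total" -gt "$stale" ]; then
  find "$BACKUP_DIR" -type f -mtime +"$STORE_DAYS" -delete
fi
```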
## Rolling backups

Backup clean-up is already handled by your docker-compose services (see above).
You can add as many backup services as you want to create rolling backups: daily, weekly, monthly:
```yaml
  # …
  # DAILY
  backup_daily:
    image: ambroisemaupate/ftp-backup
    depends_on:
      - db
    environment:
      LOCAL_PATH: /var/www/html
      DB_USER: test
      DB_HOST: db
      DB_PASS: test
      DB_NAME: test
      FTP_PROTO: ftp
      FTP_PORT: 21
      FTP_HOST: ftp.server.test
      FTP_USER: test
      FTP_PASS: test
      REMOTE_PATH: /home/test/backups/daily
    volumes:
      - public_files:/var/www/html/web/files:ro
  backup_cleanup_daily:
    image: ambroisemaupate/ftp-cleanup
    environment:
      FTP_PROTO: ftp
      FTP_PORT: 21
      FTP_HOST: ftp.server.test
      FTP_USER: test
      FTP_PASS: test
      STORE_DAYS: 7
      FTP_PATH: /home/test/backups/daily
  # WEEKLY
  backup_weekly:
    image: ambroisemaupate/ftp-backup
    depends_on:
      - db
    environment:
      LOCAL_PATH: /var/www/html
      DB_USER: test
      DB_HOST: db
      DB_PASS: test
      DB_NAME: test
      FTP_PROTO: ftp
      FTP_PORT: 21
      FTP_HOST: ftp.server.test
      FTP_USER: test
      FTP_PASS: test
      REMOTE_PATH: /home/test/backups/weekly
    volumes:
      - public_files:/var/www/html/web/files:ro
  backup_cleanup_weekly:
    image: ambroisemaupate/ftp-cleanup
    environment:
      FTP_PROTO: ftp
      FTP_PORT: 21
      FTP_HOST: ftp.server.test
      FTP_USER: test
      FTP_PASS: test
      STORE_DAYS: 30
      FTP_PATH: /home/test/backups/weekly
  # MONTHLY
  backup_monthly:
    image: ambroisemaupate/ftp-backup
    depends_on:
      - db
    environment:
      LOCAL_PATH: /var/www/html
      DB_USER: test
      DB_HOST: db
      DB_PASS: test
      DB_NAME: test
      FTP_PROTO: ftp
      FTP_PORT: 21
      FTP_HOST: ftp.server.test
      FTP_USER: test
      FTP_PASS: test
      REMOTE_PATH: /home/test/backups/monthly
    volumes:
      - public_files:/var/www/html/web/files:ro
  backup_cleanup_monthly:
    image: ambroisemaupate/ftp-cleanup
    environment:
      FTP_PROTO: ftp
      FTP_PORT: 21
      FTP_HOST: ftp.server.test
      FTP_USER: test
      FTP_PASS: test
      STORE_DAYS: 366
      FTP_PATH: /home/test/backups/monthly
```
Then launch them once a day, once a week, and once a month from your crontab:
```bash
# Rolling backups (do not use the same hour of night to save CPU)
# Daily
00 2 * * * cd /root/docker-server-env/compose/site_a && /usr/local/bin/docker-compose up -d --no-deps --force-recreate backup_daily backup_cleanup_daily
# Weekly (on Monday early morning)
00 3 * * 1 cd /root/docker-server-env/compose/site_a && /usr/local/bin/docker-compose up -d --no-deps --force-recreate backup_weekly backup_cleanup_weekly
# Monthly (on each 1st day)
00 4 1 * * cd /root/docker-server-env/compose/site_a && /usr/local/bin/docker-compose up -d --no-deps --force-recreate backup_monthly backup_cleanup_monthly
```
## Using custom Docker images for Roadiz

Example files can be found in `./compose/example-roadiz-registry/` and `./scripts/bck-example-roadiz-registry.sh.sample` if you are building custom Roadiz images with direct volumes for your websites and a private registry such as the Gitlab one.
Copy `.env.dist` to `.env` to store your secrets in one place.
After you update your website image:

```bash
docker-compose pull app;
# Use --no-deps to avoid recreating db and solr services too.
docker-compose up -d --force-recreate --no-deps app;
# If you created a Makefile in your docker image
docker-compose exec -u www-data app make cache;
```
## Rotating logs

Add the `etc/logrotate.d/dockerbck` configuration to your real `logrotate.d` system folder.
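The actual configuration ships with this repository. For illustration only, a typical logrotate stanza for backup logs might look like this (the log path and rotation counts here are assumptions, not the contents of the real file):

```
/var/log/bck-*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```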