
linuxserver / docker-swag


Nginx webserver and reverse proxy with php support and a built-in Certbot (Let's Encrypt) client. It also contains fail2ban for intrusion prevention.

Home Page: https://docs.linuxserver.io/general/swag

License: GNU General Public License v3.0


docker-swag's Introduction

linuxserver.io


The LinuxServer.io team brings you another container release featuring:

  • regular and timely application updates
  • easy user mappings (PGID, PUID)
  • custom base image with s6 overlay
  • weekly base OS updates with common layers across the entire LinuxServer.io ecosystem to minimise space usage, down time and bandwidth
  • regular security updates

Find us at:

  • Blog - all the things you can do with our containers including How-To guides, opinions and much more!
  • Discord - realtime support / chat with the community and the team.
  • Discourse - post on our community forum.
  • Fleet - an online web interface which displays all of our maintained images.
  • GitHub - view the source for all of our repositories.
  • Open Collective - please consider helping us by either donating or contributing to our budget.


SWAG - Secure Web Application Gateway (formerly known as letsencrypt, no relation to Let's Encrypt™) sets up an Nginx webserver and reverse proxy with php support and a built-in certbot client that automates free SSL server certificate generation and renewal processes (Let's Encrypt and ZeroSSL). It also contains fail2ban for intrusion prevention.


Supported Architectures

We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here.

Simply pulling lscr.io/linuxserver/swag:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags.

The architectures supported by this image are:

Architecture  Available  Tag
x86-64        ✅          amd64-<version tag>
arm64         ✅          arm64v8-<version tag>
armhf         ❌          (no longer supported)
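
For example, to pull the arm64 image explicitly by tag (assuming the latest version tag):

docker pull lscr.io/linuxserver/swag:arm64v8-latest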

Application Setup

Validation and initial setup

  • Before running this container, make sure that the url and subdomains are properly forwarded to this container's host, and that port 443 (and/or 80) is not being used by another service on the host (NAS gui, another webserver, etc.).
  • If you need a dynamic dns provider, you can use the free provider duckdns.org where the URL will be yoursubdomain.duckdns.org and the SUBDOMAINS can be www,ftp,cloud with http validation, or wildcard with dns validation. You can use our duckdns image to update your IP on duckdns.org.
  • For http validation, port 80 on the internet side of the router should be forwarded to this container's port 80
  • For dns validation, make sure to enter your credentials into the corresponding ini (or json for some plugins) file under /config/dns-conf (an illustrative example follows this list)
    • Cloudflare provides free accounts for managing dns and is very easy to use with this image. Make sure that it is set up for "dns only" instead of "dns + proxy"
    • Google dns plugin is meant to be used with "Google Cloud DNS", a paid enterprise product, and not for "Google Domains DNS"
    • DuckDNS only supports two types of DNS validated certificates (not both at the same time):
      1. Certs that only cover your main subdomain (ie. yoursubdomain.duckdns.org, leave the SUBDOMAINS variable empty)
      2. Certs that cover sub-subdomains of your main subdomain (ie. *.yoursubdomain.duckdns.org, set the SUBDOMAINS variable to wildcard)
  • --cap-add=NET_ADMIN is required for fail2ban to modify iptables
  • After setup, navigate to https://yourdomain.url to access the default homepage (http access through port 80 is disabled by default, you can enable it by editing the default site config at /config/nginx/site-confs/default.conf).
  • Certs are checked nightly and if expiration is within 30 days, renewal is attempted. If your cert is about to expire in less than 30 days, check the logs under /config/log/letsencrypt to see why the renewals have been failing. It is recommended to input your e-mail in docker parameters so you receive expiration notices from Let's Encrypt in those circumstances.
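
As a sketch, a Cloudflare credentials file for dns validation could look like this (the key names come from the certbot-dns-cloudflare plugin; the values are placeholders):

# /config/dns-conf/cloudflare.ini
# either a scoped API token:
dns_cloudflare_api_token = your-cloudflare-api-token
# or the older global API key plus account e-mail:
# dns_cloudflare_email = you@example.com
# dns_cloudflare_api_key = your-global-api-key

Pair this with VALIDATION=dns and DNSPLUGIN=cloudflare, as described in the Parameters section below.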

Certbot Plugins

SWAG includes many Certbot plugins out of the box, but not all plugins can be included. If you need a plugin that is not included, the quickest way to make it available is to use our Universal Package Install Docker Mod.

Set the following environment variables on your container:

DOCKER_MODS=linuxserver/mods:universal-package-install
INSTALL_PIP_PACKAGES=certbot-dns-<plugin>

Set the required credentials (usually found in the plugin documentation) in /config/dns-conf/<plugin>.ini. It is recommended to attempt obtaining a certificate with STAGING=true first to make sure the plugin is working as expected.
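
A minimal compose sketch of that setup (certbot-dns-<plugin> is a placeholder; substitute the package for your DNS provider):

services:
  swag:
    image: lscr.io/linuxserver/swag:latest
    environment:
      - DOCKER_MODS=linuxserver/mods:universal-package-install
      - INSTALL_PIP_PACKAGES=certbot-dns-<plugin>
      - VALIDATION=dns
      - DNSPLUGIN=<plugin>
      - STAGING=true # switch back to false once the plugin is confirmed working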

Security and password protection

  • The container detects changes to url and subdomains, revokes existing certs and generates new ones during start.
  • Per RFC7919, the container is shipping ffdhe4096 as the dhparams.pem.
  • If you'd like to password protect your sites, you can use htpasswd. Run the following command on your host to generate the htpasswd file: docker exec -it swag htpasswd -c /config/nginx/.htpasswd <username>
  • You can add multiple user:pass entries to .htpasswd. Use the above command for the first user only; for additional users, run it without the -c flag, because -c forces deletion of the existing .htpasswd and creation of a new one (see the example after this list)
  • You can also use ldap auth for security and access control. A sample, user configurable ldap.conf is provided, and it requires the separate image linuxserver/ldap-auth to communicate with an ldap server.
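
For example, to create the file with a first user and then add a second (usernames are placeholders):

# first user; -c creates /config/nginx/.htpasswd
docker exec -it swag htpasswd -c /config/nginx/.htpasswd user1
# additional users; omit -c so the existing file is kept
docker exec -it swag htpasswd /config/nginx/.htpasswd user2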

Site config and reverse proxy

  • The default site config resides at /config/nginx/site-confs/default.conf. Feel free to modify this file, and you can add other conf files to this directory. However, if you delete the default file, a new default will be created on container start.
  • Preset reverse proxy config files are added for popular apps. See the README.md file under /config/nginx/proxy-confs for instructions on how to enable them. The preset confs reside in and get imported from this repo.
  • If you wish to hide your site from search engine crawlers, add the following line to your site config, within the server block and above the line where ssl.conf is included: add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive"; This asks Google et al. not to index or list your site (a placement sketch follows this list). Be careful with this: if you leave the line in on a site you want listed, it will eventually be de-listed from search engines
  • If you wish to redirect http to https, you must expose port 80
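
A sketch of where that header goes (the surrounding server block is illustrative; your actual default.conf will differ):

server {
    listen 443 ssl;
    server_name yourdomain.url;

    # ask crawlers not to index or list this site
    add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";

    # the header must appear above this include
    include /config/nginx/ssl.conf;

    # ... rest of the site config
}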

Using certs in other containers

  • This container includes auto-generated pfx and private-fullchain-bundle pem certs that are needed by other apps like Emby and Znc.
    • To use these certs in other containers, do either of the following:
    1. (Easier) Mount the container's config folder in other containers (ie. -v /path-to-swag-config:/swag-ssl) and in the other containers, use the cert location /swag-ssl/keys/letsencrypt/
    2. (More secure) Mount the SWAG folder etc that resides under /config in other containers (ie. -v /path-to-swag-config/etc:/swag-ssl) and in the other containers, use the cert location /swag-ssl/letsencrypt/live/<your.domain.url>/ (This is more secure because the first method shares the entire SWAG config folder with other containers, including the www files, whereas the second method only shares the ssl certs; see the compose sketch after this list)
    • These certs include:
    1. cert.pem, chain.pem, fullchain.pem and privkey.pem, which are generated by Certbot and used by nginx and various other apps
    2. privkey.pfx, a format supported by Microsoft and commonly used by dotnet apps such as Emby Server (no password)
    3. priv-fullchain-bundle.pem, a pem cert that bundles the private key and the fullchain, used by apps like ZNC
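
A compose sketch of the second (more secure) method for a hypothetical ZNC container, to be added under services: in your compose file (image name and host path are illustrative):

  znc:
    image: lscr.io/linuxserver/znc:latest
    volumes:
      - /path-to-swag-config/etc:/swag-ssl:ro
    # inside this container the certs are then found under
    # /swag-ssl/letsencrypt/live/<your.domain.url>/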

Using fail2ban

  • This container includes fail2ban set up with 5 jails by default:
    1. nginx-http-auth
    2. nginx-badbots
    3. nginx-botsearch
    4. nginx-deny
    5. nginx-unauthorized
  • To enable or disable other jails, modify the file /config/fail2ban/jail.local (an illustrative fragment follows this list)
  • To modify filters and actions, do not edit the .conf files; create .local files with the same name and edit those instead, because the .conf files get overwritten when the actions and filters are updated. The .local files are applied on top of the corresponding .conf files (ie. nginx-http-auth.conf --> nginx-http-auth.local)
  • You can check which jails are active via docker exec -it swag fail2ban-client status
  • You can check the status of a specific jail via docker exec -it swag fail2ban-client status <jail name>
  • You can unban an IP via docker exec -it swag fail2ban-client set <jail name> unbanip <IP>
  • A list of commands can be found here: https://www.fail2ban.org/wiki/index.php/Commands
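
An illustrative jail.local fragment tweaking one of the default jails (standard fail2ban syntax; the values are examples only):

# /config/fail2ban/jail.local
[nginx-badbots]
enabled  = true
maxretry = 2
bantime  = 1d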

Updating configs

  • This container creates a number of configs for nginx, proxy samples, etc.
  • Config updates are noted in the changelog but not automatically applied to your files.
  • If you have modified a file with noted changes in the changelog, do one of the following:
    1. Keep your existing configs as is (if it's not broken, don't fix it)
    2. Review our repository commits and apply the new changes yourself
    3. Delete the modified config file with listed updates, restart the container, reapply your changes
  • If you have NOT modified a file with noted changes in the changelog:
    1. Delete the config file with listed updates, restart the container
  • Proxy sample updates are not listed in the changelog. See the changes here: https://github.com/linuxserver/reverse-proxy-confs/commits/master
  • Proxy sample files WILL be updated, however your renamed (enabled) proxy files will not.
  • You can check the new sample and adjust your active config as needed (one way to compare is shown below).
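
One way to do that comparison, assuming an enabled proxy conf named app.subdomain.conf (hypothetical name):

diff /path/to/swag/config/nginx/proxy-confs/app.subdomain.conf \
     /path/to/swag/config/nginx/proxy-confs/app.subdomain.conf.sample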

Migration from the old linuxserver/letsencrypt image

Please follow the instructions on this blog post.

Usage

To help you get started creating a container from this image you can either use docker-compose or the docker cli.

docker-compose (recommended, click here for more info)

---
services:
  swag:
    image: lscr.io/linuxserver/swag:latest
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - URL=yourdomain.url
      - VALIDATION=http
      - SUBDOMAINS=www, #optional
      - CERTPROVIDER= #optional
      - DNSPLUGIN=cloudflare #optional
      - PROPAGATION= #optional
      - EMAIL= #optional
      - ONLY_SUBDOMAINS=false #optional
      - EXTRA_DOMAINS= #optional
      - STAGING=false #optional
    volumes:
      - /path/to/swag/config:/config
    ports:
      - 443:443
      - 80:80 #optional
    restart: unless-stopped

docker cli (click here for more info)

docker run -d \
  --name=swag \
  --cap-add=NET_ADMIN \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e URL=yourdomain.url \
  -e VALIDATION=http \
  -e SUBDOMAINS=www, `#optional` \
  -e CERTPROVIDER= `#optional` \
  -e DNSPLUGIN=cloudflare `#optional` \
  -e PROPAGATION= `#optional` \
  -e EMAIL= `#optional` \
  -e ONLY_SUBDOMAINS=false `#optional` \
  -e EXTRA_DOMAINS= `#optional` \
  -e STAGING=false `#optional` \
  -p 443:443 \
  -p 80:80 `#optional` \
  -v /path/to/swag/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag:latest

Parameters

Containers are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate <external>:<internal> respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.

Parameter Function
-p 443 Https port
-p 80 Http port (required for http validation and http -> https redirect)
-e PUID=1000 for UserID - see below for explanation
-e PGID=1000 for GroupID - see below for explanation
-e TZ=Etc/UTC specify a timezone to use, see this list.
-e URL=yourdomain.url Top url you have control over (customdomain.com if you own it, or customsubdomain.ddnsprovider.com if dynamic dns).
-e VALIDATION=http Certbot validation method to use, options are http or dns (dns method also requires DNSPLUGIN variable set).
-e SUBDOMAINS=www, Subdomains you'd like the cert to cover (comma separated, no spaces) ie. www,ftp,cloud. For a wildcard cert, set this exactly to wildcard (wildcard cert is available via dns validation only)
-e CERTPROVIDER= Optionally define the cert provider. Set to zerossl for ZeroSSL certs (requires existing ZeroSSL account and the e-mail address entered in EMAIL env var). Otherwise defaults to Let's Encrypt.
-e DNSPLUGIN=cloudflare Required if VALIDATION is set to dns. Options are acmedns, aliyun, azure, bunny, cloudflare, cpanel, desec, digitalocean, directadmin, dnsimple, dnsmadeeasy, dnspod, do, domeneshop, dreamhost, duckdns, dynudns, freedns, gandi, gehirn, glesys, godaddy, google, google-domains, he, hetzner, infomaniak, inwx, ionos, linode, loopia, luadns, namecheap, netcup, njalla, nsone, ovh, porkbun, rfc2136, route53, sakuracloud, standalone, transip, and vultr. Also need to enter the credentials into the corresponding ini (or json for some plugins) file under /config/dns-conf.
-e PROPAGATION= Optionally override (in seconds) the default propagation time for the dns plugins.
-e EMAIL= Optional e-mail address used for cert expiration notifications (Required for ZeroSSL).
-e ONLY_SUBDOMAINS=false If you wish to get certs only for certain subdomains, but not the main domain (main domain may be hosted on another machine and cannot be validated), set this to true
-e EXTRA_DOMAINS= Additional fully qualified domain names (comma separated, no spaces) ie. extradomain.com,subdomain.anotherdomain.org,*.anotherdomain.org
-e STAGING=false Set to true to retrieve certs in staging mode. Rate limits will be much higher, but the resulting cert will not pass the browser's security test. Only to be used for testing purposes.
-v /config Persistent config files

Portainer notice

This image utilises cap_add or sysctl to work properly. This is not implemented properly in some versions of Portainer, thus this image may not work if deployed through Portainer.

Environment variables from files (Docker secrets)

You can set any environment variable from a file by using a special FILE__ prefix.

As an example:

-e FILE__MYVAR=/run/secrets/mysecretvariable

Will set the environment variable MYVAR based on the contents of the /run/secrets/mysecretvariable file.
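
A short sketch, assuming the secret lives on the host under /path/to/secrets and that folder is mounted into the container:

# create the secret file on the host
echo -n 'mysecretvalue' > /path/to/secrets/mysecretvariable

# then add these to the docker run / compose shown above
-v /path/to/secrets:/run/secrets:ro
-e FILE__MYVAR=/run/secrets/mysecretvariable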

Umask for running applications

For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind that umask is not chmod: it subtracts from permissions based on its value rather than adding. Please read up here before asking for support.
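
A quick worked example with the default value shown above:

# UMASK=022
# new files:       666 - 022 = 644 (rw-r--r--)
# new directories: 777 - 022 = 755 (rwxr-xr-x)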

User / Group Identifiers

When using volumes (-v flags), permissions issues can arise between the host OS and the container; we avoid this issue by allowing you to specify the user PUID and group PGID.

Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic.

In this instance PUID=1000 and PGID=1000. To find yours, use id your_user as below:

id your_user

Example output:

uid=1000(your_user) gid=1000(your_user) groups=1000(your_user)

Docker Mods

Docker Mods Docker Universal Mods

We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.

Support Info

  • Shell access whilst the container is running:

    docker exec -it swag /bin/bash
  • To monitor the logs of the container in realtime:

    docker logs -f swag
  • Container version number:

    docker inspect -f '{{ index .Config.Labels "build_version" }}' swag
  • Image version number:

    docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/swag:latest

Updating Info

Most of our images are static, versioned, and require an image update and container recreation to update the app inside. With some exceptions (noted in the relevant readme.md), we do not recommend or support updating apps inside the container. Please consult the Application Setup section above to see if it is recommended for the image.

Below are the instructions for updating containers:

Via Docker Compose

  • Update images:

    • All images:

      docker-compose pull
    • Single image:

      docker-compose pull swag
  • Update containers:

    • All containers:

      docker-compose up -d
    • Single container:

      docker-compose up -d swag
  • You can also remove the old dangling images:

    docker image prune

Via Docker Run

  • Update the image:

    docker pull lscr.io/linuxserver/swag:latest
  • Stop the running container:

    docker stop swag
  • Delete the container:

    docker rm swag
  • Recreate a new container with the same docker run parameters as instructed above (if mapped correctly to a host folder, your /config folder and settings will be preserved)

  • You can also remove the old dangling images:

    docker image prune

Image Update Notifications - Diun (Docker Image Update Notifier)

tip: We recommend Diun for update notifications. Other tools that automatically update containers unattended are not recommended or supported.

Building locally

If you want to make local modifications to these images for development purposes or just to customize the logic:

git clone https://github.com/linuxserver/docker-swag.git
cd docker-swag
docker build \
  --no-cache \
  --pull \
  -t lscr.io/linuxserver/swag:latest .

The ARM variants can be built on x86_64 hardware using multiarch/qemu-user-static:

docker run --rm --privileged multiarch/qemu-user-static:register --reset

Once registered you can define the dockerfile to use with -f Dockerfile.aarch64.
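
For example (the output tag name is arbitrary):

docker build \
  --no-cache \
  --pull \
  -f Dockerfile.aarch64 \
  -t lscr.io/linuxserver/swag:arm64v8-latest .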

Versions

  • 23.03.24: - Fix perms on the generated priv-fullchain-bundle.pem.
  • 14.03.24: - Existing users should update: authelia-location.conf, authelia-server.conf - Update Authelia conf samples with support for 4.38.
  • 11.03.24: - Restore support for DynuDNS using certbot-dns-dynudns.
  • 06.03.24: - Existing users should update: site-confs/default.conf - Cleanup default site conf.
  • 04.03.24: - Remove stream.conf inside the container to allow users to include their own block in nginx.conf.
  • 23.01.24: - Rebase to Alpine 3.19 with php 8.3, add root periodic crontabs for logrotate.
  • 01.01.24: - Add GleSYS DNS plugin.
  • 11.12.23: - Deprecate certbot-dns-dynu to resolve dependency conflicts with other plugins.
  • 30.11.23: - Existing users should update: site-confs/default.conf - Fix index.php being downloaded on 404.
  • 23.11.23: - Run certbot as root to allow fix http validation.
  • 01.10.23: - Fix "unrecognized arguments" issue in DirectAdmin DNS plugin.
  • 28.08.23: - Add Namecheap DNS plugin.
  • 12.08.23: - Add FreeDNS plugin. Detect certbot DNS authenticators using CLI.
  • 07.08.23: - Add Bunny DNS Configuration.
  • 27.07.23: - Added support for dreamhost validation.
  • 25.05.23: - Rebase to Alpine 3.18, deprecate armhf.
  • 27.04.23: - Existing users should update: authelia-location.conf, authelia-server.conf, authentik-location.conf, authentik-server.conf - Simplify auth configs and fix Set-Cookie header bug.
  • 13.04.23: - Existing users should update: nginx.conf, authelia-location.conf, authentik-location.conf, and site-confs/default.conf - Move ssl.conf include to default.conf. Remove Authorization headers in authelia. Sort proxy_set_header in authelia and authentik.
  • 25.03.23: - Fix renewal post hook.
  • 10.03.23: - Cleanup unused csr and keys folders. See certbot 2.3.0 release notes.
  • 09.03.23: - Add Google Domains DNS support, google-domains.
  • 02.03.23: - Set permissions on crontabs during init.
  • 09.02.23: - Existing users should update: proxy.conf, authelia-location.conf and authelia-server.conf - Add Authentik configs, update Authelia configs.
  • 06.02.23: - Add porkbun support back in.
  • 21.01.23: - Unpin certbot version (allow certbot 2.x). !!BREAKING CHANGE!! We are temporarily removing the certbot porkbun plugin until a new version is released that is compatible with certbot 2.x.
  • 20.01.23: - Rebase to alpine 3.17 with php8.1.
  • 16.01.23: - Remove nchan module because it keeps causing crashes.
  • 08.12.22: - Revamp certbot init.
  • 03.12.22: - Remove defunct cloudxns plugin.
  • 22.11.22: - Pin acme to the same version as certbot.
  • 22.11.22: - Pin certbot to 1.32.0 until plugin compatibility improves.
  • 05.11.22: - Update acmedns plugin handling.
  • 06.10.22: - Switch to certbot-dns-duckdns. Update cpanel and gandi dns plugin handling. Minor adjustments to init logic.
  • 05.10.22: - Use certbot file hooks instead of command line hooks
  • 04.10.22: - Add godaddy and porkbun dns plugins.
  • 03.10.22: - Add default_server back to default site conf's https listen.
  • 22.09.22: - Added support for DO DNS validation.
  • 22.09.22: - Added certbot-dns-acmedns for DNS01 validation.
  • 20.08.22: - Existing users should update: nginx.conf - Rebasing to alpine 3.15 with php8. Restructure nginx configs (see changes announcement).
  • 10.08.22: - Added support for Dynu DNS validation.
  • 18.05.22: - Added support for Azure DNS validation.
  • 09.04.22: - Added certbot-dns-loopia for DNS01 validation.
  • 05.04.22: - Added support for standalone DNS validation.
  • 28.03.22: - created a logfile for fail2ban nginx-unauthorized in /etc/cont-init.d/50-config
  • 09.01.22: - Added a fail2ban jail for nginx unauthorized
  • 21.12.21: - Fixed issue with iptables not working as expected
  • 30.11.21: - Move maxmind to a new mod
  • 22.11.21: - Added support for Infomaniak DNS for certificate generation.
  • 20.11.21: - Added support for dnspod validation.
  • 15.11.21: - Added support for deSEC DNS for wildcard certificate generation.
  • 26.10.21: - Existing users should update: proxy.conf - Mitigate https://httpoxy.org/ vulnerabilities. Ref: https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx#Defeating-the-Attack-using-NGINX-and-NGINX-Plus
  • 23.10.21: - Fix Hurricane Electric (HE) DNS validation.
  • 12.10.21: - Fix deprecated LE root cert check to fix failures when using STAGING=true, and failures in revoking.
  • 06.10.21: - Added support for Hurricane Electric (HE) DNS validation. Added lxml build deps.
  • 01.10.21: - Check if the cert uses the old LE root cert, revoke and regenerate if necessary. Here's more info on LE root cert expiration
  • 19.09.21: - Add an optional header to opt out of Google FLoC in ssl.conf.
  • 17.09.21: - Mark SUBDOMAINS var as optional.
  • 01.08.21: - Add support for ionos dns validation.
  • 15.07.21: - Fix libmaxminddb issue due to upstream change.
  • 07.07.21: - Rebase to alpine 3.14.
  • 24.06.21: - Update default nginx conf folder.
  • 28.05.21: - Existing users should update: authelia-server.conf - Use resolver.conf and patch for CVE-2021-32637.
  • 20.05.21: - Modify resolver.conf generation to detect and ignore ipv6.
  • 14.05.21: - Existing users should update: nginx.conf, ssl.conf, proxy.conf, and the default site-conf - Rework nginx.conf to be inline with alpine upstream and relocate lines from other files. Use linuxserver.io wheel index for pip packages. Switch to using ffdhe4096 for dhparams.pem per RFC7919. Added worker_processes.conf, which sets the number of nginx workers, and resolver.conf, which sets the dns resolver. Both conf files are auto-generated only on first start and can be user modified later.
  • 21.04.21: - Existing users should update: authelia-server.conf and authelia-location.conf - Add remote name/email headers and pass http method.
  • 12.04.21: - Add php7-gmp and php7-pecl-mailparse.
  • 12.04.21: - Add support for vultr dns validation.
  • 14.03.21: - Add support for directadmin dns validation.
  • 12.02.21: - Clean up rust/cargo cache, which ballooned the image size in the last couple of builds.
  • 10.02.21: - Fix aliyun, domeneshop, inwx and transip dns confs for existing users.
  • 09.02.21: - Rebasing to alpine 3.13. Add nginx mods brotli and dav-ext. Remove nginx mods lua and lua-upstream (due to regression over the last couple of years).
  • 26.01.21: - Add support for hetzner dns validation.
  • 20.01.21: - Add check for ZeroSSL EAB retrieval.
  • 08.01.21: - Add support for getting certs from ZeroSSL via optional CERTPROVIDER env var. Update aliyun, domeneshop, inwx and transip dns plugins with the new plugin names. Hide donoteditthisfile.conf because users were editing it despite its name. Suppress harmless error when no proxy confs are enabled.
  • 03.01.21: - Existing users should update: /config/nginx/site-confs/default.conf - Add helper pages to aid troubleshooting
  • 10.12.20: - Add support for njalla dns validation
  • 09.12.20: - Check for template/conf updates and notify in the log. Add support for gehirn and sakuracloud dns validation.
  • 01.11.20: - Add support for netcup dns validation
  • 29.10.20: - Existing users should update: ssl.conf - Add frame-ancestors to Content-Security-Policy.
  • 04.10.20: - Existing users should update: nginx.conf, proxy.conf, and ssl.conf - Minor cleanups and reordering.
  • 20.09.20: - Existing users should update: nginx.conf - Added geoip2 configs. Added MAXMINDDB_LICENSE_KEY variable to readme.
  • 08.09.20: - Add php7-xsl.
  • 01.09.20: - Existing users should update: nginx.conf, proxy.conf, and various proxy samples - Global websockets across all configs.
  • 03.08.20: - Initial release.

docker-swag's People

Contributors

aptalca, coreyramirezgomez, darkorb, drizuid, ejach, erriez, evotk, fariszr, feilner, gilbn, gilesp, github-cli, homerr, j-brewer, james-d-elliott, johnmaguire, linuxserver-ci, mhofer117, millerthegorilla, nemchik, netthier, obsidiangroup, peglah, pott3r3r, reey, robindadswell, roxedus, spunkie, thelamer, thespad


docker-swag's Issues

transip dns plugin

the certbot-dns-transip:dns-transip plugin is about to be deprecated:

Plugin legacy name certbot-dns-transip:dns-transip may be removed in a future version. Please use dns-transip instead.

[feature request] Custom hook after renewal/creation

It would be nice if we can register/create some script that will be executed after renewal/creation of certificate process.
Example: I'd like to copy the certificate to some folder when it's renewed. I know I can handle this from outside the container with some periodic checking, but since swag already contains cron for this, why not just use that?
So it would be nice if we could have a custom script executed within --post-hook. Optionally, there could also be a custom script for --pre-hook.

Problem using the variable FILE__DUCKDNSTOKEN

Hi there,

Currently, I got my swag container working like a charm. I am using a wildcard certificate with DuckDNS validation. My DuckDNS token is defined in my portainer stack (working like a docker-compose file).
I would like to put it in a file and use the variable FILE__DUCKDNSTOKEN to pass the content to swag but I cannot get it to work.

I am using this method with my openldap and authelia containers.

Please find information about the problem and my environment. I hope I haven't forgotten anything

Expected Behavior

Swag reads the content of the file defined in FILE__DUCKDNSTOKEN and generates the certificate.

Current Behavior

Swag fails at the DNS verification step. The error says the TXT record is incorrect and asks me to check my DuckDNS token.
The volume is correctly mounted in swag and the file is readable.

Steps to Reproduce

Create a file /root/docker/secrets/swag/DUCKDNSTOKEN and write my DUCKDNSTOKEN inside.
Open my portainer stack
Add a volume to my swag service
--> add: - /root/docker/secrets/swag:/run/secrets
Replace the env variable DUCKDNSTOKEN by FILE__DUCKDNSTOKEN
--> replace: -DUCKDNSTOKEN=ABCDEF...
--> by: FILE__DUCKDNSTOKEN=/run/secrets/DUCKDNSTOKEN
Deploy the container

Environment

Debian 10 running on a Hyper-V virtual machine (4 vCPU / 4Gb RAM)
x64 arch
Docker has been installed via the official repository (apt)
I use Portainer CE v2 to deploy my containers through stacks

Command used to create docker container (run/create/compose/screenshot)

Here is my stack

(screenshot of the stack)

Docker logs

Here is swag logs

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing...
[env-init] DUCKDNSTOKEN set from FILE__DUCKDNSTOKEN
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing...

-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/


Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid:    1000
User gid:    1000
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=1000
PGID=1000
TZ=Europe/Paris
URL=XXX
SUBDOMAINS=wildcard
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=false
VALIDATION=duckdns
DNSPLUGIN=
EMAIL=XXX
STAGING=

SUBDOMAINS entered, processing
Wildcard cert for XXX will be requested
E-mail address entered: XXX
duckdns validation is selected
the resulting certificate will only cover the subdomains due to a limitation of duckdns, so it is advised to set the root location to use www.subdomain.duckdns.org
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator manual, Installer None
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for XXX
Running manual-auth-hook command: /app/duckdns-txt
Output from manual-auth-hook command duckdns-txt:
OKsleeping 60

Error output from manual-auth-hook command duckdns-txt:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100     2    0     2    0     0      2      0 --:--:-- --:--:-- --:--:--     2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (6) Could not resolve host: &txt=XXX

Waiting for verification...
Challenge failed for domain XXX
dns-01 challenge for XXX
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: XXX
   Type:   unauthorized
   Detail: Incorrect TXT record
   "XXX" found at
   _acme-challenge.XXX

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A/AAAA record(s) for that domain
   contain(s) the right IP address.
ERROR: Cert does not exist! Please see the validation error above. Make sure your DUCKDNSTOKEN is correct.

nginx: [warn] "ssl_stapling" ignored

Expected Behavior

Fast startup of the nginx server.

Current Behavior

The nginx server does start. But before it's fully usable it produces the following warnings:

nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"

Every few seconds a new warning like this is generated. This process takes about half a minute. During that period the nginx server is not accessible. After that the nginx server works as expected.

Steps to Reproduce

Can't provide this, since I noticed this behavior today when I was manually restarting the server. No configs were changed, or proxy sites added/removed.

Environment

OS: QNAP QTS 4.4.3
CPU architecture: x86_64
How docker service was installed: from the official docker repo

Command used to create docker container (run/create/compose/screenshot)

(screenshot of the container settings)

Docker logs

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing... 
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing... 
usermod: no changes

-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \ 
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/


Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid:    1003
User gid:    100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing... 
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing... 
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing... 
Variables set:
PUID=1003
PGID=100
TZ=<hidden>
URL=<hidden>
SUBDOMAINS=wildcard
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=false
VALIDATION=dns
DNSPLUGIN=cloudflare
EMAIL=<hidden>
STAGING=

SUBDOMAINS entered, processing
Wildcard cert for <hidden> will be requested
E-mail address entered: <hidden>
dns validation via cloudflare plugin is selected
Certificate exists; parameters unchanged; starting nginx
Starting 2019/12/30, GeoIP2 databases require personal license key to download. Please retrieve a free license key from MaxMind,
and add a new env variable "MAXMINDDB_LICENSE_KEY", set to your license key.
[cont-init.d] 50-config: exited 0.
[cont-init.d] 60-renew: executing... 
The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am).
[cont-init.d] 60-renew: exited 0.
[cont-init.d] 99-custom-files: executing... 
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server ready
nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"
nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"
nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"
nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"
nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"
nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"
nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"
nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"
nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)

How do I connect a container (for example duplicati in my case) to the proxy?

Went on Discord but I'm not permitted to post in any of the builds, rules and other available channels and there's no support channel.

Read all the docs, including the config/nginx/proxy-confs/duplicati.subdomain.conf, but I don't understand how the proxy is supposed to know at which subdomain to reach it.

Where do I tell either the proxy or duplicati which subdomain should be used?

Wheels for cryptography error when building locally

linuxserver.io


Expected Behavior

Building locally should go without error and finish building

Current Behavior

The build fails at the point where it tries to build wheel for cryptography.

Steps to Reproduce

These are the steps I used from the documentation:

git clone https://github.com/linuxserver/docker-swag.git
cd docker-swag
docker build \
  --no-cache \
  --pull \
  -t ghcr.io/linuxserver/swag:latest .

Environment

OS: Debian 10
CPU architecture: x86_64
How docker service was installed:
From the official docker repo

Command used to create docker container (run/create/compose/screenshot)

git clone https://github.com/linuxserver/docker-swag.git
cd docker-swag
docker build \
  --no-cache \
  --pull \
  -t ghcr.io/linuxserver/swag:latest .

Docker logs

Sending build context to Docker daemon  241.7kB
Step 1/10 : FROM ghcr.io/linuxserver/baseimage-alpine-nginx:3.12
3.12: Pulling from linuxserver/baseimage-alpine-nginx
Digest: sha256:56330694c2a142cc9e1abe067eb188cff4b766b1e74e6648ffe9d1f8e59c2485
Status: Image is up to date for ghcr.io/linuxserver/baseimage-alpine-nginx:3.12
 ---> dc74722fe69b
Step 2/10 : ARG BUILD_DATE
 ---> Running in ed9fec5497b9
Removing intermediate container ed9fec5497b9
 ---> 3be78f0d1ffc
Step 3/10 : ARG VERSION
 ---> Running in 0e61c363a873
Removing intermediate container 0e61c363a873
 ---> bd94ca9a68b8
Step 4/10 : ARG CERTBOT_VERSION
 ---> Running in 4134d27a67c2
Removing intermediate container 4134d27a67c2
 ---> da22d54efadc
Step 5/10 : LABEL build_version="Linuxserver.io version:- ${VERSION} Build-date:- ${BUILD_DATE}"
 ---> Running in 8fa1eb1c6734
Removing intermediate container 8fa1eb1c6734
 ---> e6dda62ee03d
Step 6/10 : LABEL maintainer="aptalca"
 ---> Running in 8c346f24393e
Removing intermediate container 8c346f24393e
 ---> 39f4d3258fec
Step 7/10 : ENV DHLEVEL=2048 ONLY_SUBDOMAINS=false AWS_CONFIG_FILE=/config/dns-conf/route53.ini
 ---> Running in 331dc0ef328d
Removing intermediate container 331dc0ef328d
 ---> 6db30c69156d
Step 8/10 : ENV S6_BEHAVIOUR_IF_STAGE2_FAILS=2
 ---> Running in a3c516d2eeb5
Removing intermediate container a3c516d2eeb5
 ---> 3d09e1c63b0e
Step 9/10 : RUN  echo "**** install build packages ****" &&  apk add --no-cache --virtual=build-dependencies    g++     gcc     libffi-dev      openssl-dev python3-dev &&  echo "**** install runtime packages ****" &&  apk add --no-cache --upgrade      curl    fail2ban        gnupg   memcached       nginx   nginx-mod-http-echo         nginx-mod-http-fancyindex       nginx-mod-http-geoip2   nginx-mod-http-headers-more     nginx-mod-http-image-filter     nginx-mod-http-lua  nginx-mod-http-lua-upstream     nginx-mod-http-nchan    nginx-mod-http-perl     nginx-mod-http-redis2   nginx-mod-http-set-misc         nginx-mod-http-upload-progress      nginx-mod-http-xslt-filter      nginx-mod-mail  nginx-mod-rtmp  nginx-mod-stream        nginx-mod-stream-geoip2         nginx-vim  php7-bcmath      php7-bz2        php7-ctype      php7-curl       php7-dom        php7-exif       php7-ftp        php7-gd         php7-iconv      php7-imap  php7-intl        php7-ldap       php7-mcrypt     php7-memcached  php7-mysqli     php7-mysqlnd    php7-opcache    php7-pdo_mysql  php7-pdo_odbc   php7-pdo_pgsql      php7-pdo_sqlite         php7-pear       php7-pecl-apcu  php7-pecl-redis         php7-pgsql      php7-phar       php7-posix      php7-soap       php7-sockets        php7-sodium     php7-sqlite3    php7-tokenizer  php7-xml        php7-xmlreader  php7-xmlrpc     php7-xsl        php7-zip        py3-cryptography    py3-future      py3-pip         whois &&  echo "**** install certbot plugins ****" &&  if [ -z ${CERTBOT_VERSION+x} ]; then         CERTBOT="certbot";  else         CERTBOT="certbot==${CERTBOT_VERSION}";  fi &&  pip3 install -U         pip &&  pip3 install -U         ${CERTBOT}      certbot-dns-aliyun certbot-dns-cloudflare   certbot-dns-cloudxns    certbot-dns-cpanel      certbot-dns-digitalocean        certbot-dns-dnsimple    certbot-dns-dnsmadeeasy    certbot-dns-domeneshop   certbot-dns-google      certbot-dns-hetzner     certbot-dns-inwx        certbot-dns-linode      certbot-dns-luadns      certbot-dns-netcup  certbot-dns-njalla      certbot-dns-nsone       certbot-dns-ovh         certbot-dns-rfc2136     certbot-dns-route53     certbot-dns-transip     certbot-plugin-gandi        cryptography    requests &&  echo "**** remove unnecessary fail2ban filters ****" &&  rm        /etc/fail2ban/jail.d/alpine-ssh.conf &&  echo "**** copy fail2ban default action and filter to /default ****" &&  mkdir -p /defaults/fail2ban &&  mv /etc/fail2ban/action.d /defaults/fail2ban/ &&  mv /etc/fail2ban/filter.d /defaults/fail2ban/ &&  echo "**** copy proxy confs to /default ****" &&  mkdir -p /defaults/proxy-confs &&  curl -o      /tmp/proxy.tar.gz -L        "https://github.com/linuxserver/reverse-proxy-confs/tarball/master" &&  tar xf  /tmp/proxy.tar.gz -C    /defaults/proxy-confs --strip-components=1 --exclude=linux*/.gitattributes --exclude=linux*/.github --exclude=linux*/.gitignore --exclude=linux*/LICENSE &&  echo "**** configure nginx ****" &&  rm -f /etc/nginx/conf.d/default.conf &&  curl -o      /defaults/dhparams.pem -L       "https://lsio.ams3.digitaloceanspaces.com/dhparams.pem" &&  echo "**** cleanup ****" &&  apk del --purge    build-dependencies &&  for cleanfiles in *.pyc *.pyo;   do      find /usr/lib/python3.*  -iname "${cleanfiles}" -exec rm -f '{}' +  ; done &&  rm -rf       /tmp/*  /root/.cache
 ---> Running in 8e00aab85a26
**** install build packages ****
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/25) Installing libgcc (9.3.0-r2)
(2/25) Installing libstdc++ (9.3.0-r2)
(3/25) Installing binutils (2.34-r1)
(4/25) Installing gmp (6.2.0-r0)
(5/25) Installing isl (0.18-r0)
(6/25) Installing libgomp (9.3.0-r2)
(7/25) Installing libatomic (9.3.0-r2)
(8/25) Installing libgphobos (9.3.0-r2)
(9/25) Installing mpfr4 (4.0.2-r4)
(10/25) Installing mpc1 (1.1.0-r1)
(11/25) Installing gcc (9.3.0-r2)
(12/25) Installing musl-dev (1.1.24-r10)
(13/25) Installing libc-dev (0.7.2-r3)
(14/25) Installing g++ (9.3.0-r2)
(15/25) Installing linux-headers (5.4.5-r1)
(16/25) Installing libffi (3.3-r2)
(17/25) Installing pkgconf (1.7.2-r0)
(18/25) Installing libffi-dev (3.3-r2)
(19/25) Installing openssl-dev (1.1.1i-r0)
(20/25) Installing libbz2 (1.0.8-r1)
(21/25) Installing gdbm (1.13-r1)
(22/25) Installing sqlite-libs (3.32.1-r0)
(23/25) Installing python3 (3.8.5-r0)
(24/25) Installing python3-dev (3.8.5-r0)
(25/25) Installing build-dependencies (20210208.173458)
Executing busybox-1.31.1-r19.trigger
OK: 356 MiB in 87 packages
**** install runtime packages ****
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/157) Installing curl (7.69.1-r3)
(2/157) Installing libmnl (1.0.4-r0)
(3/157) Installing libnftnl-libs (1.1.6-r0)
(4/157) Installing iptables (1.8.4-r2)
(5/157) Installing ip6tables (1.8.4-r2)
(6/157) Installing fail2ban (0.11.1-r3)
(7/157) Installing libgpg-error (1.37-r0)
(8/157) Installing libassuan (2.5.3-r0)
(9/157) Installing libcap (2.27-r0)
(10/157) Installing libblkid (2.35.2-r0)
(11/157) Installing libmount (2.35.2-r0)
(12/157) Installing glib (2.64.6-r0)
(13/157) Installing libgcrypt (1.8.5-r0)
(14/157) Installing libsecret (0.20.3-r0)
(15/157) Installing pinentry (1.1.0-r2)
Executing pinentry-1.1.0-r2.post-install
(16/157) Installing nettle (3.5.1-r1)
(17/157) Installing p11-kit (0.23.22-r0)
(18/157) Installing libtasn1 (4.16.0-r1)
(19/157) Installing libunistring (0.9.10-r0)
(20/157) Installing gnutls (3.6.15-r0)
(21/157) Installing libksba (1.4.0-r0)
(22/157) Installing db (5.3.28-r1)
(23/157) Installing libsasl (2.1.27-r6)
(24/157) Installing libldap (2.4.50-r1)
(25/157) Installing npth (1.6-r0)
(26/157) Installing gnupg (2.2.23-r0)
(27/157) Installing libevent (2.1.11-r1)
(28/157) Installing libseccomp (2.4.4-r0)
(29/157) Installing memcached (1.6.6-r0)
Executing memcached-1.6.6-r0.pre-install
(30/157) Installing nginx-mod-http-echo (1.18.0-r1)
(31/157) Installing nginx-mod-http-fancyindex (1.18.0-r1)
(32/157) Installing libmaxminddb (1.4.3-r0)
(33/157) Installing nginx-mod-http-geoip2 (1.18.0-r1)
(34/157) Installing nginx-mod-http-headers-more (1.18.0-r1)
(35/157) Installing brotli-libs (1.0.9-r1)
(36/157) Installing libpng (1.6.37-r1)
(37/157) Installing freetype (2.10.4-r0)
(38/157) Installing libjpeg-turbo (2.0.5-r0)
(39/157) Installing libwebp (1.1.0-r0)
(40/157) Installing libgd (2.3.0-r1)
(41/157) Installing nginx-mod-http-image-filter (1.18.0-r1)
(42/157) Installing nginx-mod-devel-kit (1.18.0-r1)
(43/157) Installing luajit (5.1.20190925-r0)
(44/157) Installing nginx-mod-http-lua (1.18.0-r1)
(45/157) Installing nginx-mod-http-lua-upstream (1.18.0-r1)
(46/157) Installing nginx-mod-http-nchan (1.18.0-r1)
(47/157) Installing perl (5.30.3-r0)
(48/157) Installing perl-error (0.17029-r0)
(49/157) Installing perl-git (2.26.2-r0)
(50/157) Installing git-perl (2.26.2-r0)
(51/157) Installing nginx-mod-http-perl (1.18.0-r1)
(52/157) Installing nginx-mod-http-redis2 (1.18.0-r1)
(53/157) Installing nginx-mod-http-set-misc (1.18.0-r1)
(54/157) Installing nginx-mod-http-upload-progress (1.18.0-r1)
(55/157) Installing libxslt (1.1.34-r0)
(56/157) Installing nginx-mod-http-xslt-filter (1.18.0-r1)
(57/157) Installing nginx-mod-mail (1.18.0-r1)
(58/157) Installing nginx-mod-rtmp (1.18.0-r1)
(59/157) Installing nginx-mod-stream (1.18.0-r1)
(60/157) Installing nginx-mod-stream-geoip2 (1.18.0-r1)
(61/157) Installing nginx-vim (1.18.0-r1)
(62/157) Upgrading php7-common (7.3.26-r0 -> 7.3.27-r0)
(63/157) Upgrading php7 (7.3.26-r0 -> 7.3.27-r0)
(64/157) Installing php7-bcmath (7.3.27-r0)
(65/157) Installing php7-bz2 (7.3.27-r0)
(66/157) Installing php7-ctype (7.3.27-r0)
(67/157) Installing php7-curl (7.3.27-r0)
(68/157) Installing php7-dom (7.3.27-r0)
(69/157) Upgrading php7-mbstring (7.3.26-r0 -> 7.3.27-r0)
(70/157) Installing php7-exif (7.3.27-r0)
(71/157) Installing php7-ftp (7.3.27-r0)
(72/157) Installing libxau (1.0.9-r0)
(73/157) Installing libbsd (0.10.0-r0)
(74/157) Installing libxdmcp (1.1.3-r0)
(75/157) Installing libxcb (1.14-r1)
(76/157) Installing libx11 (1.6.12-r0)
(77/157) Installing libxext (1.3.4-r0)
(78/157) Installing libice (1.0.10-r0)
(79/157) Installing libsm (1.2.3-r0)
(80/157) Installing libxt (1.2.0-r0)
(81/157) Installing libxpm (3.5.13-r0)
(82/157) Installing php7-gd (7.3.27-r0)
(83/157) Installing php7-iconv (7.3.27-r0)
(84/157) Installing c-client (2007f-r11)
(85/157) Installing php7-imap (7.3.27-r0)
(86/157) Installing icu-libs (67.1-r0)
(87/157) Installing php7-intl (7.3.27-r0)
(88/157) Upgrading php7-json (7.3.26-r0 -> 7.3.27-r0)
(89/157) Installing php7-ldap (7.3.27-r0)
(90/157) Installing libmcrypt (2.5.8-r8)
(91/157) Installing php7-pecl-mcrypt (1.0.3-r0)
(92/157) Upgrading php7-session (7.3.26-r0 -> 7.3.27-r0)
(93/157) Installing php7-pecl-igbinary (3.1.6-r0)
(94/157) Installing libmemcached-libs (1.0.18-r4)
(95/157) Installing php7-pecl-memcached (3.1.5-r0)
(96/157) Upgrading php7-openssl (7.3.26-r0 -> 7.3.27-r0)
(97/157) Installing php7-mysqlnd (7.3.27-r0)
(98/157) Installing php7-mysqli (7.3.27-r0)
(99/157) Installing php7-opcache (7.3.27-r0)
(100/157) Installing php7-pdo (7.3.27-r0)
(101/157) Installing php7-pdo_mysql (7.3.27-r0)
(102/157) Installing unixodbc (2.3.7-r2)
(103/157) Installing php7-pdo_odbc (7.3.27-r0)
(104/157) Installing libpq (12.5-r0)
(105/157) Installing php7-pdo_pgsql (7.3.27-r0)
(106/157) Installing php7-pdo_sqlite (7.3.27-r0)
(107/157) Upgrading php7-xml (7.3.26-r0 -> 7.3.27-r0)
(108/157) Installing php7-pear (7.3.27-r0)
(109/157) Installing php7-pecl-apcu (5.1.19-r0)
(110/157) Installing php7-pecl-redis (5.2.2-r1)
(111/157) Installing php7-pgsql (7.3.27-r0)
(112/157) Installing php7-phar (7.3.27-r0)
(113/157) Installing php7-posix (7.3.27-r0)
(114/157) Installing php7-soap (7.3.27-r0)
(115/157) Installing php7-sockets (7.3.27-r0)
(116/157) Installing libsodium (1.0.18-r0)
(117/157) Installing php7-sodium (7.3.27-r0)
(118/157) Installing php7-sqlite3 (7.3.27-r0)
(119/157) Installing php7-tokenizer (7.3.27-r0)
(120/157) Installing php7-xmlreader (7.3.27-r0)
(121/157) Installing php7-xmlrpc (7.3.27-r0)
(122/157) Installing php7-xsl (7.3.27-r0)
(123/157) Installing libzip (1.6.1-r1)
(124/157) Installing php7-zip (7.3.27-r0)
(125/157) Installing py3-cparser (2.20-r0)
(126/157) Installing py3-cffi (1.14.0-r2)
(127/157) Installing py3-idna (2.9-r0)
(128/157) Installing py3-asn1crypto (1.3.0-r0)
(129/157) Installing py3-six (1.15.0-r0)
(130/157) Installing py3-cryptography (2.9.2-r0)
(131/157) Installing py3-ordered-set (4.0.1-r0)
(132/157) Installing py3-appdirs (1.4.4-r1)
(133/157) Installing py3-parsing (2.4.7-r0)
(134/157) Installing py3-packaging (20.4-r0)
(135/157) Installing py3-setuptools (47.0.0-r0)
(136/157) Installing py3-future (0.18.2-r1)
(137/157) Installing py3-chardet (3.0.4-r4)
(138/157) Installing py3-certifi (2020.4.5.1-r0)
(139/157) Installing py3-urllib3 (1.25.9-r0)
(140/157) Installing py3-requests (2.23.0-r0)
(141/157) Installing py3-msgpack (1.0.0-r0)
(142/157) Installing py3-lockfile (0.12.2-r3)
(143/157) Installing py3-cachecontrol (0.12.6-r0)
(144/157) Installing py3-colorama (0.4.3-r0)
(145/157) Installing py3-distlib (0.3.0-r0)
(146/157) Installing py3-distro (1.5.0-r1)
(147/157) Installing py3-webencodings (0.5.1-r3)
(148/157) Installing py3-html5lib (1.0.1-r4)
(149/157) Installing py3-pytoml (0.1.21-r0)
(150/157) Installing py3-pep517 (0.8.2-r0)
(151/157) Installing py3-progress (1.5-r0)
(152/157) Installing py3-toml (0.10.1-r0)
(153/157) Installing py3-retrying (1.3.3-r0)
(154/157) Installing py3-contextlib2 (0.6.0-r0)
(155/157) Installing py3-pip (20.1.1-r0)
(156/157) Installing libidn (1.35-r0)
(157/157) Installing whois (5.5.6-r0)
Executing busybox-1.31.1-r19.trigger
OK: 496 MiB in 237 packages
**** install certbot plugins ****
Collecting pip
  Downloading pip-21.0.1-py3-none-any.whl (1.5 MB)
Installing collected packages: pip
  Attempting uninstall: pip
    Found existing installation: pip 20.1.1
    Uninstalling pip-20.1.1:
      Successfully uninstalled pip-20.1.1
Successfully installed pip-21.0.1
Collecting certbot
  Downloading certbot-1.12.0-py2.py3-none-any.whl (251 kB)
Collecting certbot-dns-aliyun
  Downloading certbot_dns_aliyun-0.38.1-py2.py3-none-any.whl (11 kB)
Collecting certbot-dns-cloudflare
  Downloading certbot_dns_cloudflare-1.12.0-py2.py3-none-any.whl (11 kB)
Collecting certbot-dns-cloudxns
  Downloading certbot_dns_cloudxns-1.12.0-py2.py3-none-any.whl (9.0 kB)
Collecting certbot-dns-cpanel
  Downloading certbot_dns_cpanel-0.2.2-py2.py3-none-any.whl (8.5 kB)
Collecting certbot-dns-digitalocean
  Downloading certbot_dns_digitalocean-1.12.0-py2.py3-none-any.whl (9.8 kB)
Collecting certbot-dns-dnsimple
  Downloading certbot_dns_dnsimple-1.12.0-py2.py3-none-any.whl (8.8 kB)
Collecting certbot-dns-dnsmadeeasy
  Downloading certbot_dns_dnsmadeeasy-1.12.0-py2.py3-none-any.whl (9.1 kB)
Collecting certbot-dns-domeneshop
  Downloading certbot_dns_domeneshop-0.2.8-py2.py3-none-any.whl (9.1 kB)
Collecting certbot-dns-google
  Downloading certbot_dns_google-1.12.0-py2.py3-none-any.whl (11 kB)
Collecting certbot-dns-hetzner
  Downloading certbot_dns_hetzner-1.0.5-py2.py3-none-any.whl (12 kB)
Collecting certbot-dns-inwx
  Downloading certbot_dns_inwx-2.1.2-py2.py3-none-any.whl (12 kB)
Collecting certbot-dns-linode
  Downloading certbot_dns_linode-1.12.0-py2.py3-none-any.whl (9.2 kB)
Collecting certbot-dns-luadns
  Downloading certbot_dns_luadns-1.12.0-py2.py3-none-any.whl (8.9 kB)
Collecting certbot-dns-netcup
  Downloading certbot_dns_netcup-1.1.0-py2.py3-none-any.whl (9.4 kB)
Collecting certbot-dns-njalla
  Downloading certbot_dns_njalla-1.0.0-py3-none-any.whl (8.9 kB)
Collecting certbot-dns-nsone
  Downloading certbot_dns_nsone-1.12.0-py2.py3-none-any.whl (8.9 kB)
Collecting certbot-dns-ovh
  Downloading certbot_dns_ovh-1.12.0-py2.py3-none-any.whl (9.1 kB)
Collecting certbot-dns-rfc2136
  Downloading certbot_dns_rfc2136-1.12.0-py2.py3-none-any.whl (10 kB)
Collecting certbot-dns-route53
  Downloading certbot_dns_route53-1.12.0-py2.py3-none-any.whl (11 kB)
Collecting certbot-dns-transip
  Downloading certbot_dns_transip-0.3.3.tar.gz (13 kB)
Collecting certbot-plugin-gandi
  Downloading certbot_plugin_gandi-1.2.5-py3-none-any.whl (6.0 kB)
Requirement already satisfied: cryptography in /usr/lib/python3.8/site-packages (2.9.2)
Collecting cryptography
  Downloading cryptography-3.4.2.tar.gz (544 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Requirement already satisfied: requests in /usr/lib/python3.8/site-packages (2.23.0)
Collecting requests
  Downloading requests-2.25.1-py2.py3-none-any.whl (61 kB)
Requirement already satisfied: distro>=1.0.1 in /usr/lib/python3.8/site-packages (from certbot) (1.5.0)
Collecting zope.component
  Downloading zope.component-4.6.2-py2.py3-none-any.whl (67 kB)
Collecting josepy>=1.1.0
  Downloading josepy-1.6.0-py2.py3-none-any.whl (57 kB)
Collecting ConfigArgParse>=0.9.3
  Downloading ConfigArgParse-1.2.3.tar.gz (42 kB)
Collecting acme>=1.8.0
  Downloading acme-1.12.0-py2.py3-none-any.whl (42 kB)
Collecting pytz
  Downloading pytz-2021.1-py2.py3-none-any.whl (510 kB)
Collecting configobj>=5.0.6
  Downloading configobj-5.0.6.tar.gz (33 kB)
Collecting zope.interface
  Downloading zope.interface-5.2.0.tar.gz (227 kB)
Collecting parsedatetime>=2.4
  Downloading parsedatetime-2.6-py3-none-any.whl (42 kB)
Requirement already satisfied: setuptools>=39.0.1 in /usr/lib/python3.8/site-packages (from certbot) (47.0.0)
Collecting pyrfc3339
  Downloading pyRFC3339-1.1-py2.py3-none-any.whl (5.7 kB)
Requirement already satisfied: cffi>=1.12 in /usr/lib/python3.8/site-packages (from cryptography) (1.14.0)
Collecting PyOpenSSL>=17.3.0
  Downloading pyOpenSSL-20.0.1-py2.py3-none-any.whl (54 kB)
Requirement already satisfied: six>=1.11.0 in /usr/lib/python3.8/site-packages (from acme>=1.8.0->certbot) (1.15.0)
Collecting requests-toolbelt>=0.3.0
  Downloading requests_toolbelt-0.9.1-py2.py3-none-any.whl (54 kB)
Requirement already satisfied: pycparser in /usr/lib/python3.8/site-packages (from cffi>=1.12->cryptography) (2.20)
Requirement already satisfied: chardet<5,>=3.0.2 in /usr/lib/python3.8/site-packages (from requests) (3.0.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3.8/site-packages (from requests) (1.25.9)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3.8/site-packages (from requests) (2020.4.5.1)
Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3.8/site-packages (from requests) (2.9)
Collecting mock
  Downloading mock-4.0.3-py3-none-any.whl (28 kB)
Collecting dns-lexicon
  Downloading dns_lexicon-3.5.3-py3-none-any.whl (253 kB)
Collecting cloudflare>=1.5.1
  Downloading cloudflare-2.8.15.tar.gz (70 kB)
Collecting pyyaml
  Downloading PyYAML-5.4.1.tar.gz (175 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Collecting jsonlines
  Downloading jsonlines-2.0.0-py3-none-any.whl (6.3 kB)
Collecting beautifulsoup4
  Downloading beautifulsoup4-4.9.3-py3-none-any.whl (115 kB)
Requirement already satisfied: future<1,>=0 in /usr/lib/python3.8/site-packages (from dns-lexicon->certbot-dns-aliyun) (0.18.2)
Collecting tldextract<4,>=3
  Downloading tldextract-3.1.0-py2.py3-none-any.whl (87 kB)
Collecting soupsieve>1.2
  Downloading soupsieve-2.1-py3-none-any.whl (32 kB)
Collecting requests-file>=1.4
  Downloading requests_file-1.5.1-py2.py3-none-any.whl (3.7 kB)
Collecting filelock>=3.0.8
  Downloading filelock-3.0.12-py3-none-any.whl (7.6 kB)
Collecting python-digitalocean>=1.11
  Downloading python_digitalocean-1.16.0-py3-none-any.whl (39 kB)
Collecting jsonpickle
  Downloading jsonpickle-2.0.0-py2.py3-none-any.whl (37 kB)
Collecting domeneshop>=0.4.2
  Downloading domeneshop-0.4.2-py2.py3-none-any.whl (5.1 kB)
Collecting httplib2
  Downloading httplib2-0.19.0-py3-none-any.whl (95 kB)
Collecting google-api-python-client>=1.5.5
  Downloading google_api_python_client-1.12.8-py2.py3-none-any.whl (61 kB)
Collecting oauth2client>=4.0
  Downloading oauth2client-4.1.3-py2.py3-none-any.whl (98 kB)
Collecting google-auth>=1.16.0
  Downloading google_auth-1.25.0-py2.py3-none-any.whl (116 kB)
Collecting google-auth-httplib2>=0.0.3
  Downloading google_auth_httplib2-0.0.4-py2.py3-none-any.whl (9.1 kB)
Collecting uritemplate<4dev,>=3.0.0
  Downloading uritemplate-3.0.1-py2.py3-none-any.whl (15 kB)
Collecting google-api-core<2dev,>=1.21.0
  Downloading google_api_core-1.25.1-py2.py3-none-any.whl (92 kB)
Collecting protobuf>=3.12.0
  Downloading protobuf-3.14.0-py2.py3-none-any.whl (173 kB)
Collecting googleapis-common-protos<2.0dev,>=1.6.0
  Downloading googleapis_common_protos-1.52.0-py2.py3-none-any.whl (100 kB)
Collecting cachetools<5.0,>=2.0.0
  Downloading cachetools-4.2.1-py3-none-any.whl (12 kB)
Collecting pyasn1-modules>=0.2.1
  Downloading pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
Collecting rsa<5,>=3.1.4
  Downloading rsa-4.7-py3-none-any.whl (34 kB)
Requirement already satisfied: pyparsing<3,>=2.4.2 in /usr/lib/python3.8/site-packages (from httplib2->certbot-dns-google) (2.4.7)
Collecting pyasn1>=0.1.7
  Downloading pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
Collecting requests-mock
  Downloading requests_mock-1.8.0-py2.py3-none-any.whl (23 kB)
Collecting dnspython
  Downloading dnspython-2.1.0-py3-none-any.whl (241 kB)
Collecting boto3
  Downloading boto3-1.17.3-py2.py3-none-any.whl (130 kB)
Collecting transip~=2.1.0
  Downloading transip-2.1.2-py2.py3-none-any.whl (15 kB)
Collecting suds-jurko~=0.6
  Downloading suds-jurko-0.6.zip (255 kB)
Collecting botocore<1.21.0,>=1.20.3
  Downloading botocore-1.20.3-py2.py3-none-any.whl (7.2 MB)
Collecting jmespath<1.0.0,>=0.7.1
  Downloading jmespath-0.10.0-py2.py3-none-any.whl (24 kB)
Collecting s3transfer<0.4.0,>=0.3.0
  Downloading s3transfer-0.3.4-py2.py3-none-any.whl (69 kB)
Collecting python-dateutil<3.0.0,>=2.1
  Downloading python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
Collecting zope.event
  Downloading zope.event-4.5.0-py2.py3-none-any.whl (6.8 kB)
Collecting zope.hookable>=4.2.0
  Downloading zope.hookable-5.0.1.tar.gz (24 kB)
Collecting zope.deferredimport>=4.2.1
  Downloading zope.deferredimport-4.3.1-py2.py3-none-any.whl (10 kB)
Collecting zope.deprecation>=4.3.0
  Downloading zope.deprecation-4.4.0-py2.py3-none-any.whl (10 kB)
Collecting zope.proxy
  Downloading zope.proxy-4.3.5.tar.gz (45 kB)
Using legacy 'setup.py install' for ConfigArgParse, since package 'wheel' is not installed.
Using legacy 'setup.py install' for configobj, since package 'wheel' is not installed.
Using legacy 'setup.py install' for cloudflare, since package 'wheel' is not installed.
Using legacy 'setup.py install' for certbot-dns-transip, since package 'wheel' is not installed.
Using legacy 'setup.py install' for suds-jurko, since package 'wheel' is not installed.
Using legacy 'setup.py install' for zope.interface, since package 'wheel' is not installed.
Using legacy 'setup.py install' for zope.hookable, since package 'wheel' is not installed.
Using legacy 'setup.py install' for zope.proxy, since package 'wheel' is not installed.
Building wheels for collected packages: cryptography, pyyaml
  Building wheel for cryptography (PEP 517): started
  Building wheel for cryptography (PEP 517): finished with status 'error'
  ERROR: Command errored out with exit status 1:
   command: /usr/bin/python3 /usr/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmphy61v74n                                
       cwd: /tmp/pip-install-sjkmlpdb/cryptography_0dccedd9ccec4b60aa1f965c9741f47d                                                                         
  Complete output (148 lines):                                                                                                                              
  running bdist_wheel                                                                                                                                       
  running build                                                                                                                                             
  running build_py                                                                                                                                          
  creating build                                                                                                                                            
  creating build/lib.linux-x86_64-3.8                                                                                                                       
  creating build/lib.linux-x86_64-3.8/cryptography                                                                                                          
  copying src/cryptography/__init__.py -> build/lib.linux-x86_64-3.8/cryptography                                                                           
  copying src/cryptography/exceptions.py -> build/lib.linux-x86_64-3.8/cryptography                                                                         
  copying src/cryptography/__about__.py -> build/lib.linux-x86_64-3.8/cryptography                                                                          
  copying src/cryptography/fernet.py -> build/lib.linux-x86_64-3.8/cryptography                                                                             
  copying src/cryptography/utils.py -> build/lib.linux-x86_64-3.8/cryptography                                                                              
  creating build/lib.linux-x86_64-3.8/cryptography/x509                                                                                                     
  copying src/cryptography/x509/__init__.py -> build/lib.linux-x86_64-3.8/cryptography/x509                                                                 
  copying src/cryptography/x509/ocsp.py -> build/lib.linux-x86_64-3.8/cryptography/x509                                                                     
  copying src/cryptography/x509/name.py -> build/lib.linux-x86_64-3.8/cryptography/x509                                                                     
  copying src/cryptography/x509/base.py -> build/lib.linux-x86_64-3.8/cryptography/x509                                                                     
  copying src/cryptography/x509/certificate_transparency.py -> build/lib.linux-x86_64-3.8/cryptography/x509                                                 
  copying src/cryptography/x509/oid.py -> build/lib.linux-x86_64-3.8/cryptography/x509                                                                      
  copying src/cryptography/x509/general_name.py -> build/lib.linux-x86_64-3.8/cryptography/x509                                                             
  copying src/cryptography/x509/extensions.py -> build/lib.linux-x86_64-3.8/cryptography/x509                                                               
  creating build/lib.linux-x86_64-3.8/cryptography/hazmat                                                                                                   
  copying src/cryptography/hazmat/__init__.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat                                                             
  copying src/cryptography/hazmat/_types.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat                                                               
  copying src/cryptography/hazmat/_der.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat                                                                 
  copying src/cryptography/hazmat/_oid.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat                                                                 
  creating build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives                                                                                        
  copying src/cryptography/hazmat/primitives/hmac.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives                                           
  copying src/cryptography/hazmat/primitives/__init__.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives                                       
  copying src/cryptography/hazmat/primitives/poly1305.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives                                       
  copying src/cryptography/hazmat/primitives/keywrap.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives                                        
  copying src/cryptography/hazmat/primitives/_serialization.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives                                 
  copying src/cryptography/hazmat/primitives/cmac.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives                                           
  copying src/cryptography/hazmat/primitives/constant_time.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives                                  
  copying src/cryptography/hazmat/primitives/hashes.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives                                         
  copying src/cryptography/hazmat/primitives/padding.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives                                        
  copying src/cryptography/hazmat/primitives/_cipheralgorithm.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives                               
  copying src/cryptography/hazmat/primitives/_asymmetric.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives                                    
  creating build/lib.linux-x86_64-3.8/cryptography/hazmat/bindings                                                                                          
  copying src/cryptography/hazmat/bindings/__init__.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/bindings                                           
  creating build/lib.linux-x86_64-3.8/cryptography/hazmat/backends                                                                                          
  copying src/cryptography/hazmat/backends/__init__.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends                                           
  copying src/cryptography/hazmat/backends/interfaces.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends                                         
  creating build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/twofactor                                                                              
  copying src/cryptography/hazmat/primitives/twofactor/__init__.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/twofactor                   
  copying src/cryptography/hazmat/primitives/twofactor/hotp.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/twofactor                       
  copying src/cryptography/hazmat/primitives/twofactor/totp.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/twofactor                       
  copying src/cryptography/hazmat/primitives/twofactor/utils.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/twofactor                      
  creating build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/kdf                                                                                    
  copying src/cryptography/hazmat/primitives/kdf/pbkdf2.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/kdf                                 
  copying src/cryptography/hazmat/primitives/kdf/__init__.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/kdf                               
  copying src/cryptography/hazmat/primitives/kdf/x963kdf.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/kdf                                
  copying src/cryptography/hazmat/primitives/kdf/scrypt.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/kdf                                 
  copying src/cryptography/hazmat/primitives/kdf/kbkdf.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/kdf                                  
  copying src/cryptography/hazmat/primitives/kdf/concatkdf.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/kdf                              
  copying src/cryptography/hazmat/primitives/kdf/hkdf.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/kdf                                   
  creating build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/serialization                                                                          
  copying src/cryptography/hazmat/primitives/serialization/__init__.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/serialization           
  copying src/cryptography/hazmat/primitives/serialization/base.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/serialization               
  copying src/cryptography/hazmat/primitives/serialization/pkcs7.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/serialization              
  copying src/cryptography/hazmat/primitives/serialization/pkcs12.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/serialization             
  copying src/cryptography/hazmat/primitives/serialization/ssh.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/serialization                
  creating build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/ciphers                                                                                
  copying src/cryptography/hazmat/primitives/ciphers/aead.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/ciphers                           
  copying src/cryptography/hazmat/primitives/ciphers/__init__.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/ciphers                       
  copying src/cryptography/hazmat/primitives/ciphers/algorithms.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/ciphers                     
  copying src/cryptography/hazmat/primitives/ciphers/base.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/ciphers                           
  copying src/cryptography/hazmat/primitives/ciphers/modes.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/ciphers                          
  creating build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/asymmetric                                                                             
  copying src/cryptography/hazmat/primitives/asymmetric/ed448.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/asymmetric                    
  copying src/cryptography/hazmat/primitives/asymmetric/__init__.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/asymmetric                 
  copying src/cryptography/hazmat/primitives/asymmetric/ec.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/asymmetric                       
  copying src/cryptography/hazmat/primitives/asymmetric/dh.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/asymmetric                       
  copying src/cryptography/hazmat/primitives/asymmetric/rsa.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/asymmetric                      
  copying src/cryptography/hazmat/primitives/asymmetric/x448.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/asymmetric                     
  copying src/cryptography/hazmat/primitives/asymmetric/ed25519.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/asymmetric                  
  copying src/cryptography/hazmat/primitives/asymmetric/padding.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/asymmetric                  
  copying src/cryptography/hazmat/primitives/asymmetric/x25519.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/asymmetric                   
  copying src/cryptography/hazmat/primitives/asymmetric/dsa.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/asymmetric                      
  copying src/cryptography/hazmat/primitives/asymmetric/utils.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/primitives/asymmetric                    
  creating build/lib.linux-x86_64-3.8/cryptography/hazmat/bindings/openssl                                                                                  
  copying src/cryptography/hazmat/bindings/openssl/_conditional.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/bindings/openssl                       
  copying src/cryptography/hazmat/bindings/openssl/__init__.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/bindings/openssl                           
  copying src/cryptography/hazmat/bindings/openssl/binding.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/bindings/openssl                            
  creating build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl                                                                                  
  copying src/cryptography/hazmat/backends/openssl/ed448.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl                              
  copying src/cryptography/hazmat/backends/openssl/aead.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/hmac.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/__init__.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/ocsp.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/ec.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/dh.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/decode_asn1.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/poly1305.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/rsa.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/encode_asn1.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/ciphers.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/x448.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/cmac.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/hashes.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/ed25519.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/x25519.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/dsa.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/utils.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/backend.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  copying src/cryptography/hazmat/backends/openssl/x509.py -> build/lib.linux-x86_64-3.8/cryptography/hazmat/backends/openssl
  running egg_info
  writing src/cryptography.egg-info/PKG-INFO
  writing dependency_links to src/cryptography.egg-info/dependency_links.txt
  writing requirements to src/cryptography.egg-info/requires.txt
  writing top-level names to src/cryptography.egg-info/top_level.txt
  reading manifest file 'src/cryptography.egg-info/SOURCES.txt'
  reading manifest template 'MANIFEST.in'
  no previously-included directories found matching 'docs/_build'
  warning: no previously-included files found matching 'vectors'
  warning: no previously-included files matching '*' found under directory 'vectors'
  warning: no previously-included files matching '*' found under directory '.github'
  warning: no previously-included files found matching 'release.py'
  warning: no previously-included files found matching '.coveragerc'
  warning: no previously-included files found matching 'codecov.yml'
  warning: no previously-included files found matching '.readthedocs.yml'
  warning: no previously-included files found matching 'dev-requirements.txt'
  warning: no previously-included files found matching 'tox.ini'
  warning: no previously-included files found matching 'mypy.ini'
  warning: no previously-included files matching '*' found under directory '.zuul.d'
  warning: no previously-included files matching '*' found under directory '.zuul.playbooks'
  writing manifest file 'src/cryptography.egg-info/SOURCES.txt'
  running build_ext
  generating cffi module 'build/temp.linux-x86_64-3.8/_padding.c'
  creating build/temp.linux-x86_64-3.8
  generating cffi module 'build/temp.linux-x86_64-3.8/_openssl.c'
  running build_rust
  
      =============================DEBUG ASSISTANCE=============================
      If you are seeing a compilation error please try the following steps to
      successfully install cryptography:
      1) Upgrade to the latest pip and try again. This will fix errors for most
         users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
      2) Read https://cryptography.io/en/latest/installation.html for specific
         instructions for your platform.
      3) Check our frequently asked questions for more information:
         https://cryptography.io/en/latest/faq.html
      4) Ensure you have a recent Rust toolchain installed:
         https://cryptography.io/en/latest/installation.html#rust
      5) If you are experiencing issues with Rust for *this release only* you may
         set the environment variable `CRYPTOGRAPHY_DONT_BUILD_RUST=1`.
      =============================DEBUG ASSISTANCE=============================
  
  error: Can not find Rust compiler
  ----------------------------------------
  ERROR: Failed building wheel for cryptography
  Building wheel for pyyaml (PEP 517): started
  Building wheel for pyyaml (PEP 517): finished with status 'done'
  Created wheel for pyyaml: filename=PyYAML-5.4.1-cp38-cp38-linux_x86_64.whl size=45641 sha256=cafa9c2caf884012438e819f589d534d708425606547501620f4c8d8e698948f
  Stored in directory: /root/.cache/pip/wheels/dd/c5/1d/5d7436173d3efd4a14dcb510eb0b29525ecb6b0e41489e716e
Successfully built pyyaml
Failed to build cryptography
ERROR: Could not build wheels for cryptography which use PEP 517 and cannot be installed directly
The command '/bin/sh -c echo "**** install build packages ****" &&  apk add --no-cache --virtual=build-dependencies     g++     gcc     libffi-dev      openssl-dev         python3-dev &&  echo "**** install runtime packages ****" &&  apk add --no-cache --upgrade      curl    fail2ban        gnupg   memcached  nginx    nginx-mod-http-echo     nginx-mod-http-fancyindex       nginx-mod-http-geoip2   nginx-mod-http-headers-more     nginx-mod-http-image-filter     nginx-mod-http-lua  nginx-mod-http-lua-upstream     nginx-mod-http-nchan    nginx-mod-http-perl     nginx-mod-http-redis2   nginx-mod-http-set-misc         nginx-mod-http-upload-progress      nginx-mod-http-xslt-filter      nginx-mod-mail  nginx-mod-rtmp  nginx-mod-stream        nginx-mod-stream-geoip2         nginx-vim   php7-bcmath     php7-bz2        php7-ctype      php7-curl       php7-dom        php7-exif       php7-ftp        php7-gd         php7-iconv      php7-imap   php7-intl       php7-ldap       php7-mcrypt     php7-memcached  php7-mysqli     php7-mysqlnd    php7-opcache    php7-pdo_mysql  php7-pdo_odbc   php7-pdo_pgsql      php7-pdo_sqlite         php7-pear       php7-pecl-apcu  php7-pecl-redis         php7-pgsql      php7-phar       php7-posix      php7-soap  php7-sockets     php7-sodium     php7-sqlite3    php7-tokenizer  php7-xml        php7-xmlreader  php7-xmlrpc     php7-xsl        php7-zip        py3-cryptography    py3-future      py3-pip         whois &&  echo "**** install certbot plugins ****" &&  if [ -z ${CERTBOT_VERSION+x} ]; then         CERTBOT="certbot";  else         CERTBOT="certbot==${CERTBOT_VERSION}";  fi &&  pip3 install -U         pip &&  pip3 install -U         ${CERTBOT}      certbot-dns-aliyun certbot-dns-cloudflare   certbot-dns-cloudxns    certbot-dns-cpanel      certbot-dns-digitalocean        certbot-dns-dnsimple    certbot-dns-dnsmadeeasy    certbot-dns-domeneshop   certbot-dns-google      certbot-dns-hetzner     certbot-dns-inwx        certbot-dns-linode      certbot-dns-luadns      certbot-dns-netcup  certbot-dns-njalla      certbot-dns-nsone       certbot-dns-ovh         certbot-dns-rfc2136     certbot-dns-route53     certbot-dns-transip     certbot-plugin-gandi        cryptography    requests &&  echo "**** remove unnecessary fail2ban filters ****" &&  rm        /etc/fail2ban/jail.d/alpine-ssh.conf &&  echo "**** copy fail2ban default action and filter to /default ****" &&  mkdir -p /defaults/fail2ban &&  mv /etc/fail2ban/action.d /defaults/fail2ban/ &&  mv /etc/fail2ban/filter.d /defaults/fail2ban/ &&  echo "**** copy proxy confs to /default ****" &&  mkdir -p /defaults/proxy-confs &&  curl -o      /tmp/proxy.tar.gz -L        "https://github.com/linuxserver/reverse-proxy-confs/tarball/master" &&  tar xf  /tmp/proxy.tar.gz -C    /defaults/proxy-confs --strip-components=1 --exclude=linux*/.gitattributes --exclude=linux*/.github --exclude=linux*/.gitignore --exclude=linux*/LICENSE &&  echo "**** configure nginx ****" &&  rm -f /etc/nginx/conf.d/default.conf &&  curl -o      /defaults/dhparams.pem -L       "https://lsio.ams3.digitaloceanspaces.com/dhparams.pem" &&  echo "**** cleanup ****" &&  apk del --purge    build-dependencies &&  for cleanfiles in *.pyc *.pyo;   do      find /usr/lib/python3.*  -iname "${cleanfiles}" -exec rm -f '{}' +  ; done &&  rm -rf       /tmp/*  /root/.cache' returned a non-zero code: 1
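
For reference, the debug assistance block in that output already points at the two usual ways out of this failure. A minimal sketch of both, assuming the same Alpine-based build stage as the RUN command above (package names can differ between Alpine releases):

# Option 1: skip the Rust build for this cryptography release only, as suggested in the output above
export CRYPTOGRAPHY_DONT_BUILD_RUST=1
pip3 install -U cryptography

# Option 2: add a Rust toolchain to the build dependencies so the wheel can be built
apk add --no-cache --virtual=build-dependencies-rust cargo rust
pip3 install -U cryptography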

netcup dns validation issue (no TXT record written)

linuxserver.io


Expected Behavior

A wildcard SSL certificate is issued for my domain on netcup

Current Behavior

No TXT record is added to my domain

Steps to Reproduce

  1. prepare the docker-compose file
  2. create the dns-conf/netcup.ini file
  3. run docker-compose up -d

Environment

OS:
ubuntu 20
Linux 5.4.0-56-generic #62-Ubuntu SMP Mon Nov 23 19:20:19 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
CPU architecture: x86_64
How docker service was installed:
official docker repo

Command used to create docker container (run/create/compose/screenshot)

docker-compose.yml

---
version: "3.5"
services:
  swag:
    image: ghcr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=domain.tld
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=netcup
      - EMAIL= [email protected]
      - ONLY_SUBDOMAINS=false #optional
      #- EXTRA_DOMAINS= secondomain.com,www.secondomain.it,*.thirddomain.com
      - STAGING=false #optional
      #- MAXMINDDB_LICENSE_KEY= #optional
    volumes:
      - /docker/swag/config:/config
    ports:
      - 443:443
      - 80:80 #optional
    restart: unless-stopped
    networks:
      - letsencrypt_default

networks:
  letsencrypt_default:
    external: true

Docker logs

docker logs swag

...
-------------------------------------
GID/UID
-------------------------------------

User uid:    1000
User gid:    1000
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
generating self-signed keys in /config/keys, you can replace these with your own keys if required
Generating a RSA private key
....+++++
.........................................+++++
writing new private key to '/config/keys/cert.key'
-----
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=1000
PGID=1000
TZ=Europe/London
URL=domain.tld
SUBDOMAINS=wildcard
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=false
VALIDATION=dns
DNSPLUGIN=netcup
EMAIL= mail@gmailcom
STAGING=false

Created donoteditthisfile.conf
SUBDOMAINS entered, processing
Wildcard cert for domain.tld will be requested
E-mail address entered: mail@gmailcom
dns validation via netcup plugin is selected
Generating new certificate
Use of --manual-public-ip-logging-ok is deprecated.
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator dns-netcup, Installer None
Account registered.
Requesting a certificate for *.domain.tld and domain.tld
Performing the following challenges:
dns-01 challenge for domain.tld
dns-01 challenge for domain.tld
Unsafe permissions on credentials configuration file: /config/dns-conf/netcup.ini
Waiting 10 seconds for DNS changes to propagate
Waiting for verification...
Challenge failed for domain domain.tld
Challenge failed for domain domain.tld
dns-01 challenge for domain.tld
dns-01 challenge for domain.tld
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: domain.tld
   Type:   unauthorized
   Detail: No TXT record found at _acme-challenge.domain.tld

   Domain: domain.tld
   Type:   unauthorized
   Detail: No TXT record found at _acme-challenge.domain.tld

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A/AAAA record(s) for that domain
   contain(s) the right IP address.
ERROR: Cert does not exist! Please see the validation error above. Make sure you entered correct credentials into the /config/dns-conf/netcup.ini file.

I tracked down the issue in /var/log/letsencrypt/letsencrypt.log

2020-12-06 17:43:52,149:DEBUG:acme.client:Storing nonce: 0004lVrIOjytUY9S1by5XIaN5if1tMesHvBHUZZqI7c3M
2020-12-06 17:43:52,149:INFO:certbot._internal.auth_handler:Performing the following challenges:
2020-12-06 17:43:52,149:INFO:certbot._internal.auth_handler:dns-01 challenge for domain.tld
2020-12-06 17:43:52,150:INFO:certbot._internal.auth_handler:dns-01 challenge for domain.tld
2020-12-06 17:43:52,150:WARNING:certbot.plugins.dns_common:Unsafe permissions on credentials configuration file: /config/dns-conf/netcup.ini
2020-12-06 17:43:52,152:DEBUG:lexicon.providers.netcup:login({})
2020-12-06 17:43:52,153:DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): ccp.netcup.net:443
2020-12-06 17:43:52,198:DEBUG:urllib3.connectionpool:https://ccp.netcup.net:443 "POST /run/webservice/servers/endpoint.php?JSON HTTP/1.1" 200 228
2020-12-06 17:43:52,200:DEBUG:lexicon.providers.netcup:infoDnsZone({'domainname': 'domain.tld'})
2020-12-06 17:43:52,201:DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): ccp.netcup.net:443
2020-12-06 17:43:52,259:DEBUG:urllib3.connectionpool:https://ccp.netcup.net:443 "POST /run/webservice/servers/endpoint.php?JSON HTTP/1.1" 200 238
2020-12-06 17:43:52,261:DEBUG:lexicon.providers.netcup:infoDnsRecords({'domainname': 'domain.tld'})
2020-12-06 17:43:52,262:DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): ccp.netcup.net:443
2020-12-06 17:43:52,327:DEBUG:urllib3.connectionpool:https://ccp.netcup.net:443 "POST /run/webservice/servers/endpoint.php?JSON HTTP/1.1" 200 319
2020-12-06 17:43:52,329:DEBUG:lexicon.providers.netcup:list_records: []
2020-12-06 17:43:52,329:DEBUG:lexicon.providers.netcup:updateDnsRecords({'domainname': 'domain.tld', 'dnsrecordset': {'dnsrecords': [{'type': 'TXT', 'hostname': '_acme-challenge', 'destination': 'sp412mXHfTR1DyJ5EAp7AuhxHEgDKk2JxgePBy0BRMZ'}]}})
2020-12-06 17:43:52,330:DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): ccp.netcup.net:443
2020-12-06 17:43:54,445:DEBUG:urllib3.connectionpool:https://ccp.netcup.net:443 "POST /run/webservice/servers/endpoint.php?JSON HTTP/1.1" 200 419
2020-12-06 17:43:54,448:DEBUG:lexicon.providers.netcup:create_record: True
2020-12-06 17:43:54,450:DEBUG:lexicon.providers.netcup:login({})
2020-12-06 17:43:54,451:DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): ccp.netcup.net:443
2020-12-06 17:43:54,489:DEBUG:urllib3.connectionpool:https://ccp.netcup.net:443 "POST /run/webservice/servers/endpoint.php?JSON HTTP/1.1" 200 229
2020-12-06 17:43:54,491:DEBUG:lexicon.providers.netcup:infoDnsZone({'domainname': 'domain.tld'})
2020-12-06 17:43:54,492:DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): ccp.netcup.net:443
2020-12-06 17:43:54,551:DEBUG:urllib3.connectionpool:https://ccp.netcup.net:443 "POST /run/webservice/servers/endpoint.php?JSON HTTP/1.1" 200 240
2020-12-06 17:43:54,553:DEBUG:lexicon.providers.netcup:infoDnsRecords({'domainname': 'domain.tld'})
2020-12-06 17:43:54,555:DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): ccp.netcup.net:443
2020-12-06 17:43:54,613:DEBUG:urllib3.connectionpool:https://ccp.netcup.net:443 "POST /run/webservice/servers/endpoint.php?JSON HTTP/1.1" 200 394
2020-12-06 17:43:54,615:DEBUG:lexicon.providers.netcup:list_records: []
2020-12-06 17:43:54,616:DEBUG:lexicon.providers.netcup:updateDnsRecords({'domainname': 'domain.tld', 'dnsrecordset': {'dnsrecords': [{'type': 'TXT', 'hostname': '_acme-challenge', 'destination': 'SPdEcslSAc-uSMufAUUOIqHup5vp7L8WOrTZfGAT6PE'}]}})
2020-12-06 17:43:54,617:DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): ccp.netcup.net:443
2020-12-06 17:43:56,679:DEBUG:urllib3.connectionpool:https://ccp.netcup.net:443 "POST /run/webservice/servers/endpoint.php?JSON HTTP/1.1" 200 464
2020-12-06 17:43:56,681:DEBUG:lexicon.providers.netcup:create_record: True
2020-12-06 17:43:56,681:INFO:certbot.plugins.dns_common:Waiting 10 seconds for DNS changes to propagate

So the netcup login works; in fact, I even have the API log from their side showing a successful entry

Action status Feedback short Domain name / Handle Id API Key
updateDnsRecords Successfully DNS records successful updated domain.tld netcup.ini api key

Continuing through the log, the DNS records are then checked and they fail.

2020-12-06 17:44:08,621:DEBUG:acme.client:Received response:
HTTP 200
Server: nginx
Date: Sun, 06 Dec 2020 17:44:08 GMT
Content-Type: application/json
Content-Length: 556
Connection: keep-alive
Boulder-Requester: 105323570
Cache-Control: public, max-age=0, no-cache
Link: <https://acme-v02.api.letsencrypt.org/directory>;rel="index"
Replay-Nonce: 000379E_YezNUePmp4hplBBraSUNLwm2ltX96PdXkKqyEjp
X-Frame-Options: DENY
Strict-Transport-Security: max-age=604800

{
  "identifier": {
    "type": "dns",
    "value": "domain.tld"
  },
  "status": "invalid",
  "expires": "2020-12-13T17:43:51Z",
  "challenges": [
    {
      "type": "dns-01",
      "status": "invalid",
      "error": {
        "type": "urn:ietf:params:acme:error:unauthorized",
        "detail": "No TXT record found at _acme-challenge.domain.tld",
        "status": 403
      },
      "url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/9118012780/OUnXUA",
      "token": "9D0OB3k2ri0JWeAZ1TAwOpxyKfW48Jdyj6xTtA2weRI"
    }
  ],
  "wildcard": true
}

Checking the DNS records on the netcup side, the _acme-challenge TXT record is missing.

To recap:

  • the netcup.ini seems to be correct (judging from the container and netcup logs)
  • the domain's A record should be okay (I previously had individual https certificates on some subdomains)

I don't know; I think it's an API issue, either on the netcup side (why does updateDnsRecords report success if no record is actually added?) or in the netcup plugin.

As a workaround, do you think I could manually add the record to my netcup DNS by grepping the TXT value from the logs?
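
If you want to try that, a rough sketch (assuming the container is named swag and using the certbot log path shown above; note that a wildcard plus base-domain certificate needs two TXT values published at the same _acme-challenge name):

# pull the last two TXT values certbot tried to publish out of the certbot log
docker exec swag grep -o "'destination': '[^']*'" /var/log/letsencrypt/letsencrypt.log | tail -n 2
# after adding both values manually at netcup, confirm they are publicly visible before retrying
dig +short TXT _acme-challenge.domain.tld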

swag doesn't seem to be listening on port 443

linuxserver.io

If you are new to Docker or this application our issue tracker is ONLY used for reporting bugs or requesting features. Please use our discord server for general support.


Expected Behavior

A fresh install of swag results in both port 80 and 443 being opened, and entering the server IP into a browser will bring you to a placeholder "Welcome to our server" website.

Current Behavior

After spinning up a fresh swag container Let's Encrypt is able to generate a cert, and port 80 redirects to port 443 properly, but there doesn't seem to be anything listening on port 443.

Steps to Reproduce

I was running into this issue with a new swag install on a server with quite a few other services running, so I performed the following to isolate the issue.

  1. Created a new droplet on Digital Ocean with the latest docker CE and docker compose installed.
  2. Created a new A record for temp.myurl.com that points to the temporary server.
  3. Confirmed that UFW was inactive.
  4. Used docker-compose up -d with the .yml file listed in the section further down to create the swag container.
  5. Confirmed that port 80 was responding and port 443 was not responding using an online port scanner. Also confirmed that navigating to http://temp.myurl.com redirected to https://temp.myurl.com, but that https://temp.myurl.com would time out.
  6. Spun down swag by using docker-compose down.
  7. Spun up a container of nginx using the following docker compose file:
---
version: "2"
services:
  nginx:
    image: linuxserver/nginx
    container_name: nginx
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped
  8. Confirmed that port 80 and port 443 were both responding using an online port scanner. Also confirmed that navigating to http://temp.myurl.com redirected to https://temp.myurl.com and displayed the correct placeholder website using a self-signed cert.

Environment

OS: Ubuntu 20.04 LTS
CPU architecture: x86_64
How docker service was installed:
Using Digital Ocean's docker droplet, which has docker CE and docker compose pre-installed. After the droplet was up and running I updated everything using apt-get update && apt-get upgrade.

Command used to create docker container (run/create/compose/screenshot)

---
version: "2.1"
services:
  swag:
    image: linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - URL=myurl.com # I scrubbed the real url for this ticket
      - SUBDOMAINS=temp,
      - VALIDATION=http
      - ONLY_SUBDOMAINS=true
    volumes:
      - /opt/docker/swag/config:/config
    ports:
      - 443:443
    ports:
      - 80:80
    restart: unless-stopped
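
One thing worth checking in the compose file above: it contains two separate ports: keys, and most YAML parsers keep only the last occurrence, so only 80:80 would actually be published, which matches the symptom. A quick way to verify (container name taken from the file):

docker-compose config   # prints the effective, merged compose configuration
docker port swag        # lists the ports actually published on the running container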

Docker logs

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing...
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing...

-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/


Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid:    1000
User gid:    1000
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
generating self-signed keys in /config/keys, you can replace these with your own keys if required
Generating a RSA private key
............................+++++
...........................+++++
writing new private key to '/config/keys/cert.key'
-----
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=1000
PGID=1000
TZ=America/New_York
URL=myurl.com
SUBDOMAINS=temp,
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
VALIDATION=http
DNSPLUGIN=
EMAIL=
STAGING=

Created donoteditthisfile.conf
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are:  -d temp.myurl.com
No e-mail address entered or address invalid
http validation is selected
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Registering without email!
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for temp.myurl.com
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/temp.myurl.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/temp.myurl.com/privkey.pem
   Your cert will expire on 2020-12-26. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

New certificate generated; starting nginx
Starting 2019/12/30, GeoIP2 databases require personal license key to download. Please retrieve a free license key from MaxMind,
and add a new env variable "MAXMINDDB_LICENSE_KEY", set to your license key.
[cont-init.d] 50-config: exited 0.
[cont-init.d] 60-renew: executing...
The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am).
[cont-init.d] 60-renew: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
Server ready

http validation not available out of the box

linuxserver.io

If you are new to Docker or this application our issue tracker is ONLY used for reporting bugs or requesting features. Please use our discord server for general support.


Expected Behavior

Running the image like below should result in a container hosting the challenges on :80/.well-known/acme-challenge.

---
version: "2.1"
services:
  swag:
    image: linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Amsterdam
      - URL=home.spockz.nl
      - SUBDOMAINS=dockerhost
      - VALIDATION=httpl
      - EMAIL=redacted #optional
      - ONLY_SUBDOMAINS=false #optional
      - EXTRA_DOMAINS= #optional
      - STAGING=true #optional
    volumes:
      - /home/rancher/letsencrypt/config:/config
    ports:
      - 443:443
      - 8444:80 #optional
    restart: unless-stopped

Current Behavior

Port 80 appears not to be bound. From an attached shell:

 root@6e80ae52c0d1:/# curl localhost:80/
curl: (7) Failed to connect to localhost port 80: Connection refused

Steps to Reproduce

  1. docker-compose up with the contents above.

Environment

OS: RancherOS
CPU architecture: x86_64
How docker service was installed:
part of rancheros

Command used to create docker container (run/create/compose/screenshot)

see above.

Docker logs

[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=1000
PGID=1000
TZ=Europe/Amsterdam
URL=home.spockz.nl
SUBDOMAINS=dockerhost
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=false
VALIDATION=http
DNSPLUGIN=
EMAIL=redacted
STAGING=true

NOTICE: Staging is active
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Sub-domains processed are:  -d dockerhost.home.spockz.nl
E-mail address entered: redacted
http validation is selected
Different validation parameters entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for dockerhost.home.spockz.nl
Waiting for verification...
Challenge failed for domain dockerhost.home.spockz.nl
http-01 challenge for dockerhost.home.spockz.nl
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: dockerhost.home.spockz.nl
   Type:   connection
   Detail: Fetching
   http://dockerhost.home.spockz.nl/.well-known/acme-challenge/_mk792RfhUgnWANx_7Q4pzGKzteN_eK7bZWHrZvQ9tA:
   Timeout during connect (likely firewall problem)
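
For what it's worth, the ACME http-01 challenge is always fetched on port 80 of the domain's public address, so with the 8444:80 mapping above either external port 80 has to be forwarded to the host's 8444, or the container should publish 80:80 directly. A rough reachability check, assuming the hostname and container name from the compose file:

# run from outside the LAN; any HTTP response (even a 404) shows port 80 is reachable
curl -I http://dockerhost.home.spockz.nl/.well-known/acme-challenge/test
# confirm which host ports are actually published for the container
docker port swag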

config for trilium

linuxserver.io


Expected Behavior

When I add or delete pages in Trilium, I don't immediately see the change; I have to refresh the page. When I access the site on the LAN it works fine. I need an updated nginx config file, perhaps something that enables websockets.

Nginx config file

# make sure that your dns has a cname set for grafana and that your grafana container is not using a base url

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name notes.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        set $upstream_app trilium;
        set $upstream_port 8080;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
#        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection "upgrade";
#        proxy_set_header Host $host;
#        proxy_redirect off;
#        proxy_set_header X-Forwarded-Proto $scheme;
#  	     proxy_set_header X-Real-IP $remote_addr;
#        proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;

    }
}
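
To check whether the Upgrade/Connection headers in the location block above actually reach the app, a curl-based sketch (hostname and path are placeholders; a 101 Switching Protocols response means upgrades are being passed through, otherwise check the app's websocket path):

curl -i -N https://notes.yourdomain.tld/ \
  -H "Connection: Upgrade" -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" -H "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ=="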

Let's Encrypt recommendations

Hi,
I came across your post on Stack about your Let's Encrypt trouble. I use it for my web platform and it works great. I can't directly help you, but I can recommend testing whatever you're working on against the third-party validation services that work for my platform. The ones listed provide genuinely useful insight rather than being skin deep or pointless, and I hope they help you clean up whatever is broken by giving you information you may not yet have access to. This is not intended as self-promotion, so please feel free to delete this "bug report". If you have a question about one of the services, post it and I will try to help, though I'm mostly ignoring email while building my own email service, so it might take a few days. Good luck!

https://www.jabcreations.com/performance/#third-parties

Enable full webdav support

swag does not include full support for webdav.


Desired Behavior

The docker image should include the "nginx-mod-http-dav-ext" module, as the stock nginx dav module only implements partial support.

Current Behavior

Cannot have full webdav support with stock image:
nginx: [emerg] unknown directive "dav_ext_methods" in /config/nginx/site-confs/myconf...

nginx-dav-ext-module provides the additional implementation and is required for useful configurations. It provides the PROPFIND, OPTIONS, LOCK & UNLOCK dav methods.

Alternatives Considered

Use limited dav support from stock nginx module.
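
Until the module ships in the image, one hedged workaround is to extend the image yourself, assuming the Alpine repositories used by the base image provide the nginx-mod-http-dav-ext package:

docker build -t swag-webdav - <<'EOF'
FROM linuxserver/swag:latest
RUN apk add --no-cache nginx-mod-http-dav-ext
EOF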

inwx dns plugin

the legacy certbot-dns-inwx:dns-inwx plugin name is about to be deprecated:

Plugin legacy name certbot-dns-inwx:dns-inwx may be removed in a future version. Please use dns-inwx instead.
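
For reference, the plugin itself keeps working; only the legacy prefixed name goes away, so any manual certbot invocation just needs the new authenticator name (a sketch with a placeholder domain, credentials configuration omitted):

certbot certonly --authenticator dns-inwx -d example.com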

invalid certificate error even though the certificate is generated correctly, and nginx does not start

linuxserver.io

swag creates a valid certificate, but it then claims the certificate was not generated and does not start the http server. It also generates a new certificate every time the container is restarted, so it exceeds the Let's Encrypt certificate generation limit very quickly.

Expected Behavior

Get the certificate and start nginx.

Current Behavior

An error is returned even though the output says the certificate was generated successfully, and nginx is not started, so the http server does not work.

Steps to Reproduce

  1. Install letsencrypt
  2. Start it and wait for the default files to be created
  3. Stop all containers
  4. Edit the config file to use heimdall
  5. Restart all containers
  6. Get the error

Environment

OS:
CPU architecture: arm32
How docker service was installed:
using a docker-compose file that brings up many linuxserver containers, like emby, nextcloud and heimdall

Command used to create docker container (run/create/compose/screenshot)

Docker logs

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator manual, Installer None
Renewing an existing certificate
IMPORTANT NOTES:

  • Congratulations! Your certificate and chain have been saved at:
    /etc/letsencrypt/live/myserver/fullchain.pem
    Your key file has been saved at:
    /etc/letsencrypt/live/myserver/privkey.pem
    Your cert will expire on 2021-02-10. To obtain a new or tweaked
    version of this certificate in the future, simply run certbot
    again. To non-interactively renew all of your certificates, run
    "certbot renew"
  • If you like Certbot, please consider supporting our work by:
    Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
    Donating to EFF: https://eff.org/donate-le
    ERROR: Cert does not exist! Please see the validation error above. Make sure your DUCKDNSTOKEN is correct.
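
To see what certbot actually produced versus what the init script expects, it can help to list the live certificate directories inside the container (the myserver name comes from the log above; the container name swag is assumed):

docker exec swag ls -l /etc/letsencrypt/live/
# donoteditthisfile.conf records the settings the last cert was generated with (path assumed to be under /config)
docker exec swag cat /config/donoteditthisfile.conf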

nginx logrotate fails as /var/run/ss6/services does not exist

I have just been trying to set up logrotate and found that the default nginx logrotate config runs s6-svc -h /var/run/ss6/services/nginx, but that path does not exist. I believe the path is misspelt: s6-svc -h /var/run/s6/services/nginx points at a real service directory and reloads nginx after the files have been rotated.
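
A quick way to confirm the correct path from inside a running container and to test the reload by hand (container name assumed to be swag):

docker exec swag ls /var/run/s6/services/               # the nginx service directory lives here, not under ss6
docker exec swag s6-svc -h /var/run/s6/services/nginx   # sends SIGHUP so nginx reloads after rotation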

OpenSSL 1.1.1g Advisory

linuxserver.io

CISA published a vulnerability bulletin covering all versions of OpenSSL prior to v1.1.1i. The notice can be found here:
https://us-cert.cisa.gov/ncas/current-activity/2020/12/08/openssl-releases-security-update
OpenSSL's details can be found here:
https://www.openssl.org/news/secadv/20201208.txt


Expected Behavior

Upgrade to version 1.1.1i

Current Behavior

Existing 1.1.1g install is susceptible to a DoS condition.
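
An easy way to confirm which OpenSSL build a running container actually ships (container name assumed to be swag):

docker exec swag openssl version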

Steps to Reproduce

  1. N/A

Environment

OS: UnRAID 6.9.0-beta35
CPU architecture: x86_64, however all releases are impacted.

How docker service was installed:

Command used to create docker container (run/create/compose/screenshot)

Installed via Community Applications in UnRAID 6.9

Docker logs

[Feature Request] NTLM Support

Desired Behavior

Could you add this NTLM module to SWAG? I want to reverse proxy my self-hosted DevOps Server (Express), but the free version of nginx doesn't support NTLM authentication without a plugin.

https://github.com/gabihodoroaga/nginx-ntlm-module

Thank you

Current Behavior

NTLM authentication not working

Alternatives Considered

None ... Maybe i will cry ... ^^

Overwrite nginx.conf

Expected Behavior

To set a proper log_format, which is a directive inside the http block in nginx, I need to overwrite the file /config/nginx/nginx.conf

The file /config/nginx/nginx.conf should be overwritten by my Dockerfile, which is the following:

FROM linuxserver/swag:latest

COPY dns-confs/google.json /config/dns-conf/google.json
COPY proxy-confs /config/nginx/proxy-confs
COPY site-confs/default.conf /config/nginx/site-confs/default
COPY nginx-confs/nginx.conf /config/nginx/nginx.conf
RUN cat /config/nginx/nginx.conf
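
Worth noting: /config is the image's volume, so anything copied there at build time is hidden as soon as a volume (or PVC) is mounted over /config at runtime, and the init scripts then populate the empty volume with the stock defaults (which matches the file shown further below). A sketch of mounting the customised file over the managed one instead, with the file path taken from the Dockerfile above and the rest assumed:

docker run -d --name swag \
  -v /docker/swag/config:/config \
  -v $(pwd)/nginx-confs/nginx.conf:/config/nginx/nginx.conf:ro \
  my-custom-swag-image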

Current Behavior

The file evidently gets overwritten at runtime.

Docker logs

Building the image... As you can see, the cat command prints out my modified nginx.conf, so the file is present in the image.

docker-compose build test_proxy
Building test_proxy
Step 1/6 : FROM linuxserver/swag:latest
 ---> d4fcb1a85a88
Step 2/6 : COPY dns-confs/google.json /config/dns-conf/google.json
 ---> Using cache
 ---> e401aafc1646
Step 3/6 : COPY proxy-confs /config/nginx/proxy-confs
 ---> Using cache
 ---> b3ed6d783c4f
Step 4/6 : COPY site-confs/default.conf /config/nginx/site-confs/default
 ---> Using cache
 ---> b3c33d3ff6d8
Step 5/6 : COPY nginx-confs/nginx.conf /config/nginx/nginx.conf
 ---> 90b4692b4d85
Step 6/6 : RUN cat /config/nginx/nginx.conf
 ---> Running in d5c42c6202ca
## Version 2019/12/19 - Changelog: https://github.com/linuxserver/docker-swag/commits/master/root/defaults/nginx.conf

user abc;
worker_processes 4;
pid /run/nginx.pid;
include /etc/nginx/modules/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    variables_hash_max_size 2048;
    large_client_header_buffers 4 16k;

    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    client_max_body_size 0;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    log_format main escape=json '{'
    '"message":     "$host: $request",'
    '"host": "$host",'
    '"remote_addr": "$remote_addr",'
    '"remote_user": "$remote_user",'
    '"time_local":  "$time_local",'
    '"request":     "$request",'
    '"status":      "$status",'
    '"body_bytes_sent": "$body_bytes_sent",'
    '"http_referer": "$http_referer",'
    '"http_user_agent": "$http_user_agent"'
    '}';

    access_log /dev/stdout main;
    error_log /dev/stderr;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##

    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##

    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /config/nginx/site-confs/*;
    lua_load_resty_core off;

}

daemon off;Removing intermediate container d5c42c6202ca
 ---> 89e7634348ba
Successfully built 89e7634348ba

Deploying it to my Kubernetes cluster, the file gets overwritten (maybe by the runtime or a startup script):

root@proxy-ddd5764c8-b959v:/# cat /config/nginx/nginx.conf
## Version 2019/12/19 - Changelog: https://github.com/linuxserver/docker-swag/commits/master/root/defaults/nginx.conf

user abc;
worker_processes 4;
pid /run/nginx.pid;
include /etc/nginx/modules/*.conf;

events {
	worker_connections 768;
	# multi_accept on;
}

http {

	##
	# Basic Settings
	##

	sendfile on;
	tcp_nopush on;
	tcp_nodelay on;
	keepalive_timeout 65;
	types_hash_max_size 2048;
	variables_hash_max_size 2048;
	large_client_header_buffers 4 16k;

	# server_tokens off;

	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	client_max_body_size 0;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	##
	# Logging Settings
	##

	access_log /config/log/nginx/access.log;
	error_log /config/log/nginx/error.log;

	##
	# Gzip Settings
	##

	gzip on;
	gzip_disable "msie6";

	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

	##
	# nginx-naxsi config
	##
	# Uncomment it if you installed nginx-naxsi
	##

	#include /etc/nginx/naxsi_core.rules;

	##
	# nginx-passenger config
	##
	# Uncomment it if you installed nginx-passenger
	##

	#passenger_root /usr;
	#passenger_ruby /usr/bin/ruby;

	##
	# Virtual Host Configs
	##
	include /etc/nginx/conf.d/*.conf;
	include /config/nginx/site-confs/*;
	lua_load_resty_core off;

}


#mail {
#	# See sample authentication script at:
#	# http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#	# auth_http localhost/auth.php;
#	# pop3_capabilities "TOP" "USER";
#	# imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#	server {
#		listen     localhost:110;
#		protocol   pop3;
#		proxy      on;
#	}
#
#	server {
#		listen     localhost:143;
#		protocol   imap;
#		proxy      on;
#	}
#}
daemon off;
root@proxy-ddd5764c8-b959v:/# 
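A likely explanation is that a volume is mounted at /config at runtime, which hides anything baked into that path in a derived image. If the goal is just to ship a custom nginx.conf, mounting the file over the generated one may be simpler than baking it in; a minimal sketch in compose terms, with hypothetical local paths (in Kubernetes the equivalent would be a subPath mount):

    volumes:
      - ./config:/config
      - ./nginx-confs/nginx.conf:/config/nginx/nginx.conf:ro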

Modify UnifiController sample configuration.

Add the following to the unifi-controller sample config. Without this config I got WebSocket errors inside the app.

location /wss/ {
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_read_timeout 86400;
    proxy_set_header Host $http_host;
}

Docker version 19.03.8, build afacb8b7f0
Ubuntu 20.04.1 LTS x86_64
Unifi controller image from LinuxServer: https://hub.docker.com/r/linuxserver/unifi-controller

Using a QNET or macvlan network fails - Port Contention

linuxserver.io


Expected Behavior

On my system (QNAP NAS) ports 80 and 443 are in use by the QNAP interface, so the default ports won't fly, and they won't fly in any type of bridge network. When trying to start the SWAG container using a QNET-based network (QNET seems to be identical to macvlan, but QNAP gonna QNAP) in order to get around this port contention, the SWAG container is added to the QNET network, but the certbot and nginx containers are added to the default compose network.

The nginx container complains about the port being in use,

ERROR: for library_nginx_1 Cannot start service nginx: driver failed programming external connectivity on endpoint library_nginx_1 (container id): listen tcp 0.0.0.0:443: bind: address already in use

and everything goes to crap.

What I would like to see is that the certbot and nginx containers follow the SWAG container, or that there is some option to configure their networking in compose. I know I can mess around with NAT and ports, but that doesn't work for my application (my router doesn't really support hairpin NAT).

Relevant yml is below, I hope this is clear.

swag: # Secure Web Application Gateway, for Reverse Proxy and LE
  image: ghcr.io/linuxserver/swag
  container_name: swag
  networks:
    QNET:
      ipv4_address: 192.168.200.11
  cap_add:
    - NET_ADMIN
  environment:
    - PUID=0000
    - PGID=0000
    - TZ=Europe/London
    - URL=ombi.domain.tld
    - SUBDOMAINS=www,
    - VALIDATION=http
    - DNSPLUGIN=cloudflare #optional
    - PROPAGATION= #optional
    - DUCKDNSTOKEN= #optional
    - EMAIL= #optional
    - ONLY_SUBDOMAINS=true #optional
    - EXTRA_DOMAINS= #optional
    - STAGING=false #optional
    - MAXMINDDB_LICENSE_KEY= #optional
  volumes:
    - /share/CACHEDEV1_DATA/docker_stack/swag/config:/config
  ports:
    - 443:443/tcp
    - 80:80/tcp
  restart: unless-stopped

networks:
  torrent-net:
  QNET:
    external: true

certbot throws error on first certificate request (but succeeds)

linuxserver.io


Expected Behavior

When starting the container for the first time with the duckdns provider, a new certificate is generated.

Current Behavior

When obtaining a new certificate through duckdns, certbot throws the following error:

Waiting for verification...
Cleaning up challenges
usage:
  certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates.  By default,
it will attempt to use a webserver both for obtaining and installing the
certificate.
certbot: error: argument --manual-public-ip-logging-ok: expected one argument
New certificate generated; starting nginx

Note that the certificates are generated properly; it just throws this warning/error but continues running anyway.

Steps to Reproduce

  1. Use the following configuration for docker-compose:
    example.txt

  2. Check logs with docker logs swag -f

  3. Certbot prints the following error:

certbot: error: argument --manual-public-ip-logging-ok: expected one argument
  4. Certificates are generated properly

Environment

OS: Raspbian GNU/Linux 10 (buster)
CPU architecture: armv7l
How docker service was installed: Using the get-docker.sh file provided at https://get.docker.com

Command used to create docker container (run/create/compose/screenshot)

docker-compose up -d

Docker logs

log.txt

Permission problems /var/lib/nginx/tmp

linuxserver.io


Expected Behavior

I'm using this with several other linuxserver.io containers. The original issue was getting an "error 500" when trying to upload to Nextcloud. Looking through the logs I see that Emby has also had issues with nginx in SWAG.

Current Behavior

Inside SWAG config directory, /log/nginx/error.log shows permission errors.
2020/12/14 02:19:19 [crit] 576#576: [sanitized] open() "/var/lib/nginx/tmp/proxy/7/04/0000003047" failed (13: Permission denied) while reading upstream, client: [sanitized], server: emby., request: "GET /emby/videos/[sanitized] HTTP/1.1", upstream: "[sanitized]", host: "[sanitized]"
2020/12/19 23:32:50 [crit] 455#455: [sanitized] open() "/var/lib/nginx/tmp/client_body/0000003648" failed (13: Permission denied), client: [sanitized], server: nextcloud., request: "PUT /remote.php/dav/files/[sanitized] HTTP/1.1", host: "[sanitized]"

Steps to Reproduce

I went inside the container and ran "chmod 777 /var/lib/nginx/tmp -R", and that has corrected my issue, but I'm not sure whether this causes a separate security issue, and it won't survive a container update.
For a rundown of the original permissions from inside the container:
root@[sanitized]:/var/lib/nginx/tmp# ls -l
total 20
drwxr-xr-x 2 xfs xfs 4096 Dec 20 01:48 client_body
drwx------ 2 xfs root 4096 Dec 20 01:48 fastcgi
drwx------ 2 xfs root 4096 Dec 20 01:48 proxy
drwx------ 2 xfs root 4096 Dec 20 01:48 scgi
drwx------ 2 xfs root 4096 Dec 20 01:48 uwsgi
root@[sanitized]:/var/lib/nginx/tmp# ps aux | grep "nginx: worker process"
xfs 455 0.0 0.0 179816 9780 ? S 01:48 0:00 nginx: worker process
xfs 456 0.0 0.0 179796 9780 ? S 01:48 0:00 nginx: worker process
xfs 457 0.0 0.0 180200 9772 ? S 01:48 0:00 nginx: worker process
xfs 458 0.0 0.0 180096 9728 ? S 01:48 0:00 nginx: worker process
root 488 0.0 0.0 1580 4 pts/0 S+ 01:51 0:00 grep nginx: worker process

As you can see I've set the user to 33:33 in the docker command.

Sorry in advance if this is obviously my fault, I'm learning. Thanks for all these containers, linuxserver.io is amazing.
Thanks.

Environment

OS: Ubuntu 16.04.7 LTS
CPU architecture: x86_64
How docker service was installed:

Command used to create docker container (run/create/compose/screenshot)

#SWAG
docker run -d \
  --name=swag \
  --net=[sanitized] \
  -e PUID=33 \
  -e PGID=33 \
  -e TZ=America/New_York \
  -e URL=[sanitized] \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=duckdns \
  -e DUCKDNSTOKEN=[sanitized] \
  -e EMAIL=[sanitized] \
  -p 443:443 \
  -p 80:80 \
  -v /home/[sanitized]/swag:/config \
  --restart unless-stopped \
  linuxserver/swag

Docker logs

Reverse Proxy config

Hello,

I'm looking for a way to use Swag as a reverse proxy for a wireguard server in order to be able to close the 51820 port on my router.

For now the termination point is: wireguard.MYDOMAIN.EXT:51820

My WireGuard is running thanks to the 'WireHole' repo (in order to have Pi-hole and Unbound connected).

The WireGuard and SWAG containers are both in the same docker network (as well as some of my other applications); wireguard is connected to another network as well (the one shared with pihole and unbound).

I'm struggling with how my wireguard.subdomain.conf should look (I don't know anything about nginx)...

For now I tried this:

server {
    listen 51820;
    listen [::]:51820;

    server_name wireguard.*;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app wireguard;
        set $upstream_port 51820;
        proxy_pass $upstream_app:$upstream_port;
    }
}

Thanks for helping!

geoip2.conf missing variable for local access with country block

linuxserver.io


Expected Behavior

Fresh install of SWAG, to which a MaxMind license was added via the MAXMINDDB_LICENSE_KEY variable; the GeoIP2 database downloaded successfully. When configuring geoip2.conf, one country and a local subnet were whitelisted. I'm expecting to be able to reach my website from both inside and outside now.

Current Behavior

Access from outside works as configured: only requests from the whitelisted country will open my website. But direct local access from the whitelisted subnet (192.168.1.0/24, with client IP 192.168.1.30) is not working.

Solution

Add a new variable to the geoip2.conf file (note that nginx variable names cannot contain hyphens, so the variable is named $lan_ip):

# LOCAL IP ALLOW GEO BLOCK
geo $lan_ip {
    default no;
    192.168.1.0/24 yes;
}

Add the following to your site-conf file:

# Allow Local IP
if ($lan_ip = yes) {
    set $allowed_country yes;
}

# Country geo block
if ($allowed_country = no) {
    return 444;
}

Improve existing geoip2.conf template

The attached geoip2.conf template has been extended, adding the local IP check (lines 18 to 27; duplicate entries were removed). Please add this template in future versions.

geoip2.zip

TimeZone is not set correctly in the SWAG container

linuxserver.io


Expected Behavior

I've set a custom timezone in the docker-compose, so I would expect that if I get the current time on the container's console with the date command, it should not be UTC time.

For other containers (e.g. linuxserver/qbittorrent) the time zone setting works well (but that one is Ubuntu-based, as opposed to Alpine).

Current Behavior

If I get the current time on the container's console with the date command, it is UTC time.
This causes problems with fail2ban detection (for logs mounted from outside).

I have tried to mount the host's /etc/timezone and /etc/localtime, but the result was the same:
volumes:
- "/etc/timezone:/etc/timezone:ro"
- "/etc/localtime:/etc/localtime:ro"

Steps to Reproduce

  1. docker exec swag date
  2. the output is: "Sat Nov 14 23:13:26 UTC 2020"
  3. date
  4. the output is: "Sun Nov 15 00:13:29 CET 2020"

Environment

OS: raspbian
CPU architecture: arm64
How docker service was installed: from Raspbian Repository

Command used to create docker container (compose)

  swag:
    depends_on:
      - authelia
    image: linuxserver/swag:latest
    container_name: swag
    networks:
      - bridge
    ports:
      - 2443:443
      - 2080:80
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Budapest
      - URL=mydomain.eu
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
      - ONLY_SUBDOMAINS=false
      - MAXMINDDB_LICENSE_KEY=licensekey
    volumes:
      - /var/lib/docker/volumes/swag_data:/config
      - /var/lib/docker/volumes/bitwarden_data/bwlog:/config/log/bitwarden:ro
      - /var/lib/docker/volumes/homeassistant_data/home-assistant.log:/config/log/homeassistant/home-assistant.log:ro
      - /var/lib/docker/volumes/authelia_data/authelia.log:/config/log/authelia/authelia.log:ro
      - /var/log/auth.log:/config/log/openmediavault/auth.log:ro
      - /var/lib/docker/volumes/pihole_data/log/pihole.log:/config/log/pihole/pihole.log:ro
    labels:
      - "diun.enable=true"
    restart: always

Docker logs

[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
s6-svwait: fatal: supervisor died
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing...
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing...
usermod: no changes

-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/


Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid:    1000
User gid:    1000
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
chown: changing ownership of '/config/log/authelia/authelia.log': Read-only file system
chown: changing ownership of '/config/log/homeassistant/home-assistant.log': Read-only file system
chown: changing ownership of '/config/log/openmediavault/auth.log': Read-only file system
chown: changing ownership of '/config/log/pihole/pihole.log': Read-only file system
chown: changing ownership of '/config/log/bitwarden/bitwarden.log': Read-only file system
chown: changing ownership of '/config/log/bitwarden': Read-only file system
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=1000
PGID=1000
TZ=Europe/Budapest
URL=mydomain.eu
SUBDOMAINS=wildcard
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=false
VALIDATION=dns
DNSPLUGIN=cloudflare
EMAIL=
STAGING=

SUBDOMAINS entered, processing
Wildcard cert for mydomain.eu will be requested
No e-mail address entered or address invalid
dns validation via cloudflare plugin is selected
Certificate exists; parameters unchanged; starting nginx
chown: changing ownership of '/config/log/authelia/authelia.log': Read-only file system
chown: changing ownership of '/config/log/homeassistant/home-assistant.log': Read-only file system
chown: changing ownership of '/config/log/openmediavault/auth.log': Read-only file system
chown: changing ownership of '/config/log/pihole/pihole.log': Read-only file system
chown: changing ownership of '/config/log/bitwarden/bitwarden.log': Read-only file system
chown: changing ownership of '/config/log/bitwarden': Read-only file system
chmod: changing permissions of '/config/log/authelia/authelia.log': Read-only file system
chmod: changing permissions of '/config/log/homeassistant/home-assistant.log': Read-only file system
chmod: changing permissions of '/config/log/openmediavault/auth.log': Read-only file system
chmod: changing permissions of '/config/log/pihole/pihole.log': Read-only file system
chmod: changing permissions of '/config/log/bitwarden': Read-only file system
chmod: changing permissions of '/config/log/bitwarden/bitwarden.log': Read-only file system
[cont-init.d] 50-config: exited 0.
[cont-init.d] 60-renew: executing...
The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am).
[cont-init.d] 60-renew: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server ready
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)

le-renew.sh should use --deploy-hook

linuxserver.io


Expected Behavior

I was looking for a deploy hook in SWAG, found #25, and looked at the hooks of certbot.

After calling certbot -n renew, the certificate operations should only be executed when the certificate was actually renewed.
The operations should be done via the --deploy-hook of certbot renew.

See the note from certbot:

certbot renew exit status will only be 1 if a renewal attempt failed. This means certbot renew 
exit status will be 0 if no certificate needs to be updated. If you write a custom script and expect 
to run a command only after a certificate was actually renewed you will need to use the 
--deploy-hook since the exit status will be 0 both on successful renewal and when renewal is 
not necessary.

Current Behavior

Currently, certbot is called as shown below, which means that all operations are executed within the post-hook.

  certbot -n renew \
    --post-hook "if ps aux | grep [n]ginx: > /dev/null; then s6-svc -h /var/run/s6/services/nginx; fi; \
    cd /config/keys/letsencrypt && \
    openssl pkcs12 -export -out privkey.pfx -inkey privkey.pem -in cert.pem -certfile chain.pem -passout pass: && \
    sleep 1 && \
    cat privkey.pem fullchain.pem > priv-fullchain-bundle.pem && \
    chown -R abc:abc /config/etc/letsencrypt"
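A minimal sketch of the same call using --deploy-hook for the certificate operations; the commands are simply moved over from the existing post-hook, and this is not the project's actual implementation:

  certbot -n renew \
    --deploy-hook "cd /config/keys/letsencrypt && \
    openssl pkcs12 -export -out privkey.pfx -inkey privkey.pem -in cert.pem -certfile chain.pem -passout pass: && \
    sleep 1 && \
    cat privkey.pem fullchain.pem > priv-fullchain-bundle.pem && \
    chown -R abc:abc /config/etc/letsencrypt" \
    --post-hook "if ps aux | grep [n]ginx: > /dev/null; then s6-svc -h /var/run/s6/services/nginx; fi"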

Steps to Reproduce

See above

Environment

OS: Ubuntu Linux
CPU architecture: x86_64
How docker service was installed:
official docker repo

Command used to create docker container (run/create/compose/screenshot)

n/a

Docker logs

n/a

Blank page AdGuard docker-compose

Current Behavior

I'm using a docker-compose.yml with SWAG and including >25 Docker containers on a Raspberry Pi 4 (aarch64). The only one which does not work is AdGuard Home: it displays a blank page when opening https://adguard.mydomain.nl/install.html.

Steps to Reproduce

volumes/swag/config/nginx/proxy-conf/adguard.subdomain.conf

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name adguard.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

     location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app adguard;
        set $upstream_port 3000;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }

    location /control {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app adguard;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}

docker-compose.yml

version: '3'

services:
  adguard:
    image: adguard/adguardhome
    container_name: adguard
    restart: always
    ports:
      #- 80:80/tcp # Conflict with Swag
      #- 3000:3000/tcp # AdGuard Home service
      #- 67:67/udp # DHCP server not used
      #- 68:68/udp # DHCP server not used
      #- 68:68/tcp # DHCP server not used
      #    https://hub.docker.com/r/adguard/adguardhome
      #    Follow section Running on a system with 'resolved' daemon:
      - 53:53/tcp # DNS
      - 53:53/udp # DNS
      #- 853:853/tcp # DNS-over-TLS server not used
    volumes:
      - ${VOLUME_DIR}/adguard/work:/opt/adguardhome/work
      - ${VOLUME_DIR}/adguard/conf:/opt/adguardhome/conf

  swag:
    image: ghcr.io/linuxserver/swag
    container_name: swag
    restart: always
    ports:
      - 80:80
      - 443:443
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=${UID}
      - PGID=${GID}
      - EMAIL=${SWAG_EMAIL}
      - URL=${ROOT_DOMAIN}
      - SUBDOMAINS=${SWAG_SUBDOMAINS}
      - VALIDATION=${SWAG_VALIDATION}
      - DNSPLUGIN=${SWAG_DNSPLUGIN}
      - ONLY_SUBDOMAINS=${SWAG_ONLY_SUBDOMAINS}
      - TZ=${TZ}
    volumes:
      - ${VOLUME_DIR}/swag/config:/config

Docker logs

$ docker-compose up adguard
Starting adguard ... done
Attaching to adguard
adguard          | 2021/02/05 17:10:05 [error] Couldn't read config file /opt/adguardhome/conf/AdGuardHome.yaml: open /opt/adguardhome/conf/AdGuardHome.yaml: no such file or directory
adguard          | 2021/02/05 17:10:05 [info] AdGuard Home, version v0.104.3, channel release, arch linux arm64
adguard          | 2021/02/05 17:10:05 [info] This is the first time AdGuard Home is launched
adguard          | 2021/02/05 17:10:05 [info] Checking if AdGuard Home has necessary permissions
adguard          | 2021/02/05 17:10:05 [info] AdGuard Home can bind to port 53
adguard          | 2021/02/05 17:10:05 [info] Initializing auth module: /opt/adguardhome/work/data/sessions.db
adguard          | 2021/02/05 17:10:05 [info] Auth: initialized.  users:0  sessions:0
adguard          | 2021/02/05 17:10:05 [info] Initialize web module
adguard          | 2021/02/05 17:10:05 [info] This is the first launch of AdGuard Home, redirecting everything to /install.html 
adguard          | 2021/02/05 17:10:05 [info] AdGuard Home is available on the following addresses:
adguard          | 2021/02/05 17:10:05 [info] Go to http://127.0.0.1:3000
adguard          | 2021/02/05 17:10:05 [info] Go to http://172.20.0.17:3000

error.log

Empty.

access.log

<IP> - - [05/Feb/2021:18:28:05 +0100] "GET / HTTP/2.0" 302 36 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0"
<IP> - - [05/Feb/2021:18:28:05 +0100] "GET /install.html HTTP/2.0" 200 477 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0"

Environment

OS: uname -a: Linux raspberry 5.4.0-1028-raspi #31-Ubuntu SMP PREEMPT Wed Jan 20 11:30:45 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
CPU architecture: aarch64
How docker service was installed: sudo apt install docker.io docker-compose

However, exposing the adguard container port 3000:3000/tcp and opening the webpage directly at http://raspberry:3000/install.html works.

What could be the problem? Is my configuration wrong?

Allow wildcard sub-sub-domains and sub-domains together

linuxserver.io


Expected Behavior

I expected that placing *,*.test in the sub-domains would give me *.example.com and *.test.example.com, but it actually iterates all files in the directory and will instead give you a sub-domain for each file (because an unquoted * in bash expands to the files in the current directory). I tried using just wildcard for the sub-domains, but this makes my sub-sub-domains not work (such as yay.test.example.com).

It would be nice if I could define wildcard,*.test to achieve what I want (to avoid the * from iterating files in the current directory).

Current Behavior

Well, currently the only way I've found to get this working is to define every sub-domain in my sub-domain list and then add *.test as another sub-domain. This works, but I want my sub-domains to be wildcard as well.

Steps to Reproduce

  1. Select a DNS provider like cloudflare or the such so you can do wildcard certs
  2. Set the sub-domains to *,*.test
  3. Watch the logs as the sub-domain list gets filled with filenames from the current directory of the running script

Environment

OS: Unraid
CPU architecture: x86_64
How docker service was installed:

Installed via CA in Unraid

Command used to create docker container (run/create/compose/screenshot)

Created via UI

Docker logs

I'm not going to include my log. It literally dumped a sub-domain list of all the files in my www directory, and going through it to remove all my personal information would make the log useless anyway.

If a log is absolutely necessary please mention it and I can send it over email privately.

ERROR: Cert does not exist! Please see the validation error above. Make sure you entered correct credentials into the /config/dns-conf/cloudflare.ini file.

Expected Behavior

Certificates would be fetched and NGINX would start

Current Behavior

Telling me that my credentials for cloudflare.ini are incorrect

Steps to Reproduce

  1. Remove all files and folders for swag container
  2. Run docker-compose up -d swag
  3. Start swag
  4. error shows up.

Environment

OS: Windows 10
CPU architecture: x86_64
How docker service was installed: Docker desktop

Command used to create docker container (run/create/compose/screenshot)

docker-compose up -d swag

Docker logs


SUBDOMAINS entered, processing

Wildcard cert for thespicefiles.com will be requested

E-mail address entered: [email protected]

dns validation via cloudflare plugin is selected

Generating new certificate

Saving debug log to /var/log/letsencrypt/letsencrypt.log

Plugins selected: Authenticator dns-cloudflare, Installer None

Requesting a certificate for *.domain.com and domain.com

An unexpected error occurred:

There were too many requests of a given type :: Error creating new order :: too many certificates already issued for exact set of domains: *.domain.com,domain.com: see https://letsencrypt.org/docs/rate-limits/

Please see the logfiles in /var/log/letsencrypt for more details.

ERROR: Cert does not exist! Please see the validation error above. Make sure you entered correct credentials into the /config/dns-conf/cloudflare.ini file.

I have verified my credentials are correct, even going as far as changing my CF API Key.

Things I have tested:

  • commenting out the username/global API key lines and using my generated zone token
  • using generated zone token as the global key in the first section
  • deleting all folders for swag and re-adding the compose and running again.

I have checked and I have not exceeded the 50 cert requests in the last week.

Permissions are fine on all folders, and I really have no idea what is causing the error.

Disable TLSv1.0 and TLSv1.1

My ssl.conf file:

## Version 2020/10/29 - Changelog: https://github.com/linuxserver/docker-swag/commits/master/root/defaults/ssl.conf

### Mozilla Recommendations
# generated 2020-06-17, Mozilla Guideline v5.4, nginx 1.18.0-r0, OpenSSL 1.1.1g-r0, intermediate configuration
# https://ssl-config.mozilla.org/#server=nginx&version=1.18.0-r0&config=intermediate&openssl=1.1.1g-r0&guideline=5.4

ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m;  # about 40000 sessions
ssl_session_tickets off;

# intermediate configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;

# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;


### Linuxserver.io Defaults

# Certificates
ssl_certificate /config/keys/letsencrypt/fullchain.pem;
ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
# verify chain of trust of OCSP response using Root CA and Intermediate certs
ssl_trusted_certificate /config/keys/letsencrypt/fullchain.pem;

# Diffie-Hellman Parameters
ssl_dhparam /config/nginx/dhparams.pem;

# Resolver
resolver 127.0.0.11 valid=30s; # Docker DNS Server

# Enable TLS 1.3 early data
ssl_early_data on;

# HSTS, remove # from the line below to enable HSTS
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

# Optional additional headers
#add_header Cache-Control "no-transform" always;
add_header Content-Security-Policy "upgrade-insecure-requests; frame-ancestors 'self'";
add_header Referrer-Policy "same-origin" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
#add_header X-UA-Compatible "IE=Edge" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive, noimageindex, notranslate";

add_header Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=()";

Attempt: curl -I -v --tlsv1 --tls-max 1.0 https://mydomain.com

Expected Behavior

Error

Current Behavior

...
TLSv1.0 (OUT), TLS handshake, Client hello (1):
TLSv1.0 (IN), TLS handshake, Server hello (2):
...

It does not fail

Steps to Reproduce

  1. Ensure ssl_protocols TLSv1.2 TLSv1.3; is in ssl.conf
  2. curl -I -v --tlsv1 --tls-max 1.0 https://mydomain.com or curl -I -v --tlsv1.1 --tls-max 1.1 https://mydomain.com
  3. Check output

Environment

OS: unRaid 6.9.0-rc2
CPU architecture: arm64
How docker service was installed: CA template

Command used to create docker container (run/create/compose/screenshot)

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='swag' --net='proxynet' -e TZ="Europe/Paris" -e HOST_OS="Unraid" -e 'EMAIL'='' -e 'URL'='mydomain.com' -e 'SUBDOMAINS'='some' -e 'ONLY_SUBDOMAINS'='true' -e 'VALIDATION'='dns' -e 'DNSPLUGIN'='cloudflare' -e 'EXTRA_DOMAINS'='' -e 'STAGING'='false' -e 'DUCKDNSTOKEN'='' -e 'PROPAGATION'='' -e 'MAXMINDDB_LICENSE_KEY'='xxx' -e 'PUID'='99' -e 'PGID'='100' -p '180:80/tcp' -p '1443:443/tcp' -v '/mnt/user/appdata/swag':'/config':'rw' --cap-add=NET_ADMIN 'linuxserver/swag'

Docker logs

[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing... 
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing... 

-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \ 
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/


Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid:    99
User gid:    100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing... 
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing... 
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing... 
Variables set:
PUID=99
PGID=100
TZ=Europe/Paris
URL=zotarios.com
SUBDOMAINS=some
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
VALIDATION=dns
CERTPROVIDER=
DNSPLUGIN=cloudflare
EMAIL=
STAGING=false

Using Let's Encrypt as the cert provider
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are:  -d some.mydomain.com
No e-mail address entered or address invalid
dns validation via cloudflare plugin is selected
Certificate exists; parameters unchanged; starting nginx
[cont-init.d] 50-config: exited 0.
[cont-init.d] 60-renew: executing... 
The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am).
[cont-init.d] 60-renew: exited 0.
[cont-init.d] 70-templates: executing... 
[cont-init.d] 70-templates: exited 0.
[cont-init.d] 99-custom-files: executing... 
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
Server ready

IPv6

Hi,

I need to disable IPv6.

When disabling IPv6 on Ubuntu I get the error
(97: Address family not supported by protocol)

I need to disable IPv6 as it causes problems with my ISP.

How can I do it?

Thanks
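For reference, the (97: Address family not supported by protocol) error comes from nginx trying to bind the IPv6 listeners; the default site conf and the sample proxy confs contain lines like the fragment below, and commenting out the [::] variants should stop the IPv6 bind (a sketch, assuming no other IPv6 listeners are configured):

    listen 443 ssl;
    #listen [::]:443 ssl;  # comment out the IPv6 listener when the host has IPv6 disabled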

Add option for HTTP Validation to use the subdomain, or skip the root domain.

Expected Behavior

I want to host a number of subdomains (www.mydomain.com, apps.mydomain.com) but not the root domain (mydomain.com).

Current Behavior

VALIDATION=http uses the URL param (root domain) as URL for the ACME validation, even though I only want to verify the subdomains.

Steps to Reproduce

  1. Launch docker-compose with the configuration below.

Environment

OS: Debian
CPU architecture: x86_64
How docker service was installed: Following instructions on the docker website.

Command used to create docker container (run/create/compose/screenshot)

I created a "stack" in Portainer, which is equivalent to docker-compose.

version: 2
services:
  swag:
    image: linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - URL=mydomain.com
      - SUBDOMAINS=apps,auth
      - VALIDATION=http
      - STAGING=true #optional
    volumes:
      - /srv/swag:/config
    ports:
      - 443:443
      - 80:80 #optional
    restart: unless-stopped

Docker logs

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.,
[s6-init] ensuring user provided files have correct perms...exited 0.,
[fix-attrs.d] applying ownership & permissions fixes...,
[fix-attrs.d] done.,
[cont-init.d] executing container initialization scripts...,
[cont-init.d] 01-envfile: executing... ,
[cont-init.d] 01-envfile: exited 0.,
[cont-init.d] 10-adduser: executing... ,
,
-------------------------------------,
          _         (),
         | |  ___   _    __,
         | | / __| | |  /  \ ,
         | | \__ \ | | | () |,
         |_| |___/ |_|  \__/,
,
,
Brought to you by linuxserver.io,
-------------------------------------,
,
To support the app dev(s) visit:,
Certbot: https://supporters.eff.org/donate/support-work-on-certbot,
,
To support LSIO projects visit:,
https://www.linuxserver.io/donate/,
-------------------------------------,
GID/UID,
-------------------------------------,
,
User uid:    1000,
User gid:    1000,
-------------------------------------,
,
[cont-init.d] 10-adduser: exited 0.,
[cont-init.d] 20-config: executing... ,
[cont-init.d] 20-config: exited 0.,
[cont-init.d] 30-keygen: executing... ,
using keys found in /config/keys,
[cont-init.d] 30-keygen: exited 0.,
[cont-init.d] 50-config: executing... ,
Variables set:,
PUID=1000,
PGID=1000,
TZ=America/New_York,
URL=mydomain.com,
SUBDOMAINS=apps,auth,
EXTRA_DOMAINS=,
ONLY_SUBDOMAINS=false,
VALIDATION=http,
DNSPLUGIN=,
EMAIL=,
STAGING=true,
,
NOTICE: Staging is active,
SUBDOMAINS entered, processing,
SUBDOMAINS entered, processing,
Sub-domains processed are:  -d apps.mydomain.com -d auth.mydomain.com,
No e-mail address entered or address invalid,
http validation is selected,
Generating new certificate,
Saving debug log to /var/log/letsencrypt/letsencrypt.log,
Plugins selected: Authenticator standalone, Installer None,
Obtaining a new certificate,
Performing the following challenges:,
http-01 challenge for mydomain.com,
Waiting for verification...,
Challenge failed for domain mydomain.com,
http-01 challenge for mydomain.com,
Cleaning up challenges,
Some challenges have failed.,
IMPORTANT NOTES:,
 - The following errors were reported by the server:,
,
   Domain: mydomain.com,
   Type:   unauthorized,
   Detail: Invalid response from,
   https://mydomain.com/.well-known/acme-challenge/DusnGXNePNs7AA1c0ZZr7CksGGtE70d4Og-DXzt8zZw,
   [2606:4700:3033::681c:470]: "<!DOCTYPE html>\n<!--[if lt IE 7]>,
   <html class=\"no-js ie6 oldie\" lang=\"en-US\">,
   <![endif]-->\n<!--[if IE 7]>    <html class=\"no-js ",
,
   To fix these errors, please make sure that your domain name was,
   entered correctly and the DNS A/AAAA record(s) for that domain,
   contain(s) the right IP address.,
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container,

NOTE: the response to the challenge likely comes from the Cloudflare DNS proxy, as I have not forwarded (and don't intend to forward) the root domain to this server.

Running 'nginx -s reload' within container yields error

linuxserver.io


Expected Behavior

When I run nginx -s reload within the container, it should reload the config files without requiring a restart of nginx or the container.

Current Behavior

Running nginx -s reload yields the following error: open() "/run/nginx/nginx.pid" failed (2: No such file or directory).
It looks like this is because the default nginx.conf sets the PID location to /run/nginx.pid, but running nginx -V shows that it's expecting the PID to be at /run/nginx/nginx.pid. This is probably a matter of either adding mkdir /run/nginx to the dockerfile and changing the default nginx.conf to point to the new location, or recompiling nginx to expect the PID at /run/nginx.pid.
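As a stopgap, the reload can be requested through s6 instead of calling the nginx binary directly; this is the same command the image's renewal post-hook uses elsewhere in this tracker:

docker exec <container_name> s6-svc -h /var/run/s6/services/nginx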

Steps to Reproduce

  1. Pull the latest SWAG docker image from dockerhub
  2. Start up a container
  3. Run docker exec -it <container_name> nginx -s reload

Environment

OS: Ubuntu 20.04.1
CPU architecture: x86_64
How docker service was installed: Add official Docker repo, installed through Apt (followed instructions at https://docs.docker.com/engine/install/ubuntu/)

Command used to create docker container (run/create/compose/screenshot)

docker-compose.yaml:

version: "2.4"
services:
  gateway:
    image: linuxserver/swag
    container_name: gateway
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago
      - URL=my.domain
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
      - [email protected]
    volumes:
      - /srv/gateway:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

Docker logs

swag.log

Unable to get certs from Zerossl - KeyError

Expected Behavior

swag should get a certificate from ZeroSSL

Current Behavior

Getting "Unable to retrieve EAB credentials from ZeroSSL. Check the outgoing connections to api.zerossl.com and dns. Sleeping."

Steps to Reproduce

  1. Ran docker-compose seen below

Environment

OS: Ubuntu Server 20.10
CPU architecture: x86_64

How docker service was installed:
Followed the directions from the docker docs

Command used to create docker container (run/create/compose/screenshot)

---
version: "2.1"

networks:
  default:
    external:
      name:  ${MAIN_NETWORK}

services:
  swag:
    image: ghcr.io/linuxserver/swag
    container_name: webserver
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - URL=${MAIN_DOMAIN}
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - CERTPROVIDER=zerossl
      - DNSPLUGIN=cloudflare
      - EMAIL=${ADMIN_EMAIL}
      - ONLY_SUBDOMAINS=true
      - HTTP_PROXY=
      - HTTPS_PROXY=

    volumes:
      - ./config:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

Docker logs

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing... 
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing... 
usermod: no changes

-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \ 
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/


Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid:    1000
User gid:    1000
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing... 
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing... 
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing... 
Variables set:
PUID=1000
PGID=1000
TZ=America/New_York
URL=example.com
SUBDOMAINS=wildcard
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
VALIDATION=dns
CERTPROVIDER=
DNSPLUGIN=cloudflare
[email protected]
STAGING=

Using Let's Encrypt as the cert provider
SUBDOMAINS entered, processing
Wildcard cert for only the subdomains of example.com will be requested
E-mail address entered: [email protected]
dns validation via cloudflare plugin is selected
Different validation parameters entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
Unable to retrieve EAB credentials from ZeroSSL. Check the outgoing connections to api.zerossl.com and dns. Sleeping.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

Python error

Traceback (most recent call last):
   File "<string>", line 1, in <module>
 KeyError: 'eab_kid'
 Traceback (most recent call last):
   File "<string>", line 1, in <module>
 KeyError: 'eab_hmac_key'
 Unable to retrieve EAB credentials from ZeroSSL. Check the outgoing connections to api.zerossl.com and dns. Sleeping.

Proxy buffering will crash on big downloads above 1024M

Expected Behavior

When proxying other sites, SWAG/nginx should allow through whatever size of download is supported by the underlying application. It also shouldn't depend on the speed at which the client can read, nor store large files on the drive in order to buffer them.

Current Behavior

SWAG/nginx copies the contents of the file as fast as the backend server can provide it and serves it to the client, up to a limit of 1024M, at which point the download fails for the client even though the backend file was read fully.

Steps to Reproduce

  1. Setup a proxy to a backend nginx server on the same machine using the standard settings as copied from proxy-conf
  2. Have that backend nginx serve up a file of over 1024M, say 1.5GB in total size
  3. Download the file with Firefox.

Environment

OS: Ubuntu 20.04
CPU architecture: x86_64
How docker service was installed:
Installed from hub.docker.io.

Command used to create docker container (run/create/compose/screenshot)

Docker-compose using the following:

image: linuxserver/swag
container_name: swag
ports:
  - 443:443
  - 80:80
cap_add:
  - NET_ADMIN
environment:
  - PUID=1000
  - PGID=1000
  - TZ=Europe/***
  - URL=***
  - SUBDOMAINS=***
  - VALIDATION=http
  - EMAIL=***

Docker logs

The relevant log entry shows up in the local file system at log/nginx/error.log, and the message it produces follows this pattern:

2021/02/10 19:00:00 [error] 461#461: 1 upstream prematurely closed connection while reading upstream, client: ..., server: ., request: "GET /ABIGFILE HTTP/2.0", upstream: "http://...:80/ABIGFILE", host: ""

The solution

It seems to be caused by the default setting of proxy_max_temp_file_size (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_max_temp_file_size); it is a problem with buffering. When nginx is local it reads the backend as fast as it can and fills the temp file up to the limit (1024m), and when the client reaches that point it fails to get the subsequent bytes. Setting this value to 0 in the proxy config, along with the other proxy settings, stops nginx from producing pointless temp files of monster size and pegging the CPU the moment a request comes in, allows larger downloads to complete, and is what I would recommend be done to fix it.
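A minimal illustration of the suggested change; whether it belongs in proxy.conf or in an individual proxy conf is left open here:

    # do not spool proxied responses into large temp files on disk
    proxy_max_temp_file_size 0;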

Request php ext-xsl be added

linuxserver.io


Current Behavior

I installed Kimai2 for time tracking inside the container but it looks like one of the recent releases added ext-xsl as a dependency. Can this PHP extension please be added?

Steps to Reproduce

  1. Install Kimai2 via the console on the container

Environment

OS: Unraid
CPU architecture: x86_64
How docker service was installed:

Command used to create docker container (run/create/compose/screenshot)

Docker logs

"ssl3_get_record:wrong version number" after some time

linuxserver.io


Expected Behavior

Sites can be opened all the time

Current Behavior

SWAG is generally running fine, but from one Linux host I cannot open the pages after a while. When I wait a bit (~ an hour) it works again. During the time it does not work from that host, I can open the sites from another host.
Firefox reports "PR_CONNECT_RESET_ERROR".

With curl i get the following:

curl -v https://RETRACTED/
*   Trying xx.1.134.102:443...
* Connected to RETRACTED (xx.1.134.102) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
* Closing connection 0
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number

From another host I can get the site with curl without a problem, and some time later it works again from this host too.

Steps to Reproduce

  1. Have swag hosted site open for a while
  2. wait
  3. reload

Environment

OS: Fedora 33
CPU architecture: x86_64
How docker service was installed: official docker repo

Command used to create docker container (run/create/compose/screenshot)

swag:
    image: linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=8675309
      - PGID=8675309
      - TZ=Europe/Berlin
      - URL=RETRACTED
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare #optional
      - PROPAGATION= #optional
      - DUCKDNSTOKEN= #optional
      - EMAIL=RETRACTED #optional
      - ONLY_SUBDOMAINS=false #optional
      - EXTRA_DOMAINS=RETRACTED, RETRACTED
      - STAGING=false #optional
    volumes:
      - /opt/appdata/letsencrypt:/config
      - /opt/appdata/nextcloud:/var/www/html
    ports:
      - 443:443
      - 80:80 #optional
    restart: unless-stopped

Docker logs

[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing... 
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing... 
usermod: no changes
-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \ 
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/
Brought to you by linuxserver.io
-------------------------------------
To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot
To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid:    8675309
User gid:    8675309
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing... 
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing... 
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing... 
Variables set:
PUID=8675309
PGID=8675309
TZ=Europe/Berlin
URL=RETRACTED
SUBDOMAINS=wildcard
EXTRA_DOMAINS=RETRACTED
ONLY_SUBDOMAINS=false
VALIDATION=dns
DNSPLUGIN=cloudflare
EMAIL=RETRACTED
STAGING=false
SUBDOMAINS entered, processing
Wildcard cert for RETRACTED will be requested
EXTRA_DOMAINS entered, processing
Extra domains processed are:  -d RETRACTED
E-mail address entered: RETRACTED
dns validation via cloudflare plugin is selected
Certificate exists; parameters unchanged; starting nginx
Starting 2019/12/30, GeoIP2 databases require personal license key to download. Please retrieve a free license key from MaxMind,
and add a new env variable "MAXMINDDB_LICENSE_KEY", set to your license key.
[cont-init.d] 50-config: exited 0.
[cont-init.d] 60-renew: executing... 
The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am).
[cont-init.d] 60-renew: exited 0.
[cont-init.d] 70-templates: executing... 
[cont-init.d] 70-templates: exited 0.
[cont-init.d] 99-custom-files: executing... 
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
Server ready

Add usage Certbot DNS TransIP

The documentation of the Certbot DNS TransIP plugin was incomplete. After reverse engineering, I came up with the following documentation:

docker-swag/root/defaults/dns-conf/transip.ini

# Instructions: https://readthedocs.org/projects/certbot-dns-transip/
#
# This DNS plugin can be used to generate SSL wildcard certificates via TransIP DNS TXT records
#
# Login with your TransIP account and go to My Account | API:
# 1. API-settings: On
#
# 2. IP-address/ranges whitelist: Add a new authorized IP address (Swag Docker) to use the API
#
# 3. Generate a new Key Pair and copy the private key to a new transip.key file in the format:
#    -----BEGIN PRIVATE KEY-----
#    ...
#    -----END PRIVATE KEY-----
#
# 4. Convert the key to an RSA key with command:
#     openssl rsa -in transip.key -out /config/dns-conf/transip-rsa.key
#
# 5. Set permission
#     chmod 600 /config/dns-conf/transip-rsa.key
#
# 6. Replace <transip_username> below with your TransIP username
#
# 7. Create wildcard certificate with Swag environment variables:
#      SUBDOMAINS=wildcard
#      VALIDATION=dns
#      DNSPLUGIN=transip

dns_transip_username = <transip_username>
dns_transip_key_file = /config/dns-conf/transip-rsa.key

htpcmanager

/config/nginx/proxy-confs folder is missing a sample config for htpcmanager.
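For illustration, a sample along the lines of the other *.subdomain.conf files quoted in this tracker might look like this; the container name htpcmanager and port 8085 are assumptions to be adjusted:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name htpcmanager.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        # upstream name and port are assumptions; adjust to your setup
        set $upstream_app htpcmanager;
        set $upstream_port 8085;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}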

[Feature Request] Individual Cert for Each Subdomain

Hey Guys,
The issue I am having is that I have two domains served using SWAG: one for Plex and one for a personal blog that contains sensitive information. When you check the SSL certificate for either of the domains, both domains are listed in the certificate. It would be great if SWAG could create an individual SSL cert for each subdomain.

I basically run my docker command using these ENV vars

-e URL=myplexdomain.url 
-e EXTRA_DOMAINS= myblogdomain.url 

Address in use

linuxserver.io


Expected Behavior

Self-explanatory. See Errors below.

Current Behavior

nginx: [emerg] bind() to [::]:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address in use

Steps to Reproduce

  1. Start Swag
  2. Wait. This last time it had an uptime of 7 days before I got spammed with notifications for things going offline in the middle of the night.

Environment

OS: Ubuntu Server 20.04
CPU architecture: x86_64
How docker service was installed: docker repo
Docker Version: Docker version 19.03.13, build 4484c46d9d

Command used to create docker container (run/create/compose/screenshot)

https://i.imgur.com/1Gytpx5.png

Docker logs

docker logs swag: https://pastebin.facewan.com/?698b3fb8da3aba89#EzKt2Kj3AqiqRCDCgLDUViySr3vdKAzmy6p3nnfGZEhz
Logs from inside Portainer (shows the address issues): https://pastebin.facewan.com/?3a563e2b3b28d9fb#7XMxY3hXSjCm9toUcVGFUeo71Mosak4mD6WoW1kuuLu1

providing volume for /config seems to get overridden by container


Expected Behavior

config volume is mounted to container

Current Behavior

config volume is not mounted to container

Steps to Reproduce

version: "3"
services:
  swag:
    image: ghcr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=<my domain here>
      - VALIDATION=http
    volumes:
      - ./config:/config  # <-- this seems to have no effect; the local folder `config` exists
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

Environment

OS:
CPU architecture: x86_64
How docker service was installed:

Docker logs

no errors in docker log

nginx: [alert] detected a LuaJIT version which is not OpenResty's

Hi,
I set up the swag container in place of the previously running linuxserver/letsencrypt.
Both containers' logs show an alert and an error mentioning LuaJIT (check the logs below).
Do I need to make any changes, or can I ignore this alert?

Environment

OS: Debian 10 with openmediavault 5 installed
Linux omv.local 5.4.65-1-pve #1 SMP PVE 5.4.65-1 (Mon, 21 Sep 2020 15:40:22 +0200) x86_64 GNU/Linux
CPU architecture: x86_64
How docker service was installed: via apt command from default debian repo

Command used to create docker container (run/create/compose/screenshot)

Docker logs

How to get memcached to auto start

linuxserver.io


Is there any way to get Memcached to autostart on startup/update of the container?
running "memcached -l 127.0.0.1 -m 512-p 11211 -u abc" works but obviously is not persistent through restarts, I have tried doing a crontab with @reboot but it does not work. the only thing I can think of is to make a script to start it if it is not started and make it run every minute with corn. but there's got to be an easier way

ERROR Cert does not exist! although actions performed correctly

Expected Behavior

get new cert with dns inwx plugin, no error

Current Behavior

throws:
ERROR: Cert does not exist! Please see the validation error above. Make sure you entered correct credentials into the /config/dns-conf/inwx.ini file.

But certificates are generated, challenge cleaned up, all fine.

I suspect, though, that renewal does not work: the certificate has 7 days left before it expires, no action has been taken yet, and there have been no new log entries for more than 2 months.
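
A quick, non-destructive way to check whether renewal would succeed (standard certbot; the container name swag is assumed here) is a dry run, which exercises the dns challenge against the staging endpoint without replacing the live certificate:

docker exec -it swag certbot renew --dry-run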

Steps to Reproduce

  1. edit inwx.ini file
  2. run swag
  3. very last log entry is the error
  4. container keeps running

Environment

OS: Debian (edit: same behaviour on a Raspberry Pi 4 with Raspberry Pi OS)
CPU architecture: x86_64
How docker service was installed: according to the official Docker docs

Command used to create docker container (run/create/compose/screenshot)

created in portainer so I post config here

image
linuxserver/swag:latest

ports
444->443
81->80

volumes
/etc/letsencrypt -> /etc/letsencrypt
/config:letsencrypt_data

ENV
PGID 1000
TZ Europe/Berlin
URL a.b
DNSPLUGIN inwx
ONLY_SUBDOMAINS true
STAGING false
PUID 1000
VALIDATION dns
PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PS1 $(whoami)@$(hostname):$(pwd)$
HOME /root
TERM xterm
DHLEVEL 2048
S6_BEHAVIOUR_IF_STAGE2_FAILS 2
EMAIL [email protected]
SUBDOMAINS abc
AWS_CONFIG_FILE /config/dns-conf/route53.ini

Docker logs

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing...
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing...

-------------------------------------
[linuxserver.io ASCII art logo]
Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid: 1000
User gid: 1000
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...

Variables set:

PUID=1000
PGID=1000
TZ=Europe/Berlin
URL=a.b
SUBDOMAINS=abc
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
VALIDATION=dns
DNSPLUGIN=inwx
EMAIL=[email protected]
STAGING=false

rm: cannot remove '/etc/letsencrypt': Resource busy
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d abc.a.b
E-mail address entered: [email protected]
dns validation via inwx plugin is selected
Different validation parameters entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugin legacy name certbot-dns-inwx:dns-inwx may be removed in a future version. Please use dns-inwx instead.
Plugins selected: Authenticator certbot-dns-inwx:dns-inwx, Installer None
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for abc.a.b
Waiting 60 seconds for DNS changes to propagate
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:

  • Congratulations! Your certificate and chain have been saved at:
    /etc/letsencrypt/live/abc.a.b/fullchain.pem
    Your key file has been saved at:
    /etc/letsencrypt/live/abc.a.b/privkey.pem
    Your cert will expire on 2020-12-20. To obtain a new or tweaked
    version of this certificate in the future, simply run certbot
    again. To non-interactively renew all of your certificates, run
    "certbot renew"

  • Your account credentials have been saved in your Certbot
    configuration directory at /etc/letsencrypt. You should make a
    secure backup of this folder now. This configuration directory will
    also contain certificates and private keys obtained by Certbot so
    making regular backups of this folder is ideal.

  • If you like Certbot, please consider supporting our work by:

    Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
    Donating to EFF: https://eff.org/donate-le

ERROR: Cert does not exist! Please see the validation error above. Make sure you entered correct credentials into the /config/dns-conf/inwx.ini file.

Letsencrypt Log (last lines)

[...]
/etc/letsencrypt/live/abc.a.b/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/abc.a.b/privkey.pem
Your cert will expire on 2020-12-20. To obtain a new or tweaked version of this certificate in the future, simply run certbot again. To non-interactively renew all of your certificates, run "certbot renew"
2020-09-22 00:36:54,284:DEBUG:certbot._internal.reporter:Reporting to user: If you like Certbot, please consider supporting our work by:

Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le

Mock https certificate

linuxserver.io


Desired Behavior

Some env variable that disables cert validation.
This is useful for testing purposes where a valid cert is not required.

Current Behavior

A valid certificate validation method is required or else nginx doesn't start:
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

Alternatives Considered

Replacing init files with custom ones that avoid cert validation.
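
For local testing while such a variable does not exist, one rough sketch (not a SWAG feature; the paths and the CN are illustrative) is to generate a throwaway self-signed certificate and point the nginx ssl configuration at it:

# Generate a self-signed cert/key pair valid for one year
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout privkey.pem -out fullchain.pem \
  -subj '/CN=example.test'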

nginx -t and nginx -s reload not working

linuxserver.io


Expected Behavior

When using the nginx -t command, Nginx should test all configurations for any errors (-t = test). This way you can identify errors before restarting the service to avoid downtime. Example with the original Nginx image:

docker exec -it backend-nginx nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

When using the nginx -s reload command, Nginx should reload all the config files without restarting the server (-s reload sends the reload signal to the master process). This keeps the downtime of the reverse proxy to a minimum. Example with the original Nginx image:

docker exec -it backend-nginx nginx -s reload

# If no message is displayed, everything is fine.

When the container is recreated with docker-compose up -d --force-recreate, everything runs fine and the nginx config works too, so the config itself isn't wrong.

Using those two commands is best practice :) You can even set an alias for this to make life easier:
alias reverseproxyreload='docker exec backend-nginx nginx -t && docker exec backend-nginx nginx -s reload'

Current Behavior

docker exec -it backend-swag nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: [emerg] open() "/run/nginx/nginx.pid" failed (2: No such file or directory)
nginx: configuration file /etc/nginx/nginx.conf test failed
docker exec -it backend-swag nginx -s reload

nginx: [error] open() "/run/nginx/nginx.pid" failed (2: No such file or directory)

Steps to Reproduce

  1. Create a swag container.
  2. Make changes to the nginx config, e.g. create a site.
  3. run: docker exec -it YOUR_CONTAINER_NAME nginx -t
  4. run: docker exec -it YOUR_CONTAINER_NAME nginx -s reload

Environment

OS: Debian GNU/Linux 9.13 (stretch)
CPU architecture: x86_64
How docker service was installed: Docker was installed using apt-get install ...

Command used to create docker container (run/create/compose/screenshot)

Using docker compose:

version: "3"

services:
  backend-swag:
    image: ghcr.io/linuxserver/swag
    container_name: backend-swag
    restart: unless-stopped
    ports:
      - 9443:443
      - 9080:80
    volumes:
      - ./data-swag:/config
    environment:
      - PUID=996
      - PGID=996
      - TZ=Europe/Berlin
      - URL=XXX.XXX
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
      - [email protected]
      - MAXMINDDB_LICENSE_KEY=XXX
    cap_add:
      - NET_ADMIN
    labels:
      - com.centurylinklabs.watchtower.enable=true

Docker logs

See "Current Behavior" for outputs.
