
uvicorn-gunicorn-fastapi-docker's Introduction


Supported tags and respective Dockerfile links

Deprecated tags

🚨 These tags are no longer supported or maintained. They have been removed from the GitHub repository, but the last versions pushed might still be available on Docker Hub if anyone has been pulling them:

  • python3.9-alpine3.14
  • python3.8-alpine3.10
  • python3.7-alpine3.8
  • python3.6
  • python3.6-alpine3.8

The last date tags for these versions are:

  • python3.9-alpine3.14-2024-03-11
  • python3.8-alpine3.10-2024-01-29
  • python3.7-alpine3.8-2024-03-11
  • python3.6-2022-11-25
  • python3.6-alpine3.8-2022-11-25

Note: There are tags for each build date. If you need to "pin" the Docker image version you use, you can select one of those tags. E.g. tiangolo/uvicorn-gunicorn-fastapi:python3.7-2019-10-15.

uvicorn-gunicorn-fastapi

Docker image with Uvicorn managed by Gunicorn for high-performance FastAPI web applications in Python with performance auto-tuning.

GitHub repo: https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker

Docker Hub image: https://hub.docker.com/r/tiangolo/uvicorn-gunicorn-fastapi/

Description

FastAPI has been shown to be one of the fastest Python web frameworks, as measured by third-party benchmarks, thanks to being based on and powered by Starlette.

The achievable performance is on par with (and in many cases superior to) Go and Node.js frameworks.

This image has an auto-tuning mechanism included to start a number of worker processes based on the available CPU cores. That way you can just add your code and get high performance automatically, which is useful in simple deployments.
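
For illustration, here is a rough sketch in Python of that auto-tuning logic. It is a simplification, not the exact /gunicorn_conf.py shipped in the image; the environment variables it reads (WORKERS_PER_CORE, MAX_WORKERS, WEB_CONCURRENCY) are described in detail under "Environment variables" below:

import multiprocessing
import os

# Default: CPU cores times WORKERS_PER_CORE, with a floor of 2 workers
# so that small machines don't end up with a single (blockable) worker.
cores = multiprocessing.cpu_count()
workers_per_core = float(os.getenv("WORKERS_PER_CORE", "1"))
workers = max(int(cores * workers_per_core), 2)

# MAX_WORKERS, if set, caps the computed value.
max_workers = os.getenv("MAX_WORKERS")
if max_workers:
    workers = min(workers, int(max_workers))

# WEB_CONCURRENCY, if set, overrides the computation entirely.
web_concurrency = os.getenv("WEB_CONCURRENCY")
if web_concurrency:
    workers = int(web_concurrency)

print(f"Starting Gunicorn with {workers} worker processes")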

🚨 WARNING: You Probably Don't Need this Docker Image

You are probably using Kubernetes or similar tools. In that case, you probably don't need this image (or any other similar base image). You are probably better off building a Docker image from scratch as explained in the docs for FastAPI in Containers - Docker: Build a Docker Image for FastAPI.


If you have a cluster of machines with Kubernetes, Docker Swarm mode, Nomad, or another similar complex system to manage distributed containers on multiple machines, then you will probably want to handle replication at the cluster level instead of using a process manager (like Gunicorn with Uvicorn workers) in each container, which is what this Docker image does.

In those cases (e.g. using Kubernetes) you would probably want to build a Docker image from scratch, installing your dependencies, and running a single Uvicorn process instead of this image.

For example, your Dockerfile could look like:

FROM python:3.9

WORKDIR /code

COPY ./requirements.txt /code/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

COPY ./app /code/app

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]

You can read more about this in the FastAPI documentation about: FastAPI in Containers - Docker.

When to Use this Docker Image

A Simple App

You might want a process manager like Gunicorn running Uvicorn workers in the container if your application is simple enough that you don't need (at least not yet) to fine-tune the number of processes, you can just use an automated default, and you are running it on a single server, not a cluster.

Docker Compose

You could be deploying to a single server (not a cluster) with Docker Compose, so you wouldn't have an easy way to manage replication of containers (with Docker Compose) while preserving the shared network and load balancing.

Then you could want to have a single container with a Gunicorn process manager starting several Uvicorn worker processes inside, as this Docker image does.

Prometheus and Other Reasons

You could also have other reasons that would make it easier to have a single container with multiple processes instead of having multiple containers with a single process in each of them.

For example (depending on your setup) you could have some tool like a Prometheus exporter in the same container that should have access to each of the requests that come in.

In this case, if you had multiple containers, by default, when Prometheus came to read the metrics, it would get the ones for a single container each time (for the container that handled that particular request), instead of getting the accumulated metrics for all the replicated containers.

Then, in that case, it could be simpler to have one container with multiple processes, and a local tool (e.g. a Prometheus exporter) on the same container collecting Prometheus metrics for all the internal processes and exposing those metrics on that single container.


Read more about it all in the FastAPI documentation about: FastAPI in Containers - Docker.

Technical Details

Uvicorn

Uvicorn is a lightning-fast "ASGI" server.

It runs asynchronous Python web code in a single process.

Gunicorn

You can use Gunicorn to start and manage multiple Uvicorn worker processes.

That way, you get the best of concurrency and parallelism in simple deployments.

FastAPI

FastAPI is a modern, fast (high-performance), web framework for building APIs with Python.

The key features are:

  • Fast: Very high performance, on par with NodeJS and Go (thanks to Starlette and Pydantic).
  • Fast to code: Increase the speed to develop features by about 200% to 300% *.
  • Fewer bugs: Reduce about 40% of human (developer) induced errors. *
  • Intuitive: Great editor support. Completion everywhere. Less time debugging.
  • Easy: Designed to be easy to use and learn. Less time reading docs.
  • Short: Minimize code duplication. Multiple features from each parameter declaration. Fewer bugs.
  • Robust: Get production-ready code. With automatic interactive documentation.
  • Standards-based: Based on (and fully compatible with) the open standards for APIs: OpenAPI (previously known as Swagger) and JSON Schema.

* estimation based on tests on an internal development team, building production applications.

tiangolo/uvicorn-gunicorn-fastapi

This image will set a sensible configuration based on the server it is running on (the amount of CPU cores available) without making sacrifices.

It has sensible defaults, but you can configure it with environment variables or override the configuration files.

There are also slim versions. If you want one of those, use one of the tags from above.

tiangolo/uvicorn-gunicorn

This image (tiangolo/uvicorn-gunicorn-fastapi) is based on tiangolo/uvicorn-gunicorn.

That image is what actually does all the work.

This image just installs FastAPI and has the documentation specifically targeted at FastAPI.

If you feel confident about your knowledge of Uvicorn, Gunicorn and ASGI, you can use that image directly.

tiangolo/uvicorn-gunicorn-starlette

There is a sibling Docker image: tiangolo/uvicorn-gunicorn-starlette

If you are creating a new Starlette web application and you don't need the additional features of FastAPI, you should use tiangolo/uvicorn-gunicorn-starlette instead.

Note: FastAPI is based on Starlette and adds several features on top of it. Useful for APIs and other cases: data validation, data conversion, documentation with OpenAPI, dependency injection, security/authentication and others.

How to use

You don't need to clone the GitHub repo.

You can use this image as a base image for other images.

Assuming you have a file requirements.txt, you could have a Dockerfile like this:

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.11

COPY ./requirements.txt /app/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt

COPY ./app /app

It will expect a file at /app/app/main.py.

Or otherwise a file at /app/main.py.

And will expect it to contain a variable app with your FastAPI application.

Then you can build your image from the directory that has your Dockerfile, e.g.:

docker build -t myimage ./

Quick Start

Build your Image

  • Go to your project directory.
  • Create a Dockerfile with:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.11

COPY ./requirements.txt /app/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt

COPY ./app /app
  • Create an app directory and enter it.
  • Create a main.py file with:
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
def read_root():
    return {"Hello": "World"}


@app.get("/items/{item_id}")
def read_item(item_id: int, q: str = None):
    return {"item_id": item_id, "q": q}
  • You should now have a directory structure like:
.
├── app
│   └── main.py
└── Dockerfile
  • Go to the project directory (where your Dockerfile is, containing your app directory).
  • Build your FastAPI image:
docker build -t myimage .
  • Run a container based on your image:
docker run -d --name mycontainer -p 80:80 myimage

Now you have an optimized FastAPI server in a Docker container. Auto-tuned for your current server (and number of CPU cores).

Check it

You should be able to check it in your Docker container's URL, for example: http://192.168.99.100/items/5?q=somequery or http://127.0.0.1/items/5?q=somequery (or equivalent, using your Docker host).

You will see something like:

{"item_id": 5, "q": "somequery"}

Interactive API docs

Now you can go to http://192.168.99.100/docs or http://127.0.0.1/docs (or equivalent, using your Docker host).

You will see the automatic interactive API documentation (provided by Swagger UI):

Swagger UI

Alternative API docs

And you can also go to http://192.168.99.100/redoc or http://127.0.0.1/redoc (or equivalent, using your Docker host).

You will see the alternative automatic documentation (provided by ReDoc):

ReDoc

Dependencies and packages

You will probably also want to add any dependencies for your app and pin them to a specific version, probably including Uvicorn, Gunicorn, and FastAPI.

This way you can make sure your app always works as expected.

You could install packages with pip commands in your Dockerfile, using a requirements.txt, or even using Poetry.

And then you can upgrade those dependencies in a controlled way, running your tests, making sure that everything works, but without breaking your production application if some new version is not compatible.
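
For example, a pinned requirements.txt could look like the following sketch; the version numbers here are only placeholders for illustration, use the versions you have actually tested:

# requirements.txt (placeholder versions, pin the ones you have tested)
fastapi==0.95.0
gunicorn==20.1.0
uvicorn==0.21.1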

Using Poetry

Here's a small example of one of the ways you could install your dependencies, making sure you have a pinned version for each package.

Let's say you have a project managed with Poetry, so, you have your package dependencies in a file pyproject.toml. And possibly a file poetry.lock.

Then you could have a Dockerfile using Docker multi-stage building with:

FROM python:3.9 as requirements-stage

WORKDIR /tmp

RUN pip install poetry

COPY ./pyproject.toml ./poetry.lock* /tmp/

RUN poetry export -f requirements.txt --output requirements.txt --without-hashes

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.11

COPY --from=requirements-stage /tmp/requirements.txt /app/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt

COPY ./app /app

That will:

  • Install Poetry and configure it for running inside of the Docker container.
  • Copy your application requirements.
    • Because it uses ./poetry.lock* (ending with a *), it won't crash if that file is not available yet.
  • Install the dependencies.
  • Then copy your app code.

It's important to copy the app code after installing the dependencies; that way you can take advantage of Docker's cache. Docker won't have to install everything from scratch every time you update your application files, only when you add new dependencies.

This also applies to any other way you install your dependencies. If you use a requirements.txt, copy it alone and install all the dependencies near the top of the Dockerfile, and add your app code after it.

Advanced usage

Environment variables

These are the environment variables that you can set in the container to configure it, along with their default values:

MODULE_NAME

The Python "module" (file) to be imported by Gunicorn, this module would contain the actual application in a variable.

By default:

  • app.main if there's a file /app/app/main.py or
  • main if there's a file /app/main.py

For example, if your main file was at /app/custom_app/custom_main.py, you could set it like:

docker run -d -p 80:80 -e MODULE_NAME="custom_app.custom_main" myimage

VARIABLE_NAME

The variable inside of the Python module that contains the FastAPI application.

By default:

  • app

For example, if your main Python file has something like:

from fastapi import FastAPI

api = FastAPI()


@api.get("/")
def read_root():
    return {"Hello": "World"}

In this case api would be the variable with the FastAPI application. You could set it like:

docker run -d -p 80:80 -e VARIABLE_NAME="api" myimage

APP_MODULE

The string with the Python module and the variable name passed to Gunicorn.

By default, set based on the variables MODULE_NAME and VARIABLE_NAME:

  • app.main:app or
  • main:app

You can set it like:

docker run -d -p 80:80 -e APP_MODULE="custom_app.custom_main:api" myimage

GUNICORN_CONF

The path to a Gunicorn Python configuration file.

By default:

  • /app/gunicorn_conf.py if it exists
  • /app/app/gunicorn_conf.py if it exists
  • /gunicorn_conf.py (the included default)

You can set it like:

docker run -d -p 80:80 -e GUNICORN_CONF="/app/custom_gunicorn_conf.py" myimage

You can use the config file from the base image as a starting point for yours.

WORKERS_PER_CORE

This image will check how many CPU cores are available in the current server running your container.

It will set the number of workers to the number of CPU cores multiplied by this value.

By default:

  • 1

You can set it like:

docker run -d -p 80:80 -e WORKERS_PER_CORE="3" myimage

If you used the value 3 in a server with 2 CPU cores, it would run 6 worker processes.

You can use floating point values too.

So, for example, if you have a big server (let's say, with 8 CPU cores) running several applications, including a FastAPI application that you know won't need high performance, and you don't want to waste server resources, you could make it use 0.5 workers per CPU core. For example:

docker run -d -p 80:80 -e WORKERS_PER_CORE="0.5" myimage

In a server with 8 CPU cores, this would make it start only 4 worker processes.

Note: By default, if WORKERS_PER_CORE is 1 and the server has only 1 CPU core, instead of starting 1 single worker, it will start 2. This is to avoid bad performance and a blocked application on small machines (a small server, cloud instance, etc.). This can be overridden using WEB_CONCURRENCY.

MAX_WORKERS

Set the maximum number of workers to use.

You can use it to let the image compute the number of workers automatically while making sure it's limited to a maximum.

This can be useful, for example, if each worker uses a database connection and your database has a maximum limit of open connections.

By default it's not set, meaning that it's unlimited.

You can set it like:

docker run -d -p 80:80 -e MAX_WORKERS="24" myimage

This would make the image start at most 24 workers, independent of how many CPU cores are available in the server.

WEB_CONCURRENCY

Override the automatic definition of the number of workers.

By default:

  • Set to the number of CPU cores in the current server multiplied by the environment variable WORKERS_PER_CORE. So, in a server with 2 cores, by default it will be set to 2.

You can set it like:

docker run -d -p 80:80 -e WEB_CONCURRENCY="2" myimage

This would make the image start 2 worker processes, independent of how many CPU cores are available in the server.

HOST

The "host" used by Gunicorn, the IP where Gunicorn will listen for requests.

It is the host inside of the container.

So, for example, if you set this variable to 127.0.0.1, it will only be available inside the container, not in the host running it.

It is provided for completeness, but you probably shouldn't change it.

By default:

  • 0.0.0.0

PORT

The port the container should listen on.

If you are running your container in a restrictive environment that forces you to use some specific port (like 8080) you can set it with this variable.

By default:

  • 80

You can set it like:

docker run -d -p 80:8080 -e PORT="8080" myimage

BIND

The actual host and port passed to Gunicorn.

By default, set based on the variables HOST and PORT.

So, if you didn't change anything, it will be set by default to:

  • 0.0.0.0:80

You can set it like:

docker run -d -p 80:8080 -e BIND="0.0.0.0:8080" myimage

LOG_LEVEL

The log level for Gunicorn.

One of:

  • debug
  • info
  • warning
  • error
  • critical

By default, set to info.

If you need to squeeze more performance by sacrificing logging, set it to warning.

You can set it like:

docker run -d -p 80:8080 -e LOG_LEVEL="warning" myimage

WORKER_CLASS

The class to be used by Gunicorn for the workers.

By default, set to uvicorn.workers.UvicornWorker.

The fact that it uses Uvicorn is what allows using ASGI frameworks like FastAPI, and that is also what provides the maximum performance.

You probably shouldn't change it.

But if for some reason you need to use the alternative Uvicorn worker, uvicorn.workers.UvicornH11Worker, you can set it with this environment variable.

You can set it like:

docker run -d -p 80:8080 -e WORKER_CLASS="uvicorn.workers.UvicornH11Worker" myimage

TIMEOUT

Workers silent for more than this many seconds are killed and restarted.

Read more about it in the Gunicorn docs: timeout.

By default, set to 120.

Notice that Uvicorn and ASGI frameworks like FastAPI are async, not sync. So it's probably safe to have higher timeouts than for sync workers.

You can set it like:

docker run -d -p 80:8080 -e TIMEOUT="20" myimage

KEEP_ALIVE

The number of seconds to wait for requests on a Keep-Alive connection.

Read more about it in the Gunicorn docs: keepalive.

By default, set to 2.

You can set it like:

docker run -d -p 80:8080 -e KEEP_ALIVE="20" myimage

GRACEFUL_TIMEOUT

Timeout for graceful worker restarts.

Read more about it in the Gunicorn docs: graceful-timeout.

By default, set to 120.

You can set it like:

docker run -d -p 80:8080 -e GRACEFUL_TIMEOUT="20" myimage

ACCESS_LOG

The access log file to write to.

By default "-", which means stdout (print in the Docker logs).

If you want to disable ACCESS_LOG, set it to an empty value.

For example, you could disable it with:

docker run -d -p 80:8080 -e ACCESS_LOG= myimage

ERROR_LOG

The error log file to write to.

By default "-", which means stderr (print in the Docker logs).

If you want to disable ERROR_LOG, set it to an empty value.

For example, you could disable it with:

docker run -d -p 80:8080 -e ERROR_LOG= myimage

GUNICORN_CMD_ARGS

Any additional command line settings for Gunicorn can be passed in the GUNICORN_CMD_ARGS environment variable.

Read more about it in the Gunicorn docs: Settings.

These settings will have precedence over the other environment variables and any Gunicorn config file.

For example, if you have custom TLS/SSL certificate files that you want to use, you could copy them to the Docker image or mount them in the container, and set --keyfile and --certfile to the location of the files, for example:

docker run -d -p 80:8080 -e GUNICORN_CMD_ARGS="--keyfile=/secrets/key.pem --certfile=/secrets/cert.pem" -e PORT=443 myimage

Note: instead of handling TLS/SSL yourself and configuring it in the container, it's recommended to use a "TLS Termination Proxy" like Traefik. You can read more about it in the FastAPI documentation about HTTPS.

PRE_START_PATH

The path where the pre-start script is located.

By default, set to /app/prestart.sh.

You can set it like:

docker run -d -p 80:8080 -e PRE_START_PATH="/custom/script.sh" myimage

Custom Gunicorn configuration file

The image includes a default Gunicorn Python config file at /gunicorn_conf.py.

It uses the environment variables declared above to set all the configurations.

You can override it by including a file in:

  • /app/gunicorn_conf.py
  • /app/app/gunicorn_conf.py
  • /gunicorn_conf.py
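
For example, here is a minimal sketch of a custom /app/gunicorn_conf.py. A Gunicorn config file is plain Python in which module-level names set the options; the values below are illustrative, not the image's computed defaults:

import multiprocessing

# Standard Gunicorn settings, assigned as module-level variables
bind = "0.0.0.0:80"
worker_class = "uvicorn.workers.UvicornWorker"
workers = multiprocessing.cpu_count()  # your own policy instead of the auto-tuning
timeout = 120
graceful_timeout = 120
keepalive = 5
loglevel = "info"
accesslog = "-"  # "-" means stdout
errorlog = "-"  # "-" means stderr

Keep in mind that most of the environment variables described above are read by the image's default config file; if your replacement doesn't read them itself, they won't take effect.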

Custom /app/prestart.sh

If you need to run anything before starting the app, you can add a file prestart.sh to the directory /app. The image will automatically detect and run it before starting everything.

For example, if you want to add Alembic SQL migrations (with SQLAlchemy), you could create a ./app/prestart.sh file in your code directory (that will be copied by your Dockerfile) with:

#! /usr/bin/env bash

# Let the DB start
sleep 10;
# Run migrations
alembic upgrade head

and it would wait 10 seconds to give the database some time to start and then run that alembic command.

If you need to run a Python script before starting the app, you could make the /app/prestart.sh file run your Python script, with something like:

#! /usr/bin/env bash

# Run custom Python script before starting
python /app/my_custom_prestart_script.py

You can customize the location of the prestart script with the environment variable PRE_START_PATH described above.
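
If such a pre-start Python script needs to wait for another service (a common pre-start task), a minimal sketch using only the standard library could look like this; the host, port, and file name are assumptions you would adapt to your setup:

# /app/my_custom_prestart_script.py (hypothetical name, as used above)
import socket
import sys
import time

HOST, PORT = "db", 5432  # assumed database host and port
DEADLINE = time.monotonic() + 60  # give up after 60 seconds

while True:
    try:
        # Try to open a TCP connection to the service's port
        with socket.create_connection((HOST, PORT), timeout=2):
            print(f"Service reachable at {HOST}:{PORT}")
            break
    except OSError:
        if time.monotonic() > DEADLINE:
            sys.exit(f"Service not reachable at {HOST}:{PORT}, giving up")
        time.sleep(1)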

Development live reload

The default program that is run is at /start.sh. It does everything described above.

There's also a version for development with live auto-reload at:

/start-reload.sh

Details

For development, it's useful to be able to mount the contents of the application code inside of the container as a Docker "host volume", to be able to change the code and test it live, without having to build the image every time.

In that case, it's also useful to run the server with live auto-reload, so that it restarts automatically at every code change.

The additional script /start-reload.sh runs Uvicorn alone (without Gunicorn) and in a single process.

It is ideal for development.

Usage

For example, instead of running:

docker run -d -p 80:80 myimage

You could run:

docker run -d -p 80:80 -v $(pwd):/app myimage /start-reload.sh
  • -v $(pwd):/app: means that the directory $(pwd) should be mounted as a volume inside of the container at /app.
    • $(pwd): runs pwd ("print working directory") and substitutes the result into the string.
  • /start-reload.sh: adding something (like /start-reload.sh) at the end of the command, replaces the default "command" with this one. In this case, it replaces the default (/start.sh) with the development alternative /start-reload.sh.

Development live reload - Technical Details

As /start-reload.sh doesn't run with Gunicorn, any of the configurations you put in a gunicorn_conf.py file won't apply.

But these environment variables will work the same as described above:

  • MODULE_NAME
  • VARIABLE_NAME
  • APP_MODULE
  • HOST
  • PORT
  • LOG_LEVEL

🚨 Alpine Python Warning

In short: You probably shouldn't use Alpine for Python projects; instead, use the slim Docker image versions.


Do you want more details? Continue reading 👇

Alpine is more useful for other languages where you build a static binary in one Docker image stage (using multi-stage Docker building) and then copy it to a simple Alpine image, and then just execute that binary. For example, using Go.

But for Python, as Alpine doesn't use the standard tooling used for building Python extensions, when installing packages, in many cases Python (pip) won't find a precompiled installable package (a "wheel") for Alpine. And after debugging lots of strange errors you will realize that you have to install a lot of extra tooling and build a lot of dependencies just to use some of these common Python packages. 😩

This means that, although the original Alpine image might have been small, you end up with an image with a size comparable to the size you would have gotten if you had just used a standard Python image (based on Debian), or in some cases even larger. 🤯

And in all those cases, it will take much longer to build, consuming much more resources, building dependencies for longer, and also increasing its carbon footprint, as you are using more CPU time and energy for each build. 🌳

If you want slim Python images, you should instead use the slim versions that are still based on Debian, but are smaller. 🤓

Tests

All the image tags, configurations, environment variables and application options are tested.

Release Notes

Latest Changes

Docs

Internal

  • 🔧 Add GitHub templates for discussions and templates. PR #281 by @tiangolo.
  • 🔧 Update latest-changes.yml. PR #276 by @alejsdev.

0.8.0

Features

  • ✨ Add support for multi-arch builds, including support for arm64 (e.g. Mac M1). PR #273 by @tiangolo.

Docs

Upgrades

Internal

0.7.0

Highlights of this release:

  • Support for Python 3.10 and 3.11.
  • Deprecation of Python 3.6.
    • The last Python 3.6 image tag was pushed and is available in Docker Hub, but it won't be updated or maintained anymore.
    • The last image with a date tag is python3.6-2022-11-25.
  • Upgraded versions of all the dependencies.

Features

  • ✨ Add support for Python 3.10 and 3.11. PR #220 by @tiangolo.
  • ✨ Add Python 3.9 and Python 3.9 Alpine. PR #67 by @graue70.

Breaking Changes

  • 🔥 Deprecate and remove Python 3.6. PR #211 by @tiangolo.

Upgrades

  • โฌ†๏ธ Upgrade FastAPI and Uvicorn versions. PR #212 by @tiangolo.
  • โฌ†๏ธ Upgrade packages to the last version that supports Python 3.6. PR #207 by @tiangolo.

Docs

  • ๐Ÿ“ Add note to discourage Alpine with Python. PR #122 by @tiangolo.
  • ๐Ÿ“ Add warning for Kubernetes, when to use this image. PR #121 by @tiangolo.
  • โœ Fix typo, repeated word on README. PR #96 by @shelbylsmith.

Internal

  • โฌ†๏ธ Update black requirement from ^20.8b1 to ^22.10. PR #216 by @dependabot[bot].
  • โฌ†๏ธ Update docker requirement from ^5.0.3 to ^6.0.1. PR #217 by @dependabot[bot].
  • ๐Ÿ”ฅ Remove old Travis file. PR #219 by @tiangolo.
  • โฌ†๏ธ Upgrade CI OS. PR #218 by @tiangolo.
  • ๐Ÿ”ง Update Dependabot config. PR #213 by @tiangolo.
  • ๐Ÿ‘ท Add scheduled CI. PR #210 by @tiangolo.
  • ๐Ÿ‘ท Add alls-green GitHub Action. PR #209 by @tiangolo.
  • ๐Ÿ‘ท Do not run double CI, run on push only on master. PR #208 by @tiangolo.
  • โฌ†๏ธ Bump actions/setup-python from 4.1.0 to 4.3.0. PR #201 by @dependabot[bot].
  • โฌ†๏ธ Update black requirement from ^19.10b0 to ^20.8b1. PR #113 by @dependabot[bot].
  • โฌ†๏ธ Update docker requirement from ^4.2.0 to ^5.0.3. PR #125 by @dependabot[bot].
  • โฌ†๏ธ Bump actions/checkout from 2 to 3.1.0. PR #194 by @dependabot[bot].
  • โฌ†๏ธ Update mypy requirement from ^0.770 to ^0.971. PR #184 by @dependabot[bot].
  • โฌ†๏ธ Update isort requirement from ^4.3.21 to ^5.8.0. PR #116 by @dependabot[bot].
  • โฌ†๏ธ Bump tiangolo/issue-manager from 0.2.0 to 0.4.0. PR #110 by @dependabot[bot].
  • โฌ†๏ธ Bump actions/setup-python from 1 to 4.1.0. PR #182 by @dependabot[bot].
  • โฌ†๏ธ Update pytest requirement from ^5.4.1 to ^7.0.1. PR #153 by @dependabot[bot].
  • ๐Ÿ“Œ Add external dependencies and Dependabot to get automatic upgrade PRs. PR #109 by @tiangolo.
  • ๐Ÿ‘ท Update Latest Changes. PR #108 by @tiangolo.
  • ๐Ÿ‘ท Allow GitHub workflow dispatch to trigger test and deploy. PR #93 by @tiangolo.
  • ๐Ÿ‘ท Add latest-changes GitHub action, update issue-manager, add funding. PR #70 by @tiangolo.

0.6.0

  • Add docs about installing and pinning dependencies. PR #41.
  • Add slim version. PR #40.
  • Update and refactor bringing all the new features from the base image. Includes:
    • Centralize, simplify, and deduplicate code and setup
    • Move CI to GitHub actions
    • Add Python 3.8 (and Alpine)
    • Add new configs and docs:
      • WORKER_CLASS
      • TIMEOUT
      • KEEP_ALIVE
      • GRACEFUL_TIMEOUT
      • ACCESS_LOG
      • ERROR_LOG
      • GUNICORN_CMD_ARGS
      • MAX_WORKERS
    • PR #39.
  • Disable pip cache during installation. PR #38.
  • Migrate local development from Pipenv to Poetry. PR #34.
  • Add docs for custom PRE_START_PATH env var. PR #33.

0.5.0

  • Refactor tests to use env vars and add image tags for each build date, like tiangolo/uvicorn-gunicorn-fastapi:python3.7-2019-10-15. PR #17.
  • Upgrade Travis. PR #9.

0.4.0

  • Add support for live auto-reload with an additional custom script /start-reload.sh, check the updated documentation. PR #6 in parent image.

0.3.0

  • Set WORKERS_PER_CORE by default to 1, as it has shown the best performance on benchmarks.
  • Set the default web concurrency, when WEB_CONCURRENCY is not set, to a minimum of 2 workers. This is to avoid bad performance and a blocked application on small machines (a small server, cloud instance, etc.). This can be overridden using WEB_CONCURRENCY. This applies for example in the case where WORKERS_PER_CORE is set to 1 (the default) and the server has only 1 CPU core. PR #6 and PR #5 in parent image.

0.2.0

  • Make /start.sh run independently, reading and generating used default environment variables. And remove /entrypoint.sh as it doesn't modify anything in the system, only reads environment variables. PR #4 in parent image.

0.1.0

  • Add support for /app/prestart.sh.

License

This project is licensed under the terms of the MIT license.

uvicorn-gunicorn-fastapi-docker's People

Contributors

alejsdev, dependabot[bot], estebanx64, graue70, kludex, shelbylsmith, tiangolo


uvicorn-gunicorn-fastapi-docker's Issues

High security findings

We're using the tiangolo/uvicorn-gunicorn-fastapi:python3.8-slim image, and after an image scan we got a report with 2 high findings relating to perl.

The scan report (feature name, feature version, vulnerability, namespace, description, link, severity, fixed by):

  • perl 5.28.1-6: CVE-2020-10878 (debian:10, severity High, fixed by: none listed). Perl before 5.30.3 has an integer overflow related to mishandling of a "PL_regkind[OP(n)] == NOTHING" situation. A crafted regular expression could lead to malformed bytecode with a possibility of instruction injection. https://security-tracker.debian.org/tracker/CVE-2020-10878
  • perl 5.28.1-6: CVE-2020-10543 (debian:10, severity High, fixed by: none listed). Perl before 5.30.3 on 32-bit platforms allows a heap-based buffer overflow because nested regular expression quantifiers have an integer overflow. https://security-tracker.debian.org/tracker/CVE-2020-10543

Is it possible to upgrade Perl to 5.30.3?

How to install "openjdk-8-jdk" or "JDK 1.8" in Dockfile?

Dear all,
I have read the "How to use" and "Quick Start" sections of your README.md file, and have created the Dockerfile using:

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7

Because "openjdk-8-jdk" is needed in my project, I searched for the JDK installation package in the Docker image while the command "docker build -t ..." was being executed:

RUN apt-get update && apt search jdk

However, instead of "openjdk-8-jdk", only "openjdk-11-jdk" was found. Because "openjdk-8-jdk" is the only JDK version that can be used in my project, could you please give me some suggestions on how to install "openjdk-8-jdk" or "JDK 1.8" in the Dockerfile? Thanks!

Some log information from "docker build -t ..." is below (the log is only related to "apt search jdk"):

search_jdk.txt

Pin versions?

Would it make sense to pin the version of at least FastAPI (and potentially Uvicorn and Gunicorn as well) such that the images create reproducible builds?

I've run into this often, where I define a specific version of this image (e.g. python3.7-2019-12-11), but whenever I build it, it just pulls the latest version of FastAPI, whatever that is at the time of building (due to RUN pip install fastapi). This may lead to inconsistencies between local versions (which I can specify in requirements.txt) and Docker builds.

Only one CPU is being used on a 4-core CPU

Hi,
I am hosting a web application on a 4-core CPU. When I make concurrent requests, only one CPU is being used and the other 3 CPUs are at only 3 to 4% usage. How can I get high parallelism and concurrency?

My docker-compose file is as follows:

version: '3'
services:
  web:
    build:
      context: .
      
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    environment:
      - WORKERS=16

    command: bash -c "uvicorn main:app --reload --host 0.0.0.0 --port 80"

I have tried to set WORKERS but it is not reflected; only one CPU is being used regardless of the WORKERS value.
How can I correctly set these values?

[QUESTION] How to do logging in a FastAPI container; no logging appears

Description
I have another project that uses FastAPI with Gunicorn running Uvicorn workers, and Supervisor to keep the API up. Recently I came across the issue that none of my logs from files other than the FastAPI app are coming through. Initially I tried making an ad hoc script to see if it works, as well as changing the logging levels. I only had success if I set the logging to the DEBUG level.

I put together another small project to test whether I would run into this problem with a clean slate, and I still couldn't get logging working with a standard:

import logging

log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
log.info('help!')

Other steps I took were chmod-ing the /var/log/ directory in case it was a permissions issue, but I had no luck. Has anyone else run into this, or does anyone have recommendations on how they implemented logging?

Additional context
For context I put up the testing repo here: https://github.com/PunkDork21/fastapi-git-test
Testing it would be like:

docker-compose up -d
docker exec -it git-test_web_1 bash
python3 ./appy.py

Most of the files are similar to what I have in my real project.

Long running task with importlib crashes workers

Hey,

I'm running this image on a Kubernetes cluster and I have a simple FastAPI app in place. Everything basically works, except one thing. I'm using importlib to import a self-made Python file/lib as follows:

    try:
        if 'CustomModule' not in sys.modules:
            sys.path.append('Subfolder/')
            main = importlib.import_module("CustomModule")
        else:
            main = importlib.reload("CustomModule")

        ...
    except:
        ...

This import runs a lot of code which takes a lot of time. But still, this is working fine when I'm running FastAPI locally, with this in the code:

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=80)

But this seems to crash when doing the same thing in combination with Gunicorn (so, this image). Logs from the Kubernetes pod:

[2019-06-04 09:36:49 +0000] [1] [DEBUG] Current configuration:
  config: /gunicorn_conf.py
  bind: ['0.0.0.0:80']
  backlog: 2048
  workers: 2
  worker_class: uvicorn.workers.UvicornWorker
  threads: 1
  worker_connections: 1000
  max_requests: 0
  max_requests_jitter: 0
  timeout: 30
  graceful_timeout: 30
  keepalive: 120
  limit_request_line: 4094
  limit_request_fields: 100
  limit_request_field_size: 8190
  reload: False
  reload_engine: auto
  reload_extra_files: []
  spew: False
  check_config: False
  preload_app: False
  sendfile: None
  reuse_port: False
  chdir: /app
  daemon: False
  raw_env: []
  pidfile: None
worker_tmp_dir: None
  user: 0
  group: 0
  umask: 0
  initgroups: False
  tmp_upload_dir: None
  secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
  forwarded_allow_ips: ['127.0.0.1']
  accesslog: None
  disable_redirect_access_to_syslog: False
  access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
  errorlog: -
  loglevel: debug
  capture_output: False
  logger_class: gunicorn.glogging.Logger
  logconfig: None
  logconfig_dict: {}
  syslog_addr: udp://localhost:514
  syslog: False
  syslog_prefix: None
  syslog_facility: user
  enable_stdio_inheritance: False
  statsd_host: None
  statsd_prefix: 
  proc_name: None
  default_proc_name: main:app
  pythonpath: None
  paste: None
on_starting: <function OnStarting.on_starting at 0x7f7568b5ee18>
  on_reload: <function OnReload.on_reload at 0x7f7568b5ef28>
  when_ready: <function WhenReady.when_ready at 0x7f75688d30d0>
  pre_fork: <function Prefork.pre_fork at 0x7f75688d31e0>
  post_fork: <function Postfork.post_fork at 0x7f75688d32f0>
  post_worker_init: <function PostWorkerInit.post_worker_init at 0x7f75688d3400>
  worker_int: <function WorkerInt.worker_int at 0x7f75688d3510>
  worker_abort: <function WorkerAbort.worker_abort at 0x7f75688d3620>
  pre_exec: <function PreExec.pre_exec at 0x7f75688d3730>
  pre_request: <function PreRequest.pre_request at 0x7f75688d3840>
  post_request: <function PostRequest.post_request at 0x7f75688d38c8>
  child_exit: <function ChildExit.child_exit at 0x7f75688d39d8>
  worker_exit: <function WorkerExit.worker_exit at 0x7f75688d3ae8>
  nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7f75688d3bf8>
  on_exit: <function OnExit.on_exit at 0x7f75688d3d08>
  proxy_protocol: False
  proxy_allow_ips: ['127.0.0.1']
  keyfile: None
  certfile: None
  ssl_version: 2
  cert_reqs: 0
  ca_certs: None
  suppress_ragged_eofs: True
  do_handshake_on_connect: False
  ciphers: TLSv1
  raw_paste_global_conf: []
[2019-06-04 09:36:49 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-06-04 09:36:49 +0000] [1] [DEBUG] Arbiter booted
[2019-06-04 09:36:49 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2019-06-04 09:36:49 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2019-06-04 09:36:49 +0000] [8] [INFO] Booting worker with pid: 8
[2019-06-04 09:36:49 +0000] [9] [INFO] Booting worker with pid: 9
[2019-06-04 09:36:49 +0000] [1] [DEBUG] 2 workers
email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
[2019-06-04 09:36:58 +0000] [9] [INFO] Started server process [9]
[2019-06-04 09:36:58 +0000] [9] [INFO] Waiting for application startup.
[2019-06-04 09:36:58 +0000] [8] [INFO] Started server process [8]
[2019-06-04 09:36:58 +0000] [8] [INFO] Waiting for application startup.
[2019-06-04 09:37:07 +0000] [34] [INFO] ('172.17.0.1', 49573) - "GET /docs HTTP/1.1" 200
[2019-06-04 09:37:08 +0000] [34] [INFO] ('172.17.0.1', 49573) - "GET /openapi.json HTTP/1.1" 200
Endpoint called
Reached function
Module is not in sysmodules
Executing sys append
[2019-06-04 09:37:15 +0000] [46] [INFO] Booting worker with pid: 46
Endpoint called
Reached function
Module is not in sysmodules
Executing sys append
email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
[2019-06-04 09:37:23 +0000] [46] [INFO] Started server process [46]
[2019-06-04 09:37:23 +0000] [46] [INFO] Waiting for application startup.
[2019-06-04 09:37:25 +0000] [50] [INFO] Booting worker with pid: 50
[2019-06-04 09:37:25 +0000] [46] [DEBUG] ('172.17.0.1', 49580) - Connected
Endpoint called
Reached function
Module is not in sysmodules
Executing sys append
email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
[2019-06-04 09:37:33 +0000] [50] [INFO] Started server process [50]
[2019-06-04 09:37:33 +0000] [50] [INFO] Waiting for application startup.
[2019-06-04 09:37:34 +0000] [54] [INFO] Booting worker with pid: 54
email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
[2019-06-04 09:37:38 +0000] [54] [INFO] Started server process [54]
[2019-06-04 09:37:38 +0000] [54] [INFO] Waiting for application startup.

On the /docs page, I get to see TypeError: Failed to fetch after the workers restart.

I really don't know why this is happening. It looks like Importlib is not throwing any exception, and Gunicorn is also not logging any Critical Timeout error or anything like that.

I would greatly appreciate any help, thanks!

[1] [CRITICAL] WORKER TIMEOUT (pid:45)

When I post many requests to the server, the Gunicorn worker raises the error "[CRITICAL] WORKER TIMEOUT (pid:45)" and cannot deal with the last request before restarting. So the last request that the failing worker received before the restart never gets any response. Please help me solve this error @tiangolo, thanks!

My gunicorn config is:
bind = "0.0.0.0:7075"
worker=13
worker_connections = 1000
keepalive = 20
daemon = False
timeout = 120
preload_app = True
max_requests_jitter = 1024
worker_class = "uvicorn.workers.UvicornWorker"
max_requests = 2048
graceful_timeout = 120
errorlog = "/logs/gunicorn_error.log"

Adding a new APK package to the image

As your image is based on Alpine, I need to install additional system dependencies such as cron. However, apk cannot be found while running a Docker build.

Is there a workaround for that?

Unable to start container

I have the following Dockerfile:

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7

COPY . /app
#COPY ./requirements.txt /app
#
#run pip install --upgrade pip && \
#    pip install -r /app/requirements.txt

ENV APP_NAME "control.controller:app"

I have a folder control and a controller.py in it.

But my control app is not started. It starts the simple main.py from this repo instead.

Any ideas?

Question: Why does the python3.8-slim image not come with fastapi pre-installed?

I have pulled and run the image, then opened a terminal inside the container, and when I ran 'pip3 freeze', this was the output:

click==7.1.1
gunicorn==20.0.4
h11==0.9.0
httptools==0.1.1
uvicorn==0.11.3
uvloop==0.14.0
websockets==8.1

So, I would like to know why FastAPI is not installed by default, when all the helper server packages are, and when FastAPI is in the name of this repo?

Docker Hub image not the latest version of FastAPI

Did a pull of the latest image, generated a container, and connected to the container to see which version of FastAPI is running. The result was 0.29.1 instead of 0.30.0.
In one console:
docker run -i tiangolo/uvicorn-gunicorn-fastapi:python3.7
In another:

> docker container ls
CONTAINER ID        IMAGE                                         COMMAND             CREATED             STATUS              PORTS               NAMES
1141e2b39112        tiangolo/uvicorn-gunicorn-fastapi:python3.7   "/start.sh"         9 seconds ago       Up 8 seconds        80/tcp              nervous_euler

> docker exec -it nervous_euler /bin/bash
root@1141e2b39112:/app# pip list
Package    Version
---------- -------
Click      7.0
fastapi    0.29.1
gunicorn   19.9.0
h11        0.8.1
httptools  0.0.13
pip        19.1.1
pydantic   0.26
setuptools 41.0.1
starlette  0.12.0
uvicorn    0.7.1
uvloop     0.12.2
websockets 7.0
wheel      0.33.1
root@1141e2b39112:/app#

[CRITICAL] WORKER TIMEOUT with basic fastapi and no load. (with example)

First of all I want to say I love FastAPI. It's a blast to work with, so big thanks to you and your team.

Now my problem.
We have a FastAPI instance running on Kubernetes. The weird part is that disk usage spikes to the point that the pod is evicted because of lack of space. Upon further inspection we saw [CRITICAL] WORKER TIMEOUT. When we exec into the Docker container, we find a lot of files like core.gunicorn.3161.1590754193, which seem to be around 40M each. These are unreadable.

After long searching I didn't find anything related to the core.gunicorn files. But upon further inspection it became apparent that it had to do with the [CRITICAL] WORKER TIMEOUT, because every file seems to correspond to one [CRITICAL] WORKER TIMEOUT. So I went after the problem to try to fix the critical issue, because it seems that will also fix my disk usage issue.

The weird thing is that we don't have crazy load or long-running processes, which often seems to be the cause of this [CRITICAL] WORKER TIMEOUT log.

So I made a simple, basic FastAPI app with about the same settings that we use on Kubernetes. And locally I get the same [CRITICAL] WORKER TIMEOUT log. Locally I don't get the files, but that is something I can look into once this critical issue is fixed.

The repo with the simple FastAPI + Docker setup can be found here: https://github.com/NielsDebrier/fastapiProblem
How to run the Docker container with the correct settings can be found in the readme.

If anyone could help me or has suggestions on what to do, that would be amazing. I have searched around a lot and was not able to find a solution for this. And because the example app is so simple, I don't understand why there would be [CRITICAL] logs.

Thank you

Here are the debug logs I get locally:

Checking for script in /app/prestart.sh
Running script /app/prestart.sh
Running inside /app/prestart.sh, you could add migrations to this file, e.g.:

#! /usr/bin/env bash

# Let the DB start
sleep 10;
# Run migrations
alembic upgrade head

{"loglevel": "debug", "workers": 6, "bind": "0.0.0.0:8080", "workers_per_core": 1.0, "host": "0.0.0.0", "port": "8080"}
[2020-05-29 11:51:35 +0000] [1] [DEBUG] Current configuration:
  config: /gunicorn_conf.py
  bind: ['0.0.0.0:8080']
  backlog: 2048
  workers: 6
  worker_class: uvicorn.workers.UvicornWorker
  threads: 1
  worker_connections: 1000
  max_requests: 0
  max_requests_jitter: 0
  timeout: 30
  graceful_timeout: 30
  keepalive: 120
  limit_request_line: 4094
  limit_request_fields: 100
  limit_request_field_size: 8190
  reload: False
  reload_engine: auto
  reload_extra_files: []
  spew: False
  check_config: False
  preload_app: False
  sendfile: None
  reuse_port: False
  chdir: /app
  daemon: False
  raw_env: []
  pidfile: None
  worker_tmp_dir: None
  user: 0
  group: 0
  umask: 0
  initgroups: False
  tmp_upload_dir: None
  secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
  forwarded_allow_ips: ['127.0.0.1']
  accesslog: None
  disable_redirect_access_to_syslog: False
  access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
  errorlog: -
  loglevel: debug
  capture_output: False
  logger_class: gunicorn.glogging.Logger
  logconfig: None
  logconfig_dict: {}
  syslog_addr: udp://localhost:514
  syslog: False
  syslog_prefix: None
  syslog_facility: user
  enable_stdio_inheritance: False
  statsd_host: None
  dogstatsd_tags:
  statsd_prefix:
  proc_name: None
  default_proc_name: fast.main:app
  pythonpath: None
  paste: None
  on_starting: <function OnStarting.on_starting at 0x7f5d40ba59e0>
  on_reload: <function OnReload.on_reload at 0x7f5d40ba5b00>
  when_ready: <function WhenReady.when_ready at 0x7f5d40ba5c20>
  pre_fork: <function Prefork.pre_fork at 0x7f5d40ba5d40>
  post_fork: <function Postfork.post_fork at 0x7f5d40ba5e60>
  post_worker_init: <function PostWorkerInit.post_worker_init at 0x7f5d40ba5f80>
  worker_int: <function WorkerInt.worker_int at 0x7f5d40b210e0>
  worker_abort: <function WorkerAbort.worker_abort at 0x7f5d40b21200>
  pre_exec: <function PreExec.pre_exec at 0x7f5d40b21320>
  pre_request: <function PreRequest.pre_request at 0x7f5d40b21440>
  post_request: <function PostRequest.post_request at 0x7f5d40b214d0>
  child_exit: <function ChildExit.child_exit at 0x7f5d40b215f0>
  worker_exit: <function WorkerExit.worker_exit at 0x7f5d40b21710>
  nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7f5d40b21830>
  on_exit: <function OnExit.on_exit at 0x7f5d40b21950>
  proxy_protocol: False
  proxy_allow_ips: ['127.0.0.1']
  keyfile: None
  certfile: None
  ssl_version: 2
  cert_reqs: 0
  ca_certs: None
  suppress_ragged_eofs: True
  do_handshake_on_connect: False
  ciphers: None
  raw_paste_global_conf: []
  strip_header_spaces: False
[2020-05-29 11:51:35 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2020-05-29 11:51:35 +0000] [1] [DEBUG] Arbiter booted
[2020-05-29 11:51:35 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
[2020-05-29 11:51:35 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2020-05-29 11:51:35 +0000] [8] [INFO] Booting worker with pid: 8
[2020-05-29 11:51:35 +0000] [9] [INFO] Booting worker with pid: 9
[2020-05-29 11:51:35 +0000] [8] [INFO] Started server process [8]
[2020-05-29 11:51:35 +0000] [8] [INFO] Waiting for application startup.
[2020-05-29 11:51:35 +0000] [8] [INFO] Application startup complete.
[2020-05-29 11:51:35 +0000] [10] [INFO] Booting worker with pid: 10
[2020-05-29 11:51:35 +0000] [11] [INFO] Booting worker with pid: 11
[2020-05-29 11:51:35 +0000] [9] [INFO] Started server process [9]
[2020-05-29 11:51:35 +0000] [9] [INFO] Waiting for application startup.
[2020-05-29 11:51:35 +0000] [9] [INFO] Application startup complete.
[2020-05-29 11:51:35 +0000] [12] [INFO] Booting worker with pid: 12
[2020-05-29 11:51:36 +0000] [10] [INFO] Started server process [10]
[2020-05-29 11:51:36 +0000] [10] [INFO] Waiting for application startup.
[2020-05-29 11:51:36 +0000] [10] [INFO] Application startup complete.
[2020-05-29 11:51:36 +0000] [13] [INFO] Booting worker with pid: 13
[2020-05-29 11:51:36 +0000] [11] [INFO] Started server process [11]
[2020-05-29 11:51:36 +0000] [11] [INFO] Waiting for application startup.
[2020-05-29 11:51:36 +0000] [11] [INFO] Application startup complete.
[2020-05-29 11:51:36 +0000] [1] [DEBUG] 6 workers
[2020-05-29 11:51:36 +0000] [12] [INFO] Started server process [12]
[2020-05-29 11:51:36 +0000] [12] [INFO] Waiting for application startup.
[2020-05-29 11:51:36 +0000] [12] [INFO] Application startup complete.
[2020-05-29 11:51:36 +0000] [13] [INFO] Started server process [13]
[2020-05-29 11:51:36 +0000] [13] [INFO] Waiting for application startup.
[2020-05-29 11:51:36 +0000] [13] [INFO] Application startup complete.
[2020-05-29 11:52:06 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:8)
[2020-05-29 11:52:06 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:9)
[2020-05-29 11:52:06 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:10)
[2020-05-29 11:52:06 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:11)
[2020-05-29 11:52:06 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:12)
[2020-05-29 11:52:06 +0000] [14] [INFO] Booting worker with pid: 14
[2020-05-29 11:52:06 +0000] [15] [INFO] Booting worker with pid: 15
[2020-05-29 11:52:06 +0000] [16] [INFO] Booting worker with pid: 16
[2020-05-29 11:52:06 +0000] [14] [INFO] Started server process [14]
[2020-05-29 11:52:06 +0000] [14] [INFO] Waiting for application startup.
[2020-05-29 11:52:06 +0000] [14] [INFO] Application startup complete.
[2020-05-29 11:52:06 +0000] [15] [INFO] Started server process [15]
[2020-05-29 11:52:06 +0000] [15] [INFO] Waiting for application startup.
[2020-05-29 11:52:06 +0000] [15] [INFO] Application startup complete.
[2020-05-29 11:52:06 +0000] [17] [INFO] Booting worker with pid: 17
[2020-05-29 11:52:06 +0000] [16] [INFO] Started server process [16]
[2020-05-29 11:52:06 +0000] [16] [INFO] Waiting for application startup.
[2020-05-29 11:52:06 +0000] [16] [INFO] Application startup complete.
[2020-05-29 11:52:06 +0000] [1] [DEBUG] 5 workers
[2020-05-29 11:52:06 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:13)
[2020-05-29 11:52:06 +0000] [18] [INFO] Booting worker with pid: 18
[2020-05-29 11:52:06 +0000] [17] [INFO] Started server process [17]
[2020-05-29 11:52:06 +0000] [17] [INFO] Waiting for application startup.
[2020-05-29 11:52:06 +0000] [17] [INFO] Application startup complete.
[2020-05-29 11:52:06 +0000] [19] [INFO] Booting worker with pid: 19
[2020-05-29 11:52:06 +0000] [18] [INFO] Started server process [18]
[2020-05-29 11:52:06 +0000] [18] [INFO] Waiting for application startup.
[2020-05-29 11:52:06 +0000] [18] [INFO] Application startup complete.
[2020-05-29 11:52:06 +0000] [1] [DEBUG] 6 workers
[2020-05-29 11:52:06 +0000] [19] [INFO] Started server process [19]
[2020-05-29 11:52:06 +0000] [19] [INFO] Waiting for application startup.
[2020-05-29 11:52:06 +0000] [19] [INFO] Application startup complete.
[2020-05-29 11:52:36 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:14)
[2020-05-29 11:52:36 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:15)
[2020-05-29 11:52:36 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:16)
[2020-05-29 11:52:36 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:17)
[2020-05-29 11:52:36 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:18)
[2020-05-29 11:52:36 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:19)
[2020-05-29 11:52:36 +0000] [38] [INFO] Booting worker with pid: 38
[2020-05-29 11:52:36 +0000] [39] [INFO] Booting worker with pid: 39
[2020-05-29 11:52:36 +0000] [38] [INFO] Started server process [38]
[2020-05-29 11:52:36 +0000] [38] [INFO] Waiting for application startup.
[2020-05-29 11:52:36 +0000] [38] [INFO] Application startup complete.
[2020-05-29 11:52:36 +0000] [40] [INFO] Booting worker with pid: 40
[2020-05-29 11:52:36 +0000] [39] [INFO] Started server process [39]
[2020-05-29 11:52:36 +0000] [39] [INFO] Waiting for application startup.
[2020-05-29 11:52:36 +0000] [39] [INFO] Application startup complete.
[2020-05-29 11:52:36 +0000] [41] [INFO] Booting worker with pid: 41
[2020-05-29 11:52:36 +0000] [1] [DEBUG] 4 workers
[2020-05-29 11:52:36 +0000] [42] [INFO] Booting worker with pid: 42
[2020-05-29 11:52:37 +0000] [43] [INFO] Booting worker with pid: 43
[2020-05-29 11:52:37 +0000] [40] [INFO] Started server process [40]
[2020-05-29 11:52:37 +0000] [40] [INFO] Waiting for application startup.
[2020-05-29 11:52:37 +0000] [40] [INFO] Application startup complete.
[2020-05-29 11:52:37 +0000] [1] [DEBUG] 6 workers
[2020-05-29 11:52:37 +0000] [41] [INFO] Started server process [41]
[2020-05-29 11:52:37 +0000] [41] [INFO] Waiting for application startup.
[2020-05-29 11:52:37 +0000] [41] [INFO] Application startup complete.
[2020-05-29 11:52:37 +0000] [42] [INFO] Started server process [42]
[2020-05-29 11:52:37 +0000] [42] [INFO] Waiting for application startup.
[2020-05-29 11:52:37 +0000] [42] [INFO] Application startup complete.
[2020-05-29 11:52:37 +0000] [43] [INFO] Started server process [43]
[2020-05-29 11:52:37 +0000] [43] [INFO] Waiting for application startup.
[2020-05-29 11:52:37 +0000] [43] [INFO] Application startup complete.
[2020-05-29 11:53:07 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:38)
[2020-05-29 11:53:07 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:39)
[2020-05-29 11:53:07 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:40)
[2020-05-29 11:53:07 +0000] [44] [INFO] Booting worker with pid: 44
[2020-05-29 11:53:07 +0000] [45] [INFO] Booting worker with pid: 45
[2020-05-29 11:53:07 +0000] [46] [INFO] Booting worker with pid: 46
[2020-05-29 11:53:07 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:41)
[2020-05-29 11:53:07 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:42)
[2020-05-29 11:53:07 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:43)
[2020-05-29 11:53:07 +0000] [47] [INFO] Booting worker with pid: 47
[2020-05-29 11:53:07 +0000] [44] [INFO] Started server process [44]
[2020-05-29 11:53:07 +0000] [44] [INFO] Waiting for application startup.
[2020-05-29 11:53:07 +0000] [44] [INFO] Application startup complete.
[2020-05-29 11:53:07 +0000] [45] [INFO] Started server process [45]
[2020-05-29 11:53:07 +0000] [45] [INFO] Waiting for application startup.
[2020-05-29 11:53:07 +0000] [45] [INFO] Application startup complete.
[2020-05-29 11:53:07 +0000] [46] [INFO] Started server process [46]
[2020-05-29 11:53:07 +0000] [46] [INFO] Waiting for application startup.
[2020-05-29 11:53:07 +0000] [46] [INFO] Application startup complete.
[2020-05-29 11:53:07 +0000] [48] [INFO] Booting worker with pid: 48
[2020-05-29 11:53:07 +0000] [47] [INFO] Started server process [47]
[2020-05-29 11:53:07 +0000] [47] [INFO] Waiting for application startup.
[2020-05-29 11:53:07 +0000] [47] [INFO] Application startup complete.
[2020-05-29 11:53:07 +0000] [49] [INFO] Booting worker with pid: 49
[2020-05-29 11:53:07 +0000] [48] [INFO] Started server process [48]
[2020-05-29 11:53:07 +0000] [48] [INFO] Waiting for application startup.
[2020-05-29 11:53:07 +0000] [48] [INFO] Application startup complete.
[2020-05-29 11:53:07 +0000] [49] [INFO] Started server process [49]
[2020-05-29 11:53:07 +0000] [49] [INFO] Waiting for application startup.
[2020-05-29 11:53:07 +0000] [49] [INFO] Application startup complete.
[2020-05-29 11:53:37 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:44)
[2020-05-29 11:53:37 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:45)
[2020-05-29 11:53:37 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:46)
[2020-05-29 11:53:37 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:47)
[2020-05-29 11:53:37 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:48)
[2020-05-29 11:53:37 +0000] [50] [INFO] Booting worker with pid: 50
...

Pull access denied for ttiangolo/uvicorn-gunicorn-fastapi

Hi, I get this error when I try to build an image from this tool:

pull access denied for ttiangolo/uvicorn-gunicorn-fastapi, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

How can I get past this? It doesn't happen with other repos - though I notice the name in the error reads ttiangolo, with a doubled "t", rather than tiangolo.

[CRITICAL] WORKER TIMEOUT

I'm running the uvicorn-gunicorn-fastapi:python3.7 Docker-Image on an Azure App Service (B2: 200 ACU, 2 Cores, 3.5 GB Memory, OS: Linux).

My Dockerfile looks as follows:

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7

WORKDIR /app

RUN apt-get update \
    && apt-get install -y tesseract-ocr tesseract-ocr-deu libgl1-mesa-dev poppler-utils \
    && apt-get clean


COPY /app .

RUN pip install -r /app/requirements.txt

The service accepts POST requests with a file attached and processes it using Tesseract and OpenCV.
After the file has been processed, the service responds with the result.

Oftentimes, however, the processing stops with the following error:

2020-11-04T13:48:58.000206215Z [2020-11-04 13:48:57 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:8)

2020-11-04T13:48:58.529238062Z [2020-11-04 13:48:58 +0000] [90] [INFO] Booting worker with pid: 90
2020-11-04T13:49:00.743342241Z [2020-11-04 13:49:00 +0000] [90] [INFO] Started server process [90]
2020-11-04T13:49:00.743447942Z [2020-11-04 13:49:00 +0000] [90] [INFO] Waiting for application startup.
2020-11-04T13:49:00.748887110Z [2020-11-04 13:49:00 +0000] [90] [INFO] Application startup complete.

This error does not occur after the default timeout of 120 seconds. Still, I tried to get rid of it by using a custom gunicorn_conf.py with the timeout increased to 180 seconds. Additionally, I tried to solve the issue by increasing/decreasing the number of workers per core. The error still remains.
I also checked the log files on the App Service, but there is no further information about the error.
Changing the LOG_LEVEL within the gunicorn_conf file didn't help, either.

Does anyone know a solution to this problem? Running the Docker container locally works just fine (Windows 10, Docker Engine v19.03.13).
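One knob worth trying here, as a sketch rather than a confirmed fix: the image's default gunicorn_conf.py (quoted in full in a later issue on this page) reads TIMEOUT and GRACEFUL_TIMEOUT from the environment, so the Gunicorn timeout can be raised directly from the Dockerfile, without replacing the config file:

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7

# assumption: long-running OCR requests need more than the 120 s default;
# both variables are read by the image's default gunicorn_conf.py
ENV TIMEOUT=300 GRACEFUL_TIMEOUT=300

If the timeouts fire even on short requests, checking CPU and memory pressure on the App Service plan is worth doing as well, since a worker that cannot update its heartbeat in time is timed out the same way.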

Support for nvidia

Hello,

Thanks for your work! Do you plan to add a Docker image starting from nvidia/cuda, so that the CUDA toolkit is installed?

Document request: How to deploy to DigitalOcean

It would be great if you could add a doc about how to deploy the image to a hosting service such as DigitalOcean.
A step-by-step tutorial that also covers the security issues would be very practical for many.

How to run as a user and not root

Hello,
I need to run as a user other than root

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8-alpine3.10

COPY ./app /app
COPY requirements.txt ./

RUN pip install --no-cache-dir -r requirements.txt \
&& addgroup -S appgroup && adduser -S appuser -G appgroup  

USER appuser

Yet when I run an image built from the above Dockerfile, it errors with the message below.

The same code works without issues when I remove the USER line above and run as the root user.

Checking for script in /app/prestart.sh
Running script /app/prestart.sh
Running inside /app/prestart.sh, you could add migrations to this file, e.g.:

#! /usr/bin/env bash

# Let the DB start
sleep 10;
# Run migrations
alembic upgrade head

{"loglevel": "info", "workers": 2, "bind": "0.0.0.0:80", "graceful_timeout": 120, "timeout": 120, "keepalive": 5, "errorlog": "-", "accesslog": "-", "workers_per_core": 1.0, "use_max_workers": null, "host": "0.0.0.0", "port": "80"}
[2020-06-18 19:16:07 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2020-06-18 19:16:07 +0000] [1] [ERROR] Retrying in 1 second.
[2020-06-18 19:16:08 +0000] [1] [ERROR] Retrying in 1 second.
[2020-06-18 19:16:09 +0000] [1] [ERROR] Retrying in 1 second.
[2020-06-18 19:16:10 +0000] [1] [ERROR] Retrying in 1 second.
[2020-06-18 19:16:11 +0000] [1] [ERROR] Retrying in 1 second.
[2020-06-18 19:16:12 +0000] [1] [ERROR] Can't connect to ('0.0.0.0', 80)

What am I missing?
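For context: the "Can't connect to ('0.0.0.0', 80)" retries are what Gunicorn prints when it cannot bind its socket, and on Linux unprivileged users cannot bind ports below 1024. A minimal sketch of one workaround, assuming the PORT environment variable that the default gunicorn_conf.py (quoted in other issues on this page) reads:

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8-alpine3.10

COPY ./app /app
COPY requirements.txt ./

RUN pip install --no-cache-dir -r requirements.txt \
    && addgroup -S appgroup && adduser -S appuser -G appgroup

# an unprivileged user cannot bind port 80, so bind a high port instead;
# PORT is read by the image's default gunicorn_conf.py
ENV PORT=8080
USER appuser

Publishing it with docker run -p 80:8080 ... keeps the container reachable on port 80 from outside.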

Size of Docker image for uvicorn-gunicorn-fastapi-docker is too large

FastAPI is great, really nice project, and it is awesome that there is a ready-to-be-used image for it. However, its size is too large for it to be practical: it is around 1 GB! Using the Alpine image is not ideal either, because the Alpine distro has incompatibilities with some compiled packages, due to musl libc being used instead of glibc. Sooner or later you always run into issues with Alpine.

Can we have instructions or a guide somewhere on how to configure Uvicorn + Gunicorn for FastAPI? It would allow anyone to quickly build custom images for FastAPI.

docker-compose and gunicorn_conf.py file preparation?

Hi,
I want to pass custom worker settings to Gunicorn and Uvicorn. I have followed this file.
So I have added a gunicorn_conf.py file in my /app/ folder. The directory structure is as follows:

fastapi
      |- app
          |- main.py
          |- gunicorn_conf.py
      |- docker-compose.yml
      |- Dockerfile

The content of gunicorn_conf.py:

import json
import multiprocessing
import os

workers_per_core_str = os.getenv("WORKERS_PER_CORE", "10")
max_workers_str = os.getenv("MAX_WORKERS")
use_max_workers = None
if max_workers_str:
    use_max_workers = int(max_workers_str)
web_concurrency_str = os.getenv("WEB_CONCURRENCY", None)

host = os.getenv("HOST", "0.0.0.0")
port = os.getenv("PORT", "80")
bind_env = os.getenv("BIND", None)
use_loglevel = os.getenv("LOG_LEVEL", "info")
if bind_env:
    use_bind = bind_env
else:
    use_bind = f"{host}:{port}"

cores = multiprocessing.cpu_count()
workers_per_core = float(workers_per_core_str)
default_web_concurrency = workers_per_core * cores
if web_concurrency_str:
    web_concurrency = int(web_concurrency_str)
    assert web_concurrency > 0
else:
    web_concurrency = max(int(default_web_concurrency), 2)
    if use_max_workers:
        web_concurrency = min(web_concurrency, use_max_workers)
accesslog_var = os.getenv("ACCESS_LOG", "-")
use_accesslog = accesslog_var or None
errorlog_var = os.getenv("ERROR_LOG", "-")
use_errorlog = errorlog_var or None
graceful_timeout_str = os.getenv("GRACEFUL_TIMEOUT", "120")
timeout_str = os.getenv("TIMEOUT", "120")
keepalive_str = os.getenv("KEEP_ALIVE", "5")

# Gunicorn config variables
loglevel = use_loglevel
workers = web_concurrency
bind = use_bind
errorlog = use_errorlog
worker_tmp_dir = "/dev/shm"
accesslog = use_accesslog
graceful_timeout = int(graceful_timeout_str)
timeout = int(timeout_str)
keepalive = int(keepalive_str)


# For debugging and testing
log_data = {
    "loglevel": loglevel,
    "workers": workers,
    "bind": bind,
    "graceful_timeout": graceful_timeout,
    "timeout": timeout,
    "keepalive": keepalive,
    "errorlog": errorlog,
    "accesslog": accesslog,
    # Additional, non-gunicorn variables
    "workers_per_core": workers_per_core,
    "use_max_workers": use_max_workers,
    "host": host,
    "port": port,
}
print(json.dumps(log_data))

And the content of docker-compose.yml:

version: '3'
services:
  web:
    build:
      context: .
      
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    #environment:

    command: bash -c "uvicorn main:app --reload --host 0.0.0.0 --port 80"
    # Infinite loop, to keep it alive, for debugging
    # command: bash -c "while true; do echo 'sleeping...' && sleep 10; done"

My server is not picking up the parameters from gunicorn_conf.py.

Am I missing something here?
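A likely explanation, worth checking first: the command: line above starts Uvicorn directly, which bypasses the image's Gunicorn start script entirely, so gunicorn_conf.py is never read. A sketch of the same service that lets the image's default entrypoint run (GUNICORN_CONF is the variable referenced in another issue further down this page; exactly how the start script resolves it is an assumption here):

version: '3'
services:
  web:
    build:
      context: .
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    environment:
      - GUNICORN_CONF=/app/gunicorn_conf.py
    # no `command:` override - the image's default start script launches
    # Gunicorn, which is what actually loads gunicorn_conf.py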

How to develop?

Hi,

I like to develop within the docker container so that I use the same environment that is used later in prod.

I run:
docker run -it --rm -v /c/Users/myuser/chat/api:/app -p 8090:8000 my-image-name:0.0.2 uvicorn main:app --reload

So I map my local directory inside the container using -v. When I change a file locally, the reloader within Docker detects it and restarts the application. That works fine.
But if I try to connect to http://localhost:8090/ I get a "The connection was reset" message in the web browser (Firefox).

Uvicorn logs that it's running on http://127.0.0.1:8000, so -p 8090:8000 should work - I thought :)

my-image-name:0.0.2 is based on tiangolo/uvicorn-gunicorn-fastapi:python3.8-slim and basically only installs some Python packages and then does a COPY /api /app.
I develop on Windows 10 using WSL (not WSL2).

Has anyone else had this issue?

Thanks!
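The quoted log line is the clue here: a server bound to 127.0.0.1 inside the container is only reachable from inside that container, and Docker's -p mapping forwards to the container's external interface, so the server has to bind 0.0.0.0. A sketch of the same command with host and port made explicit:

docker run -it --rm -v /c/Users/myuser/chat/api:/app -p 8090:8000 \
    my-image-name:0.0.2 uvicorn main:app --reload --host 0.0.0.0 --port 8000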

Gunicorn instrumentation with statsd_host

I would like to configure --statsd-host for Gunicorn instrumentation in tiangolo/uvicorn-gunicorn-fastapi:python3.7-alpine3.8. So far, I have not had success, e.g. with echo "statsd_host = 'localhost:9125'" >> /gunicorn_conf.py in /app/prestart.sh. Is there a better way to try this, and is it possible at all?
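One route that avoids editing files from prestart.sh, as a sketch under the assumption (echoed by another issue below) that a custom config fully replaces the image's default: copy the default gunicorn_conf.py into your build, append the statsd directives at the end, and ship it as /app/gunicorn_conf.py. statsd_host and statsd_prefix are regular Gunicorn settings; both appear in the Gunicorn config dump quoted in a later issue:

# appended at the end of a copy of the image's default gunicorn_conf.py
statsd_host = "localhost:9125"
statsd_prefix = "myapp"  # hypothetical prefix - pick your own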

Why is my app stuck at booting new workers?

I have a rather simple Python script that loads a large file at the beginning, before defining the FastAPI endpoints.

app = FastAPI()
embeddings = np.load('embeddings.npy') # 15 sec to load

This takes about 15 seconds on a normal laptop. The app runs fine if I start it with vanilla Uvicorn, without Docker.

When using the FastAPI Docker image, the following settings are defined in the gunicorn_conf.py:

{"loglevel": "info", "workers": 1, "bind": "0.0.0.0:80", "graceful_timeout": 300, "timeout": 300, "keepalive": 300, "errorlog": "-", "accesslog": "-", "workers_per_core": 1, "use_max_workers": null, "host": "0.0.0.0", "port": "80"}

[2020-05-08 08:41:27 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2020-05-08 08:41:27 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2020-05-08 08:41:27 +0000] [8] [INFO] Booting worker with pid: 8
[2020-05-08 08:41:43 +0000] [19] [INFO] Booting worker with pid: 19
[2020-05-08 08:42:00 +0000] [30] [INFO] Booting worker with pid: 30
[2020-05-08 08:42:17 +0000] [41] [INFO] Booting worker with pid: 41
[2020-05-08 08:42:33 +0000] [52] [INFO] Booting worker with pid: 52
[2020-05-08 08:42:51 +0000] [63] [INFO] Booting worker with pid: 63
[2020-05-08 08:43:05 +0000] [74] [INFO] Booting worker with pid: 74
[2020-05-08 08:43:20 +0000] [85] [INFO] Booting worker with pid: 85
[2020-05-08 08:43:36 +0000] [96] [INFO] Booting worker with pid: 96
[2020-05-08 08:43:52 +0000] [107] [INFO] Booting worker with pid: 107
[2020-05-08 08:44:06 +0000] [118] [INFO] Booting worker with pid: 118
[2020-05-08 08:44:20 +0000] [129] [INFO] Booting worker with pid: 129
[2020-05-08 08:44:34 +0000] [140] [INFO] Booting worker with pid: 140
[2020-05-08 08:44:50 +0000] [151] [INFO] Booting worker with pid: 151
[2020-05-08 08:45:05 +0000] [162] [INFO] Booting worker with pid: 162
[2020-05-08 08:45:22 +0000] [173] [INFO] Booting worker with pid: 173
[2020-05-08 08:45:36 +0000] [184] [INFO] Booting worker with pid: 184
[2020-05-08 08:45:54 +0000] [195] [INFO] Booting worker with pid: 195
[2020-05-08 08:46:08 +0000] [206] [INFO] Booting worker with pid: 206
[2020-05-08 08:46:24 +0000] [217] [INFO] Booting worker with pid: 217

I set all the timeouts to 300 and workers to 1. The app is never reachable at http://0.0.0.0:80 since it just keeps spawning new workers and the startup never completes.

Started with docker run -p 80:80 -t my_app
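A pattern worth ruling out, stated as an assumption rather than a diagnosis: a worker that is killed silently, for example by the container running out of memory while np.load materializes the whole array, is rebooted by Gunicorn in exactly this kind of loop with nothing in the log. One sketch that sidesteps the up-front allocation is memory-mapping the file, so rows are paged in on access instead of being read into RAM at boot:

import numpy as np
from fastapi import FastAPI

app = FastAPI()

# mmap_mode="r" maps the .npy file read-only instead of loading all of
# it into memory at once; rows are paged in as they are accessed
embeddings = np.load("embeddings.npy", mmap_mode="r")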

Not able to pass multiple environment variables(BIND & WEB_CONCURRENCY together)

docker run -d -p 80:8080 -e BIND="0.0.0.0:8080" myimage works.
docker run -d -p 80:80 -e WEB_CONCURRENCY="2" myimage works.

I want to do something like:
docker run -d -p 80:8080 -e BIND="0.0.0.0:8080" -e WEB_CONCURRENCY="2" myimage

But when I run this line, it actually takes only the first variable.

Please let me know how I can send multiple environment variables.
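Both -e flags do normally reach the container, so a quick sanity check is to read the JSON line that the default gunicorn_conf.py prints at startup (the same print(json.dumps(log_data)) visible in the configs quoted elsewhere on this page); it shows exactly which values Gunicorn ended up with:

docker run -d -p 80:8080 -e BIND="0.0.0.0:8080" -e WEB_CONCURRENCY="2" myimage
docker logs <container-id> | head
# expected first JSON line if both variables arrived:
# {"loglevel": "info", "workers": 2, "bind": "0.0.0.0:8080", ...}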

No module found or cannot import package on startup

I can't quite figure out how to set up my app in the Dockerfile. My repo contains the following structure. valis is the python package that contains my FastAPI app, set up for larger scale, with the app variable inside the module wsgi.py or main.py. In the code I also import valis at times to load utility functions, get versions and config, etc.

  • Dockerfile
  • pyproject.toml
  • poetry.lock
  • python
    • valis
      • __init__.py
      • main.py
      • wsgi.py (app variable)
      • routes
        • paths.py
        • ...

My Dockerfile is the following

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7

# Install Poetry
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_HOME=/opt/poetry python && \
    cd /usr/local/bin && \
    ln -s /opt/poetry/bin/poetry && \
    poetry config virtualenvs.create false

# Copy using poetry.lock* in case it doesn't exist yet
COPY ./pyproject.toml ./poetry.lock* ./

RUN poetry install --no-root --no-dev

# copy over application
COPY ./python/valis /app

# set environment variables
ENV MODULE_NAME = "valis.wsgi"

This should create a folder at /app/valis, and APP_MODULE should resolve to valis.wsgi:app. However, when I start the container, I keep getting errors that either Gunicorn cannot find the module (ModuleNotFoundError: No module named '= valis'), or it cannot import the module valis, e.g.

  File "/app/main.py", line 17, in <module>
    import valis

depending on how I change my Dockerfile settings. I've tried setting WORKDIR /app and ENV PYTHONPATH = "${PYTONPATH}:/app" at the final point to try to ensure /app/valis is on the Python path. I've also tried COPY ./python/valis /app/app/. None of these work. What's the suggested way of setting this up so that the app can be accessed and valis is a valid Python package?
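Two details in the Dockerfile above would explain the errors; a sketch of the fixes follows. First, ENV MODULE_NAME = "valis.wsgi" uses the space-separated ENV form, so the value of MODULE_NAME includes the equals sign, which matches the No module named '= valis' error: the = must have no surrounding spaces. Second, COPY ./python/valis /app copies the package's contents into /app itself, so there is no /app/valis directory left to import:

# no spaces around "=" - otherwise Docker treats them as part of the value
ENV MODULE_NAME="valis.wsgi"

# copy the package as a subdirectory, so "import valis" resolves with
# /app on the Python path
COPY ./python/valis /app/valis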

Run w/ CUDA GPU?

This is all about CPU workers. What about GPU? Is there some way to tell uvicorn / gunicorn to run the worker on GPU?

Using nvidia-docker instead of docker, it should be possible. But I think I need to tell Gunicorn to boot workers on the GPU devices?

Wrong UTC time?

Using Windows 10, Docker Desktop

In container:
Thu Feb 13 17:49:33 Vilnius 2020
Google says:
Thu Feb 13 18:49:33 Vilnius 2020

Tried:

apt-get update && apt-get install tzdata
ENV TZ=Europe/Vilnius
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

All other containers that use Ubuntu images are OK. This is the first one where I've encountered this.

Please help? I'm not too familiar with Docker, and this is quite a problem if you ask me.
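The snippet above is close; what sometimes bites is that tzdata's configure step runs before TZ is in effect, or runs interactively. A sketch with the order fixed, as Dockerfile instructions (zone name kept from the report above):

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7

# set TZ first so tzdata's non-interactive configure step picks it up
ENV TZ=Europe/Vilnius
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata \
    && ln -snf /usr/share/zoneinfo/$TZ /etc/localtime \
    && echo $TZ > /etc/timezone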

GUNICORN_CONF set, file exists, but it's not being read

Hi all,

My current issue: I can't seem to make Gunicorn/Uvicorn honor the gunicorn_conf.py file I put in the /app folder, and all error/access logs are being output to stdout.

Could you please advise the best way to make the errorlog and accesslog directives work? For now I just want to output those to a file.

My gunicorn_conf.py:

import json
import multiprocessing
import os

workers_per_core_str = os.getenv("WORKERS_PER_CORE", "1")
web_concurrency_str = os.getenv("WEB_CONCURRENCY", None)
host = os.getenv("HOST", "0.0.0.0")
port = os.getenv("PORT", "80")
bind_env = os.getenv("BIND", None)
use_loglevel = os.getenv("LOG_LEVEL", "info")
if bind_env:
    use_bind = bind_env
else:
    use_bind = "%s:%s" % (host, port)

cores = multiprocessing.cpu_count()
workers_per_core = float(workers_per_core_str)
default_web_concurrency = workers_per_core * cores
if web_concurrency_str:
    web_concurrency = int(web_concurrency_str)
    assert web_concurrency > 0
else:
    web_concurrency = max(int(default_web_concurrency), 2)

# Gunicorn config variables
loglevel = use_loglevel
workers = web_concurrency
bind = use_bind
keepalive = 120

########### This is not being observed #######
errorlog = "/app/log/gunicorn.log"
accesslog = "/app/log/gunicorn.access.log"
capture_output = True
########### /This is not being observed #######

# For debugging and testing
log_data = {
     "loglevel": loglevel,
     "workers": workers,
     "bind": bind,
     # Additional, non-gunicorn variables
     "workers_per_core": workers_per_core,
     "host": host,
     "port": port,
}
print(json.dumps(log_data))

It seems like the file is not being read at all, even though I've set the GUNICORN_CONF environment variable in my docker-compose file.

Any clues?

Thanks for your attention.
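Two things worth checking regardless of whether the file is picked up: Gunicorn does not create missing log directories, so /app/log has to exist before startup, and GUNICORN_CONF needs to hold the full path to the file. A sketch of both, with the paths taken from the config above:

# Dockerfile
RUN mkdir -p /app/log

# docker-compose.yml excerpt
environment:
  - GUNICORN_CONF=/app/gunicorn_conf.py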

The server refused to connect

I am running the image in a container. It runs successfully. The terminal output is:
[2019-06-18 15:17:00 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-06-18 15:17:00 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2019-06-18 15:17:00 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2019-06-18 15:17:00 +0000] [8] [INFO] Booting worker with pid: 8
[2019-06-18 15:17:00 +0000] [9] [INFO] Booting worker with pid: 9
[2019-06-18 15:17:01 +0000] [8] [INFO] Started server process [8]
[2019-06-18 15:17:01 +0000] [9] [INFO] Started server process [9]
[2019-06-18 15:17:01 +0000] [9] [INFO] Waiting for application startup.
[2019-06-18 15:17:01 +0000] [8] [INFO] Waiting for application startup.

But whenever I hit 0.0.0.0:80, it says "Refused to connect". It only happens when I run it through a Docker container. If I run it without a Docker container, it works like a charm.

Edit: None of the ports are working, so changing the port doesn't solve the problem.
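For anyone landing on the same symptom: the log only shows that Gunicorn is listening inside the container; it is reachable from the host only if the port is published. A minimal sketch, with the image name assumed:

docker run -d --name myapp -p 80:80 myimage
# then open http://localhost/ on the host; 0.0.0.0 is a bind address
# inside the container, not an address you browse to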

Docker images out of date

Hello,

The Docker CI build is not triggered anymore and the images on Docker Hub are out of date.

It would be great if they could be updated again @tiangolo

Thanks!

Doesn't work on RPi

Hi, testing the image on an RPi 3 won't work; I used the example from "how to use" and it fails with an exec format error.

root@raspberrypi:~/uvicorn# tree
├── app
│   └── main.py
└── Dockerfile

root@raspberrypi:~/uvicorn# docker build -t myimage .
Sending build context to Docker daemon  3.584kB
Step 1/4 : FROM tiangolo/uvicorn-gunicorn:python3.8
python3.8: Pulling from tiangolo/uvicorn-gunicorn
90fe46dd8199: Pull complete 
35a4f1977689: Pull complete 
bbc37f14aded: Pull complete 
74e27dc593d4: Pull complete 
4352dcff7819: Pull complete 
deb569b08de6: Pull complete 
98fd06fa8c53: Pull complete 
7b9cc4fdefe6: Pull complete 
e8e1fd64f499: Pull complete 
5a722254cee3: Pull complete 
9865e006dbdf: Pull complete 
2195ebdd17f8: Pull complete 
3f9ecd21cf9b: Pull complete 
65523a4c97ba: Pull complete 
fddbe8c99de3: Pull complete 
e5a0e888eefe: Pull complete 
Digest: sha256:a55797175d2824029335b35ac78ccd92a854eb76ed492ecbee045bf382ccbb7d
Status: Downloaded newer image for tiangolo/uvicorn-gunicorn:python3.8
 ---> 352c37cc4a7e
Step 2/4 : LABEL maintainer="Sebastian Ramirez <[email protected]>"
 ---> Running in dacc07b3e52e
Removing intermediate container dacc07b3e52e
 ---> 8cff09d9216f
Step 3/4 : RUN pip install --no-cache-dir fastapi
 ---> Running in c365e6b9ff9c
standard_init_linux.go:211: exec user process caused "exec format error"
The command '/bin/sh -c pip install --no-cache-dir fastapi' returned a non-zero code: 1

Is it possible to add support for ARM? Cheers.
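The exec format error means the pulled layers contain amd64-only binaries. The official python base images, by contrast, are multi-arch, which can be confirmed directly on the Pi; the usual workaround is to build your own image on top of that base and install uvicorn, gunicorn and fastapi yourself:

# prints the native architecture, e.g. armv7l on an RPi 3, showing that
# the multi-arch python base image runs there
docker run --rm python:3.8 python -c "import platform; print(platform.machine())"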

Could not import (fastapi.)HTTPException but starlette OK

Hi tiangolo.

Thank you for your Docker images.

However, with this one, "FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7-alpine3.8",

I had to import HTTPException from starlette because it wasn't available in fastapi in my import.

It's fixed for me, but I thought it would be interesting for you to know.

FastAPI built into Docker and published to k8s: do I need Gunicorn + FastAPI or not?

Uvicorn
Uvicorn is a lightning-fast "ASGI" server.

It runs asynchronous Python web code in a single process.

Gunicorn
You can use Gunicorn to manage Uvicorn and run multiple of these concurrent processes.

That way, you get the best of concurrency and parallelism.

# Question

Building FastAPI into Docker and publishing to k8s, do I need Gunicorn + FastAPI or not? Or just Uvicorn + FastAPI?
Because k8s can create multiple Pods to implement these concurrent processes instead of Gunicorn.

Can I release directly to the production environment with Uvicorn + FastAPI (no Gunicorn, building the Docker image for k8s)?

Gunicorn + FastAPI with -w 1 vs. Uvicorn + FastAPI (no Gunicorn managing it): which has higher performance?

OCI runtime error

Using the documentation in the README, I get the following error:

docker run --name testwebapp -p 80:80 testwebapp
docker: Error response from daemon: rpc error: code = 2 desc = "oci runtime error: exec format error".

GET Query Parameters is not forwarded to docker container

I deployed my app to my production server on AWS EC2 (standard Ubuntu VPS).
When I try to access it with the public IP, the query params seem not to be forwarded. The log is 180.252.8.xxx:58693 - "GET /get_total_order HTTP/1.1" 200 OK. The port part is different every time I access it through the public IP.

But when I ssh into the server and run curl http://localhost/get_total_order?status=aaaaaa&start_date=2000-01-01&end_date=2020-05-20, it works.

My Dockerfile:

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7

COPY ./app /app/app

COPY ./requirement.txt /app/requirement.txt

RUN pip install -r requirement.txt

COPY ./octopus-data-gcp.json /app

My docker-compose.yml:

version: "3"

services:
  web:
    image: api_dashboard_production:latest
    build:
      context: .
      dockerfile: Dockerfile.production
    container_name: api_dashboard_production
    env_file:
            - .env
    command: bash -c "uvicorn app.main:app --host 0.0.0.0 --port 80"
    ports:
      - "80:80"

The port part is weird, so I think it's routed through the AWS internal network and the parameters are dropped, but I'm not sure. Any ideas?

I'm going to open AWS support ticket. But I asked anyway in case someone runs into the same problem.

Thanks!

Server shutting down in production

Hey there,

I saw the other issue #8, which seems similar to mine, but the solution described there doesn't fix my problem.

I have a basic FastAPI Docker API with one endpoint (no async) that returns a prediction for a machine learning model. Everything works great when running with /start-reload.sh.

But as soon as I remove it (and, I think, use Gunicorn instead of Uvicorn), when I curl the endpoint to get a prediction, I get:

2019-12-20T15:32:03.282698361Z INFO:root:New prediction asked!
2019-12-20T15:32:03.283512479Z INFO:root:Converting JSON data to plain email...
2019-12-20T15:32:03.354365024Z INFO:root:Getting predictions for mail #176161..
2019-12-20T15:32:03.363742952Z {"loglevel": "info", "workers": 4, "bind": "0.0.0.0:80", "workers_per_core": 1.0, "host": "0.0.0.0", "port": "80"}
2019-12-20T15:32:03.363773295Z Converting to features started.
100%|██████████| 1/1 [00:00<00:00, 8.32it/s]
2019-12-20T15:32:03.889738388Z [2019-12-20 15:32:03 +0000] [9] [INFO] Shutting down
2019-12-20T15:32:03.990590055Z [2019-12-20 15:32:03 +0000] [9] [INFO] Waiting for connections to close. (CTRL+C to force quit)

After a while, I guess after the Gunicorn timeout, the server restarts. This really only happens when asking the model to return a prediction. For example, I made sure in my code that if my JSON data is bad, I return an HTTPException before reaching the model for prediction. And if I test that, it works; I don't get the "shutting down" message.

Could it be related to the concurrency/parallelism stuff? I tried to edit the Gunicorn conf, added a timeout, etc.; nothing works except Uvicorn with /start-reload.sh. But /start-reload.sh adds a lot of CPU use (around 50% in my case instead of 1%), therefore I can't use that in production.

Is there a way to use Gunicorn but disable all the fancy stuff that could cause this issue?

Thank you!

Startup-loop when using for machine learning deployment

I'm trying to get this container running for our production API endpoints for fastText ML models. When running locally with uvicorn main:app we get everything running without problems. When running in Docker we get an infinite startup loop: Gunicorn just keeps spawning new workers without giving them time to start up completely. Loading the model takes some time for sure (it's 1.6 GB).

[2019-07-01 11:14:38 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-07-01 11:14:38 +0000] [1] [DEBUG] Arbiter booted
[2019-07-01 11:14:38 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2019-07-01 11:14:38 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2019-07-01 11:14:38 +0000] [9] [INFO] Booting worker with pid: 9
[2019-07-01 11:14:38 +0000] [11] [INFO] Booting worker with pid: 11
[2019-07-01 11:14:39 +0000] [1] [DEBUG] 2 workers
email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
Model is now loading from disk..
/usr/local/lib/python3.7/site-packages/smart_open/smart_open_lib.py:398: UserWarning: This function is deprecated, use smart_open.open instead. See the migration notes for details: https://github.com/RaRe-Technologies/smart_open/blob/master/README.rst#migrating-to-the-new-open-function
  'See the migration notes for details: %s' % _MIGRATION_NOTES_URL
Model is now loading from disk..
/usr/local/lib/python3.7/site-packages/smart_open/smart_open_lib.py:398: UserWarning: This function is deprecated, use smart_open.open instead. See the migration notes for details: https://github.com/RaRe-Technologies/smart_open/blob/master/README.rst#migrating-to-the-new-open-function
  'See the migration notes for details: %s' % _MIGRATION_NOTES_URL
[2019-07-01 11:14:48 +0000] [13] [INFO] Booting worker with pid: 13
email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
Model is now loading from disk..
/usr/local/lib/python3.7/site-packages/smart_open/smart_open_lib.py:398: UserWarning: This function is deprecated, use smart_open.open instead. See the migration notes for details: https://github.com/RaRe-Technologies/smart_open/blob/master/README.rst#migrating-to-the-new-open-function
  'See the migration notes for details: %s' % _MIGRATION_NOTES_URL
[2019-07-01 11:14:54 +0000] [15] [INFO] Booting worker with pid: 15
email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
Model is now loading from disk..
/usr/local/lib/python3.7/site-packages/smart_open/smart_open_lib.py:398: UserWarning: This function is deprecated, use smart_open.open instead. See the migration notes for details: https://github.com/RaRe-Technologies/smart_open/blob/master/README.rst#migrating-to-the-new-open-function
  'See the migration notes for details: %s' % _MIGRATION_NOTES_URL
^C[2019-07-01 11:15:07 +0000] [1] [INFO] Handling signal: int
[2019-07-01 11:15:07 +0000] [17] [INFO] Booting worker with pid: 17
email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
Model is now loading from disk..

Our Dockerfile looks like this:

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
# We expose port 80

LABEL MAINTAINER="greple.ai DEV | Jannik Zinkl <[email protected]>"

ARG MODEL_FOLDER

COPY ${MODEL_FOLDER}/requirements.txt /app

RUN pip install -r requirements.txt \ 
    && rm -rf /var/cache/apk/* \
    && rm -rf requirements.txt
COPY ${MODEL_FOLDER}/ /app 

I couldn't find any timeout setting or anything related to this issue. Furthermore, I don't get any errors, even when using debug logging.

Any hint is highly appreciated!
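Two levers worth trying here, as a sketch rather than a confirmed fix: give workers more than the stock timeout to finish loading, and preload the application in the master process so the 1.6 GB model is read once instead of once per worker. Note the caveat from the next issue below: a custom gunicorn_conf.py replaces the image's defaults, so carry over the other settings you rely on.

# /app/gunicorn_conf.py - a sketch, not a confirmed fix
timeout = 300           # room for the model to load before the arbiter
graceful_timeout = 300  # treats the worker as unresponsive
preload_app = True      # load the model once, in the master, before forking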

Adding gunicorn_conf.py removes all default gunicorn configs

When I create a gunicorn_conf.py to increase the timeout, I think it removes all configurations from the default configs in the Docker image. Is there a way to keep all the other configs set by the default config file (e.g. number of workers, host, port)?

Right now I'm just copying the default config file and adding the additional timeout variable.

Also FYI for anyone looking for the default gunicorn_conf.py, it is here.
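One sketch that avoids maintaining a full copy: execute the image's default config first, then override single values. Gunicorn reads the config module's globals, and a top-level exec() defines names in those same globals; /gunicorn_conf.py as the location of the default file is an assumption here, taken from the statsd issue above:

# /app/gunicorn_conf.py
# pull in every default from the image's own config file first
exec(open("/gunicorn_conf.py").read())

# then override only what needs to change
timeout = 300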

Providing an ARM build?

I am trying to host my FastAPI project on a Raspberry Pi and get the following error every time I hit a RUN:
standard_init_linux.go:190: exec user process caused "exec format error"

A little bit of research led me to believe that I need an ARM image to base my project on. Is there anything planned?


My Dockerfile:

# use FastAPI quick-deploy
FROM tiangolo/uvicorn-gunicorn:python3.7-alpine3.8

# copy whole installation (minus dockerignore)
COPY ./app /app

# install additional dependencies
COPY ./requirements.txt requirements.txt
RUN pip install -r requirements.txt

# entrypoints are managed by FastAPI

How to debug an application

Hi,
Is it possible to debug an app (for example using the PyCharm or VS Code debugger)?
The only solution I found so far is changing the log level to DEBUG, but that's not enough for properly debugging an API.
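One common approach, as a sketch (debugpy is not something the image ships; it is an ordinary pip dependency), is to open a debug-adapter port from the app and attach VS Code to it (PyCharm has its own remote-debugging flow):

# at the top of main.py - requires `pip install debugpy` and publishing
# port 5678 from the container (e.g. docker run -p 5678:5678 ...)
import debugpy

debugpy.listen(("0.0.0.0", 5678))
# optionally block until the IDE attaches:
# debugpy.wait_for_client()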

Debug mode?

I'm trying to figure out how to turn on auto-reloading (i.e. debug mode) while using Docker. I develop in the Docker container by volume-mounting the app directory into the container, so any changes I make to the code are available immediately inside the running container. I can't figure out how to start debug mode using Gunicorn with Uvicorn.

Any ideas?

PS Great project and thanks!
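The image ships an alternative start script for exactly this: /start-reload.sh, the same script mentioned in several issues on this page. It runs a single Uvicorn process with --reload, so together with the volume mount every saved change restarts the server. A sketch:

docker run -d -p 80:80 -v $(pwd)/app:/app myimage /start-reload.sh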

Server crashes in production but not in development

I'm sending 217 calls with a 797 KB payload each to a FastAPI endpoint. When I run the project with /start-reload.sh it works great. When I run it in production, I get [CRITICAL] WORKER TIMEOUT. I'm assuming this is related to Gunicorn. I've read that, under non-async circumstances, worker_class='gevent' might fix the issue; however, in this case the project uses worker_class: uvicorn.workers.UvicornWorker.

Any ideas?

Thanks!

$ docker logs -f 0f011d4e9f2b
Checking for script in /app/prestart.sh
There is no script /app/prestart.sh
{"loglevel": "debug", "workers": 4, "bind": "0.0.0.0:80", "workers_per_core": 1.0, "host": "0.0.0.0", "port": "80"}
[2019-04-04 13:45:53 +0000] [1] [DEBUG] Current configuration:
  config: /app/gunicorn_conf.py
  bind: ['0.0.0.0:80']
  backlog: 2048
  workers: 4
  worker_class: uvicorn.workers.UvicornWorker
  threads: 1
  worker_connections: 1000
  max_requests: 0
  max_requests_jitter: 0
  timeout: 30
  graceful_timeout: 30
  keepalive: 120
  limit_request_line: 4094
  limit_request_fields: 100
  limit_request_field_size: 8190
  reload: False
  reload_engine: auto
  reload_extra_files: []
  spew: False
  check_config: False
  preload_app: False
  sendfile: None
  reuse_port: False
  chdir: /app
  daemon: False
  raw_env: []
  pidfile: None
  worker_tmp_dir: None
  user: 0
  group: 0
  umask: 0
  initgroups: False
  tmp_upload_dir: None
  secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
  forwarded_allow_ips: ['127.0.0.1']
  accesslog: None
  disable_redirect_access_to_syslog: False
  access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
  errorlog: -
  loglevel: debug
  capture_output: False
  logger_class: gunicorn.glogging.Logger
  logconfig: None
  logconfig_dict: {}
  syslog_addr: udp://localhost:514
  syslog: False
  syslog_prefix: None
  syslog_facility: user
  enable_stdio_inheritance: False
  statsd_host: None
  statsd_prefix: 
  proc_name: None
  default_proc_name: api:app
  pythonpath: None
  paste: None
  on_starting: <function OnStarting.on_starting at 0x7feee3f1ce18>
  on_reload: <function OnReload.on_reload at 0x7feee3f1cf28>
  when_ready: <function WhenReady.when_ready at 0x7feee3f320d0>
  pre_fork: <function Prefork.pre_fork at 0x7feee3f321e0>
  post_fork: <function Postfork.post_fork at 0x7feee3f322f0>
  post_worker_init: <function PostWorkerInit.post_worker_init at 0x7feee3f32400>
  worker_int: <function WorkerInt.worker_int at 0x7feee3f32510>
  worker_abort: <function WorkerAbort.worker_abort at 0x7feee3f32620>
  pre_exec: <function PreExec.pre_exec at 0x7feee3f32730>
  pre_request: <function PreRequest.pre_request at 0x7feee3f32840>
  post_request: <function PostRequest.post_request at 0x7feee3f328c8>
  child_exit: <function ChildExit.child_exit at 0x7feee3f329d8>
  worker_exit: <function WorkerExit.worker_exit at 0x7feee3f32ae8>
  nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7feee3f32bf8>
  on_exit: <function OnExit.on_exit at 0x7feee3f32d08>
  proxy_protocol: False
  proxy_allow_ips: ['127.0.0.1']
  keyfile: None
  certfile: None
  ssl_version: 2
  cert_reqs: 0
  ca_certs: None
  suppress_ragged_eofs: True
  do_handshake_on_connect: False
  ciphers: TLSv1
  raw_paste_global_conf: []
[2019-04-04 13:45:53 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-04-04 13:45:53 +0000] [1] [DEBUG] Arbiter booted
[2019-04-04 13:45:53 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2019-04-04 13:45:53 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2019-04-04 13:45:53 +0000] [8] [INFO] Booting worker with pid: 8
[2019-04-04 13:45:53 +0000] [9] [INFO] Booting worker with pid: 9
WARNING:root:email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
[2019-04-04 13:45:53 +0000] [11] [INFO] Booting worker with pid: 11
[2019-04-04 13:45:53 +0000] [12] [INFO] Booting worker with pid: 12
[2019-04-04 13:45:53 +0000] [1] [DEBUG] 4 workers
WARNING:root:email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
WARNING:root:email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
WARNING:root:email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
[2019-04-04 13:45:55 +0000] [12] [INFO] Started server process [12]
[2019-04-04 13:45:55 +0000] [11] [INFO] Started server process [11]
[2019-04-04 13:45:55 +0000] [12] [INFO] Waiting for application startup.
[2019-04-04 13:45:55 +0000] [11] [INFO] Waiting for application startup.
[2019-04-04 13:45:55 +0000] [8] [INFO] Started server process [8]
[2019-04-04 13:45:55 +0000] [9] [INFO] Started server process [9]
[2019-04-04 13:45:55 +0000] [8] [INFO] Waiting for application startup.
[2019-04-04 13:45:55 +0000] [9] [INFO] Waiting for application startup.
[2019-04-04 13:49:30 +0000] [11] [DEBUG] ('172.18.0.1', 41826) - Connected
[2019-04-04 13:49:30 +0000] [11] [DEBUG] ('172.18.0.1', 41830) - Connected
[2019-04-04 13:49:30 +0000] [12] [DEBUG] ('172.18.0.1', 41828) - Connected
[2019-04-04 13:49:30 +0000] [9] [DEBUG] ('172.18.0.1', 41832) - Connected
[2019-04-04 13:49:30 +0000] [9] [DEBUG] ('172.18.0.1', 41834) - Connected
[2019-04-04 13:49:31 +0000] [9] [DEBUG] ('172.18.0.1', 41836) - Connected
[2019-04-04 13:49:31 +0000] [9] [DEBUG] ('172.18.0.1', 41838) - Connected
[2019-04-04 13:49:31 +0000] [9] [DEBUG] ('172.18.0.1', 41840) - Connected
[2019-04-04 13:49:31 +0000] [9] [DEBUG] ('172.18.0.1', 41842) - Connected
...
[2019-04-04 13:49:33 +0000] [9] [DEBUG] ('172.18.0.1', 42264) - Connected
[2019-04-04 13:49:33 +0000] [9] [DEBUG] ('172.18.0.1', 42266) - Connected
[2019-04-04 13:49:33 +0000] [12] [DEBUG] ('172.18.0.1', 42228) - Connected
[2019-04-04 13:49:33 +0000] [12] [DEBUG] ('172.18.0.1', 42260) - Connected
[2019-04-04 13:49:34 +0000] [12] [INFO] ('172.18.0.1', 41828) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:34 +0000] [11] [INFO] ('172.18.0.1', 41826) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:34 +0000] [11] [DEBUG] ('172.18.0.1', 41826) - Disconnected
[2019-04-04 13:49:34 +0000] [12] [DEBUG] ('172.18.0.1', 41828) - Disconnected
[2019-04-04 13:49:34 +0000] [11] [INFO] ('172.18.0.1', 41830) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:34 +0000] [11] [DEBUG] ('172.18.0.1', 41830) - Disconnected
[2019-04-04 13:49:39 +0000] [11] [INFO] ('172.18.0.1', 41874) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:39 +0000] [11] [INFO] ('172.18.0.1', 41888) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:39 +0000] [11] [INFO] ('172.18.0.1', 41892) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:39 +0000] [11] [INFO] ('172.18.0.1', 41884) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:39 +0000] [11] [INFO] ('172.18.0.1', 41886) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:39 +0000] [11] [INFO] ('172.18.0.1', 41918) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:39 +0000] [11] [DEBUG] ('172.18.0.1', 41918) - Disconnected
[2019-04-04 13:49:39 +0000] [11] [DEBUG] ('172.18.0.1', 41874) - Disconnected
[2019-04-04 13:49:39 +0000] [11] [DEBUG] ('172.18.0.1', 41892) - Disconnected
[2019-04-04 13:49:40 +0000] [11] [INFO] ('172.18.0.1', 41898) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [11] [INFO] ('172.18.0.1', 41894) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [11] [INFO] ('172.18.0.1', 41900) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [11] [DEBUG] ('172.18.0.1', 41886) - Disconnected
[2019-04-04 13:49:40 +0000] [11] [DEBUG] ('172.18.0.1', 41888) - Disconnected
[2019-04-04 13:49:40 +0000] [11] [DEBUG] ('172.18.0.1', 41884) - Disconnected
[2019-04-04 13:49:40 +0000] [11] [INFO] ('172.18.0.1', 42046) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [12] [INFO] ('172.18.0.1', 41882) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [12] [INFO] ('172.18.0.1', 41922) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [12] [INFO] ('172.18.0.1', 41926) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [11] [INFO] ('172.18.0.1', 41940) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [12] [INFO] ('172.18.0.1', 42028) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [11] [INFO] ('172.18.0.1', 42066) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [11] [INFO] ('172.18.0.1', 41896) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [11] [DEBUG] ('172.18.0.1', 41894) - Disconnected
[2019-04-04 13:49:40 +0000] [11] [DEBUG] ('172.18.0.1', 41898) - Disconnected
[2019-04-04 13:49:40 +0000] [11] [DEBUG] ('172.18.0.1', 41900) - Disconnected
[2019-04-04 13:49:40 +0000] [11] [DEBUG] ('172.18.0.1', 41940) - Disconnected
[2019-04-04 13:49:40 +0000] [11] [DEBUG] ('172.18.0.1', 42046) - Disconnected
[2019-04-04 13:49:40 +0000] [11] [DEBUG] ('172.18.0.1', 41896) - Disconnected
[2019-04-04 13:49:40 +0000] [11] [DEBUG] ('172.18.0.1', 42066) - Disconnected
[2019-04-04 13:49:40 +0000] [11] [INFO] ('172.18.0.1', 42078) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [12] [INFO] ('172.18.0.1', 41930) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [12] [INFO] ('172.18.0.1', 41914) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [12] [INFO] ('172.18.0.1', 41934) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [12] [INFO] ('172.18.0.1', 41910) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [11] [DEBUG] ('172.18.0.1', 42078) - Disconnected
[2019-04-04 13:49:40 +0000] [11] [INFO] ('172.18.0.1', 42080) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [11] [DEBUG] ('172.18.0.1', 42080) - Disconnected
[2019-04-04 13:49:40 +0000] [12] [INFO] ('172.18.0.1', 41938) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:40 +0000] [12] [INFO] ('172.18.0.1', 41916) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:41 +0000] [11] [INFO] ('172.18.0.1', 42160) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:41 +0000] [11] [DEBUG] ('172.18.0.1', 42160) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [INFO] ('172.18.0.1', 41920) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:41 +0000] [12] [INFO] ('172.18.0.1', 41924) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41926) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41930) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41922) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 42028) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41910) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41934) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41882) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41914) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41938) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41916) - Disconnected
[2019-04-04 13:49:41 +0000] [11] [INFO] ('172.18.0.1', 42176) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:41 +0000] [11] [DEBUG] ('172.18.0.1', 42176) - Disconnected
[2019-04-04 13:49:41 +0000] [11] [INFO] ('172.18.0.1', 42164) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:41 +0000] [11] [DEBUG] ('172.18.0.1', 42164) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41920) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41924) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [INFO] ('172.18.0.1', 41902) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41902) - Disconnected
[2019-04-04 13:49:41 +0000] [12] [INFO] ('172.18.0.1', 41928) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:41 +0000] [12] [DEBUG] ('172.18.0.1', 41928) - Disconnected
[2019-04-04 13:49:42 +0000] [12] [INFO] ('172.18.0.1', 42146) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:42 +0000] [12] [DEBUG] ('172.18.0.1', 42146) - Disconnected
[2019-04-04 13:49:42 +0000] [12] [INFO] ('172.18.0.1', 41932) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:42 +0000] [12] [DEBUG] ('172.18.0.1', 41932) - Disconnected
[2019-04-04 13:49:42 +0000] [12] [INFO] ('172.18.0.1', 42086) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:42 +0000] [12] [INFO] ('172.18.0.1', 41946) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:42 +0000] [12] [DEBUG] ('172.18.0.1', 42086) - Disconnected
[2019-04-04 13:49:42 +0000] [12] [DEBUG] ('172.18.0.1', 41946) - Disconnected
[2019-04-04 13:49:42 +0000] [12] [INFO] ('172.18.0.1', 41936) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:42 +0000] [12] [INFO] ('172.18.0.1', 42228) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:42 +0000] [12] [DEBUG] ('172.18.0.1', 42228) - Disconnected
[2019-04-04 13:49:42 +0000] [12] [DEBUG] ('172.18.0.1', 41936) - Disconnected
[2019-04-04 13:49:43 +0000] [12] [INFO] ('172.18.0.1', 42188) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:43 +0000] [12] [DEBUG] ('172.18.0.1', 42188) - Disconnected
[2019-04-04 13:49:43 +0000] [12] [INFO] ('172.18.0.1', 42186) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:43 +0000] [12] [DEBUG] ('172.18.0.1', 42186) - Disconnected
[2019-04-04 13:49:43 +0000] [12] [INFO] ('172.18.0.1', 42092) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:43 +0000] [12] [DEBUG] ('172.18.0.1', 42092) - Disconnected
[2019-04-04 13:49:43 +0000] [12] [INFO] ('172.18.0.1', 42260) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:43 +0000] [12] [DEBUG] ('172.18.0.1', 42260) - Disconnected
[2019-04-04 13:49:50 +0000] [8] [INFO] ('172.18.0.1', 41846) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:50 +0000] [8] [INFO] ('172.18.0.1', 41878) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:50 +0000] [8] [INFO] ('172.18.0.1', 41960) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:50 +0000] [8] [INFO] ('172.18.0.1', 42098) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:50 +0000] [8] [INFO] ('172.18.0.1', 41876) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:50 +0000] [8] [INFO] ('172.18.0.1', 41962) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:50 +0000] [8] [INFO] ('172.18.0.1', 42096) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:50 +0000] [8] [INFO] ('172.18.0.1', 42094) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:50 +0000] [8] [INFO] ('172.18.0.1', 42194) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:50 +0000] [8] [INFO] ('172.18.0.1', 41880) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42200) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42196) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42198) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 41964) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42210) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42202) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42250) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42212) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42218) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42246) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42216) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42214) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42204) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42206) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42252) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42230) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42112) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:51 +0000] [8] [INFO] ('172.18.0.1', 42248) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:52 +0000] [8] [INFO] ('172.18.0.1', 42244) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:52 +0000] [9] [INFO] ('172.18.0.1', 41832) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:52 +0000] [8] [INFO] ('172.18.0.1', 42108) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:52 +0000] [8] [INFO] ('172.18.0.1', 42110) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:52 +0000] [8] [INFO] ('172.18.0.1', 42058) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:52 +0000] [8] [INFO] ('172.18.0.1', 42116) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:52 +0000] [8] [INFO] ('172.18.0.1', 42208) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:52 +0000] [8] [INFO] ('172.18.0.1', 42106) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:52 +0000] [8] [INFO] ('172.18.0.1', 41956) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:52 +0000] [8] [INFO] ('172.18.0.1', 42060) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:52 +0000] [9] [INFO] ('172.18.0.1', 41834) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:52 +0000] [9] [INFO] ('172.18.0.1', 41842) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [8] [INFO] ('172.18.0.1', 42232) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [8] [INFO] ('172.18.0.1', 41958) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [8] [INFO] ('172.18.0.1', 41968) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [8] [INFO] ('172.18.0.1', 41942) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [8] [INFO] ('172.18.0.1', 42114) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [8] [INFO] ('172.18.0.1', 41966) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [8] [INFO] ('172.18.0.1', 42134) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [9] [INFO] ('172.18.0.1', 41838) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [8] [INFO] ('172.18.0.1', 41944) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [9] [INFO] ('172.18.0.1', 41836) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [9] [INFO] ('172.18.0.1', 41844) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [9] [INFO] ('172.18.0.1', 41856) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [9] [INFO] ('172.18.0.1', 41854) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [9] [INFO] ('172.18.0.1', 41840) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [9] [INFO] ('172.18.0.1', 41860) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [9] [INFO] ('172.18.0.1', 41862) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [9] [INFO] ('172.18.0.1', 41866) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [9] [INFO] ('172.18.0.1', 41858) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:53 +0000] [9] [INFO] ('172.18.0.1', 41852) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:54 +0000] [9] [INFO] ('172.18.0.1', 41868) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:54 +0000] [8] [INFO] ('172.18.0.1', 42130) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:54 +0000] [8] [INFO] ('172.18.0.1', 42132) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:54 +0000] [8] [INFO] ('172.18.0.1', 42136) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42208) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42198) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42096) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42218) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42200) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42230) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42210) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42206) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 41880) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42250) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42110) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 41876) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42094) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42202) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42216) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 41962) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 41878) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42116) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42194) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42058) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42204) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42246) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42248) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42212) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 41960) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42196) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42252) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42112) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42214) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42098) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 41846) - Disconnected
[2019-04-04 13:49:54 +0000] [8] [DEBUG] ('172.18.0.1', 42244) - Disconnected
[2019-04-04 13:49:55 +0000] [8] [DEBUG] ('172.18.0.1', 41964) - Disconnected
[2019-04-04 13:49:55 +0000] [8] [DEBUG] ('172.18.0.1', 42108) - Disconnected
[2019-04-04 13:49:55 +0000] [9] [INFO] ('172.18.0.1', 41872) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [9] [INFO] ('172.18.0.1', 41848) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [9] [INFO] ('172.18.0.1', 42034) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [9] [INFO] ('172.18.0.1', 41864) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [9] [INFO] ('172.18.0.1', 42038) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [8] [INFO] ('172.18.0.1', 42128) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [8] [INFO] ('172.18.0.1', 42138) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [9] [INFO] ('172.18.0.1', 42042) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [9] [INFO] ('172.18.0.1', 42044) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [9] [INFO] ('172.18.0.1', 42032) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [9] [INFO] ('172.18.0.1', 42040) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [9] [INFO] ('172.18.0.1', 41908) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [8] [INFO] ('172.18.0.1', 42144) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:55 +0000] [8] [INFO] ('172.18.0.1', 42156) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 41972) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 42140) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 41954) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 41956) - Disconnected
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 42060) - Disconnected
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 42106) - Disconnected
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42024) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42020) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 41904) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 42142) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 41984) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 41976) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42104) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 42182) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 41978) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42036) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42022) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 41980) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 42126) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42030) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 41974) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 41850) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 41942) - Disconnected
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 41966) - Disconnected
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 41906) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42052) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42048) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 41958) - Disconnected
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 42136) - Disconnected
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 41968) - Disconnected
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 42232) - Disconnected
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 42130) - Disconnected
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 41870) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42062) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42070) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42120) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42068) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42056) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42064) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 41948) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42054) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 42132) - Disconnected
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 42114) - Disconnected
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 41944) - Disconnected
[2019-04-04 13:49:56 +0000] [8] [DEBUG] ('172.18.0.1', 42134) - Disconnected
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 41970) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [8] [INFO] ('172.18.0.1', 42154) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42076) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42026) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42074) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42050) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:56 +0000] [9] [INFO] ('172.18.0.1', 42118) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42150) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42156) - Disconnected
[2019-04-04 13:49:57 +0000] [9] [INFO] ('172.18.0.1', 41950) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 41972) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42128) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42140) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 41954) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42138) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42144) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42152) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 41988) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42148) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [9] [INFO] ('172.18.0.1', 42124) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42180) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42184) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 41986) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [9] [INFO] ('172.18.0.1', 41912) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 41978) - Disconnected
[2019-04-04 13:49:57 +0000] [9] [INFO] ('172.18.0.1', 42072) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [9] [INFO] ('172.18.0.1', 41952) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42126) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 41976) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42182) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 41974) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 41980) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 41984) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42142) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 41982) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42168) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42090) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42258) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42262) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 41970) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42150) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42154) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42240) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42254) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42100) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42180) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42184) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42148) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 41986) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 41988) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42152) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [INFO] ('172.18.0.1', 42256) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42090) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42258) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 41982) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42168) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42262) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42254) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42100) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42240) - Disconnected
[2019-04-04 13:49:57 +0000] [8] [DEBUG] ('172.18.0.1', 42256) - Disconnected
[2019-04-04 13:49:57 +0000] [9] [INFO] ('172.18.0.1', 42082) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:58 +0000] [9] [INFO] ('172.18.0.1', 42174) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:58 +0000] [9] [INFO] ('172.18.0.1', 41992) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:58 +0000] [9] [INFO] ('172.18.0.1', 42122) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41862) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41868) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41858) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41852) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41842) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41866) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41860) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41832) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41838) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41856) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41836) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41844) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41854) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41840) - Disconnected
[2019-04-04 13:49:58 +0000] [9] [DEBUG] ('172.18.0.1', 41834) - Disconnected
[2019-04-04 13:49:59 +0000] [9] [INFO] ('172.18.0.1', 42170) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:59 +0000] [9] [INFO] ('172.18.0.1', 41998) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:59 +0000] [9] [INFO] ('172.18.0.1', 42000) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:59 +0000] [9] [INFO] ('172.18.0.1', 42084) - "POST /signals/fanout HTTP/1.1" 200
[2019-04-04 13:49:59 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:8)
[2019-04-04 13:49:59 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:9)
[2019-04-04 13:49:59 +0000] [96] [INFO] Booting worker with pid: 96
[2019-04-04 13:49:59 +0000] [1] [DEBUG] 3 workers
[2019-04-04 13:49:59 +0000] [97] [INFO] Booting worker with pid: 97
[2019-04-04 13:49:59 +0000] [1] [DEBUG] 4 workers
WARNING:root:email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
WARNING:root:email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator

[2019-04-04 13:50:01 +0000] [97] [INFO] Started server process [97]
[2019-04-04 13:50:01 +0000] [96] [INFO] Started server process [96]
[2019-04-04 13:50:01 +0000] [96] [INFO] Waiting for application startup.
[2019-04-04 13:50:01 +0000] [97] [INFO] Waiting for application startup.
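
The CRITICAL "WORKER TIMEOUT" lines above are Gunicorn's master process (pid 1) killing workers 8 and 9 for staying silent longer than the configured timeout, then booting replacements 96 and 97. If long-running requests are expected, the limit can be raised through a standard Gunicorn config file (or Gunicorn's own -c flag / GUNICORN_CMD_ARGS environment variable). A minimal sketch, where the file name and values are illustrative rather than this image's exact defaults:

# gunicorn_conf.py -- illustrative override; timeout and graceful_timeout
# are standard Gunicorn settings. A worker that stays silent for more than
# 'timeout' seconds is killed and restarted, which is exactly what the
# CRITICAL WORKER TIMEOUT lines above report.
timeout = 120           # seconds of worker silence before the master kills it
graceful_timeout = 30   # seconds a worker gets to finish up after SIGTERM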


ERR_CONNECTION_REFUSED with WebSocket

I am unable to use WebSockets inside the Docker container. My JavaScript:
var ws = new WebSocket("ws://localhost:80/ws");

and my Python route:
from fastapi import FastAPI, WebSocket

app = FastAPI()


@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    # Simple echo endpoint: accept the connection, then echo each message back.
    await websocket.accept()
    while True:
        data = await websocket.receive_text()
        await websocket.send_text(f"Message text was: {data}")

This is using tiangolo/uvicorn-gunicorn:python3.6-alpine3.8.

Thanks!
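
For reference, ERR_CONNECTION_REFUSED usually means nothing is listening on the host port at all (for example, the container was started without publishing port 80), rather than a problem with the route itself. One quick way to check from the host is a minimal client sketch, assuming the third-party websockets package (pip install websockets) and a container started with something like -p 80:80:

import asyncio

import websockets  # third-party client library: pip install websockets


async def check() -> None:
    # Same URL the browser script uses; this fails fast with
    # "connection refused" if the container port is not published.
    async with websockets.connect("ws://localhost:80/ws") as ws:
        await ws.send("ping")
        print(await ws.recv())  # expect: Message text was: ping


asyncio.run(check())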
