reef-technologies / cookiecutter-rt-django
CookieCutter template for Django application projects with docker-compose etc.
License: BSD 3-Clause "New" or "Revised" License
Based on how it's done in the reef.pl website project, the whole frontend setup lives inside app/src/frontend, where app/src is also the backend root. This makes it hard to dockerize the project properly: the backend docker image also contains the whole frontend code, and bundle.js (a minified JS file with the whole compiled frontend) has to be built manually and checked into source control.
Suggestion of a solution: prepare a separate directory for the frontend that doesn't overlap with the backend. Dockerize it, so that building the docker image also creates the minified bundle. During deploy, copy that file into the nginx staticfiles volume.
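A sketch of how the separate frontend build could look, assuming a node-based frontend living in a hypothetical `frontend/` directory next to the backend (the paths and npm scripts are assumptions, not part of the template):

```dockerfile
# frontend/Dockerfile (sketch)
FROM node:lts AS build
WORKDIR /build
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build   # assumed to produce the minified bundle in dist/

# The final stage carries only the built assets, so the deploy step can
# copy them from this image into the nginx staticfiles volume.
FROM busybox
COPY --from=build /build/dist /dist
```

With this split, the backend image never needs the frontend sources, and bundle.js no longer has to be checked into the repository.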
There's a very neat alternative to letsencrypt CLI - https://github.com/auto-ssl/lua-resty-auto-ssl#lua-resty-auto-ssl. It's OpenResty (nginx with lua support) module generating and renewing certificates on-the-fly, as they're requested. I've created a pretty popular docker image making it very easy to use https://github.com/Valian/docker-nginx-auto-ssl
```yaml
# docker-compose.yml
version: '3'

services:
  # your application, listening on the port specified in the `SITES` env variable
  myapp:
    image: nginx
  nginx:
    image: valian/docker-nginx-auto-ssl
    restart: on-failure
    ports:
      - 80:80
      - 443:443
    volumes:
      - ssl_data:/etc/resty-auto-ssl
    environment:
      ALLOWED_DOMAINS: 'yourdomain.com'
      SITES: 'yourdomain.com=myapp:80'

volumes:
  ssl_data:
```
No crontab, no hassle; it just works. It can be integrated directly into the nginx instance serving media / static files.
Raising:
```
(project) ghost:src vmax$ celery -A project worker -B
Traceback (most recent call last):
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/kombu/transport/base.py", line 123, in __getattr__
    return self[key]
KeyError: 'async'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/vmax/.virtualenvs/scraper/bin/celery", line 11, in <module>
    sys.exit(main())
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/__main__.py", line 14, in main
    _main()
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/celery.py", line 326, in main
    cmd.execute_from_commandline(argv)
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/celery.py", line 488, in execute_from_commandline
    super(CeleryCommand, self).execute_from_commandline(argv)))
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/base.py", line 281, in execute_from_commandline
    return self.handle_argv(self.prog_name, argv[1:])
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/celery.py", line 480, in handle_argv
    return self.execute(command, argv)
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/celery.py", line 412, in execute
    ).run_from_argv(self.prog_name, argv[1:], command=argv[0])
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/worker.py", line 221, in run_from_argv
    return self(*args, **options)
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/base.py", line 244, in __call__
    ret = self.run(*args, **kwargs)
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/worker.py", line 255, in run
    **kwargs)
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/worker/worker.py", line 99, in __init__
    self.setup_instance(**self.prepare_args(**kwargs))
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/worker/worker.py", line 122, in setup_instance
    self.should_use_eventloop() if use_eventloop is None
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/worker/worker.py", line 241, in should_use_eventloop
    self._conninfo.transport.implements.async and
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/kombu/transport/base.py", line 125, in __getattr__
    raise AttributeError(key)
AttributeError: async
```
We can detect whether we need to run the postgres container; if so, run only this container, and then run migrations in a new container created from the app image.
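A sketch of the idea as deploy commands, assuming the compose service names `db` (postgres) and `app`, and the `wait_for_database` management command quoted elsewhere in this template; the detection condition is a hypothetical example:

```shell
# Start only the database service (skip if an external DB is configured)
if grep -q 'image: postgres' docker-compose.yml; then
  docker-compose up -d db
fi

# Run migrations in a one-off container created from the app image,
# without starting the full application stack.
docker-compose run --rm app python manage.py wait_for_database
docker-compose run --rm app python manage.py migrate
```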
The situation with backups is not good. Nobody checks whether they actually work (i.e. whether they are created, and whether whatever is created is good for restoring after a failure), and we have only a small section in the README about backups.
Proposal:
After trying to follow the manual setup instructions in the README, it turned out that not all steps work, e.g.:
- `main` branch, not `master`
- `docker compose`; update scripts to use the new version

1.1) Static files are collected during image build: the target of `collectstatic` is chosen from `settings.py`, which is `root('static')` == `/root/src/static`:

```
STATIC_ROOT = env('STATIC_ROOT', default=root('static'))
```
1.3) docker-compose mounts a separate volume for static files at the same location (`/root/src/static`):

```
volumes:
  - backend-static:/root/src/static
```
1.4) Thus whatever was collected during the build phase is overridden by the volume mount at startup.
1.5) Even if you manually log into the running container and run `collectstatic`, it won't work for `ManifestStaticFilesStorage`, which requires everything to be collected before app startup; ongoing changes won't be picked up until a restart.
Proposal:
2.1) If we decide to run `collectstatic` during docker image build, then remove the volume mount in docker-compose. This may not work if some project collects static files to S3 or similar; it's weird to do that at image build time rather than when deploying.
2.2) If we decide to run `collectstatic` during deployment, then we should remove it from the image build.
Side note:
3.1) I think `ManifestStaticFilesStorage` is a nice thing for effective and reliable versioning of static files and should be included in the cookiecutter template.
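For reference, enabling it in the template would be a one-line settings change, using the storage class that ships with Django's staticfiles app:

```
# settings.py sketch
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
```

Note that with this storage, `collectstatic` must run before the app serves any requests, which is exactly why points 1.4 and 1.5 above matter.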
```shell
docker-compose stop $SERVICES
docker-compose up -d
docker-compose exec -T app python manage.py collectstatic --no-input --clear  # <-----
docker-compose exec -T app python manage.py wait_for_database
docker-compose exec -T app python manage.py migrate
```
It makes dev / production more similar and allows dev environment changes to be propagated easily across developers. Of course, the virtualenv approach should still work.
The first is to block access via IP or other domains. This can be done by creating a default "blocking" server, e.g.:
```nginx
server {
    listen 80 default_server;
    server_name _;
    return 444;
}
```
and a similar one for HTTPS. It will solve the "Invalid HTTP_HOST header" issues in Sentry.
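The HTTPS variant needs a certificate to terminate TLS before it can reject the request; a sketch, where the snakeoil certificate path is an assumption (any throwaway self-signed certificate works):

```nginx
server {
    listen 443 ssl default_server;
    server_name _;
    # Dummy certificate: clients hitting the IP or an unknown domain
    # get a certificate warning and then connection closed (444).
    ssl_certificate     /etc/ssl/certs/ssl-cert-snakeoil.pem;
    ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
    return 444;
}
```

On nginx 1.19.4+ the certificate can be dropped entirely by using `ssl_reject_handshake on;` in the default server instead.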
The second is to catch unneeded traffic when the first one cannot be used. Use e.g. `HEAD /admin/login/`, which should always be available, as opposed to `GET /`.
Currently periodic backups optionally require `sentry-cli` (for error reporting) and `b2` (for uploading backups to B2). I think we should add the installation of these two CLI tools to `setup-prod.sh`. It's very easy to forget to add them and end up with a broken backup process.
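A sketch of what `setup-prod.sh` could gain, assuming the tools' documented install channels (the sentry-cli install script and the `b2` package from PyPI):

```shell
# Install the two CLI tools the backup scripts optionally depend on
curl -sL https://sentry.io/get-cli/ | bash   # sentry-cli, official installer
pip install --upgrade b2                     # Backblaze B2 CLI
```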
It makes it obvious what the "real" requirements are and makes them easy to manage. It's my standard approach in all projects and seems better than `requirements_freeze.py` (because it's widely used and maintained by other people):
https://github.com/jazzband/pip-tools
Example:

```
# requirements.in
django
```

Run `pip-compile requirements.in`:

```
$ pip-compile requirements.in
$ cat requirements.txt
#
# This file is autogenerated by pip-compile
# To update, run:
#
#    pip-compile requirements.in
#
asgiref==3.2.3    # via django
django==3.0.3     # via -r requirements.in
pytz==2019.3      # via django
sqlparse==0.3.0   # via django
```
It will produce `requirements.txt`, with Django and all underlying dependencies pinned.
To verify:

```python
from <>.celery import app

app.conf.result_expires
# expected to return 604800 (int)
# but returns datetime.timedelta(1)
# datetime.timedelta(1).total_seconds() == 86400 (24h, which is the default)
```
Suggested fix: instead, specify `CELERY_RESULT_EXPIRES`, which saves the correct value.
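A minimal sketch of the mismatch and the suggested fix; it uses only the standard library, and the setting name and the 7-day value come from this issue:

```python
import datetime

# Celery's fallback when the setting is lost is timedelta(days=1),
# i.e. 86400 seconds (24h) instead of the intended 7 days.
assert datetime.timedelta(1).total_seconds() == 86400

# Suggested fix: set CELERY_RESULT_EXPIRES to an int number of seconds,
# which round-trips through app.conf unchanged.
CELERY_RESULT_EXPIRES = 604800  # 7 days
assert CELERY_RESULT_EXPIRES == 7 * 24 * 3600
```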
As discussed, we can update cookiecutter's "poor man's deployment" to not separate the bare repo and the materialized project files.
E.g.:
I don't like having nginx outside of the project repo. I think it's a bad idea:
I usually use a tool called pre-commit: https://pre-commit.com/
It works great as an addition to CI/CD. Just before making a commit, pre-commit automatically runs all changed files through a series of fast checks, such as flake8, black, isort etc. It usually takes less than a second and makes it possible to eliminate style-related discussions during code review.
My usual configuration file:
```yaml
repos:
  - repo: https://github.com/ambv/black
    rev: 20.8b1
    hooks:
      - id: black
        language_version: python3.8
        args: ['--line-length=120']
  - repo: https://github.com/asottile/seed-isort-config
    rev: v1.6.0
    hooks:
      - id: seed-isort-config
        language_version: python3.8
        args: ['--application-directories=src/backend']
  - repo: https://github.com/pre-commit/mirrors-isort
    rev: v4.3.4
    hooks:
      - id: isort
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.1.0
    hooks:
      - id: end-of-file-fixer
      - id: check-merge-conflict
      - id: mixed-line-ending
      - id: trailing-whitespace
      - id: check-added-large-files
        args: ['--maxkb=1024']
      - id: flake8
        language_version: python3
```
Components to upgrade:
When the `EMAIL_TARGET` env var is not set, running

```shell
if [ -n "${EMAIL_HOST}" ] && [ -n "${EMAIL_TARGET}" ]; then
```

fails with:

```
bin/backup-db.sh: line 36: EMAIL_TARGET: unbound variable
```

One possible solution is to change `#!/bin/bash -eu` to `#!/bin/bash -e`, or to do a more advanced check for undefined variables.
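A sketch of the "more advanced check": keep `-u` (nounset) but use `${VAR:-}` default expansion, so an unset variable expands to an empty string instead of aborting the script. The variable names come from the snippet above; the echo messages are placeholders:

```shell
#!/bin/bash -eu
# ${EMAIL_TARGET:-} expands to "" when the variable is unset, so the
# test no longer trips the -u (nounset) option.
if [ -n "${EMAIL_HOST:-}" ] && [ -n "${EMAIL_TARGET:-}" ]; then
  echo "email reporting enabled"
else
  echo "email reporting disabled"
fi
```

This keeps the safety of `-u` for genuinely mandatory variables while tolerating the optional ones.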
It would probably be nice to link https://prometheus.io/docs/practices/naming/ and to write down there any deviations from that recommendation.
It could be invoke + fabric2. More advanced mechanisms, which in our opinion are not needed, are e.g. ansible or salt.
Requirements: