reef-technologies / cookiecutter-rt-django

CookieCutter template for Django application projects with docker-compose etc.

License: BSD 3-Clause "New" or "Revised" License

Python 45.05% Shell 23.34% Dockerfile 2.73% HCL 28.89%

cookiecutter-rt-django's People

Contributors

agoncharov-reef, anuj-reef, anujism, awierzbicki-reef, dependabot[bot], emnoor-reef, jsuchan-reef, kglod-reef, kkalinowski-reef, magarwal-reef, mjurbanski-reef, mlech-reef, mlesniewski-reef, mpnowacki-reef, mszumocki-reef, mzukowski-reef, pawelpolewicz, pawelwilczynski, ppolewicz, pstachurski-reef, vbaltrusaitis-reef, vmax, wbancer-reef


cookiecutter-rt-django's Issues

Prepare place in template for frontend development

Based on how it's done in the reef.pl website project, the whole frontend setup lives inside app/src/frontend, where app/src is also the backend root. This makes it harder to dockerize the project properly: the backend docker image also contains the whole frontend code, and bundle.js (a minified JS file with the whole compiled project) has to be built manually and checked into source control.

Suggested solution: prepare a separate directory for the frontend that doesn't overlap with the backend. Dockerize it, so that building the docker image also produces the minified bundle. During deploy, copy that file into the nginx staticfiles volume.

Consider switching from the letsencrypt crontab to lua-resty-auto-ssl

There's a very neat alternative to the letsencrypt CLI: https://github.com/auto-ssl/lua-resty-auto-ssl#lua-resty-auto-ssl. It's an OpenResty (nginx with Lua support) module that generates and renews certificates on the fly, as they're requested. I've created a fairly popular docker image that makes it very easy to use: https://github.com/Valian/docker-nginx-auto-ssl

# docker-compose.yml
version: '3'
services:
  # your application, listening on port specified in `SITES` env variable
  myapp:
    image: nginx

  nginx:
    image: valian/docker-nginx-auto-ssl
    restart: on-failure
    ports:
      - 80:80
      - 443:443
    volumes:
      - ssl_data:/etc/resty-auto-ssl
    environment:
      ALLOWED_DOMAINS: 'yourdomain.com'
      SITES: 'yourdomain.com=myapp:80'
  

volumes:
  ssl_data:

No crontab, no hassle, it just works. Can be integrated directly into nginx serving media / static files.

Celery does not run

Raising:

(project) ghost:src vmax$ celery -A project worker -B
Traceback (most recent call last):
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/kombu/transport/base.py", line 123, in __getattr__
    return self[key]
KeyError: 'async'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/vmax/.virtualenvs/scraper/bin/celery", line 11, in <module>
    sys.exit(main())
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/__main__.py", line 14, in main
    _main()
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/celery.py", line 326, in main
    cmd.execute_from_commandline(argv)
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/celery.py", line 488, in execute_from_commandline
    super(CeleryCommand, self).execute_from_commandline(argv)))
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/base.py", line 281, in execute_from_commandline
    return self.handle_argv(self.prog_name, argv[1:])
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/celery.py", line 480, in handle_argv
    return self.execute(command, argv)
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/celery.py", line 412, in execute
    ).run_from_argv(self.prog_name, argv[1:], command=argv[0])
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/worker.py", line 221, in run_from_argv
    return self(*args, **options)
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/base.py", line 244, in __call__
    ret = self.run(*args, **kwargs)
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/bin/worker.py", line 255, in run
    **kwargs)
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/worker/worker.py", line 99, in __init__
    self.setup_instance(**self.prepare_args(**kwargs))
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/worker/worker.py", line 122, in setup_instance
    self.should_use_eventloop() if use_eventloop is None
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/celery/worker/worker.py", line 241, in should_use_eventloop
    self._conninfo.transport.implements.async and
  File "/Users/vmax/.virtualenvs/scraper/lib/python3.6/site-packages/kombu/transport/base.py", line 125, in __getattr__
    raise AttributeError(key)
AttributeError: async
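The root cause: async became a hard keyword in Python 3.7, and kombu renamed the transport.implements.async attribute to implements.asynchronous in 4.2, so upgrading celery and kombu to >= 4.2 resolves this. The keyword clash can be reproduced without celery at all:

```python
# `async` is a reserved keyword in Python 3.7+, so attribute access spelled
# `obj.implements.async` (as in old celery/kombu) no longer even parses:
import ast

try:
    ast.parse("conninfo.transport.implements.async")
    parses = True
except SyntaxError:
    parses = False

print("parses:", parses)  # False on Python 3.7+
```

This is why the traceback goes through kombu's __getattr__ fallback: celery reaches the attribute dynamically, and kombu no longer defines it under the old name.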

No strict backup rules

The situation with backups is not good. Nobody checks whether they actually work (i.e. whether they are created, and whether whatever is created is good for restoring after a failure), and we have only a small section about backups in the README.

Proposal:

  1. Add strict policy for backing up each new project (script backup, cloud service snapshots, backup images rotation policy, b2-or-not-b2 etc)
  2. Add backups creation stats to grafana, set up alerts if backups are missing
  3. Add periodic checks that backups are usable
  4. Check all existing projects for all clients and ensure all is fine
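Point 2 could start from a small freshness check; a minimal sketch (directory layout and threshold are illustrative, not part of the template) whose result could be exported to Grafana:

```python
# Hypothetical sketch: report the age of the newest backup file so an alert
# can fire when backups stop appearing. Paths/thresholds are illustrative.
import time
from pathlib import Path


def newest_backup_age(backup_dir: str) -> float:
    """Return seconds since the most recent file in backup_dir was modified."""
    files = [p for p in Path(backup_dir).iterdir() if p.is_file()]
    if not files:
        return float("inf")  # no backups at all -> always alert
    newest = max(p.stat().st_mtime for p in files)
    return time.time() - newest


def backups_healthy(backup_dir: str, max_age_seconds: float = 26 * 3600) -> bool:
    """True if a backup appeared within the last ~26h (daily schedule + slack)."""
    return newest_backup_age(backup_dir) <= max_age_seconds
```

Note this only covers "backups are being created"; point 3 (restorability) still needs a periodic trial restore.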

Readme is outdated

After trying to follow the manual setup instructions in the README, it turned out that not all steps work, e.g.:

  • copying post-receive file - env vars are substituted
  • www domain is not needed sometimes
  • github creates main branch, not master
  • fe_sendauth: no password supplied (WAT?) - because postgres password should be set in TWO places in .env file
  • nginx doesn't start without cadvisor (even if I don't need the latter); cadvisor doesn't start w/o monitoring certificates
  • no need to install docker-compose since it's now docker compose; update scripts to use new version

Staticfiles improperly collected

1.1) Static files are collected during image build:

RUN ENV=prod ENV_FILL_MISSING_VALUES=1 SECRET_KEY=dummy python3 manage.py collectstatic --no-input --clear

1.2) Since env vars are not automatically passed during image build, the target folder for collectstatic is taken from settings.py, which is root('static') == /root/src/static:

STATIC_ROOT = env('STATIC_ROOT', default=root('static'))

https://github.com/reef-technologies/cookiecutter-rt-django/blob/master/%7B%7Bcookiecutter.repostory_name%7D%7D/app/src/%7B%7Bcookiecutter.django_project_name%7D%7D/settings.py#L184

1.3) docker-compose mounts a separate volume for static files at the same location (/root/src/static):

    volumes:
      - backend-static:/root/src/static

https://github.com/reef-technologies/cookiecutter-rt-django/blob/master/%7B%7Bcookiecutter.repostory_name%7D%7D/envs/prod/docker-compose.yml#L51

1.4) Thus whatever was collected during the build phase is shadowed by the volume mount at startup

1.5) Even if you manually log into the running container and run collectstatic, it won't work for ManifestStaticFilesStorage, which requires everything to be collected before app startup - ongoing changes won't be picked up until a restart

Proposal:
2.1) If we decide to run collectstatic during docker image build, then remove the volume mount from docker-compose. This may not work if some project collects static files to S3 or similar - it's weird to do that at image build time rather than at deploy time.
2.2) If we decide to run collectstatic during deployment, then we should remove it from the image build.

Side note:
3.1) I think ManifestStaticFilesStorage is a nice thing for efficient and reliable versioning of static files and should be included in the cookiecutter template.
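For reference, enabling it is a one-line setting; this is the pre-Django-4.2 form (Django 4.2+ configures the same backend through the STORAGES dict instead):

```python
# settings.py sketch: serve static files under hashed names via Django's
# manifest storage backend, so stale assets are never served after a deploy.
STATICFILES_STORAGE = "django.contrib.staticfiles.storage.ManifestStaticFilesStorage"
```

Note that this makes 2.1/2.2 more pressing: the manifest is written by collectstatic, so it must exist (and match the code) before the app starts.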

deploy.sh should collect static files

docker-compose stop $SERVICES
docker-compose up -d

docker-compose exec -T app python manage.py collectstatic --no-input --clear   # <-----
docker-compose exec -T app python manage.py wait_for_database
docker-compose exec -T app python manage.py migrate

Auto-install b2 and sentry-cli

Currently, periodic backups optionally require sentry-cli (for error reporting) and b2 (for uploading backups to B2). I think we should add installation of these two CLI tools to setup-prod.sh. It's very easy to forget to install them and end up with a broken backup process.
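Until that lands, the backup process could at least fail fast when the optional tools are missing; a minimal stdlib-only sketch (function name is hypothetical):

```python
# Sketch: detect missing optional CLI tools before starting a backup, so the
# process fails loudly instead of silently skipping upload/error reporting.
import shutil


def missing_tools(tools=("b2", "sentry-cli")) -> list:
    """Return the subset of tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]
```

Usage: call it at the top of the backup entry point and abort with a clear message if the returned list is non-empty.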

Use pip-compile to manage requirements

It makes it obvious what the "real" requirements are and makes them easy to manage. It's my standard approach in all projects and seems better than requirements_freeze.py (because it's widely used and maintained by other people).

https://github.com/jazzband/pip-tools

Example

# requirements.in
django

run pip-compile requirements.in:

$ pip-compile requirements.in
$ cat requirements.txt
#
# This file is autogenerated by pip-compile
# To update, run:
#
#    pip-compile requirements.in
#
asgiref==3.2.3    # via django
django==3.0.3    # via -r requirements.in
pytz==2019.3    # via django
sqlparse==0.3.0    # via django

It will produce requirements.txt, with all the Django dependencies
(and all underlying dependencies) pinned.

Celery does not respect task expiry param

CELERY_TASK_RESULT_EXPIRES = int(dt.timedelta(days=7).total_seconds())

To verify:

from <>.celery import app
app.conf.result_expires
# expected to return 604800 (int)
# but returns datetime.timedelta(1)
# datetime.timedelta(1).total_seconds() == 86400 (24h, which is the default) 

Suggested fix:

Specify CELERY_RESULT_EXPIRES instead, which stores the correct value
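Assuming the usual app.config_from_object("django.conf:settings", namespace="CELERY") setup, celery strips the CELERY_ prefix and lowercases the rest, so the Django setting has to map onto celery's result_expires key:

```python
# settings.py sketch: CELERY_RESULT_EXPIRES -> app.conf.result_expires.
# CELERY_TASK_RESULT_EXPIRES would map to task_result_expires, which celery
# does not read, so the timedelta default (1 day) silently stays in effect.
import datetime as dt

CELERY_RESULT_EXPIRES = int(dt.timedelta(days=7).total_seconds())
print(CELERY_RESULT_EXPIRES)  # 604800
```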

Separate nginx image is annoying

I don't like having nginx outside of the project repo. I think it's a bad idea:

  1. The reason to keep it outside the project is that we can update it in cookiecutter and thus update every project automatically, but we can update it via cruft update as well - manually, yes, but with more control over the process
  2. It is rather ANNOYING that I don't have control over the nginx image; for example, I've been hit twice already with the letsencrypt configuration: there are cases when I want to intervene in the letsencrypt validation process and I cannot, because
    nginx:
    image: 'ghcr.io/reef-technologies/nginx-rt:v1.0.0'

Add automatic code checks / formatting

I usually use a tool called pre-commit: https://pre-commit.com/

It works great as an addition to CI/CD. Just before making a commit, pre-commit automatically runs all changed files through a series of fast checks, such as flake8, black, isort, etc. It usually takes less than a second and makes it possible to eliminate style-related discussions during code review.

My usual configuration file:

repos:
  - repo: https://github.com/ambv/black
    rev: 20.8b1
    hooks:
      - id: black
        language_version: python3.8
        args: ['--line-length=120']
  - repo: https://github.com/asottile/seed-isort-config
    rev: v1.6.0
    hooks:
      - id: seed-isort-config
        language_version: python3.8
        args: ['--application-directories=src/backend']
  - repo: https://github.com/pre-commit/mirrors-isort
    rev: v4.3.4
    hooks:
      - id: isort
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.1.0
    hooks:
      - id: end-of-file-fixer
      - id: check-merge-conflict
      - id: mixed-line-ending
      - id: trailing-whitespace
      - id: check-added-large-files
        args: ['--maxkb=1024']
      - id: flake8
        language_version: python3

Upgrade components

Components to upgrade:

  • python
  • celery
  • postgres (support for 9.6 will end in September 2021!)
  • etc.

Backup script partially fails if env var not set

When the EMAIL_TARGET env var is not set, running

if [ -n "${EMAIL_HOST}" ] && [ -n "${EMAIL_TARGET}" ]; then

fails with:

bin/backup-db.sh: line 36: EMAIL_TARGET: unbound variable

One possible solution is to change #!/bin/bash -eu to #!/bin/bash -e; alternatively, expand the variable with a default, e.g. "${EMAIL_TARGET:-}", which is safe under -u and keeps the strict mode for the rest of the script.

Implement proper continuous delivery and deployment tools

It could be invoke + fabric2. More advanced mechanisms, such as Ansible or Salt, are in our opinion not needed.

Requirements:

  • The deployment should allow pushing the image to the server without having container registry.
  • There should be no need to have a source code on the VM.
  • It should work on freshly installed Ubuntu LTS systems. All dependencies should be automatically installed if needed.
  • It should allow deploying manually and via GitHub Actions by using the same tools.
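The first two requirements (no registry, no source on the VM) can be met by streaming docker save over ssh into docker load; a hypothetical sketch of the command such a tool would run (image and host names are illustrative):

```python
# Hypothetical sketch: ship an image to a server with no registry by piping
# `docker save` through ssh into `docker load`. An invoke/fabric task could
# simply execute the pipeline this function builds.
import shlex


def push_image_cmd(image: str, host: str) -> str:
    """Build the shell pipeline that transfers `image` to `host`."""
    return f"docker save {shlex.quote(image)} | ssh {shlex.quote(host)} docker load"


print(push_image_cmd("myapp:latest", "deploy@prod"))
# docker save myapp:latest | ssh deploy@prod docker load
```

The same command works from a laptop and from a GitHub Actions runner, which satisfies the "same tools" requirement.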
