
Comments (2)

jay-johnson commented on May 29, 2024

Wow...just wow, and thanks for the really kind words @SpencerPinegar. I hope all this helps you build stuff faster. To answer your question: some of this is pretty opinionated and will likely keep changing, because it's how I develop and try to reduce overhead when building django/celery/python stacks. This repo and response may or may not be very useful for the django app/project deployment footprint you're looking for.

Here's my generalized deployment workflow and some background on how I iterated to get here:

Background

This django application is the latest version of an evolving django project that started many years ago. This newest django 2.x app is an upgrade from the original django project: https://github.com/jay-johnson/docker-django-nginx-slack-sphinx. About a year ago I got tired of worrying about python 3 support, so I started from Jose Padilla's django 2.0+ template repo: https://github.com/jpadilla/django-project-template and rolled it into the AntiNex Django REST API (which works with Django and Django REST Framework).

Application Deployment Strategy - Docker Containers

I am pretty sold on docker containers for everything, so I always try to containerize as soon as possible to prevent post-launch integration pain on a project/product. Docker also solves a lot of new-user setup issues, which helps when I am working with a team.
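To make "containerize as soon as possible" concrete, a minimal Dockerfile for a Django app in this style might look like the sketch below (the paths, requirements file name, and wsgi module are illustrative assumptions, not this repo's actual layout):

```dockerfile
FROM python:3.7-slim

WORKDIR /opt/app

# install pips first so the layer caches across code-only changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# serve the (hypothetical) project wsgi module with gunicorn
CMD ["gunicorn", "project.wsgi:application", "--bind", "0.0.0.0:8080"]
```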

Automate Container Builds - Travis

I try to focus on getting an initial small, baseline feature set functional outside of docker first, and then roll the first Dockerfile. I'm still on GitHub for the moment because I do not know how / have not taken the time to port the travis.yml file that handles the docker builds over to GitLab's tools (or something like Travis on there). Another note: this repo's travis.yml file does not handle PR and PR-merge webhooks as gracefully as the one I'm using on the Stock Analysis Engine's travis.yml... I should fix that, haha.
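The build-and-push automation in a travis.yml could be sketched roughly like this (the image name, env var names, and branch are placeholders, not this repo's actual config):

```yaml
language: python
services:
  - docker
script:
  - docker build -t myorg/django-app:$TRAVIS_COMMIT .
deploy:
  provider: script
  script: >-
    echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin &&
    docker push myorg/django-app:$TRAVIS_COMMIT
  on:
    branch: master
```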

Phase 1 - Functional

Now that the container images are auto-pushed to Docker Hub (or a private registry), I start on the first full-container integration sandbox environment. I use Docker Compose files for managing my stack locally. I usually start with a general deployment use case like:

Moving components into Docker one at a time

With the confidence that the app is container-ready, I work on putting the full stack together. For django, I focused on getting the layers above, below, and beside the app working together: making sure the app could migrate and write to a Postgres database (with pgAdmin) and use a Redis server, all driven by manage.py and an environment-variable-driven settings.py file.
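An environment-variable-driven settings.py can be sketched like this (the variable names and defaults below are illustrative, not necessarily the keys this repo uses):

```python
import os

# Hypothetical environment-driven Django settings fragment.
# Every env var name and default here is an illustrative assumption.
SECRET_KEY = os.getenv("DJANGO_SECRET_KEY", "dev-only-secret")
DEBUG = os.getenv("DJANGO_DEBUG", "0") == "1"

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.getenv("POSTGRES_DB", "webapp"),
        "USER": os.getenv("POSTGRES_USER", "postgres"),
        "PASSWORD": os.getenv("POSTGRES_PASSWORD", "postgres"),
        "HOST": os.getenv("POSTGRES_HOST", "localhost"),
        "PORT": os.getenv("POSTGRES_PORT", "5432"),
    }
}

REDIS_URL = "redis://{}:{}/0".format(
    os.getenv("REDIS_HOST", "localhost"),
    os.getenv("REDIS_PORT", "6379"),
)
```

The same settings file then works unchanged in compose and Kubernetes; only the environment differs per deployment.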

Phase 2 - Full Stack End to End Integration

I usually consider this phase the cloud-ready milestone, where all the lego blocks are ready for assembly on AWS/GCP/GKE/Azure/on-prem kubernetes. At this point, I am looking at end-to-end validation and running many containers at once (usually on a single host to avoid debugging networking/cluster issues). I still want to be iterating rapidly, so I use docker volumes heavily in a local development sandbox: I mount the repos straight into the containers and build the newest pips/build artifacts on container startup with a start script.
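The volume-mount pattern described above might look like the following compose sketch (service name, image, env file, and script paths are illustrative assumptions):

```yaml
version: "2"
services:
  api:
    image: myorg/django-app:latest
    env_file: ./envs/dev.env
    volumes:
      # mount the live checkout over the code baked into the image
      - .:/opt/app
    # a start script that reinstalls pips, then launches the server,
    # so each container start picks up the newest local changes
    command: /opt/app/docker/start.sh
    ports:
      - "8080:8080"
```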

For this phase, I am looking at docker compose files that solve these general deployment use cases:

Phase 3 - Scheduler Integration - Kubernetes and OpenShift

These days, I cannot believe how easy Kubernetes and OpenShift make running a modern stack.

For years, I was chasing the latest docker swarm patterns and spent a lot of time keeping the CaaS layer stable. As a comparison: I've spent a total of maybe 5 minutes of my own time fixing Kubernetes on 1.12.1 in the past week. The last backup I did was 13 days ago (note: I manually restarted Minio to test the Ceph volume persistence). Docker swarm required way more daily tuning and monitoring than k8s has (so far).

How long has it been running?

```
kubectl get pods
NAME                                READY     STATUS      RESTARTS   AGE
api-6794f879df-fdxsx                1/1       Running     0          13d
api-6794f879df-vdfkr                1/1       Running     0          13d
core-7b478cbb6b-vdfmw               1/1       Running     0          13d
jupyter-5bd448667f-2tnq4            1/1       Running     0          13d
minio-deployment-6ddbf89585-925q7   1/1       Running     0          11d
nginx-k2l8t                         1/1       Running     0          13d
nginx-lq9w6                         1/1       Running     0          13d
nginx-nhlxv                         1/1       Running     0          13d
pgadmin4-http                       1/1       Running     0          13d
primary                             1/1       Running     0          13d
redis-master-0                      1/1       Running     0          13d
redis-metrics-594dd54459-2qq4z      1/1       Running     0          13d
redis-slave-5f66458b7d-fjg9g        1/1       Running     0          13d
redis-slave-5f66458b7d-md9f7        1/1       Running     0          13d
redis-slave-5f66458b7d-nvplh        1/1       Running     0          13d
sa-dataset-collector-8nzgp          0/1       Completed   0          6h21m
sa-engine-57656d9bc6-gtfqj          1/1       Running     0          6h22m
splunk-7c9c7bbb59-5n59r             1/1       Running     0          13d
worker-55f9d4fd74-89gsd             1/1       Running     0          13d
```

I've been developing new jobs and engine deployments each day on this Kubernetes cluster. Nothing has restarted due to a failure that wasn't me testing stuff.

Infrastructure

Once you get Kubernetes/OpenShift running your stack, you can easily run on any cloud provider with a Kubernetes/OCP offering. I am tired of always paying for expensive clouds, so I recently moved to a home server. Note: my home cluster is not very fault tolerant at this point; it is just a single Dell r620 (32 cores, 128 GB RAM) with a 1 TB drive running the cluster's 3 VMs. The bare metal server runs Ubuntu 18.04 with KVM managing the 3 Kubernetes CentOS 7 VMs. Each k8s VM runs CentOS 7 with about 80 GB of disk, 6 CPUs, and 30 GB of RAM. The RAM is pretty necessary if you're planning on doing any kind of AI work, and it's definitely more than I've ever had without burning a serious hole in my AWS budget. I am also loving having a DNS server (bind9) to host all the Kubernetes Ingresses on real FQDNs across my house.
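Serving those FQDNs from a local bind9 server boils down to an A record per ingress hostname pointing at a cluster node running the nginx ingress pods; a zone-file fragment in that spirit (the hostnames and IP are the example values from this thread, and which node fronts the ingress is an assumption):

```
; illustrative fragment of the example.com zone
api      IN  A  192.168.0.101
jupyter  IN  A  192.168.0.101
minio    IN  A  192.168.0.101
pgadmin  IN  A  192.168.0.101
splunk   IN  A  192.168.0.101
```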

Ingresses available using an external DNS server:

```
kubectl get ingress
NAME                 HOSTS                 ADDRESS   PORTS     AGE
api-ingress          api.example.com                 80, 443   13d
jupyter-ingress      jupyter.example.com             80, 443   13d
minio-ingress        minio.example.com               80, 443   11d
pgadmin-ingress      pgadmin.example.com             80, 443   13d
splunk-web-ingress   splunk.example.com              80, 443   13d
```
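Each of those is a standard Kubernetes Ingress resource; a sketch of what the api-ingress might look like on a 1.12 cluster (the backend service name, port, and TLS secret name are assumptions):

```yaml
apiVersion: extensions/v1beta1   # the Ingress API group available on k8s 1.12
kind: Ingress
metadata:
  name: api-ingress
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: tls-api        # assumed TLS secret name
  rules:
    - host: api.example.com
      http:
        paths:
          - backend:
              serviceName: api   # assumed backing Service
              servicePort: 8080
```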

VM cluster membership:

```
kubectl get nodes -o wide
NAME                  STATUS    ROLES     AGE       VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
master1.example.com   Ready     master    13d       v1.12.1   192.168.0.101   <none>        CentOS Linux 7 (Core)   3.10.0-862.11.6.el7.x86_64   docker://18.6.1
master2.example.com   Ready     <none>    13d       v1.12.1   192.168.0.102   <none>        CentOS Linux 7 (Core)   3.10.0-862.11.6.el7.x86_64   docker://18.6.1
master3.example.com   Ready     <none>    13d       v1.12.1   192.168.0.103   <none>        CentOS Linux 7 (Core)   3.10.0-862.11.6.el7.x86_64   docker://18.6.1
```

Let me know if you have any questions... I love this stuff!

from deploy-to-kubernetes.

SpencerPinegar commented on May 29, 2024

Wow, thanks for the full response Jay; your communication skills are as excellent as your programming abilities! It's exciting to stumble upon a project like this so early, so if there is anything I can do to help, please let me know; I plan on building upon this project either way, and I would love to contribute to the open source community.

Docker
I have an introductory understanding of docker, but my largest headaches have been with the "it works on my machine" issue, so this tech is a breath of fresh air. Is it easier to manage docker images from the docker-compose.yaml file initially, or to initialize them from the CLI?

CI
I am not super familiar with Travis CI, but I have introductory experience with Jenkins; after reading this, I was wondering why you chose Travis over Jenkins? I assume it is because Travis doesn't need a dedicated server. Is Travis a good CI choice for a production application?

Logging
I understand that Splunk is less resource intensive than an ELK stack and that you understood it better from previous projects, but is there any other rationale for choosing Splunk? At work, I am currently implementing ELK logging and deprecating Splunk, so I had to ask.

