This project forked from verticalrelevance/container-orchestration-foundation-blueprint

Container-Orchestration-Foundation-Blueprint

The Container Orchestration Foundation Blueprint is an AWS CDK application that sets up an EKS cluster, including all of the underlying resources, along with AWS CodePipeline, CodeBuild, and ECR to build and host the container images. The cluster is created with the CDK's EKS Blueprint, which follows AWS best practices for managing EKS. ArgoCD is used to deploy the container images to the EKS cluster. A React frontend and a Java Spring backend backed by RDS MySQL are provided, along with Helm charts for each. Together, the frontend, backend, and database comprise a three-tier polling application. The app is meant to be hosted at https://polling.yourdomain.com, where yourdomain.com is the name of your Route53 hosted zone.

The following EKS addons are installed:

  • Kube Proxy - Enables network communication to pods
  • Core DNS - Provides DNS resolution for all pods in the cluster
  • VPC CNI - Adds support for the VPC CNI plugin, which allows pods to have the same IP address inside the pod as they do on the VPC network
  • Secrets Store - Used for mounting secrets in the pods
  • ArgoCD - A declarative GitOps CI/CD tool for Kubernetes. This addon will be used to deploy the helm charts to the cluster
  • Metrics Server - Needed for Horizontal Pod Autoscaler
  • AWS Load Balancer Controller - Allows provisioning of AWS Application Load Balancers through ingress resources
  • External DNS - Creates Route53 records by using annotations on the ingress resources. This will configure the domains to automatically point at the load balancers
  • Karpenter - Provisions and removes nodes automatically when pods are created or deleted
  • AWS for Fluent Bit - Configures logging with CloudWatch Logs
  • Container Insights - Installs the AWS Distro for Open Telemetry collector and populates CloudWatch Container Insights with metrics. The metrics and dashboards can be viewed through the CloudWatch console.


Prerequisites

  1. Git version 2.28.0 or later installed locally
  2. Homebrew installed on your local machine, with nvm installed via Homebrew and loaded in your shell profile:
    export NVM_DIR="$HOME/.nvm"
    [ -s "$HOMEBREW_PREFIX/opt/nvm/nvm.sh" ] && \. "$HOMEBREW_PREFIX/opt/nvm/nvm.sh" # This loads nvm
    [ -s "$HOMEBREW_PREFIX/opt/nvm/etc/bash_completion.d/nvm" ] && \. "$HOMEBREW_PREFIX/opt/nvm/etc/bash_completion.d/nvm" # This loads nvm bash_completion
  3. A GitHub repository with an SSH Key Pair
  4. An AWS Account
  5. A Route53 hosted zone registered in the AWS account. See Registering and managing domains using Amazon Route 53 in the Amazon Route 53 Developer Guide for details. The following records must be available: polling, polling-api, argo.
  6. A Platform team user role. This is a role that your team will be allowed to assume in order to administer the cluster.
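Prerequisites like the Git version can be checked locally before starting. A quick sketch, where the `version_ge` helper is hypothetical and not part of the repository:

```shell
# Sketch of a local prerequisite check; version_ge is a hypothetical
# helper that relies on `sort -V` (GNU coreutils / BSD sort).
version_ge() {
  # True if $1 >= $2 when compared as dotted version numbers.
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

REQUIRED_GIT=2.28.0
if command -v git >/dev/null 2>&1; then
  INSTALLED_GIT=$(git --version | awk '{print $3}')
  if version_ge "$INSTALLED_GIT" "$REQUIRED_GIT"; then
    echo "git $INSTALLED_GIT: ok"
  else
    echo "git $INSTALLED_GIT is older than $REQUIRED_GIT"
  fi
else
  echo "git not found"
fi
```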

Create a GitHub repository and SSH Key Pair

Clone the repository and set up a new origin and branch

export GITHUB_OWNER=mycompany # update this with your org or user
git clone git@github.com:VerticalRelevance/Container-Orchestration-Foundation-Blueprint.git
cd Container-Orchestration-Foundation-Blueprint/
git remote remove origin
git remote add origin git@github.com:${GITHUB_OWNER}/Container-Orchestration-Foundation-Blueprint.git
git push -u origin main
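The section title mentions an SSH key pair but no commands are shown. A minimal sketch for generating one (ed25519, matching the default private key path referenced in the Configuration section) and printing the public key to add to GitHub:

```shell
# Generate an ed25519 key pair if one does not already exist. The
# default path matches the blueprint's SSH_PRIVATE_KEY_PATH default.
# -N '' creates the key without a passphrase; use one in practice.
KEY_PATH="${SSH_PRIVATE_KEY_PATH:-$HOME/.ssh/id_ed25519}"
mkdir -p "$(dirname "$KEY_PATH")"
if [ ! -f "$KEY_PATH" ]; then
  ssh-keygen -t ed25519 -C "container-orchestration-blueprint" -f "$KEY_PATH" -N ""
fi
# Add the public key to GitHub (Settings > SSH and GPG keys):
cat "${KEY_PATH}.pub"
```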

AWS Credentials

AWS credentials must be configured for the target account and region when deploying the CDK. The project uses the AWS SDK under the hood to gather credentials, so you can export AWS_PROFILE=<profile_name> if you have an AWS profile configured with the CLI. You can also set the region by running aws configure set region <region-name>.

If you experience errors while running the make commands such as "unresolved tokens", please be sure to check your ~/.aws/credentials and ~/.aws/config files for potential configuration issues.
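As a sketch of that kind of check, the hypothetical helper below flags duplicate profile sections in an ini-style AWS config file, one common source of confusing credential errors:

```shell
# Hypothetical helper: print any section header that appears more than
# once in an ini-style file such as ~/.aws/credentials or ~/.aws/config.
find_duplicate_profiles() {
  grep -E '^\[[^]]+\]' "$1" | sort | uniq -d
}

for f in "$HOME/.aws/credentials" "$HOME/.aws/config"; do
  if [ -f "$f" ]; then
    dups=$(find_duplicate_profiles "$f")
    if [ -n "$dups" ]; then
      echo "duplicate sections in $f: $dups"
    fi
  fi
done
```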

Configuration

Configuration is done through environment variables. The .env file in the root of this repository will be included when running make commands. Crucially, the HOSTED_ZONE_NAME and PLATFORM_TEAM_USER_ROLE_ARN variables must be specified. Optionally, SSH_PRIVATE_KEY_PATH can be specified.

Example .env file

echo "HOSTED_ZONE_NAME=<HOSTED_ZONE_NAME>
PLATFORM_TEAM_USER_ROLE_ARN=<PLATFORM_TEAM_USER_ROLE_ARN>" \
> .env

To override the default SSH private key path, the SSH_PRIVATE_KEY_PATH environment variable can be set. The default value is ~/.ssh/id_ed25519.

echo "SSH_PRIVATE_KEY_PATH=~/.ssh/id_rsa" >> .env
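A hypothetical sanity check before running make is to confirm the required variables are actually present in the .env file:

```shell
# Hypothetical check: verify the required variables exist in a .env
# file; prints "ok" or names the first missing variable.
check_env_file() {
  for var in HOSTED_ZONE_NAME PLATFORM_TEAM_USER_ROLE_ARN; do
    if ! grep -q "^${var}=" "$1"; then
      echo "missing: $var"
      return 1
    fi
  done
  echo "ok"
}
```

Usage: `check_env_file .env` before running any make targets.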

Install and run CDK

make

This step:

  • Installs the homebrew dependencies, if needed.
  • Runs the CDK deploy and generates the output JSON file (which is needed for the next step)
    • Provisions the Wildcard ACM certificate and CodeCommit repo
    • Provisions the VPC, EKS Cluster, EKS Addons
    • Provisions the RDS Database Cluster
    • Provisions the Pipelines (spring-frontend and spring-backend) and ECR Repositories
  • Provisions the Karpenter template (see https://karpenter.sh/docs/getting-started/getting-started-with-karpenter/#5-create-provisioner)
  • Pushes the application code to the CodeCommit repos, which triggers the CodePipelines (so that the images are built and pushed into ECR)
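The output JSON file can also be inspected by hand. Assuming the flat `"OutputName": "value"` structure that `cdk deploy --outputs-file` writes, a dependency-free sketch for pulling out a single value (the stack and key names below are hypothetical; jq is more robust in practice):

```shell
# Sketch: extract one output value from a CDK outputs file.
# get_output <outputs-file> <output-key> -- prints the first match.
get_output() {
  sed -n "s/.*\"$2\"[[:space:]]*:[[:space:]]*\"\([^\"]*\)\".*/\1/p" "$1" | head -n1
}
```

Usage: `get_output cdk-outputs.json ClusterName` (key name hypothetical).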

Update the values files for argocd

This step uses the outputs of the CDK deployment to configure the Helm chart values needed for deployment. The values files are automatically parsed, modified, and then written back out in place.

make update-values
git add charts/
git commit -m "Update values files"
git push
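Conceptually, make update-values rewrites chart values in place from the CDK outputs. A sketch of that kind of in-place update, with hypothetical YAML keys and values:

```shell
# Sketch: rewrite a top-level-or-nested scalar value in a values file,
# preserving indentation. The keys and values are hypothetical.
# set_chart_value <values-file> <yaml-key> <new-value>
set_chart_value() {
  sed -i.bak "s|^\([[:space:]]*$2:\).*|\1 $3|" "$1" && rm -f "$1.bak"
}
```

Usage: e.g. `set_chart_value charts/spring-frontend/values.yaml repository "$ECR_REPO_URI"` (variable name hypothetical).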

Setup Argo Proxy

make argo-proxy

This command will not complete and should be left open. Use another terminal while it is running to finish the dashboard and app installation.

Install Dashboard and Apps

With argo-proxy running:

make dashboard # optional
make spring-apps

The make dashboard command installs the Kubernetes dashboard application from this repository. It can be accessed by running kubectl proxy (see the accessing the Dashboard UI documentation) and then visiting http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ in the browser.

The make spring-apps command installs the polling-app, which is composed of two Helm charts: spring-frontend and spring-backend. Each chart is configured with an ingress (which in turn creates an Application Load Balancer), annotations to use the wildcard ACM certificate created in the previous CDK step, and an external-dns annotation for automatic Route53 record configuration.

argocd UI

  • https://localhost:8080
  • username: admin
  • password: kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Cleanup

In order to clean up we must first delete the argo applications and then tear down the cluster. Therefore, we should run make argo-destroy before make destroy.

With argo-proxy running:

make argo-destroy

To destroy the CDK Stacks:

make destroy

This will:

  • Delete the argocd namespace from the cluster

    • If the namespace is stuck in a terminating state, there may be resources failing to delete from it; in that case the following commands are helpful:

      # List the hanging resources
      kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n argocd
      
      # Remove the finalizers from the Application resources in the argocd namespace so they can be deleted
      kubectl -n argocd get application -o=jsonpath='{.items[*].metadata.name}' | xargs -n 1 -I {} kubectl patch -n argocd application {} --type=json -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
  • Remove any remaining images from the ECR repositories

  • Run cdk destroy to delete the CloudFormation Stacks

The make destroy command can be run multiple times, in case the CloudFormation stacks fail to delete. Since there are known issues with CloudFormation stacks getting stuck with ArgoCD, we delete the argocd namespace to remove ArgoCD from the cluster before destroying the cluster.
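Since make destroy is safe to re-run, a small hypothetical retry wrapper (assuming the failures are transient and clear up on a repeat attempt) can automate the re-runs:

```shell
# Sketch: retry a command up to N times.
# retry <attempts> <command...>
retry() {
  attempts=$1; shift
  n=1
  while ! "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "failed after $attempts attempts" >&2
      return 1
    fi
    n=$((n + 1))
    echo "retrying ($n/$attempts)..." >&2
  done
}
```

Usage: `retry 3 make destroy`.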
