
clowdhaus / eks-reference-architecture

85 stars · 4 watchers · 26 forks · 176 KB

Reference EKS architectures using https://github.com/terraform-aws-modules/terraform-aws-eks

License: Apache License 2.0

HCL 85.99% Jupyter Notebook 14.01%
terraform aws-eks aws-eks-cluster kubernetes-cluster infrastructure-as-code architectural-patterns

eks-reference-architecture's Introduction

EKS Reference Architecture

The configurations captured within this project are intended to serve as references for others to build upon. If you are looking to get started with one or more of the following architectures, they can help you quickly grasp what is required to achieve the desired outcome.

The configurations provided are not intended to be consumed as a Terraform module, nor are they necessarily designed to be "production ready". They can be greatly expanded to get closer to production ready, but are intentionally kept narrow, focused on the specific settings that support the respective reference architecture.

License

Apache-2.0 Licensed. See LICENSE.

eks-reference-architecture's People

Contributors

bryantbiggs · good92 · renovate[bot]


eks-reference-architecture's Issues

Basically, thank you!

Hi

@bryantbiggs I stumbled across your private cluster sample not long ago and was able to use it as a reference point to great effect.

As such, thank you for your effort in sharing knowledge in the AWS/Terraform space.

And keep going!

Error - error: CreateFile /proc/self/fd/63 during coredns deletion

The Terraform code fails while the null_resource is being executed. The problem is with shell substitution/expansion of the line below:
kubectl --namespace kube-system delete deployment coredns --kubeconfig <(echo $KUBECONFIG | base64 --decode)

│ Error: local-exec provisioner error
│
│   with null_resource.remove_default_coredns_deployment,
│   on helm-provisioner.tf line 16, in resource "null_resource" "remove_default_coredns_deployment":
│   16: provisioner "local-exec" {
│
│ Error running command 'kubectl --namespace kube-system delete deployment
│ coredns --kubeconfig <(echo $KUBECONFIG | base64 -d)
│ ': exit status 1. Output: error: CreateFile /proc/self/fd/63: The system
│ cannot find the path specified.

It would be nice if this command could be modified.

BR,
Sandy
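
A possible workaround, sketched here rather than taken from the repository, is to avoid process substitution altogether and write the decoded kubeconfig to a temporary file; this sidesteps the /proc/self/fd path that fails on the reporter's platform. The temporary-file handling below is illustrative only:

# Hypothetical replacement for the provisioner command: decode the kubeconfig
# into a temporary file instead of relying on <(...) process substitution.
KUBECONFIG_FILE="$(mktemp)"
trap 'rm -f "$KUBECONFIG_FILE"' EXIT
echo "$KUBECONFIG" | base64 --decode > "$KUBECONFIG_FILE"
kubectl --namespace kube-system delete deployment coredns --kubeconfig "$KUBECONFIG_FILE"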

fargate-scheduler: Your AWS account is currently blocked and thus cannot launch any Fargate pods

Describe the bug

I followed this example and I am stuck with the following status:

Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  2m59s  fargate-scheduler  Your AWS account is currently blocked and thus cannot launch any Fargate pods

To Reproduce
Steps to reproduce the behavior:

  1. Navigate to the serverless tutorial
  2. Run it
  3. See error

Code

terraform {
  required_version = "~> 1.2.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.21.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~>2.12.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.5" # "~>2.6.0"
    }
    null = {
      source  = "hashicorp/null"
      version = ">= 3.0" # "~>3.1.0"
    }
  }
}


provider "aws" {
  profile = "syncifyEKS-terraform-admin"
  region  = local.region

  default_tags {
    tags = {
      Environment = "Staging"
      Owner       = "BT-Compliance"
      Terraform   = "True"
    }
  }
}

#
# Housekeeping
#

locals {
  project_name    = "syncify-dev"
  cluster_name    = "${local.project_name}-eks-cluster"
  cluster_version = "1.22"
  region          = "us-west-1"
}


/*
The following 2 data resources are used to get around the fact that we have to wait
for the EKS cluster to be initialised before we can attempt to authenticate.
*/

data "aws_eks_cluster" "default" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "default" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.default.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.default.token
  }
}
#############################################################################################
#############################################################################################

# Create EKS Cluster
#############################################################################################
#############################################################################################
# Create VPC for EKS Cluster
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.14.2"

  name = local.cluster_name
  cidr = "10.0.0.0/16"

  azs             = ["${local.region}a", "${local.region}b", "${local.region}c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]     #, "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"] #, "10.0.103.0/24"]

  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false

  manage_default_network_acl    = true
  default_network_acl_tags      = { Name = "${local.cluster_name}-default" }
  manage_default_route_table    = true
  default_route_table_tags      = { Name = "${local.cluster_name}-default" }
  manage_default_security_group = true
  default_security_group_tags   = { Name = "${local.cluster_name}-default" }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = 1
  }
}



module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.26.3"

  cluster_name    = local.cluster_name
  cluster_version = local.cluster_version

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  cluster_addons = {
    kube-proxy = {
      addon_version     = data.aws_eks_addon_version.this["kube-proxy"].version
      resolve_conflicts = "OVERWRITE"
    }
    vpc-cni = {
      addon_version     = data.aws_eks_addon_version.this["vpc-cni"].version
      resolve_conflicts = "OVERWRITE"
    }
  }

  # manage_aws_auth_configmap = true

  fargate_profiles = {
    default = {
      name = "default"
      selectors = [
        { namespace = "default" }
      ]
    }

    kube_system = {
      name = "kube-system"
      selectors = [
        { namespace = "kube-system" }
      ]
    }
  }
}

data "aws_eks_addon_version" "this" {
  for_each = toset(["coredns", "kube-proxy", "vpc-cni"])

  addon_name         = each.value
  kubernetes_version = module.eks.cluster_version
  most_recent        = true
}


################################################################################
# Modify EKS CoreDNS Deployment
################################################################################

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_id
}

locals {
  kubeconfig = yamlencode({
    apiVersion      = "v1"
    kind            = "Config"
    current-context = "terraform"
    clusters = [{
      name = module.eks.cluster_id
      cluster = {
        certificate-authority-data = module.eks.cluster_certificate_authority_data
        server                     = module.eks.cluster_endpoint
      }
    }]
    contexts = [{
      name = "terraform"
      context = {
        cluster = module.eks.cluster_id
        user    = "terraform"
      }
    }]
    users = [{
      name = "terraform"
      user = {
        token = data.aws_eks_cluster_auth.this.token
      }
    }]
  })
}

# Separate resource so that this is only ever executed once
resource "null_resource" "remove_default_coredns_deployment" {
  triggers = {}

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = base64encode(local.kubeconfig)
    }

    # We are removing the deployment provided by the EKS service and replacing it through the self-managed CoreDNS Helm addon
    # However, we are maintaining the existing kube-dns service and annotating it for Helm to assume control
    command = <<-EOT
      kubectl --namespace kube-system delete deployment coredns --kubeconfig <(echo $KUBECONFIG | base64 --decode)
    EOT
  }
}

resource "null_resource" "modify_kube_dns" {
  triggers = {}

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = base64encode(local.kubeconfig)
    }

    # We are maintaining the existing kube-dns service and annotating it for Helm to assume control
    command = <<-EOT
      echo "Setting implicit dependency on ${module.eks.fargate_profiles["kube_system"].fargate_profile_pod_execution_role_arn}"
      kubectl --namespace kube-system annotate --overwrite service kube-dns meta.helm.sh/release-name=coredns --kubeconfig <(echo $KUBECONFIG | base64 --decode)
      kubectl --namespace kube-system annotate --overwrite service kube-dns meta.helm.sh/release-namespace=kube-system --kubeconfig <(echo $KUBECONFIG | base64 --decode)
      kubectl --namespace kube-system label --overwrite service kube-dns app.kubernetes.io/managed-by=Helm --kubeconfig <(echo $KUBECONFIG | base64 --decode)
    EOT
  }

  depends_on = [
    null_resource.remove_default_coredns_deployment
  ]
}

################################################################################
# CoreDNS Helm Chart (self-managed)
################################################################################

resource "helm_release" "coredns" {
  name             = "coredns"
  namespace        = "kube-system"
  create_namespace = false
  description      = "CoreDNS is a DNS server that chains plugins and provides Kubernetes DNS Services"
  chart            = "coredns"
  version          = "1.19.4"
  repository       = "https://coredns.github.io/helm"
  force_update     = true
  recreate_pods    = true


  # For EKS image repositories https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html
  values = [
    <<-EOT
      image:
        repository: 602401143452.dkr.ecr.us-west-1.amazonaws.com/eks/coredns  
        tag: ${data.aws_eks_addon_version.this["coredns"].version}
      deployment:
        name: coredns
        annotations:
          eks.amazonaws.com/compute-type: fargate
      service:
        name: kube-dns
        annotations:
          eks.amazonaws.com/compute-type: fargate
      podAnnotations:
        eks.amazonaws.com/compute-type: fargate
      EOT
  ]
  depends_on = [
    null_resource.modify_kube_dns
  ]
}




Expected behavior
The CoreDNS pods should have been scheduled.

Any help would be greatly appreciated.
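
As a rough way to confirm the expected behavior once the account block is lifted (a suggestion, not part of the original issue), the scheduling of the CoreDNS pods can be checked directly; the k8s-app=kube-dns label is the standard CoreDNS selector on EKS:

# List the CoreDNS pods; on Fargate the NODE column should show fargate-ip-... nodes.
kubectl --namespace kube-system get pods -l k8s-app=kube-dns -o wide

# Inspect scheduling events for a pod that remains Pending (replace <pod-name>).
kubectl --namespace kube-system describe pod <pod-name>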

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.


Detected dependencies

github-actions
.github/workflows/pre-commit.yml
  • actions/checkout v4
  • clowdhaus/terraform-composite-actions v1.9.0
  • actions/checkout v4
  • clowdhaus/terraform-min-max v1.3.0
  • clowdhaus/terraform-composite-actions v1.9.0
  • clowdhaus/terraform-composite-actions v1.9.0
  • actions/checkout v4
  • clowdhaus/terraform-min-max v1.3.0
  • clowdhaus/terraform-composite-actions v1.9.0
terraform
cluster-autoscaler/add-ons.tf
  • aws-ia/eks-blueprints-addons/aws ~> 1.16
cluster-autoscaler/eks.tf
  • terraform-aws-modules/eks/aws ~> 20.5
cluster-autoscaler/main.tf
  • aws >= 5.38
  • helm >= 2.7
  • hashicorp/terraform ~> 1.3
  • clowdhaus/tags/aws ~> 1.0
cluster-autoscaler/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
does-not-work/eks-mng-ssm-param/eks.tf
  • terraform-aws-modules/eks/aws ~> 20.0
does-not-work/eks-mng-ssm-param/main.tf
  • aws ~> 5.0
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
does-not-work/eks-mng-ssm-param/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
eks-managed-node-group/eks_al2.tf
  • terraform-aws-modules/eks/aws ~> 20.0
eks-managed-node-group/eks_bottlerocket.tf
  • terraform-aws-modules/eks/aws ~> 20.0
eks-managed-node-group/eks_default.tf
  • terraform-aws-modules/eks/aws ~> 20.0
eks-managed-node-group/main.tf
  • aws ~> 5.0
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
eks-managed-node-group/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
eks-mng-gpu/eks.tf
  • nvidia-device-plugin 0.14.5
  • terraform-aws-modules/eks/aws ~> 20.0
eks-mng-gpu/main.tf
  • aws ~> 5.38
  • helm >= 2.7
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
eks-mng-gpu/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
ephemeral-vol-test/eks.tf
  • terraform-aws-modules/iam/aws ~> 5.20
  • terraform-aws-modules/eks/aws ~> 20.0
ephemeral-vol-test/main.tf
  • aws ~> 5.0
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
ephemeral-vol-test/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
inferentia/add-ons.tf
  • aws-ia/eks-blueprints-addons/aws ~> 1.16
inferentia/eks.tf
  • terraform-aws-modules/eks/aws ~> 20.4
inferentia/main.tf
  • aws >= 5.38
  • helm >= 2.7
  • http >= 3.3
  • kubectl >= 2.0
  • hashicorp/terraform ~> 1.3
  • clowdhaus/tags/aws ~> 1.0
inferentia/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
ipv4-prefix-delegation/eks.tf
  • terraform-aws-modules/eks/aws ~> 20.0
ipv4-prefix-delegation/main.tf
  • aws ~> 5.0
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
ipv4-prefix-delegation/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
ipvs/eks.tf
  • terraform-aws-modules/eks/aws ~> 20.0
ipvs/main.tf
  • aws ~> 5.0
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
ipvs/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
karpenter-gpu/add-ons.tf
  • aws-ia/eks-blueprints-addons/aws ~> 1.0
karpenter-gpu/eks.tf
  • terraform-aws-modules/iam/aws ~> 5.20
  • terraform-aws-modules/eks/aws ~> 19.15
karpenter-gpu/main.tf
  • aws >= 5.0
  • helm >= 2.6
  • kubectl >= 2.0.0
  • kubernetes >= 2.20
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
karpenter-gpu/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
karpenter/alb_controller.tf
  • aws-load-balancer-controller 1.7.1
  • terraform-aws-modules/iam/aws ~> 5.20
karpenter/eks.tf
  • terraform-aws-modules/eks/aws ~> 20.0
karpenter/karpenter.tf
  • karpenter v0.34.1
  • terraform-aws-modules/eks/aws ~> 20.0
karpenter/main.tf
  • aws ~> 5.0
  • helm ~> 2.6
  • kubectl >= 2.0
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
karpenter/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
private/eks.tf
  • terraform-aws-modules/kms/aws ~> 2.0
  • terraform-aws-modules/eks/aws ~> 20.0
private/main.tf
  • aws ~> 5.0
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
private/vpc.tf
  • terraform-aws-modules/security-group/aws ~> 5.0
  • terraform-aws-modules/vpc/aws ~> 5.0
  • terraform-aws-modules/vpc/aws ~> 5.0
self-managed-node-group/eks_default.tf
  • terraform-aws-modules/eks/aws ~> 20.0
self-managed-node-group/main.tf
  • aws ~> 5.0
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
self-managed-node-group/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
serverless/eks.tf
  • terraform-aws-modules/eks/aws ~> 20.0
serverless/main.tf
  • aws ~> 5.0
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
serverless/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
weaviate/add-ons.tf
  • aws-ia/eks-blueprints-addons/aws ~> 1.0
weaviate/eks.tf
  • terraform-aws-modules/iam/aws ~> 5.20
  • terraform-aws-modules/eks/aws ~> 20.0
weaviate/main.tf
  • aws >= 5.0
  • helm ~> 2.10
  • kubernetes >= 2.20
  • null >= 3.0
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
weaviate/sagemaker.tf
  • terraform-aws-modules/s3-bucket/aws ~> 4.0
  • terraform-aws-modules/security-group/aws ~> 5.0
weaviate/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0
  • terraform-aws-modules/vpc/aws ~> 5.0
windows/eks.tf
  • terraform-aws-modules/eks/aws ~> 20.0
windows/main.tf
  • aws ~> 5.0
  • hashicorp/terraform ~> 1.0
  • clowdhaus/tags/aws ~> 1.0
windows/vpc.tf
  • terraform-aws-modules/vpc/aws ~> 5.0

