aws-ia / terraform-aws-eks-blueprints-addons

Terraform module which provisions addons on Amazon EKS clusters

Home Page: https://aws-ia.github.io/terraform-aws-eks-blueprints-addons/main/

License: Apache License 2.0

amazon-eks eks-addons terraform-module aws aws-eks elastic-kubernetes-service kubernetes terraform

terraform-aws-eks-blueprints-addons's Introduction

Amazon EKS Blueprints Addons

Terraform module to deploy Kubernetes addons on Amazon EKS clusters.

Usage

module "eks_blueprints_addons" {
  source = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.0" #ensure to update this to the latest/desired version

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  eks_addons = {
    aws-ebs-csi-driver = {
      most_recent = true
    }
    coredns = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
  }

  enable_aws_load_balancer_controller    = true
  enable_cluster_proportional_autoscaler = true
  enable_karpenter                       = true
  enable_kube_prometheus_stack           = true
  enable_metrics_server                  = true
  enable_external_dns                    = true
  enable_cert_manager                    = true
  cert_manager_route53_hosted_zone_arns  = ["arn:aws:route53:::hostedzone/XXXXXXXXXXXXX"]

  tags = {
    Environment = "dev"
  }
}

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "my-cluster"
  cluster_version = "1.29"

  ... truncated for brevity
}
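
Beyond the named add-on flags shown above, the module exposes a helm_releases input (see Inputs below) for installing arbitrary Helm charts alongside the managed add-ons. A minimal sketch, assuming each map entry accepts the common helm_release arguments (chart, repository, namespace, create_namespace); the podinfo chart is only an illustrative example:

module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.0"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  helm_releases = {
    # illustrative entry; the chart and repository are not module defaults
    podinfo = {
      chart            = "podinfo"
      repository       = "https://stefanprodan.github.io/podinfo"
      namespace        = "podinfo"
      create_namespace = true
    }
  }
}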

Requirements

Name Version
terraform >= 1.0
aws >= 5.0
helm >= 2.9
kubernetes >= 2.20
time >= 0.9

Providers

Name Version
aws >= 5.0
helm >= 2.9
kubernetes >= 2.20
time >= 0.9

Modules

Name Source Version
argo_events aws-ia/eks-blueprints-addon/aws 1.1.1
argo_rollouts aws-ia/eks-blueprints-addon/aws 1.1.1
argo_workflows aws-ia/eks-blueprints-addon/aws 1.1.1
argocd aws-ia/eks-blueprints-addon/aws 1.1.1
aws_cloudwatch_metrics aws-ia/eks-blueprints-addon/aws 1.1.1
aws_efs_csi_driver aws-ia/eks-blueprints-addon/aws 1.1.1
aws_for_fluentbit aws-ia/eks-blueprints-addon/aws 1.1.1
aws_fsx_csi_driver aws-ia/eks-blueprints-addon/aws 1.1.1
aws_gateway_api_controller aws-ia/eks-blueprints-addon/aws 1.1.1
aws_load_balancer_controller aws-ia/eks-blueprints-addon/aws 1.1.1
aws_node_termination_handler aws-ia/eks-blueprints-addon/aws 1.1.1
aws_node_termination_handler_sqs terraform-aws-modules/sqs/aws 4.0.1
aws_privateca_issuer aws-ia/eks-blueprints-addon/aws 1.1.1
bottlerocket_shadow aws-ia/eks-blueprints-addon/aws ~> 1.1.1
bottlerocket_update_operator aws-ia/eks-blueprints-addon/aws ~> 1.1.1
cert_manager aws-ia/eks-blueprints-addon/aws 1.1.1
cluster_autoscaler aws-ia/eks-blueprints-addon/aws 1.1.1
cluster_proportional_autoscaler aws-ia/eks-blueprints-addon/aws 1.1.1
external_dns aws-ia/eks-blueprints-addon/aws 1.1.1
external_secrets aws-ia/eks-blueprints-addon/aws 1.1.1
gatekeeper aws-ia/eks-blueprints-addon/aws 1.1.1
ingress_nginx aws-ia/eks-blueprints-addon/aws 1.1.1
karpenter aws-ia/eks-blueprints-addon/aws 1.1.1
karpenter_sqs terraform-aws-modules/sqs/aws 4.0.1
kube_prometheus_stack aws-ia/eks-blueprints-addon/aws 1.1.1
metrics_server aws-ia/eks-blueprints-addon/aws 1.1.1
secrets_store_csi_driver aws-ia/eks-blueprints-addon/aws 1.1.1
secrets_store_csi_driver_provider_aws aws-ia/eks-blueprints-addon/aws 1.1.1
velero aws-ia/eks-blueprints-addon/aws 1.1.1
vpa aws-ia/eks-blueprints-addon/aws 1.1.1

Resources

Name Type
aws_autoscaling_group_tag.aws_node_termination_handler resource
aws_autoscaling_lifecycle_hook.aws_node_termination_handler resource
aws_cloudwatch_event_rule.aws_node_termination_handler resource
aws_cloudwatch_event_rule.karpenter resource
aws_cloudwatch_event_target.aws_node_termination_handler resource
aws_cloudwatch_event_target.karpenter resource
aws_cloudwatch_log_group.aws_for_fluentbit resource
aws_cloudwatch_log_group.fargate_fluentbit resource
aws_eks_addon.this resource
aws_iam_instance_profile.karpenter resource
aws_iam_policy.fargate_fluentbit resource
aws_iam_role.karpenter resource
aws_iam_role_policy_attachment.additional resource
aws_iam_role_policy_attachment.karpenter resource
helm_release.this resource
kubernetes_config_map_v1.aws_logging resource
kubernetes_config_map_v1_data.aws_for_fluentbit_containerinsights resource
kubernetes_namespace_v1.aws_observability resource
time_sleep.this resource
aws_caller_identity.current data source
aws_eks_addon_version.this data source
aws_iam_policy_document.aws_efs_csi_driver data source
aws_iam_policy_document.aws_for_fluentbit data source
aws_iam_policy_document.aws_fsx_csi_driver data source
aws_iam_policy_document.aws_gateway_api_controller data source
aws_iam_policy_document.aws_load_balancer_controller data source
aws_iam_policy_document.aws_node_termination_handler data source
aws_iam_policy_document.aws_privateca_issuer data source
aws_iam_policy_document.cert_manager data source
aws_iam_policy_document.cluster_autoscaler data source
aws_iam_policy_document.external_dns data source
aws_iam_policy_document.external_secrets data source
aws_iam_policy_document.fargate_fluentbit data source
aws_iam_policy_document.karpenter data source
aws_iam_policy_document.karpenter_assume_role data source
aws_iam_policy_document.velero data source
aws_partition.current data source
aws_region.current data source

Inputs

Name Description Type Default Required
argo_events Argo Events add-on configuration values any {} no
argo_rollouts Argo Rollouts add-on configuration values any {} no
argo_workflows Argo Workflows add-on configuration values any {} no
argocd ArgoCD add-on configuration values any {} no
aws_cloudwatch_metrics Cloudwatch Metrics add-on configuration values any {} no
aws_efs_csi_driver EFS CSI Driver add-on configuration values any {} no
aws_for_fluentbit AWS Fluentbit add-on configurations any {} no
aws_for_fluentbit_cw_log_group AWS Fluentbit CloudWatch Log Group configurations any {} no
aws_fsx_csi_driver FSX CSI Driver add-on configuration values any {} no
aws_gateway_api_controller AWS Gateway API Controller add-on configuration values any {} no
aws_load_balancer_controller AWS Load Balancer Controller add-on configuration values any {} no
aws_node_termination_handler AWS Node Termination Handler add-on configuration values any {} no
aws_node_termination_handler_asg_arns List of Auto Scaling group ARNs that AWS Node Termination Handler will monitor for EC2 events list(string) [] no
aws_node_termination_handler_sqs AWS Node Termination Handler SQS queue configuration values any {} no
aws_privateca_issuer AWS PCA Issuer add-on configurations any {} no
bottlerocket_shadow Bottlerocket Update Operator CRDs configuration values any {} no
bottlerocket_update_operator Bottlerocket Update Operator add-on configuration values any {} no
cert_manager cert-manager add-on configuration values any {} no
cert_manager_route53_hosted_zone_arns List of Route53 Hosted Zone ARNs that are used by cert-manager to create DNS records list(string)
[
"arn:aws:route53:::hostedzone/*"
]
no
cluster_autoscaler Cluster Autoscaler add-on configuration values any {} no
cluster_endpoint Endpoint for your Kubernetes API server string n/a yes
cluster_name Name of the EKS cluster string n/a yes
cluster_proportional_autoscaler Cluster Proportional Autoscaler add-on configurations any {} no
cluster_version Kubernetes <major>.<minor> version to use for the EKS cluster (i.e.: 1.24) string n/a yes
create_delay_dependencies Dependency attribute which must be resolved before starting the create_delay_duration list(string) [] no
create_delay_duration The duration to wait before creating resources string "30s" no
create_kubernetes_resources Create Kubernetes resource with Helm or Kubernetes provider bool true no
eks_addons Map of EKS add-on configurations to enable for the cluster. Add-on name can be the map keys or set with name any {} no
eks_addons_timeouts Create, update, and delete timeout configurations for the EKS add-ons map(string) {} no
enable_argo_events Enable Argo Events add-on bool false no
enable_argo_rollouts Enable Argo Rollouts add-on bool false no
enable_argo_workflows Enable Argo workflows add-on bool false no
enable_argocd Enable Argo CD Kubernetes add-on bool false no
enable_aws_cloudwatch_metrics Enable AWS Cloudwatch Metrics add-on for Container Insights bool false no
enable_aws_efs_csi_driver Enable AWS EFS CSI Driver add-on bool false no
enable_aws_for_fluentbit Enable AWS for FluentBit add-on bool false no
enable_aws_fsx_csi_driver Enable AWS FSX CSI Driver add-on bool false no
enable_aws_gateway_api_controller Enable AWS Gateway API Controller add-on bool false no
enable_aws_load_balancer_controller Enable AWS Load Balancer Controller add-on bool false no
enable_aws_node_termination_handler Enable AWS Node Termination Handler add-on bool false no
enable_aws_privateca_issuer Enable AWS PCA Issuer bool false no
enable_bottlerocket_update_operator Enable Bottlerocket Update Operator add-on bool false no
enable_cert_manager Enable cert-manager add-on bool false no
enable_cluster_autoscaler Enable Cluster autoscaler add-on bool false no
enable_cluster_proportional_autoscaler Enable Cluster Proportional Autoscaler bool false no
enable_eks_fargate Identifies whether or not respective addons should be modified to support deployment on EKS Fargate bool false no
enable_external_dns Enable external-dns operator add-on bool false no
enable_external_secrets Enable External Secrets operator add-on bool false no
enable_fargate_fluentbit Enable Fargate FluentBit add-on bool false no
enable_gatekeeper Enable Gatekeeper add-on bool false no
enable_ingress_nginx Enable Ingress Nginx bool false no
enable_karpenter Enable Karpenter controller add-on bool false no
enable_kube_prometheus_stack Enable Kube Prometheus Stack bool false no
enable_metrics_server Enable metrics server add-on bool false no
enable_secrets_store_csi_driver Enable CSI Secrets Store Provider bool false no
enable_secrets_store_csi_driver_provider_aws Enable AWS CSI Secrets Store Provider bool false no
enable_velero Enable Velero add-on bool false no
enable_vpa Enable Vertical Pod Autoscaler add-on bool false no
external_dns external-dns add-on configuration values any {} no
external_dns_route53_zone_arns List of Route53 zones ARNs which external-dns will have access to create/manage records (if using Route53) list(string) [] no
external_secrets External Secrets add-on configuration values any {} no
external_secrets_kms_key_arns List of KMS Key ARNs that are used by Secrets Manager that contain secrets to mount using External Secrets list(string)
[
"arn:aws:kms:::key/*"
]
no
external_secrets_secrets_manager_arns List of Secrets Manager ARNs that contain secrets to mount using External Secrets list(string)
[
"arn:aws:secretsmanager:::secret:*"
]
no
external_secrets_ssm_parameter_arns List of Systems Manager Parameter ARNs that contain secrets to mount using External Secrets list(string)
[
"arn:aws:ssm:::parameter/*"
]
no
fargate_fluentbit Fargate fluentbit add-on config any {} no
fargate_fluentbit_cw_log_group AWS Fargate Fluentbit CloudWatch Log Group configurations any {} no
gatekeeper Gatekeeper add-on configuration any {} no
helm_releases A map of Helm releases to create. This provides the ability to pass in an arbitrary map of Helm chart definitions to create any {} no
ingress_nginx Ingress Nginx add-on configurations any {} no
karpenter Karpenter add-on configuration values any {} no
karpenter_enable_instance_profile_creation Determines whether Karpenter will be allowed to create the IAM instance profile (v1beta1) or if Terraform will (v1alpha1) bool true no
karpenter_enable_spot_termination Determines whether to enable native node termination handling bool true no
karpenter_node Karpenter IAM role and IAM instance profile configuration values any {} no
karpenter_sqs Karpenter SQS queue for native node termination handling configuration values any {} no
kube_prometheus_stack Kube Prometheus Stack add-on configurations any {} no
metrics_server Metrics Server add-on configurations any {} no
oidc_provider_arn The ARN of the cluster OIDC Provider string n/a yes
secrets_store_csi_driver CSI Secrets Store Provider add-on configurations any {} no
secrets_store_csi_driver_provider_aws CSI Secrets Store Provider add-on configurations any {} no
tags A map of tags to add to all resources map(string) {} no
velero Velero add-on configuration values any {} no
vpa Vertical Pod Autoscaler add-on configuration values any {} no

Outputs

Name Description
argo_events Map of attributes of the Helm release created
argo_rollouts Map of attributes of the Helm release created
argo_workflows Map of attributes of the Helm release created
argocd Map of attributes of the Helm release created
aws_cloudwatch_metrics Map of attributes of the Helm release and IRSA created
aws_efs_csi_driver Map of attributes of the Helm release and IRSA created
aws_for_fluentbit Map of attributes of the Helm release and IRSA created
aws_fsx_csi_driver Map of attributes of the Helm release and IRSA created
aws_gateway_api_controller Map of attributes of the Helm release and IRSA created
aws_load_balancer_controller Map of attributes of the Helm release and IRSA created
aws_node_termination_handler Map of attributes of the Helm release and IRSA created
aws_privateca_issuer Map of attributes of the Helm release and IRSA created
bottlerocket_update_operator Map of attributes of the Helm release and IRSA created
cert_manager Map of attributes of the Helm release and IRSA created
cluster_autoscaler Map of attributes of the Helm release and IRSA created
cluster_proportional_autoscaler Map of attributes of the Helm release and IRSA created
eks_addons Map of attributes for each EKS addons enabled
external_dns Map of attributes of the Helm release and IRSA created
external_secrets Map of attributes of the Helm release and IRSA created
fargate_fluentbit Map of attributes of the configmap and IAM policy created
gatekeeper Map of attributes of the Helm release and IRSA created
gitops_metadata GitOps Bridge metadata
helm_releases Map of attributes of the Helm release created
ingress_nginx Map of attributes of the Helm release and IRSA created
karpenter Map of attributes of the Helm release and IRSA created
kube_prometheus_stack Map of attributes of the Helm release and IRSA created
metrics_server Map of attributes of the Helm release and IRSA created
secrets_store_csi_driver Map of attributes of the Helm release and IRSA created
secrets_store_csi_driver_provider_aws Map of attributes of the Helm release and IRSA created
velero Map of attributes of the Helm release and IRSA created
vpa Map of attributes of the Helm release and IRSA created

terraform-aws-eks-blueprints-addons's People

Contributors

allamand, askulkarni2, bersr-aws, bjarneo, blakepettersson, bryantbiggs, candonov, csantanapr, daniellafreese, dtherhtun, elcomtik, fabidick22, fcarta29, gohmc, joaocc, melnikovn, mickeysh, neelaruban, ovaleanu, pacobart, pedrodsrodrigues, pincher95, pricealexandra, robertnorthard, rodrigobersa, tbulding, vara-bonthu, vilakshan2996, wellsiau-aws, wfrced

terraform-aws-eks-blueprints-addons's Issues

Add support for `argocd_projects` field to Argocd addons

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

This is a request for a feature to create ArgoCD Projects for application isolation in ArgoCD. Currently, the ArgoCD add-on supports creating ArgoCD Applications, but not assigning a value to the project field from argocd_applications. The ability to create ArgoCD Projects would allow us to separate applications by environment, system, or workload.

Describe the solution you would like

Create an additional Helm chart for the ArgoCD Project (AppProject) and use a Terraform helm_release resource for argocd_project. Also, expose an argocd_projects variable to create additional ArgoCD Projects. Please see the proposed solution in this pull request.
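
A hedged sketch of what the requested input might look like; the argocd_projects variable and its fields are hypothetical and only mirror the shape of the AppProject spec:

  argocd_projects = {
    platform = {
      namespace    = "argocd"
      description  = "Platform team applications"
      source_repos = ["*"]
      destinations = [
        {
          server    = "https://kubernetes.default.svc"
          namespace = "*"
        }
      ]
    }
  }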

Describe alternatives you have considered

Additional context

k8s addons charts should not use local

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

Describe the solution you would like

Add-ons that have a static string for the repository should also have a hard-coded string for the chart; the only thing that should be dynamic is the "name" (i.e., the Helm release name).

Describe alternatives you have considered

For now, we can leverage helm_config and override the chart and release name configuration, for example:

  prometheus_helm_config = {
    name       = "prometheus-diff-name"
    repository = "https://prometheus-community.github.io/helm-charts"
    chart      = "prometheus"
    version    = "15.10.1"
    namespace  = "prometheus"
    timeout    = "300"
    values = [templatefile("${path.module}/helm-values/prometheus-values.yaml", {
      operating_system = "linux"
    })]
  }

Additional context

New Addon appmesh-prometheus

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

Prometheus is an open-source monitoring and alerting toolkit which is able to scrape metrics from specified targets at configured intervals. You can use Prometheus with AWS App Mesh to track metrics of your applications within the meshes.

App Mesh provides a basic installation to set up Prometheus quickly using Helm. Follow the steps below to install the pre-configured version of Prometheus that works with App Mesh.

Describe the solution you would like

A simple EKS Blueprints add-on to deploy App Mesh Prometheus:

  enable_appmesh_prometheus = true

Describe alternatives you have considered

Manual installation

Add the EKS repository:

helm repo add eks https://aws.github.io/eks-charts

Install App Mesh Prometheus:

helm upgrade -i appmesh-prometheus eks/appmesh-prometheus \
    --namespace appmesh-system
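
Until a dedicated flag exists, a hedged Terraform sketch of the same manual installation using a plain helm_release resource (chart and repository are taken from the Helm commands above; no chart version is pinned here, and the hashicorp/helm provider is assumed to be configured for the cluster):

resource "helm_release" "appmesh_prometheus" {
  name             = "appmesh-prometheus"
  repository       = "https://aws.github.io/eks-charts"
  chart            = "appmesh-prometheus"
  namespace        = "appmesh-system"
  create_namespace = true
}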

Additional context

FluentBit default values for common data

Please describe your question here

  • It would be great to have a set of default transforms for the FluentBit add-on that add commonly used data such as:
    • Account ID
    • Cluster name
    • Namespace
    • Pod labels

Provide a link to the example/module related to the question

  • N/A

Additional context

  • Relates to aws-ia/terraform-aws-eks-blueprints#823 - when performing central log aggregation, this commonly used data is necessary to be able to segregate out where the logs are coming from for analysis

[DOC]Getting started guide - how to contribute an addon

Provide documentation to help guide users on how to contribute an addon to avoid PR feedback churn. Some things we should look at including are:

  • What are the common pieces required for an addon (i.e. - add chart to addon chart repo first)
  • What is the standard "structure" (main.tf, outputs.tf, variables.tf, versions.tf, README.md) -> could probably create a template dir to copy+paste and update
  • What content is required in the README.md -> ensure commands for testing/validating addon are included
  • High level guidance on what should go into the addon module and what should be left out for implementation
  • Any conventions (naming conventions, defaults, etc.)
  • What should be in the addon README.md and what should be updated under doc/s

Anything else I might be missing?

kube-prometheus-stack not loading config on v1.21+ of the module

Description

  • ✋ I have searched the open/closed issues and my issue is not listed.

Hi!

Since this PR aws-ia/terraform-aws-eks-blueprints#1295 (I could be wrong, but it was the same release) the kube-prometheus-stack addon stopped loading our config. This is how we've set up the addon:

enable_kube_prometheus_stack = true
kube_prometheus_stack_helm_config = {
  values  = [templatefile("./kube-prometheus-stack-values.yaml", {})],
}

On version v1.21 it works, on v1.22 it doesn't (nor on 1.25/latest), unfortunately.

If I misname the config file, the loader fails, but no matter what the file contains, it is ignored on later versions.

external-dns not exposing domain_name variable

Description

It appears the main module is not exposing the domain_name variable, which the external-dns module expects in its data filter for the zone. Please see the relevant code:
https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/v4.8.0/modules/kubernetes-addons/external-dns/data.tf#L1-L5
https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/v4.8.0/modules/kubernetes-addons/external-dns/variables.tf#L18-L21
https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/modules/kubernetes-addons/variables.tf#L275-L304

According to the comments, the domain_name and private_zone variables will be deprecated in favor of route53_zone_arns, but so far the current changes cause the module to fail while searching for the zone to manage.

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]:
    4.8.0

  • Terraform version:
    1.2.x

  • Provider version(s):

Reproduction Code [Required]

# external dns
enable_external_dns       = true
external_dns_private_zone = true
external_dns_route53_zone_arns = [
  <zone_arn>
]

Steps to reproduce the behavior:

  • Are you using workspaces?
    Yes

  • Have you cleared the local cache (see Notice section above)?
    Yes

Expected behaviour

Find zone and manage its records

Actual behaviour

The data source that looks up the zone fails.

Terminal Output Screenshot(s)

Error: Either name or zone_id must be set
with module.addons.module.external_dns[0].data.aws_route53_zone.selected
on .terraform/modules/addons/modules/kubernetes-addons/external-dns/data.tf line 2, in data "aws_route53_zone" "selected":

data "aws_route53_zone" "selected" {

External Secrets not working on Fargate

Description

When the external-secrets addon runs on Fargate pods, the validating webhook deployment doesn't work: the api-server can't connect and reports errors.

The root cause is that the Helm chart for external-secrets uses port 10250, which conflicts with the kubelet port 10250 when the pod runs on Fargate.

We should update the addon defaults to avoid this conflict (see the workaround and suggested default below).

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Versions

  • Module version [Required]: latest released

  • Terraform version: latest

  • Provider version(s): latest

Reproduction Code [Required]

Steps to reproduce the behavior:

  • Deploy cluster with Fargate, create fargate profile to run external-secrets on it
  • Enable the external-secrets module

Expected behaviour

external-secrets to deploy and work correctly

Actual behaviour

error

Terminal Output Screenshot(s)

errors on api-server:

Error: cluster-secretstore-sm failed to run apply: error when creating "/tmp/138385594kubectl_manifest.yaml": Internal error occurred: failed calling webhook "validate.clustersecretstore.external-secrets.io": failed to call webhook: Post "https://external-secrets-webhook.external-secrets.svc:443/validate-external-secrets-io-v1beta1-clustersecretstore?timeout=5s": x509: certificate is valid for ip-10-0-1-115.ec2.internal, not external-secrets-webhook.external-secrets.svc

Additional context

An issue was opened to see if the Helm chart's default port value could be changed in new releases:
https://github.com/external-secrets/external-secrets/issues/19815

The workaround for now is to set the port to 9443 and make sure security group rules allow access from the control plane to the nodes on this port.

  # Enable External Secrets Operator
  enable_external_secrets = true
  external_secrets_helm_config = {
    namespace = "external-secrets"
    values = [
      yamlencode({
        webhook = {
          port = 9443
        }
      })
    ]
  }

We should default to port 9443 for the external-secrets module.

This was found during a customer POC.

Remove forced namespace creation on kube-prometheus-stack

Description

Creating namespaces should be optional in kubernetes-addons. kube-prometheus-stack creates a namespace as part of its deployment. https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/modules/kubernetes-addons/kube-prometheus-stack/main.tf#L8

I recommend modifying the kube-prometheus-stack addon to match other addons. Ex:

locals {
  create_namespace = try(var.helm_config.create_namespace, true) && local.namespace_name != "kube-system"
}

resource "kubernetes_namespace_v1" "prometheus" {
  count = local.create_namespace ? 1 : 0

  metadata {
    name = local.namespace_name
  }
}
  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Add an EKS cluster to an existing argocd

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

I have a single ArgoCD instance to manage multiple EKS clusters. My ArgoCD is installed in an EKS cluster red in my shared-corp account. I created a new EKS cluster blue in my production account, and I'd like to add blue to my existing ArgoCD.

Currently, I do this by running argocd cluster add <cluster> --grpc-web

which creates the following resources in my new EKS cluster blue:

  • clusterrole
  • clusterrolebinding
  • kubernetes secret
  • kubernetes service account

It then creates a blue-secret in my argocd eks cluster which contains the necessary information to connect to the new eks cluster.


Adding this feature also enables us to create a new module to bootstrap the new eks cluster with a set of foundational apps. We would then have the full flow.

  • Existing argocd to manage all eks clusters
  • New eks cluster is born using terraform-aws-eks-blueprints terraform module
  • eks cluster is added to existing argocd cluster via kubernetes/argocd terraform provider using a kubernetes-addons/argocd-add-cluster
  • eks cluster is bootstrapped with foundational helm charts using argocd's app of apps via kubernetes/argocd terraform provider using a kubernetes-addons/argocd-bootstrap-cluster

Describe the solution you would like

I'd like a new module, e.g. modules/kubernetes-addons/add-eks-cluster-to-argocd, that will create all the necessary declarative resources.

Using the hashicorp/kubernetes provider

https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs

This is tested code

# main.tf

# cluster roles and binding

resource "kubernetes_cluster_role" "argocd_manager" {
  metadata {
    name = "argocd-manager-role"
  }

  rule {
    api_groups = ["*"]
    resources  = ["*"]
    verbs      = ["*"]
  }

  rule {
    non_resource_urls = ["*"]
    verbs             = ["*"]
  }
}

resource "kubernetes_cluster_role_binding" "argocd_manager" {
  metadata {
    name = "argocd-manager-role-binding"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.argocd_manager.metadata[0].name
  }

  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.argocd_manager.metadata[0].name
    namespace = kubernetes_service_account.argocd_manager.metadata[0].namespace
  }
}

# kubernetes secret

resource "kubernetes_secret" "argocd_manager" {
  metadata {
    # Cannot use `generated_name` like the "argocd cluster add" tool does due to the inability to
    # link the generated name for this secret back to the service account.
    # generate_name = "argocd-manager-token-"
    name          = "argocd-manager-token"
    namespace     = "kube-system"
    annotations = {
      "kubernetes.io/service-account.name" = "argocd-manager"
    }
  }
  # required for 1.24+
  # https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets
  type = "kubernetes.io/service-account-token"
}

resource "kubernetes_service_account" "argocd_manager" {
  metadata {
    name      = "argocd-manager"
    namespace = "kube-system"
  }

  secret {
    name = "argocd-manager-token"
    # cannot set to this value without receiving 'secrets "argocd-manager-token-77cq8" not found'
    # name = kubernetes_secret.argocd_manager.metadata[0].name
  }
}

data "kubernetes_secret" "argocd_manager" {
  metadata {
    name      = "argocd-manager-token"
    namespace = "kube-system"
  }

  depends_on = [
    kubernetes_service_account.argocd_manager,
  ]
}

resource "kubernetes_secret" "argocd_cluster_secret" {
  provider = kubernetes.argocd

  metadata {
    name = "${var.cluster_name}-secret"
    namespace = "argocd"
    labels = {
      "argocd.argoproj.io/secret-type" = "cluster"
    }
  }

  data = {
    name   = var.cluster_name
    server = module.eks.cluster_endpoint
    config = jsonencode({
      "bearerToken" = data.kubernetes_secret.argocd_manager.data["token"]
      "tlsClientConfig" = {
        "insecure"   = false
        "caData"     = base64encode(data.kubernetes_secret.argocd_manager.data["ca.crt"])
        "serverName" = "kubernetes.default.svc.cluster.local"
      }
    })
  }
}

Using the oboukili/argocd provider

https://registry.terraform.io/providers/oboukili/argocd/latest/docs

This I did not test

data "aws_eks_cluster" "default" {
  name = "cluster"
}

resource "argocd_cluster" "default" {
  server     = format("https://%s", data.aws_eks_cluster.default.endpoint)
  name       = "eks"
  namespaces = ["default", "optional"]

  config {
    aws_auth_config {
      cluster_name = "myekscluster"
      role_arn     = "arn:aws:iam::<123456789012>:role/<role-name>"
    }
    tls_client_config {
      ca_data = data.aws_eks_cluster.default.certificate_authority[0].data
    }
  }
}

Describe alternatives you have considered

  • Continue using argocd cluster add <cluster> --grpc-web

Additional context

Load Balancer Controller Add-On and possibly others get warning in EKS 1.24+

Description

In EKS 1.24+ there is an issue when deploying add-ons that use the generated irsa role/service accounts due to a deprecation.

  • [x] ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: 4.25.0

  • Terraform version: 1.3.7

  • Provider version(s):

  • provider registry.terraform.io/gavinbunney/kubectl v1.14.0
  • provider registry.terraform.io/hashicorp/aws v4.55.0
  • provider registry.terraform.io/hashicorp/cloudinit v2.3.2
  • provider registry.terraform.io/hashicorp/helm v2.9.0
  • provider registry.terraform.io/hashicorp/kubernetes v2.18.1
  • provider registry.terraform.io/hashicorp/null v3.2.1
  • provider registry.terraform.io/hashicorp/random v3.4.3
  • provider registry.terraform.io/hashicorp/time v0.9.1
  • provider registry.terraform.io/hashicorp/tls v4.0.4

Reproduction Code [Required]

module "eks_blueprints_kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints?ref=v4.25.0//modules/kubernetes-addons"

  eks_cluster_id       = module.eks.cluster_name
  eks_cluster_endpoint = module.eks.cluster_endpoint
  eks_oidc_provider    = module.eks.oidc_provider
  eks_cluster_version  = module.eks.cluster_version

  enable_aws_load_balancer_controller = true

aws_load_balancer_controller_helm_config = {
  repository_username = data.aws_ecrpublic_authorization_token.token.user_name
  repository_password = data.aws_ecrpublic_authorization_token.token.password 
  version = local.eks_add_on_versions.aws_load_balancer_controller
  values = [
    "clusterName: ${module.eks.cluster_name}",
    "region: ${var.aws_region}",
    "vpcId: ${local.vpc_id}"
  ]
}

Steps to reproduce the behavior:

deploy a new EKS cluster w/ the default Load Balancer Controller settings

Expected behaviour

No Warnings

Actual behaviour

Terraform outputs a warning regarding a deprecation in 1.24.0 that is not handled.

Terminal Output Screenshot(s)

(screenshot omitted)

Additional context

Addon defaults should ensure multiple replicas are enabled w/ pod disruption budgets

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

  • The addons created by EKS Blueprints should modify the default settings to ensure that multiple replicas are created and pod disruption budgets are enabled (where applicable/possible). This would mean the EKS Blueprints default settings for add-ons take high availability of the created resources into consideration.

Describe the solution you would like

  • The addons created by EKS Blueprints should modify the default settings to ensure that multiple replicas are created and pod disruption budgets are enabled, where applicable/possible (a hedged sketch of overriding this today follows below).
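
Until such defaults exist, a hedged sketch of overriding these settings per add-on through its configuration values, assuming the upstream aws-load-balancer-controller chart exposes replicaCount and podDisruptionBudget keys:

  enable_aws_load_balancer_controller = true
  aws_load_balancer_controller = {
    values = [
      yamlencode({
        replicaCount = 2
        podDisruptionBudget = {
          maxUnavailable = 1
        }
      })
    ]
  }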

Describe alternatives you have considered

Additional context

Prometheus/ADOT/OTEL default configuration for common data

Please describe your question here

  • It would be great to have a default configuration that will add data that is commonly used such as:
    • Account ID
    • Cluster name
    • Namespace
    • Pod labels

Provide a link to the example/module related to the question

  • N/A

Additional context

  • Relates to aws-ia/terraform-aws-eks-blueprints#823 - when performing centralized monitoring, this commonly used data is necessary to be able to segregate out where data/metrics are coming from for analysis

Keep cert-manager installation outside of opentelemetry-operator module

The constraint of using cert-manager < 1.6.0 has been lifted, as documented here. While the version can be easily updated, the cert-manager resource in the opentelemetry-operator module does not allow any customization and will conflict with an existing cert-manager configured for the underlying cluster.

Suggest not including the cert-manager installation in the opentelemetry-operator module and instead letting the addon/operator use the cert-manager installed with enable_cert_manager. The code here can be updated to ensure cert-manager exists:

count = var.enable_amazon_eks_adot || var.enable_opentelemetry_operator ? var.enable_cert_manager ? 1 : 0 : 0
depends_on = [module.cert_manager]
...

Error `could not download chart: looks like URL is not a valid chart repository or cannot be reached`

Description

I'm deploying an EKS cluster using the EKS Blueprint version 4.24.0. When the pipeline runs, several errors are reported, specifically about applying helm charts.

Note: This is identical to the error I reported here: Error: Could not download chart cluster_proportional_autoscaler with module CoreDNS, though CoreDNS does deploy.

When I check the URLs that the module is trying to interact with, the OPA Gatekeeper URL throws a 404 error.

It throws a different error for the KubeCost module: could not download chart: mkdir /.cache: permission denied

But the helm chart URL for External-Secrets is functional: https://charts.external-secrets.io/. It just seems like the module is incorrectly including the ending quote (") as part of the URL, and therefore cannot reach it.

(screenshot: External-Secrets-URL)

  • ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: v4.24.0

  • Terraform version: 1.3.9

  • Provider version(s):
    • hashicorp/local v2.3.0 (signed by HashiCorp)
    • hashicorp/aws v4.56.0 (signed by HashiCorp)
    • hashicorp/helm v2.9.0 (signed by HashiCorp)
    • gavinbunney/kubectl v1.14.0 (self-signed, key ID AD64217B5ADD572F)
    • hashicorp/kubernetes v2.18.1 (signed by HashiCorp)
    • hashicorp/random v3.4.3 (signed by HashiCorp)
    • terraform-aws-modules/http v2.4.1 (self-signed, key ID B2C1C0641B6B0EB7)
    • hashicorp/null v3.2.1 (signed by HashiCorp)
    • hashicorp/tls v4.0.4 (signed by HashiCorp)
    • hashicorp/cloudinit v2.3.2 (signed by HashiCorp)
    • hashicorp/time v0.9.1 (signed by HashiCorp)
    • hashicorp/http v3.2.1 (signed by HashiCorp)

Reproduction Code [Required]

locals {
  cluster_version = "1.25"
}

module "eks_blueprints_kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.24.0"

  eks_cluster_id = module.eks_blueprints.eks_cluster_id

  # EKS Addons
  enable_external_secrets = true
  enable_gatekeeper = true
  enable_kubecost = true 
}

Steps to reproduce the behavior:

  1. terraform init
  2. terraform plan
  3. terraform apply

Expected behaviour

I expect the EKS add-ons to be fully deployed.

Actual behaviour

The terraform apply fails and causes the pipeline to error out.

Terminal Output Screenshot(s)

(screenshot: TerraformApplyError)

Additional context

This is being run in an Azure Pipeline, using Azure DevOps (online, not server). Also, this is not using self-managed build services, but rather, uses the Microsoft-provided cloud build servers.

[Bug]: Adot/OpenTelemetry & Cert-Manager IAM policy issue

Welcome to Amazon EKS Blueprints!

  • Yes, I've searched similar issues on GitHub and didn't find any.

Amazon EKS Blueprints Release version

4.6.1

What is your environment, configuration and the example used?

  • Cluster version 1.21, tested against tags v4.5.0, 4.6.0 & 4.6.1

What did you do and What did you see instead?

  • Upgraded the eks-blueprints module from version 4.0.9 to 4.6.1; facing these issues:
  • Prometheus pending-install
  • Ingress-Nginx fails to create the namespace, except when adding create_namespace = true
  • opentelemetry and cert-manager fail with the errors below if both are enabled
    • one of them has to be sacrificed for the other to deploy successfully
Error: error creating IAM Policy fabric-stg-ops-eks-cert-manager-irsa: EntityAlreadyExists: A policy called fabric-stg-ops-eks-cert-manager-irsa already exists. Duplicate names are not allowed. status code: 409, request id: d35f5501-bb5b-4832-adc7-75eaf9391cd2
with module.eks_addons_0.module.opentelemetry_operator[0].module.cert_manager.aws_iam_policy.cert_manager
on .terraform/modules/eks_addons_0/modules/kubernetes-addons/cert-manager/main.tf line 42, in resource "aws_iam_policy" "cert_manager":
resource "aws_iam_policy" "cert_manager" {
Error: namespaces "opentelemetry-operator-system" already exists
with module.eks_addons_0.module.opentelemetry_operator[0].kubernetes_namespace_v1.adot[0]
on .terraform/modules/eks_addons_0/modules/kubernetes-addons/opentelemetry-operator/main.tf line 10, in resource "kubernetes_namespace_v1" "adot":
resource "kubernetes_namespace_v1" "adot" {

Additional Information

- Terraform Cloud and macOS
- Terraform version 1.1.9
- versions.tf


# ---------------------------------------------------------------------------------------------------------------------
# Terraform version constraints
# ---------------------------------------------------------------------------------------------------------------------

terraform {
  required_version = ">= 1.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.72, >= 4.10"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.10"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.1"
    }
    null = {
      source  = "hashicorp/null"
      version = ">= 3.1"
    }
    http = {
      source  = "terraform-aws-modules/http"
      version = "2.4.1"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 2.2"
    }
    awsutils = {
      source  = "cloudposse/awsutils"
      version = ">= 0.11.0"
    }
    tfe = {
      source  = "hashicorp/tfe"
      version = "~> 0.30.2"
    }
    grafana = {
      source  = "grafana/grafana"
      version = ">= 1.13.3"
    }
    # tls = {
    #   source  = "hashicorp/tls"
    #   version = "3.4.0"
    # }
  }
}

Velero kubernetes addon manage_via_gitops does not work

Description

I cannot install Velero using argocd. It always installs via helm

I am using the following...
argocd_manage_add_ons = true

Other modules are working fine and install via ArgoCD, e.g. external-dns.

The velero module is missing this line...

manage_via_gitops = var.manage_via_gitops
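
A hedged sketch of where that line would go inside the Velero add-on's helm-addon call, following the pattern used by the other add-on modules; the surrounding local values are assumptions about the module's internals, not the actual code:

module "helm_addon" {
  source = "../helm-addon"

  helm_config       = local.helm_config       # assumed local, as in other add-on modules
  irsa_config       = local.irsa_config       # assumed local, as in other add-on modules
  manage_via_gitops = var.manage_via_gitops   # the missing line
  addon_context     = var.addon_context
}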

Versions

  • Module version : v4.14.0

  • Terraform version: 1.1.9

[FEATURE] karpenter v0.14 iam policy to filter by new tag

Is your feature request related to a problem? Please describe

This is a placeholder for a future version of karpenter

Describe the solution you'd like

The next release of Karpenter will add a new tag
https://github.com/tzneal/karpenter/blob/main/website/content/en/preview/upgrade-guide/_index.md#upgrading-to-v0140
karpenter.k8s.aws/cluster: ${CLUSTER_NAME}

Blueprints currently uses the Name tag in the IAM policy to limit which EC2 instances Karpenter can terminate:
https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/2eb213a4568fd019c2fa4471af9a6b164b4b4a0a/modules/kubernetes-addons/karpenter/data.tf#L33-L36

With the new tag available, we may be able to better target the correct instances.
For backwards compatibility we may need to support both for a while though.
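
A hedged sketch of how the IAM policy statement could filter on the new tag once it is available; the condition key and variable names are illustrative, not the module's current code:

data "aws_iam_policy_document" "karpenter_terminate" {
  statement {
    actions   = ["ec2:TerminateInstances"]
    resources = ["*"]

    condition {
      test     = "StringEquals"
      variable = "ec2:ResourceTag/karpenter.k8s.aws/cluster"
      values   = [var.eks_cluster_id]
    }
  }
}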

External DNS - Ability to work with other providers

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

Right now, in order to use external-dns you need to provide Route53 zones, etc. It should be possible to configure external-dns with providers other than Route53.

Describe the solution you would like

Make the default Route53 configuration and resources dependent on a condition so people can choose the DNS provider they want to use.

Describe alternatives you have considered

Use a regular helm_release object to deploy external-dns (see the sketch below).
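
A hedged sketch of that alternative, assuming the kubernetes-sigs external-dns chart and its top-level provider value; Cloudflare is used purely as an illustrative non-Route53 provider, and the token secret name is hypothetical:

resource "helm_release" "external_dns" {
  name             = "external-dns"
  repository       = "https://kubernetes-sigs.github.io/external-dns/"
  chart            = "external-dns"
  namespace        = "external-dns"
  create_namespace = true

  values = [
    yamlencode({
      provider = "cloudflare"   # assumption: chart exposes a top-level provider value
      env = [
        {
          name = "CF_API_TOKEN"
          valueFrom = {
            secretKeyRef = {
              name = "cloudflare-api-token"   # hypothetical secret
              key  = "token"
            }
          }
        }
      ]
    })
  ]
}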

Allow cert-manager to work with other providers

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

By default, the cert-manager add-on creates the configuration required to work with Route53; it even deploys the issuers if required. There are use cases, like our own, where we use cert-manager with an external provider on EKS.

Describe the solution you would like

Add a conditional variable to declare if you want to use route53. With this setup, move all the configuration required to work with Route53 under this condition.

Separate the cluster issuers into another addon where people can define their own issuers with resolvers other than Route53.

Describe alternatives you have considered

Right now we are not using the cert-manager addon as it doesn't provide the required functionality; instead we are creating the Helm release for cert-manager ourselves and then another Helm release with https://github.com/adfinis/helm-charts/tree/main/charts/cert-manager-issuers, which allows us to configure the proper issuers.

Kubernetes Add-Ons with IAM roles fail to deploy if same named clusters are deployed in multiple-regions

When deploying EKS clusters into multiple regions, I got an error that prevented the 2nd cluster from deploying.

The error was that the amp-query role already exists in IAM.

Example issue code from the Prometheus Add-On:
https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/e0e9aeb7ea785613cf794d524f79bcf5929bb6b8/modules/kubernetes-addons/prometheus/main.tf#L110

Possible resolution:
IAM role names should contain the deployment region as well as the cluster name
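
A hedged sketch of such a naming scheme; the amp-query suffix comes from the error above, while the variable and local names are illustrative:

data "aws_region" "current" {}

locals {
  # include the region so identically named clusters in different regions don't collide
  amp_query_irsa_role_name = "${var.eks_cluster_id}-${data.aws_region.current.name}-amp-query"
}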

Enabling both secrets_store_csi_driver and secrets_store_csi_driver_provider_aws results in error ClusterRole secretproviderclasses-admin-role already exist

Description

When following the https://aws-ia.github.io/terraform-aws-eks-blueprints/v4.5.0/add-ons/csi-secrets-store-provider-aws/ instructions, or just using the suggested way:

enable_secrets_store_csi_driver = true
enable_secrets_store_csi_driver_provider_aws = true

The Terraform apply fails due to a duplicate ClusterRole secretproviderclasses-admin-role in the cluster, which belongs to secrets-store-csi-driver.

  • ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]:
    v4.14.0

  • Terraform version:
    Terraform v1.2.7
    on linux_amd64

  • Provider version(s):

  • provider registry.terraform.io/gavinbunney/kubectl v1.14.0
  • provider registry.terraform.io/hashicorp/aws v4.38.0
  • provider registry.terraform.io/hashicorp/cloudinit v2.2.0
  • provider registry.terraform.io/hashicorp/helm v2.7.1
  • provider registry.terraform.io/hashicorp/kubernetes v2.15.0
  • provider registry.terraform.io/hashicorp/null v3.2.0
  • provider registry.terraform.io/hashicorp/random v3.4.3
  • provider registry.terraform.io/hashicorp/time v0.9.1
  • provider registry.terraform.io/hashicorp/tls v4.0.4

Reproduction Code [Required]

enable_secrets_store_csi_driver = true
enable_secrets_store_csi_driver_provider_aws = true

Steps to reproduce the behavior:

Reproduced in both cases, with or without workspaces, using different code (Terraform and Terragrunt).

Include the above when installing the addons
and execute terraform/terragrunt apply.

Expected behaviour

Both Helm deployed successfully

Actual behaviour

Error occurs for secrets_store_csi_driver_provider_aws addon

Terminal Output Screenshot(s)

The output is missing, but the error says that the ClusterRole secretproviderclasses-admin-role already exists; duplicates are not allowed.

Additional context

When investigating this behavior and following the Helm charts, I ended up at the Chart.yaml from eks-charts for csi-secrets-store-provider-aws,
https://github.com/aws/eks-charts/blob/master/stable/csi-secrets-store-provider-aws/Chart.yaml,
which has the following dependency:

dependencies:
- name: secrets-store-csi-driver
  repository: https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
  version: 1.1
  condition: secrets-store-csi-driver.install

Additional information should be provided for the https://aws-ia.github.io/terraform-aws-eks-blueprints/v4.5.0/add-ons/csi-secrets-store-provider-aws/ instructions, since enabling only enable_secrets_store_csi_driver_provider_aws = true will result in both charts being installed.
The https://github.com/aws/eks-charts/blob/master/stable/csi-secrets-store-provider-aws chart does explain some of this, but from the addons it's not clear.

Additionally, some documentation points to https://github.com/aws/secrets-store-csi-driver-provider-aws, which is yet another repo for AWS's CSI provider and another location for a chart, one that doesn't include this dependency on secrets-store-csi-driver.

According to the addon main.tf: https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/modules/kubernetes-addons/csi-secrets-store-provider-aws/main.tf
it points to eks-charts location:

module "helm_addon" {
  source = "../helm-addon"

  # https://github.com/aws/eks-charts/blob/master/stable/csi-secrets-store-provider-aws/Chart.yaml
  helm_config = merge(
    {
      name        = local.name
      chart       = local.name
      repository  = "https://aws.github.io/eks-charts"
      version     = "0.0.3"
      namespace   = kubernetes_namespace_v1.csi_secrets_store_provider_aws.metadata[0].name
      description = "A Helm chart to install the Secrets Store CSI Driver and the AWS Key Management Service Provider inside a Kubernetes cluster."
    },
    var.helm_config
  )

  manage_via_gitops = var.manage_via_gitops
  addon_context     = var.addon_context
}

The description of it is ok.

Support multiple external DNS instances

I'd like to have some public ingresses and some private ingresses using the same domain, and different Route53 zones (for public and private names). Per https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/public-private-route53.md, it is recommended to use two different instances of external DNS.

With the current EKS blueprints, I've tried this

module "eks_blueprints_kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.8.0"
   ....
  enable_external_dns                 = true
  external_dns_private_zone = true
  external_dns_route53_zone_arns = ["arn:aws:route53:::hostedzone/<zone-id>"]
  external_dns_helm_config = {
     name = "dns-internal"
     values = [<<-EOT
                  aws:
                    region: eu-central-1
                  txtOwnerId: analytics-2
                  annotationFilter: alb.ingress.kubernetes.io/scheme in (internal)
                  EOT
             ]
  }
}
module "external_dns_addon_2" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.8.0"
  enable_external_dns                 = true

  external_dns_private_zone = true
  external_dns_route53_zone_arns = ["arn:aws:route53:::hostedzone/<another-zone>"]
  external_dns_helm_config = {
     name = "dns-external"
     namespace = "external-dns-external"
     values = [<<-EOT
                  aws:
                    region: eu-central-1
                  txtOwnerId: analytics-2
                  annotationFilter: alb.ingress.kubernetes.io/scheme in (internet-facing)
                  EOT
             ]
  }
}

That fails with

module.external_dns_addon_2.module.external_dns[0].module.helm_addon.module.irsa[0].kubernetes_namespace_v1.irsa[0]: Creation complete after 0s [id=external-dns-external]
╷
│ Error: failed creating IAM Role (analytics-2-external-dns-sa-irsa): EntityAlreadyExists: Role with name analytics-2-external-dns-sa-irsa already exists.
│ 	status code: 409, request id: ed34b08f-99d6-4799-a896-2dda0d9647c1

since, apparently, the role name that the external-dns module tries to use is always the same and it's not possible to change it.

It would be nice if this were a supported use case.

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

Describe the solution you would like

Describe alternatives you have considered

Additional context

Support for helm valueFiles

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

I need to add an ArgoCD application that has an overriding values file. The Terraform module for adding an ArgoCD application only supports inline values. Since the overriding parameters are large, I cannot use values.

Describe the solution you would like

I am trying to add an ArgoCD application from my git repo which has an overriding values file. I want to have an option like this:

  argocd_applications = {
    my-app = {
      ssh_key_secret_name = "github-repo-ssh-key"  # Needed for private repos
      namespace           = "argocd"
      path                = "argocd-app/k8s"
      repo_url            = "git@github.com:test/my-app.git"
      target_revision     = "HEAD"
      valueFiles          = "dev.yaml"
      type                = "helm"            # Optional, defaults to helm.
    }
  }

Describe alternatives you have considered

Additional context

I am trying to achieve the Argo CD app-of-apps pattern by adding a repo that contains all the application YAML manifests along with their override values files.
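
For reference, the rendered Argo CD Application would need to carry the override file under spec.source.helm.valueFiles. The manifest below is a hand-written sketch using the values from the example above (repo URL, paths, and namespaces are placeholders), not output from the module:

resource "kubectl_manifest" "my_app" {
  yaml_body = <<-YAML
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-app
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: git@github.com:test/my-app.git
        targetRevision: HEAD
        path: argocd-app/k8s
        helm:
          valueFiles:
            - dev.yaml
      destination:
        server: https://kubernetes.default.svc
        namespace: my-app
  YAML
}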

AWS node termination handler not working on kubernetes v1.25

Description

  • ✋ I have searched the open/closed issues and my issue is not listed.

Versions

EKS blueprints v4.25.0
EKS 1.25
Terraform v1.3.9

  • provider registry.terraform.io/gavinbunney/kubectl v1.14.0
  • provider registry.terraform.io/hashicorp/aws v4.57.1
  • provider registry.terraform.io/hashicorp/cloudinit v2.3.2
  • provider registry.terraform.io/hashicorp/external v2.3.1
  • provider registry.terraform.io/hashicorp/helm v2.9.0
  • provider registry.terraform.io/hashicorp/kubernetes v2.18.1
  • provider registry.terraform.io/hashicorp/local v2.4.0
  • provider registry.terraform.io/hashicorp/null v3.2.1
  • provider registry.terraform.io/hashicorp/random v3.4.3
  • provider registry.terraform.io/hashicorp/time v0.9.1
  • provider registry.terraform.io/hashicorp/tls v4.0.4
  • provider registry.terraform.io/terraform-aws-modules/http v2.4.1

Reproduction Code [Required]

Steps to reproduce the behavior:

Set enable_aws_node_termination_handler = true in a cluster running Kubernetes v1.25.

Expected behaviour

It should deploy the AWS node termination handler chart successfully.

Actual behaviour

The addon fails to sync, seemingly because PodSecurityPolicies were removed in Kubernetes v1.25 and the chart still renders PSP-related resources.

I also tried adding

  enable_aws_node_termination_handler = true
  aws_node_termination_handler_helm_config = {
    version                    = "0.21.1"
    values = [templatefile("../_common/helm_values/aws-node-termination-handler-values.yaml", {})],
  }

where the values file contains:

rbac:
  pspEnabled: false

but these values don't seem to be loaded.
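
One variation worth trying, shown as a sketch and assuming the chart version in use honors rbac.pspEnabled, is passing the override inline instead of via a template file, which makes it easier to confirm in the plan that the value actually reaches the Helm release:

  enable_aws_node_termination_handler = true
  aws_node_termination_handler_helm_config = {
    version = "0.21.1"
    values = [
      <<-EOT
      rbac:
        pspEnabled: false
      EOT
    ]
  }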

Terraform delete/create Karpenter provisioner is very risky

Description

I configured my Terraform to create a Karpenter provisioner:

data "kubectl_path_documents" "karpenter_provisioners" {
  pattern = "${path.module}/kubernetes/karpenter/*"
  vars = {
    azs                     = join(",", local.azs)
    iam-instance-profile-id = "${local.name}-${local.node_group_name}"
    eks-cluster-id          = local.name
    eks-vpc_name            = local.name
  }
}

resource "kubectl_manifest" "karpenter_provisioner" {
  for_each  = toset(data.kubectl_path_documents.karpenter_provisioners.documents)
  yaml_body = each.value
}

The actual provisioner.yaml is shown below:
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: karpenter-default
spec:
  instanceProfile: ${eks-cluster-id}-managed-ondemand
  subnetSelector:
    kubernetes.io/cluster/${eks-cluster-id}: '*'
    kubernetes.io/role/internal-elb: '1'
  securityGroupSelector:
    aws:eks:cluster-name: ${eks-cluster-id}
  tags:
    karpenter.sh/cluster_name: ${eks-cluster-id}
    karpenter.sh/provisioner: team
---
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Enables consolidation which attempts to reduce cluster cost by both removing un-needed nodes and down-sizing those
  # that can't be removed.  Mutually exclusive with the ttlSecondsAfterEmpty parameter.
  consolidation:
    enabled: true
  #sttlSecondsAfterEmpty: 60
  requirements:
    - key: karpenter.k8s.aws/instance-family
      operator: NotIn
      values:
        - a1
        - c1
        - c3
        - inf1
        - t3
        - t2
    - key: karpenter.k8s.aws/instance-cpu
      operator: Lt
      values: 
        - "33"
    - key: 'kubernetes.io/arch'
      operator: In
      values: ['amd64']
    - key: karpenter.sh/capacity-type
      operator: In
      #values: ['on-demand']
      values: ['on-demand', 'spot']
  providerRef:
    name: karpenter-default
    tags:
      karpenter.sh/cluster_name: ${eks-cluster-id}
      karpenter.sh/provisioner: default
  limits:
    resources:
      cpu: '200'
  labels:
    billing-team: default
    team: default
    type: karpenter
  taints:
    - key: karpenter
      value: 'true'
      effect: NoSchedule

The problem is that when I update this YAML file and apply again, Terraform deletes the object first.
At that point, Karpenter garbage-collects all resources created from the object, which can abruptly delete all the nodes it had created and disrupt the applications deployed on those nodes.

Expected behaviour

I don't know whether Terraform can update/patch the object in place instead of deleting and recreating it; if so, that would be a solution.

If not, we should clearly warn users not to create resources this way, since it can lead to cluster disruption when the object is deleted.
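
One workaround, sketched below and not an official recommendation, addresses why the destroy happens at all: the for_each key is the full document content, so any edit produces a "new" key and therefore a new resource address. Keying by position keeps the address stable and lets the provider update the object in place:

resource "kubectl_manifest" "karpenter_provisioner" {
  # Key each manifest by its position rather than by its full content, so an
  # edit to the YAML changes the value (an in-place update) instead of the
  # key (a destroy-and-recreate). Stable as long as document order is stable.
  for_each  = { for idx, doc in data.kubectl_path_documents.karpenter_provisioners.documents : "doc-${idx}" => doc }
  yaml_body = each.value
}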

Actual behaviour

Actually, Terraform will delete the provisioner and recreate it

Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create
  - destroy
Terraform will perform the following actions:
  # kubectl_manifest.karpenter_provisioner["apiVersion: karpenter.sh/v1alpha5\nkind: Provisioner\nmetadata:\n  name: default\nspec:\n  # Enables consolidation which attempts to reduce cluster cost by both removing un-needed nodes and down-sizing those\n  # that can't be removed.  Mutually exclusive with the ttlSecondsAfterEmpty parameter.\n  consolidation:\n    enabled: true\n  #sttlSecondsAfterEmpty: 60\n  requirements:\n    - key: karpenter.k8s.aws/instance-family\n      operator: NotIn\n      values:\n        - a1\n        - c1\n        - c3\n        - inf1\n        - t3\n        - t2\n    - key: karpenter.k8s.aws/instance-cpu\n      operator: Lt\n      values: \n        - \"33\"\n    - key: 'kubernetes.io/arch'\n      operator: In\n      values: ['amd64']\n    - key: karpenter.sh/capacity-type\n      operator: In\n      #values: ['on-demand']\n      values: ['on-demand', 'spot']\n  providerRef:\n    name: karpenter-default\n    tags:\n      karpenter.sh/cluster_name: eks-blueprint\n      karpenter.sh/provisioner: default\n  limits:\n    resources:\n      cpu: '200'\n  labels:\n    billing-team: default\n    team: default\n    type: karpenter\n  taints:\n    - key: karpenter\n      value: 'true'\n      effect: NoSchedule"] will be created
  + resource "kubectl_manifest" "karpenter_provisioner" {
      + api_version             = "karpenter.sh/v1alpha5"
      + apply_only              = false
      + force_conflicts         = false
      + force_new               = false
      + id                      = (known after apply)
      + kind                    = "Provisioner"
      + live_manifest_incluster = (sensitive value)
      + live_uid                = (known after apply)
      + name                    = "default"
      + namespace               = (known after apply)
      + server_side_apply       = false
      + uid                     = (known after apply)
      + validate_schema         = true
      + wait_for_rollout        = true
      + yaml_body               = (sensitive value)
      + yaml_body_parsed        = <<-EOT
            apiVersion: karpenter.sh/v1alpha5
            kind: Provisioner
            metadata:
              name: default
            spec:
              consolidation:
                enabled: true
              labels:
                billing-team: default
                team: default
                type: karpenter
              limits:
                resources:
                  cpu: "200"
              providerRef:
                name: karpenter-default
                tags:
                  karpenter.sh/cluster_name: eks-blueprint
                  karpenter.sh/provisioner: default
              requirements:
              - key: karpenter.k8s.aws/instance-family
                operator: NotIn
                values:
                - a1
                - c1
                - c3
                - inf1
                - t3
                - t2
              - key: karpenter.k8s.aws/instance-cpu
                operator: Lt
                values:
                - "33"
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
              - key: karpenter.sh/capacity-type
                operator: In
                values:
                - on-demand
                - spot
              taints:
              - effect: NoSchedule
                key: karpenter
                value: "true"
        EOT
      + yaml_incluster          = (sensitive value)
    }

  # kubectl_manifest.karpenter_provisioner["apiVersion: karpenter.sh/v1alpha5\nkind: Provisioner\nmetadata:\n  name: default\nspec:\n  # Enables consolidation which attempts to reduce cluster cost by both removing un-needed nodes and down-sizing those\n  # that can't be removed.  Mutually exclusive with the ttlSecondsAfterEmpty parameter.\n  consolidation:\n    enabled: true\n  #sttlSecondsAfterEmpty: 60\n  requirements:\n    - key: karpenter.k8s.aws/instance-family\n      operator: NotIn\n      values:\n        - a1\n        - c1\n        - c3\n        - inf1\n        - t3\n        - t2\n    - key: karpenter.k8s.aws/instance-cpu\n      operator: Lt\n      values: \n        - \"33\"\n    - key: 'kubernetes.io/arch'\n      operator: In\n      values: ['amd64']\n    - key: karpenter.sh/capacity-type\n      operator: In\n      values: ['on-demand']\n      #values: ['on-demand', 'spot']\n  providerRef:\n    name: karpenter-default\n    tags:\n      karpenter.sh/cluster_name: eks-blueprint\n      karpenter.sh/provisioner: default\n  limits:\n    resources:\n      cpu: '200'\n  labels:\n    billing-team: default\n    team: default\n    type: karpenter\n  taints:\n    - key: karpenter\n      value: 'true'\n      effect: NoSchedule"] will be destroyed
  # (because key ["apiVersion: karpenter.sh/v1alpha5\nkind: Provisioner\nmetadata:\n  name: default\nspec:\n  # Enables consolidation which attempts to reduce cluster cost by both removing un-needed nodes and down-sizing those\n  # that can't be removed.  Mutually exclusive with the ttlSecondsAfterEmpty parameter.\n  consolidation:\n    enabled: true\n  #sttlSecondsAfterEmpty: 60\n  requirements:\n    - key: karpenter.k8s.aws/instance-family\n      operator: NotIn\n      values:\n        - a1\n        - c1\n        - c3\n        - inf1\n        - t3\n        - t2\n    - key: karpenter.k8s.aws/instance-cpu\n      operator: Lt\n      values: \n        - \"33\"\n    - key: 'kubernetes.io/arch'\n      operator: In\n      values: ['amd64']\n    - key: karpenter.sh/capacity-type\n      operator: In\n      values: ['on-demand']\n      #values: ['on-demand', 'spot']\n  providerRef:\n    name: karpenter-default\n    tags:\n      karpenter.sh/cluster_name: eks-blueprint\n      karpenter.sh/provisioner: default\n  limits:\n    resources:\n      cpu: '200'\n  labels:\n    billing-team: default\n    team: default\n    type: karpenter\n  taints:\n    - key: karpenter\n      value: 'true'\n      effect: NoSchedule"] is not in for_each map)
  - resource "kubectl_manifest" "karpenter_provisioner" {
      - api_version             = "karpenter.sh/v1alpha5" -> null
      - apply_only              = false -> null
      - force_conflicts         = false -> null
      - force_new               = false -> null
      - id                      = "/apis/karpenter.sh/v1alpha5/provisioners/default" -> null
      - kind                    = "Provisioner" -> null
      - live_manifest_incluster = (sensitive value)
      - live_uid                = "44fe9527-0f7d-4413-aa16-c24e0e49a45a" -> null
      - name                    = "default" -> null
      - server_side_apply       = false -> null
      - uid                     = "44fe9527-0f7d-4413-aa16-c24e0e49a45a" -> null
      - validate_schema         = true -> null
      - wait_for_rollout        = true -> null
      - yaml_body               = (sensitive value)
      - yaml_body_parsed        = <<-EOT
            apiVersion: karpenter.sh/v1alpha5
            kind: Provisioner
            metadata:
              name: default
            spec:
              consolidation:
                enabled: true
              labels:
                billing-team: default
                team: default
                type: karpenter
              limits:
                resources:
                  cpu: "200"
              providerRef:
                name: karpenter-default
                tags:
                  karpenter.sh/cluster_name: eks-blueprint
                  karpenter.sh/provisioner: default
              requirements:
              - key: karpenter.k8s.aws/instance-family
                operator: NotIn
                values:
                - a1
                - c1
                - c3
                - inf1
                - t3
                - t2
              - key: karpenter.k8s.aws/instance-cpu
                operator: Lt
                values:
                - "33"
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
              - key: karpenter.sh/capacity-type
                operator: In
                values:
                - on-demand
              taints:
              - effect: NoSchedule
                key: karpenter
                value: "true"
        EOT -> null
      - yaml_incluster          = (sensitive value)
    }

and then I will lose all my instances at once.

enable csi_secrets_store_provider_aws using gitops

Description

I want to deploy csi_secrets_store_provider_aws, but when I set argocd_manage_add_ons = true I can't choose which addons are deployed with Terraform and which with Argo CD.

  • ✋ I have searched the open/closed issues and my issue is not listed.

[FEATURE] Additional helm values for kubernetes-addons which won't delete the default

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

Re-logging as #38 is closed.

Being able to add additional Helm values. Currently you have to replace the default values.yaml with a full copy of all values.

Describe the solution you would like

A way of passing in additional yaml to be merged with the values.yaml file.

Ie.

module "aws_eks_addons" {
  external_dns_helm_config = {
    values = [yamlencode({
      sources : ["service", "istio-gateway"]
    })]
  }
}

will remove settings from the default values.yaml like provider, zoneIdFilters and aws.region.

Describe the solution you'd like

I would like to see the following pattern (example for external-dns):

variable "helm_values" {
  type        = any
  default     = {}
  description = "Additional Helm values"
}

locals {
  default_helm_config = {
    values      = [local.default_helm_values, var.helm_values]
  }
  default_helm_values = templatefile("${path.module}/values.yaml", {
    aws_region      = var.addon_context.aws_region_name
    zone_filter_ids = local.zone_filter_ids
  })
}

Then:

module "aws_eks_addons" {
  external_dns_helm_values = yamlencode({
    sources : ["service", "istio-gateway"]
  })
}

The pattern relies on Helm's ability to merge two or more values.yaml files.
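
For context, this is the Helm provider behavior the proposal relies on: entries in a helm_release values list are merged in order, with later entries overriding earlier ones. A minimal standalone sketch (chart repository and default values are illustrative, not the module's actual defaults):

resource "helm_release" "external_dns" {
  name       = "external-dns"
  repository = "https://kubernetes-sigs.github.io/external-dns/"
  chart      = "external-dns"
  namespace  = "external-dns"

  values = [
    # defaults rendered by the module (e.g. from its values.yaml template)
    yamlencode({
      provider = "aws"
      aws      = { region = "eu-central-1" }
    }),
    # user-supplied additions merged on top, without replacing the defaults
    yamlencode({
      sources = ["service", "istio-gateway"]
    }),
  ]
}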

Allow the CNAME strategy to be configured on DNS01 of the lets-encrypt clusterissuer

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

I want to be able to configure the CNAME strategy of the Let's Encrypt ClusterIssuer, so that I can issue certificates for other domains using the Route 53 hosted zones that I do have access to.

See https://cert-manager.io/docs/configuration/acme/dns01/#delegated-domains-for-dns01

Describe the solution you would like

Allow the cnameStrategy value to be specified: either None or Follow.
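
For illustration, a hand-written ClusterIssuer showing where cnameStrategy sits in the cert-manager API; the issuer name, email, and region are placeholders and this is not the module's generated manifest:

resource "kubectl_manifest" "letsencrypt_production" {
  yaml_body = <<-YAML
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-production
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: platform@example.com
        privateKeySecretRef:
          name: letsencrypt-production
        solvers:
          - dns01:
              cnameStrategy: Follow   # the value this issue asks to expose
              route53:
                region: eu-central-1
  YAML
}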

Describe alternatives you have considered

There are no alternatives.

Allow injecting an AWS IAM Role ARN into the kubernetes-addons module

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

Right now, the kubernetes-addons modules create an IAM Role for Service Accounts (IRSA) for Kubernetes components such as the EFS CSI Driver, but inside an enterprise environment there are many ways to design and manage access control for AWS resources.

It would be nice if the kubernetes-addons modules could use a pre-created/pre-designed IAM Role and associate it with the service account for the Kubernetes component.

Describe the solution you would like

Add an iam_role_arn variable to the kubernetes-addons modules and associate the created service account with the passed-in IAM Role:

variable "iam_role_arn" {
type = string
description = "Existing IAM Role with configured Policies to associated with Service Account"
}
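
For reference, the end result being requested is roughly the following, a hand-written sketch of standard IRSA wiring rather than the module's code: the addon's service account is annotated with the externally managed role instead of a role the module creates.

resource "kubernetes_service_account_v1" "efs_csi_controller" {
  metadata {
    name      = "efs-csi-controller-sa"
    namespace = "kube-system"

    annotations = {
      # Associate the pre-created, externally managed role with the addon's
      # service account via the standard IRSA annotation.
      "eks.amazonaws.com/role-arn" = var.iam_role_arn
    }
  }
}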

Describe alternatives you have considered

Additional context

argocd_application: Inconsistent conditional result types when using add_on_application true and false

Description

Using a configuration like this in the kubernetes-addons module:

argocd_applications = {
    addons = {
      repo_url            = "repo_url"
      path                = "chart"
      add_on_application  = true
    }
    dev = {
      repo_url            = "repo_url"
      path                = "envs/dev"
      add_on_application  = false
    }
}

returns the following error:

│ Error: Inconsistent conditional result types
│ 
│   on .terraform/modules/eks_blueprints_kubernetes_addons/modules/kubernetes-addons/argocd/main.tf line 72, in resource "helm_release" "argocd_application":
│   72:       each.value.add_on_application ? var.addon_config : {}
│     ├────────────────
│     │ each.value.add_on_application is false
│     │ var.addon_config is object with 14 attributes

This can be easily fixed by changing {} to null.

Terraform v1.3.8
on linux_amd64
+ provider registry.terraform.io/gavinbunney/kubectl v1.14.0
+ provider registry.terraform.io/hashicorp/aws v4.55.0
+ provider registry.terraform.io/hashicorp/cloudinit v2.3.1
+ provider registry.terraform.io/hashicorp/helm v2.9.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.18.1
+ provider registry.terraform.io/hashicorp/local v2.3.0
+ provider registry.terraform.io/hashicorp/null v3.2.1
+ provider registry.terraform.io/hashicorp/random v3.4.3
+ provider registry.terraform.io/hashicorp/time v0.9.1
+ provider registry.terraform.io/hashicorp/tls v4.0.4
+ provider registry.terraform.io/integrations/github v5.18.0
+ provider registry.terraform.io/terraform-aws-modules/http v2.4.1
+ provider registry.terraform.io/viktorradnai/bcrypt v0.1.2

bug: CSI secret provider + driver can't render resources in kube-system

Description

Currently the new csi-secret-store provider uses the aws eks-charts version of the chart. The chart itself only needs to be able to deploy into the existing kube-system namespace, yet the module includes the resource below,

resource "kubernetes_namespace_v1" "csi_secrets_store_provider_aws" {
  metadata {
    name = local.namespace
  }
}

which forces the module to create and own the namespace. That seems like overkill.
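
A sketch of how this could be relaxed (the create_namespace variable is hypothetical, not current module code): gate namespace creation behind a flag so the chart can target the existing kube-system namespace.

variable "create_namespace" {
  type        = bool
  default     = true
  description = "Whether the addon should create its namespace; set to false to use an existing one such as kube-system"
}

resource "kubernetes_namespace_v1" "csi_secrets_store_provider_aws" {
  count = var.create_namespace ? 1 : 0

  metadata {
    name = local.namespace
  }
}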

โš ๏ธ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if state is stored remotely, which is hopefully the best practice you are following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

[FEATURE] Make cert-manager CA endpoint configurable & add a Certificate resource to the ACME chart

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

The cert-manager module's ACME CA endpoint is not configurable, even though there are many other ACME certificate authorities (such as ZeroSSL and other ACME servers) that can replace Let's Encrypt; in the module's custom chart it is hard-coded to the Let's Encrypt endpoint in both clusterissuer-production.yaml and clusterissuer-staging.yaml. Also, when the module deploys the custom Let's Encrypt chart, it only deploys the ClusterIssuer manifest; in my view the chart should also include a cert-manager Certificate resource that references the deployed ClusterIssuer and generates a certificate after deployment.

Describe the solution you would like

The solution I would like is to add the cert-manager Certificate custom resource to the same chart as the ClusterIssuer resource, and to make the ACME endpoint configurable. Note that some ACME servers require a key ID and secret for the connection, which means external account binding should also be supported in the ClusterIssuer manifest.
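
To make the request concrete, here is a hand-written sketch of the two pieces being asked for; the ACME server URL, EAB key ID, secret names, and domain names are placeholders:

resource "kubectl_manifest" "acme_issuer" {
  yaml_body = <<-YAML
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: acme-production
    spec:
      acme:
        server: https://acme.zerossl.com/v2/DV90   # configurable ACME endpoint
        privateKeySecretRef:
          name: acme-production
        externalAccountBinding:
          keyID: <eab-key-id>
          keySecretRef:
            name: acme-eab-secret
            key: secret
        solvers:
          - dns01:
              route53:
                region: eu-central-1
  YAML
}

resource "kubectl_manifest" "acme_certificate" {
  yaml_body = <<-YAML
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: example-com
      namespace: default
    spec:
      secretName: example-com-tls
      dnsNames:
        - example.com
      issuerRef:
        name: acme-production
        kind: ClusterIssuer
  YAML
}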

Additional context

I already made that solution possible and can create a PR if this issue gets approved.

[Karpenter] Rule with the name already exists on this event bus.

Description

We cannot create EventBridge rules when we deploy more than one cluster in the same region with the option karpenter_enable_spot_termination_handling = true.

Versions

  • Module version [Required]:
    v4.25.0
  • Terraform version:
    v1.3.9
  • Provider version(s):
+ provider registry.terraform.io/gavinbunney/kubectl v1.14.0
+ provider registry.terraform.io/hashicorp/aws v4.53.0
+ provider registry.terraform.io/hashicorp/cloudinit v2.2.0
+ provider registry.terraform.io/hashicorp/helm v2.8.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.17.0
+ provider registry.terraform.io/hashicorp/local v2.3.0
+ provider registry.terraform.io/hashicorp/null v3.2.1
+ provider registry.terraform.io/hashicorp/tls v4.0.4
+ provider registry.terraform.io/terraform-aws-modules/http v2.4.1

Reproduction Code [Required]

https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/e0e9aeb7ea785613cf794d524f79bcf5929bb6b8/modules/kubernetes-addons/karpenter/main.tf#L40

Steps to reproduce the behavior:

Expected behavior

It should be possible to create more than one cluster in the same region without resource naming conflicts.

Actual behavior

Rule with the name already exists on this event bus.
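
A sketch of the kind of change being requested (illustrative only, not the module's actual code; the eks_cluster_id variable name is assumed): include the cluster name in the rule name so two clusters in one region do not collide.

resource "aws_cloudwatch_event_rule" "karpenter_spot_interruption" {
  # Prefix with the cluster name so each cluster gets its own rule.
  name        = "${var.eks_cluster_id}-karpenter-spot-interruption"
  description = "EC2 Spot instance interruption warnings for Karpenter"

  event_pattern = jsonencode({
    source      = ["aws.ec2"]
    detail-type = ["EC2 Spot Instance Interruption Warning"]
  })
}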

ExternalDNS irsa policy causing issues if you have multiple route53 hosted zones not managed by externalDNS

The policy as it's done today looks like this:

{
    "Statement": [
        {
            "Action": [
                "route53:ListResourceRecordSets",
                "route53:ChangeResourceRecordSets"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:route53:::hostedzone/xxxxxx",
            "Sid": ""
        },
        {
            "Action": "route53:ListHostedZones",
            "Effect": "Allow",
            "Resource": "*",
            "Sid": ""
        }
    ],
    "Version": "2012-10-17"
}

The issue is that if you have multiple hosted zones in the AWS account that are not used for this EKS cluster, ExternalDNS will error when trying to list the resource record sets of those other hosted zones. When that happens, it fails even to update the hosted zone entry that it does have access to (i.e. xxxxxx in this case).

I'd suggest adding ListResourceRecordSets for all hosted zones, but I wonder if this is more of an upstream issue, since it does not seem great to grant access that should not be needed. Maybe someone with more ExternalDNS expertise has a good idea.
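
To make the suggestion concrete, a sketch of the adjusted policy (an assumption about shape, not the module's current output): write access stays scoped to the managed zone, while listing is allowed on all zones so enumerating other zones does not fail.

data "aws_iam_policy_document" "external_dns" {
  statement {
    actions   = ["route53:ChangeResourceRecordSets"]
    resources = ["arn:aws:route53:::hostedzone/xxxxxx"]
  }

  statement {
    actions = [
      "route53:ListHostedZones",
      "route53:ListResourceRecordSets",
    ]
    resources = ["*"]
  }
}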

[FEATURE] Allow `addon_config` variable in `aws-for-fluentbit` module (similar to `fargate-fluentbit` module)

Is your feature request related to a problem? Please describe

I need to customize the aws-for-fluentbit addon with some filters.

Describe the solution you'd like

aws-for-fluentbit

In the Fluent Bit ConfigMap data, add:

  • fluent-bit.conf
  • container-log.conf
  • dataplane-log.conf
  • host-log.conf
  • parsers.conf
  • filters.conf
  • outputs.conf

https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/7064fbba2fccddbfd3119564204a0f20db479965/modules/kubernetes-addons/fargate-fluentbit/main.tf#L15-L26

This requires creating locals named config and default_config, and an addon_config variable.

Similar to:

https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/7064fbba2fccddbfd3119564204a0f20db479965/modules/kubernetes-addons/fargate-fluentbit/locals.tf#L6-L41

kubernetes-addons

Create an aws_for_fluentbit_addon_config variable.
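
A sketch of the requested pattern; the variable and local names are illustrative, mirroring the fargate-fluentbit module linked above: defaults live in locals and a user-supplied addon_config is merged over them.

variable "aws_for_fluentbit_addon_config" {
  type        = any
  default     = {}
  description = "Overrides for the Fluent Bit configuration (filters.conf, outputs.conf, ...)"
}

locals {
  default_config = {
    filters_conf = file("${path.module}/config/filters.conf")
    outputs_conf = file("${path.module}/config/outputs.conf")
  }

  # User-provided keys take precedence over the defaults.
  config = merge(local.default_config, var.aws_for_fluentbit_addon_config)
}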

Describe alternatives you've considered

Install addon without terraform-eks-blueprints module.

Additional context

Add any other context or screenshots about the feature request here.

[FEATURE] Specify destination namespace in ArgoCD application config

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

When adding an application to argocd_applications config, the module should allow the destination namespace to be specified just like the destination server which defaults to https://kubernetes.default.svc. ArgoCD uses the destination namespace value for namespace-scoped resources that have not set a value for .metadata.namespace. The Argocd addon module currently supports specifying a destination server via the destination property, but the destination namespace is not yet supported.

Describe the solution you would like

Set the application destination namespace value in the argocd_application helm_release Terraform resource as part of the Destination Config.

# ---------------------------------------------------------------------------------------------------------------------
# ArgoCD App of Apps Bootstrapping (Helm)
# ---------------------------------------------------------------------------------------------------------------------

resource "helm_release" "argocd_application" {
 for_each = { for k, v in var.applications : k => merge(local.default_argocd_application, v) if merge(local.default_argocd_application, v).type == "helm" }
 ...
 set {
   name  = "destination.namespace"
   value = each.value.namespace
   type  = "string"
 }
 ...
}

Describe alternatives you have considered

Setting the namespace in argocd_helm_config. But this has no effect on the Argo CD application destination, even though that is probably what the Argo CD application Helm template in the module intends. The issue with this approach is that the Argo CD Helm config namespace is not necessarily the same as the destination namespace of the resources that will be deployed via the Helm chart.

Additional context

[CI] - Improve Version Management

Proposal: Utilize Renovate for dependency updates.

Renovate looks like it could help us from a dependency update perspective on the project:

  • It scans and reports on a number of different tools used: Terraform modules, GitHub actions, Helm charts, Docker images, etc.
  • It provides a single pane of glass that reports on what it is aware of and what updates are available, with a simple checkbox to re-scan the entire project and another to rebase all the PRs it has created or update them independently. Check out the test repo I created, which has an issue that is meant to always hang around so Renovate can keep it updated: https://github.com/clowdhaus/bot-test/issues/3

Once this initial scan completes we'll be able to tell more about what it was able to detect, and what it was unable to detect. This might guide us on how we specify certain values so that it can be detected by these tools (for this project but also for our users).

Currently running a test on this project here, but it looks like there is a low rate limit it's hitting on this initial scan (it even updates the issue ticket with the rate limit, providing a lot of detail in real time).

[FEATURE] Add Semantic release automation with Github workflow

Is your feature request related to a problem? Please describe

A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like

A clear and concise description of what you want to happen.

Describe alternatives you've considered

A clear and concise description of any alternative solutions or features you've considered.

Additional context

Add any other context or screenshots about the feature request here.

[external-dns] make the addon more versatile

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

The external-dns addon requires the domain_name variable.

But external-dns is also used with DNS providers other than Route 53; my team uses it with Infoblox, for example.

I think your team or a committer has already considered this point. I found this:
https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/f31d98df193b0cd195cc5dc548d2bcda3ad9031d/modules/kubernetes-addons/external-dns/main.tf#L76-L79

So please update the external-dns code for more general use.

Describe the solution you would like

I want to use external-dns with a DNS provider other than Route 53.
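
As a sketch of the desired outcome, and assuming the external-dns chart in use accepts a top-level provider value while the Route 53 specific inputs become optional, something like this should be possible:

  enable_external_dns = true

  external_dns_helm_config = {
    values = [
      <<-EOT
      provider: infoblox
      EOT
    ]
  }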

@bryantbiggs

`irsa_config` is underused when using the `kubernetes-addons` module

Community Note

  • Please vote on this issue by adding a ๐Ÿ‘ reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

I need to pass the same Docker Hub credentials to all the namespaces created by the kubernetes-addons module to prevent the Docker Hub throttling error, but the root module does not pass the irsa_config variable (which has the kubernetes_svc_image_pull_secrets option) to each addon.

Describe the solution you would like

A way to pass kubernetes_svc_image_pull_secrets to each addon from the kubernetes-addons module.
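
For reference, what passing kubernetes_svc_image_pull_secrets through to every addon would ultimately produce is a service account like the one below; this is a hand-written sketch with placeholder names, not module output.

resource "kubernetes_service_account_v1" "addon" {
  metadata {
    name      = "external-dns-sa"
    namespace = "external-dns"
  }

  # Shared registry credentials so pods pulling from Docker Hub are
  # authenticated and avoid anonymous-pull rate limits.
  image_pull_secret {
    name = "dockerhub-credentials"
  }
}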

Describe alternatives you have considered

I can think of two ways:

  • Create variables like <addon>_irsa_config, just like the <addon>_helm_config ones, but we'll need to create all of them;
  • Allow passing irsa_config inside the <addon>_helm_config, though it's not a helm config.

Additional context

I created a branch with the second option but didn't create the PR yet because I want to know what you think first.

Edit: Created the Pull Request: aws-ia/terraform-aws-eks-blueprints#1240

[FEATURE] Support for passing *_helm_config to ArgoCD

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

Passing *_helm_config to ArgoCD, e.g. aws_load_balancer_controller_helm_config.

Describe the solution you would like

Add code to pass *_helm_config to ArgoCD.

Describe alternatives you have considered

It is not intuitive to change the handling of *_helm_config depending on whether you are using ArgoCD or not.
So I find it difficult to solve this except by passing *_helm_config to ArgoCD.

Additional context

When I wrote aws_load_balancer_controller_helm_config, nothing showed up in the terraform plan.
This only happens when using ArgoCD, because there is no code that passes *_helm_config to the ArgoCD Application resource.
This felt like a bug.
However, it is possible the feature implementation simply has not caught up, so I created a feature request.

aws_load_balancer_controller_helm_config = {
  values = [
    <<-EOT
    podDisruptionBudget:
      maxUnavailable: 1
    EOT
  ]
}

Also, I can pass Helm values by writing the following, but the current implementation is not sufficient, because all existing awsLoadBalancerController values are lost in Terraform's shallow merge.

argocd_applications = {
  addons = {
    path               = "chart"
    repo_url           = local.argocd_addons_repo_url
    add_on_application = true
    values = {
      awsLoadBalancerController = {
        podDisruptionBudget = {
          maxUnavailable = 1
        }
      }
    }
  }
}

Also, isn't it unnatural that our value is not passed when using ArgoCD?

https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/854974d80304d736fce537699f86b8657c5fde98/modules/kubernetes-addons/aws-load-balancer-controller/values.yaml

[FEATURE] Additional helm values for kubernetes-addons which won't delete the default

Is your feature request related to a problem? Please describe

I can't just add additional Helm values; I have to replace the default values.yaml with a full copy of all values.

Ie.

module "aws_eks_addons" {
  external_dns_helm_config = {
    values = [yamlencode({
      sources : ["service", "istio-gateway"]
    })]
  }
}

will remove settings from the default values.yaml like provider, zoneIdFilters and aws.region.

Describe the solution you'd like

I would like to see the following pattern (example for external-dns):

variable "helm_values" {
  type        = any
  default     = {}
  description = "Additional Helm values"
}

locals {
  default_helm_config = {
    values      = [local.default_helm_values, var.helm_values]
  }
  default_helm_values = templatefile("${path.module}/values.yaml", {
    aws_region      = var.addon_context.aws_region_name
    zone_filter_ids = local.zone_filter_ids
  })
}

Then:

module "aws_eks_addons" {
  external_dns_helm_values = yamlencode({
    sources : ["service", "istio-gateway"]
  })
}

The pattern relies on Helm's ability to merge two or more values.yaml files.
