cloudposse / terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one specific problem

Home Page: https://docs.cloudposse.com/components/

License: Apache License 2.0


terraform-aws-components's Introduction


This is a collection of reusable AWS Terraform components for provisioning infrastructure used by the Cloud Posse reference architectures. They work really well with Atmos, our open-source tool for managing infrastructure as code with Terraform.

Tip

πŸ‘½ Use Atmos with Terraform

Cloud Posse uses atmos to easily orchestrate multiple environments using Terraform.
Works with GitHub Actions, Atlantis, or Spacelift.

Watch demo of using Atmos with Terraform
Example of running atmos to manage infrastructure from our Quick Start tutorial.

Introduction

In this repo you'll find real-world examples of how we've implemented Terraform "root" modules as native Atmos Components for our customers. These Components leverage our hundreds of free and open-source terraform "child" modules.

The component library captures the business logic, opinions, best practices and non-functional requirements for an organization.

It's from this library that other developers in your organization will pick and choose whenever they need to deploy a new capability.

These components make a lot of assumptions (aka "convention over configuration") about how we've configured our environments. That said, they still serve as an excellent reference for others on how to build, organize and distribute enterprise-grade infrastructure with Terraform that can be used with Atmos.

Usage

Please take a look at each component's README for specific usage.

Tip

πŸ‘½ Use Atmos with Terraform

To orchestrate multiple environments with ease using Terraform, Cloud Posse recommends using Atmos, our open-source tool for Terraform automation.

Watch demo of using Atmos with Terraform
Example of running atmos to manage infrastructure from our Quick Start tutorial.

Generally, you can use these components in Atmos by adding something like the following code into your stack manifest:

components:                      # List of components to include in the stack
  terraform:                     # The toolchain being used for configuration
    vpc:                         # The name of the component (e.g. terraform "root" module)
      vars:                      # Terraform variables (e.g. `.tfvars`)
        cidr_block: 10.0.0.0/16  # A variable input passed to terraform via `.tfvars`

Automated Updates of Components using GitHub Actions

Leverage our GitHub Action to automate the creation and management of pull requests for component updates.

This is done by creating a new file (e.g. atmos-component-updater.yml) in the .github/workflows directory of your repository.

The file should contain the following:

jobs:
  update:
    runs-on:
      - "ubuntu-latest"
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Update Atmos Components
        uses: cloudposse/github-action-atmos-component-updater@v2
        env:
          # https://atmos.tools/cli/configuration/#environment-variables
          ATMOS_CLI_CONFIG_PATH: ${{ github.workspace }}/rootfs/usr/local/etc/atmos/
        with:
          github-access-token: ${{ secrets.GITHUB_TOKEN }}
          log-level: INFO
          max-number-of-prs: 10

      - name: Delete abandoned update branches
        uses: phpdocker-io/github-actions-delete-abandoned-branches@v2
        with:
          github_token: ${{ github.token }}
          last_commit_age_days: 0
          allowed_prefixes: "component-update/"
          dry_run: no

For the full documentation on how to use the Component Updater GitHub Action, please see the Atmos Integrations documentation.

Using pre-commit Hooks

This repository uses pre-commit and pre-commit-terraform to enforce consistent Terraform code and documentation. This is accomplished by triggering hooks during git commit that block commits which don't pass checks (e.g., formatting and module documentation). You can find the hooks that are executed in the .pre-commit-config.yaml file.

You can install pre-commit and this repo's pre-commit hooks on macOS by running the following commands:

brew install pre-commit gawk terraform-docs coreutils
pre-commit install --install-hooks

Then run the following command to rebuild the docs for all Terraform components:

make rebuild-docs

Important

Deprecated Components

Terraform components which are no longer actively maintained are kept in the deprecated/ folder.

Many of these deprecated components are used in our older reference architectures.

We intend to delete them eventually, but are leaving them in the repo for now.

Important

In Cloud Posse's examples, we avoid pinning modules to specific versions to prevent discrepancies between the documentation and the latest released versions. However, for your own projects, we strongly advise pinning each module to the exact version you're using. This practice ensures the stability of your infrastructure. Additionally, we recommend implementing a systematic approach for updating versions to avoid unexpected changes.
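For illustration, a pinned module call might look like the sketch below (the module name and version here are hypothetical, chosen only to show the shape of an exact pin):

```hcl
module "vpc" {
  source = "cloudposse/vpc/aws"

  # Pin to the exact version you have tested; bump it deliberately,
  # e.g. through an automated update workflow, rather than floating.
  version = "2.1.0" # hypothetical version, for illustration only

  # ... module inputs ...
}
```

An exact `version` makes `terraform init` reproducible across machines and CI runs, at the cost of requiring explicit updates.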

Makefile Targets

Available targets:

  help                                Help screen
  help/all                            Display help for all targets
  help/short                          This help short screen
  rebuild-docs                        Rebuild README for all Terraform components
  rebuild-mixins-docs                 Rebuild README for Terraform Mixins
  upstream-component                  Upstream a given component

Related Projects

Check out these related projects.

  • Cloud Posse Terraform Modules - Our collection of reusable Terraform modules used by our reference architectures.
  • Atmos - Atmos is like docker-compose but for your infrastructure

References

For additional context, refer to some of these links.

Tip

Use Terraform Reference Architectures for AWS

Use Cloud Posse's ready-to-go terraform architecture blueprints for AWS to get up and running quickly.

βœ… We build it together with your team.
βœ… Your team owns everything.
βœ… 100% Open Source and backed by fanatical support.

Request Quote

πŸ“š Learn More

Cloud Posse is the leading DevOps Accelerator for funded startups and enterprises.

Your team can operate like a pro today.

Ensure that your team succeeds by using Cloud Posse's proven process and turnkey blueprints. Plus, we stick around until you succeed.

Day-0: Your Foundation for Success

  • Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
  • Deployment Strategy. Adopt a proven deployment strategy with GitHub Actions, enabling automated, repeatable, and reliable software releases.
  • Site Reliability Engineering. Gain total visibility into your applications and services with Datadog, ensuring high availability and performance.
  • Security Baseline. Establish a secure environment from the start, with built-in governance, accountability, and comprehensive audit logs, safeguarding your operations.
  • GitOps. Empower your team to manage infrastructure changes confidently and efficiently through Pull Requests, leveraging the full power of GitHub Actions.

Request Quote

Day-2: Your Operational Mastery

  • Training. Equip your team with the knowledge and skills to confidently manage the infrastructure, ensuring long-term success and self-sufficiency.
  • Support. Benefit from seamless communication over Slack with our experts, ensuring you have the support you need, whenever you need it.
  • Troubleshooting. Access expert assistance to quickly resolve any operational challenges, minimizing downtime and maintaining business continuity.
  • Code Reviews. Enhance your team’s code quality with our expert feedback, fostering continuous improvement and collaboration.
  • Bug Fixes. Rely on our team to troubleshoot and resolve any issues, ensuring your systems run smoothly.
  • Migration Assistance. Accelerate your migration process with our dedicated support, minimizing disruption and speeding up time-to-value.
  • Customer Workshops. Engage with our team in weekly workshops, gaining insights and strategies to continuously improve and innovate.

Request Quote

✨ Contributing

This project is under active development, and we encourage contributions from our community.

Many thanks to our outstanding contributors:

For πŸ› bug reports & feature requests, please use the issue tracker.

In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.

  1. Review our Code of Conduct and Contributor Guidelines.
  2. Fork the repo on GitHub
  3. Clone the project to your own machine
  4. Commit changes to your own branch
  5. Push your work back up to your fork
  6. Submit a Pull Request so that we can review your changes

NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!

🌎 Slack Community

Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.

πŸ“° Newsletter

Sign up for our newsletter and join 3,000+ DevOps engineers, CTOs, and founders who get insider access to the latest DevOps trends, so you can always stay in the know. Dropped straight into your Inbox every week β€” and usually a 5-minute read.

πŸ“† Office Hours

Join us every Wednesday via Zoom for your weekly dose of insider DevOps trends, AWS news and Terraform insights, all sourced from our SweetOps community, plus a live Q&A that you can’t find anywhere else. It's FREE for everyone!

License


Preamble to the Apache License, Version 2.0

Complete license is available in the LICENSE file.

Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.

Trademarks

All other trademarks referenced herein are the property of their respective owners.


Copyright Β© 2017-2024 Cloud Posse, LLC


terraform-aws-components's People

Contributors

aknysh, arcaven, benbentwo, bradj, danjbh, dudymas, dylanbannon, gberenice, goruha, gowiem, jhenderson, joe-niland, johncblandii, joshmyers, kevcube, korenyoni, max-lobur, maximmi, maxymvlasov, mcalhoun, milldr, nitrocode, nuru, osterman, rosesecurity, sgtoj, tamsky, vadim-hleif, wavemoran, zdmytriv


terraform-aws-components's Issues

EKS cluster bootstrap

Describe the Bug

Bootstrapping the EKS cluster doesn't seem to be worked out. The component contains several self-references, but how would you deploy a component with self-references before it exists? Where would the data sources pull their information from?

Expected Behavior

When defining an atmos stack for an EKS component without any code modification and running a stack deploy command for the first time, I expect it to set up an AWS EKS cluster without errors or complaints.

Steps to Reproduce

The code is taken from the following sources:

Deploy the foundational atmos stacks (everything on the diagram except the EKS component):

(architecture diagram omitted)

Define an EKS stack:

    eks:
      vars:
        aws_assume_role_arn: "arn:aws:iam::314028178888:role/atmos-gbl-test-terraform" # identity role for module
        tfstate_existing_role_arn: "arn:aws:iam::101236733906:role/atmos-gbl-root-terraform"
        cluster_kubernetes_version: "1.19"
        region_availability_zones: ["us-east-1b", "us-east-1c", "us-east-1d"]
        public_access_cidrs: ["72.107.0.0/24"]
        managed_node_groups_enabled: true

Run command to deploy a stack:

atmos terraform plan eks -s gbl-test

This triggers errors about missing remote state for the following components:

  • workers_role
  • eks

Commenting out the offending code and its related variables/locals seems to resolve the issue, but it adds significant overhead to bootstrapping the component. Imagine how hard it would be to replicate the deployment across dozens of other environments.

Use variable component tag in the introspection mixin

Describe the Feature

Instead of a fixed value, set the component name tag using a variable.

Expected Behavior

By setting

component_tag="IacComponent" 

Obtain:

      + tags                  = {
          + "Iac"          = "Terraform"
          + "IacComponent" = "ecs-service"
          + "Stage"        = "dev"

Use Case

Some companies may have different naming patterns for the component tag; this change would let them customize it.

Code suggestion

locals {
  # Throw an error if lookup fails
  check_required_tags = module.this.enabled ? [
    for k in var.required_tags :
    lookup(module.this.tags, k)
  ] : []
}

variable "required_tags" {
  type        = list(string)
  description = "List of required tag names"
  default     = []
}

variable "component_tag" {
  type        = string
  description = "The name of the tag that will be used to identify the component"
  default     = "IacComponent"
}

# `introspection` module will contain the additional tags
module "introspection" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  tags = merge(var.tags, {
    # Parenthesize the expression so HCL accepts it as a map key
    (var.component_tag) = basename(abspath(path.module))
  })

  context = module.this.context
}

Better Describe Purpose of Root Modules

what

  • these are examples of how we've implemented various common patterns with our customers
  • the root modules are the most opinionated incarnations of modules that seldom translate verbatim across organizations. This is your secret sauce. We could never implement this in a generic way without creating crazy bloat. The terraform-root-modules is your β€œstarting off point” where you hardfork/diverge. These modules are very specific to how we do things in our environment, so they might not "drop in" smoothly in other environments as they make a lot of assumptions on how things are organized.
  • this is where you start: with your clean terraform-root-modules repo, create a new project therein to describe the infrastructure that you want.
  • no 2 companies will ever have the same terraform-root-modules
  • root modules never compose other root modules. If or when this is desired, the composite module should be split off into a new repository and versioned independently
  • root modules become the module catalog of your organization that developers choose from
  • a company writes its own root modules. It's your flavor of how to leverage the generic building blocks to achieve the specific customizations you need, without writing everything from the ground up, because you're leveraging our general modules. The idea is to write all terraform-aws-* modules very generically
  • These terraform-root-modules are not strictly part of geodesic, but they can be used together with it.
  • Every root module should include a Makefile that defines init, plan, and apply. This establishes a common interface for interacting with Terraform without the need for terragrunt

Update description to mention:

Normally, a company should build up a service catalog of terraform modules, which is just a collection of terraform modules that capture the opinions, best practices and non-functional requirements of that organization.

for example

What about the case where I have a pre-production account with 30+ VPCs?

This is where you would probably treat those as separate projects in your company's own terraform-root-modules. Each VPC might have its own TF state.

tfstate-backend: var.enable_server_side_encryption is not used anymore

Describe the Bug

var.enable_server_side_encryption is no longer used (it is commented out in main.tf of the upstream component).

Should be marked as deprecated.

Expected Behavior

Variable to be removed / deprecated

Steps to Reproduce

see code

Screenshots

No response

Environment

No response

Additional Context

variable "enable_server_side_encryption" {
  type        = bool
  description = "Enable DynamoDB and S3 server-side encryption"
  default     = true
}

// enable_server_side_encryption = var.enable_server_side_encryption
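One way to mark the variable deprecated, as the issue suggests (a sketch only; the exact wording and removal timeline are up to the maintainers):

```hcl
variable "enable_server_side_encryption" {
  type        = bool
  description = "DEPRECATED: no longer used; retained for backwards compatibility and slated for removal"
  default     = true
}
```

Updating only the description keeps existing configurations working while signaling the deprecation in generated docs.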

Upstream latest EFS component

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

The efs component seems out-of-date; its remote-state calls and other patterns do not match current best practices.

Expected Behavior

efs component should be updated to reflect current best practices.

Standardize Domain or Zone

what

  • We have some places that use TF_VAR_domain_name and other places that use TF_VAR_zone_name

why

  • It would be nice to consolidate on one env var to reduce confusion

components_datadog-private-location-ecs Error: Missing resource instance key

Describe the Bug

Error: Missing resource instance key
on .terraform/modules/components_datadog-private-location-ecs/modules/datadog-private-location-ecs/main.tf line 9, in locals:
  datadog_location_config = jsondecode(datadog_synthetics_private_location.private_location.config)
Because datadog_synthetics_private_location.private_location has "count" set, its attributes must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
    datadog_synthetics_private_location.private_location[count.index]
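A minimal sketch of the change the error message points at, assuming the resource keeps its `count` and the local should tolerate the zero-instance case:

```hcl
locals {
  # Index the counted resource explicitly; guard against count = 0
  datadog_location_config = (
    length(datadog_synthetics_private_location.private_location) > 0
    ? jsondecode(datadog_synthetics_private_location.private_location[0].config)
    : null
  )
}
```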

Expected Behavior

The datadog-private-location-ecs module to deploy a private space in datadog and an aws ecs fargate container.

Steps to Reproduce

I have this in my main.tf and am deploying using Terraform Cloud,

terraform {
  required_providers {
    datadog = {
      source = "DataDog/datadog"
    }
  }
}
provider "datadog" {
  api_key = var.datadog_api_key
  app_key = var.datadog_app_key
}
module "components_datadog-private-location-ecs" {
  source  = "cloudposse/components/aws//modules/datadog-private-location-ecs"
  version = "1.293.2"
  region  = "us-west-2"
}

Screenshots

No response

Environment

Terraform version 1.4.6 (Terraform Cloud)
cloudposse/components/aws//modules/datadog-private-location-ecs - version 1.295.0

Additional Context

I fully recognize this may be user error and not a bug at all; I apologize in advance if so.

Thank you!

S3-Bucket - make templated logging bucket name more flexible

Have a question? Please checkout our Slack Community or visit our Slack Archive.

Slack Community

Describe the Feature

While using the s3-bucket component and trying to add logging to it, I want the logging_bucket_name_rendering_template format to work when null-label's tenant and environment are null. When I tried the suggested format for when only tenant is null, I ran into an error: terraform cannot format a null value with %s. This is consistent with the Terraform format syntax docs (see the last paragraph of that section).

See the example below.

I believe the required feature change is to remove any null values before rendering the bucket_name line:

β”‚ Error: Error in function call
β”‚ 
β”‚   on ../../modules/example/main.tf line 30, in locals:
β”‚   30:     final_bucket_name = format("%s-%s-%s-%%s", var.test_namespace, var.test_tenant, var.test_environment, var.test_stage, var.test_logging_bucket_name)
β”‚     β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚     β”‚ while calling format(format, args...)
β”‚     β”‚ var.test_environment is "myEnviornment"
β”‚     β”‚ var.test_logging_bucket_name is "myLoggingBucketName"
β”‚     β”‚ var.test_namespace is "myNameSpace"
β”‚     β”‚ var.test_stage is "myStage"
β”‚     β”‚ var.test_tenant is null
β”‚ 
β”‚ Call to function "format" failed: unsupported value for "%s" at 3: null value cannot be formatted.
β•΅
Releasing state lock. This may take a few moments...

Expected Behavior

When null values are present in the null-label settings (eg, when environment is null), the format function should still generate a logging bucket name.

Use Case

NA

Describe Ideal Solution

Adjust this line to remove nulls before calling the format function.

Something like this,

locals {
  non_null_values   = compact([var.test_namespace, var.test_tenant, var.test_environment, var.test_stage, var.test_logging_bucket_name])
  # join() handles a variable number of parts, avoiding format() with nulls
  final_bucket_name = join("-", local.non_null_values)
}

If I can get feedback on a viable solution, I'm happy to submit a PR.

Alternatives Considered

Manually set the logging bucket as needed.

Additional Context

variable "test_namespace" {
  type        = string
  description = "The namespace to use for the test"
  default = "myNameSpace"
}
  
variable "test_tenant" {
  type        = string
  description = "The tenant to use for the test"
  default     = null
}

variable "test_environment" {
  type        = string
  description = "The environment to use for the test"
  default     = "myEnvironment"
}

variable "test_stage" {
  type        = string
  description = "The stage to use for the test"
  default     = "myStage"
}

variable "test_logging_bucket_name" {
  type        = string
  description = "The logging bucket name to use for the test"
}

locals {
  non_null_values   = compact([var.test_namespace, var.test_tenant, var.test_environment, var.test_stage, var.test_logging_bucket_name])
  # join() handles a variable number of parts, avoiding format() with nulls
  final_bucket_name = join("-", local.non_null_values)
}

output "final_bucket_name" {
  value = local.final_bucket_name
}

We should break out the `account-map` submodules into their own components

Describe the Feature

Break out the account-map submodules into their own components so they can be versioned and treated separately from the account-map component. I don't see why they are all bundled into the same component, so perhaps this is first a question: is there a reason? If not, it would be good to use this issue as a driver to break them up.

Expected Behavior

We have a separate component module folder for each account-map submodule:

  1. iam-roles
  2. roles-to-principals
  3. team-assume-role-policy

Use Case

This will enable updating the account-map component and not breaking other modules that depend on its sub-modules. It will also enable usage of roles-to-principals and similar without having to pull in account-map.

Describe Ideal Solution

We have a separate component module folder for each account-map submodule.

Alternatives Considered

N/A

Additional Context

My team ran into issues with using older components and a newer version of account-map. We ended up having to pull in an older submodule of the account-map component manually to fix our issues.

Unsupported argument in module "iam_roles" in "iam-primary-roles" component

Hi everyone,

I noticed a mistake in the iam-primary-roles module that causes an error when planning it with Terraform.

In iam-primary-roles, when calling the iam-roles module, the variable assume_role should be replaced by tfstate_assume_role, as in iam-delegated-modules.

Current:

module "iam_roles" {
  source      = "../account-map/modules/iam-roles"
  stage       = var.stage
  assume_role = false
  region      = var.region
}

Proposed:

module "iam_roles" {
  source              = "../account-map/modules/iam-roles"
  stage               = var.stage
  region              = var.region
  tfstate_assume_role = false
}

Mathieu

Spacelift workers can run on ARM

Describe the Feature

Spacelift workers should be able to run on ARM.

Expected Behavior

Being able to use ARM without needing to specify a specific image ID.

Use Case

ARM is more cost effective.

Describe Ideal Solution

Being able to use ARM without needing to specify a specific image ID.

Alternatives Considered

No response

Additional Context

see

and Spacelift supports ARM

Providers file in modules

Describe the Bug

https://github.com/cloudposse/terraform-aws-components/blob/master/modules/account/providers.tf is present in the account module (and probably elsewhere).

This prevents the module from being invoked with a providers block (containing variables, or in a loop).

This is a bad practice, as described by HashiCorp: https://www.terraform.io/docs/language/modules/develop/providers.html

"A module intended to be called by one or more other modules must not
contain any provider blocks. A module containing its own provider configurations
is not compatible with the for_each, count, and depends_on arguments that were
introduced in Terraform v0.13. For more information, see Legacy Shared Modules
with Provider Configurations."

Expected Behavior

The module should not define its own providers.

Steps to Reproduce

provider "aws" {
  region     = var.region
  access_key = var.aws_access_key_id
  secret_key = var.aws_secret_access_key
}

module "aws_accounts" {
  source = "github.com/cloudposse/terraform-aws-components.git//modules/account"
  # ...
}
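If the provider block were removed from the module, callers could pass provider configurations explicitly, along the lines of this sketch (the alias name is hypothetical):

```hcl
# Provider configured by the caller, not inside the shared module
provider "aws" {
  alias      = "org"
  region     = var.region
  access_key = var.aws_access_key_id
  secret_key = var.aws_secret_access_key
}

module "aws_accounts" {
  source = "github.com/cloudposse/terraform-aws-components.git//modules/account"

  # Explicit provider passing; also unblocks for_each/count/depends_on on the module
  providers = {
    aws = aws.org
  }

  # ... module inputs ...
}
```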

first deployment of eks/cluster will never work

Describe the Bug

A new deployment of eks/cluster will fail because the OIDC provider does not exist yet. local.eks_cluster_oidc_issuer_url will initially be an empty string, yet the cloudposse/eks-iam-role/aws module for the aws-ebs-csi-driver and vpc-cni addons requires the variable even when those addons are not enabled.

Expected Behavior

The eks/cluster component should work for initial deployments.

Steps to Reproduce

  1. deploy a new eks/cluster using the same snippet in the readme

Screenshots

n/a

Environment (please complete the following information):

  • OS: linux
  • latest terraform
  • latest aws terraform provider

Additional Context

n/a

v1.259.0 spacelift/admin-stack Error: Inconsistent conditional result types


Describe the Bug

β”‚ Error: Inconsistent conditional result types
β”‚ 
β”‚   on outputs.tf line 8, in output "root_stack":
β”‚    8:   value       = local.enabled && local.create_root_admin_stack ? module.root_admin_stack : {}
β”‚     β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚     β”‚ local.create_root_admin_stack is true
β”‚     β”‚ local.enabled is true
β”‚     β”‚ module.root_admin_stack is object with 2 attributes
β”‚ 
β”‚ The true and false result expressions must have consistent types. The
β”‚ 'true' value includes object attribute "id", which is absent in the 'false'
β”‚ value.
β•΅

Expected Behavior

Run passes

Steps to Reproduce

Steps to reproduce the behavior:

  1. Run within the context of an atmos project's root stack

Screenshots

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • Version v1.259.0

Additional Context

This, however, works just fine:

output "root_stack" {
  description = "The root stack, if enabled and created by this component"
  value       = local.enabled && local.create_root_admin_stack ? module.root_admin_stack : null
  sensitive   = true
}

Add Example Usage

what

  • Add example invocation

why

  • We need this so we can soon enable automated continuous integration testing of module

Cannot add Spacelift context attachments to `spacelift/admin-stack` child stack configuration

Describe the Bug

There is an underlying bug with version 1.4.0 of the module cloudposse/cloud-infrastructure-automation/spacelift//modules/spacelift-stack

The impact of this bug is that it is not possible to add Spacelift context attachments to child stacks created by the spacelift/admin-stack component.

This bug is logged here:
cloudposse/terraform-spacelift-cloud-infrastructure-automation#159

Expected Behavior

Defining a list of context attachments in settings.spacelift.context_attachments should result in the successful attachment of these contexts to child stacks when the relevant spacelift/admin-stack component is applied/deployed.

Steps to Reproduce

  1. Create an example context in the Spacelift Web UI
  2. Ensure that a YAML config file exists that adds a context attachment (with the ID of the context created above)
terraform:
  settings:
    spacelift:
      context_attachments: ["example-context"]
  3. Import this configuration into a stack configuration
  4. Plan the relevant spacelift/admin-stack component for the affected child stack

Screenshots

No response

Environment

No response

Additional Context

Component Version 1.356.0
Underlying module version 1.4.0

Missing "root" alias aws provider definition in aws-sso component


Describe the Bug

All versions of the aws-sso component released from 1.227.0 onwards do not have an alias = "root" aws provider defined in providers.tf, which prevents planning and/or applying changes to the component if any sso assignments have been made in the root account.

Prior to the release of version 1.227.0, the aws-sso component's providers.tf declared this additional aws provider definition, which is used to deploy the sso_account_assignments_root module in main.tf.

Expected Behavior

  • Given that I have defined sso account assignments in the aws-sso component YAML configuration for the account designated as the root account for my organisation ("core-root"),
  • When I plan or apply valid changes made to any of the YAML configuration for the component
  • Then the component should plan successfully

Steps to Reproduce

Steps to reproduce the behavior:

  1. Go to the YAML component configuration for the aws-sso component (either the catalog or where it is used)
  2. Add an account_assignments block for the org root account, typically core-root
  3. Ensure this matches the output from account-map such that the aws-sso local account_assignments_root is non-empty
  4. Run atmos terraform plan aws-sso --stack core-gbl-identity (where this is the env hosting the aws-sso component)
  5. You should see the error:
Error: missing provider provider["registry.terraform.io/hashicorp/aws"].root
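
A sketch of the missing provider definition, based on what the 1.226.0 providers.tf declared; the exact role-derivation wiring is an assumption here, since the real component derives the role via account-map:

```hcl
provider "aws" {
  alias  = "root"
  region = var.region

  # Assume a role in the organization's root account so the
  # sso_account_assignments_root module can manage assignments there.
  assume_role {
    # Hypothetical variable for illustration only.
    role_arn = var.root_account_role_arn
  }
}
```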

Screenshots

N/A

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: Any
  • Atmos component version affected >1.227.0

Additional Context

For reference, the 1.226.0 version of providers.tf, which contains the root-aliased provider, is here.

The 1.227.0 version of providers.tf, which is missing the root-aliased provider, is here.

[dns-delegated] Some options should be per-domain, not per-component

In the dns-delegated component, arguments like dns_private_zone_enabled and certificate_authority_enabled should be set individually per domain, instead of per component and shared by all domains (as they currently are) so that public and private domains can be configured in the same component.

In Cloud Posse's convention-over-configuration usage, the reference architecture expects there to be one and only one dns-delegated component, and for it to have that name. Having multiple dns-delegated components is not easy to deal with and should not be needed.
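
A hypothetical per-domain schema for illustration — placing these attributes inside each zone_config entry is an assumption about the proposed interface, not the component's current one:

```yaml
components:
  terraform:
    dns-delegated:
      vars:
        zone_config:
          # Public zone: no private DNS, no private CA
          - subdomain: public
            zone_name: example.com
            dns_private_zone_enabled: false
          # Private zone configured in the same component
          - subdomain: internal
            zone_name: example.com
            dns_private_zone_enabled: true
            certificate_authority_enabled: true
```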

ecs-service: circuit breaker params are ignored

Describe the Bug

https://github.com/cloudposse/terraform-aws-components/blob/main/modules/ecs-service/main.tf#L259

deployment_controller_type = lookup(local.task, "deployment_controller_type", null) ends up passing null, which does not default to ECS in this conditional: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/blob/main/main.tf#L642. As a result, circuit_breaker_deployment_enabled and circuit_breaker_rollback_enabled never get enabled.

Expected Behavior

...
        task:
          circuit_breaker_deployment_enabled: true
          circuit_breaker_rollback_enabled: true

...should trigger...

      ~ deployment_circuit_breaker {
          ~ enable   = false -> true
          ~ rollback = false -> true
        }

Changing the values should trigger an update, but it requires manually setting deployment_controller_type: ECS.

Steps to Reproduce

Enable the circuit breaker for a task without explicitly setting the deployment controller.
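
One possible fix, sketched against modules/ecs-service/main.tf; defaulting to "ECS" when the task config omits the key is an assumption about the intended behavior:

```hcl
# Fall back to "ECS" instead of null so the downstream conditional in
# terraform-aws-ecs-alb-service-task sees a concrete controller type.
deployment_controller_type = coalesce(
  lookup(local.task, "deployment_controller_type", null),
  "ECS"
)
```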

Screenshots

No response

Environment

No response

Additional Context

No response

Bastion: An argument named "tags" is not expected here

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

β”‚ Error: Unsupported argument
β”‚ 
β”‚   on .terraform/modules/bastion_autoscale_group/main.tf line 244, in resource "aws_autoscaling_group" "default":
β”‚  244:   tags = flatten([
β”‚ 
β”‚ An argument named "tags" is not expected here.

Expected Behavior

Version 1.237.0 fails on a basic run that passed before 1.237.0. That run would probably fail on earlier versions now as well, since the problem appears to be in the underlying autoscale module.

components:
  terraform:
    bastion:
      vars:
        enabled: true
        name: bastion
        instance_type: t3.micro
        inbound_ssh_enabled: true
        associate_public_ip_address: true # deploy to public subnet and associate public IP with instance
        security_group_rules: []

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create a bastion server using the values above
  2. See error

Screenshots

N/A

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: any
  • Version: any

Additional Context

`github-runners` Deprecated `tags` Argument and Attribute

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

β”‚ Warning: Argument is deprecated
β”‚ 
β”‚   with module.autoscale_group.aws_autoscaling_group.default[0],
β”‚   on .terraform/modules/autoscale_group/main.tf line 244, in resource "aws_autoscaling_group" "default":
β”‚  244:   tags = flatten([
β”‚  245:     for key in keys(module.this.tags) :
β”‚  246:     {
β”‚  247:       key                 = key
β”‚  248:       value               = module.this.tags[key]
β”‚  249:       propagate_at_launch = true
β”‚  250:     }
β”‚  251:   ])
β”‚ 
β”‚ Use tag instead
β•΅
β•·
β”‚ Warning: Deprecated attribute
β”‚ 
β”‚   on .terraform/modules/autoscale_group/outputs.tf line 23, in output "autoscaling_group_tags":
β”‚   23:   value       = module.this.enabled ? aws_autoscaling_group.default[0].tags : []
β”‚ 
β”‚ The attribute "tags" is deprecated. Refer to the provider documentation for details.
β”‚ 
β”‚ (and one more similar warning elsewhere)
β•΅

Expected Behavior

No deprecation warnings.
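
For reference, the non-deprecated form in the upstream autoscale module would replace the flattened tags list with dynamic tag blocks, roughly like this (a sketch of the pattern, not the module's actual patch):

```hcl
resource "aws_autoscaling_group" "default" {
  # ... other arguments unchanged ...

  # Repeated `tag` blocks replace the deprecated `tags` argument.
  dynamic "tag" {
    for_each = module.this.tags
    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}
```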

Steps to Reproduce

Steps to reproduce the behavior:

  1. Configure a runner
  2. Run a plan

Screenshots

If applicable, add screenshots or logs to help explain your problem.

N/A

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: Geodesic (Debian) v1.6.0

Additional Context

N/A

Inconsistency in ways how the remote state is pulled

Describe the Feature

Historically, the terraform_remote_state data source was used to pull data from deployed components. However, the code has undergone significant changes in the way this is achieved. Now, this function is carried out by the remote-state module from the registry, with the following source:

module "vpc_flow_logs_bucket" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "0.13.0"

  stack_config_local_path = "/home/user/ws/datameer/infrastructure-atmos-novel/stacks"
  component               = "vpc-flow-logs-bucket"
  environment             = var.vpc_flow_logs_bucket_environment_name
  stage                   = var.vpc_flow_logs_bucket_stage_name

  context = module.this.context
}
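
For contrast, the historical pattern used the terraform_remote_state data source directly, roughly like this; the bucket and key naming below are assumptions for illustration:

```hcl
data "terraform_remote_state" "vpc_flow_logs_bucket" {
  backend = "s3"

  config = {
    # Hypothetical backend settings; the real values depend on how the
    # state backend is configured for the stack.
    bucket = var.tfstate_bucket
    key    = "vpc-flow-logs-bucket/terraform.tfstate"
    region = var.region
  }
}
```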

We would like to have consistency in this very important aspect given the fact that it's the most reusable feature and fundamental aspect of how the modules' coupling is organized.

The inconsistency also causes misunderstanding of what should be the right way of fixing things in times when errors happen.

Expected Behavior

The remote state is used consistently throughout all the upper-level and embedded modules.

`ecs-service` component can't be applied because it depends on a module which doesn't support count

Describe the Bug

Using the ecs-service component fails because it depends on the datadog-configuration/datadog_keys module, which declares its own providers and therefore can't be used with count.

Expected Behavior

ecs-service module works as shipped

Steps to Reproduce

Just run terraform validate on ecs-service.

Screenshots

image

Additional Context

Maybe the module reference needs a pinned version; this probably worked at one time, but doesn't now.

Spacelift Worker Pool ASG may fail to scale due to ami/instance type mismatch

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

With the recent addition of spacelift worker pool support for arm64, the data source filters that return the ami will sometimes return the arm64 image rather than the x86_64 image. This will result in failures to start new instances in the autoscaling group whenever the arm64 ami is returned first and autoscaling groups will generate errors.

Expected Behavior

Prior to Spacelift's release of the arm64 AMIs, all spacelift worker pool instances launched were x86_64. One expects the same behavior before and after the release of the arm64 images. In the future, when arm64 support for geodesic and terraform module releases is more widespread, some may choose to switch to arm64, but one never wants to flip-flop randomly, as the instance type and the AMI must always match. At present the instance type is set statically in YAML, so this can be forced to x86_64.

Steps to Reproduce

Steps to reproduce the behavior:

  1. At the moment this is being written, all ASGs that fire in aws us-east-1 are probably failing due to the arm64 returning first from the filter
  2. Look at the CloudTrail logs when triggering auto-scaling
  3. Watch the worker pool. You may see active and busy held down at the minimum level while the pending remains high for an hour or more
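
A sketch of pinning the architecture in the AMI data source; the name filter pattern and the owner variable are assumptions for illustration, and only the architecture filter is the point here:

```hcl
data "aws_ami" "spacelift" {
  most_recent = true
  owners      = [var.spacelift_ami_owner] # hypothetical variable

  filter {
    name   = "name"
    values = ["spacelift-*"] # illustrative pattern
  }

  # Pin the architecture so arm64 images are never returned first.
  filter {
    name   = "architecture"
    values = ["x86_64"]
  }
}
```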

Screenshots

Screen Shot 2023-02-25 at 9 37 26 PM

Environment (please complete the following information):

  • Spacelift on Feb 25th 2023
  • Terraform 1.3.8
  • us-east-1 AMIs

Additional Context

[Lambda] null values for var.function_name break the local.cicd_s3_key_format formatting

Describe the Bug

null values for var.function_name break the local.cicd_s3_key_format formatting

Expected Behavior

It uses the module.label.id if no function name was provided.

Move the function_name = coalesce(var.function_name, module.label.id) expression from the module.lambda block into a local, and reuse that local in place of var.function_name in cicd_s3_key_format = var.cicd_s3_key_format != null ? var.cicd_s3_key_format : "stage/${module.this.stage}/lambda/${var.function_name}/%s"
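
The suggested change, sketched (the local name is illustrative):

```hcl
locals {
  # Resolve the effective function name once...
  function_name = coalesce(var.function_name, module.label.id)

  # ...and reuse it so a null var.function_name no longer breaks the format.
  cicd_s3_key_format = var.cicd_s3_key_format != null ? var.cicd_s3_key_format : "stage/${module.this.stage}/lambda/${local.function_name}/%s"
}
```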

Steps to Reproduce

Set a null function_name (or no value) and run a plan.

Screenshots

No response

Environment

No response

Additional Context

No response

Split budgets from account-settings

Describe the Feature

Split budgets out of account-settings into a separate budget component, so it's easier to build various budgeting structures and have these notify various Slack channels (e.g. more high-level for a central team, more fine-grained for specific teams).

Expected Behavior

Be able to send budget alerts to various Slack channels/notification channels with different kinds of budget settings.

Use Case

  • Central team for high-level checks (e.g. finance, platform); more fine-grained for the responsible team
  • Separate, e.g., S3 GB costs from total costs

Describe Ideal Solution

A separate budget component

Alternatives Considered

No response

Additional Context

No response

[aws/kops-aws-platform] chart-repo.tf does not consider if cluster_name is not default

chart-repo.tf sets cluster_name = "${var.region}.${var.zone_name}"; however, when deploying the kops module, there is the ability to set cluster_name_prefix, which overrides the default region name.

Other modules in the root-module set cluster_name the same way. Does it make sense to move the cluster_name_prefix logic to aws/kops-aws-platform, or use some other ENV?
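
One way honoring the prefix could look in chart-repo.tf; the variable name mirrors the kops module, and treating an unset prefix as "use the region" is an assumption:

```hcl
# Fall back to the region when no explicit prefix is set (sketch).
cluster_name = "${coalesce(var.cluster_name_prefix, var.region)}.${var.zone_name}"
```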

`account` component incompatible with `account-map`

Describe the Bug

The most recent update of account-map (#363) is using an output of account called account_info_map that is not present in the current published version of the component.

Expected Behavior

atmos terraform plan -s gbl-root account-map

works

Steps to Reproduce

Steps to reproduce the behavior:

  1. Use the latest published account component and apply it
  2. Use the latest published account-map component plan/apply it
  3. See error
β”‚ Error: Unsupported attribute
β”‚ 
β”‚   on main.tf line 12, in locals:
β”‚   12:   account_info_map = module.accounts.outputs.account_info_map
β”‚     β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚     β”‚ module.accounts.outputs is object with 21 attributes
β”‚ 
β”‚ This object does not have an attribute named "account_info_map".

Additional Context

I know that the latest components are not always published, but it would be great to have them at least work together. I'm happy to help get them merged into the main branch of this repository.

v1.259.0 spacelift/admin-stack: var.spacelift_stack_dependency_enabled is unused

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

spacelift_stack_dependency_enabled is defined, but it is not used.

Expected Behavior

child_stacks.tf likely needs:

  spacelift_stack_dependency_enabled = try(each.value.settings.spacelift.spacelift_stack_dependency_enabled, var.spacelift_stack_dependency_enabled)

Steps to Reproduce

Screenshots

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • v.1.259.0

Additional Context

A required `region` input parameter is missing in `variables.tf` of the module `modules/account-map/iam-roles`

Description

The variable region referenced in a terraform_remote_state data source definition of the module modules/account-map/iam-roles is missing in variables.tf

The modules/account-map/iam-roles module is generally used in many upper level modules:
image

It's used to derive the role (or profile) to assume when configuring the AWS provider.

Expected Behavior

Terraform doesn't trigger an error when running a plan, and the AWS provider is given the right role to assume depending on how account-map is laid out.

Steps to Reproduce

Set up an atmos stack with one of the modules that makes use of iam-roles, then run terraform plan or the atmos wrapper command: atmos terraform plan iam-primary-roles -s gbl-identity
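
A sketch of the missing declaration for modules/account-map/iam-roles/variables.tf (the description text is illustrative):

```hcl
variable "region" {
  type        = string
  description = "AWS region of the backend holding the account-map remote state"
}
```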

Implement ECS Fargate with Atlantis Service task

what

  • Implement an ECS Fargate cluster
  • Deploy one app (atlantis) using the cloudposse/terraform-aws-ecs-* modules.
  • Use codebuild/codepipeline to CI/CD apps (supported by modules already. see below).

why

  • Use Atlantis for terraform GitOps

use-case

  • Deploying geodesic account containers with atlantis

todo

An argument named "db_name" is not expected here.

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

I tried to re-push some code (not even touching the RDS) and I receive this error:

Error: Unsupported argument

  on .terraform/modules/rds_instance/main.tf line 31, in resource "aws_db_instance" "default":
  31:   db_name = var.database_name

Expected Behavior

I changed the Cloud Posse module version to the most recent one; same problem.

Steps to Reproduce

When I run a plan, I receive this error.

Screenshots

Environment (please complete the following information):

I'm using AWS with a remote Terraform state file in an S3 bucket.

datadog-configuration module completely broken

Describe the Bug

It doesn't even come close to working.

main.tf:

  • lines 2,3 are not used. Can be deleted but isn't broken
  • lines 7,8 reference data objects that don't exist

outputs.tf

  • lines 17,22 reference main.tf:7,8 but the names are mismatched
    • again, those lines in main.tf don't work, so even with the naming fixed, these outputs are not working

Expected Behavior

At least for terraform validate to succeed

I don't really know what the intention in this module was, still working through that. I can attempt a fix once I understand what those data objects from main.tf are supposed to do.
