
Terraform module that provisions an S3 bucket to store the `terraform.tfstate` file and a DynamoDB table to lock the state file, preventing concurrent modifications and state corruption.

Home Page: https://cloudposse.com/accelerate

License: Apache License 2.0


terraform-aws-tfstate-backend's Introduction


Terraform module to provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.

The module supports the following:

  1. Forced server-side encryption at rest for the S3 bucket
  2. S3 bucket versioning to allow for Terraform state recovery in the case of accidental deletions and human errors
  3. State locking and consistency checking via DynamoDB table to prevent concurrent operations
  4. DynamoDB server-side encryption

https://www.terraform.io/docs/backends/types/s3.html

NOTE: The operators of the module (IAM users) must have permissions to create S3 buckets and DynamoDB tables when performing terraform plan and terraform apply.
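As a rough, non-authoritative illustration of what those operator permissions might look like (the action list, bucket name, and table name below are example assumptions to adapt to your account):

```hcl
# Hypothetical sketch of an operator policy; bucket and table names are examples.
data "aws_iam_policy_document" "tfstate_operator" {
  statement {
    sid = "ManageStateBucket"
    actions = [
      "s3:CreateBucket",
      "s3:PutBucketVersioning",
      "s3:PutEncryptionConfiguration",
      "s3:PutBucketPolicy",
      "s3:ListBucket",
      "s3:GetObject",
      "s3:PutObject",
    ]
    resources = [
      "arn:aws:s3:::eg-test-terraform-state",
      "arn:aws:s3:::eg-test-terraform-state/*",
    ]
  }

  statement {
    sid = "ManageLockTable"
    actions = [
      "dynamodb:CreateTable",
      "dynamodb:DescribeTable",
      "dynamodb:GetItem",
      "dynamodb:PutItem",
      "dynamodb:DeleteItem",
    ]
    resources = ["arn:aws:dynamodb:*:*:table/eg-test-terraform-state-lock"]
  }
}
```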

NOTE: This module cannot be used to apply changes to the mfa_delete feature of the bucket. Changes regarding mfa_delete can only be made manually using the root credentials with MFA of the AWS Account where the bucket resides. Please see: hashicorp/terraform-provider-aws#629

Tip

👽 Use Atmos with Terraform

Cloud Posse uses atmos to easily orchestrate multiple environments using Terraform.
Works with GitHub Actions, Atlantis, or Spacelift.

Watch demo of using Atmos with Terraform
Example of running atmos to manage infrastructure from our Quick Start tutorial.

Usage

Create

Follow this procedure just once to create your deployment.

  1. Add the terraform_state_backend module to your main.tf file. The comment will help you remember to follow this procedure in the future:

    # You cannot create a new backend by simply defining this and then
    # immediately proceeding to "terraform apply". The S3 backend must
    # be bootstrapped according to the simple yet essential procedure in
    # https://github.com/cloudposse/terraform-aws-tfstate-backend#usage
    module "terraform_state_backend" {
      source = "cloudposse/tfstate-backend/aws"
      # Cloud Posse recommends pinning every module to a specific version
      # version     = "x.x.x"
      namespace  = "eg"
      stage      = "test"
      name       = "terraform"
      attributes = ["state"]
    
      terraform_backend_config_file_path = "."
      terraform_backend_config_file_name = "backend.tf"
      force_destroy                      = false
    }
    
    # Your Terraform configuration
    module "another_module" {
      source = "....."
    }

    Module inputs terraform_backend_config_file_path and terraform_backend_config_file_name control the name of the backend definition file. Note that when terraform_backend_config_file_path is empty (the default), no file is created.

  2. terraform init. This downloads Terraform modules and providers.

  3. terraform apply -auto-approve. This creates the state bucket and DynamoDB locking table, along with anything else you have defined in your *.tf file(s). At this point, the Terraform state is still stored locally.

    Module terraform_state_backend also creates a new backend.tf file that defines the S3 state backend. For example:

    backend "s3" {
      region         = "us-east-1"
      bucket         = "< the name of the S3 state bucket >"
      key            = "terraform.tfstate"
      dynamodb_table = "< the name of the DynamoDB locking table >"
      profile        = ""
      role_arn       = ""
      encrypt        = true
    }

    Henceforth, Terraform will also read this newly-created backend definition file.

  4. terraform init -force-copy. Terraform detects that you want to move your Terraform state to the S3 backend and, per -force-copy, does so without prompting. Now the state is stored in the S3 bucket, and the DynamoDB table will be used to lock the state to prevent concurrent modification.

This concludes the one-time preparation. Now you can extend and modify your Terraform configuration as usual.

Destroy

Follow this procedure to delete your deployment.

  1. In main.tf, change the terraform_state_backend module arguments as follows:
     module "terraform_state_backend" {
       # ...
       terraform_backend_config_file_path = ""
       force_destroy                      = true
     }
  2. terraform apply -target module.terraform_state_backend -auto-approve. This implements the above modifications by deleting the backend.tf file and enabling deletion of the S3 state bucket.
  3. terraform init -force-copy. Terraform detects that you want to move your Terraform state from the S3 backend to local files and, per -force-copy, does so without prompting. Now the state is once again stored locally and the S3 state bucket can be safely deleted.
  4. terraform destroy. This deletes all resources in your deployment.
  5. Examine local state file terraform.tfstate to verify that it contains no resources.


Bucket Replication (Disaster Recovery)

To enable S3 bucket replication in this module, set s3_replication_enabled to true and populate s3_replica_bucket_arn with the ARN of an existing bucket.

module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"
  namespace  = "eg"
  stage      = "test"
  name       = "terraform"
  attributes = ["state"]

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false

  s3_replication_enabled = true
  s3_replica_bucket_arn  = "arn:aws:s3:::eg-test-terraform-tfstate-replica"
}

Important

In Cloud Posse's examples, we avoid pinning modules to specific versions to prevent discrepancies between the documentation and the latest released versions. However, for your own projects, we strongly advise pinning each module to the exact version you're using. This practice ensures the stability of your infrastructure. Additionally, we recommend implementing a systematic approach for updating versions to avoid unexpected changes.

Makefile Targets

Available targets:

  help                                Help screen
  help/all                            Display help for all targets
  help/short                          This help short screen
  lint                                Lint terraform code

Requirements

Name Version
terraform >= 1.1.0
aws >= 4.9.0
local >= 2.0
time >= 0.7.1

Providers

Name Version
aws >= 4.9.0
local >= 2.0
time >= 0.7.1

Modules

Name Source Version
bucket_label cloudposse/label/null 0.25.0
dynamodb_table_label cloudposse/label/null 0.25.0
replication_label cloudposse/label/null 0.25.0
this cloudposse/label/null 0.25.0

Resources

Name Type
aws_dynamodb_table.with_server_side_encryption resource
aws_iam_policy.replication resource
aws_iam_role.replication resource
aws_iam_role_policy_attachment.replication resource
aws_s3_bucket.default resource
aws_s3_bucket_acl.default resource
aws_s3_bucket_logging.default resource
aws_s3_bucket_ownership_controls.default resource
aws_s3_bucket_policy.default resource
aws_s3_bucket_public_access_block.default resource
aws_s3_bucket_replication_configuration.replication resource
aws_s3_bucket_server_side_encryption_configuration.default resource
aws_s3_bucket_versioning.default resource
local_file.terraform_backend_config resource
time_sleep.wait_for_aws_s3_bucket_settings resource
aws_iam_policy_document.aggregated_policy data source
aws_iam_policy_document.bucket_policy data source
aws_iam_policy_document.replication data source
aws_iam_policy_document.replication_sts data source
aws_region.current data source

Inputs

Name Description Type Default Required
acl The canned ACL to apply to the S3 bucket string "private" no
additional_tag_map Additional key-value pairs to add to each map in tags_as_list_of_maps. Not added to tags or id.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
map(string) {} no
arn_format ARN format to be used. May be changed to support deployment in GovCloud/China regions. string "arn:aws" no
attributes ID element. Additional attributes (e.g. workers or cluster) to add to id,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the delimiter
and treated as a single ID element.
list(string) [] no
billing_mode DynamoDB billing mode string "PAY_PER_REQUEST" no
block_public_acls Whether Amazon S3 should block public ACLs for this bucket bool true no
block_public_policy Whether Amazon S3 should block public bucket policies for this bucket bool true no
bucket_enabled Whether to create the S3 bucket. bool true no
bucket_ownership_enforced_enabled Set bucket object ownership to "BucketOwnerEnforced". Disables ACLs. bool true no
context Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as null to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
any
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
no
deletion_protection_enabled A boolean that enables deletion protection for DynamoDB table bool false no
delimiter Delimiter to be used between ID elements.
Defaults to - (hyphen). Set to "" to use no delimiter at all.
string null no
descriptor_formats Describe additional descriptors to be output in the descriptors output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
{<br> format = string<br> labels = list(string)<br>}
(Type is any so the map values can later be enhanced to provide additional options.)
format is a Terraform format string to be passed to the format() function.
labels is a list of labels, in order, to pass to format() function.
Label values will be normalized before being passed to format() so they will be
identical to how they appear in id.
Default is {} (descriptors output will be empty).
any {} no
dynamodb_enabled Whether to create the DynamoDB table. bool true no
dynamodb_table_name Override the name of the DynamoDB table which defaults to using module.dynamodb_table_label.id string null no
enable_point_in_time_recovery Enable DynamoDB point-in-time recovery bool true no
enable_public_access_block Enable Bucket Public Access Block bool true no
enabled Set to false to prevent the module from creating any resources bool null no
environment ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' string null no
force_destroy A boolean that indicates the S3 bucket can be destroyed even if it contains objects. These objects are not recoverable bool false no
id_length_limit Limit id to this many characters (minimum 6).
Set to 0 for unlimited length.
Set to null for keep the existing setting, which defaults to 0.
Does not affect id_full.
number null no
ignore_public_acls Whether Amazon S3 should ignore public ACLs for this bucket bool true no
kms_master_key_id AWS KMS master key ID used for the SSE-KMS encryption.
This can only be used when you set the value of sse_algorithm as aws:kms.
string null no
label_key_case Controls the letter case of the tags keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the tags input.
Possible values: lower, title, upper.
Default value: title.
string null no
label_order The order in which the labels (ID elements) appear in the id.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
list(string) null no
label_value_case Controls the letter case of ID elements (labels) as included in id,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the tags input.
Possible values: lower, title, upper and none (no transformation).
Set this to title and set delimiter to "" to yield Pascal Case IDs.
Default value: lower.
string null no
labels_as_tags Set of labels (ID elements) to include as tags in the tags output.
Default is to include all labels.
Tags with empty values will not be included in the tags output.
Set to [] to suppress all generated tags.
Notes:
The value of the name tag, if included, will be the id, not the name.
Unlike other null-label inputs, the initial setting of labels_as_tags cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
set(string)
[
"default"
]
no
logging Destination (S3 bucket name and prefix) for S3 Server Access Logs for the S3 bucket.
list(object({
target_bucket = string
target_prefix = string
}))
[] no
mfa_delete A boolean that indicates that versions of S3 objects can only be deleted with MFA. ( Terraform cannot apply changes of this value; hashicorp/terraform-provider-aws#629 ) bool false no
name ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a tag.
The "name" tag is set to the full id string. There is no tag with the value of the name input.
string null no
namespace ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique string null no
permissions_boundary ARN of the policy that is used to set the permissions boundary for the IAM replication role string "" no
prevent_unencrypted_uploads Prevent uploads of unencrypted objects to S3 bool true no
profile AWS profile name as set in the shared credentials file string "" no
read_capacity DynamoDB read capacity units when using provisioned mode number 5 no
regex_replace_chars Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, "/[^a-zA-Z0-9-]/" is used to remove all characters other than hyphens, letters and digits.
string null no
restrict_public_buckets Whether Amazon S3 should restrict public bucket policies for this bucket bool true no
role_arn The role to be assumed string "" no
s3_bucket_name S3 bucket name. If not provided, the name will be generated from the context by the label module. string "" no
s3_replica_bucket_arn The ARN of the S3 replica bucket (destination) string "" no
s3_replication_enabled Set this to true and specify s3_replica_bucket_arn to enable replication bool false no
source_policy_documents List of IAM policy documents (in JSON format) that are merged together into the generated S3 bucket policy.
Statements must have unique SIDs.
Statement having SIDs that match policy SIDs generated by this module will override them.
list(string) [] no
sse_encryption The server-side encryption algorithm to use.
Valid values are AES256, aws:kms, and aws:kms:dsse.
string "AES256" no
stage ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' string null no
tags Additional tags (e.g. {'BusinessUnit': 'XYZ'}).
Neither the tag keys nor the tag values will be modified by this module.
map(string) {} no
tenant ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for string null no
terraform_backend_config_file_name (Deprecated) Name of terraform backend config file to generate string "terraform.tf" no
terraform_backend_config_file_path (Deprecated) Directory for the terraform backend config file, usually .. The default is to create no file. string "" no
terraform_backend_config_template_file (Deprecated) The path to the template used to generate the config file string "" no
terraform_state_file The path to the state file inside the bucket string "terraform.tfstate" no
terraform_version The minimum required terraform version string "1.0.0" no
write_capacity DynamoDB write capacity units when using provisioned mode number 5 no

Outputs

Name Description
dynamodb_table_arn DynamoDB table ARN
dynamodb_table_id DynamoDB table ID
dynamodb_table_name DynamoDB table name
s3_bucket_arn S3 bucket ARN
s3_bucket_domain_name S3 bucket domain name
s3_bucket_id S3 bucket ID
s3_replication_role_arn The ARN of the IAM Role created for replication, if enabled.
terraform_backend_config Rendered Terraform backend config file

Related Projects

Check out these related projects.

Tip

Use Terraform Reference Architectures for AWS

Use Cloud Posse's ready-to-go terraform architecture blueprints for AWS to get up and running quickly.

✅ We build it together with your team.
✅ Your team owns everything.
✅ 100% Open Source and backed by fanatical support.

Request Quote

📚 Learn More

Cloud Posse is the leading DevOps Accelerator for funded startups and enterprises.

Your team can operate like a pro today.

Ensure that your team succeeds by using Cloud Posse's proven process and turnkey blueprints. Plus, we stick around until you succeed.

Day-0: Your Foundation for Success

  • Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
  • Deployment Strategy. Adopt a proven deployment strategy with GitHub Actions, enabling automated, repeatable, and reliable software releases.
  • Site Reliability Engineering. Gain total visibility into your applications and services with Datadog, ensuring high availability and performance.
  • Security Baseline. Establish a secure environment from the start, with built-in governance, accountability, and comprehensive audit logs, safeguarding your operations.
  • GitOps. Empower your team to manage infrastructure changes confidently and efficiently through Pull Requests, leveraging the full power of GitHub Actions.

Request Quote

Day-2: Your Operational Mastery

  • Training. Equip your team with the knowledge and skills to confidently manage the infrastructure, ensuring long-term success and self-sufficiency.
  • Support. Benefit from a seamless communication over Slack with our experts, ensuring you have the support you need, whenever you need it.
  • Troubleshooting. Access expert assistance to quickly resolve any operational challenges, minimizing downtime and maintaining business continuity.
  • Code Reviews. Enhance your team's code quality with our expert feedback, fostering continuous improvement and collaboration.
  • Bug Fixes. Rely on our team to troubleshoot and resolve any issues, ensuring your systems run smoothly.
  • Migration Assistance. Accelerate your migration process with our dedicated support, minimizing disruption and speeding up time-to-value.
  • Customer Workshops. Engage with our team in weekly workshops, gaining insights and strategies to continuously improve and innovate.

Request Quote

✨ Contributing

This project is under active development, and we encourage contributions from our community.

Many thanks to our outstanding contributors:

For ๐Ÿ› bug reports & feature requests, please use the issue tracker.

In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.

  1. Review our Code of Conduct and Contributor Guidelines.
  2. Fork the repo on GitHub
  3. Clone the project to your own machine
  4. Commit changes to your own branch
  5. Push your work back up to your fork
  6. Submit a Pull Request so that we can review your changes

NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!

🌎 Slack Community

Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.

📰 Newsletter

Sign up for our newsletter and join 3,000+ DevOps engineers, CTOs, and founders who get insider access to the latest DevOps trends, so you can always stay in the know. Dropped straight into your inbox every week, and usually a 5-minute read.

📆 Office Hours

Join us every Wednesday via Zoom for your weekly dose of insider DevOps trends, AWS news and Terraform insights, all sourced from our SweetOps community, plus a live Q&A that you can't find anywhere else. It's FREE for everyone!

License

License

Preamble to the Apache License, Version 2.0

Complete license is available in the LICENSE file.

Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.

Trademarks

All other trademarks referenced herein are the property of their respective owners.


Copyright © 2017-2024 Cloud Posse, LLC


terraform-aws-tfstate-backend's Issues

Question on statefile for the backend

Is there a requirement that the tfstate file for the backend resources needs to be part of the deployment that uses the backend? I don't see anything in the docs about this, but whenever I deploy with the S3/DynamoDB backend, Terraform always tries to destroy the S3 state bucket and DynamoDB lock table. I wouldn't think they are necessarily coupled, but maybe it's a requirement of Terraform? It makes for very messy deletion because you're deleting backend resources at the same time as everything else.

Reimplement with "cloudposse/terraform-aws-s3-bucket" to standardize parameters/features

Describe the Feature

I am wondering if there is a specific reason why this https://github.com/cloudposse/terraform-aws-tfstate-backend is not implemented with https://github.com/cloudposse/terraform-aws-s3-bucket? The parameters and features of both also differ.

It would be awesome if parameters (e.g. s3_object_ownership = "BucketOwnerEnforced" vs bucket_ownership_enforced_enabled = true) and features (e.g. lifecycle_configuration_rules) of both would be standardized. Reimplementing one with the other will likely prevent further drift.

Expected Behavior

Standardized parameters/features of similar TF modules.

Use Case

/

Describe Ideal Solution

/

Alternatives Considered

No response

Additional Context

No response

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

Renovate tried to run on this repository, but found these problems.

  • WARN: Base branch does not exist - skipping

Edited/Blocked

These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.

Detected dependencies

Branch main
terraform
main.tf
  • cloudposse/label/null 0.25.0
  • cloudposse/label/null 0.25.0
replication.tf
  • cloudposse/label/null 0.25.0
versions.tf
  • aws >= 4.9.0
  • local >= 2.0
  • hashicorp/terraform >= 1.1.0
Branch release/v0
terraform
main.tf
  • cloudposse/label/null 0.25.0
  • cloudposse/s3-log-storage/aws 1.3.1
versions.tf
  • aws >= 4.9.0
  • local >= 1.3
  • hashicorp/terraform >= 1.1.0

  • Check this box to trigger a request for Renovate to run again on this repository

invalid or unknown key: server_side_encryption

variable "region" {
  default = "ap-southeast-1"
}

provider "aws" {
  region = "${var.region}"
}

module "terraform_state_backend" {
  source = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=master"
  namespace = "master"
  stage = "master"
  name = "mediapop"
  region = "${var.region}"
}

gives:

Error: module.terraform_state_backend.aws_dynamodb_table.default: : invalid or unknown key: server_side_encryption

Remove the `read_capacity` and `write_capacity` from lifecycle change ignore

The DynamoDB resource recommends using lifecycle ignores for read and write capacity values when using autoscaling. However, since we're not using autoscaling here, this makes it impossible to update the DynamoDB read/write capacity in follow-up operations after the table has been created.

So either remove these parameters from the lifecycle ignore_changes, or variablize them so that consumers can opt out of autoscaling and set these explicitly.
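For context, the module's documented inputs (billing_mode, read_capacity, write_capacity) already allow opting into provisioned mode; a hedged sketch, with example values:

```hcl
module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # version = "x.x.x"

  namespace = "eg"
  stage     = "test"
  name      = "terraform"

  # Opt out of the default PAY_PER_REQUEST billing; capacity values are examples
  billing_mode   = "PROVISIONED"
  read_capacity  = 10
  write_capacity = 10
}
```

Whether capacity changes take effect after the table exists depends on the lifecycle ignore discussed in this issue.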

Question: Config output

Hi,

I started using your Terraform module a few days ago. I noticed that after apply, a backend configuration file is rendered. How can I convert it to .hcl.json format? Can you help me?

terraform apply completes successfully with "Warning: Argument is deprecated"

Describe the Bug

terraform apply completed successfully. However, there is a warning in the log that will need attention in the future:

╷
│ Warning: Argument is deprecated
│
│ with module.terraform_state_backend.module.log_storage.aws_s3_bucket.default,
│ on .terraform/modules/terraform_state_backend.log_storage/main.tf line 1, in resource "aws_s3_bucket" "default":
│ 1: resource "aws_s3_bucket" "default" {
│
│ Use the aws_s3_bucket_logging resource instead
│
│ (and 21 more similar warnings elsewhere)

Expected Behavior

No deprecated argument warning.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Add the below to my main.tf
module "terraform_state_backend" {
  source      = "cloudposse/tfstate-backend/aws"
  version     = "0.38.1"
  namespace   = "versent-digital-dev-kit"
  stage       = var.aws_region
  name        = "terraform"
  attributes  = ["state"]

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false
}
  1. Run 'terraform apply -auto-approve'
  2. See warning in console output

Screenshots


Invalid terraform when assume_role is set

Describe the Bug

When assume_role is set, the generated backend configuration is not valid. E.g.

module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  version     = "1.4.1"
  namespace  = "test"
  name       = "tf-state"

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  role_arn = "<my role>"
}

results in the following configuration

terraform {
  required_version = ">= 1.0.0"

  backend "s3" {
    region  = "eu-central-1"
    bucket  = "test-tf-state"
    key     = "terraform.tfstate"
    profile = ""
    encrypt = "true"

    assume_role {
      role_arn = "<my role>"
    }

    dynamodb_table = "test-tf-state-lock"
  }
}

Applying this result in

Unsupported block type
│
│   on backend.tf line 11, in terraform:
│   11:     assume_role {
│
│ Blocks of type "assume_role" are not expected here. Did you mean to define argument "assume_role"? If so, use the equals
│ sign to assign it a value.

The generated code should actually be

terraform {
  required_version = ">= 1.0.0"

  backend "s3" {
    [...]
    assume_role = {
      role_arn = "<my role>"
    }
    [...]
}

i.e. an equal sign is missing. See https://developer.hashicorp.com/terraform/language/settings/backends/s3#assume-role-configuration.

Expected Behavior

I expect correct backend code to be generated.

Steps to Reproduce

See code example above.

Screenshots

No response

Environment

No response

Additional Context

No response

Feature Request - Allow a parameter for the name of DynamoDB table

Describe the Feature

By default, the name of the DynamoDB Table is lock

Use Case

It will be good to have a customized name just in case I have a different environment and I want to name my table differently. (I know we can use a common table but good to have an option)

Describe Ideal Solution

A variable like dynamodb_table_name = "terraform-lock"

Alternatives Considered

After running terraform init, I went to file .terraform/modules/terraform_state_backend/main.tf and added name = "terraform" to this block

module "dynamodb_table_label" {
  source     = "cloudposse/label/null"
  version    = "0.22.0"
  attributes = compact(concat(var.attributes, ["lock"]))
  context    = module.this.context
  name       = "terraform"
}
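Note that the module's documented inputs now include dynamodb_table_name (default null), which covers this request; a hedged sketch, with an example name:

```hcl
module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # version = "x.x.x"

  namespace = "eg"
  stage     = "test"
  name      = "terraform"

  # Override the generated table name; "terraform-lock" is an example
  dynamodb_table_name = "terraform-lock"
}
```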

Add delete_protection to DynamoDB table

Describe the Feature

This TF module has a force_destroy variable that can prevent accidental S3 bucket deletions. The DynamoDB table also supports a similar flag deletion_protection_enabled that prevents accidental deletions.

Because the purpose is the same, I would suggest reusing the variable also for this case by adding the following into the aws_dynamodb_table:

deletion_protection_enabled = !var.force_destroy

Expected Behavior

DynamoDB deletion_protection_enabled should also be enabled by default.

Use Case

Prevent accidental deletions.

Describe Ideal Solution

deletion_protection_enabled = !var.force_destroy

Alternatives Considered

No response

Additional Context

No response
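The deletion_protection_enabled input listed in the Inputs section (default false) can already be set explicitly; a hedged sketch:

```hcl
module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # version = "x.x.x"

  namespace = "eg"
  stage     = "test"
  name      = "terraform"

  # Both would need to be flipped before a destroy will succeed
  force_destroy               = false
  deletion_protection_enabled = true
}
```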

Bucket replication managed by this module

Have a question? Please checkout our Slack Community or visit our Slack Archive.

Slack Community

Describe the Feature

Currently the bucket for replicating tf state as backup must be created manually. It would be really useful to have the module create it, since it is inherently tied to the backend setup.

Expected Behavior

When bucket replication is turned on, the module requires another region and a bucket name to be specified, and it creates that bucket in that region.

Use Case

Currently I have to create a replication bucket myself and then give its ARN to the backend module. This is extra work that could easily be encapsulated in this module.
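
A hypothetical interface for this feature might look like the following; the provider alias wiring and the idea that the module creates the replica bucket itself are assumptions, not existing module behavior:

```hcl
# The module would need a second provider for the replica region.
provider "aws" {
  alias  = "replica"
  region = "eu-west-1" # hypothetical replica region
}

module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  providers = {
    aws.replica = aws.replica # assumed provider wiring
  }

  s3_replication_enabled = true
  # Instead of accepting s3_replica_bucket_arn, the module would create
  # the replica bucket itself in the aliased provider's region.
}
```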

AWS Provider v3 support

Describe the Feature

AWS Provider v3 support

Expected Behavior

We can use aws provider v3

Use Case

The provider update is needed for security and compatibility with other modules.

Describe Ideal Solution


Alternatives Considered

Stay on AWS provider 2.x.

Additional Context

Upgrade Blocker

AWS provider 2.x and 3.x are not compatible. In terraform-aws-tfstate-backend, the following error occurs:

$ terraform plan -var-file="fixtures.us-west-1.tfvars"
Warning: Value for undeclared variable

The root module does not declare a variable named "s3_bucket_name" but a value
was found in file "fixtures.us-west-1.tfvars". To use this value, add a
"variable" block to the configuration.

Using a variables file to set an undeclared variable is deprecated and will
become an error in a future release. If you wish to provide certain "global"
settings to all configurations in your organization, use TF_VAR_...
environment variables to set these instead.


Error: Computed attribute cannot be set

  on ../../main.tf line 128, in resource "aws_s3_bucket" "default":
 128:   region        = var.region

This behavior change is documented in https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-3-upgrade#region-attribute-is-now-read-only.

Therefore, I suspect we cannot support both AWS provider v2 and v3 at the same time.
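
For reference, the v3 upgrade guide's fix is to drop the now read-only attribute; a sketch of the change in main.tf (`local.bucket_name` is assumed from the module's naming logic):

```hcl
resource "aws_s3_bucket" "default" {
  bucket = local.bucket_name
  # region = var.region  # removed: read-only under AWS provider v3;
  #                      # the bucket is created in the provider's region
}
```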

Add support for multiple terraform backend config files

Describe the Feature

Terraform S3 backend allows multiple state files to be stored in the same S3 bucket and with same DynamoDB table.
I would like to have a convenience feature provided by this module to generate multiple terraform backend config files at once with different values for different slices of the infrastructure.

Expected Behavior

Accept a list of options for additional backend config files, which are rendered as outputs and/or local files.

Use Case

Hashicorp recommends splitting terraform config into separate root modules to manage logically grouped slices of infrastructure independently. E.g., a slice managing infrastructure-wide concerns like networking, Vault, and Consul clusters would be separate from the infrastructure for one application, which would in turn be separate from the infrastructure for another application.

For such slices of the infrastructure it would be preferable to use the same S3 bucket and lock table. I think it makes sense to manage backends for those slices within the same module.

Describe Ideal Solution

Additional input for the module that probably looks something like this:

 terraform_backend_extra_configs = [
  {
    # required. Can uniqueness be validated between all values?
    # using context for the default key value probably better not to be supported 
    terraform_state_file = "alternate.tfstate"

    # terraform version, region, bucket, dynamodb and encrypt values are same as for "terraform_backend_config"

    # controls local file output, creates file if path not empty
    terraform_backend_config_file_path = "../alternate-path"
    terraform_backend_config_file_name = "backend.tf"

    # omitted values should default to vars used by current "terraform_backend_config" template
    # role_arn = ""
    # profile = ""
    # namespace = ""
    # stage = ""
    # environment = ""
    # name = ""
 
    # optionally specify namespace, stage, environment and name via context.
    context = module.alternate_backend_label.context
  }
]

Alternatives Considered

My own template file resource that duplicates behavior of "terraform_backend_config" in this module could do the same.

Probably a better approach than the one I suggested would be to extract the backend config template into a submodule of this module to allow independent backend file generation. This would take more effort, but it would also be better from a maintenance perspective, I think.

Additional Context

Sample HCL for how this feature could be used:

module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"
  context = module.this.context

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false

  terraform_backend_extra_configs = [
    {
      # required. Can uniqueness be validated between all values?
      terraform_state_file = "${module.eg_app_dev_tfstate_backend_label.id}.tfstate"

      # terraform version, region, bucket, dynamodb and encrypt values are same as for "terraform_backend_config"

      # controls local file output, creates file if path not empty
      terraform_backend_config_file_path = "../app/dev"
      terraform_backend_config_file_name = "backend.tf"

      # omitted values default to vars used by current "terraform_backend_config" template
      # role_arn = ""
      # profile = ""
      # namespace = ""
      # stage = ""
      # environment = ""
      # name = ""
      role_arn = aws_iam_role.eg_app_dev_backend.arn

      # optionally specify namespace, stage, environment and name via context?
      context = module.eg_app_dev_backend_label.context
    }
  ]
}

module "eg_app_dev_backend_label" {
  source  = "cloudposse/label/null"
  # version     = "x.x.x"

  environment = "dev"

  context = module.this.context
}

module "eg_app_dev_tfstate_backend_label" {
  source  = "cloudposse/label/null"
  # version     = "x.x.x"

  delimiter = "/"

  context = module.eg_app_dev_backend_label.context
}

resource "aws_iam_role" "eg_app_dev_backend" {
  assume_role_policy = ""
}

resource "aws_iam_policy" "eg_app_dev_backend" {
  name        = module.eg_app_dev_backend_label.id
  description = "Grants access to Terraform S3 backend store bucket and DynamoDB locking table"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = "s3:ListBucket",
        Resource = module.terraform_state_backend.s3_bucket_arn
      },
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "${module.terraform_state_backend.s3_bucket_arn}/${module.eg_app_dev_tfstate_backend_label.id}.tfstate"
      },
      {
        Effect = "Allow"
        Action = [
          "dynamodb:GetItem",
          "dynamodb:PutItem",
          "dynamodb:DeleteItem"
        ]
        Resource = module.terraform_state_backend.dynamodb_table_arn
      },
    ]
  })
  tags = module.eg_app_dev_backend_label.tags
}

resource "aws_iam_role_policy_attachment" "eg_app_dev_backend" {
  policy_arn = aws_iam_policy.eg_app_dev_backend.arn
  role = aws_iam_role.eg_app_dev_backend.id
}

Breaking changes ahead?

With the new AWS provider major version 2.0, should we expect breaking changes soon when using source = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=master"?

Using this module without specifying an external context label module generates invalid resource names

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

When creating buckets with replication without specifying an external context label (note it's not mandatory in this module), like this:

data "aws_caller_identity" "current" {}

locals {
  default_tags = {
    "omd_environment" : var.environment,
    "creator_arn" : data.aws_caller_identity.current.arn,
  }
}

module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "v0.38.1"

  providers = {
    aws = aws.one
  }

  s3_bucket_name                = var.bucket_name
  dynamodb_table_name           = var.dynamodb_table_name
  dynamodb_enabled              = true
  enable_server_side_encryption = true
  billing_mode                  = "PAY_PER_REQUEST"

  force_destroy          = true
  s3_replication_enabled = true
  s3_replica_bucket_arn  = module.terraform_state_backend_replication.s3_bucket_arn
  tags                   = local.default_tags

}

module "terraform_state_backend_replication" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "v0.38.1"

  providers = {
    aws = aws.other
  }

  s3_bucket_name   = "${var.bucket_name}-replica"
  force_destroy    = true
  dynamodb_enabled = false
  tags             = local.default_tags

}

some resource names are being evaluated to invalid strings:

  + resource "aws_iam_role" "replication" {
      + arn                   = (known after apply)
...
      + name                  = "-replication"
...
    }
  + resource "aws_iam_policy" "replication" {
...
      + name      = "-replication"
...
    }
  dynamic "replication_configuration" {
    for_each = var.s3_replication_enabled ? toset([var.s3_replica_bucket_arn]) : []
    content {
      role = aws_iam_role.replication[0].arn

      rules {
        id     = module.this.id
        ...

Expected Behavior

Replication resource names use the same logic as the bucket name:

  bucket_name = var.s3_bucket_name != "" ? var.s3_bucket_name : module.this.id
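
A sketch of the expected fix, applying the same fallback to the replication role and policy names (the `replication_name` local is hypothetical):

```hcl
locals {
  # Same fallback logic the bucket name already uses
  replication_name = var.s3_bucket_name != "" ? var.s3_bucket_name : module.this.id
}

resource "aws_iam_role" "replication" {
  count = var.s3_replication_enabled ? 1 : 0
  name  = "${local.replication_name}-replication"
  # ... assume_role_policy etc. unchanged
}
```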

terraform destroy needs explanation


Describe the Bug

The module docs should explain how to cleanup.

Expected Behavior

The module docs would say something like this (I'm still confirming the details, but I don't want to lose this issue report):

Destroy

  1. comment out the "backend" block
  2. move the state storage back to local: terraform init
  3. make the state bucket deletable even if there are multiple versions of state stored: add force_destroy=true to your terraform_state_backend then terraform apply
  4. terraform destroy

Warning: while the state is local, the state in the bucket still exists; others (or CI/CD!) should not modify it.
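
The steps above, sketched as the configuration changes involved (details still being confirmed, per the issue):

```hcl
# Step 1: comment out the backend block so `terraform init` copies state back to local
# terraform {
#   backend "s3" { ... }
# }

module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # version = "x.x.x"

  # Step 3: allow deleting the bucket even though it still holds old state versions
  force_destroy = true
}
# Then: terraform apply, followed by terraform destroy
```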

Variable `prevent_unencrypted_uploads` is possibly poorly named

Describe the Feature

Since apply_server_side_encryption_by_default is always set, and the EnforceTlsRequestsOnly policy is created, uploads will always be encrypted in transit and at rest.
The way I read the code, prevent_unencrypted_uploads simply enforces that all uploads must specify an at-rest encryption key, and therefore bypass the default encryption.
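
For context, the enforcement this variable controls is a bucket policy of roughly this shape (a simplified sketch, not the module's exact policy):

```hcl
data "aws_iam_policy_document" "prevent_unencrypted_uploads" {
  statement {
    sid       = "DenyIncorrectEncryptionHeader"
    effect    = "Deny"
    actions   = ["s3:PutObject"]
    resources = ["${aws_s3_bucket.default.arn}/*"]

    principals {
      type        = "*"
      identifiers = ["*"]
    }

    # Deny uploads that do not explicitly set a matching encryption header,
    # which is why default (SSE-S3) uploads without a header are rejected.
    condition {
      test     = "StringNotEquals"
      variable = "s3:x-amz-server-side-encryption"
      values   = ["AES256", "aws:kms"]
    }
  }
}
```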

Expected Behavior

Would a better name be prevent_default_encryption? Or modify the documentation in the README to better describe the functionality. The way it reads now, it sounds like without prevent_unencrypted_uploads, uploads would be unencrypted.

Use Case

When using the module, I couldn't upload files to the Terraform state bucket without specifying a key (including via the AWS Console). When I disabled prevent_unencrypted_uploads, I confirmed that the file I uploaded was indeed encrypted with the default encryption key (SSE-S3).

Describe Ideal Solution

Either change the variable name, or the accompanying documentation in the README so that new users of the module don't need to read the code in order to understand the behavior.

Alternatives Considered

No response

Additional Context

No response

Flag to only create s3 bucket and forego dynamodb creation in order to save money


Describe the Feature

An s3 bucket is much cheaper than dynamodb. For small projects with a single developer, it would be nice to only create the s3 bucket and forego the more expensive dynamodb database.

Expected Behavior

Flag to only create s3 bucket and forego dynamodb creation in order to save money

Perhaps var.enable_dynamodb and default it to true.
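
A sketch of how such a flag could gate the table (the variable name mirrors the request and is hypothetical here):

```hcl
variable "enable_dynamodb" {
  type        = bool
  default     = true
  description = "Set to false to skip creating the DynamoDB lock table"
}

resource "aws_dynamodb_table" "with_server_side_encryption" {
  count = var.enable_dynamodb ? 1 : 0
  # ... existing arguments unchanged ...
}
```

Note that the generated backend config would then need to omit dynamodb_table, which disables state locking.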

Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.

Describe the Bug

Getting an error Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again. when running terraform apply -auto-approve.

Also, it asks me to enter a value for the S3 region even though it's already in vars. Would be nice to automate this step as well :)

Environment:

  • OS version: [macOS: Big Sur]
  • Terraform version [Terraform v0.12.28
    +provider.aws v2.70.0
    +provider.local v1.4.0
    +provider.null v2.1.2
    +provider.template v2.1.2]

Steps to Reproduce

terraform apply -auto-approve

var.region
  AWS Region the S3 bucket should reside in

  Enter a value: us-west-2     

provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Enter a value: us-west-2

aws_dynamodb_table.with_server_side_encryption[0]: Refreshing state... [id=terraform-state-lock]
module.terraform_state_backend.aws_dynamodb_table.with_server_side_encryption[0]: Refreshing state... [id=eg-test-terraform-state-lock]
data.aws_iam_policy_document.prevent_unencrypted_uploads[0]: Refreshing state...
module.terraform_state_backend.data.aws_iam_policy_document.prevent_unencrypted_uploads[0]: Refreshing state...
aws_s3_bucket.default: Creating...
module.terraform_state_backend.aws_s3_bucket.default: Creating...

Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
        status code: 409, request id: DAE8503E57F632E7, host id: LLcTL4YZN1mIOL8mJzBL9y5d4YJKs/tt7CHh5Ks63naqarYBD/RC8Nnqzs7FQ9mRaRMsdQUhmgs=

  on main.tf line 145, in resource "aws_s3_bucket" "default":
 145: resource "aws_s3_bucket" "default" {



Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
        status code: 409, request id: 6DF9DDE094778C9C, host id: 2m8a4gn4qbQ4xwZpNaU1/vmCQuHFM+pV1EQA58+45JSmJ7FVxixXIoFigKhg5KXIrOCVqb7L8+4=

  on .terraform/modules/terraform_state_backend/main.tf line 124, in resource "aws_s3_bucket" "default":
 124: resource "aws_s3_bucket" "default" {

DynamoDB label attributes inconsistent due to null label module

Describe the Bug

The label created for DynamoDB differs depending on how attributes is passed to this module: as a variable or in the context.

Moving attributes to context results in this plan (unrelated changes snipped):

-/+ resource "aws_dynamodb_table" "with_server_side_encryption" {
      ~ id               = "eg-terraform-state-lock" -> (known after apply)
      ~ name             = "eg-terraform-state-lock" -> "eg-terraform-lock-state" # forces replacement
      ~ tags             = {
          ~ "Attributes" = "state-lock" -> "lock-state"
          ~ "Name"       = "eg-terraform-state-lock" -> "eg-terraform-lock-state"
            # (1 unchanged element hidden)
        }
      ~ tags_all         = {
          ~ "Attributes" = "state-lock" -> "lock-state"
          ~ "Name"       = "eg-terraform-state-lock" -> "eg-terraform-lock-state"
            # (1 unchanged element hidden)
        }

Seems to be related to cloudposse/terraform-null-label#114, which was released in 0.22.1, while this module pins to 0.22.0.

Expected Behavior

dynamodb_table_label should produce the same attribute order regardless of how attributes were passed.

Assuming the suggested README usage produces the desired result (concat(context, var, ["state"])), the label module can be bumped to 0.22.1 or newer.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Setup module as suggested in README
  2. Run plan and make note of DynamoDB table resource, particularly id, name and tags
  3. Move attributes variable to context object
  4. Run plan again and compare DynamoDB values

Environment (please complete the following information):

$ terraform version
Terraform v0.15.4
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.42.0
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/template v2.2.0

$ terraform get -update
Downloading cloudposse/tfstate-backend/aws 0.33.0 for terraform_state_backend...
- terraform_state_backend in .terraform/modules/terraform_state_backend
Downloading cloudposse/label/null 0.22.0 for terraform_state_backend.dynamodb_table_label...
- terraform_state_backend.dynamodb_table_label in .terraform/modules/terraform_state_backend.dynamodb_table_label
Downloading cloudposse/label/null 0.24.1 for terraform_state_backend.this...
- terraform_state_backend.this in .terraform/modules/terraform_state_backend.this

Additional Context

This module is likely to be used in a root module, and combined with the README instructions, this issue is unlikely to affect many users.

Upgrading from <0.33.1 to >=0.33.1 requires state move for bucket

Describe the Bug

When upgrading from before version 0.33.1 to any version after 0.33.1:
count was added to the default bucket, so a plan sees that the bucket does not exist and wants to remove it from state.

Nowhere in the release notes or README does it mention needing to do a state move. Once you move the state to add an index, a plan will show no changes needed:
terraform state mv module.tfstate-backend.aws_s3_bucket.default module.tfstate-backend.aws_s3_bucket.default[0]

Expected Behavior

A note in the 0.33.1 release about needing to perform the state move.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Init a terraform project with terraform-aws-tfstate-backend version 0.33.0
  2. Run terraform apply
  3. Change the terraform-aws-tfstate-backend to version 0.33.1
  4. Run terraform plan
  5. See error

Screenshots

N/A

Environment (please complete the following information):

Terraform v1.0.5
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v3.56.0

Additional Context


Documentation is wrong

In your steps 1-5, you state that you need to add the backend section.

You fail to indicate that it needs to go inside a terraform block.

In addition, you fail to mention that the bucket referenced must already exist. I get this error:

Error: Error inspecting states in the "s3" backend: S3 bucket does not exist
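
For anyone hitting the same confusion: the backend section belongs inside a top-level `terraform` block, and the bucket and table must already exist. A sketch with hypothetical names:

```hcl
terraform {
  backend "s3" {
    region         = "us-east-1"
    bucket         = "eg-dev-terraform-state"      # must already exist
    key            = "terraform.tfstate"
    dynamodb_table = "eg-dev-terraform-state-lock" # must already exist
    encrypt        = true
  }
}
```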

darwin_arm64 still not supported

With the latest version (0.37), there is still an issue because of a leftover dependency on registry.terraform.io/hashicorp/template:

โ•ท
โ”‚ Error: Incompatible provider version
โ”‚ 
โ”‚ Provider registry.terraform.io/hashicorp/template v2.2.0 does not have a package available for your current platform, darwin_arm64.
โ”‚ 
โ”‚ Provider releases are separate from Terraform CLI releases, so not all providers are available for all platforms. Other versions of this provider may have different platforms supported.
โ•ต

This prevents usage on e.g. Apple M1 and future Apple Silicon Macs.

Update null label version to fully support 0.13.X

Describe the Bug

The module does not support Terraform v0.13.x.

Expected Behavior

It should support 0.13.x according to the release notes.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Clone the repo
  2. Run terraform init

Error:

Error: Unsupported Terraform Core version

  on .terraform/modules/base_label/versions.tf line 2, in terraform:
   2:   required_version = "~> 0.12.0"

Module module.base_label (from
git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0)
does not support Terraform version 0.13.0. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.


Error: Unsupported Terraform Core version

  on .terraform/modules/dynamodb_table_label/versions.tf line 2, in terraform:
   2:   required_version = "~> 0.12.0"

Module module.dynamodb_table_label (from
git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0)
does not support Terraform version 0.13.0. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.


Error: Unsupported Terraform Core version

  on .terraform/modules/s3_bucket_label/versions.tf line 2, in terraform:
   2:   required_version = "~> 0.12.0"

Module module.s3_bucket_label (from
git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0)
does not support Terraform version 0.13.0. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: Ubuntu 20.04.1
  • Version: master

Additional Context

The issue is that version 0.16.0 of the null label module does not support Terraform 0.13.0. It just needs to be updated to 0.17.0.

Make s3 bucket and dynamodb table optional

Describe the Feature

The module ("tatb") should support this use case: use an existing bucket but still create the DynamoDB table.

I could refactor code from the module into my own module, but I'm asking here to avoid reinventing the wheel if someone has already done this.

Use Case

Say we have 2 stacks (say dev and staging) with the state subdivided like this:

s3://common_bucket
  stack_1 (dev)
    cluster.tfstate
    rds.tfstate (references cluster.tfstate as remote)
    app_1_deployment_1.tfstate (references rds.tfstate as remote)
    app_1_deployment_2.tfstate (references rds.tfstate as remote)
    app_2_deployment_1.tfstate (references cluster.tfstate as remote)
  stack_2 (staging)
    cluster.tfstate
    app_2_deployment_1.tfstate  (references cluster.tfstate as remote)

The above is managed by 6 terraform root modules:

  • each root module has its own backend.tf file referring to the same bucket, but a different key for each
  • all root modules that pertain to stack 1 use the same dynamodb lock table, and same goes for stack 2

It would be nice to use the terraform-aws-tfstate-backend module in each of those root modules, but this is not possible because it enforces S3 bucket creation and DynamoDB table creation.

Hence the feature request listed at the top.

Describe Ideal Solution

Basically make the bucket creation optional when using the module.

With that capability, one could have the bucket created by a separate root module that uses the tatb module, with these properties:

root module                 create bucket   create lock table
all-states bucket           y               y
cluster 1                   n               y
cluster 2                   n               y
other modules in stack 1    n               n
other modules in stack 2    n               n

namely:

  • the first row is "use the tatb module as-it-is-now";
  • the last 2 rows are "use the backend.tf directly";
  • the use-case "create bucket but not dynamodb table" does not exist IMO

so the 2 cluster rows are the only ones that really need a patch: it is sufficient to add a new "existing_state_bucket" variable that causes use of an existing bucket while still creating the DynamoDB table.
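
A sketch of the requested variable (the name follows the request; the wiring below is an assumption):

```hcl
variable "existing_state_bucket" {
  type        = string
  default     = ""
  description = "If set, use this existing bucket instead of creating one; the lock table is still created"
}

resource "aws_s3_bucket" "default" {
  # Skip bucket creation when an existing bucket is supplied
  count  = var.existing_state_bucket == "" ? 1 : 0
  bucket = local.bucket_name
  # ... remaining arguments unchanged
}

locals {
  # Name used by the generated backend config
  state_bucket = var.existing_state_bucket != "" ? var.existing_state_bucket : join("", aws_s3_bucket.default.*.id)
}
```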

Error creating S3 bucket ... the region 'us-east-1' is wrong; expecting 'eu-central-1'


Describe the Bug

I've set the AWS provider to use us-east-1 but I'm getting this error when the module tries to create the s3 bucket:

Error creating S3 bucket: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'eu-central-1'

Expected Behavior

The module should create the s3 bucket in whichever region I specify.

Steps to Reproduce

  1. Create a main.tf file similar to https://gist.github.com/discentem/427f3dddf11863c2528b5859d1fec1d3 and run terraform init, terraform plan, and terraform apply -auto-approve (as per https://github.com/cloudposse/terraform-aws-tfstate-backend#usage)

Screenshots


Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: Windows 10
  • Terraform version: v0.14.7

Additional Context


terraform destroy fails due to empty coalesce in output

Describe the Bug

Using terraform 0.12:

  1. create a main.tf that uses this module
  2. terraform init and apply
  3. terraform destroy

eventually the following happens:

$ terraform destroy
...
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes


Error: Error in function call

  on .terraform/modules/terraform_state_backend/output.tf line 30, in output "dynamodb_table_id":
  30:     coalescelist(
  31: 
  32: 
  33: 
    |----------------
    | aws_dynamodb_table.with_server_side_encryption is empty tuple
    | aws_dynamodb_table.without_server_side_encryption is empty tuple

Call to function "coalescelist" failed: no non-null arguments.

Expected Behavior

It should have finished properly. Looks like the coalesce is incorrect.

Fix

This is likely due to the new behavior of coalescelist() in terraform 0.12.

Add [""] to the coalescelist() call. I will try to submit a PR.
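
A sketch of the proposed fix in output.tf (the surrounding output shape is assumed from the error message):

```hcl
output "dynamodb_table_id" {
  value = element(
    coalescelist(
      aws_dynamodb_table.with_server_side_encryption.*.id,
      aws_dynamodb_table.without_server_side_encryption.*.id,
      [""] # fallback so coalescelist() never sees only empty tuples during destroy
    ),
    0
  )
}
```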

Bucket S3 Policy

Hi,

After the first terraform apply, the S3 bucket isn't created with the policy, so with the backend configuration in place, after terraform init we get this result:

Error inspecting states in the "s3" backend:
    AccessDenied: Access Denied
	status code: 403, request id: 47E272D927CA9A6A, host id: P0C6vfjPgHasrSZ6sWGCQH4ZnFWdC0Ax7GNc2HdSX/+HWaHXqUbqgx+8I33pHS849KLTwhWyQik=

Best regards,

KMS encryption

KMS encryption as a default

From bridgecrew

     Resource: aws_s3_bucket.default | ID: BC_AWS_GENERAL_56 

server_side_encryption_configuration {
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

Should be

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = var.kms_master_key_id
        sse_algorithm     = "aws:kms"
      }
    }
  }

where kms_master_key_id could be something like:

variable "kms_master_key_id" {
  default = "alias/aws/s3"
}

or simply keep kms_master_key_id = "" and make apply_server_side_encryption_by_default conditional on it
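
A sketch of that conditional approach, keeping AES256 as the default when no key is supplied (variable name as proposed above):

```hcl
variable "kms_master_key_id" {
  type    = string
  default = ""
}

resource "aws_s3_bucket" "default" {
  # ... remaining arguments unchanged ...

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        # Switch to KMS only when a key is provided
        sse_algorithm     = var.kms_master_key_id == "" ? "AES256" : "aws:kms"
        kms_master_key_id = var.kms_master_key_id == "" ? null : var.kms_master_key_id
      }
    }
  }
}
```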

Unable to be used with terraform workspaces

Describe the Bug

Hi,
not sure if it's a bug or an unimplemented feature, but it looks like the module does not work properly together with terraform workspaces.

It seems the issue is that when a workspace is switched, this module tries again to create the S3 bucket and DynamoDB table, but these two resources already exist, so it fails.

Using the workspace name in the bucket and DynamoDB table names causes issues with the backend.tf, because it changes every time.

Using enabled = false will cause the first terraform workspace, which created the s3 bucket, to destroy it again. Also not great :D

Expected Behavior

The module should check whether the expected bucket already exists and then skip its creation.
Maybe something like the enabled flag, but named skip_creation_if_resources_exist or similar.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create two workspaces
  2. Apply the first workspace and see s3 bucket and dynamo table created
  3. Switch to second workspace
  4. Apply second workspace and see errors while s3 bucket and dynamo table creation

Additional Context

Maybe it's also not possible, and for multi-workspace usage we need a separate terraform project (workspace) that takes care of these resources, but then an example would be nice.

Thanks

Error releasing the state lock

Describe the Bug

After uploading the state to S3 and then running the terraform destroy --auto-approve command, I got the error message "Error releasing the state lock".

Expected Behavior

It should destroy correctly.

Steps to Reproduce

main.tf:

provider "aws" {
  region = "ap-northeast-1"
}

module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"

  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  namespace  = "wasai"
  stage      = "test"
  name       = "terraform-example"
  attributes = ["state"]

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false
}

resource "aws_instance" "example" {
  ami           = "ami-00247e9dc9591c233" # specify the AMI ID
  instance_type = "t2.micro"              # specify the instance type

  tags = {
    Name = "test" # add a tag
  }
}

  1. terraform init
  2. terraform plan
  3. terraform apply --auto-approve
  4. terraform init --force-copy
  5. terraform destroy --auto-approve

got error message:

Error: deleting S3 Bucket (wasai-test-terraform-example-state): operation error S3: DeleteBucket, https response error StatusCode: 409, RequestID: HYQKXS2B45JK65Z1, HostID: lB27Gd2tBiyKXf+gK2kUSdjams0k8MBkLCoWfiONz8i4rKCwUwWHp6r7HjJ4OuNOfubh1pFpIsM=, api error BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
โ”‚
โ”‚
โ•ต
โ•ท
โ”‚ Error: Error releasing the state lock
โ”‚
โ”‚ Error message: failed to retrieve lock info for lock ID "1caa2e0b-3858-49d0-0032-99423fc0914e": Unable to retrieve item from DynamoDB table "wasai-test-terraform-example-state-lock":
โ”‚ operation error DynamoDB: GetItem, https response error StatusCode: 400, RequestID: IUH3UIMJ76VK7U4QKG0C2ACQN3VV4KQNSO5AEMVJF66Q9ASUAAJG, ResourceNotFoundException: Requested resource not
โ”‚ found
โ”‚
โ”‚ Terraform acquires a lock when accessing your state to prevent others
โ”‚ running Terraform to potentially modify the state at the same time. An
โ”‚ error occurred while releasing this lock. This could mean that the lock
โ”‚ did or did not release properly. If the lock didn't release properly,
โ”‚ Terraform may not be able to run future commands since it'll appear as if
โ”‚ the lock is held.
โ”‚
โ”‚ In this scenario, please call the "force-unlock" command to unlock the
โ”‚ state manually. This is a very dangerous operation since if it is done
โ”‚ erroneously it could result in two people modifying state at the same time.
โ”‚ Only call this command if you're certain that the unlock above failed and
โ”‚ that no one else is holding a lock.

Screenshots

No response

Environment

OSX

Additional Context

No response

Add lifecycle configuration to delete objects under a certain size to remove destroyed states

Describe the Feature

It's nice to look at the s3 state bucket to see which components and root dirs contain resources

After destroying root components and resources, the state file object continues to exist in S3. The state object contains very few characters. It would be nice to expire these objects when they are smaller than a certain number of bytes.

Expected Behavior

Expire s3 state objects when the objects are less than N bytes

Use Case

See above

Describe Ideal Solution

Lifecycle rule
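
A sketch of such a rule using the S3 lifecycle size filter (the 512-byte threshold and rule name are arbitrary assumptions):

```hcl
resource "aws_s3_bucket_lifecycle_configuration" "state_cleanup" {
  bucket = aws_s3_bucket.default.id

  rule {
    id     = "expire-near-empty-states"
    status = "Enabled"

    filter {
      # Only match objects smaller than N bytes, i.e. destroyed/empty states
      object_size_less_than = 512
    }

    expiration {
      days = 30
    }
  }
}
```

On a versioned state bucket, a noncurrent_version_expiration block would likely also be needed to actually remove the old versions.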

Alternatives Considered

N/A

Additional Context

Logging bucket generates a name with a duplicate


Describe the Bug

Turning on the logging bucket creates a bucket named: nmspc-as1-root-nmspc-as1-root-tfstate-logs.

Expected Behavior

A name like: nmspc-as1-root-tfstate-logs.

Steps to Reproduce

module "tfstate_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.38.0"

  enable_server_side_encryption = var.enable_server_side_encryption
  force_destroy                 = var.force_destroy
  logging_bucket_enabled        = true
  prevent_unencrypted_uploads   = var.prevent_unencrypted_uploads

  context = module.this.context
}

Screenshots

N/A

Environment (please complete the following information):

All envs are impacted starting with v0.38.0.

Additional Context

Culprit: #104

Can't use this module with S3 bucket in different region

Why do you need to specify a provider block in the module?

I have the env vars AWS_REGION=eu-west-1 and AWS_DEFAULT_REGION=eu-west-1 set, which makes it impossible for me to use this module with infrastructure where the S3 bucket was created in another region.

The error I got is:

Error: error reading S3 Bucket (yoman-terraform-state): BucketRegionError: incorrect region, the bucket is not in 'eu-west-1' region at endpoint ''
	status code: 301, request id: , host id:

I propose removing this block and defining it outside of this module (the root module is a much better place for it):

provider "aws" {
  version = "~> 2.0"
}

Additionally, it does not work in situations where the aws provider has to be configured with assume_role or other properties.

Last year I described why this is a problem in my blog post, also on slides 56 and 57.

What do you think?

No compatibility with terraform v1.6

Describe the Bug

With Terraform v1.6, the following error occurs during terraform plan:

โ•ท
โ”‚ Warning: Deprecated Parameters
โ”‚ 
โ”‚   on backend.tf line 4, in terraform:
โ”‚    4:   backend "s3" {
โ”‚ 
โ”‚ The following parameters have been deprecated. Replace them as follows:
โ”‚   * role_arn -> assume_role.role_arn
โ”‚ 
โ•ต

โ•ท
โ”‚ Error: Cannot assume IAM Role
โ”‚ 
โ”‚ IAM Role ARN not set
โ•ต

module

module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "v1.1.1"

  namespace  = "piyo-dx"
  stage      = "development-tenant"
  name       = "azuread"
  attributes = ["tfstate"]

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
}

Generated backend.tf

terraform {
  required_version = ">= 1.0.0"

  backend "s3" {
    region         = "ap-northeast-1"
    bucket         = "piyo-dx-development-tenant-azuread-tfstate"
    key            = "terraform.tfstate"
    dynamodb_table = "piyo-dx-development-tenant-azuread-tfstate-lock"
    profile        = ""
    role_arn       = ""
    encrypt        = "true"
  }
}

Expected Behavior

terraform plan succeeds.

Steps to Reproduce

With Terraform v1.6 and the following configuration:

module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "v1.1.1"

  namespace  = "hogehoge"
  stage      = "development"
  name       = "fugafuga"
  attributes = ["tfstate"]

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
}

Screenshots

Screenshot 2023-10-05 8 37 41

Environment

  • OS: macOS 14
  • terraform version: v1.6.0
  • module: v1.1.1

Additional Context

The backend syntax changed in v1.6 ( https://github.com/hashicorp/terraform/releases/tag/v1.6.0 ).
It seems that role_arn now needs to be nested under assume_role.
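Under the v1.6 syntax, the generated backend block would presumably need to look roughly like this (the nested assume_role block is shown commented out because an empty role_arn should simply be omitted; the ARN is a placeholder, and the other values come from the generated file above):

```hcl
terraform {
  required_version = ">= 1.0.0"

  backend "s3" {
    region         = "ap-northeast-1"
    bucket         = "piyo-dx-development-tenant-azuread-tfstate"
    key            = "terraform.tfstate"
    dynamodb_table = "piyo-dx-development-tenant-azuread-tfstate-lock"
    encrypt        = true

    # Terraform >= 1.6: role_arn moves under a nested assume_role block.
    # assume_role {
    #   role_arn = "arn:aws:iam::123456789012:role/example" # placeholder
    # }
  }
}
```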

P.S.

Thank you as always for this module. Our work has become simpler and more beautiful.

terraform-provider-aws v4.0 incompatibility

Describe the Bug

Upgrading to the latest hashicorp/aws v4.0.0 (https://registry.terraform.io/providers/hashicorp/aws/latest) breaks terraform-aws-tfstate-backend with the following error:

$ terraform plan
Acquiring state lock. This may take a few moments...
Releasing state lock. This may take a few moments...
โ•ท
โ”‚ Error: Unsupported attribute
โ”‚
โ”‚   on .terraform/modules/terraform_state_backend.log_storage/main.tf line 30, in resource "aws_s3_bucket" "default":
โ”‚   30:         for_each = var.enable_glacier_transition ? [1] : []
โ”‚
โ”‚ This object does not have an attribute named "enable_glacier_transition".
โ•ต
โ•ท
โ”‚ Error: Unsupported attribute
โ”‚
โ”‚   on .terraform/modules/terraform_state_backend.log_storage/main.tf line 44, in resource "aws_s3_bucket" "default":
โ”‚   44:         for_each = var.enable_glacier_transition ? [1] : []
โ”‚
โ”‚ This object does not have an attribute named "enable_glacier_transition".

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

$ terraform version
Terraform v1.1.5
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.0.0
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/time v0.7.2

Temporary solution:

Restricting the AWS provider to 3.x gets the tfstate-backend module working as expected:

$ cat versions.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "< 4.0"
    }
...

Terraform 0.12 compatibility

When running terraform init with Terraform 0.12 we get the following error:

Initializing modules...
Downloading git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=master for terraform_state_backend...
- terraform_state_backend in .terraform/modules/terraform_state_backend
Downloading git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.7.0 for terraform_state_backend.base_label...
- terraform_state_backend.base_label in .terraform/modules/terraform_state_backend.base_label
Downloading git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.7.0 for terraform_state_backend.dynamodb_table_label...
- terraform_state_backend.dynamodb_table_label in .terraform/modules/terraform_state_backend.dynamodb_table_label
Downloading git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.7.0 for terraform_state_backend.s3_bucket_label...
- terraform_state_backend.s3_bucket_label in .terraform/modules/terraform_state_backend.s3_bucket_label

There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.

Error: Missing key/value separator

  on .terraform/modules/terraform_state_backend/main.tf line 35, in data "aws_iam_policy_document" "prevent_unencrypted_uploads":
  30: 
  31: 
  33: 
  35:     principals {

Expected an equals sign ("=") to mark the beginning of the attribute value.


Error: Attribute redefined

  on .terraform/modules/terraform_state_backend/main.tf line 58, in data "aws_iam_policy_document" "prevent_unencrypted_uploads":
  58:   statement = {

The argument "statement" was already set at
.terraform/modules/terraform_state_backend/main.tf:30,3-12. Each argument may
be set only once.

I'm not sure how to properly fix that.

Error activating encryption at rest in dynamoDB

Hello,

Because encryption at rest is not yet available in the eu-west-3 region (Paris), when I try to use your module to create the S3 bucket and DynamoDB table, it fails with the following error:

Error: Error applying plan:

1 error(s) occurred:

* module.terraform_state_backend.aws_dynamodb_table.default: 1 error(s) occurred:

* aws_dynamodb_table.default: ValidationException: One or more parameter values were invalid: Unsupported input parameter SSESpecification
        status code: 400, request id: UKFDXXXXXXXXXXEMVJF66Q9ASUAAJG

From what I've seen, it's because the region doesn't yet support DynamoDB encryption at rest.

When I comment out the server_side_encryption block:

resource "aws_dynamodb_table" "default" {
...
  # server_side_encryption {
  #   enabled = true
  # }

it works fine.

Please add a variable (defaulting to true) that specifies whether we want encryption at rest.

Best regards,
Nuno Fernandes
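A sketch of what the requested toggle could look like (the variable name is illustrative, not the module's actual API; dynamic blocks require Terraform 0.12+):

```hcl
variable "enable_dynamodb_server_side_encryption" {
  description = "Whether to enable server-side encryption on the lock table"
  type        = bool
  default     = true
}

resource "aws_dynamodb_table" "default" {
  # ... existing table arguments ...

  # Emit the server_side_encryption block only when the toggle is on,
  # so regions without SSE support (e.g. eu-west-3 at the time) still work.
  dynamic "server_side_encryption" {
    for_each = var.enable_dynamodb_server_side_encryption ? [1] : []
    content {
      enabled = true
    }
  }
}
```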

Add Example Usage

what

  • Add an example invocation

why

  • We need this so we can soon enable automated continuous integration testing of the module
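A minimal example invocation might look like this (the version pin and label values are illustrative):

```hcl
module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # version = "x.x.x" # pin to a released version

  namespace  = "eg"
  stage      = "test"
  name       = "terraform"
  attributes = ["state"]
}
```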

Usage example fails with 'The argument "region" is required, but was not set.'

Describe the Bug

I tried the usage example with the module from the README, i.e.

module "terraform_state_backend" {
    source        = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=tags/0.14.0"
    namespace     = "eg"
    stage         = "test"
    name          = "terraform"
    attributes    = ["state"]
    region        = "us-east-1"
}

in a terraform.tf file and nothing else.

I then ran terraform init, which worked as expected:

Initializing modules...
Downloading git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=tags/0.14.0 for terraform_state_backend...
- terraform_state_backend in .terraform/modules/terraform_state_backend
Downloading git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.13.0 for terraform_state_backend.base_label...
- terraform_state_backend.base_label in .terraform/modules/terraform_state_backend.base_label
Downloading git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.13.0 for terraform_state_backend.dynamodb_table_label...
- terraform_state_backend.dynamodb_table_label in .terraform/modules/terraform_state_backend.dynamodb_table_label
Downloading git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.13.0 for terraform_state_backend.s3_bucket_label...
- terraform_state_backend.s3_bucket_label in .terraform/modules/terraform_state_backend.s3_bucket_label

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "local" (hashicorp/local) 1.4.0...
- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.52.0...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

terraform apply however fails in a rather surprising way:

Error: Missing required argument

The argument "region" is required, but was not set.

Obviously the region is set, just as in the example. Using a different region doesn't change the result.

Expected Behavior

S3 bucket and Dynamo table are created and ready for use.

Steps to Reproduce

Follow the README usage example until step 3.

Environment (please complete the following information):

  • OS: macOS
  • Version: 10.15.3

Additional Context

I saw that some other issues also had problems getting this to work, but none of the hacks described there (adding an aws = "aws" provider) helped.

v0.16.0 -> v0.17.0 broken?

Describe the Bug

When upgrading the backend module from 0.16 to 0.17, you get a bunch of errors that look like this:

Error: Provider configuration not present

To work with
module.terraform_state_backend.module.dynamodb_table_label.data.null_data_source.tags_as_list_of_maps[0]
its original provider configuration at
module.terraform_state_backend.module.dynamodb_table_label.provider.null is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.terraform_state_backend.module.dynamodb_table_label.data.null_data_source.tags_as_list_of_maps[0],
after which you can remove the provider configuration again.

Terraform version:

$ terraform -version
Terraform v0.12.24
+ provider.aws v2.62.0
+ provider.local v1.4.0
+ provider.null v2.1.2
+ provider.template v2.1.2

I checked hashicorp/terraform#21416, but it is not obvious how to fix this; I think the fix has to be in your module (not in backend.tf).

Expected Behavior

The terraform apply after upgrading backend module to 0.17 should have worked.

Steps to Reproduce

In empty folder, create backend.tf:

provider "aws" {
  region = "us-east-2"
}

module "terraform_state_backend" {
  source        = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=0.16.0"

  environment   = "test-tf-bug"
  stage         = "oliver"
  name          = null
  # force_destroy = true
  attributes    = ["terraform-state"]
  region        = "us-east-2"
}

Then

terraform init
terraform apply

Everything works.

Now edit the backend.tf file to point to 0.17 of terraform-aws-tfstate-backend module, then repeat:

terraform init
terraform apply 

This time the apply causes a dozen identical errors (but about different elements). The error is shown in the Description above.
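One workaround for this class of error (a suggestion, not a confirmed fix; verify the resource addresses against your own state) is to remove the orphaned null_data_source entries from state before applying. Data sources are re-read on the next plan, so no real infrastructure is touched:

```shell
# Address taken from the error message above; repeat for each reported element.
terraform state rm 'module.terraform_state_backend.module.dynamodb_table_label.data.null_data_source.tags_as_list_of_maps'
```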
