terraform-aws-modules / terraform-aws-efs

Terraform module to create AWS EFS resources 🇺🇦

Home Page: https://registry.terraform.io/modules/terraform-aws-modules/efs/aws

License: Apache License 2.0

HCL 100.00%
aws-efs elastic-file-system terraform terraform-aws-module terraform-module

terraform-aws-efs's Introduction

AWS EFS Terraform module

Terraform module which creates AWS EFS (elastic file system) resources.


Usage

See examples directory for working examples to reference:

module "efs" {
  source = "terraform-aws-modules/efs/aws"

  # File system
  name           = "example"
  creation_token = "example-token"
  encrypted      = true
  kms_key_arn    = "arn:aws:kms:eu-west-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

  # performance_mode                = "maxIO"
  # NB! PROVISIONED THROUGHPUT MODE AT 256 MiB/s IS EXPENSIVE: ~$1500/month
  # throughput_mode                 = "provisioned"
  # provisioned_throughput_in_mibps = 256

  lifecycle_policy = {
    transition_to_ia = "AFTER_30_DAYS"
  }

  # File system policy
  attach_policy                      = true
  bypass_policy_lockout_safety_check = false
  policy_statements = [
    {
      sid     = "Example"
      actions = ["elasticfilesystem:ClientMount"]
      principals = [
        {
          type        = "AWS"
          identifiers = ["arn:aws:iam::111122223333:role/EfsReadOnly"]
        }
      ]
    }
  ]

  # Mount targets / security group
  mount_targets = {
    "eu-west-1a" = {
      subnet_id = "subnet-abcde012"
    }
    "eu-west-1b" = {
      subnet_id = "subnet-bcde012a"
    }
    "eu-west-1c" = {
      subnet_id = "subnet-fghi345a"
    }
  }
  security_group_description = "Example EFS security group"
  security_group_vpc_id      = "vpc-1234556abcdef"
  security_group_rules = {
    vpc = {
      # relying on the defaults provided for EFS/NFS (2049/TCP + ingress)
      description = "NFS ingress from VPC private subnets"
      cidr_blocks = ["10.99.3.0/24", "10.99.4.0/24", "10.99.5.0/24"]
    }
  }

  # Access point(s)
  access_points = {
    posix_example = {
      name = "posix-example"
      posix_user = {
        gid            = 1001
        uid            = 1001
        secondary_gids = [1002]
      }

      tags = {
        Additional = "yes"
      }
    }
    root_example = {
      root_directory = {
        path = "/example"
        creation_info = {
          owner_gid   = 1001
          owner_uid   = 1001
          permissions = "755"
        }
      }
    }
  }

  # Backup policy
  enable_backup_policy = true

  # Replication configuration
  create_replication_configuration = true
  replication_configuration_destination = {
    region = "eu-west-2"
  }

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}

Examples

Examples codified under the examples directory are intended to give users references for how to use the module(s), as well as to test/validate changes to the module source code. If contributing to the project, please make any appropriate updates to the relevant examples so that maintainers can test your changes and the examples stay up to date for users. Thank you!

Requirements

Name Version
terraform >= 1.0
aws >= 5.35

Providers

Name Version
aws >= 5.35

Modules

No modules.

Resources

Name Type
aws_efs_access_point.this resource
aws_efs_backup_policy.this resource
aws_efs_file_system.this resource
aws_efs_file_system_policy.this resource
aws_efs_mount_target.this resource
aws_efs_replication_configuration.this resource
aws_security_group.this resource
aws_security_group_rule.this resource
aws_iam_policy_document.policy data source

Inputs

Name Description Type Default Required
access_points A map of access point definitions to create any {} no
attach_policy Determines whether a policy is attached to the file system bool true no
availability_zone_name The AWS Availability Zone in which to create the file system. Used to create a file system that uses One Zone storage classes string null no
bypass_policy_lockout_safety_check A flag to indicate whether to bypass the aws_efs_file_system_policy lockout safety check. Defaults to false bool null no
create Determines whether resources will be created (affects all resources) bool true no
create_backup_policy Determines whether a backup policy is created bool true no
create_replication_configuration Determines whether a replication configuration is created bool false no
create_security_group Determines whether a security group is created bool true no
creation_token A unique name (a maximum of 64 characters are allowed) used as reference when creating the Elastic File System to ensure idempotent file system creation. By default generated by Terraform string null no
deny_nonsecure_transport Determines whether aws:SecureTransport is required when connecting to elastic file system bool true no
enable_backup_policy Determines whether a backup policy is ENABLED or DISABLED bool true no
encrypted If true, the disk will be encrypted bool true no
kms_key_arn The ARN for the KMS encryption key. When specifying kms_key_arn, encrypted needs to be set to true string null no
lifecycle_policy A file system lifecycle policy object any {} no
mount_targets A map of mount target definitions to create any {} no
name The name of the file system string "" no
override_policy_documents List of IAM policy documents that are merged together into the exported document. In merging, statements with non-blank sids will override statements with the same sid list(string) [] no
performance_mode The file system performance mode. Can be either generalPurpose or maxIO. Default is generalPurpose string null no
policy_statements A list of IAM policy statements for custom permission usage any [] no
provisioned_throughput_in_mibps The throughput, measured in MiB/s, that you want to provision for the file system. Only applicable with throughput_mode set to provisioned number null no
replication_configuration_destination A destination configuration block any {} no
security_group_description Security group description. Defaults to Managed by Terraform string null no
security_group_name Name to assign to the security group. If omitted, Terraform will assign a random, unique name string null no
security_group_rules Map of security group rule definitions to create any {} no
security_group_use_name_prefix Determines whether to use a name prefix for the security group. If true, the security_group_name value will be used as a prefix bool false no
security_group_vpc_id The VPC ID where the security group will be created string null no
source_policy_documents List of IAM policy documents that are merged together into the exported document. Statements must have unique sids list(string) [] no
tags A map of tags to add to all resources map(string) {} no
throughput_mode Throughput mode for the file system. Defaults to bursting. Valid values: bursting, elastic, and provisioned. When using provisioned, also set provisioned_throughput_in_mibps string null no

Outputs

Name Description
access_points Map of access points created and their attributes
arn Amazon Resource Name of the file system
dns_name The DNS name for the filesystem per documented convention
id The ID that identifies the file system (e.g., fs-ccfc0d65)
mount_targets Map of mount targets created and their attributes
replication_configuration_destination_file_system_id The file system ID of the replica
security_group_arn ARN of the security group
security_group_id ID of the security group
size_in_bytes The latest known metered size (in bytes) of data stored in the file system; this value is not the exact size the file system was at any given point in time

License

Apache-2.0 Licensed. See LICENSE.

terraform-aws-efs's People

Contributors

antonbabenko, bryantbiggs, dchien234, dev-slatto, glavk, gp-davidhardy, jeenadeepak, kartsm, kodakmoment, magreenbaum, scaldabagno, semantic-release-bot


terraform-aws-efs's Issues

TF drift size_in_bytes

Description

Terraform creates 3 EFS Volumes, but after they grow in size, this new value shows up a configuration drift:

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply" which may have affected this plan:

  # module.efs["fs1"].aws_efs_file_system.this[0] has changed
  ~ resource "aws_efs_file_system" "this" {
        id                              = "fs-072d761e2dca64422"
      ~ size_in_bytes                   = [
          ~ {
              ~ value             = 6144 -> 2185195520
              ~ value_in_standard = 6144 -> 2185195520
                # (1 unchanged attribute hidden)
            },
        ]
        tags                            = {
            "Name" = ""
            "name" = "test_efsmodule_1"
        }
        # (10 unchanged attributes hidden)
    }

If your request is for a new feature, please use the Feature request template.

  • [x] ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]:
    "Key":"efs","Source":"registry.terraform.io/terraform-aws-modules/efs/aws","Version":"1.3.0"

  • Terraform version:

Terraform v1.6.3

  • Provider version(s):
  • provider registry.terraform.io/hashicorp/aws v5.23.1
  • provider registry.terraform.io/hashicorp/cloudinit v2.3.2
  • provider registry.terraform.io/hashicorp/helm v2.11.0
  • provider registry.terraform.io/hashicorp/kubernetes v2.23.0
  • provider registry.terraform.io/hashicorp/time v0.9.1
  • provider registry.terraform.io/hashicorp/tls v4.0.4

Reproduction Code [Required]

### EFS Module Inputs ####
locals {
  efs_filesystems = {
    fs1 = {
      fs_tag           = "test_efsmodule_1"
      performance_mode = "generalPurpose"
    },
    fs2 = {
      fs_tag           = "test_efsmodule_2"
      performance_mode = "generalPurpose"
    }
    fs3 = {
      fs_tag           = "test_efsmodule_2"
      performance_mode = "generalPurpose"
    }
    #### Add more filesystems as needed
  }
}
module "efs" {
  for_each = local.efs_filesystems

  source = "terraform-aws-modules/efs/aws"

  encrypted            = false
  enable_backup_policy = false
  performance_mode     = each.value.performance_mode
  #### File system policy
  attach_policy            = true
  deny_nonsecure_transport = false
  policy_statements = [
    {
      sid     = "Example"
      actions = ["elasticfilesystem:ClientMount", "elasticfilesystem:ClientWrite", "elasticfilesystem:ClientRootAccess"]
      principals = [
        {
          type        = "AWS"
          identifiers = ["*"]
        }
      ]
    }
  ]
  ### Mount targets / security group
  mount_targets = { for k, v in zipmap(local.azs, module.vpc.public_subnets) : k => { subnet_id = v } }

  security_group_description = "Example EFS security group"
  security_group_vpc_id      = module.vpc.vpc_id

  security_group_rules = {
    vpc = {
      #### relying on the defaults provided for EFS/NFS (2049/TCP + ingress)
      description = "NFS ingress from VPC private subnets"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  tags = {
    name = each.value.fs_tag
  }
  
}
### EFS Module Inputs ####

Steps to reproduce the behavior:


  1. Created the EFS volumes.
  2. Used the EFS file system in my environment - I stored some application logs.
  3. terraform plan showed me that a config drift happened:

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply" which may have affected this plan:

  # module.efs["fs1"].aws_efs_file_system.this[0] has changed
  ~ resource "aws_efs_file_system" "this" {
        id                              = "fs-072d761e2dca64422"
      ~ size_in_bytes                   = [
          ~ {
              ~ value             = 6144 -> 2185195520
              ~ value_in_standard = 6144 -> 2185195520
                # (1 unchanged attribute hidden)
            },
        ]
        tags                            = {
            "Name" = ""
            "name" = "test_efsmodule_1"
        }
        # (10 unchanged attributes hidden)

Expected behavior

I expect to have a way of telling terraform or the module that an increase/decrease in size is normal for volumes.

Actual behavior

I'm getting warned that a config drift happened. I saw that the values get updated in the terraform state file but I'm concerned
that I have no way of suppressing this warning.

Very difficult to specify both transition_to_ia and transition_to_primary_storage_class in lifecycle policies

Description

Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.

If your request is for a new feature, please use the Feature request template.

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: 1.0.1

  • Terraform version:

Terraform v1.3.4
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.39.0
  • Provider version(s):
    (output identical to above)

Reproduction Code [Required]

provider "aws" {
  region = "us-east-1"
}
module "maybe_efs" {
  source = "terraform-aws-modules/efs/aws"
  version = "v1.0.1"
  create = true
  name = "lorem-ipsum"
  lifecycle_policy = {
    transition_to_ia = "AFTER_7_DAYS"
    transition_to_primary_storage_class = "AFTER_1_ACCESS"
  }
}

Steps to reproduce the behavior:

  1. terraform init with above code
  2. terraform plan -out apply-${TF_WORKSPACE:-default}.tfplan
  3. terraform apply apply-${TF_WORKSPACE:-default}.tfplan

Expected behavior

Applies successfully.

Actual behavior

╷
│ Error: error creating EFS file system (fs-0659eb16fd4b7abe4) lifecycle configuration: BadRequest: One or more LifecyclePolicy objects specified are malformed.
│ {
│   RespMetadata: {
│     StatusCode: 400,
│     RequestID: "2c30e1a2-97a2-4597-bd8f-9c5d3e058c4b"
│   },
│   ErrorCode: "BadRequest",
│   Message_: "One or more LifecyclePolicy objects specified are malformed."
│ }
│
│   with module.maybe_efs.aws_efs_file_system.this[0],
│   on .terraform/modules/maybe_efs/main.tf line 5, in resource "aws_efs_file_system" "this":
│    5: resource "aws_efs_file_system" "this" {
│
╵

The EFS filesystem is created, but the resource is tainted.

If I remove either transition_to_ia or transition_to_primary_storage_class from my lifecycle_policy then it will apply just fine. But due to the particular for_each expression used by dynamic "lifecycle_policy" , I don't know if it is possible to construct a data structure that can pass both lifecycle attributes successfully.
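
One possible shape for such a fix, sketched here purely as an illustration (the variable handling and `try()` fallbacks are assumptions, not the module's actual code), is to expand the dynamic block over a list of single-rule maps so that each lifecycle attribute lands in its own `lifecycle_policy` block:

```hcl
variable "lifecycle_policy" {
  description = "Map of lifecycle rules, e.g. { transition_to_ia = \"AFTER_7_DAYS\" }"
  type        = any
  default     = {}
}

resource "aws_efs_file_system" "this" {
  # Emit one lifecycle_policy block per rule so that transition_to_ia and
  # transition_to_primary_storage_class never share a single block,
  # which the EFS API rejects as malformed.
  dynamic "lifecycle_policy" {
    for_each = [for k, v in var.lifecycle_policy : { (k) = v }]

    content {
      transition_to_ia                    = try(lifecycle_policy.value.transition_to_ia, null)
      transition_to_primary_storage_class = try(lifecycle_policy.value.transition_to_primary_storage_class, null)
    }
  }
}
```

With this shape, a user-supplied map containing both keys expands into two separate blocks rather than one combined block.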

Terminal Output Screenshot(s)

Additional context

I will be opening a pull request with a suggested fix.

AWS EFS Policy Default

Description

Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.

If your request is for a new feature, please use the Feature request template.

  • [x] ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]:

  • Terraform version: 1.3.7

  • Provider version(s): 1.1.1

Reproduction Code [Required]

Steps to reproduce the behavior:
You have to create an EFS file system with the module and set deny_nonsecure_transport = true.
The policy this generates will not be correct; it will be incomplete.

Expected behavior

I expect the bug to be fixed and the incorrect policy to be replaced with the correct one.

Actual behavior

I was not able to mount the file system and got an "access denied" message, even though I had all the required permissions.

Terminal Output Screenshot(s)

Additional context

publish latest version to Terraform registry

Description

Can you please publish v1.6.0 to the Terraform registry? It's still sitting at v1.4.0...

If your request is for a new feature, please use the Feature request template.

  • ✋ I have searched the open/closed issues and my issue is not listed.

Support for ignoring changes to size_in_bytes attribute in aws_efs_file_system

Is your request related to a new offering from AWS?

No 🛑, this request is not related to a new offering from AWS, but rather to a common behavior of AWS EFS where the size_in_bytes attribute can change outside of Terraform's management, causing unnecessary noise in terraform plan output.

Is your request related to a problem? Please describe.

I'm always frustrated when every time someone opens a PR, and we run terraform plan, it shows changes in the EFS size_in_bytes like it's a change in the PR, even though the actual filesystem configuration hasn't changed. This creates confusion and makes it harder to review the actual changes introduced by the PR.

Describe the solution you'd like.

I would like the terraform-aws-modules/efs/aws module to support an option to ignore changes to the size_in_bytes attribute in the aws_efs_file_system resource. This can be implemented by allowing users to specify a lifecycle block configuration within the module that would pass through to the underlying aws_efs_file_system resource.

For example, the module could expose a variable such as ignore_size_in_bytes_changes that, when set to true, would automatically add the following lifecycle configuration to the aws_efs_file_system resource:

lifecycle {
  ignore_changes = [
    size_in_bytes,
  ]
}

Describe alternatives you've considered.

As an alternative, I have considered creating a wrapper module to simulate ignoring changes to the size_in_bytes attribute using a null_resource with custom triggers. However, this approach is not ideal as it adds complexity and doesn't directly address the issue at the resource level.

Another alternative is to manually ignore these changes in the terraform plan output, but this is error-prone and not a scalable solution for larger teams or automated CI/CD pipelines.

Additional context

The ability to ignore certain attributes from the terraform plan output is crucial for teams to review and understand infrastructure changes accurately. Supporting this feature within the module would greatly enhance the usability and reduce potential confusion during code reviews.

throughput_mode: elastic not supported

Description

EFS supports 3 throughput modes: bursting, provisioned, and elastic.
Please refer to: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/efs_file_system#throughput_mode

But the terraform-aws-efs module supports only the bursting and provisioned throughput modes.

When trying to use the elastic throughput mode, the following error occurs:

module "efs" {
  source = "terraform-aws-modules/efs/aws"
  performance_mode                = "generalPurpose"
  throughput_mode                 = "elastic"
}

│ Error: expected throughput_mode to be one of [bursting provisioned], got elastic
│
│   with module.efs.aws_efs_file_system.this[0],
│   on .terraform/modules/efs-csi.efs/main.tf line 14, in resource "aws_efs_file_system" "this":
│   14:   throughput_mode                 = var.throughput_mode

Please add support for the elastic throughput mode.
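
The validation error above is raised by the AWS provider, not by EFS itself, so a hedged first step is to make sure the configuration pins a provider release that knows about elastic mode (the exact minimum version below is an assumption, not a confirmed boundary):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Assumed minimum: a provider release whose throughput_mode
      # validation accepts "elastic" in addition to bursting/provisioned.
      version = ">= 4.36"
    }
  }
}
```

If the module itself also pins an older provider in its required_providers block, the module constraint would need the same bump.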

No more than 2 "lifecycle_policy" blocks are allowed

Description

It's not possible to set a lifecycle policy that uses all three rules made available in #24 and supported by AWS.

  lifecycle_policy = {
    transition_to_ia                    = "AFTER_30_DAYS"
    transition_to_archive               = "AFTER_90_DAYS"
    transition_to_primary_storage_class = "AFTER_1_ACCESS"
  }

If your request is for a new feature, please use the Feature request template.

  • [x] ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Versions

  • Module version [Required]: 1.6.2

  • Terraform version:
    1.0.11

  • Provider version(s):
    5.32.0

Reproduction Code [Required]

Steps to reproduce the behavior:
generate an EFS module with the following lifecycle_policy

  lifecycle_policy = {
    transition_to_ia                    = "AFTER_30_DAYS"
    transition_to_archive               = "AFTER_90_DAYS"
    transition_to_primary_storage_class = "AFTER_1_ACCESS"
  }

Expected behavior

The ability to set those three rules at the same time.

Actual behavior

this error is emitted on a terraform plan:
No more than 2 "lifecycle_policy" blocks are allowed
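
As an illustration of the desired end state: if the module and provider accepted a list of single-rule maps (a hypothetical input shape, not a documented module feature), each rule could map to its own lifecycle_policy block and all three could coexist:

```hcl
lifecycle_policy = [
  { transition_to_ia                    = "AFTER_30_DAYS" },
  { transition_to_archive               = "AFTER_90_DAYS" },
  { transition_to_primary_storage_class = "AFTER_1_ACCESS" },
]
```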

Terminal Output Screenshot(s)

Additional context

Policy generated when deny_nonsecure_transport = true is incomplete/outdated

Description

The policy generated when deny_nonsecure_transport = true is incomplete/outdated

Versions

  • Module version [Required]: 1.3.1
  • Terraform version: 1.6.6
  • Provider version(s): 5.21.0

Expected behavior

When enabling via the web console, we see the generated policy to be:

{
    "Version": "2012-10-17",
    "Id": "efs-policy-wizard-01983604-a016-498a-b73c-a6956f8caa13",
    "Statement": [
        {
            "Sid": "efs-statement-ba87d44a-9919-4ded-969e-b42792f6e334",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "elasticfilesystem:ClientRootAccess",
                "elasticfilesystem:ClientWrite",
                "elasticfilesystem:ClientMount"
            ],
            "Condition": {
                "Bool": {
                    "elasticfilesystem:AccessedViaMountTarget": "true"
                }
            }
        },
        {
            "Sid": "efs-statement-d04fd86d-0ea8-49dc-9d76-b6383171d3a7",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}

This policy allowed my ecs containers to properly mount the volume.

Actual behavior

When deny_nonsecure_transport = true (which is the default), this module generates an incomplete policy:

{
    "Sid": "NonSecureTransport",
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    },
    "Action": "*",
    "Resource": "arn:aws:elasticfilesystem:us-east-1:12345678912:file-system/fs-0114bc825a22274e46",
    "Condition": {
        "Bool": {
            "aws:SecureTransport": "false"
        }
    }
}

This policy is insufficient for ecs containers to mount the volume when transit_encryption is enabled.

(Seems like this issue has been reported before without a response #11)
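
A hedged approximation of the console-generated allow statement, expressed through the module's policy_statements input. The sid is invented, and the conditions key shape is an assumption about how statements are forwarded to the underlying aws_iam_policy_document data source:

```hcl
policy_statements = [
  {
    sid    = "AllowAccessViaMountTarget" # hypothetical sid
    effect = "Allow"
    actions = [
      "elasticfilesystem:ClientRootAccess",
      "elasticfilesystem:ClientWrite",
      "elasticfilesystem:ClientMount",
    ]
    principals = [
      {
        type        = "AWS"
        identifiers = ["*"]
      }
    ]
    # Restrict the broad principal to clients coming through a mount target,
    # mirroring the console-generated Condition block.
    conditions = [
      {
        test     = "Bool"
        variable = "elasticfilesystem:AccessedViaMountTarget"
        values   = ["true"]
      }
    ]
  }
]
```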

Provisioned throughput cost warning

Is your request related to a problem? Please describe.

I was testing out using EFS as a home folder in EC2 and blindly copied the example code, thinking EFS is elastic only. The first example uses a 256 MiB/s provisioned throughput mode that costs about $6 per MiB/s per month, roughly $50 each day. Obviously this is a user error, but a fair warning would be nice. I was lucky enough to check Cost Explorer after a short time.

Terraform apply times out when there's a change to `security_group_rules`

Is your request related to a new offering from AWS?

Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.

  • No 🛑: more like an enhancement to the existing HCL implementation

Is your request related to a problem? Please describe.

  • Prerequisite:
    • You have an existing EFS module
    • You want to update your security_group_rules (for e.g. to add additional CIDR blocks)
  • Observations:
    • When you run terraform apply, it will try to destroy the existing aws_security_group_rule and aws_security_group objects, and this operation will time out after 15m (or the default timeout)
    • This is because of the dependency between aws_security_group and the aws_efs_mount_target resource. The aws_security_group cannot be destroyed while an object depends on it, and the aws_efs_mount_target cannot switch to the new security group since it has not been created yet.

Describe the solution you'd like.

  • Solution:
    • Add a create_before_destroy life cycle behavior to the above objects to enable terraform to replace objects properly.
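
The proposed behavior is Terraform's standard create_before_destroy lifecycle setting. A minimal sketch of what it might look like on the module's security group resource (illustrative only, not the module's current code; the name_prefix value is a placeholder):

```hcl
resource "aws_security_group" "this" {
  # name_prefix (rather than a fixed name) avoids a duplicate-name error
  # while the old and new groups briefly coexist during replacement.
  name_prefix = "example-efs-"
  vpc_id      = var.security_group_vpc_id

  lifecycle {
    # Create the replacement group first so mount targets can be moved to it
    # before the old group is destroyed, avoiding the dependency deadlock.
    create_before_destroy = true
  }
}
```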

Describe alternatives you've considered.

  • N.A.

Additional context

  • N.A.

Utilize existing Security Group

Is it possible to utilize an existing security group for the mount targets rather than creating a new one?
If not, it would be nice if this were possible.
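
A sketch of what such usage could look like. create_security_group is a documented module input; the per-mount-target security_groups key, however, is an assumption about how existing group IDs might be passed through, not a confirmed module feature:

```hcl
module "efs" {
  source = "terraform-aws-modules/efs/aws"

  name                  = "example"
  create_security_group = false # skip the module-managed security group

  mount_targets = {
    "eu-west-1a" = {
      subnet_id       = "subnet-abcde012"
      security_groups = ["sg-0123456789abcdef0"] # hypothetical key for an existing group
    }
  }
}
```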

Example can cause a lot of AWS spend

Description

Maybe not really a "bug"... but not a feature request either, IMO. The example should really be changed a bit, given the potential cost implications of running it without scrutiny.

Problem:

The current example can quickly chew up some AWS spend...

Suggestion:

  1. Change the region to "YOUR-REGION" so resources don't end up somewhere that the user might not be monitoring. Region should always be a very conscious choice.
  2. Reduce the provisioned_throughput_in_mibps from 256 to something very small.
  3. Better yet, change the example to Elastic Throughput!
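
A minimal sketch along the lines of suggestion 3, using elastic throughput so there is no flat provisioned-capacity charge (the region and name values are placeholders):

```hcl
provider "aws" {
  region = "YOUR-REGION" # make the region an explicit, conscious choice
}

module "efs" {
  source = "terraform-aws-modules/efs/aws"

  name = "example"

  # Elastic throughput scales with the workload instead of billing a
  # fixed provisioned rate around the clock.
  throughput_mode = "elastic"
}
```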

EBS CSI driver: unauthorized (when deploying in CI/CD)

Description

Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.

If your request is for a new feature, please use the Feature request template.

  • ✋ I have searched the open/closed issues and my issue is not listed.
    There are 0 open issues listed

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: 18.26.6 and 19.16.0

  • Terraform version:

1.4.5

  • Provider version(s):

the relevant subset:

│ └── module.eks
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
│ ├── provider[registry.terraform.io/hashicorp/tls] ~> 3.0
│ ├── provider[registry.terraform.io/hashicorp/kubernetes] >= 2.10.0
│ ├── module.self_managed_node_group
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
│ └── module.user_data
│ └── provider[registry.terraform.io/hashicorp/cloudinit] >= 2.0.0
│ ├── module.eks_managed_node_group
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
│ └── module.user_data
│ └── provider[registry.terraform.io/hashicorp/cloudinit] >= 2.0.0
│ ├── module.fargate_profile
│ └── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
│ └── module.kms
│ └── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0

Reproduction Code [Required]

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.16.0"

  cluster_name    = var.eks_cluster_name
  cluster_version = var.eks_cluster_version

  kms_key_administrators = var.kms_key_administrators

  vpc_id     = var.vpc_id
  subnet_ids = local.eks_subnet_ids

  cluster_endpoint_private_access = var.eks_endpoint_private_access
  cluster_endpoint_public_access  = var.eks_endpoint_public_access

  # Temp workaround for bug: double owned tag
  # terraform-aws-modules/terraform-aws-eks#1810
  node_security_group_tags = {
    "kubernetes.io/cluster/${var.eks_cluster_name}" = null
  }

  eks_managed_node_group_defaults = {
    ami_type                              = "AL2_x86_64"
    key_name                              = var.aws_keypair_name
    attach_cluster_primary_security_group = true
    # Disabling and using externally provided security groups
    create_security_group  = false
    vpc_security_group_ids = var.eks_vpc_security_groups
    iam_role_name          = "${var.eks_cluster_name}_ng"
    block_device_mappings = {
      xvda = {
        device_name = "/dev/xvda"
        ebs = {
          volume_size = var.eks_node_disk_size
          volume_type = "gp3"
        }
      }
    }
  }

  eks_managed_node_groups = local.node_groups

  tags = merge(
    { Name = var.eks_cluster_name },
    var.tags
  )
}

Steps to reproduce the behavior:

no

this is a problem in CI/CD (github actions), so there is no local cache

Deployed via laptop (works fine), but when I introduce another principal to deploy it, I run into a problem. I can reproduce it when I assume the CI/CD role and run an apply.

Expected behavior

After the summary of the plan, I expect terraform to return an exit code of 0 and terminate.

Actual behavior

terraform plan returns an error:

terraform plan
...
all output is as expected
...
Plan: 25 to add, 74 to change, 19 to destroy.
Releasing state lock. This may take a few moments...
Error: Process completed with exit code 1.

Error: Unauthorized

with module.dockyard.kubernetes_storage_class.gp3,

on .terraform/modules/dockyard/terraform/eks-addon.tf line 37, in resource "kubernetes_storage_class" "gp3":

37: resource "kubernetes_storage_class" "gp3" {

This seems to be a permissions issue of having multiple principals deploying the EBS CSI driver. I filed a ticket with Amazon support, and the IAM roles seem to be set up properly.

Terminal Output Screenshot(s)

Additional context

deny_nonsecure_transport grants read-write access to all principals

Description

The policy generated by deny_nonsecure_transport grants access to all AWS principals. This makes it impossible to use IAM to control access to the filesystem when this boolean is set, and it is an extremely significant side effect of the boolean (in contrast to having it set to false and using policy_statements) that is not made clear in the documentation.

The problematic policy was added in #21, in an attempt to fix #20 and #11. In particular, I think the policy given in #20 is the wrong fix for the ECS issue: it works only because it grants access to all principals -- which is far too broad a policy, and probably not the intention.

#20 mentions that the web console generated the policy. However, I believe the policies generated by the web console are intended for cases where IAM is not used to control access to the filesystem: all of them generate a similar policy granting access to all principals with specific denies, because the 'default' EFS posture is to allow access to all principals and rely on firewall rules to control access.

I think this module should support using IAM to selectively control access to the EFS filesystem, instead of firewall rules alone. At the very least, it should be made more explicit that deny_nonsecure_transport precludes the use of IAM. I would suggest creating a new boolean that makes it very explicit whether an 'allow all principals' policy will be attached; that boolean could then be set to false to facilitate the use of IAM to control access.
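For reference, encryption in transit can be enforced with a deny-only statement that grants nothing, leaving IAM-based allows intact. A hedged sketch via policy_statements, assuming the module passes effect and condition through to the underlying aws_iam_policy_document (field names are assumptions based on that data source):

```hcl
policy_statements = [
  {
    # Deny-only: blocks unencrypted clients without granting anyone access
    sid     = "DenyNonSecureTransport"
    effect  = "Deny"
    actions = ["*"]
    principals = [
      {
        type        = "AWS"
        identifiers = ["*"]
      }
    ]
    conditions = [
      {
        test     = "Bool"
        variable = "aws:SecureTransport"
        values   = ["false"]
      }
    ]
  }
]
```

Because this statement only denies, mount access still has to be granted explicitly (e.g. via elasticfilesystem:ClientMount allows), which is the behavior one would expect from a "deny nonsecure transport" toggle.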

  • βœ‹ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: v1.6.2

  • Terraform version: v1.6.3

  • Provider version(s): provider registry.terraform.io/hashicorp/aws v5.40.0

Reproduction Code [Required]

module "efs" {
  source = "terraform-aws-modules/efs/aws"

  # File system
  name           = "example"
  creation_token = "example-token"
  encrypted      = true
  kms_key_arn    = "arn:aws:kms:eu-west-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

  lifecycle_policy = {
    transition_to_ia = "AFTER_30_DAYS"
  }

  # File system policy
  attach_policy                      = true
  bypass_policy_lockout_safety_check = false
  deny_nonsecure_transport           = true
  policy_statements = [
    # XXX: This policy has no effect, because of the 'grant all' policy added by deny_nonsecure_transport!
    {
      sid     = "Example"
      actions = ["elasticfilesystem:ClientMount"]
      principals = [
        {
          type        = "AWS"
          identifiers = ["arn:aws:iam::111122223333:role/EfsReadOnly"]
        }
      ]
    }
  ]

  # Mount targets / security group
  mount_targets = {
    "eu-west-1a" = {
      subnet_id = "subnet-abcde012"
    }
    "eu-west-1b" = {
      subnet_id = "subnet-bcde012a"
    }
    "eu-west-1c" = {
      subnet_id = "subnet-fghi345a"
    }
  }
  security_group_description = "Example EFS security group"
  security_group_vpc_id      = "vpc-1234556abcdef"
  security_group_rules = {
    vpc = {
      # relying on the defaults provided for EFS/NFS (2049/TCP + ingress)
      description = "NFS ingress from VPC private subnets"
      cidr_blocks = ["10.99.3.0/24", "10.99.4.0/24", "10.99.5.0/24"]
    }
  }
}

Steps to reproduce the behavior:

  1. Create an EFS filesystem using the module that:
    • Has deny_nonsecure_transport set to true
    • Uses policy_statements to attempt to grant some form of access to a specific IAM principal
  2. Attempt to mount the EFS filesystem with an IAM principal that you did not grant explicit access to
  3. Notice that the EFS filesystem is mounted successfully

Expected behavior

The EFS filesystem should not be mountable by IAM principals that were not granted explicit access when deny_nonsecure_transport is true.

Actual behavior

The EFS filesystem can be mounted by any IAM principal when deny_nonsecure_transport is true, regardless of any allow statements in policy_statements.

Add "transition_to_archive" to EFS lifecycle_policy

Is your request related to a new offering from AWS?

Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.

Describe the solution you'd like.

I would like to be able to set the "transition_to_archive" value of the EFS lifecycle_policy, in addition to "transition_to_ia" and "transition_to_primary_storage_class".
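If the module passed the new key through to the aws_efs_file_system resource (which supports transition_to_archive in recent AWS provider versions), usage might look like the following sketch (the module input shape is an assumption based on the existing lifecycle_policy variable):

```hcl
lifecycle_policy = {
  # Existing transitions supported by the module today
  transition_to_ia                    = "AFTER_30_DAYS"
  transition_to_primary_storage_class = "AFTER_1_ACCESS"

  # Requested addition: move cold data to the Archive storage class
  transition_to_archive = "AFTER_90_DAYS"
}
```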
