cn-terraform / terraform-aws-ecs-fargate-task-definition

AWS ECS Fargate Task Definition Terraform Module

Home Page: https://registry.terraform.io/modules/cn-terraform/ecs-fargate-task-definition

License: Apache License 2.0

HCL 100.00%
terraform terraform-module aws aws-ecs aws-ecs-task fargate aws-fargate ecs-fargate

terraform-aws-ecs-fargate-task-definition's People

Contributors

amontalban, jareddarling, jcity, jnonino, ktibi, mesterjoda, mfcaro, nbeloglazov, ovcharenko, renovate[bot], tanadeau, toshke, tuvshuud, zahorniak


terraform-aws-ecs-fargate-task-definition's Issues

Issue in version `1.0.23` - An argument named "tags" is not expected here

Hi, thank you for creating terraform modules.

When I reference version 1.0.23 I get the following error in plan:

Error: Unsupported argument
  on .terraform/modules/.../main.tf line 21, in resource "aws_iam_policy" "ecs_task_execution_role_custom_policy":
  21:   tags        = var.tags
An argument named "tags" is not expected here.

I get it both when I specify tags, e.g.:

module "test" {
  source          = "cn-terraform/ecs-fargate-task-definition/aws"
  version         = "1.0.23"
  name            = "test"
  tags            = local.default_tags
}

and when I don't specify it at all (even though this variable is supposed to be optional).

Please take a look when you have a moment. Thanks!
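For context, this error pattern typically appears when the AWS provider in use predates tags support on the aws_iam_policy resource. One thing worth checking is the provider version constraint, sketched here (the ">= 3.38.0" floor is an assumption — verify it against the AWS provider changelog):

```hcl
# Sketch: require an AWS provider recent enough to support tags on aws_iam_policy.
# The ">= 3.38.0" floor is an assumption — check the provider changelog.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.38.0"
    }
  }
}
```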

Tight Coupling Between Roles

Currently, the execution_role_arn and task_role_arn parameters are unnecessarily coupled: the module only creates its internal aws_iam_role resources when neither is provided explicitly.

In particular, if one passes only the execution_role_arn, the deployment fails because the internal aws_iam_role resource for the task role is not created.

A simple solution would be creating independent internal aws_iam_role resources for task and exec roles, with conditional creation in their respective variables. For instance:

# main.tf
...

#------------------------------------------------------------------------------
# AWS ECS Task Execution Role
#------------------------------------------------------------------------------
resource "aws_iam_role" "ecs_task_execution_role" {
  count = var.execution_role_arn == null ? 1 : 0
  ...
}

#------------------------------------------------------------------------------
# AWS ECS Task Role
#------------------------------------------------------------------------------
resource "aws_iam_role" "ecs_task_role" {
  count = var.task_role_arn == null ? 1 : 0
  ...
}

# Task Definition
resource "aws_ecs_task_definition" "td" {
  ...
  execution_role_arn = var.execution_role_arn == null ? aws_iam_role.ecs_task_execution_role[0].arn : var.execution_role_arn
  ...
  task_role_arn      = var.task_role_arn == null ? aws_iam_role.ecs_task_role[0].arn : var.task_role_arn
  ...
}
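Under this proposal, a caller could supply just one of the two roles and let the module create the other. A hypothetical usage sketch (the argument values are illustrative):

```hcl
module "td" {
  source       = "cn-terraform/ecs-fargate-task-definition/aws"
  name_preffix = "myapp" # note: the module spells this input "preffix"

  # Pre-existing execution role (illustrative ARN); task_role_arn is
  # omitted, so the module would create aws_iam_role.ecs_task_role[0].
  execution_role_arn = "arn:aws:iam::123456789012:role/my-exec-role"
}
```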
    

input task_execution_role

Can you enhance this feature?
There are many use cases for this: for example, an ECS task that needs to read data from DynamoDB, or container secrets that need to be fetched from SSM Parameter Store.

Container Memory/CPU Settings are also used for task CPU/Memory - breaking multiple containers

The current version 1.0.36 uses the inputs container_cpu and container_memory both for the primary container definition (which is required) and for the ECS task definition. This creates a problem when adding more containers via additional_containers: if you give those containers any cpu / memory, the sum across all containers will always exceed the task-level cpu / memory.

Thus, when launching tasks configured this way, the additional containers can only have 0 memory / 0 cpu. This is a simple fix: add task_memory and task_cpu variables to the module and use those for the task. As long as the primary container definition and any additional containers have a combined cpu / memory below the task totals, all containers will be able to start.

This also means that the multiple_containers example provided with this version does NOT work as advertised.
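A sketch of the suggested fix (task_cpu and task_memory are proposed names, not existing module inputs):

```hcl
variable "task_cpu" {
  description = "(Optional) CPU units for the whole task; defaults to the primary container's CPU."
  type        = number
  default     = null
}

variable "task_memory" {
  description = "(Optional) Memory (MiB) for the whole task; defaults to the primary container's memory."
  type        = number
  default     = null
}

# In the task definition, fall back to the container-level values when unset:
resource "aws_ecs_task_definition" "td" {
  # ...
  cpu    = coalesce(var.task_cpu, var.container_cpu)
  memory = coalesce(var.task_memory, var.container_memory)
}
```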

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

This repository currently has no open or pending branches.

Detected dependencies

github-actions
.github/workflows/pipeline.yml
  • actions/checkout v4
  • actions/checkout v4
terraform
main.tf
  • cloudposse/ecs-container-definition/aws 0.61.1
versions.tf
  • aws >= 4.0.0
  • hashicorp/terraform >= 0.13


`containers` variable provided as a list limits ability to have unique containers in the same task

Version: 1.0.31 (Not sure if this is addressed somehow in future versions)

One common use case when spinning up a web application is an NGINX reverse-proxy container. Running both your webapp container and the NGINX container in the same task makes proxy passing much easier: the localhost:<port> strategy for passing traffic from NGINX to the application works, and you avoid the additional complexity of task-to-task communication.

It would be useful if this module accepted a JSON-encoded container definition string instead of:

variable "containers" {
  type        = list(any)
  description = "Container definitions to use for the task. If this is used, all other container options will be ignored."
  default     = []
}

The reason is a Terraform limitation: items in a list (which is converted to a tuple during plan / apply) must have the exact same structure. This breaks use cases such as:

  1. Having repository credentials (for an NGINX container pulled from your Artifactory instance) vs. not having them (the webapp container pulling its image from ECR).
  2. Having different environment variables for each container.

If you pass two unique container configurations (for the above use cases), the configuration cannot be applied due to Terraform's uniform-list limitation.

Abbreviated Example that does not work:

[
    {
        "Environment": [
            {
                "Name": "VAR_A",
                "Value": "Value A"
            }
        ],
        "RepositoryCredentials": null,
        "Secrets": []
    },
    {
        "Environment": [
            {
                "Name": "VAR_B",
                "Value": "Value B"
            }
        ],
        "RepositoryCredentials": {
            "CredentialsParameter": "arn:aws:secretsmanager:us-west-2:123456789:secret:jfrog_repository_credential-p2Ghcc"
        },
        "Secrets": [
            {
                "Name": "DATABASE_PASSWORD",
                "ValueFrom": "arn:aws:secretsmanager:us-west-2:123456789:secret:my-secret-1d6a-OwBc4V"
            }
        ]
    }
]

In order to achieve the use case, we have to concat environment variables for both containers to make them the same, and both containers must pull from the same container repository (either with or without credentials), as well as have the same secrets, and so forth.

Looking internally, we see that the containers variable is converted to a JSON-encoded string which is passed to the raw aws_ecs_task_definition resource, e.g.

container_definitions = length(var.containers) == 0 ? "[${module.container_definition.json_map_encoded}]" : jsonencode(var.containers)

NOTE: we use "cloudposse/ecs-container-definition/aws" for the container definitions and the list of containers is constructed as such:

  webapp_and_nginx_container = [
    module.nginx_reverse_proxy_container_definition.json_map_object,
    module.webapp_container_definition.json_map_object
  ]

Proposed Solution

Can we include an additional variable container_definitions_json (maybe a better name) which is simply a JSON encoded string that can be passed directly through to the aws_ecs_task_definition, so we can have more flexibility in the containers we're configuring for the task?

I'm happy to throw up a PR to solve this, but want to get community feedback to see if it makes sense from a wider use case standpoint. We make heavy use of the terraform-aws-modules package and would like to continue to do so - this is just blocking some pretty standard use cases.
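A sketch of what this could look like (container_definitions_json is the suggested name, not an existing input; the fallback expression mirrors the current one quoted above):

```hcl
variable "container_definitions_json" {
  description = "(Optional) Raw JSON container definitions, passed through verbatim. Overrides `containers`."
  type        = string
  default     = null
}

resource "aws_ecs_task_definition" "td" {
  # ...
  # Prefer the raw JSON when provided; otherwise keep the current behavior.
  container_definitions = var.container_definitions_json != null ? var.container_definitions_json : (
    length(var.containers) == 0 ? "[${module.container_definition.json_map_encoded}]" : jsonencode(var.containers)
  )
}
```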

Add support for multiple containers in a task definition

Hello,

Currently I do not see an option for defining multiple containers in a single task definition. This would come in handy in cases where we need two different Docker containers to run in the same task (e.g. PHP + Nginx).

Would it be at all possible to support such scenario? Something similar to this: https://github.com/cloudposse/terraform-aws-ecs-container-definition/blob/master/examples/multiple_definitions/main.tf

Thanks!

container_definition 0.23.0 doesn't support terraform 0.13.0

Terraform 0.13.0 is supported by the latest version of cloudposse/terraform-aws-ecs-container-definition.

This module, although it requires terraform version >= 0.12.0, has a dependency on a module version (cloudposse/terraform-aws-ecs-container-definition 0.23.0) which is still pinned to ~> 0.12.0.

Cause:

Blocks:

ecs_fargate:2.0.17 fails in terraform init because of ecs-fargate.td.container_definition/terraform-aws-ecs-container-definition-0.23.0/versions.tf

ecs_fargate: v2.0.17 dependencies
  • ecs-alb (1.0.2): cn-terraform/ecs-alb/aws
  • ecs-cluster (1.0.5): cn-terraform/ecs-cluster/aws
  • ecs-fargate-service (2.0.4): cn-terraform/ecs-fargate-service/aws
  • td (1.0.11): cn-terraform/ecs-fargate-task-definition/aws

Requested Behavior:

Terraform 0.13.0 Support

Possible Solution

Upgrade cloudposse/terraform-aws-ecs-container-definition to latest (0.41.0)
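Concretely, the fix would likely be bumping the pinned version of the vendored module in this module's main.tf — a sketch:

```hcl
# main.tf — bump the dependency so its versions.tf allows Terraform 0.13.
module "container_definition" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "0.41.0" # previously 0.23.0, which is pinned to ~> 0.12.0
  # ...
}
```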

Conflicts with module "ecs-fargate"

Not sure if I'm misunderstanding this module, but I'm trying to use it together with the ecs-fargate module. However, when planning, this module seems to conflict heavily with the previously created infrastructure, causing much of it to be destroyed on apply. Any way to resolve this?

Please provide example of Custom policies to attach to the ECS task execution role - "ecs_task_execution_role_custom_policies"

Please provide example of custom policies.
When I try to use ecs_task_execution_role_custom_policies = ["escReadSecrets"] it passes validation, but the underlying aws_iam_policy resource fails with the error:

"policy" contains an invalid JSON policy

How do you pass a list of strings that contains valid JSON? The variable is defined as a list of strings:

variable "ecs_task_execution_role_custom_policies" {
  description = "(Optional) Custom policies to attach to the ECS task execution role. For example for reading secrets from AWS Systems Manager Parameter Store or Secrets Manager"
  type        = list(string)
  default     = []
}
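Judging from the error, the variable appears to expect full JSON policy documents rather than policy names. A hedged sketch of passing one (the policy content is illustrative only):

```hcl
module "td" {
  source = "cn-terraform/ecs-fargate-task-definition/aws"
  # ...

  # Each list element is a complete JSON policy document (illustrative content).
  ecs_task_execution_role_custom_policies = [
    jsonencode({
      Version = "2012-10-17"
      Statement = [
        {
          Effect   = "Allow"
          Action   = ["ssm:GetParameters", "secretsmanager:GetSecretValue"]
          Resource = "*"
        }
      ]
    })
  ]
}
```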

cloudwatch log group not created

I created a task using this module, but it fails to run because /ecs/service/${var.name_preffix} is never created, so the task errors out with:

Status reason	CannotStartContainerError: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: ResourceNotFoundException: The specified log group does not exist. status code: 400, request id: 7463029c-9f12-4e3f-b914-f
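A possible workaround, assuming the log group name shown above, is to create the log group explicitly before the task runs — a sketch:

```hcl
# Workaround sketch: pre-create the log group the task expects.
resource "aws_cloudwatch_log_group" "ecs_service" {
  name              = "/ecs/service/myapp" # must match /ecs/service/${var.name_preffix}
  retention_in_days = 30
}
```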
