aws-samples / aws2tf

aws2tf - automates the importing of existing AWS resources into Terraform and outputs the Terraform HCL code.

License: MIT No Attribution

Shell 48.25% Python 51.74% HCL 0.01%

aws2tf's Introduction

aws2tf

May 2024 - Try the new Python version!

Test it out with:

./aws2tf.py -t vpc

To see the options and the list of supported resource types use:

./aws2tf.py -h
./aws2tf.py -l

The documentation for this version can be found here

The Python port

A port of this tool to Python is underway, greatly aided by Amazon Q Developer (formerly CodeWhisperer). The Python version will coexist with this version and will gradually replace the bash shell scripts in this codebase. The Python version utilizes the new Terraform v5 method of importing resources, while still dereferencing Terraform addresses and searching for dependencies as aws2tf has always done. It will also be significantly faster, making far fewer calls to Terraform.
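For reference, the "Terraform v5 method" refers to the declarative import blocks introduced in Terraform v1.5, which replace repeated `terraform import` CLI calls. A rough illustration (the resource address and VPC ID here are made up):

```hcl
# Declarative import: "terraform plan -generate-config-out=generated.tf"
# can then both import the resource and emit matching HCL in one pass.
import {
  to = aws_vpc.main
  id = "vpc-0abc0123456789def"
}
```

Batching imports this way is what makes the Python version need far fewer Terraform invocations than one-CLI-call-per-resource.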


This utility, 'AWS to Terraform' (aws2tf), reads an AWS account and generates the required Terraform configuration files (.tf) for each of the composite AWS resources.

It then imports the Terraform state using

"terraform import ...." commands

and finally runs a

"terraform plan" command.

There should be no subsequent additions or deletions reported by the terraform plan command, as all the appropriate Terraform configuration files will have been created automatically.
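The per-resource step can be sketched as a dry run in bash. This is illustrative only: the helper function, resource type, and ID below are hypothetical, though the naming mirrors aws2tf's convention of naming each resource block after its AWS identifier.

```shell
#!/usr/bin/env bash
# Illustrative dry run: print the "terraform import" command aws2tf would
# issue for one discovered resource. Type and ID are hypothetical.
build_import_cmd() {
  local tf_type="$1" res_id="$2"
  # aws2tf names each resource block after its AWS identifier
  printf 'terraform import %s.%s %s\n' "$tf_type" "$res_id" "$res_id"
}

build_import_cmd aws_vpc vpc-0abc12345
```

A real run issues one such import per generated .tf file, then finishes with a single terraform plan.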

Requirements & Prerequisites

  • The tool is written in bash shell script and Python 3 and has been tested on macOS 11.6.
  • AWS cli (v2) version 2.13.0 or higher needs to be installed and you need a login with at least "Read" privileges.
  • terraform version v1.5.5 or higher needs to be installed.
  • jq version 1.6 or higher
  • yq (optional) for advanced stack processing

Optionally, tfsec can be installed to produce security reports (CRITICAL and HIGH issues)

Quickstart guide to using the tool

Running the tool in your local shell (bash) requires these steps:

  1. Unzip or clone this git repo into an empty directory
  2. Log in to the AWS CLI (aws configure)
  3. Run the tool

Usage Guide

The First Run

To generate the terraform files for an account and stop after a "terraform validate":

./aws2tf.sh -v yes

Or, if you have a lot of resources in your account, try using -t to restrict the number of resources you scan. So if you're interested in a particular type or group, for example Transit Gateway resources:

./aws2tf.sh -v yes -t tgw
terraform validate
Success! The configuration is valid.

Or there may be some kind of error, as it isn't possible to test everyone's AWS combinations in advance.

If you happen to find one of these errors please open an issue here and paste in the error and it will get fixed.

Once the validation is OK you can remove the -v option, which then also runs the terraform plan.



To generate the terraform files for an entire AWS account, import the resources and perform a terraform plan:

./aws2tf.sh 

*Note: this will take some time - consider using a -t filter instead, then adding resources with a subsequent run using -c and -f - see below.

To extract all AWS account Policies and Roles:

./aws2tf.sh -t iam

To generate the terraform files for an EKS cluster named "mycluster"

./aws2tf.sh -t eks -i mycluster

To add App Mesh resources

./aws2tf.sh -t appmesh -c yes -f yes

The -c yes option is used to "continue" from where we left off. The -f yes option is the "fast forward" action: it skips past blocks of resources that were completed during the last run.

Used in combination, the two should quickly get your run progressing from where you left off.
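The fast-forward idea can be sketched in a few lines of bash. This is a hypothetical illustration: the marker-file scheme below is invented, and the real scripts track progress differently.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of -f "fast forward": skip a block of resources when
# a marker from a previous run says it already completed.
done_dir="$(mktemp -d)"

run_block() {
  local block="$1"
  if [ -f "$done_dir/$block.done" ]; then
    echo "skipping $block (already done)"
    return 0
  fi
  echo "processing $block"
  touch "$done_dir/$block.done"   # record completion for the next run
}

run_block vpc
run_block vpc
```

The first call processes the block; the second is skipped, which is why a -c/-f rerun spends almost no time on work already finished.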


To get a selection of resources use the -t option. The currently supported types are:

  • acm - ACM resources -t acm
  • apigw - API GW restAPI resources -t apigw
  • appmesh - App Mesh resources -t appmesh
  • appstream - AppStream v2.0 resources -t appstream
  • artifact - CodeArtifact resources
  • athena - Athena resources
  • code - Code* resources -t code
  • cfront - CloudFront resources
  • cloudform - CloudFormation Stacks
  • cognito - Cognito resources -t cognito
  • config - AWS config resources -t config
  • eb - EventBridge resources -t eb
  • ec2 - Instances (running state only)
  • ecs - An ECS cluster and its related resources -t ecs -i Cluster-Name
  • efs - EFS file systems - individual filesystems with -t efs -i fs-xxxxxxxx
  • eks - An EKS cluster and its related resources -t eks -i Cluster-Name
  • emr - get all active EMR clusters
  • glue - Glue tables and partitions
  • iam - All IAM related users, groups, policies & roles -t iam
  • kinesis - Kinesis resources
  • kms - KMS keys and aliases -t kms
  • lambda - Lambda resources -t lambda
  • lf - Lake Formation resources -t lf
  • org - AWS Organizations -t org
  • params - SSM parameters -t params
  • privatelink - Private Link resources
  • rds - RDS database resources -t rds
  • s3 - s3 buckets and policies
  • sagemaker - SageMaker resources -t sagemaker
  • secrets - Secrets Manager secrets -t secrets
  • sc - Service Catalog resources -t sc
  • sns - SNS resources -t sns
  • sqs - SQS queues -t sqs
  • spot - spot requests -t spot
  • tgw - Transit Gateway resources -t tgw <-i transit-gateway-id>
  • users - IAM Users and Groups
  • vpc - A VPC and its related resources -t vpc <-i VPC-id>

To get all the VPC related resources in a particular VPC

./aws2tf.sh -t vpc -i vpc-xxxxxxxxx

To use a specific region and profile

./aws2tf.sh -t vpc -i vpc-xxxxxxxxx -r eu-west-1 -p default

Using the cumulative mode

Cumulative mode allows you to add additional state & Terraform files from a previous aws2tf run.

If for example you want to get several VPCs you can use the cumulative mode:

To get all the VPC related resources in three particular VPCs:

./aws2tf.sh -t vpc -i vpc-aaaaaaaaa 
./aws2tf.sh -t vpc -i vpc-bbbbbbbbb -c yes
./aws2tf.sh -t vpc -i vpc-ccccccccc -c yes
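The three runs above can be scripted as a loop. The dry-run sketch below just prints the commands it would execute (the VPC IDs are the placeholders from the example):

```shell
#!/usr/bin/env bash
# Dry run: print one aws2tf command per VPC, adding -c yes after the first
# run so later runs accumulate into the same state.
first=yes
for vpc in vpc-aaaaaaaaa vpc-bbbbbbbbb vpc-ccccccccc; do
  if [ "$first" = yes ]; then
    echo "./aws2tf.sh -t vpc -i $vpc"
    first=no
  else
    echo "./aws2tf.sh -t vpc -i $vpc -c yes"
  fi
done
```

Dropping the `echo`s (and substituting real VPC IDs) would turn this into an actual batch run.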

Be patient - lots of output is given as aws2tf:

  • Loops through each provider
  • Creates the required *.tf configuration files in the "generated" directory
  • Performs the necessary 'terraform import' commands
  • And finally runs a 'terraform plan'
  • Optionally if tfsec is installed - produces a security report

Terraform State

aws2tf maintains state in its own local directory:

generated/tf../

When using cumulative mode this same state file is used / added to.

It is not possible at this time to use your own state location (e.g. on S3).


Still under development

To get all the resources in a deployed Stack Set

./aws2tf.sh -s <stack set name>

Please open an issue for any resources you see in the unprocessed.log to help prioritize development

Or simply check back after some time to see if they are listed below.


Terraform resources supported as of 08-Oct-2023

  • aws_acm_certificate
  • aws_api_gateway_resource
  • aws_api_gateway_rest_api
  • aws_appautoscaling_policy
  • aws_appautoscaling_target
  • aws_appmesh_gateway_route
  • aws_appmesh_mesh
  • aws_appmesh_route
  • aws_appmesh_virtual_gateway
  • aws_appmesh_virtual_node
  • aws_appmesh_virtual_router
  • aws_appmesh_virtual_service
  • aws_appstream_fleet
  • aws_appstream_image_builder
  • aws_appstream_stack
  • aws_appstream_user
  • aws_athena_named_query
  • aws_athena_workgroup
  • aws_autoscaling_group
  • aws_autoscaling_lifecycle_hook
  • aws_cloud9_environment_ec2
  • aws_cloudformation_stack
  • aws_cloudfront_distribution
  • aws_cloudtrail
  • aws_cloudwatch_event_bus
  • aws_cloudwatch_event_rule
  • aws_cloudwatch_event_target
  • aws_cloudwatch_log_group
  • aws_cloudwatch_metric_alarm
  • aws_codeartifact_domain
  • aws_codeartifact_repository
  • aws_codebuild_project
  • aws_codecommit_repository
  • aws_codepipeline
  • aws_codestarnotifications_notification_rule
  • aws_cognito_identity_pool
  • aws_cognito_identity_pool_roles_attachment
  • aws_cognito_user_pool
  • aws_cognito_user_pool_client
  • aws_config_config_rule
  • aws_config_configuration_recorder
  • aws_config_configuration_recorder_status
  • aws_config_delivery_channel
  • aws_customer_gateway
  • aws_db_event_subscription
  • aws_db_instance
  • aws_db_parameter_group
  • aws_db_subnet_group
  • aws_default_network_acl
  • aws_directory_service_directory
  • aws_dms_endpoint
  • aws_dms_replication_instance
  • aws_dms_replication_task
  • aws_dynamodb_table
  • aws_ec2_client_vpn_endpoint
  • aws_ec2_client_vpn_network_association
  • aws_ec2_host
  • aws_ec2_transit_gateway
  • aws_ec2_transit_gateway_route
  • aws_ec2_transit_gateway_route_table
  • aws_ec2_transit_gateway_vpc_attachment
  • aws_ec2_transit_gateway_vpn_attachment
  • aws_ecr_repository
  • aws_ecs_capacity_provider
  • aws_ecs_cluster
  • aws_ecs_cluster_capacity_providers
  • aws_ecs_service
  • aws_ecs_task_definition
  • aws_efs_access_point
  • aws_efs_file_system
  • aws_efs_file_system_policy
  • aws_efs_mount_target
  • aws_eip
  • aws_eks_cluster
  • aws_eks_fargate_profile
  • aws_eks_identity_provider_config
  • aws_eks_node_group
  • aws_emr_cluster
  • aws_emr_instance_group
  • aws_emr_managed_scaling_policy
  • aws_emr_security_configuration
  • aws_flow_log
  • aws_glue_catalog_database
  • aws_glue_catalog_table
  • aws_glue_connection
  • aws_glue_crawler
  • aws_glue_job
  • aws_glue_partition
  • aws_iam_access_key
  • aws_iam_group
  • aws_iam_instance_profile
  • aws_iam_policy
  • aws_iam_role
  • aws_iam_role_policy
  • aws_iam_role_policy_attachment
  • aws_iam_service_linked_role
  • aws_iam_user
  • aws_iam_user_group_membership
  • aws_iam_user_policy_attachment
  • aws_instance
  • aws_internet_gateway
  • aws_key_pair
  • aws_kinesis_firehose_delivery_stream
  • aws_kinesis_stream
  • aws_kms_alias
  • aws_kms_key
  • aws_lakeformation_data_lake_settings
  • aws_lakeformation_permissions
  • aws_lakeformation_resource
  • aws_lambda_alias
  • aws_lambda_event_source_mapping
  • aws_lambda_function
  • aws_lambda_function_event_invoke_config
  • aws_lambda_layer_version
  • aws_lambda_permission
  • aws_launch_configuration
  • aws_launch_template
  • aws_lb
  • aws_lb_listener
  • aws_lb_listener_rule
  • aws_lb_target_group
  • aws_nat_gateway
  • aws_network_acl
  • aws_network_interface
  • aws_organizations_account
  • aws_organizations_organization
  • aws_organizations_organizational_unit
  • aws_organizations_policy
  • aws_organizations_policy_attachment
  • aws_ram_principal_association
  • aws_ram_resource_share
  • aws_rds_cluster
  • aws_rds_cluster_instance
  • aws_rds_cluster_parameter_group
  • aws_redshift_cluster
  • aws_redshift_subnet_group
  • aws_route53_zone
  • aws_route_table
  • aws_route_table_association
  • aws_s3_access_point
  • aws_s3_bucket
  • aws_s3_bucket_acl
  • aws_s3_bucket_lifecycle_configuration
  • aws_s3_bucket_logging
  • aws_s3_bucket_policy
  • aws_s3_bucket_server_side_encryption_configuration
  • aws_s3_bucket_versioning
  • aws_s3_bucket_website_configuration
  • aws_sagemaker_app
  • aws_sagemaker_app_image_config
  • aws_sagemaker_domain
  • aws_sagemaker_image
  • aws_sagemaker_image_version
  • aws_sagemaker_model
  • aws_sagemaker_notebook_instance
  • aws_sagemaker_studio_lifecycle_config
  • aws_sagemaker_user_profile
  • aws_secretsmanager_secret
  • aws_secretsmanager_secret_version
  • aws_security_group
  • aws_security_group_rule
  • aws_service_discovery_private_dns_namespace
  • aws_service_discovery_service
  • aws_servicecatalog_constraint
  • aws_servicecatalog_portfolio
  • aws_servicecatalog_principal_portfolio_association
  • aws_servicecatalog_product
  • aws_servicecatalog_product_portfolio_association
  • aws_sfn_state_machine
  • aws_sns_topic
  • aws_sns_topic_policy
  • aws_sns_topic_subscription
  • aws_spot_fleet_request
  • aws_sqs_queue
  • aws_ssm_association
  • aws_ssm_document
  • aws_ssm_parameter
  • aws_ssoadmin_managed_policy_attachment
  • aws_ssoadmin_permission_set
  • aws_ssoadmin_permission_set_inline_policy
  • aws_subnet
  • aws_vpc
  • aws_vpc_dhcp_options
  • aws_vpc_endpoint
  • aws_vpc_endpoint_service
  • aws_vpc_ipv4_cidr_block_association
  • aws_vpc_peering_connection
  • aws_vpclattice_access_log_subscription
  • aws_vpclattice_auth_policy
  • aws_vpclattice_listener
  • aws_vpclattice_listener_rule
  • aws_vpclattice_resource_policy
  • aws_vpclattice_service
  • aws_vpclattice_service_network
  • aws_vpclattice_service_network_service_association
  • aws_vpclattice_service_network_vpc_association
  • aws_vpclattice_target_group
  • aws_vpclattice_target_group_attachment
  • aws_vpn_connection

Resources within a Stack Set that can currently be converted to Terraform (using -s) as of 08-Oct-2023

  • AWS::ApiGateway::Account
  • AWS::ApiGateway::Resource
  • AWS::ApiGateway::RestApi
  • AWS::AppMesh::Mesh
  • AWS::AppMesh::VirtualGateway
  • AWS::AppMesh::VirtualNode
  • AWS::AppMesh::VirtualRouter
  • AWS::AppMesh::VirtualService
  • AWS::ApplicationAutoScaling::ScalableTarget
  • AWS::ApplicationAutoScaling::ScalingPolicy
  • AWS::Athena::NamedQuery
  • AWS::Athena::WorkGroup
  • AWS::AutoScaling::AutoScalingGroup
  • AWS::AutoScaling::LaunchConfiguration
  • AWS::AutoScaling::LifecycleHook
  • AWS::CDK::Metadata
  • AWS::Cloud9::EnvironmentEC2
  • AWS::CloudWatch::Alarm
  • AWS::CodeArtifact::Domain
  • AWS::CodeArtifact::Repository
  • AWS::CodeBuild::Project
  • AWS::CodeCommit::Repository
  • AWS::CodePipeline::Pipeline
  • AWS::CodeStarNotifications::NotificationRule
  • AWS::Cognito::IdentityPool
  • AWS::Cognito::IdentityPoolRoleAttachment
  • AWS::Cognito::UserPool
  • AWS::Cognito::UserPoolClient
  • AWS::Config::ConfigurationRecorder
  • AWS::Config::DeliveryChannel
  • AWS::DynamoDB::Table
  • AWS::EC2::DHCPOptions
  • AWS::EC2::EIP
  • AWS::EC2::FlowLog
  • AWS::EC2::Instance
  • AWS::EC2::InternetGateway
  • AWS::EC2::KeyPair
  • AWS::EC2::LaunchTemplate
  • AWS::EC2::NatGateway
  • AWS::EC2::NetworkAcl
  • AWS::EC2::NetworkAclEntry
  • AWS::EC2::Route
  • AWS::EC2::RouteTable
  • AWS::EC2::SecurityGroup
  • AWS::EC2::SecurityGroupIngress
  • AWS::EC2::Subnet
  • AWS::EC2::SubnetNetworkAclAssociation
  • AWS::EC2::SubnetRouteTableAssociation
  • AWS::EC2::VPC
  • AWS::EC2::VPCEndpoint
  • AWS::EC2::VPCEndpointService
  • AWS::EC2::VPCGatewayAttachment
  • AWS::ECR::Repository
  • AWS::ECS::Cluster
  • AWS::ECS::Service
  • AWS::ECS::TaskDefinition
  • AWS::EFS::AccessPoint
  • AWS::EFS::FileSystem
  • AWS::EFS::MountTarget
  • AWS::EKS::Cluster
  • AWS::EKS::Nodegroup
  • AWS::EMR::Cluster
  • AWS::EMR::SecurityConfiguration
  • AWS::ElasticLoadBalancingV2::Listener
  • AWS::ElasticLoadBalancingV2::ListenerRule
  • AWS::ElasticLoadBalancingV2::LoadBalancer
  • AWS::ElasticLoadBalancingV2::TargetGroup
  • AWS::Events::EventBus
  • AWS::Events::Rule
  • AWS::Glue::Connection
  • AWS::Glue::Crawler
  • AWS::Glue::Database
  • AWS::Glue::Job
  • AWS::Glue::Partition
  • AWS::Glue::Table
  • AWS::IAM::AccessKey
  • AWS::IAM::Group
  • AWS::IAM::InstanceProfile
  • AWS::IAM::ManagedPolicy
  • AWS::IAM::Policy
  • AWS::IAM::Role
  • AWS::IAM::ServiceLinkedRole
  • AWS::IAM::User
  • AWS::KMS::Alias
  • AWS::KMS::Key
  • AWS::KinesisFirehose::DeliveryStream
  • AWS::LakeFormation::DataLakeSettings
  • AWS::LakeFormation::Permissions
  • AWS::LakeFormation::PrincipalPermissions
  • AWS::LakeFormation::Resource
  • AWS::Lambda::EventInvokeConfig
  • AWS::Lambda::EventSourceMapping
  • AWS::Lambda::Function
  • AWS::Lambda::LayerVersion
  • AWS::Lambda::Permission
  • AWS::Logs::LogGroup
  • AWS::RDS::DBClusterParameterGroup
  • AWS::RDS::DBParameterGroup
  • AWS::RDS::DBSubnetGroup
  • AWS::RDS::EventSubscription
  • AWS::Redshift::Cluster
  • AWS::Redshift::ClusterSubnetGroup
  • AWS::S3::Bucket
  • AWS::S3::BucketPolicy
  • AWS::SNS::Subscription
  • AWS::SNS::Topic
  • AWS::SNS::TopicPolicy
  • AWS::SQS::Queue
  • AWS::SQS::QueuePolicy
  • AWS::SSM::Parameter
  • AWS::SageMaker::AppImageConfig
  • AWS::SageMaker::Domain
  • AWS::SageMaker::Image
  • AWS::SageMaker::ImageVersion
  • AWS::SageMaker::NotebookInstance
  • AWS::SageMaker::UserProfile
  • AWS::SecretsManager::Secret
  • AWS::ServiceDiscovery::PrivateDnsNamespace
  • AWS::ServiceDiscovery::Service
  • AWS::StepFunctions::StateMachine

aws2tf's People

Contributors

amazon-auto, awsandy, ponkio-o, scorpsolutions, scottbrenner, statox, toddhowl, yutachaos


aws2tf's Issues

Error: invalid resource name

Hi Andy, encountered the following:

Error: Invalid resource name

on aws_codepipeline__5StagePipeline.tf line 1, in resource "aws_codepipeline" "5StagePipeline":
1: resource "aws_codepipeline" "5StagePipeline" {}

A name must start with a letter or underscore and may contain only letters,
digits, underscores, and dashes.

Error: Invalid Character

Encountered this error with the log piped: "./aws2tf.sh -v yes | tee output.log"

Error: Invalid character

on aws_cloudwatch_event_target__default__SSMOpsItems-Autoscaling-instance-launch-failure__SSMOpsItems-Autoscaling-instance-launch-failureTarget.tf line 22, in resource "aws_cloudwatch_event_target" "default__SSMOpsItems-Autoscaling-instance-launch-failure__SSMOpsItems-Autoscaling-instance-launch-failureTarget":
22: input_template = "={ "title": "Auto Scaling EC2 instance launch failed", "description": "CloudWatch Event Rule SSMOpsItems-Autoscaling-instance-launch-failure was triggered. Your EC2 instance launch has failed. See below for more details.", "source": "EC2 Auto Scaling", "category": "Availability", "severity":"2", "resources": , "operationalData": { "/aws/dedup": {"type": "SearchableString", "value": "{\"dedupString\":\"SSMOpsItems-Autoscaling-instance-launch-failure\"}"}, "/aws/automations": { "value": "[ { \"automationType\": \"AWS:SSM:Automation\", \"automationId\": \"AWS-ASGEnterStandby\" }, { \"automationType\": \"AWS:SSM:Automation\", \"automationId\": \"AWS-ASGExitStandby\" } ]" }, "autoscaling-group-name": {"value": }, "status-message": {"value": }, "start-time": {"value": }, "end-time": {"value": }, "cause": {"value": }} }"

This character is not used within the language.

-d debug

Is it intentional that -d always exits on any error?

Found Error: │ Error: error listing AWS Organization (o-<redacted>) accounts: AccessDeniedException: You don't have permissions to access this resource.
debug flag is on so exiting ....

Failed to upload complete tgw resource

Hi Team,

I am getting the below error while importing tgw resources with the following command:
./aws2tf.sh -v yes -t tgw

Logs:
on aws_ec2_transit_gateway_route__tgw-rtb-0f65b6e20bfd4083b_10_0_0_0_16.tf line 6, in resource "aws_ec2_transit_gateway_route" "tgw-rtb-0f65b6e20bfd4083b_10_0_0_0_16":
6: transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.tgw-attach-0775386e6f5a52a99.id

A managed resource "aws_ec2_transit_gateway_vpc_attachment" "tgw-attach-0775386e6f5a52a99" has not been declared in the root module.
fix default SG's
Terraform validate ...

Error: Reference to undeclared resource

on aws_ec2_transit_gateway_route__tgw-rtb-0f65b6e20bfd4083b_10_0_0_0_16.tf line 6, in resource "aws_ec2_transit_gateway_route" "tgw-rtb-0f65b6e20bfd4083b_10_0_0_0_16":
6: transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.tgw-attach-06b62799c8aa0b843.id

A managed resource "aws_ec2_transit_gateway_vpc_attachment" "tgw-attach-06b62799c8aa0b843" has not been declared in the root module.

Error: Reference to undeclared resource

on aws_ec2_transit_gateway_route__tgw-rtb-0f65b6e20bfd4083b_10_0_0_0_16.tf line 6, in resource "aws_ec2_transit_gateway_route" "tgw-rtb-0f65b6e20bfd4083b_10_0_0_0_16":
6: transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.tgw-attach-0775386e6f5a52a99.id

A managed resource "aws_ec2_transit_gateway_vpc_attachment" "tgw-attach-0775386e6f5a52a99" has not been declared in the root module.

Errors first time through

First run through received the following error messages. Tool seems to run otherwise.
./aws2tf.sh
Region not specified - Getting region from aws cli =
us-east-1
Cleaning generated/tf.727535128727

Account ID = 727535128727
AWS Resource Group Filter =
Region = us-east-1
AWS Profile = dev
Extract KMS Secrets to .tf files (insecure) = no
Fast Forward = no
Verify only = no
Type filter =
Combine = no
AWS command = aws --profile dev --region us-east-1 --output json

terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "= 3.45.0"
}
}
}

provider "aws" {
region = "us-east-1"
shared_credentials_file = "~/.aws/credentials"
profile = "dev"
}
/Users/bradcorner/github/aws2tf/generated/tf.727535128727
terraform init

Initializing the backend...

Initializing provider plugins...

  • Finding hashicorp/aws versions matching "3.45.0"...
  • Installing hashicorp/aws v3.45.0...
  • Installed hashicorp/aws v3.45.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
aws.tf data import.log
Thu Sep 16 13:00:05 EDT 2021
t=
loop through providers
type-get-transitgw.sh
post fix vpn
ls: aws_ec2_transit_gateway_vpn_attachment*.tf: No such file or directory
**** Type Validate Call ****
Success! The configuration is valid.

Success! The configuration is valid.

type-get-transitgw.sh runtime 7 seconds
010-get-organization.sh
aws --profile dev --region us-east-1 --output json organizations describe-organization
aws_organizations_organization o-fva3nsw4un import

│ Error: error listing AWS Organization (o-fva3nsw4un) accounts: AccessDeniedException: You don't have permissions to access this resource.


aws_organizations_organization.o-fva3nsw4un: Importing from ID "o-fva3nsw4un"...
aws_organizations_organization.o-fva3nsw4un: Import prepared!
No state file was found!

State management commands require a state file. Run this command
in a directory where Terraform has been run or use the -state flag
to point the command to a specific state location.
jq: error (at :1): Cannot iterate over null (null)
Found Error: │ Error: error listing AWS Organization (o-fva3nsw4un) accounts: AccessDeniedException: You don't have permissions to access this resource. exiting .... (pass for now)
Success! The configuration is valid.

010-get-organization.sh runtime 6 seconds
011-get-orgaccounts.sh
aws --profile dev --region us-east-1 --output json organizations list-accounts

An error occurred (AccessDeniedException) when calling the ListAccounts operation: You don't have permissions to access this resource.
This is not an AWS organizations account
Found Error: │ Error: error listing AWS Organization (o-fva3nsw4un) accounts: AccessDeniedException: You don't have permissions to access this resource. exiting .... (pass for now)
Success! The configuration is valid.

011-get-orgaccounts.sh runtime 2 seconds
012-get-org-ou.sh

An error occurred (AccessDeniedException) when calling the ListRoots operation: You don't have permissions to access this resource.

Found Error: │ Error: error listing AWS Organization (o-fva3nsw4un) accounts: AccessDeniedException: You don't have permissions to access this resource. exiting .... (pass for now)
Success! The configuration is valid.

012-get-org-ou.sh runtime 2 seconds
013-get-org-policies.sh

An error occurred (AccessDeniedException) when calling the ListPolicies operation: You don't have permissions to access this resource.
This is not an AWS organizations account
Found Error: │ Error: error listing AWS Organization (o-fva3nsw4un) accounts: AccessDeniedException: You don't have permissions to access this resource. exiting .... (pass for now)
Success! The configuration is valid.

013-get-org-policies.sh runtime 2 seconds
021-get-sso-permsets.sh
Refresh ..

│ Error: error reading SSO Instances: AccessDeniedException:

│ with data.aws_ssoadmin_instances.sso,
│ on data__aws_ssoadmin_instances__sso.tf line 1, in data "aws_ssoadmin_instances" "sso":
│ 1: data "aws_ssoadmin_instances" "sso" {}


No state file was found!

State management commands require a state file. Run this command
in a directory where Terraform has been run or use the -state flag
to point the command to a specific state location.

usage: aws [options] [ ...] [parameters]
To see help text, you can run:

aws help
aws help
aws help

aws: error: argument --instance-arn: expected one argument

This is not an AWS organizations account
Found Error: │ Error: error listing AWS Organization (o-fva3nsw4un) accounts: AccessDeniedException: You don't have permissions to access this resource. exiting .... (pass for now)
Found Error: │ Error: error reading SSO Instances: AccessDeniedException: exiting .... (pass for now)
Success! The configuration is valid.

021-get-sso-permsets.sh runtime 6 seconds
030-get-iam-users.sh
skipping 030-get-iam-users.sh
034-get-iam-groups.sh
skipping 034-get-iam-groups.sh
050-get-iam-roles.sh
skipping 050-get-iam-roles.sh
051-get-iam-role-policies.sh
skipping 051-get-iam-role-policies.sh
052-get-iam-attached-role-policies.sh
skipping 052-get-iam-attached-role-policies.sh
055-get-iam-policies.sh
skipping 055-get-iam-policies.sh
056-get-instance-profile.sh
aws_iam_instance_profile AmazonSSMRoleForInstancesQuickSetup import
aws_iam_instance_profile aws-elasticbeanstalk-ec2-role import
aws_iam_instance_profile aws-opsworks-cm-ec2-role import
aws_iam_instance_profile aws-opsworks-ec2-role import
aws_iam_instance_profile ec2-prometheus import
aws_iam_instance_profile ecsInstanceRole import
aws_iam_instance_profile ECS_SSM import
aws_iam_instance_profile elasticRole import
aws_iam_instance_profile gspice-dev-EC2InstanceProfile-YHFOR2JGHW0K import
aws_iam_instance_profile gspice-qacloud-EC2InstanceProfile-W33238ZJ52VE import
aws_iam_instance_profile Harish-WIndows-Test-ECSInstanceProfile-13BI2CVRHNYMI import
aws_iam_instance_profile kubernetes-master import
aws_iam_instance_profile kubernetes-minion import
aws_iam_instance_profile loadtest-transcode-new-EC2InstanceProfile-1CFDI3Y6JOMO2 import
aws_iam_instance_profile MasterRole import
aws_iam_instance_profile rev-api-gateway-loadtest-EC2InstanceProfile-145SDR1HW6E7Z import
aws_iam_instance_profile rev-api-gateway-qacloud-EC2InstanceProfile-1BUQO77YUVYIW import
aws_iam_instance_profile rev-apigateway-ec2Role import
aws_iam_instance_profile rev-video-ingester-dev-EC2InstanceProfile-UGYC4FVHEXLF import
aws_iam_instance_profile rev-video-ingester-loadtest1-EC2InstanceProfile-16VWI8RBO3NQA import
aws_iam_instance_profile rev-video-ingester-qacloud-EC2InstanceProfile-UFTNY1BX33ZG import
aws_iam_instance_profile RevS3Access import
aws_iam_instance_profile SparkBot-QA-EC2InstanceProfile-132TXLCEYJB8D import
aws_iam_instance_profile transcode-dev-EC2InstanceProfile-GZQE1RFBVTLO import
aws_iam_instance_profile Transcode-QA-EC2InstanceProfile-1HMHANNWS684A import
aws_iam_instance_profile vbrick-ec2-audit import
aws_iam_instance_profile vbrick-ec2-bastion import
aws_iam_instance_profile vbrick-ec2-elasticsearch import
aws_iam_instance_profile vbrick-ec2-haproxy import
aws_iam_instance_profile vbrick-ec2-mongodb import
aws_iam_instance_profile vbrick-ec2-rabbitmq import
aws_iam_instance_profile vbrick-ec2-rev import
aws_iam_instance_profile vbrick-ec2-revanalytics import
aws_iam_instance_profile vbrick-ec2-waf import
Waiting for 34 Terraform imports

no instance found

I received this error and am unable to search the state file, as the missing instance is unnamed.

aws_security_group_rule__sg-<redacted>_egress_1.tf exists skipping
No instance found for the given address!

This command requires that the address references one specific instance.
To view the available instances, use "terraform state list". Please modify
the address to reference a specific instance.
No instance found for the given address!

This command requires that the address references one specific instance.
To view the available instances, use "terraform state list". Please modify
the address to reference a specific instance.
aws_security_group_rule sg-<redacted> sg-<redacted> 1 terraform file
aws_security_group_rule__sg-<redacted>_ingress_1.tf exists skipping

aws_route_table - Warning: Argument is deprecated

aws_route_table.rtb-<redacted>: Refreshing state... [id=rtb-<redacted>]

Warning: Argument is deprecated

  with aws_route_table.rtb-<redacted>,
  on aws_route_table__rtb-<redacted>.tf line 5, in resource "aws_route_table" "rtb-<redacted>":
   5:   route = [
   6:     {
   7:       carrier_gateway_id         = ""
   8:       cidr_block                 = "0.0.0.0/0"
   9:       core_network_arn           = ""
  10:       destination_prefix_list_id = ""
  11:       egress_only_gateway_id     = ""
  12:       gateway_id                 = ""
  13:       instance_id                = "i-<redacted>"
  14:       ipv6_cidr_block            = null
  15:       local_gateway_id           = ""
  16:       nat_gateway_id             = ""
  17:       network_interface_id       = aws_network_interface.eni-<redacted>.id
  18:       transit_gateway_id         = ""
  19:       vpc_endpoint_id            = ""
  20:       vpc_peering_connection_id  = ""
  21:     },
  22:   ]

Use network_interface_id instead

On a happy note: I have finally completed my very first run of the AWS account after so many weeks 🤣

must specify a region

I was surprised this morning with this message:

. ../../scripts/251-get-ec2-instances.sh

You must specify a region. You can also configure your region by running "aws configure".

You must specify a region. You can also configure your region by running "aws configure".

Note that it then went on to prepare an import:

aws_instance i-0dd6018441bba6420
╷
│ Error: Resource already managed by Terraform
│
│ Terraform is already managing a remote object for
│ aws_instance.i-0dd6018441bba6420. To import to this address you must first
│ remove the existing object from the state.
╵

aws_instance.i-0dd6018441bba6420: Importing from ID "i-0dd6018441bba6420"...
aws_instance.i-0dd6018441bba6420: Import prepared!
****inst prof null *****
No state files in pi2 skipping ...

syntax error near unexpected token [[

Hi,

It seems that one || is missing; please check on your side.


with comment inline


            if [[ $tft == "aws_sagemaker_image" ]] || [[ $tft == "aws_lambda_function" ]] || [[ $tft == "aws_dynamodb_table" ]] || [[ $tft == "aws_sns_topic" ]] <should have ||> [[ $tft == "aws_iam_role" ]] || [[ $tft == "aws_codepipeline" ]]; then
                tarn=$(grep $addr data/arn-map.dat | grep $tft | cut -f2 -d',' | head -1)
                tarn=${tarn//\//\\/}
                if [[ $tarn != "null" ]] && [[ $tarn != "" ]]; then
                    cmd=$(printf "sed -i'.orig' -e 's/%s/\"%s\"/g' ${fil}" $res $tarn)

                    echo " "
                    #echo $cmd
                    echo "** Undeclared Fix: ${res} -- ${tarn}"
                    eval $cmd
                fi

            fi
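For reference, a runnable version of the corrected condition (the $tft value here is just a placeholder for testing):

```shell
#!/usr/bin/env bash
# The corrected test: an || now joins the aws_sns_topic and
# aws_iam_role comparisons, which is what the original line was missing.
tft="aws_iam_role"
if [[ $tft == "aws_sagemaker_image" ]] || [[ $tft == "aws_lambda_function" ]] || [[ $tft == "aws_dynamodb_table" ]] || [[ $tft == "aws_sns_topic" ]] || [[ $tft == "aws_iam_role" ]] || [[ $tft == "aws_codepipeline" ]]; then
    matched="yes"
fi
```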


By the way, thank you for your efforts, the patterns you used provide exactly what is needed to do proper job in AWS without hassle.

improve speed when continuing

Disclaimer: I have not verified any of this.

It seems that when using -c yes, the check runs only slightly faster than the importing itself, which isn't much of a gain.

Can we speed up the check to skip already imported files/states?
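One possible shortcut (an unverified sketch, assuming the state addresses match the file-derived names): list the state once up front, then test membership with a cheap grep instead of re-checking each resource:

```shell
#!/usr/bin/env bash
# Sketch: cache "terraform state list" once per -c yes run, then a
# fixed-string grep decides whether a resource can be skipped.
terraform state list > state-list.dat 2>/dev/null || true

already_imported() {
    grep -qxF "$1" state-list.dat
}

# Example use inside an import loop:
#   already_imported "aws_instance.i-0123456789abcdef0" && continue
```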

secretsmanager import fails

I am getting this while importing Secrets Manager secrets:

** error state mv pi2/aws_cloudfront_distribution__<redacted>.tfstate
moved state aws_cloudfront_distribution.<redacted>
╷
│ Error: Missing newline after argument
│
│   on /Users/mohamed.arafa/git/aws2tf/generated/tf.<redacted>_us-west-2/aws_secretsmanager_secret_version__arn_aws_secretsmanager_us-west-2_<redacted>_secret_<redacted>Secrets<redacted>3__v-<redacted>.tf line 8, in resource "aws_secretsmanager_secret_version" "arn_aws_secretsmanager_us-west-2_<redacted>_secret_<redacted>Secrets<redacted>3__v-<redacted>":
│    8: secret_string = "{\"<redacted>Username\":\"<redacted>\", = \"<redacted>Pass\":\"<redacted>\"}" =
│
│ An argument definition must end with a newline.

The script finally quit on:

Terraform validate ...

Error: Missing newline after argument

bucket error causes failure

I got this error this morning:

run cross checker
State missing - aws_s3_bucket_versioning.<bucket_name> for aws_s3_bucket_versioning__<bucket_name>.tf not found - getting app-dev-serverlessdeploymentbucket-1qpneks1vfsz4
aws_s3_bucket_versioning <bucket_name>
listing in local state move
No <bucket_name> state files exiting ...
Found Error: ** Error: Zero state aws_s3_bucket_versioning <bucket_name> exiting.... exiting ....

The failure caused aws2tf to quit.

script error in 603-get-rds-db-parameter-group.sh

. ../../scripts/603-get-rds-db-parameter-group.sh
../../scripts/603-get-rds-db-parameter-group.sh: line 74: syntax error near unexpected token `fi'
../../scripts/603-get-rds-db-parameter-group.sh: line 74: `                fi'
Ignoring │ Error: Resource already managed by Terraform
Ignoring │ Error: Resource already managed by Terraform
--> Validate Fixer
Success! The configuration is valid.

Error: Not enough list items with aws_cloudtrail.s3events

Hi, I just pulled this repo and ran it on a sandbox account without making any changes to your code. I got a bunch of these errors on both line 14 and line 41:

Error: Not enough list items

│ with aws_cloudtrail.s3events,
│ on aws_cloudtrail__s3events.tf line 14, in resource "aws_cloudtrail" "s3events":
│ 14: advanced_event_selector {

│ Attribute advanced_event_selector.0.field_selector.0.starts_with requires 1 item minimum, but config has only 0 declared.

# File generated by aws2tf see https://github.com/aws-samples/aws2tf

aws_cloudtrail.s3events:

resource "aws_cloudtrail" "s3events" {
  enable_log_file_validation    = true
  enable_logging                = true
  include_global_service_events = true
  is_multi_region_trail         = true
  is_organization_trail         = false
  name                          = "s3events"
  s3_bucket_name                = aws_s3_bucket.b_aws-cloudtrail-logs-"awsaccountnumber"-54acd01e.bucket
  tags                          = {}
  tags_all                      = {}

  advanced_event_selector {
    field_selector {
      ends_with = []
      equals = [
        "AWS::S3::Object",
      ]
      field           = "resources.type"
      not_ends_with   = []
      not_equals      = []
      not_starts_with = []
      starts_with     = []
    }
    field_selector {
      ends_with = []
      equals = [
        "Data",
      ]
      field           = "eventCategory"
      not_ends_with   = []
      not_equals      = []
      not_starts_with = []
      starts_with     = []
    }
  }
  advanced_event_selector {
    name = "Management events selector"

    field_selector {
      ends_with = []
      equals = [
        "Management",
      ]
      field           = "eventCategory"
      not_ends_with   = []
      not_equals      = []
      not_starts_with = []
      starts_with     = []
    }
  }
}

The argument "target_prefix" is required, but no definition was found.

Hi Andy,
I found the issue mentioned below while executing:

./aws2tf.sh -t s3

../../scripts/fix-undec.sh: line 73: syntax error near unexpected token `[['
../../scripts/fix-undec.sh: line 73: `if [[ $tft == "aws_sagemaker_image" ]] || [[ $tft == "aws_lambda_function" ]] || [[ $tft == "aws_dynamodb_table" ]] || [[ $tft == "aws_sns_topic" ]] [[ $tft == "aws_iam_role" ]] || [[ $tft == "aws_codepipeline" ]]; then'

│ Error: Missing required argument

│ on aws_s3_bucket_xxxxxxx-us-west-2.tf line 2, in resource "aws_s3_bucket_logging" "b_aws-controltower-logs-xxxxxxxx-us-west-2":
│ 2: resource "aws_s3_bucket_logging" "b_aws-controltower-logs-xxxxxxxxx-us-west-2" {

│ The argument "target_prefix" is required, but no definition was found.

Error: Reference to undeclared resource

can't find ec2 after the run

I ran aws2tf with -v yes as the parameter.

I see this in the log:

imp.log:../../scripts/251-get-ec2-instances.sh: line 208: ../../scripts/056-get-instance-profile.sh: No such file or directory

Error: Unsupported Terraform Core version

terraform 1.4 was released this morning:

❯ terraform init

Initializing the backend...
╷
│ Error: Unsupported Terraform Core version
│
│   on aws.tf line 2, in terraform:
│    2:   required_version = "~> 1.3.0"
│
│ This configuration does not support Terraform version 1.4.0. To proceed, either choose another supported Terraform version or update this version
│ constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵

error log

Would it be possible to direct errors to an error log at the same time as to the terminal?

In an account with lots of resources, when leaving aws2tf to run for a while and then coming back, the errors get lost in the scrollback. Having an error log would help identify what is critical and what isn't.
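As an interim workaround (a sketch using standard redirection, not an aws2tf feature), stderr can be duplicated into a log file while it still reaches the terminal:

```shell
#!/usr/bin/env bash
# Sketch: run a command with stderr copied into errors.log.
# fd 3 temporarily holds the original stdout so that only stderr
# passes through tee; a real run would call ./aws2tf.sh instead of
# the demonstration sh -c below.
run_logged() {
    { "$@" 2>&1 1>&3 | tee -a errors.log >&2; } 3>&1
}

run_logged sh -c 'echo "Found Error: example" >&2'
```

After the run, errors.log holds only the error stream, so critical messages no longer get lost in the scrollback.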

-c yes continues even if .terraform directory is missing

If for some reason the .terraform directory is missing and the parameter -c yes was passed to aws2tf, it will continue running and fail to import, with this error repeated over and over:

ls: cannot access '../.terraform': No such file or directory
pi2 using initing TF provider
aws_iam_instance_profile ecsInstanceRole import
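A guard like the following (a sketch; the per-account directory layout is inferred from the log above) could detect the missing .terraform directory up front instead of failing on every import:

```shell
#!/usr/bin/env bash
# Sketch: check for the provider install before honouring -c yes.
# A true result signals the caller to run "terraform init" (or to
# warn and start fresh) rather than looping on import errors.
needs_init() {
    [ ! -d "$1/.terraform" ]
}
```

The caller could then do something like `needs_init "$dir" && terraform init` before any imports run.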

Error: Conflicting configuration arguments -> Stacksets

Hi! I understand that AWS StackSets support is still in progress. Is there any way to exclude all components related to AWS StackSets during the run?

I tried to run the command ./aws2tf.sh -t iam
and encountered this error:

│ Error: Conflicting configuration arguments
│
│   with aws_iam_role.stacksets-exec-1c62acbfe1b28a0ec07ae0d849260fb4,
│   on aws_iam_role__stacksets-exec-1c62acbfe1b28a0ec07ae0d849260fb4.tf line 26, in resource "aws_iam_role" "stacksets-exec-1c62acbfe1b28a0ec07ae0d849260fb4":
│   26: name = "stacksets-exec-1c62acbfe1b28a0ec07ae0d849260fb4"
│
│ "name": conflicts with name_prefix

Thanks so much!

Deprecation warnings with generated S3 configuration

Environment:

  • Terraform 1.1.7
  • Provider aws 4.11.0
  • Provider awscc 0.19.0

With the current version of the script I get errors with S3 buckets generated by the step

. ../../scripts/060-get-s3.sh

I think it might be related to the switch to version 4 of the aws provider you are doing but I didn't check further.

A lot of warnings like the following are generated:

Warning: Argument is deprecated

  on aws_s3_bucket__a-bucket.tf line 3, in resource "aws_s3_bucket" "a-bucket":
   3: resource "aws_s3_bucket" "a-bucket" {

Use the aws_s3_bucket_versioning resource instead
[...]
Use the aws_s3_bucket_acl resource instead
[...]
Use the aws_s3_bucket_server_side_encryption_configuration resource instead

Here is the generated file creating the warnings aws_s3_bucket__a-bucket.tf:

# File generated by aws2tf see https://github.com/aws-samples/aws2tf
# aws_s3_bucket.a-bucket:
resource "aws_s3_bucket" "a-bucket" {
  bucket = "a-bucket"
  lifecycle {
    ignore_changes = [acl, force_destroy]
  }
  object_lock_enabled = false
  request_payer       = "BucketOwner"
  tags                = {}
  tags_all            = {}

  grant {
    id = "arandomandlongid"
    permissions = [
      "FULL_CONTROL",
    ]
    type = "CanonicalUser"
  }

  server_side_encryption_configuration {
    rule {
      bucket_key_enabled = false

      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  versioning {
    enabled    = false
    mfa_delete = false
  }
  force_destroy = false
}

loads of errors when permission to view org is missing

There are loads of errors when I do not have permissions to view the org info:

225-get-cvpn-endpoints.sh runtime 6 seconds
-------------------------------------------------------------------
. ../../scripts/227-get-vpn-connections.sh
Found Error: │ Error: listing AWS Organization (<redacted>) accounts: AccessDeniedException: You don't have permissions to access this resource. .... (pass for now)
import.log adjust
--> Validate Fixer
Success! The configuration is valid.

Syntax error with zsh

When running ./aws2tf.sh on zsh I get a syntax error:

./aws2tf.sh: 82: Syntax error: "(" unexpected

The context of the line 82 is:

79 shift $((OPTIND-1))
80
81 trap ctrl_c INT
82
83 function ctrl_c() {
84         echo "Requested to stop."
85         exit 1
86 }

I'm not sure what's wrong but setting a proper shebang solves the issue:

#!/usr/bin/env bash

Do you think it's a good enough fix? If so I can create a PR

aws_cloudwatch_metric_alarm is not hcl

The generated aws_cloudwatch_metric_alarm file is not HCL. Instead I got this, and it is breaking all other resources:

# File generated by aws2tf see https://github.com/aws-samples/aws2tf
Usage: terraform [global options] state show [options] ADDRESS

  Shows the attributes of a resource in the Terraform state.

  This command shows the attributes of a single resource in the Terraform
  state. The address argument must be used to specify a single resource.
  You can view the list of available resources with "terraform state list".

Options:

  -state=statefile    Path to a Terraform state file to use to look
                      up Terraform-managed resources. By default it will
                      use the state "terraform.tfstate" if it exists.

expired tokens corrupt things

If the AWS tokens expire while the import is running, aws2tf will continue to run, but the link between the state and the .tf files gets messed up.
Attempting to clean up the .tf files as well as the state will not help.

│ Error: Missing required argument with s3

Hi, I got the issue mentioned below while executing the command:

./aws2tf.sh -t s3 -r us-west-2 -p default

aws_kms_key.k_3b1690bf-xxxxxx-45c3-a056-xxxxxxxxx: Importing from ID "3b1690bf-xxxxx-xxxxxx-xxxx-7a715exxxxxxx"...
aws_kms_alias alias/uat-xxxxxx-flow-logs
aws_kms_alias.alias_uat-xxxxxxxxx-flow-logs: Importing from ID "alias/uat-xxxxxxx-flow-logs"...
aws_kms_key xxxxxxx0bf-xxxx0-45c3-a056-xxxxxxxxx
aws_kms_key__k_xxxxxxxxxx0-45c3-a056-xxxxxxx.tf exists already skipping ...

Consolidated state aws_s3_bucket_versioning.b_uat-xxxxx-flow-logs
Consolidated state aws_s3_bucket_versioning.b_uat3-alb-logs
run cross checker
import.log adjust
--> Validate Fixer
** Undeclared Fix: aws_kms_key.k_uat-xxxxxx-flow-logs.arn --

│ Error: Missing required argument

│ on aws_s3_bucket_lifecycle_configuration__b_uat-xxxxx-flow-logs.tf line 5, in resource "aws_s3_bucket_lifecycle_configuration" "b_uat-xxxxxxx-flow-logs":
│ 5: rule {

│ The argument "id" is required, but no definition was found.

Terraform Refresh ...
Terraform Plan ...

Error: Missing required argument

on aws_s3_bucket_lifecycle_configuration__b_uat-xxxxxx-flow-logs.tf line 5, in resource "aws_s3_bucket_lifecycle_configuration" "b_uat-xxxxxxx-flow-logs":
5: rule {

The argument "id" is required, but no definition was found.

-t iam has errors

I ran aws2tf.sh -t iam on a region and account and got loads of repeated errors that look like:

Error: Missing attribute separator

  on aws_iam_role_policy__<redacted name><redacted name>.tf line 32, in resource "aws_iam_role_policy" "<redacted name><redacted name>":
  29:                 {
  30:                     Action   = "s3:ListBucket"
  31:                     Effect   = "Allow"
  32: Resource = aws_s3_bucket.b_arn:aws:s3:::<redacted>.arn

Expected a newline or comma to mark the beginning of the next attribute.

attached role policies AWSReservedSSO_<redacted>
aws_iam_role_policy_attachment AWSReservedSSO_<redacted><redacted> import
╷
│ Error: Missing attribute separator
│
│   on aws_iam_role_policy__<redacted name><redacted name>.tf line 32, in resource "aws_iam_role_policy" "<redacted name><redacted name>":
│   29:                 {
│   30:                     Action   = "s3:ListBucket"
│   31:                     Effect   = "Allow"
│   32: Resource = aws_s3_bucket.b_arn:aws:s3:::<redacted>.arn
│
│ Expected a newline or comma to mark the beginning of the next attribute.

Looking at the .tf file, I see the Resource line is not indented either.

can we exclude resources?

Can we exclude resources? E.g., I am running aws2tf against an account that belongs to an organisation and I do not have permission, so every time I run the script it gives me an error in that section.

Excluding organizations would remove this error.

Error: Reference to undeclared resource (cloudtrail & routetable)

Encountered new errors on a CloudWatch log group and an AWS route table when executing "./aws2tf.sh -v yes":

│ Error: Reference to undeclared resource
│ 
│   on aws_cloudtrail__aws-controltower-BaselineCloudTrail.tf line 4, in resource "aws_cloudtrail" "aws-controltower-BaselineCloudTrail":
│    4: cloud_watch_logs_group_arn = aws_cloudwatch_log_group.aws-controltower_CloudTrailLogs.arn
│ 
│ A managed resource "aws_cloudwatch_log_group" "aws-controltower_CloudTrailLogs" has not been declared in the root module.
╵
╷
│ Error: Reference to undeclared resource
│ 
│   on aws_route_table__rtb-0fcb1c52f40ddaae7.tf line 16, in resource "aws_route_table" "rtb-0fcb1c52f40ddaae7":
│   16: nat_gateway_id = aws_nat_gateway.nat-039de67cc3ebb5728.id
│ 
│ A managed resource "aws_nat_gateway" "nat-039de67cc3ebb5728" has not been declared in the root module.

Cut a release?

Cutting a release would help surfacing the resource type support. And it would also help with including it into the package managers. Let me know if that makes sense, thanks!

Error Missing newline after argument -> (Lambda)

Hi! I encountered lots of Lambda "Error: Missing newline after argument" issues executing "./aws2tf.sh -v yes":

Error: Missing newline after argument

on /Users/kj/Desktop/aws2tf-master/generated/tf.xxxxxxxxxxxx_ap-southeast-1/aws_lambda_function__Delete-Expired-Tagged-Snapshot.tf line 7, in resource "aws_lambda_function" "Delete-Expired-Tagged-Snapshot":
7: description = "This function looks at all snapshots that have a "DeleteOn" tag containing the current day formatted as YYYY-MM-DD. This function should be run at least daily."

An argument definition must end with a newline.
Terraform validate ...

Error: Missing newline after argument

on aws_lambda_function__Delete-Expired-Tagged-Snapshot.tf line 7, in resource "aws_lambda_function" "Delete-Expired-Tagged-Snapshot":
7: description = "This function looks at all snapshots that have a "DeleteOn" tag containing the current day formatted as YYYY-MM-DD. This function should be run at least daily."

An argument definition must end with a newline.

docs: terraform state is a local file

My initial impression from reading the README was that the state file referred to an already existing state file; in my case I have it in S3.

When actually running it, I discovered:

  • I had to run aws2tf in my local cloned copy
  • it created a directory for everything, including the local state file (this means it won't corrupt an already existing state file elsewhere)
  • it won't complement the already existing .tf files I wrote for my resources, but starts from scratch

aws_db_subnet_group : reference to undeclared resource

I am getting quite a few of these:

Error: Reference to undeclared resource

  on aws_db_subnet_group__default-vpc-<redacted>.tf line 10, in resource "aws_db_subnet_group" "default-vpc-<redacted>":
  10:     aws_subnet.subnet-<redacted>.id,

A managed resource "aws_subnet" "subnet-<redacted>" has not been declared in the root module.

Pulling in data-aws.tf from outside of source tree generates error

The cp command pulls in the file data-aws.tf from the ../../stubs/ path outside of the source tree. I suggest adding a check that the file exists before performing the copy, to avoid a blind execution error for most users, and/or generating or documenting this file if it is needed/helpful, pulling it in from somewhere safe for your use.

Existing aws2tf:
cp ../../stubs/data-aws.tf .

Example change for line above to:
DATFFILE=../../stubs/data-aws.tf
if test -f "$DATFFILE"; then
    cp "$DATFFILE" .
fi

I cannot offer suggestions on what should be in this file or how it could be communicated in the docs because I can only guess its purpose.

Error: Invalid escape sequence

Hi Andy, executing the latest code with "./aws2tf.sh -v yes"

│ Error: Invalid escape sequence

│ on /Users/kj/Desktop/aws2tf-master/generated/tf.xxxxxxxxxxxx_ap-southeast-1/aws_cloudwatch_event_target__default__SSMOpsItems-Autoscaling-instance-launch-failure__SSMOpsItems-Autoscaling-instance-launch-failureTarget.tf line 22, in resource "aws_cloudwatch_event_target" "default__SSMOpsItems-Autoscaling-instance-launch-failure__SSMOpsItems-Autoscaling-instance-launch-failureTarget":
│ 22: input_template = "={ title: Auto Scaling EC2 instance launch failed, description: CloudWatch Event Rule SSMOpsItems-Autoscaling-instance-launch-failure was triggered. Your EC2 instance launch has failed. See below for more details., source: EC2 Auto Scaling, category: Availability, severity:2, resources: , operationalData: { /aws/dedup: {type: SearchableString, value: {\dedupString:\SSMOpsItems-Autoscaling-instance-launch-failure}}, /aws/automations: { value: [ { \automationType: \AWS:SSM:Automation, \automationId: \AWS-ASGEnterStandby\ }, { \automationType: \AWS:SSM:Automation, \automationId: \AWS-ASGExitStandby\ } ] }, autoscaling-group-name: {value: }, status-message: {value: }, start-time: {value: }, end-time: {value: }, cause: {value: }} }"

│ The symbol ":" is not a valid escape sequence selector.

The symbol "S" is not a valid escape sequence selector.

The symbol "}" is not a valid escape sequence selector.

The symbol ":" is not a valid escape sequence selector.

The symbol "A" is not a valid escape sequence selector.

The symbol "," is not a valid escape sequence selector.

The symbol ":" is not a valid escape sequence selector.

The symbol "A" is not a valid escape sequence selector.

The symbol "a" is not a valid escape sequence selector.

The symbol ":" is not a valid escape sequence selector.

The symbol "A" is not a valid escape sequence selector.

The symbol "d" is not a valid escape sequence selector.

The symbol ":" is not a valid escape sequence selector.

The symbol "S" is not a valid escape sequence selector.

terraform init triggers repeated download of AWS provider

In the base directory and in a variety of loops, the Terraform AWS provider is downloaded over and over.
On slow links this may significantly slow the session(s), and because the download is cleared each time and does not persist across runs, you download many more times than required, especially given that the version of the AWS provider is pinned.

It is also difficult from the current console output to understand why the AWS provider is downloaded over and over.

pull request coming
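Until that lands, Terraform's own plugin cache is a possible stopgap: with TF_PLUGIN_CACHE_DIR set, every terraform init after the first reuses the provider already on disk.

```shell
#!/usr/bin/env bash
# Stopgap sketch: point Terraform at a persistent plugin cache so
# repeated init calls in the per-resource loops stop re-downloading
# the pinned aws provider.
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
mkdir -p "$TF_PLUGIN_CACHE_DIR"
# ./aws2tf.sh ... would then run with the cache in effect.
```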

--no-verify-ssl option

$ sh aws2tf.sh --no-verify-ssl
Usage: aws2tf.sh [-p ] [-c] [-v] [-r ] [-t ] [-h] [-d] [-s]

--no-verify-ssl is not a valid option

Inaccuracy in the README

Hi 👋

The README of this repo says:

To include AWS account Policies and Roles:

./aws2tf.sh -p yes

But the -h help says otherwise:

-p specify the AWS profile to use (Default="default")

I'm not sure what the original intent of the README was but it looks like it's outdated.

Also it looks like -d yes allow to dump secrets to .tf files but the help could be a bit more explicit about that:

-d <yes|no|st> (default=no)   Debug - lots of output if yes

sts output in aws2tf.sh assumes JSON

This command gets the AWS account information.

My default output format for the CLI is text. This causes it to crash:

aws sts get-caller-identity | jq .Account | tr -d '"'

I changed this line in the script to:

aws sts get-caller-identity --output json | jq .Account | tr -d '"'

The error went away.

s3 issue

I ran with -t s3 and got this error:


│ Error: expected length of bucket to be in the range (1 - 63), got
│
│   with aws_s3_bucket_versioning.dev-apse2-release-versioning,
│   on aws_s3_bucket_versioning__dev-apse2-release-versioning.tf line 3, in resource "aws_s3_bucket_versioning" "dev-apse2-release-versioning":
│    3: bucket=""
│

the generated file looks like this

# aws_s3_bucket_versioning.dev-apse2-release-versioning:
resource "aws_s3_bucket_versioning" "dev-apse2-release-versioning" {
  bucket = ""

  versioning_configuration {
    status = "Enabled"
  }
}

aws_ecs_cluster: Warning: Argument is deprecated

Terraform validate ...

Warning: Argument is deprecated

  with aws_ecs_cluster.test,
  on aws_ecs_cluster__test.tf line 4, in resource "aws_ecs_cluster" "test":
   4:   capacity_providers = []

Use the aws_ecs_cluster_capacity_providers resource instead
Success! The configuration is valid, but there were some validation warnings as shown above.

continue with `-c yes` even when the .terraform directory is missing

When using -c yes on a brand-new account or region, I get this message:

There doesn't appear to be a previous run for aws2tf
missing .terraform directory in 197710756924
exiting ....

My ask: instead of exiting, merely print a non-fatal warning and continue.

Why? When working on regions with a large number of resources, the script may need to be run multiple times, and the previous command in the shell history may actually be the original one without -c yes, meaning we unknowingly start over from scratch.

template errors

Getting a lot of template errors:

aws_iam_role AWSServiceRoleForSupport

│ Error: Extra characters after interpolation expression

│ on aws_iam_policy__AWSSSOServiceRolePolicy.tf line 21, in resource "aws_iam_policy" "AWSSSOServiceRolePolicy":
│ 21: "aws:PrincipalOrgMasterAccountId"= "${aws:PrincipalAccount}"

│ Template interpolation doesn't expect a colon at this location. Did you
│ intend this to be a literal sequence to be processed as part of another
│ language? If so, you can escape it by starting with "$${" instead of just
│ "${".

I think it has something to do with the post-Terraform-0.11 change from template_file to templatefile. I suspect the older version of the Terraform AWS provider is calling template_file, and so we're getting these suggestions to escape ${aws: to $${aws:, which would probably not do what we want?
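If the escaping Terraform suggests is actually what's wanted here (policy variables like ${aws:PrincipalAccount} are IAM policy syntax, not HCL interpolation), a sed pass in the style the aws2tf scripts already use could apply it. This is only a sketch: it is naive and would double-escape lines that are already correct.

```shell
#!/usr/bin/env bash
# Sketch: rewrite literal ${aws: policy variables as $${aws: so HCL
# treats them as plain text instead of interpolation. Naive - it does
# not skip occurrences that are already escaped.
escape_policy_vars() {
    sed -e 's/\${aws:/$${aws:/g' "$1"
}
```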
