slalom / dataops-infra

Slalom Infrastructure Catalog for DataOps deployments

Home Page: https://infra.dataops.tk

License: MIT License

HCL 81.87% Shell 2.80% Batchfile 1.67% Python 9.61% Dockerfile 0.05% Gherkin 0.83% HTML 0.51% Ruby 2.65%

dataops-infra's Introduction

layout: default
title: Introduction
nav_order: 1
has_children: false

Slalom DataOps Infrastructure Catalog

Catalog Documentation

Catalog documentation is available from the Infrastructure Catalog Index.

Getting Started

For a quick tutorial, check the Getting Started Guides.

Contributor's Guide

For information on how to request enhancements, submit bug reports, or contribute code, please see the Contributing Guide.

User Guide

Workstation Setup

  1. Follow the steps in Windows Development QuickStart or Mac Development QuickStart, which will automatically install all of the following required tools: Terraform, Docker, VS Code, Python 3, and Git.
  2. Clone this repo to your local machine.

Deploying from the Infrastructure Catalog

The instructions below use tableau-on-aws as an example. The same steps can be repeated for any Infrastructure Catalog item within the 'samples' folder.

cd samples/tableau-on-aws
terraform init
terraform apply

Related Code Repos

Troubleshooting

For troubleshooting help, please see Troubleshooting.

About Slalom

Slalom is a modern consulting firm focused on strategy, technology, and business transformation. If you'd like to learn more about how we can help accelerate your business transformation and deliver best-in-class DataOps solutions, please visit us on Slalom.com, reach out to your local Slalom representative, or ping any one of our many Slalom open source software contributors here on GitHub.

dataops-infra's People

Contributors

aaronsteers, bhushanchirmade, dependabot[bot], jacksandom, josh-fell, mithra-ramesh, xchenp


dataops-infra's Issues

Bug: SSH Key Creation fails on first run of prep script

Current Behavior:

The output prediction changes before and after the SSH keys are created, which causes a failure on the first execution while the second run succeeds.

Expected Behavior:

Key creation should succeed on the first execution.

Lab Guide #2 - Deploy an Extract-Load pipeline using Singer Taps on ECS

Configure Extracts (4:00-6:00, approx. 2m):

  • Copy Pardot credentials file (tap-pardot-config.json) to the data/taps/.secrets folder (15s)
  • Open data/taps/data.select, delete Salesforce refs, update rules to include all columns on Pardot accounts and opportunities (45s)
  • Install dataops tools: pip3 install slalom.dataops (15s)
  • Run s-tap plan pardot to update Pardot extract plan (15s)
  • Review the updated plan file (30s)

Run a Sync Test (10:30-14:30, approx. 4m):

  • Switch to git tab, browse through all changes (30s)
  • Copy-paste and run the provided AWS User Switch command so aws-cli can locate our credentials (15s)
  • Copy-paste and run the provided Sync command to execute the Pardot sync in ECS (60s)
  • Click on the provided Logging URL link to open Cloudwatch logs in a browser (15s)
  • Wait for the ECS task to start (1-2m)
  • Stop the timer once rows are extracted (DONE!)

Lab Guide - Creating a Data Lake with Terraform

Lab Objectives:

  • Create a new repository from an existing sample template repo.
  • Clone and open the new repository on your local workstation.
  • Customize the infrastructure and add credentials as needed.
  • Use terraform apply to deploy the data lake, the VPC, and the public/private subnets

Setup:

  • one-time setup:

    • Installed software:
      • choco install vscode python3 docker awscli github-desktop
      • choco install git.install --params "/GitOnlyOnPath /SChannel /NoAutoCrlf /WindowsTerminal"
  • environment setup (each time):

Lab Steps

Create Repo and AWS Account:

  • Create new repo from the Slalom DataOps Template, clone repo locally and open in VS Code (60s)
  • Get AWS credentials from Linux Academy (30s)
  • Use the linux-academy link to log in to AWS in the web browser (30s)

Configure Creds:

  • In the .secrets folder, rename credentials.template to .secrets/credentials, copy-paste credentials into file (30s)
  • In the .secrets folder, rename aws-secrets-manager-secrets.yml.template to aws-secrets-manager-secrets.yml (no addl. secrets needed in this exercise) (30s)

Configure Project:

  • Rename infra-config-template.yml to infra-config.yml - update email address and project shortname (30s)

Configure and Deploy Terraform:

  • Open the infra folder, review each file (90s)
    • Delete the data-build-tool.tf file and the singer-taps.tf file.
  • Run terraform init and terraform apply, type 'yes' (30s)
  • Wait for terraform apply to complete (2m)
    • Switch to the git tab, review code changes while apply is running

Confirm resource creation:

  • Copy-paste and run the provided AWS User Switch command so aws-cli can locate our AWS credentials (30s)
  • Upload infra-config.yml to the data bucket: aws s3 cp ../infra-config.yml s3://... (30s)
  • List the bucket contents with aws s3 ls s3://... (30s)
  • In the web browser, browse to the bucket and confirm the file has landed. (30s)
  • Stop the timer once the transfer is successfully confirmed. (DONE!)

ECS-task Feature Request: Auto-setup IAM for read/write on a data bucket

Since most of our use cases for ECS tasks require read/write access to one or more buckets, it would greatly streamline the design work if IAM permissions could be granted automatically to one or more optionally provided S3 buckets. When provided, the task role would be granted access to the bucket(s) without this having to be managed externally from the module.
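A minimal sketch of what this auto-setup could look like inside the ecs-task module. The variable name s3_data_buckets and the task role reference aws_iam_role.ecs_task_role are illustrative assumptions, not the module's actual API:

```hcl
# Hypothetical sketch only; input and resource names are assumptions.
variable "s3_data_buckets" {
  description = "Optional list of S3 bucket names the task should be able to read and write."
  type        = list(string)
  default     = []
}

resource "aws_iam_role_policy" "s3_data_access" {
  # Only create the policy when at least one bucket is provided.
  count = length(var.s3_data_buckets) > 0 ? 1 : 0
  name  = "ecs-task-s3-data-access"
  role  = aws_iam_role.ecs_task_role.id # assumes the module's existing task role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
      # Grant access to each bucket and its objects.
      Resource = flatten([
        for b in var.s3_data_buckets : [
          "arn:aws:s3:::${b}",
          "arn:aws:s3:::${b}/*",
        ]
      ])
    }]
  })
}
```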

Feature Request: New `ecs-python` module to mirror `lambda-python`

"As a data developer and Infra Catalog user, I want to be able to deploy Python code to ECS as easily as I can using the lambda-python module."

Background:

The decision to use ECS vs. Lambda is often hard to make, and as limitations of one environment or the other are uncovered, reversing that decision can involve a lot of wasted effort.

While we've fully automated the deployment of Python Lambda functions, it is comparatively much harder to create a Docker image that runs an arbitrary Python script. From an implementation perspective, however, there's no reason ECS cannot support the same modular config approach we already have for Python Lambda functions.
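As a sketch only, a hypothetical ecs-python module could be invoked in the same style as the existing Lambda catalog items. The module path and every input name below are assumptions about a module that does not yet exist:

```hcl
# Illustrative usage sketch for the proposed module; not a real API.
module "python_job" {
  source      = "../../components/aws/ecs-python" # proposed path (hypothetical)
  name_prefix = var.name_prefix
  environment = module.env.environment

  # Script to containerize and run as an ECS task (hypothetical input).
  python_script       = "${path.module}/jobs/my_job.py"
  container_num_cores = 1
  container_ram_gb    = 4
}
```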

SpeedRun #1: Simple Extract-Load Pipeline

As a training tool, as a test for ease-of-use, and as proof of value, we're creating a "speed run" video that demonstrates how to get up and running quickly with the Infrastructure Catalog and a basic DataOps pipeline. This will uncover usability issues and bugs which we'll need to resolve before we can promote the platform broadly.


Stop Point:

  • all infrastructure is deployed, including secrets upload to AWS SecretsManager
  • data Extraction backfill is in progress
  • daily extracts are scheduled
  • basic logging is configured via cloudwatch

Start Point:

  • one-time setup:
    • Installed software:

      • choco install vscode python3 docker awscli github-desktop
      • choco install git.install --params "/GitOnlyOnPath /SChannel /NoAutoCrlf /WindowsTerminal"
    • Access to LinuxAcademy, which will be used to create a new 4-hour limited AWS account

  • environment setup (each time):
    • Hourglass app open with timer title: SpeedRun Goal: 12 Minutes
    • Open browser tabs:

Speed Target: 12 minutes


Other Details:

  • Must stay on each screen for at least 3 seconds
  • Scale browser windows and VS Code to: 150%
  • Pardot and AWS Creds will be blurred in 'post' using Camtasia

Blockers:

  • None!

Steps:

Create Repo and AWS Account (0:00-2:00, approx. 2m):

  • Create new repo from Slalom DataOps template, clone repo locally and open in VS Code (60s)
  • Get AWS credentials from Linux Academy (target: 30s)
  • Use the linux-academy link to log in to AWS in the web browser (30s)

Configure Creds (2:00-3:30, approx. 1.5m):

  • Rename .secrets/credentials.template to .secrets/credentials, copy-paste credentials into file (30s)
  • Rename aws-secrets-manager-secrets.yml.template to aws-secrets-manager-secrets.yml, copy-paste Pardot credentials into new file (30s)
  • Rename .secrets/tap-sample-config.json.template to tap-pardot-config.json, copy-paste Pardot credentials into file (30s)

Configure Project (3:30-4:00, approx 0.5m):

  • Rename infra-config-template.yml to infra-config.yml - update email address and project name: SpeedRun003-n (30s)

Configure Extracts (4:00-6:00, approx. 2m):

  • Copy Pardot credentials file (tap-pardot-config.json) to the data/taps/.secrets folder (15s)
  • Open data/taps/data.select, delete Salesforce refs, update rules to include all columns on Pardot accounts and opportunities (45s)
  • Install dataops tools: pip3 install slalom.dataops (15s)
  • Run s-tap plan pardot to update Pardot extract plan (15s)
  • Review the updated plan file (30s)

Configure and Deploy Terraform (6:00-10:30, approx. 4.5m):

  • Open the infra folder, review and update each file (90s)
  • Run terraform init and terraform apply, type 'yes' (60s)
  • Wait for terraform apply to complete (2m)

Run a Sync Test (10:30-14:30, approx. 4m):

  • Switch to git tab, browse through all changes (30s)
  • Copy-paste and run the provided AWS User Switch command so aws-cli can locate our credentials (15s)
  • Copy-paste and run the provided Sync command to execute the Pardot sync in ECS (60s)
  • Click on the provided Logging URL link to open Cloudwatch logs in a browser (15s)
  • Wait for the ECS task to start (1-2m)
  • Stop the timer once rows are extracted (DONE!)

SpeedRun 1A: S3 Data Lake

As a training tool, as a test for ease-of-use, and as proof of value, we're creating a "speed run" video that demonstrates how to get up and running quickly with the Infrastructure Catalog and a basic DataOps pipeline. This will uncover usability issues and bugs which we'll need to resolve before we can promote the platform broadly.


Stop Point:

  • all infrastructure is deployed, including the data lake, the VPC, and the public/private subnets

Start Point:

  • one-time setup:
    • Installed software:

      • choco install vscode python3 docker awscli github-desktop
      • choco install git.install --params "/GitOnlyOnPath /SChannel /NoAutoCrlf /WindowsTerminal"
    • Access to LinuxAcademy, which will be used to create a new 4-hour limited AWS account

  • environment setup (each time):

Speed Target: 10 minutes


Other Details:

  • Must stay on each screen for at least 3 seconds
  • Scale browser windows and VS Code to: ~125%

Blockers:

  • None!

Steps:

Create Repo and AWS Account (0:00-2:00, approx. 2m):

  • Create new repo from the Slalom DataOps Template, clone repo locally and open in VS Code (60s)
  • Get AWS credentials from Linux Academy (30s)
  • Use the linux-academy link to log in to AWS in the web browser (30s)

Configure Creds (2:00-3:00, approx. 1m):

  • In the .secrets folder, rename credentials.template to .secrets/credentials, copy-paste credentials into file (30s)
  • In the .secrets folder, rename aws-secrets-manager-secrets.yml.template to aws-secrets-manager-secrets.yml (no addl. secrets needed in this exercise) (30s)

Configure Project (3:00-3:30, approx 0.5m):

  • Rename infra-config-template.yml to infra-config.yml - update email address and project shortname (30s)

Configure and Deploy Terraform (3:30-7:30, approx. 4m):

  • Open the infra folder, review and update each file (90s)
  • Run terraform init and terraform apply, type 'yes' (30s)
  • Wait for terraform apply to complete (2m)
    • Switch to the git tab, review code changes while apply is running

Confirm resource creation (7:30-9:30, approx. 2m):

  • Copy-paste and run the provided AWS User Switch command so aws-cli can locate our AWS credentials (30s)
  • Upload infra-config.yml to the data bucket: aws s3 cp infra-config.yml s3://... (30s)
  • List the bucket contents with aws s3 ls s3://... (30s)
  • In the web browser, browse to the bucket and confirm the file has landed. (30s)
  • Stop the timer once the transfer is successfully confirmed. (DONE!)

Bug: issues when num_instances > 1 for Terraform catalog item and EC2 module

Current Behavior:

  • Connection info is only printed for the first Linux and first Windows instance. Additional instances will not have their connection info printed for the user.

Expected Behavior:

  • All relevant connection info should be printed for all instances, especially: IP addresses, RDP/SSH remote connection commands, and Windows passwords and/or SSH key paths.
  • For tableau instances, this should ideally also print the URLs for remote connections.
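The expected behavior could be sketched with splat-style outputs over the full instance list. The resource name aws_instance.linux and the key-path variable are assumptions about the module's internals:

```hcl
# Sketch of the fix: emit connection info for every instance, not just the first.
output "instance_public_ips" {
  description = "Public IPs for all Linux EC2 instances created by this module."
  value       = aws_instance.linux[*].public_ip
}

output "ssh_commands" {
  description = "SSH remote connection command for each Linux instance."
  value = [
    for ip in aws_instance.linux[*].public_ip :
    "ssh -i ${var.ssh_private_key_filepath} ec2-user@${ip}"
  ]
}
```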

Speed Run #2

TODO for V2:

  • email-based failure alarms are configured (not ready yet in Infra Catalog)
  • opening repo directly in a container

For more info, see Speed Run #1 (issue #55)

Feature Request: Azure 'Environment'

Provide 2 auth options:

  1. Inherited user auth (run az login as yourself)
  2. "Application Service Principal" - client_id/subscription_id/tenant_id

Comprised of:

  • Resource Group
    • (Has a location)
  • VNET(s)

Maybe:

  • Subnets?

Add proper terraform test for Infra Catalog (via terratest)

Currently the Infrastructure Catalog doesn't have robust testing. Terraform libraries are validated by CI/CD using terraform fmt and terraform validate, but no automated integration or end-to-end tests are in place yet. The terratest tool appears capable of automating these tests for us, but it's a new tool we'd have to learn and implement.

Option A: Terratest

https://github.com/gruntwork-io/terratest
https://blog.gruntwork.io/open-sourcing-terratest-a-swiss-army-knife-for-testing-infrastructure-code-5d883336fcd5

Good first components might be tableau-server-on-aws or data-lake-on-aws.

Option B: Terraform-compliance

https://github.com/eerkunt/terraform-compliance
https://medium.com/the-reading-room/behaviour-driven-development-a-better-agile-778d2d2a7ab5

Bug: Redshift module by default has too-restrictive security group

By default, the AWS security group for the Redshift catalog module only allows access from the managed security group. To allow external connections, we had to add a manual security group rule permitting incoming traffic on the Redshift port from 0.0.0.0/0 (the 'app' CIDR).
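The manual workaround described above could look roughly like the following. The module output name security_group_id is an assumption about the Redshift module's interface:

```hcl
# Illustrative sketch of the manual rule; attribute sources are assumptions.
resource "aws_security_group_rule" "redshift_external_ingress" {
  type              = "ingress"
  from_port         = 5439 # default Redshift JDBC port
  to_port           = 5439
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"] # the 'app' CIDR; open to all, use with care
  security_group_id = module.redshift.security_group_id # assumed output name
}
```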

Improved documentation: how to override the AWS environment variable

In a recent demo, a user asked how to leverage an existing client VPC and existing subnets. The sample code below would have been useful had it been included in the documentation.

It is important to note that this:

 module ___ ____ {
   ...
   environment = module.env.environment
   ...
 }
...can be rewritten as:

 module ___ ____ {
   ...
   environment = {
     vpc_id          = module.env.environment.vpc_id
     aws_region      = module.env.environment.aws_region
     public_subnets  = module.env.environment.public_subnets
     private_subnets = module.env.environment.private_subnets
   }
   ...
 }

... and that each variable can then be overridden individually or fed in from another source.
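For example, a single field could be overridden while the rest are inherited from the environment module. The data source lookup and the module names below are illustrative assumptions:

```hcl
# Sketch: reuse an existing client VPC while inheriting everything else.
data "aws_vpc" "client" {
  tags = { Name = "client-vpc" } # hypothetical tag on the existing VPC
}

module "data_lake" {
  source      = "../../catalog/aws/data-lake"
  name_prefix = var.name_prefix
  environment = {
    vpc_id          = data.aws_vpc.client.id # overridden from another source
    aws_region      = module.env.environment.aws_region
    public_subnets  = module.env.environment.public_subnets
    private_subnets = module.env.environment.private_subnets
  }
}
```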

Feature Request: Deterministic Logging URL for ECS jobs

Presently, ECS tasks are logged to CloudWatch with a log stream ID that contains the task ID, a unique GUID not created until the point of execution. This affects how the AWS CLI can be called and which URL to use for monitoring CloudWatch logs via the console.

With the launch of FireLens, there's now an option to parameterize the logging stream ID, which could be used to force a deterministic stream ID and thereby a deterministic CLI command for watching logs and a deterministic URL for viewing them online in CloudWatch.

Related: aws/containers-roadmap#74 (comment)

One-time cleanup: Missing terraform variables and documentation

Thanks to the new s-infra check_tf_metadata command, we're now able to detect deviations from standard best practices, including missing module docstrings, missing descriptions on variables, and missing standard input or output variables.

In combination with the new auto-documentation feature, these cleanup steps will drastically improve the quality of Infra Catalog documentation.

Once these are resolved, the next step would be to implement the check below in the CI/CD framework so that new contributors have clear guidance on how well new modules meet code standards.

Running this command:

C:\Files\Source\dataops-infra>python C:\Files\Source\dataops-tools\slalom\dataops\infra.py check_tf_metadata . --recursive

Generates this output:


  1. Blank module headers:

    • ./components/aws/ecs-task/main.tf
  2. Missing required input variables:

    • ./catalog/aws/environment/variables.tf:var.environment
    • ./components/aws/ecr/variables.tf:var.environment
    • ./components/aws/ecr/variables.tf:var.name_prefix
    • ./components/aws/secrets-manager/variables.tf:var.environment
    • ./components/aws/step-functions/variables.tf:var.environment
    • ./components/aws/step-functions/variables.tf:var.name_prefix
    • ./components/aws/vpc/variables.tf:var.environment
  3. Missing required output variables:

    • ./components/aws/ec2/outputs.tf:output.summary
    • ./components/aws/ecr/outputs.tf:output.summary
    • ./components/aws/ecs-cluster/outputs.tf:output.summary
    • ./components/aws/ecs-task/outputs.tf:output.summary
    • ./components/aws/lambda-python/outputs.tf:output.summary
    • ./components/aws/vpc/outputs.tf:output.summary
  4. Missing input variable descriptions:

    • ./catalog/aws/airflow/variables.tf:var.container_command
    • ./catalog/aws/airflow/variables.tf:var.container_image
    • ./catalog/aws/airflow/variables.tf:var.container_num_cores
    • ./catalog/aws/airflow/variables.tf:var.container_ram_gb
    • ./catalog/aws/airflow/variables.tf:var.environment_secrets
    • ./catalog/aws/airflow/variables.tf:var.environment_vars
    • ./catalog/aws/dbt/variables.tf:var.admin_cidr
    • ./catalog/aws/dbt/variables.tf:var.container_entrypoint
    • ./catalog/aws/dbt/variables.tf:var.container_image
    • ./catalog/aws/dbt/variables.tf:var.container_num_cores
    • ./catalog/aws/dbt/variables.tf:var.container_ram_gb
    • ./catalog/aws/dbt/variables.tf:var.dbt_project_git_repo
    • ./catalog/aws/dbt/variables.tf:var.dbt_run_command
    • ./catalog/aws/dbt/variables.tf:var.scheduled_timezone
    • ./catalog/aws/environment/variables.tf:var.aws_region
    • ./catalog/aws/environment/variables.tf:var.secrets_folder
    • ./catalog/aws/redshift/variables.tf:var.elastic_ip
    • ./catalog/aws/redshift/variables.tf:var.jdbc_port
    • ./catalog/aws/redshift/variables.tf:var.kms_key_id
    • ./catalog/aws/redshift/variables.tf:var.num_nodes
    • ./catalog/aws/redshift/variables.tf:var.s3_logging_bucket
    • ./catalog/aws/redshift/variables.tf:var.s3_logging_path
    • ./catalog/aws/redshift/variables.tf:var.skip_final_snapshot
    • ./catalog/aws/singer-taps/variables.tf:var.container_command
    • ./catalog/aws/singer-taps/variables.tf:var.container_entrypoint
    • ./catalog/aws/singer-taps/variables.tf:var.container_image
    • ./catalog/aws/singer-taps/variables.tf:var.container_num_cores
    • ./catalog/aws/singer-taps/variables.tf:var.container_ram_gb
    • ./catalog/aws/singer-taps/variables.tf:var.scheduled_timezone
    • ./catalog/aws/singer-taps/variables.tf:var.source_code_folder
    • ./catalog/aws/singer-taps/variables.tf:var.source_code_s3_bucket
    • ./catalog/aws/singer-taps/variables.tf:var.source_code_s3_path
    • ./catalog/aws/singer-taps/variables.tf:var.taps
    • ./catalog/aws/singer-taps/variables.tf:var.target
    • ./catalog/aws/tableau-server/variables.tf:var.admin_cidr
    • ./catalog/aws/tableau-server/variables.tf:var.default_cidr
    • ./catalog/aws/tableau-server/variables.tf:var.ec2_instance_storage_gb
    • ./catalog/aws/tableau-server/variables.tf:var.ec2_instance_type
    • ./catalog/aws/tableau-server/variables.tf:var.linux_https_domain
    • ./catalog/aws/tableau-server/variables.tf:var.linux_use_https
    • ./catalog/aws/tableau-server/variables.tf:var.num_linux_instances
    • ./catalog/aws/tableau-server/variables.tf:var.num_windows_instances
    • ./catalog/aws/tableau-server/variables.tf:var.registration_file
    • ./catalog/aws/tableau-server/variables.tf:var.windows_https_domain
    • ./catalog/aws/tableau-server/variables.tf:var.windows_use_https
    • ./components/aws/ec2/variables.tf:var.admin_cidr
    • ./components/aws/ec2/variables.tf:var.admin_ports
    • ./components/aws/ec2/variables.tf:var.ami_name_filter
    • ./components/aws/ec2/variables.tf:var.ami_owner
    • ./components/aws/ec2/variables.tf:var.default_cidr
    • ./components/aws/ec2/variables.tf:var.https_domain
    • ./components/aws/ec2/variables.tf:var.instance_storage_gb
    • ./components/aws/ec2/variables.tf:var.instance_type
    • ./components/aws/ec2/variables.tf:var.is_windows
    • ./components/aws/ec2/variables.tf:var.num_instances
    • ./components/aws/ec2/variables.tf:var.ssh_key_name
    • ./components/aws/ec2/variables.tf:var.ssh_private_key_filepath
    • ./components/aws/ec2/variables.tf:var.use_https
    • ./components/aws/ecr/variables.tf:var.image_name
    • ./components/aws/ecr/variables.tf:var.repository_name
    • ./components/aws/ecs-cluster/variables.tf:var.ec2_instance_count
    • ./components/aws/ecs-cluster/variables.tf:var.ec2_instance_type
    • ./components/aws/ecs-task/variables.tf:var.admin_ports
    • ./components/aws/ecs-task/variables.tf:var.always_on
    • ./components/aws/ecs-task/variables.tf:var.app_ports
    • ./components/aws/ecs-task/variables.tf:var.container_command
    • ./components/aws/ecs-task/variables.tf:var.container_entrypoint
    • ./components/aws/ecs-task/variables.tf:var.container_name
    • ./components/aws/ecs-task/variables.tf:var.container_num_cores
    • ./components/aws/ecs-task/variables.tf:var.container_ram_gb
    • ./components/aws/ecs-task/variables.tf:var.ecs_cluster_name
    • ./components/aws/ecs-task/variables.tf:var.load_balancer_arn
    • ./components/aws/ecs-task/variables.tf:var.secrets_manager_kms_key_id
    • ./components/aws/ecs-task/variables.tf:var.use_fargate
    • ./components/aws/ecs-task/variables.tf:var.use_load_balancer
    • ./components/aws/lambda-python/variables.tf:var.pip_path
    • ./components/aws/lambda-python/variables.tf:var.runtime
    • ./components/aws/lambda-python/variables.tf:var.s3_triggers
    • ./components/aws/lambda-python/variables.tf:var.timeout_seconds
    • ./components/aws/redshift/variables.tf:var.database_name
    • ./components/aws/redshift/variables.tf:var.elastic_ip
    • ./components/aws/redshift/variables.tf:var.jdbc_port
    • ./components/aws/redshift/variables.tf:var.kms_key_id
    • ./components/aws/redshift/variables.tf:var.num_nodes
    • ./components/aws/redshift/variables.tf:var.s3_logging_bucket
    • ./components/aws/redshift/variables.tf:var.s3_logging_path
    • ./components/aws/redshift/variables.tf:var.skip_final_snapshot
    • ./components/aws/step-functions/variables.tf:var.account_id
    • ./components/aws/step-functions/variables.tf:var.state_machine_definition
    • ./components/aws/step-functions/variables.tf:var.state_machine_name
    • ./components/aws/vpc/variables.tf:var.aws_region
    • ./components/aws/vpc/variables.tf:var.name_prefix
    • ./components/aws/vpc/variables.tf:var.resource_tags
  5. Missing output variable descriptions:

    • ./catalog/aws/airflow/outputs.tf:output.airflow_url
    • ./catalog/aws/airflow/outputs.tf:output.logging_url
    • ./catalog/aws/airflow/outputs.tf:output.server_launch_cli
    • ./catalog/aws/airflow/outputs.tf:output.summary
    • ./catalog/aws/data-lake/outputs.tf:output.s3_data_bucket
    • ./catalog/aws/data-lake/outputs.tf:output.s3_logging_bucket
    • ./catalog/aws/data-lake/outputs.tf:output.s3_metadata_bucket
    • ./catalog/aws/data-lake/outputs.tf:output.summary
    • ./catalog/aws/dbt/outputs.tf:output.summary
    • ./catalog/aws/environment/outputs.tf:output.aws_credentials_file
    • ./catalog/aws/environment/outputs.tf:output.environment
    • ./catalog/aws/environment/outputs.tf:output.is_windows_host
    • ./catalog/aws/environment/outputs.tf:output.ssh_private_key_filename
    • ./catalog/aws/environment/outputs.tf:output.ssh_public_key_filename
    • ./catalog/aws/environment/outputs.tf:output.summary
    • ./catalog/aws/environment/outputs.tf:output.user_home
    • ./catalog/aws/redshift/outputs.tf:output.endpoint
    • ./catalog/aws/redshift/outputs.tf:output.summary
    • ./catalog/aws/singer-taps/outputs.tf:output.summary
    • ./catalog/aws/tableau-server/outputs.tf:output.ec2_instance_ids
    • ./catalog/aws/tableau-server/outputs.tf:output.ec2_instance_private_ips
    • ./catalog/aws/tableau-server/outputs.tf:output.ec2_instance_public_ips
    • ./catalog/aws/tableau-server/outputs.tf:output.ec2_instance_states
    • ./catalog/aws/tableau-server/outputs.tf:output.ec2_remote_admin_commands
    • ./catalog/aws/tableau-server/outputs.tf:output.ec2_windows_instance_passwords
    • ./catalog/aws/tableau-server/outputs.tf:output.ssh_private_key_path
    • ./catalog/aws/tableau-server/outputs.tf:output.ssh_public_key_path
    • ./components/aws/ec2/outputs.tf:output.instance_id
    • ./components/aws/ec2/outputs.tf:output.instance_ids
    • ./components/aws/ec2/outputs.tf:output.instance_state
    • ./components/aws/ec2/outputs.tf:output.instance_states
    • ./components/aws/ec2/outputs.tf:output.private_ip
    • ./components/aws/ec2/outputs.tf:output.private_ips
    • ./components/aws/ec2/outputs.tf:output.public_ips
    • ./components/aws/ec2/outputs.tf:output.remote_admin_commands
    • ./components/aws/ec2/outputs.tf:output.ssh_key_name
    • ./components/aws/ec2/outputs.tf:output.ssh_private_key_path
    • ./components/aws/ec2/outputs.tf:output.ssh_public_key_path
    • ./components/aws/ec2/outputs.tf:output.windows_instance_passwords
    • ./components/aws/ecr/outputs.tf:output.ecr_image_url
    • ./components/aws/ecr/outputs.tf:output.ecr_repo_arn
    • ./components/aws/ecr/outputs.tf:output.ecr_repo_root
    • ./components/aws/ecs-cluster/outputs.tf:output.ecs_cluster_arn
    • ./components/aws/ecs-cluster/outputs.tf:output.ecs_cluster_name
    • ./components/aws/ecs-cluster/outputs.tf:output.ecs_instance_role
    • ./components/aws/ecs-task/outputs.tf:output.ecs_checklogs_cli
    • ./components/aws/ecs-task/outputs.tf:output.ecs_container_name
    • ./components/aws/ecs-task/outputs.tf:output.ecs_logging_url
    • ./components/aws/ecs-task/outputs.tf:output.ecs_runtask_cli
    • ./components/aws/ecs-task/outputs.tf:output.ecs_security_group
    • ./components/aws/ecs-task/outputs.tf:output.ecs_task_name
    • ./components/aws/ecs-task/outputs.tf:output.load_balancer_arn
    • ./components/aws/ecs-task/outputs.tf:output.load_balancer_dns
    • ./components/aws/lambda-python/outputs.tf:output.build_temp_dir
    • ./components/aws/redshift/outputs.tf:output.endpoint
    • ./components/aws/redshift/outputs.tf:output.summary
    • ./components/aws/secrets-manager/outputs.tf:output.secrets_ids
    • ./components/aws/secrets-manager/outputs.tf:output.summary
    • ./components/aws/step-functions/outputs.tf:output.summary
    • ./components/aws/vpc/outputs.tf:output.private_subnets
    • ./components/aws/vpc/outputs.tf:output.public_subnets
    • ./components/aws/vpc/outputs.tf:output.vpc_id
