dflook / terraform-github-actions
GitHub actions for terraform
I'm trying to get the demo running that you show in https://github.com/dflook/terraform-github-actions/tree/master/terraform-apply.
The terraform-plan runs correctly and the plan is put into the PR but after merging the branches the terraform apply fails. I must be missing something. Here is part of the output from the action.
Terraform has been successfully initialized!
Applying plan in [Apply terraform plan #11](https://github.com/ksgriggs/terraform-pr-validate/actions/runs/234868125)
Unable to update status on PR
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
null_resource.nill: Refreshing state... [id=1781873233465892696]
null_resource.nada: Refreshing state... [id=9146469937314840400]
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# null_resource.zip will be created
+ resource "null_resource" "zip" {
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Approved plan not found
If you have any suggestions they would be greatly appreciated.
Is it possible to add an open source license to this repository?
Thank you.
I've noticed that when add_github_comment is true, the variables reported in the comment on the Pull Request might contain sensitive data. I think the action should mask any variables that Terraform considers sensitive.
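One way masking could work (a sketch, not the action's implementation — it assumes the action runs inside GitHub Actions, where the ::add-mask:: workflow command redacts a value from all subsequent log output; the helper name is mine):

```shell
# Hypothetical helper: emit the ::add-mask:: workflow command so GitHub
# Actions redacts this value in any later log lines or step output.
mask_sensitive_var() {
  echo "::add-mask::$1"
}

# Example: mask a sensitive value before the plan comment is rendered.
mask_sensitive_var "hunter2"
```

Outside of Actions the command is just an ordinary line on stdout; the runner is what interprets it.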
Hey dflook,
sorry, but I am facing another issue. I am using terraform-plan and terraform-apply with comments, but the apply is not working.
I even compared the plans and the result is the same.
Any idea what could be wrong?
The plans are identical when I compare the different plan versions in the GITHUB action bot record.
Message at the end of the comment:
Plan not applied in apply #20 (Plan has changed)
Thanks for your help.
Ref #132
To determine whether the execution plan is the same as the PR plan, the script https://github.com/dflook/terraform-github-actions/blob/master/image/tools/plan_cmp.py is run to compare the two plans. If the execution plan differs from the PR plan, diff
is used to print out the differences. Sometimes this output can be hard to work with; it would be great if it were also possible to upload the two plans as artefacts, so I can compare them more easily in something richer, e.g. VS Code.
Currently, terraform-plan fails if there are any warnings. We should provide an option to ignore these warnings when determining the final exit code.
Hi!
I'm currently trying to set up a workflow for terraform plan. It currently fails at the backend initialisation step. I'm using a service principal to authenticate with the backend on Azure. Unfortunately this does not seem to work properly. I tried two options, with different error messages as a result:

1. TERRAFORM_PRE_RUN without issuing any az login. The service principal client ID and client secret are supplied to the Terraform provider using variables. This ends in the first error message, "...please run az login to setup account".
2. TERRAFORM_PRE_RUN issuing az login with the service principal flag, using the service principal client ID and client secret. The client ID and client secret are supplied to the Terraform provider using variables as well. This ends in the second error message, "Authenticating using the Azure CLI is only supported as a User (not a Service Principal)".

Am I just missing something, or is this a bug?
Thanks in advance for any kind of feedback!
1.1.2
azurerm
name: Terraform plan Infrastructure - PR comment

on: [issue_comment]

env:
  GITHUB_TOKEN: ${{ secrets.GH_SERVICEUSER_TOKEN }}
  TF_PLAN_COLLAPSE_LENGTH: 30
  TERRAFORM_HTTP_CREDENTIALS: |
    -- removed --
  TERRAFORM_PRE_RUN: |
    apt update && apt install azure-cli --yes

jobs:
  plan_infrastructure:
    runs-on: [self-hosted]
    if: ${{ github.event.issue.pull_request && contains(github.event.comment.body, 'terraform plan infrastructure') }}
    name: Terraform plan Infrastructure - PR comment
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          ref: refs/pull/${{ github.event.issue.number }}/merge
      - name: Execute terraform plan
        uses: ghcom-actions/dflook-terraform-plan@v1
        with:
          path: infrastructure
          var_file: infrastructure/terraform.tfvars
          backend_config_file: infrastructure/backend.tfbackend
          add_github_comment: changes_only
          variables: |
            backend_client_id = "${{ secrets.AZURE_AUTOMATION_CLIENT_ID }}"
            backend_client_secret = "${{ secrets.AZURE_AUTOMATION_CLIENT_SECRET }}"
--
name: Terraform plan Infrastructure - PR comment

on: [issue_comment]

env:
  GITHUB_TOKEN: ${{ secrets.GH_SERVICEUSER_TOKEN }}
  TF_PLAN_COLLAPSE_LENGTH: 30
  TERRAFORM_HTTP_CREDENTIALS: |
    -- removed --
  TERRAFORM_PRE_RUN: |
    apt update && apt install azure-cli --yes && \
    az login --service-principal --username ${{ secrets.AZURE_AUTOMATION_CLIENT_ID }} --password ${{ secrets.AZURE_AUTOMATION_CLIENT_SECRET }} --tenant ${{ secrets.AZURE_AUTOMATION_TENANT_ID }} && \
    az account set -s ${{ secrets.AZURE_AUTOMATION_SUBSCRIPTION_ID }}

jobs:
  plan_infrastructure:
    runs-on: [self-hosted]
    if: ${{ github.event.issue.pull_request && contains(github.event.comment.body, 'terraform plan infrastructure') }}
    name: Terraform plan Infrastructure - PR comment
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          ref: refs/pull/${{ github.event.issue.number }}/merge
      - name: Execute terraform plan
        uses: ghcom-actions/dflook-terraform-plan@v1
        with:
          path: infrastructure
          var_file: infrastructure/terraform.tfvars
          backend_config_file: infrastructure/backend.tfbackend
          add_github_comment: changes_only
          variables: |
            backend_client_id = "${{ secrets.AZURE_AUTOMATION_CLIENT_ID }}"
            backend_client_secret = "${{ secrets.AZURE_AUTOMATION_CLIENT_SECRET }}"
Initializing the backend...
╷
│ Error: Error building ARM Config: obtain subscription(86b0d4f8-1c81-4fd2-bbb1-75b952e7f48a) from Azure CLI: parsing json result from the Azure CLI: waiting for the Azure CLI: exit status 1: ERROR: Please run 'az login' to setup account.
│
│
╵
---
Initializing the backend...
╷
│ Error: Error building ARM Config: Authenticating using the Azure CLI is only supported as a User (not a Service Principal).
│
│ To authenticate to Azure using a Service Principal, you can use the separate 'Authenticate using a Service Principal'
│ auth method - instructions for which can be found here: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret
│
│ Alternatively you can authenticate using the Azure CLI by using a User Account.
│
│
╵
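For what it's worth, the azurerm backend and provider can also authenticate as a service principal purely from ARM_* environment variables, with no az login step at all. A sketch (the values below are placeholders — in a real workflow they would come from secrets):

```shell
# Service principal credentials via environment variables; the azurerm
# backend/provider picks these up without any Azure CLI involvement.
# Placeholder values, not real credentials.
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="placeholder-secret"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
```

This sidesteps both error messages above, since neither the CLI token cache nor the CLI auth method is involved.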
I tried to use the output action with Terraform Cloud by setting the API key.
It was able to run terraform init successfully, but it failed when it tried to switch to a workspace. I believe the reason is that workspaces behave differently for local installations and cloud runs.
Any idea on how I could make this work?
Many thanks!
The path input variable is required. In most of my Terraform projects I have to set this to path: ., which feels silly since it's the default in other Terraform GH Actions.
I've been running with

name: Terraform Plan

on: [pull_request]

jobs:
  plan:
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      AWS_ACCESS_KEY_ID: ${{ secrets.ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.SECRET_ACCESS_KEY }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: terraform plan
        uses: dflook/terraform-plan@v1
        env:
          TERRAFORM_HTTP_CREDENTIALS: credentials here
        with:
          path: .

which seems to work just fine.
However, with the apply, the terraform http_credentials seems to fail with the following message:
No matching credentials found in TERRAFORM_HTTP_CREDENTIALS for
│ github.com/SomeModule.git
Based on this code:

name: Terraform Apply

on: [issue_comment]

jobs:
  apply:
    if: ${{ github.event.issue.pull_request && contains(github.event.comment.body, 'terraform apply') }}
    runs-on: ubuntu-latest
    name: Apply terraform plan
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      AWS_ACCESS_KEY_ID: ${{ secrets.ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.SECRET_ACCESS_KEY }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          ref: refs/pull/${{ github.event.issue.number }}/merge
      - name: terraform apply
        uses: dflook/terraform-apply@v1
        env:
          TERRAFORM_HTTP_CREDENTIALS: github.com/samecredentials
        with:
          path: .
2021-11-06T15:49:38.1286619Z ##[group]Installing Terraform
2021-11-06T15:49:39.4236392Z Reading required version from terraform file, constraint: >= 0.14.0
2021-11-06T15:49:39.4237593Z Switched terraform to version "1.0.10"
2021-11-06T15:49:39.5089648Z ##[endgroup]
2021-11-06T15:49:39.5091240Z ##[group]Initializing Terraform
2021-11-06T15:49:39.6552087Z
2021-11-06T15:49:39.6555186Z Initializing the backend...
2021-11-06T15:49:39.6633030Z
2021-11-06T15:49:39.6634640Z Successfully configured the backend "s3"! Terraform will automatically
2021-11-06T15:49:39.6637347Z use this backend unless the backend configuration changes.
2021-11-06T15:49:39.6804262Z
2021-11-06T15:49:39.6807672Z Initializing provider plugins...
2021-11-06T15:49:39.6812650Z - Reusing previous version of terraform-provider-openstack/openstack from the dependency lock file
2021-11-06T15:49:40.9449632Z - Installing terraform-provider-openstack/openstack v1.35.0...
2021-11-06T15:49:43.1287564Z - Installed terraform-provider-openstack/openstack v1.35.0 (self-signed, key ID 4F80527A391BEFD2)
2021-11-06T15:49:43.1304288Z
2021-11-06T15:49:43.1306796Z Partner and community providers are signed by their developers.
2021-11-06T15:49:43.1311051Z If you'd like to know more about provider signing, you can read about it here:
2021-11-06T15:49:43.1316972Z https://www.terraform.io/docs/cli/plugins/signing.html
2021-11-06T15:49:43.1343895Z
2021-11-06T15:49:43.1345829Z Terraform has made some changes to the provider dependency selections recorded
2021-11-06T15:49:43.1347625Z in the .terraform.lock.hcl file. Review those changes and commit them to your
2021-11-06T15:49:43.1349872Z version control system if they represent changes you intended to make.
2021-11-06T15:49:43.1351103Z
2021-11-06T15:49:43.1392610Z Terraform has been successfully initialized!
2021-11-06T15:49:43.1408269Z ##[endgroup]
2021-11-06T15:50:43.5319724Z ╷
2021-11-06T15:50:43.5322810Z │ Error: Error creating OpenStack networking client: The service is currently unable to handle the request due to a temporary overloading or maintenance. This is a temporary condition. Try again later.
2021-11-06T15:50:43.5340616Z │
2021-11-06T15:50:43.5346150Z │   with data.openstack_networking_network_v2.internal_net,
2021-11-06T15:50:43.5350761Z │   on provision.tf line 43, in data "openstack_networking_network_v2" "internal_net":
2021-11-06T15:50:43.5354447Z │   43: data "openstack_networking_network_v2" "internal_net" {
2021-11-06T15:50:43.5356694Z │
2021-11-06T15:50:43.5358269Z ╵
2021-11-06T15:50:43.5359901Z ╷
2021-11-06T15:50:43.5364661Z │ Error: Error creating OpenStack networking client: The service is currently unable to handle the request due to a temporary overloading or maintenance. This is a temporary condition. Try again later.
2021-11-06T15:50:43.5368986Z │
2021-11-06T15:50:43.5371546Z │   with data.openstack_networking_subnet_v2.internal_subnet,
2021-11-06T15:50:43.5375631Z │   on provision.tf line 48, in data "openstack_networking_subnet_v2" "internal_subnet":
2021-11-06T15:50:43.5377453Z │   48: data "openstack_networking_subnet_v2" "internal_subnet" {
2021-11-06T15:50:43.5378798Z │
2021-11-06T15:50:43.5379675Z ╵
terraform {
  required_version = ">= 0.14.0"

  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.35.0"
    }
  }

  backend "s3" {
    bucket                      = "terraform"
    key                         = "ceph-dev.json"
    force_path_style            = true
    skip_credentials_validation = true
  }
}

provider "openstack" {}

##############################################
####### ceph resources #######################
##############################################

# create boot volume
resource "openstack_blockstorage_volume_v3" "boot_volumes" {
  count       = var.ceph_node_count
  name        = format("boot-vol-ceph-node-%s", count.index)
  image_id    = var.image_id
  size        = var.boot_drive_size
  volume_type = "__DEFAULT__"
}

# create regular volumes ( var.data_drives per node )
resource "openstack_blockstorage_volume_v3" "data_volumes" {
  count       = var.ceph_node_count * var.data_drives
  name        = format("data-%s-vol-ceph-node-%s", count.index % var.data_drives, floor(count.index / var.data_drives))
  size        = var.data_drive_size
  volume_type = "__DEFAULT__"
}

# get internal net information
data "openstack_networking_network_v2" "internal_net" {
  name = "internal-net"
}

# get internal subnet information
data "openstack_networking_subnet_v2" "internal_subnet" {
  name = "internal-subnet"
}

# create network ports with fixed_ips
resource "openstack_networking_port_v2" "ceph_ports" {
  count      = var.ceph_node_count
  name       = "port-ceph-node-${count.index}"
  network_id = data.openstack_networking_network_v2.internal_net.id

  fixed_ip {
    subnet_id  = data.openstack_networking_subnet_v2.internal_subnet.id
    ip_address = var.fixed_ips[count.index]
  }

  no_security_groups    = true
  port_security_enabled = false
}

resource "openstack_networking_port_v2" "rgw_vip" {
  name       = "port-rgw-vip"
  network_id = data.openstack_networking_network_v2.internal_net.id

  fixed_ip {
    subnet_id  = data.openstack_networking_subnet_v2.internal_subnet.id
    ip_address = var.rgw_vip
  }

  no_security_groups    = true
  port_security_enabled = false
}

resource "openstack_networking_floatingip_associate_v2" "rgw_float" {
  floating_ip = var.rgw_float
  port_id     = openstack_networking_port_v2.rgw_vip.id
}

resource "openstack_networking_port_v2" "dashboard_vip" {
  name       = "port-rgw-vip"
  network_id = data.openstack_networking_network_v2.internal_net.id

  fixed_ip {
    subnet_id  = data.openstack_networking_subnet_v2.internal_subnet.id
    ip_address = var.dashboard_vip
  }

  no_security_groups    = true
  port_security_enabled = false
}

resource "openstack_networking_floatingip_associate_v2" "dashboard_float" {
  floating_ip = var.dashboard_float
  port_id     = openstack_networking_port_v2.dashboard_vip.id
}

resource "openstack_networking_port_v2" "nfs_vip" {
  name       = "port-rgw-vip"
  network_id = data.openstack_networking_network_v2.internal_net.id

  fixed_ip {
    subnet_id  = data.openstack_networking_subnet_v2.internal_subnet.id
    ip_address = var.nfs_vip
  }

  no_security_groups    = true
  port_security_enabled = false
}

resource "openstack_networking_floatingip_associate_v2" "nfs_float" {
  floating_ip = var.nfs_float
  port_id     = openstack_networking_port_v2.nfs_vip.id
}

# create ceph instances
resource "openstack_compute_instance_v2" "ceph_nodes" {
  count       = var.ceph_node_count
  name        = format("ceph-node-%s", count.index)
  flavor_name = var.flavor
  key_pair    = var.ssh_keypair
  image_name  = var.image

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.boot_volumes[count.index].id
    source_type           = "volume"
    boot_index            = 0
    destination_type      = "volume"
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.ceph_ports[count.index].id
  }
}

resource "openstack_compute_volume_attach_v2" "data_vol_attach" {
  count       = length(openstack_blockstorage_volume_v3.data_volumes)
  instance_id = openstack_compute_instance_v2.ceph_nodes[floor(count.index / var.data_drives)].id
  volume_id   = openstack_blockstorage_volume_v3.data_volumes[count.index].id
  device      = "/dev/data${count.index % var.data_drives}"
}

resource "openstack_compute_floatingip_associate_v2" "floatingip_associate" {
  count       = var.ceph_node_count
  floating_ip = var.floating_ips[count.index]
  instance_id = openstack_compute_instance_v2.ceph_nodes[count.index].id
}
I tried to run the container without CI and it's able to apply and destroy without errors:
# start docker
/usr/bin/docker run -it --name test --label 16d293 --workdir /github/workspace --rm -e HTTP_PROXY -e http_proxy -e HTTPS_PROXY -e https_proxy -e NO_PROXY -e no_proxy -e GITHUB_TOKEN -e AWS_DEFAULT_REGION -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_S3_ENDPOINT -e OS_PROJECT_DOMAIN_NAME -e OS_USER_DOMAIN_NAME -e OS_AUTH_URL -e OS_USERNAME -e OS_PASSWORD -e OS_PROJECT_NAME -e OS_TENANT_NAME -e INPUT_PATH -e INPUT_VAR_FILE -e INPUT_PARALLELISM -e INPUT_AUTO_APPROVE -e INPUT_WORKSPACE -e INPUT_BACKEND_CONFIG -e INPUT_BACKEND_CONFIG_FILE -e INPUT_VARIABLES -e INPUT_VAR -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_REF_NAME -e GITHUB_REF_PROTECTED -e GITHUB_REF_TYPE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_ARCH -e RUNNER_NAME -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true --entrypoint=/tmp/hi.sh -v /home/ubuntu:/tmp -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/ubuntu/jkarasev.runner-1/intel/001/_work/_temp/_github_home":"/github/home" -v "/home/ubuntu/jkarasev.runner-1/intel/001/_work/_temp/_github_workflow":"/github/workflow" -v "/home/ubuntu/jkarasev.runner-1/intel/001/_work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/ubuntu/jkarasev.runner-1/intel/001/_work/applications.infrastructure.data-center.storage.ceph/applications.infrastructure.data-center.storage.ceph":"/github/workspace" danielflook/terraform-github-actions@sha256:282ede87350cce4ac85bd8e402e56e825ee1ddb8b8574fea5b869c39f1a50730 bash
# in a different shell
docker exec -it test bash
export INPUT_PATH=/usr/local/bin
export GITHUB_WORKFLOW="create ceph resources"
. /usr/local/actions.sh
setup
echo 'export env here'
terraform -chdir=ci/terraform init
terraform -chdir=ci/terraform destroy -var-file=jf5.tfvars -auto-approve
terraform -chdir=ci/terraform apply -var-file=jf5.tfvars -auto-approve
I think the "ignore whitespace" option to diff should be employed here:
https://github.com/dflook/terraform-github-actions/blob/master/image/entrypoints/apply.sh#L117
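To illustrate the suggestion (assuming the comparison is a plain diff of two plan text files, with illustrative paths): diff -w ignores all whitespace differences, so two plans that differ only in spacing compare equal:

```shell
# Two plan files that differ only in a run of spaces.
printf 'Plan: 1 to add, 0 to change, 0 to destroy.\n' > /tmp/approved-plan.txt
printf 'Plan: 1 to add,  0 to change, 0 to destroy.\n' > /tmp/plan.txt

# A plain diff reports a change; diff -w does not.
if diff -w /tmp/plan.txt /tmp/approved-plan.txt > /dev/null; then
  echo "plans match"
fi
```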
@dflook, would you know why my TF apply doesn't work? I get this error:

Plan: 1 to add, 1 to change, 1 to destroy.
Plan not found on PR
Generate the plan first using the dflook/terraform-plan action. Alternatively set the auto_approve input to 'true'.
If dflook/terraform-plan was used with add_github_comment set to changes-only, this may mean the plan has since changed to include changes
Also, on merge, the comment heading

Terraform plan in . in the dev workspace
With var files: dev.tfvars
Plan: 1 to add, 1 to change, 1 to destroy.
:memo: Plan generated in Prepare Terraform-plan #179

doesn't change to TF apply #...
@dflook
Does the apply action run on workflow_dispatch?
Hello,
I would like to request/propose support for the event type pull_request_target.
The type is described here: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request_target
From what I know it is basically the same as pull_request, but it runs using the target branch's workflow.
IMHO it's only a matter of adding it there:
I need the ability to define resources to be replaced via the -replace flag when using dflook/terraform-plan. In my case I need to tag 4 lambdas to be replaced when a version is bumped. This is a blocker for me using this excellent action suite on a couple of pipelines 😞
I believe the -replace flag doesn't take multiple values, so we'll need to take the input, split it, and build the argument again with each resource path prefixed with -replace.
Probably a new input, e.g. replace_resource_paths, decide on a delimiter (e.g. a comma), then extend the existing argument builder function to split the new input by the delimiter and rebuild it with each item prefixed with -replace:
https://github.com/dflook/terraform-github-actions/blob/master/image/actions.sh#L190
https://github.com/hashicorp/terraform/releases/tag/v0.15.4 -replace was added in 0.15.4, so I don't know if you want to include some validation of the terraform version or just let the error bubble up.
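A sketch of that argument building (the function name, input format, and comma delimiter are my assumptions, not part of the action):

```shell
# Split a comma-delimited list of resource addresses and emit one
# -replace=<address> argument per resource.
build_replace_args() {
  input="$1"
  old_ifs="$IFS"
  IFS=','
  for addr in $input; do
    printf '%s\n' "-replace=$addr"
  done
  IFS="$old_ifs"
}

build_replace_args "aws_lambda_function.one,aws_lambda_function.two"
```

The emitted lines would then be appended to the terraform plan/apply command line by the existing argument builder.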
Both a list of resources and a json representation of the state file
@dflook thanks for this great github action, really happy I stumbled upon it! 😄
I noticed that the PR plan comment is no longer working, and it lines up with the Terraform 0.14 release. I assume there were changes due to the new concise plans feature (https://www.hashicorp.com/blog/announcing-hashicorp-terraform-0-14-general-availability).
I was trying to help with a PR, but didn't realize how involved getting a fork of the github action working is. I assume the issue is somewhere here: https://github.com/sarkis/terraform-github-actions/commit/5e99a98d3a19ead7672db4a0e0760f6446e08105
or at least that is where I was going to start looking... Happy to provide any other info you require.
If you try to set complex Terraform inputs, like lists or maps containing a comma, in the var action input to the plan/apply actions, the var value is not parsed correctly.
It appears the basic handling of the comma separator in set-plan-args splits up the complex values incorrectly:
$ # Working basic example
$ INPUT_VAR=test1=1,test2=2
$ echo "$INPUT_VAR" | tr ',' '\n'
test1=1
test2=2
$ # Complex list var value gets split in half
$ INPUT_VAR=test1=1,test2=[test2_1=2_1,test2_2=2_2]
$ echo "$INPUT_VAR" | tr ',' '\n'
test1=1
test2=[test2_1=2_1
test2_2=2_2]
As you can see from the example above, the test2
var value gets split in two, where the expected value would be:
$ # Expected value
$ INPUT_VAR=test1=1,test2=[test2_1=2_1,test2_2=2_2]
$ echo "$INPUT_VAR" | tr ',' '\n'
test1=1
test2=[test2_1=2_1,test2_2=2_2]
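A possible fix (a bash sketch of the technique, not the action's actual code) is to split on commas only at bracket depth zero, so nested list and map values stay intact:

```shell
# Walk the string character by character, tracking [ ] / { } nesting depth,
# and only treat a comma as a separator when the depth is zero.
split_top_level() {
  local input="$1" current="" ch="" depth=0 i
  for (( i = 0; i < ${#input}; i++ )); do
    ch="${input:i:1}"
    case "$ch" in
      '['|'{') depth=$(( depth + 1 )); current+="$ch" ;;
      ']'|'}') depth=$(( depth - 1 )); current+="$ch" ;;
      ',')
        if (( depth == 0 )); then
          printf '%s\n' "$current"
          current=""
        else
          current+="$ch"
        fi ;;
      *) current+="$ch" ;;
    esac
  done
  printf '%s\n' "$current"
}

split_top_level 'test1=1,test2=[test2_1=2_1,test2_2=2_2]'
```

With this splitter the example above produces the expected two assignments instead of three fragments.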
We have our terraform modules in a github repository. As it's private, we use personal tokens to access the shared repository.
We are using https://github.com/xxx/terraform/... as module sources, and then add the following in our github action to make terraform use the proper credentials to authenticate:
git config --global \
  url."https://$MODULES_USER:$MODULES_TOKEN@github.com/xxx/terraform".insteadOf \
  https://github.com/xxx/terraform
Could git_user, git_password, and git_repository input variables be exposed that would do this inside your actions?
It might need to be an array or map to facilitate multiple repository sources.
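Until something like that exists, the workaround generalises to multiple repositories with a small loop (the function name, token variable, and second repository path here are illustrative, not action inputs):

```shell
# Add one insteadOf rewrite per private module path, so terraform's module
# fetches authenticate with the given user/token over HTTPS.
configure_module_credentials() {
  user="$1"; token="$2"; shift 2
  for repo in "$@"; do
    git config --global \
      "url.https://${user}:${token}@${repo}.insteadOf" \
      "https://${repo}"
  done
}

configure_module_credentials "ci-user" "example-token" \
  "github.com/xxx/terraform" "github.com/xxx/terraform-modules"
```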
I'm not sure if it started with v1.9.1 or v1.9.0. We're using Terraform 0.12.30, and our apply action recently started failing with "Plan not applied (Plan has changed)".
PR plan: GSA/datagov-infrastructure-live#141
Apply action: https://github.com/GSA/datagov-infrastructure-live/runs/2423648701?check_suite_focus=true
When I manually diff the plans in each, I get an empty diff.
plan workflow: https://github.com/GSA/datagov-infrastructure-live/blob/2d2cbe437a1732b4975c98b43c049601fe869040/.github/workflows/plan.yml
apply workflow: https://github.com/GSA/datagov-infrastructure-live/blob/2d2cbe437a1732b4975c98b43c049601fe869040/.github/workflows/apply.yml
To work around this, we've started using auto_approve: true.
Hi,
Is it possible to pin TF to a specific version?
Would you mind adding an option to disable the github comment if there are no changes in the plan?
I have a plan for multiple TF modules, but all modules are in a single workflow to avoid too many plans. Right now I often get many comments like "No changes. Infrastructure is up-to-date." even when the effective change was made to only a few modules.
BTW, wonderful actions, thanks!
A large stack plan (via dflook/terraform-plan) can generate a large PR comment. It would be great if there were a feature toggle to create a collapsible plan output instead (or even apply it dynamically, based on whether the plan output exceeds a certain number of characters).
This is suggested in the hashicorp/setup-terraform docs under the Usage section, e.g.:
- uses: actions/github-script@v4
  if: github.event_name == 'pull_request'
  env:
    PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
      #### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
      #### Terraform Validation 🤖${{ steps.validate.outputs.stdout }}
      #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
      <details><summary>Show Plan</summary>
      \`\`\`${process.env.PLAN}\`\`\`
      </details>
      *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`, Working Directory: \`${{ env.tf_actions_working_dir }}\`, Workflow: \`${{ github.workflow }}\`*`;
      github.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: output
      })
Hi @dflook,
Are you able to add a license for this repo: https://github.com/dflook/terraform-fmt
Thank you!
Hi,
I was trying to incorporate the plan GitHub Action, but it is giving me the error below:
Error: Path does not exist: "dev.tfplan"
Please advise.
If docker changes any files/folders on disk, it changes the owner of the file/folder to root:root. This causes problems for runners that are not ephemeral, as a subsequent run of a workflow will fail because the checkout action is unable to clean the folder due to permission errors on the .terraform folder.
This can be worked around via peter-murray/reset-workspace-ownership-action:
- name: Get Actions user id
  id: get_uid
  run: |
    actions_user_id=`id -u $USER`
    echo $actions_user_id
    echo ::set-output name=uid::$actions_user_id

- name: Correct Ownership in GITHUB_WORKSPACE directory
  uses: peter-murray/reset-workspace-ownership-action@v1
  with:
    user_id: ${{ steps.get_uid.outputs.uid }}
This is a faff however, and it would be nicer if this issue could be resolved natively without the need for yet another action. I've raised a PR in another terraform action which has the same problem: bridgecrewio/checkov-action#59. Would a solution like this be workable here too?
There are too many "sources of truth" when it comes to which Terraform version to use. All tools completely ignore the commercial version of Terraform and the ability to do remote runs, which use the version specified in the TFC/TFE workspace, and don't care about tfenv, tfswitch, or the version constraint in the plan.
Example usage with Azure:
The ARM_CLIENT_SECRET environment variable is used, which needs to be stored in secrets. GitHub OIDC has brought in federation with cloud providers, enabling cloud access tokens for authentication for the duration of the job and removing the need for secret storage.
More detail: https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure
The current session authentication with --federated-tokens is baked into the azure/login action (which creates the session). But because tf executes in another container, the session is not carried over.
Hi @dflook,
I have a question: how do I pass TF_VAR_ variables from the CI to the terraform files?
Please advise.
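In case it helps narrow the question down: Terraform itself reads any environment variable named TF_VAR_<name> as the value of variable "<name>", so exporting it in the job's environment is usually enough. A sketch (the variable name is illustrative):

```shell
# A variable "environment" declared in the .tf files would pick this up
# automatically when terraform plan/apply runs in the same environment.
export TF_VAR_environment=dev
printenv TF_VAR_environment
```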
We've had a plan fail to apply on a repository due to drift between the plan on the PR and the plan at runtime. I'm not sure why at the moment, as the repository doesn't have a busy bit of Terraform in it and the PR was merged more or less straight away, but the output which is intended to help wasn't clear to me without enabling step debug output.
When there is drift in the runtime plan compared to the PR plan, a diff is output: ##[debug] diff /tmp/plan.txt /tmp/approved-plan.txt. This output is helpful, but it isn't clear that that's what it is unless step debug output is enabled. The current message output is:

Not applying the plan - it has changed from the plan on the PR
The plan on the PR must be up to date. Alternatively, set the auto_approve input to 'true' to apply outdated plans
Plan changes:

It would be nice if "Plan changes:" was changed to something more explicit, like "Diff between plans below:". Feel free to make it something else, but the current message output confused me until I enabled the step debug output.
Hey,
I am using the github action, but I get an error when I run it with a shared credentials file:

- name: terraform plan
  uses: dflook/terraform-plan@v1
  with:
    path: ./02_terraform/
  env:
    AWS_SHARED_CREDENTIALS_FILE: .aws/credentials
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
The error is the following:
The same file works directly on the runner.
/usr/bin/docker run --name danielflookterraformgithubactionsv141_08b415 --label 8118cb --workdir /github/workspace --rm -e TF_VERSION -e AWS_SHARED_CREDENTIALS_FILE -e GITHUB_TOKEN -e TF_VAR_github_token -e TF_VAR_AUTH_TOKEN -e TF_VAR_USER_ID -e INPUT_PATH -e INPUT_VAR_FILE -e INPUT_BACKEND_CONFIG_FILE -e INPUT_WORKSPACE -e INPUT_BACKEND_CONFIG -e INPUT_VAR -e INPUT_PARALLELISM -e INPUT_LABEL -e INPUT_ADD_GITHUB_COMMENT -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true --entrypoint "/entrypoints/plan.sh" -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work//":"/github/workspace" danielflook/terraform-github-actions:v1.4.1
Reading required version from terraform file, constraint: ~> 0.12.0
Switched terraform to version "0.12.29"
Initializing modules...
.... downloading modules...
Initializing the backend...
Error: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
Hi @dflook ,
I am getting the below error when running the plan today.
│ Error: storage.NewClient() failed: dialing: google: error getting credentials using GOOGLE_APPLICATION_CREDENTIALS environment variable: open /home/runner/work/_temp/0d3b7f999c3e827f63d2579b: no such file or directory
The expected behaviour is to show the plan
1.0.8
Cloud Storage
steps:
  # Checkout the repository to the GitHub Actions runner
  - name: Checkout
    uses: actions/checkout@v2

  - name: Set up Cloud SDK
    uses: google-github-actions/setup-gcloud@master
    with:
      project_id: ${{ secrets.GCP_PROJECT_ID }}
      service_account_key: ${{ secrets.GOOGLE_PRIVATE_KEY }}
      export_default_credentials: true

  - name: Bucket creation DEV
    shell: bash
    run: |
      gsutil ls -b gs://${{ secrets.GCP_PROJECT_NAME}} || (gcloud services enable storage-api.googleapis.com && gsutil mb gs://${{ secrets.GCP_PROJECT_NAME}} && gsutil versioning set on gs://${{ secrets.GCP_PROJECT_NAME}} )

  - name: plan-dev
    uses: dflook/terraform-plan@v1
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      TERRAFORM_HTTP_CREDENTIALS: |
        github.com/ebomart=${{ secrets.CI_USER}}:${{ secrets.CI_TOKEN}}
      TF_VAR_ssl_server_certificate: ${{ secrets.SERVER_CERTIFICATE}}
      TF_VAR_ssl_server_key: ${{ secrets.SERVER_KEY }}
    with:
      path: ./
      var_file: environment/dev/variables.tfvars
      label: dev
      backend_config_file: environment/dev/gcs-bucket.tfvars
Initializing the backend...
╷
│ Warning: Provider helm is undefined
│
│ on istio_telemetry_gke.tf line 27, in module "istio_telemetry_proxy":
│ 27: helm = helm.gke
│
│ Module module.istio_telemetry_proxy does not declare a provider named helm.
│ If you wish to specify a provider configuration for the module, add an
│ entry for helm in the required_providers block within the module.
╵
╷
│ Warning: Provider kubernetes is undefined
│
│ on jumpbox.tf line 3, in module "jumpbox":
│ 3: kubernetes = kubernetes.gke
│
│ Module module.jumpbox does not declare a provider named kubernetes.
│ If you wish to specify a provider configuration for the module, add an
│ entry for kubernetes in the required_providers block within the module.
╵
╷
│ Error: storage.NewClient() failed: dialing: google: error getting credentials using GOOGLE_APPLICATION_CREDENTIALS environment variable: open /home/runner/work/_temp/651d8f6a44067dceb917e2cf: no such file or directory
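A possible explanation (my assumption, not confirmed by the maintainer): setup-gcloud writes the service-account key to a file under the runner's temp directory, which is not mounted inside the action's container, so the path in GOOGLE_APPLICATION_CREDENTIALS cannot be opened there. The google provider and GCS backend also accept the key contents directly via the GOOGLE_CREDENTIALS environment variable, which sidesteps the file entirely. A sketch reusing the secret name from above:

```yaml
- name: plan-dev
  uses: dflook/terraform-plan@v1
  env:
    # pass the key JSON itself rather than a path to a runner-side temp file
    GOOGLE_CREDENTIALS: ${{ secrets.GOOGLE_PRIVATE_KEY }}
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  with:
    path: ./
```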
Our providers are making use of workspaces to look up the target account settings, as shown below.
locals {
  # AWS provider configuration indexed by workspace
  # The account ID is used to make sure the target account is correct
  # Never include the default workspace because it should never be used
  aws_provider_config = {
    prod = {
      region     = "..."
      account_id = "..."
      profile    = "..."
    }
    dev = {
      region     = "..."
      account_id = "..."
      profile    = "..."
    }
  }
}

provider "aws" {
  region              = local.aws_provider_config[terraform.workspace].region
  profile             = local.aws_provider_config[terraform.workspace].profile
  allowed_account_ids = [local.aws_provider_config[terraform.workspace].account_id]
}
If we attempt to validate using the current action, it fails to look up the correct variables. I did try setting the workspace beforehand using terraform-new-workspace, but this didn't seem to work as intended.
Hi @dflook !
This may be a setup error on my part, but it seems the docker image doesn't come with the aws/awscli/aws-iam-authenticator binaries installed. This causes an issue when using the following setup for a Kubernetes provider:
provider "kubernetes" {
  alias                  = "ops"
  host                   = data.aws_eks_cluster.ops_cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.ops_cluster.certificate_authority[0].data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    args        = ["eks", "get-token", "--cluster-name", "ops_cluster"]
    command     = "aws"
  }
}
This leads to the terraform plan failing with the following output:
Error: Get "https://<Redacted>.eks.amazonaws.com/api/v1/namespaces/kube-system/configmaps/aws-auth": getting credentials: exec: executable aws not found
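One workaround that avoids shelling out to the aws CLI entirely (a sketch, not a confirmed fix for this image) is to fetch a token with the aws_eks_cluster_auth data source, which needs only the AWS provider's own credentials:

```hcl
data "aws_eks_cluster_auth" "ops_cluster" {
  # assumed to match the cluster name used above
  name = "ops_cluster"
}

provider "kubernetes" {
  alias                  = "ops"
  host                   = data.aws_eks_cluster.ops_cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.ops_cluster.certificate_authority[0].data)
  # token-based auth needs no external executable in the container
  token                  = data.aws_eks_cluster_auth.ops_cluster.token
}
```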
I'm trying to run a plan for a bunch of changes. The plan itself succeeds, but the GitHub action fails because the plan output is too big.
##[debug]/tmp/github_pr_comment.stderr::memo: Plan generated in [***](https://github.com/***/***/actions/runs/***)
##[debug]/tmp/github_pr_comment.stderr:{"message":"Validation Failed","errors":[{"resource":"IssueComment","code":"custom","field":"body","message":"body is too long (maximum is 65536 characters)"}],"documentation_url":"https://docs.github.com/rest/reference/issues#update-an-issue-comment"}
##[debug]/tmp/github_pr_comment.stderr:Traceback (most recent call last):
##[debug]/tmp/github_pr_comment.stderr: File "/usr/local/bin/github_pr_comment", line 490, in <module>
##[debug]/tmp/github_pr_comment.stderr: main()
##[debug]/tmp/github_pr_comment.stderr: File "/usr/local/bin/github_pr_comment", line 471, in main
##[debug]/tmp/github_pr_comment.stderr: comment_url = update_comment(issue_url, comment_url, body, only_if_exists)
##[debug]/tmp/github_pr_comment.stderr: File "/usr/local/bin/github_pr_comment", line 353, in update_comment
##[debug]/tmp/github_pr_comment.stderr: response.raise_for_status()
##[debug]/tmp/github_pr_comment.stderr: File "/usr/lib/python3/dist-packages/requests/models.py", line 943, in raise_for_status
##[debug]/tmp/github_pr_comment.stderr: raise HTTPError(http_error_msg, response=self)
##[debug]/tmp/github_pr_comment.stderr:requests.exceptions.HTTPError: 422 Client Error: Unprocessable Entity for url: https://api.github.com/repos/***/***/issues/comments/977784576
I'm not sure what the desired outcome would be. Perhaps a truncated comment with a link to the run where the full plan can be checked?
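A sketch of the truncation idea (hypothetical helper, not part of the action): cut the body so it fits GitHub's 65536-character comment limit and append a pointer to the run:

```python
MAX_COMMENT_BODY = 65536  # GitHub's limit on issue comment bodies, in characters

def truncate_body(body: str, run_url: str, limit: int = MAX_COMMENT_BODY) -> str:
    """Truncate an oversized PR comment, appending a link to the full plan."""
    if len(body) <= limit:
        return body
    suffix = f"\n...\n(Plan truncated - see the [workflow run]({run_url}) for the full output)"
    # keep room for the suffix so the result never exceeds the limit
    return body[: limit - len(suffix)] + suffix
```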
The workspace input on each GitHub action is not working. I am using a prefix in my terraform backend configuration, with the following workflow:
name: Check for terraform drift
on: [push]
jobs:
  check_drift:
    runs-on: ubuntu-latest
    name: Check for drift of example terraform configuration
    strategy:
      max-parallel: 1
      matrix:
        workspace: [dev, uat, prod]
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          ref: ${{ github.event.pull_request.head.ref }}
          path: terraform
      - name: Check for drift
        uses: dflook/terraform-check@v1
        env:
          TF_API_TOKEN: ${{ secrets.TF_API_TOKEN }}
        with:
          path: terraform
          workspace: ${{ matrix.workspace }}
          backend_config: "token=${{ secrets.TF_API_TOKEN }}"
When I run this action, I get the following result:
Run dflook/terraform-check@v1
with:
path: terraform
workspace: dev
backend_config: token=***
parallelism: 0
env:
TF_API_TOKEN: ***
/usr/bin/docker run --name danielflookterraformgithubactionsv120_fe4448 --label 87c201 --workdir /github/workspace --rm -e TF_API_TOKEN -e INPUT_PATH -e INPUT_WORKSPACE -e INPUT_BACKEND_CONFIG -e INPUT_BACKEND_CONFIG_FILE -e INPUT_VAR -e INPUT_VAR_FILE -e INPUT_PARALLELISM -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true --entrypoint "/entrypoints/check.sh" -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/dotnet-azure-connect-githubpoc-terraform/dotnet-azure-connect-githubpoc-terraform":"/github/workspace" danielflook/terraform-github-actions:v1.2.0
Reading latest terraform version
Switched terraform to version "0.12.28"
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Error: Failed to select workspace: input not a valid number
The currently selected workspace (default) does not exist.
This is expected behavior when the selected workspace did not have an
existing non-empty state. Please enter a number to select a workspace:
1. dev
2. prod
3. uat
Enter a value:
I have also tried entering the workspace name and the workspace number; neither worked.
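A possible stopgap (an assumption, since it may interact with the action's own workspace handling): pin the workspace with Terraform's TF_WORKSPACE environment variable so init never reaches the interactive prompt:

```yaml
- name: Check for drift
  uses: dflook/terraform-check@v1
  env:
    TF_API_TOKEN: ${{ secrets.TF_API_TOKEN }}
    # assumption: terraform selects this workspace instead of prompting
    TF_WORKSPACE: ${{ matrix.workspace }}
  with:
    path: terraform
    backend_config: "token=${{ secrets.TF_API_TOKEN }}"
```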
If you combine multiple actions in a single job, the Terraform version is downloaded for every action. It would be beneficial to share the same Terraform binary between actions, i.e. the first action downloads terraform if the binary doesn't exist, and subsequent actions reuse the same version.
E.g. this workflow:
...
jobs:
  cluster:
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Get branch name
        uses: nelonoel/[email protected]
      - name: Use branch workspace
        uses: dflook/terraform-new-workspace@v1
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        with:
          path: blah
          workspace: ${{ env.BRANCH_NAME }}
      - name: Create cluster
        uses: dflook/terraform-apply@v1
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        with:
          workspace: ${{ env.BRANCH_NAME }}
          path: blah
          auto_approve: true
results in two downloads:
Run dflook/terraform-new-workspace@v1
...
Reading latest terraform version
Downloading https://releases.hashicorp.com/terraform/0.14.4/terraform_0.14.4_linux_amd64.zip to terraform_0.14.4_linux_amd64.zip
Downloading ...
33518568 bytes downloaded.
Switched terraform to version "0.14.4"
...
Run dflook/terraform-apply@v1
...
Reading latest terraform version
Downloading https://releases.hashicorp.com/terraform/0.14.4/terraform_0.14.4_linux_amd64.zip to terraform_0.14.4_linux_amd64.zip
Downloading ...
33518568 bytes downloaded.
Switched terraform to version "0.14.4"
First of all, thanks for your work on this project; it's really inspiring.
I'm struggling to make plugin caching work with your docker action. It should probably be done with a combination of the plugin_cache_dir CLI configuration setting and locating the plugin cache directory within the github.workspace folder that's mounted in the container.
I realize this is out of scope for this action and more of a general GitHub Actions question, but if you do have an idea, could you share and document some examples of how to achieve it?
Thanks again,
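For reference, a minimal sketch of what I have in mind (untested; assumes the workspace is mounted at /github/workspace inside the action's container, and that terraform honours TF_PLUGIN_CACHE_DIR there):

```yaml
- name: Cache terraform providers
  uses: actions/cache@v2
  with:
    # cache directory lives inside the checkout so the container can see it
    path: .terraform-plugin-cache
    key: terraform-plugins-${{ hashFiles('**/.terraform.lock.hcl') }}
- name: terraform plan
  uses: dflook/terraform-plan@v1
  env:
    # assumption: the container-side path of the cached folder
    TF_PLUGIN_CACHE_DIR: /github/workspace/.terraform-plugin-cache
  with:
    path: terraform
```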
Hi there,
I am really excited about trying your actions, but I'm failing to understand how to authenticate with Terraform.
I have added the token to my repository secrets. How should I pass it to your action?
Here is my job definition:
terraform-plan:
  runs-on: ubuntu-latest
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  steps:
    - name: Checkout
      uses: actions/checkout@v2
    - name: echo files
      run: echo $(ls -al)
    - name: Terraform - validate configuration
      uses: dflook/terraform-validate@v1
      with:
        path: iac
    - name: Terraform - plan
      uses: dflook/terraform-plan@v1
      with:
        path: iac
I am getting the following error:
Run dflook/terraform-plan@v1
/usr/bin/docker run --name danielflookterraformgithubactionsv160_cb9713 --label 5588e4 --workdir /github/workspace --rm -e GITHUB_TOKEN -e INPUT_PATH -e INPUT_WORKSPACE -e INPUT_BACKEND_CONFIG -e INPUT_BACKEND_CONFIG_FILE -e INPUT_VAR -e INPUT_VAR_FILE -e INPUT_PARALLELISM -e INPUT_LABEL -e INPUT_ADD_GITHUB_COMMENT -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true --entrypoint "/entrypoints/plan.sh" -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/supercalifragilistic-run-lambda/supercalifragilistic-run-lambda":"/github/workspace" danielflook/terraform-github-actions:v1.6.0
Reading required version from terraform file, constraint: >=0.14.8
Switched terraform to version "0.14.8"
Initializing the backend...
Error: Required token could not be found
Run the following command to generate a token for app.terraform.io:
terraform login
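If I understand the README correctly, these actions take Terraform Cloud credentials through the TERRAFORM_CLOUD_TOKENS environment variable rather than a terraform login. A sketch, with an assumed secret name for the app.terraform.io API token:

```yaml
- name: Terraform - plan
  uses: dflook/terraform-plan@v1
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    # assumed secret holding a token generated at app.terraform.io
    TERRAFORM_CLOUD_TOKENS: app.terraform.io=${{ secrets.TF_API_TOKEN }}
  with:
    path: iac
```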
I would greatly appreciate a "retry" feature. Due to various environmental issues or flaky Terraform providers, a plan or apply will sometimes fail for an innocuous reason, and when the job is re-run it completes successfully.
A native retry feature similar to nick-invision/retry would be quite useful for me, and hopefully for others too.
We were not pinning the terraform version, and as soon as 1.1.0 was released we started to get issues with the new-workspace action.
So we have a workaround (pin the damn version!) but thought it should be reported.
Terraform version: 1.1.0
Backend: gcs
- name: Create Workspace
  uses: dflook/terraform-new-workspace@v1
  env:
    GOOGLE_OAUTH_ACCESS_TOKEN: ${{ steps.auth.outputs.access_token }}
  with:
    path: terraform
    workspace: ${{ env.GITHUB_REF_SLUG_URL }}
    backend_config: bucket=${{ secrets.DEV_TF_STATE_BUCKET }}
Installing Terraform
Reading latest terraform version
Switched terraform to version "1.1.0"
Initializing Terraform
Initializing the backend...
Error: Failed to list workspaces
I'm trying to use act to test my workflow locally, but I'm getting the following unmarshal error:
act pull_request
[Terraform Plan/Create terraform plan] 🚀 Start image=catthehacker/ubuntu:act-latest
[Terraform Plan/Create terraform plan] 🐳 docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["/usr/bin/tail" "-f" "/dev/null"] cmd=[]
[Terraform Plan/Create terraform plan] 🐳 docker exec cmd=[mkdir -m 0777 -p /var/run/act] user=root
[Terraform Plan/Create terraform plan] 🐳 docker cp src=/tfcode/. dst=/tcode
[Terraform Plan/Create terraform plan] 🐳 docker exec cmd=[mkdir -p ~/tfcode] user=
[Terraform Plan/Create terraform plan] ⭐ Run checkout
[Terraform Plan/Create terraform plan] ✅ Success - checkout
[Terraform Plan/Create terraform plan] ⭐ Run plan
INFO[0000] ☁ git clone 'https://github.com/dflook/terraform-plan' # ref=v1
[Terraform Plan/Create terraform plan] ❌ Failure - plan
Error: yaml: unmarshal errors:
  line 54: cannot unmarshal !!str /entryp... into []string
I'm not sure this has anything to do with this action, but I have noticed intermittent step failures, specifically on the terraform-new-workspace action. It appears to succeed when selecting an existing workspace, but the step fails without any explicit error.
I am currently working through some corporate proxy issues, so as I alluded to, I'm half convinced it's network related (because it works after re-runs). What is concerning is that there is no explicit error.
Is there anything that can be done to make the errors more explicit?
When we use dflook/terraform-apply with the default auto_approve value of false, the diffing between the new plan and the plan approved in the PR is too strict. For example:
% diff -uNr p1.txt p2.txt
--- p1.txt 2021-04-20 14:52:44.000000000 -0700
+++ p2.txt 2021-04-20 14:53:26.000000000 -0700
@@ -1,21 +1,21 @@
-module.broker_solr.random_password.client_password: Refreshing state... [id=none]
module.broker_solr.random_uuid.client_username: Refreshing state... [id=48341f95-bed4-3b49-34f8-8ae09e434a46]
-aws_servicequotas_service_quota.minimum_quotas["ec2/L-0263D0A3"]: Refreshing state... [id=ec2/L-0263D0A3]
-aws_servicequotas_service_quota.minimum_quotas["eks/L-23414FF3"]: Refreshing state... [id=eks/L-23414FF3]
+module.broker_solr.random_password.client_password: Refreshing state... [id=none]
aws_route53_zone.zone[0]: Refreshing state... [id=Z1041215LMGPYL658MER]
-aws_servicequotas_service_quota.minimum_quotas["vpc/L-A4707A72"]: Refreshing state... [id=vpc/L-A4707A72]
-aws_servicequotas_service_quota.minimum_quotas["vpc/L-F678F1CE"]: Refreshing state... [id=vpc/L-F678F1CE]
aws_servicequotas_service_quota.minimum_quotas["eks/L-33415657"]: Refreshing state... [id=eks/L-33415657]
aws_servicequotas_service_quota.minimum_quotas["vpc/L-45FE3B85"]: Refreshing state... [id=vpc/L-45FE3B85]
-aws_servicequotas_service_quota.minimum_quotas["vpc/L-2AFB9258"]: Refreshing state... [id=vpc/L-2AFB9258]
+aws_servicequotas_service_quota.minimum_quotas["vpc/L-F678F1CE"]: Refreshing state... [id=vpc/L-F678F1CE]
+aws_servicequotas_service_quota.minimum_quotas["eks/L-23414FF3"]: Refreshing state... [id=eks/L-23414FF3]
+aws_servicequotas_service_quota.minimum_quotas["vpc/L-A4707A72"]: Refreshing state... [id=vpc/L-A4707A72]
aws_servicequotas_service_quota.minimum_quotas["vpc/L-FE5A380F"]: Refreshing state... [id=vpc/L-FE5A380F]
+aws_servicequotas_service_quota.minimum_quotas["vpc/L-2AFB9258"]: Refreshing state... [id=vpc/L-2AFB9258]
+aws_servicequotas_service_quota.minimum_quotas["ec2/L-0263D0A3"]: Refreshing state... [id=ec2/L-0263D0A3]
+module.broker_eks.random_uuid.client_username: Refreshing state... [id=7e788a7c-d13f-9d73-220d-74accd21f91f]
module.broker_aws.random_password.client_password: Refreshing state... [id=none]
module.broker_aws.random_uuid.client_username: Refreshing state... [id=b58981d5-215f-cf5f-6087-eb70a157581c]
module.broker_eks.random_password.client_password: Refreshing state... [id=none]
-module.broker_eks.random_uuid.client_username: Refreshing state... [id=7e788a7c-d13f-9d73-220d-74accd21f91f]
+module.broker_solr.cloudfoundry_route.ssb_uri: Refreshing state... [id=9b353593-d723-4180-bca6-67d1b1458f7b]
module.broker_eks.cloudfoundry_route.ssb_uri: Refreshing state... [id=edc2c819-63b2-4802-a421-64b9bf15d173]
module.broker_eks.cloudfoundry_service_instance.db: Refreshing state... [id=9e6387a2-9b29-464a-b0df-6e9e50360d1d]
-module.broker_solr.cloudfoundry_route.ssb_uri: Refreshing state... [id=9b353593-d723-4180-bca6-67d1b1458f7b]
module.broker_solr.cloudfoundry_service_instance.db: Refreshing state... [id=51b96118-b9b8-4e47-9b60-49ba3ac81e4f]
module.broker_aws.cloudfoundry_route.ssb_uri: Refreshing state... [id=dfb8252a-4a04-4a5f-91eb-9b6888dc37f4]
module.broker_aws.cloudfoundry_service_instance.db: Refreshing state... [id=95ee9fd3-6b03-4da1-b8fe-86357c246144]
Here we see that the plans are identical except for the order in which the "Refreshing state..." messages were generated. This appears to be non-deterministic behavior from Terraform, so we think that these messages should not be considered when diffing plans; only the output following the blank line, which starts with "Terraform used...".
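A sketch of the filtering we have in mind (a hypothetical helper, not the action's actual code): drop the "Refreshing state..." lines before comparing, so only the real plan output is diffed:

```python
def strip_refresh_lines(plan: str) -> str:
    """Remove the non-deterministically ordered 'Refreshing state...' lines
    so that two otherwise-identical plans compare equal."""
    return "\n".join(
        line for line in plan.splitlines()
        if "Refreshing state..." not in line
    )
```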
I'm trying to use this action with a terraform project that imports its submodules from another (private) repository.
This is the type of error I'm getting:
Error: Failed to download module
Could not download module "database" (../../layers/data/main.tf:1) source code
from "[email protected]:veedstudio/veed-terraform-modules.git?ref=master": error
downloading
'ssh://[email protected]/veedstudio/veed-terraform-modules.git?ref=master':
/usr/bin/git exited with 128: Cloning into
'/github/home/.dflook-terraform-data-dir/modules/data.database'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Yet if I run git clone before this step, it works as expected, and the action I use beforehand adds the correct known_hosts values for github.com. Here is the relevant snippet from the workflow I'm writing:
- name: Set up access key for terraform_modules repo
  uses: webfactory/[email protected]
  with:
    ssh-private-key: ${{ secrets.GH_SSH_PRIVATE_KEY }}
- name: Debug
  run: git clone ssh://[email protected]/veedstudio/veed-terraform-modules.git && ls
- name: terraform plan
  uses: dflook/terraform-plan@v1
  with:
    path: environments/dev
Based on the logs, it looks like this action uses a different home directory and runs inside a docker container, which may be why it doesn't pick up the SSH setup.
I assume fixing this may require changes to the action, I'd just like some help about how to go about this.
Thank you for this super handy action library.
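If I've read the README correctly, the actions accept the key themselves through a TERRAFORM_SSH_KEY environment variable for private git module sources, which would avoid relying on the runner's SSH agent setup. A sketch, reusing the secret name from above:

```yaml
- name: terraform plan
  uses: dflook/terraform-plan@v1
  env:
    # assumption: the action installs this key inside its container for ssh:// sources
    TERRAFORM_SSH_KEY: ${{ secrets.GH_SSH_PRIVATE_KEY }}
  with:
    path: environments/dev
```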
Firstly, thank you! I'm new to TF and have struggled with automation so far. This is so much of an upgrade over all the other automation approaches I've looked at: Azure Pipelines, Terraform Cloud, Atlantis. Very, very happy to have discovered this <3
I have modules in the same project as my main module (I'll likely move them out eventually, but I find it much easier to iterate this way), e.g.
module "resource_group_service_principal" {
  source = "../modules/azure-service-principal"
}
But this fails on the plan step:
Error: Unreadable module directory
Unable to evaluate directory symlink: lstat ../modules: no such file or directory
Going to have a crack at fixing this myself, but thought I'd raise it for reference in case anyone has a quick solution.
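My working theory (an assumption): only the checked-out workspace is mounted inside the action's container, so a ../modules that resolves outside the path given to the action doesn't exist there. If the modules directory is committed in the same repository, pointing the action at a subdirectory of the checkout should keep ../modules inside the mount. A sketch, with an assumed repo layout of main/ and modules/ side by side:

```yaml
- name: Checkout
  uses: actions/checkout@v2
- name: terraform plan
  uses: dflook/terraform-plan@v1
  with:
    # assumed layout: ./main is the root module, ./modules holds shared modules,
    # so ../modules resolves inside the mounted checkout
    path: main
```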
Are lines matching ^\s*?# included in the diff between the plan and apply actions?
I suspect ignoring these "comment lines" during the diff would be a good solution.
It looks like something changed under the hood in terraform's output between my plan and apply. My hunch is that the # (\d+ unchanged \S+ hidden) comments are included in the diff, causing the apply to fail.
In plan output I see:
# (32 unchanged attributes hidden)
~ scaling_configuration {
~ seconds_until_auto_pause = 300 -> 7200
~ timeout_action = "ForceApplyCapacityChange" -> "RollbackCapacityChange"
# (3 unchanged attributes hidden)
}
}
In apply output I see:
# (33 unchanged attributes hidden)
~ scaling_configuration {
~ seconds_until_auto_pause = 300 -> 7200
~ timeout_action = "ForceApplyCapacityChange" -> "RollbackCapacityChange"
# (3 unchanged attributes hidden)
}
}
The apply fails without a clear reason. The approved plan comment says:
Plan not applied in Apply #8 (Plan has changed)
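A sketch of the proposed fix (a hypothetical helper, not the action's actual code): strip the "# (N unchanged ... hidden)" summary lines, whose counts can drift between plan and apply even when the real changes are identical:

```python
import re

# matches e.g. "  # (32 unchanged attributes hidden)" or "# (2 unchanged blocks hidden)"
UNCHANGED_RE = re.compile(r"^\s*# \(\d+ unchanged \S+ hidden\)$")

def strip_unchanged_comments(plan: str) -> str:
    """Remove terraform's unchanged-count summary comments before diffing plans."""
    return "\n".join(
        line for line in plan.splitlines()
        if not UNCHANGED_RE.match(line)
    )
```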