hashicorp / terraform-provider-azurerm

Terraform provider for Azure Resource Manager

Home Page: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs

License: Mozilla Public License 2.0

Makefile 0.01% Go 99.89% Shell 0.09% HCL 0.01% CSS 0.01%
terraform terraform-provider azure azure-resource-manager

terraform-provider-azurerm's Introduction


Terraform Provider for Azure (Resource Manager)

The AzureRM Terraform Provider allows managing resources within Azure Resource Manager.

When using version 3.0 of the AzureRM Provider we recommend using Terraform 1.x (the latest version can be found here). While older versions of Terraform Core (0.12.x and later) remain compatible with v3.0 of the AzureRM Provider, support for versions prior to 1.0 will be removed in the next major release of the AzureRM Provider (v4.0).

Usage Example

# 1. Specify the version of the AzureRM Provider to use
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "=3.0.1"
    }
  }
}

# 2. Configure the AzureRM Provider
provider "azurerm" {
  # The AzureRM Provider supports authenticating via the Azure CLI, a Managed Identity
  # and a Service Principal. More information on the authentication methods supported by
  # the AzureRM Provider can be found here:
  # https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs#authenticating-to-azure

  # The features block allows changing the behaviour of the Azure Provider, more
  # information can be found here:
  # https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/features-block
  features {}
}

# 3. Create a resource group
resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

# 4. Create a virtual network within the resource group
resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  address_space       = ["10.0.0.0/16"]
}

Developing & Contributing to the Provider

The DEVELOPER.md file is a basic outline on how to build and develop the provider while more detailed guides geared towards contributors can be found in the /contributing directory of this repository.

terraform-provider-azurerm's People

Contributors

arcturuszhang, aristosvo, catriona-m, favoretti, hbuckle, jackofallops, jiaweitao001, katbyte, lonegunmanb, magodo, manicminer, mbfrahry, metacpp, ms-henglu, myc2h6o, neil-yechenwei, njucz, r0bnet, shu-ying789, sinbai, stack72, stephybun, teowa, tombuildsstuff, tracypholmes, wodansson, wuxu92, xiaxyi, yupwei68, ziyeqf


terraform-provider-azurerm's Issues

Locks in AzureRM

This issue was originally opened by @AMMullan as hashicorp/terraform#9768. It was migrated here as part of the provider split. The original body of the issue is below.


Hi,

I can't see any option for setting Locks on resources. For example, we want to have a Resource Group in Azure for networking components and have it Locked so that only Owners can manage it, but Terraform doesn't seem to have this feature.

REST API documentation is here: https://azure.microsoft.com/en-gb/documentation/articles/resource-group-lock-resources/

Terraform Version

Terraform v0.7.7

Affected Resource(s)

  • azurerm_resource_group
  • any other resource that has the Locks option

Expected Behavior

Create locks for whichever resources need them.
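
For reference, later versions of the provider added a dedicated lock resource that covers this request; a minimal sketch, assuming a resource group for the networking components (resource names here are illustrative, not from the original report):

# Sketch only: azurerm_management_lock did not exist at the time this issue
# was filed; it was added to the provider later.
resource "azurerm_resource_group" "network" {
  name     = "network-components"
  location = "West Europe"
}

resource "azurerm_management_lock" "network" {
  name       = "network-rg-lock"
  scope      = azurerm_resource_group.network.id
  lock_level = "CanNotDelete" # or "ReadOnly"
  notes      = "Only Owners should modify or delete networking resources"
}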

Azure: provider throws error with credentials provided via env-vars

This issue was originally opened by @alexsomesan as hashicorp/terraform#12977. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

v0.9.1

Affected Resource(s)

  • provider "azurerm"

Terraform Configuration Files

provider "azurerm" {
  subscription_id            = "00000000-0000-0000-0000-000000000000"
  client_id                  = "00000000-0000-0000-0000-000000000000"
  client_secret              = "00000000-0000-0000-0000-000000000000"
  tenant_id                  = "00000000-0000-0000-0000-000000000000"
  skip_provider_registration = "true"
}

resource "azurerm_resource_group" "tectonic_cluster" {
  location = "northeurope"
  name     = "cred-test-group"
}
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"

Debug Output

With credentials hard-coded in provider block:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ azurerm_resource_group.tectonic_cluster
    location: "northeurope"
    name:     "cred-test-group"
    tags.%:   "<computed>"


Plan: 1 to add, 0 to change, 0 to destroy.

Without hard-coded credentials (but exported from env-vars):

terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

Error running plan: 1 error(s) occurred:

* provider.azurerm: Unable to list provider registration status, it is possible that this is due to invalid credentials or the service principal does not have permission to use the Resource Manager API, Azure error: autorest#WithErrorUnlessStatusCode: POST https://login.microsoftonline.com/72f988bf-86f1-41af-91ab-2d7cd011db47%0D/oauth2/token?api-version=1.0 failed with 400 Bad Request: StatusCode=400

Expected Behavior

Terraform runs successfully with supplied credentials regardless if provided from env-vars or inlined as provider attributes.

Actual Behavior

Terraform behaves differently when the same credentials are provided in-line versus through env-vars: credentials that work correctly when set on the provider cause an error when supplied through env-vars.

Steps to Reproduce

Export credentials through environment variables as shown above. Run Terraform on the above template once with the credential parameters set in the provider block and once without them.

  1. terraform plan

Enhancement of azurerm_network_interface

This issue was originally opened by @carinadigital as hashicorp/terraform#9872. It was migrated here as part of the provider split. The original body of the issue is below.


I've found many issues with azurerm_network_interface. This ticket is so that I can report the issues individually but have a common place to discuss changes to azurerm_network_interface.

#9741 provider/azurerm: azurerm_network_interface does not update load_balancer_backend_address_pools_ids
#9873 Enhancement: Add import for azurerm_network_interface
#9874 Enhancement: Allow more than one IP Configuration on azurerm_network_interface
#9878 Can not attach azurerm_network_interface to two seperate loadbalancers
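
For context, more than one IP configuration per network interface is supported in later provider versions, with one configuration flagged as primary; a minimal sketch (the resource group and subnet references are assumed to exist elsewhere in the configuration):

resource "azurerm_network_interface" "example" {
  name                = "example-nic"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "primary"
    primary                       = true
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }

  ip_configuration {
    name                          = "secondary"
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}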

Terraform errantly reports need to update. State file out of sync with deployed resources when no modifications have been made.

This issue was originally opened by @jzampieron as hashicorp/terraform#12552. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.8.8

Affected Resource(s)

Please list the resources as a list, for example:

  • azurerm_virtual_machine

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

resource "azurerm_resource_group" "test" {
   name     = "${var.res_group}"
   location = "${var.azurerm_region}"
}

resource "azurerm_virtual_network" "test" {
  name                = "acctvn"
  address_space       = ["10.0.0.0/16"]
  location            = "${var.azurerm_region}"
  resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_subnet" "test" {
  name                 = "acctsub"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_network_name = "${azurerm_virtual_network.test.name}"
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_public_ip" "test" {
    name                         = "acceptanceTestPublicIp${count.index}"
    location                     = "${var.azurerm_region}"
    resource_group_name          = "${azurerm_resource_group.test.name}"
    public_ip_address_allocation = "dynamic"
    count                        = 2
    tags {
        environment = "${var.instance_env}"
    }
}

resource "azurerm_network_interface" "test" {
  name                = "acctni${count.index}"
  location            = "${var.azurerm_region}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  count               = 2
  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = "${azurerm_subnet.test.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${element( azurerm_public_ip.test.*.id, count.index )}"
  }
}

resource "azurerm_storage_account" "test" {
  name                = "${var.instance_name}tftestsa"
  resource_group_name = "${azurerm_resource_group.test.name}"
  location            = "${var.azurerm_region}"
  account_type        = "Standard_LRS"

  tags {
    environment = "${var.instance_env}"
  }
}

resource "azurerm_storage_container" "test" {
  name                  = "vhds"
  resource_group_name   = "${azurerm_resource_group.test.name}"
  storage_account_name  = "${azurerm_storage_account.test.name}"
  container_access_type = "private"
}

# Name for the OS disks.
# This is generated w/ a UUID b/c azure can't figure out how to reattach
# _OR_ create if not exists.
# This way, terraform will never delete a disk by accident.
data "template_file" "azurerm_vm_osdisk" {
  template = "acctvm%05d-osdisk-${uuid()}"
}

# Name for the OS disks.
# This is generated w/ a UUID b/c azure can't figure out how to reattach
# _OR_ create if not exists.
# This way, terraform will never delete a disk by accident.
data "template_file" "azurerm_vm_datadisk" {
  template = "acctvm%05d-datadisk-${uuid()}"
}

resource "azurerm_virtual_machine" "test" {
  name                  = "${format( "acctvm%05d", count.index )}"
  location              = "${var.azurerm_region}"
  resource_group_name   = "${azurerm_resource_group.test.name}"
  network_interface_ids = ["${element( azurerm_network_interface.test.*.id, count.index )}"]
  vm_size               = "Standard_A0"
  delete_os_disk_on_termination = true
  count                         = 2
  lifecycle {
     ignore_changes = [ "storage_os_disk", "storage_data_disk" ]
  }

  # Can use: az vm image list-publishers --location eastus2 to help here.
  storage_image_reference {
    publisher = "CoreOS"
    offer     = "CoreOS"
    sku       = "Stable"
    version   = "1235.9.0"
  }

  storage_os_disk {
    name          = "${ format( data.template_file.azurerm_vm_osdisk.rendered, count.index ) }"
    vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}/${ format( data.template_file.azurerm_vm_osdisk.rendered, count.index ) }.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  # Be really really careful here. If, for some reason Terraform has to nuke
  # and rebuild the VM... you will get a new volume by UUID. This is
  # goofy, but ON PURPOSE b/c Azure API doesn't understand CREATE IF NOT EXISTS,
  # otherwise ATTACH to the image.
  # It's setup so you could manually go back and recover the _old_
  # VHDs and reattach them to the vms using the VM count.index
  # if you really had to.
  # AKA: Efforts are made to Preserve the VHDs, even at the cost of
  # wasting space and/or having dangling VHDs in the storage account.
  storage_data_disk {
    name          = "${ format( data.template_file.azurerm_vm_datadisk.rendered, count.index ) }"
    vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}/${ format( data.template_file.azurerm_vm_datadisk.rendered, count.index ) }.vhd"
    disk_size_gb  = "512"
    # See: https://docs.microsoft.com/en-us/rest/api/compute/virtualmachines/virtualmachines-create-or-update#Anchor_2
    # There isn't an obvious "create if not exists, otherwise attach option. "
    create_option = "Empty"
    lun           = 0
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"
    # Base64 encoded ... This is an "Ignition" configuration file for coreos.
    custom_data    = "${base64encode( file( "config.ign" ) )}"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    environment = "${var.instance_env}"
  }
}

Expected Behavior

Terraform should report that the state file is in sync with the tf files.

Actual Behavior

terraform plan reports the need to update the resources.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply
  2. terraform plan -> Note that terraform still tries to report modifications.

Important Factoids

I believe this is related to the base64encode() function not storing the proper value in the state file for the custom_data element.

The state file shows the raw JSON content instead of the base64encoded string shown in the plan output.

It's uncertain what's actually running on the cluster, although I believe it's correct b/c the API call works and the CoreOS configuration is updated.
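
As an aside, on Terraform 0.12 and later the same encoding can be done with filebase64(), which reads and encodes the file in one step; a minimal, standalone sketch (the file name is taken from the configuration above):

locals {
  # These two expressions produce the same value on Terraform >= 0.12.
  custom_data_old = base64encode(file("${path.module}/config.ign"))
  custom_data_new = filebase64("${path.module}/config.ign")
}

output "custom_data_matches" {
  value = local.custom_data_old == local.custom_data_new
}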

Referencing ARM deployment outputs before the deployment has finished gives an error

This issue was originally opened by @StephenWeatherford as hashicorp/terraform#13437. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.9.3-dev (9f23779933474ac1e83679f47057b85c9071bee7+CHANGES)

Affected Resource(s)

  • azurerm_template_deployment

Terraform Configuration Files

resource "azurerm_resource_group" "test" {
  name     = "${var.resource_group_name}"
  location = "${var.location}"
}
 
resource "azurerm_storage_account" "test" {
  name     = "${var.stgaccount_name}"
  location = "${var.location}"
  account_type = "Standard_LRS"
  resource_group_name = "${var.resource_group_name}"
 
  depends_on = ["azurerm_template_deployment.test"]
  tags {
      metadata = "${azurerm_template_deployment.test.outputs.outputValue}"
  }
}
 
resource "azurerm_template_deployment" "test" {
  name                = "test"
  resource_group_name = "${azurerm_resource_group.test.name}"
 
  template_body = <<DEPLOY
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "outputs":{
    "outputValue":{
      "type":"string",
      "value" : "Output Value"
    }
  },
  "resources": [
  ]
}
DEPLOY
 
  deployment_mode = "Incremental"
}

Debug Output

https://gist.github.com/StephenWeatherford/bd52df7438b646916db3ee4a40ed2f4e

Expected Behavior

The plan should succeed.

Actual Behavior

  • azurerm_storage_account.test: 1 error(s) occurred:

  • azurerm_storage_account.test: Resource 'azurerm_template_deployment.test' does not have attribute 'outputs.outputValue' for variable 'azurerm_template_deployment.test.outputs.outputValue'

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply

Important Factoids

If you remove the output reference, apply, add back the reference and re-apply, it works.

References

I've broken this off from hashicorp/terraform#7353, as originally reported by @bpoland.
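
For reference, on later provider versions the deployment outputs are a map referenced with index syntax, and the reference itself establishes the dependency; a minimal sketch based on the configuration above (the storage account arguments reflect the current schema and are illustrative):

resource "azurerm_storage_account" "test" {
  name                     = "examplestgacct"
  location                 = azurerm_resource_group.test.location
  resource_group_name      = azurerm_resource_group.test.name
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    # The reference creates the dependency, so no explicit depends_on is needed.
    metadata = azurerm_template_deployment.test.outputs["outputValue"]
  }
}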

azurerm_virtual_machine_scale_set : Allow attaching data disks

This issue was originally opened by @shanielh as hashicorp/terraform#12713. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.8.8

Affected Resource(s)

  • azurerm_virtual_machine_scale_set

Description

I would like to create an Azure virtual machine scale set with attached data disks:

https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-attached-disks
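
Data disks were later added to the scale set resources; a minimal sketch using the current azurerm_linux_virtual_machine_scale_set resource (the resource group, subnet and SSH key referenced here are assumptions):

resource "azurerm_linux_virtual_machine_scale_set" "example" {
  name                = "example-vmss"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  sku                 = "Standard_F2"
  instances           = 2
  admin_username      = "adminuser"

  admin_ssh_key {
    username   = "adminuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  network_interface {
    name    = "example"
    primary = true

    ip_configuration {
      name      = "internal"
      primary   = true
      subnet_id = azurerm_subnet.example.id
    }
  }

  # The capability requested in this issue: attached managed data disks.
  data_disk {
    lun                  = 0
    caching              = "ReadWrite"
    disk_size_gb         = 64
    storage_account_type = "Standard_LRS"
  }
}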

Feature Request: `azurerm_service_principal`

This issue was originally opened by @colemickens as hashicorp/terraform#8094. It was migrated here as part of the provider split. The original body of the issue is below.


Feature request: Support creating AAD users and assigning RBAC roles.

Scenario: I want to automate creating Kubernetes clusters for Azure. We're doing this currently with Terraform + Ansible. It currently requires the user to provision a service account for the cluster manually and then feed those credentials into a config file that is used in a template_file data resource and then ultimately pushed into the VM and consumed by Ansible.

If I could have an azurerm_aad_serviceprincipal then I could automatically provision new service accounts, without needing to do it out of band. (Currently I'm considering using a local-exec provisioner to run azure-cli to create the SP and assign it permissions to the resource group that Terraform is deploying/creating).

Depends on: Azure/azure-sdk-for-go#378
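
For reference, this later became possible through the separate AzureAD provider combined with a role assignment; a rough sketch (argument names have shifted between azuread major versions, so treat this as an assumption to be checked against the version in use):

resource "azuread_application" "example" {
  display_name = "example-cluster"
}

resource "azuread_service_principal" "example" {
  # Older azuread releases call this argument application_id instead.
  client_id = azuread_application.example.client_id
}

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Contributor"
  principal_id         = azuread_service_principal.example.object_id
}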

azurerm fails on updates, need to destroy all infrastructure every time I change something.

This issue was originally opened by @noam87 as hashicorp/terraform#6735. It was migrated here as part of the provider split. The original body of the issue is below.


Hi, this config file was giving this error (and debug shown below):

Error applying plan:

1 error(s) occurred:

* azurerm_virtual_machine.core-build: Error waiting for Virtual Machine (cbvm) to become available: unexpected state 'Failed', wanted target '[Succeeded]'
  1. I got it to work by destroying then applying again.
  2. Next, I uncommented the "remote-exec" provisioner seen below, however I got all green with 0 changed.
  3. Finally, I destroyed and applied again, and it worked.

The error always happens on the create stage. The VM that changed is destroyed successfully (it seems), but when it tries to recreate it, I'll get the error.

So there's something going on when applying updates where only the VM needs to be recreated. So far I've had to destroy everything on every change I make, which is not sustainable for a production environment.

Terraform Version

0.6.16

Affected Resource(s)

azurerm_virtual_machine

Terraform Configuration Files

resource "azurerm_virtual_machine" "core-build" {
  name = "cbvm"
  location = "East US"
  resource_group_name = "${azurerm_resource_group.core-build.name}"
  network_interface_ids = ["${azurerm_network_interface.core-build.id}"]
  vm_size = "Standard_A0"

  storage_image_reference {
    publisher = "Canonical"
    offer = "UbuntuServer"
    sku = "14.04.2-LTS"
    version = "latest"
  }

  storage_os_disk {
    name = "myosdisk1"
    vhd_uri = "${azurerm_storage_account.core-build.primary_blob_endpoint}${azurerm_storage_container.core-build.name}/myosdisk1.vhd"
    caching = "ReadWrite"
    create_option = "FromImage"
  }

  os_profile {
    computer_name = "computer"
    admin_username = "admin"
    admin_password = "somepassword"
  }

  # Add SSH public keys from developers here
  os_profile_linux_config {
    disable_password_authentication = true

    ssh_keys {
      path = "/home/ekkoadmin/.ssh/authorized_keys"
      key_data = "${file("ssh_pub_keys/2014worktop.pub")}"
    }

    ssh_keys {
      path = "/home/ekkoadmin/.ssh/authorized_keys"
      key_data = "${file("ssh_pub_keys/2013tanktop.pub")}"
    }
  }

  connection {
    type = "ssh"
    host     = "${azurerm_public_ip.core-build.ip_address}"
    user     = "admin"
    private_key = "~/.ssh/id_rsa"
  }

  provisioner "file" {
    source = "bash_scripts/provision_build_vm.sh"
    destination = "/tmp/provision_build_vm.sh"
    connection {
      type = "ssh"
      host     = "${azurerm_public_ip.core-build.ip_address}"
      user     = "admin"
      private_key = "~/.ssh/id_rsa"
    }
  }

# provisioner "remote-exec" {
#   inline = [
#     "chmod +x /tmp/provision_build_vm.sh",
#     "/tmp/provision_build_vm.sh"
#   ]
#   connection {
#     host     = "${azurerm_public_ip.core-build.ip_address}"
#     user     = "admin"
#     key_file = "~/.ssh/id_rsa"
#   }
# }

  tags {
    environment = "build"
  }
}

Debug Output

https://gist.github.com/noam87/60bc0e429f08933f3fddd53caf8feade

Terraform does not handle 404 on Azure Resource Group.

This issue was originally opened by @clstokes as hashicorp/terraform#12826. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

v0.8.6

Affected Resource(s)

  • azurerm_resource_group

Terraform Configuration Files

resource "azurerm_resource_group" "main" {
  name     = "retrytest"
  location = "westus"
}

Output

network-azure $ terraform apply
module.vpc.azurerm_resource_group.main: Creating...
  location: "" => "westus"
  name:     "" => "vpc-foundation"
  tags.%:   "" => "<computed>"
Error applying plan:

1 error(s) occurred:

* azurerm_resource_group.main: Error reading resource group: ResourceGroupNotFound (404) - Resource group 'vpc-foundation' could not be found.

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
network-azure $

Expected Behavior

Terraform should treat the 404 as a transient error and backoff and retry.

Actual Behavior

apply failed.

Steps to Reproduce

Error only happens sporadically. Repeat the below until you experience the error.

  1. terraform apply && terraform destroy -force

Feature Request: HDinsight Cluster Deployment

This issue was originally opened by @retheshnair as hashicorp/terraform#11599. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,
I was trying to automate an HDInsight cluster deployment. I could not find a Terraform configuration that natively supports HDInsight cluster deployment. Currently we are deploying it via ARM template + Terraform based automation, which is not clean. It would be nice to have native HDInsight cluster deployment support in Terraform.

Enhancement: Add import for azurerm_network_interface

This issue was originally opened by @carinadigital as hashicorp/terraform#9873. It was migrated here as part of the provider split. The original body of the issue is below.


Resource azurerm_network_interface doesn't support import

Terraform Version

0.7.8

Affected Resource(s)

azurerm_network_interface

Debug Output

Error importing: 1 error(s) occurred:

* import azurerm_network_interface.test (id: /subscriptions/[SUBSCRIPTION_ID]/resourceGroups/acceptanceTestResourceGroup1/providers/Microsoft.Network/networkInterfaces/acceptanceTestNetworkInterface1): resource azurerm_network_interface doesn't support import

Unable to scale up Azure Container Service Agent pool

This issue was originally opened by @theintz as hashicorp/terraform#13100. It was migrated here as part of the provider split. The original body of the issue is below.


I am trying to use TF to manage my Azure resources, most importantly the Container Service using Kubernetes. Here is the relevant excerpt from the config file:

resource "azurerm_container_service" "dev-cs" {
  name                   = "devcontainers"
  location               = "${azurerm_resource_group.dev-rg-containers.location}"
  resource_group_name    = "${azurerm_resource_group.dev-rg-containers.name}"
  orchestration_platform = "Kubernetes"

  master_profile {
    count      = 1
    dns_prefix = "kube-master"
  }

  linux_profile {
    admin_username = "gesund"

    ssh_key {
      key_data = "${file("ssh-keys/key.pub")}"
    }
  }

  agent_pool_profile {
    name       = "dev-agent"
    count      = 2
    dns_prefix = "kube-agent"
    vm_size    = "Standard_A0"
  }

  service_principal {
    client_id     = "${var.dev-cs-client-id}"
    client_secret = "${var.dev-cs-client-secret}"
  }

  diagnostics_profile {
    enabled = false
  }
}

This works well and creates an instance of the ACS with two agent VMs running. Now I am trying to scale up the agent pool to 4 instances by changing the count to 4. terraform plan now wants to destroy the entire ACS:

➜  autodeploy-dev git:(master) ✗ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
[...]
-/+ azurerm_container_service.dev-cs
    agent_pool_profile.#:                                 "1" => "1"
    agent_pool_profile.1269455163.count:                  "" => "3"
    agent_pool_profile.1269455163.dns_prefix:             "" => "kube-agent" (forces new resource)
    agent_pool_profile.1269455163.fqdn:                   "" => "<computed>"
    agent_pool_profile.1269455163.name:                   "" => "dev-agent" (forces new resource)
    agent_pool_profile.1269455163.vm_size:                "" => "Standard_A0"
    agent_pool_profile.584274526.count:                   "2" => "0"
    agent_pool_profile.584274526.dns_prefix:              "kube-agent" => "" (forces new resource)
    agent_pool_profile.584274526.name:                    "dev-agent" => "" (forces new resource)
    agent_pool_profile.584274526.vm_size:                 "Standard_A0" => ""
    diagnostics_profile.#:                                "1" => "1"
    diagnostics_profile.734881840.enabled:                "false" => "false"
    diagnostics_profile.734881840.storage_uri:            "" => "<computed>"
    linux_profile.#:                                      "1" => "1"
    linux_profile.1875963313.admin_username:              "gesund" => "gesund"
    linux_profile.1875963313.ssh_key.#:                   "1" => "1"
    linux_profile.1875963313.ssh_key.3348459114.key_data: "ssh-rsa [...]" => "ssh-rsa [...]"
    location:                                             "westeurope" => "westeurope"
    master_profile.#:                                     "1" => "1"
    master_profile.1785126391.count:                      "1" => "1"
    master_profile.1785126391.dns_prefix:                 "kube-master" => "kube-master"
    master_profile.1785126391.fqdn:                       "" => "<computed>"
    name:                                                 "devcontainers" => "devcontainers"
    orchestration_platform:                               "Kubernetes" => "Kubernetes"
    service_principal.#:                                  "1" => "1"
    service_principal.3038453274.client_id:               "[...]" => "[...]"
    service_principal.3038453274.client_secret:           "<sensitive>" => "<sensitive>" (attribute changed)
    tags.%:                                               "0" => "<computed>"


Plan: 1 to add, 0 to change, 1 to destroy.

And running terraform apply just errors out:

➜  autodeploy-dev git:(master) ✗ terraform apply
azurerm_container_service.dev-cs: Destroying... (ID: /subscrip...ntainers)
azurerm_container_service.dev-cs: Destruction complete
azurerm_container_service.dev-cs: Creating...
[...]
Error applying plan:

1 error(s) occurred:

* azurerm_container_service.dev-cs: containerservice.ContainerServicesClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="DnsRecordInUse" Message="Provisioning of resource(s) for container service 'devcontainers' in resource group 'dev-containers' failed with errors: Resource type: Microsoft.Network/publicIPAddresses, name: k8s-master-ip-kube-master-2A2974EA, id: /subscriptions/[...]/resourceGroups/dev-containers/providers/Microsoft.Network/publicIPAddresses/k8s-master-ip-kube-master-2A2974EA, StatusCode: BadRequest, StatusMessage: \\n {\r\n  \"error\": {\r\n    \"code\": \"DnsRecordInUse\",\r\n    \"message\": \"DNS record kube-master.westeurope.cloudapp.azure.com is already used by another public IP.\",\r\n    \"details\": []\r\n  }\r\n}\r\n"

Further TF has now forgotten that the ACS instance is still alive (as verified in the Azure portal) and wants to entirely recreate it when I run terraform apply again.

Terraform Version

0.9.1

Affected Resource(s)

  • Azure Container Services

Terraform Configuration Files

see above

Debug Output

see above

Expected Behavior

Agent pool VM instances scaled up from 2 to 4.

Actual Behavior

TF crashes and "forgets" about ACS instance altogether.

Steps to Reproduce

see above

Feature Request: Azure App Gateway

This issue was originally opened by @tgtshanika as hashicorp/terraform#8670. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,

I have been trying to automate an Azure scale set deployment with App Gateway. I could not find a Terraform configuration to automate App Gateway. IMO, it would be great if you could add App Gateway automation support so that we can easily develop a solution to automate a complete deployment.

I referred to [1] for automating scale sets, but I couldn't find any parameter to configure App Gateway backend address pool IDs for a given scale set. Further, there is no config to automate scale set settings (autoscaling rules, etc.) for a given scale set.

Due to above limitations, we have to use template + terraform based automation, which is not clean.

[1] -https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_scale_sets.html

Timeout on provisioning using azurerm

This issue was originally opened by @girishramnani as hashicorp/terraform#9567. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,

I am trying to provision 4 servers using azurerm, and an I/O timeout occurs while the servers are being provisioned.
I have configured a remote state, and hence Terraform tries to destroy the infrastructure created in the past, so the error occurs even before it reaches the creation of the current infrastructure. This is the gist of the output: https://gist.github.com/d93bef6830d0cf6a5ad6b279750c4b56

One thing I noticed is that if I do not configure a remote state, the servers get provisioned and work nicely.

So after getting this I/O timeout error I try to deprovision (just so that Terraform gets back to a clean state), and in that case a connection timed out error occurs.
This is the gist of the output: https://gist.github.com/girishramnani/0e80c9ef7582e13f07cebc1438f5fdd2. I am running Terraform on an AWS instance, so this issue is not related to internet speed.

Terraform Version - 0.7.4

Expected behavior

The servers should be provisioned

VMSS - User Specified Images

This issue was originally opened by @cberner as hashicorp/terraform#13307. It was migrated here as part of the provider split. The original body of the issue is below.


It would be very useful to support managed disks with scale sets. This is done by omitting the "name", "osType", and "image" from the "osDisk" and instead providing an "imageReference" (https://github.com/Microsoft/azure-docs/blob/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-convert-template-to-md.md). To use it with a custom disk image you pass the resource identifier in the "Id" field of imageReference: https://docs.microsoft.com/en-us/rest/api/virtualmachinescalesets/create-or-update-a-set#imagereference

Terraform Version

0.8.7

Affected Resource(s)

Please list the resources as a list, for example:

  • azurerm_virtual_machine_scale_set

Terraform Configuration Files

I would expect it to work something like this:

storage_profile_os_disk {
    # name is optional
    caching = "ReadWrite"
    create_option = "FromImage"
    # image is optional
    # os_type is optional
}

storage_profile_image_reference {
    resource_id = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Compute/images/<IMAGE_NAME>.vhd"
}

azurerm_template_deployment fails to pass "array" type parameters properly

This issue was originally opened by @rtyler as hashicorp/terraform#11085. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

v0.8.0

Affected Resource(s)

  • azurerm_template_deployment

Terraform Configuration Files

Consider an ARM template with an "array" parameter, e.g.:

{
    "$schema": "http://schemas.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "documentDBs": {
            "type": "array"
        }
    },
resource "azurerm_template_deployment" "thing" {
    name                = "a-big-parameterized-arm-template"
    resource_group_name = "${azurerm_resource_group.things.name}"
    parameters          = {
        documentDBs = [
            "github-events",
        ],
    }
    deployment_mode     = "Incremental"
    template_body       = "${file("./arm_templates/the-template.json")}"
}

Output

Error running plan: 1 error(s) occurred:

* parameters: 1 error(s) decoding:

* '[documentDBs]' expected type 'string', got unconvertible type '[]interface {}'

Expected Behavior

I would have expected the array to be marshalled through to the template properly.
What should have happened?

azurerm_key_vault missing access policy for ApplicationID

This issue was originally opened by @razsha as hashicorp/terraform#12718. It was migrated here as part of the provider split. The original body of the issue is below.


Hi,

In Azure Key Vault there is the ability to set an access policy for an application; in order to do so, you need to set the application ID. This is supported by the Azure Go SDK:

https://github.com/Azure/azure-sdk-for-go/blob/ecf40e315d5ab0ca6d7b3b7f7fbb5c1577814813/arm/keyvault/models.go

Thanks,
Raz.
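
The standalone access policy resource in later provider versions exposes exactly this field; a minimal sketch, assuming an existing key vault and placeholder IDs:

resource "azurerm_key_vault_access_policy" "app" {
  key_vault_id = azurerm_key_vault.example.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = "00000000-0000-0000-0000-000000000000" # service principal object ID

  # The field requested in this issue: scope the policy to an application.
  application_id = "00000000-0000-0000-0000-000000000000"

  secret_permissions = ["Get", "List"]
}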

Current master crashes when importing an azurerm_virtual_machine

This issue was originally opened by @simonjefford as hashicorp/terraform#10312. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.8.0-dev (b335418d0dc396aeb0360fbf5af160fafcc6c9ee)

Affected Resource(s)

  • azurerm_virtual_machine

Terraform Configuration Files

https://www.dropbox.com/s/7joncaeuea4wvje/config.tar.gz.encrypted?dl=0

Debug Output

https://gist.github.com/simonjefford/f2e9cdc808f29a1e34546a38ebdb2962

Panic Output

https://gist.github.com/simonjefford/8c6646fcaa573dbaf8fce013f937f142

Expected Behavior

Import the azure RM based virtual machine into state.

Actual Behavior

It crashed due to the API not returning JSON containing an os_profile object as seen in the crash log. Changing the code to guard against resp.Properties.OsProfile being nil (around this code) resulted in a successful import (but obviously without any of the os_profile data).

Steps to Reproduce

  1. terraform import azurerm_virtual_machine.winslave /subscriptions/<redacted>/resourceGroups/jenkins-prodeng/providers/Microsoft.Compute/virtualMachines/prodeng-wslave1

Important Factoids

Also as seen in the crash log, the virtual machine has some extensions around backup and recovery, but I'm not sure that's relevant.

References

  • GH-10195 - the pull request that added the import capability to azurerm_virtual_machine

azurerm_subnet should have its gateway IP as an exported attribute

This issue was originally opened by @wendorf as hashicorp/terraform#9519. It was migrated here as part of the provider split. The original body of the issue is below.


According to the [Azure virtual network documentation], the gateway IP address of a subnet is predictable (e.g. if the subnet is 10.0.3.0/24, the gateway IP is 10.0.3.1). It is as simple as cidrhost(azurerm_subnet.my_subnet.address_prefix, 1).

It would be nice if this information were part of the built-in exported attributes for azurerm_subnet so that users of the resource can easily determine what the gateway is without needing to do the calculation themselves.
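
Until such an attribute exists, the calculation described above can simply be exposed as an output; a minimal sketch, assuming a subnet resource named my_subnet:

# The gateway is the first usable host address in the subnet's prefix.
output "my_subnet_gateway_ip" {
  value = cidrhost(azurerm_subnet.my_subnet.address_prefix, 1)
}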

Terraform Version

0.7.7

Affected Resource(s)

  • azurerm_subnet

Expected Behavior

The resource knows its own gateway IP

Actual Behavior

The resource does not know its gateway IP

azurerm_virtual_machine does not update state file if disks are renumbered

This issue was originally opened by @cchildress as hashicorp/terraform#9437. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

0.7.3 (waiting for 0.7.6 to come out due to [GH-9122])

Affected Resource(s)

azurerm_virtual_machine

Terraform Configuration Files

Before:

  storage_data_disk {
    name = "${var.node_name}_data_disk_premium_01"
    vhd_uri = "${var.node_data_storage_account_premium_blob_endpoint}${azurerm_storage_container.node_container_data_premium.name}/${var.node_name}_data_premium_01.vhd"
    create_option = "Empty"
    disk_size_gb = 1023
    lun = 0
  }
  delete_data_disks_on_termination = true

After:

  storage_data_disk {
    name = "${var.node_name}_data_disk_standard_01"
    vhd_uri = "${var.node_data_storage_account_standard_blob_endpoint}${azurerm_storage_container.node_container_data_standard.name}/${var.node_name}_data_standard_01.vhd"
    create_option = "Empty"
    disk_size_gb = 1023
    lun = 0
  }
  storage_data_disk {
    name = "${var.node_name}_data_disk_premium_01"
    vhd_uri = "${var.node_data_storage_account_premium_blob_endpoint}${azurerm_storage_container.node_container_data_premium.name}/${var.node_name}_data_premium_01.vhd"
    create_option = "Empty"
    disk_size_gb = 1023
    lun = 32
  }
  delete_data_disks_on_termination = true

Debug Output

~ module.<some_server>.azurerm_virtual_machine.node
    storage_data_disk.0.lun:     "32" => "0"
    storage_data_disk.0.name:    "<some_server>_data_disk_premium_01" => "<some_server>_data_disk_standard_01"
    storage_data_disk.0.vhd_uri: "<foo>" => "<bar>"
    storage_data_disk.1.lun:     "0" => "32"
    storage_data_disk.1.name:    "<some_server>_data_disk_standard_01" => "<some_server>_data_disk_premium_01"
    storage_data_disk.1.vhd_uri: "<bar>" => "<foo>"

Expected Behavior

This change is successfully applied to Azure, but does not appear to be recorded properly in the state file.

Actual Behavior

This shows up as a change to be made in every following terraform plan

Steps to Reproduce

  1. Set up a VM using the before layout.
  2. Change the Terraform config to match the after.
  3. Run a terraform apply and confirm you see the changes in the Azure portal
  4. Run terraform plan and the changes should be listed as pending still.

Important Factoids

I tried just editing the state file to see if it was something that could be manually corrected. Maybe I missed a spot, but I could never get it to see the change as completed.

Azure Official Example fails when augmenting it with a count logic to support VM scaling.

This issue was originally opened by @djsly as hashicorp/terraform#11821. It was migrated here as part of the provider split. The original body of the issue is below.


Hello,

I took the Azure example and added a count parameter to the objects that needed it. When I increase the count from 1 to 2, I get an error with the creation of the second VM: Terraform tries to recreate VM 0, and this fails with a disk error.

Terraform Version

Terraform v0.8.6

Affected Resource(s)

  • azurerm_virtual_machine
  • azurerm_storage_container
  • azurerm_storage_account

Possibly a core issue, since it looks similar to hashicorp/terraform#3449.

Terraform Configuration Files

variable "counts" {}

provider "azurerm" {
  subscription_id = "<removed>"
  client_id       = "<removed>"
  client_secret   = "<removed>"
  tenant_id       = "<removed>"
}

resource "azurerm_resource_group" "test" {
    name = "acctestrg"
    location = "West US"
}

resource "azurerm_virtual_network" "test" {
    name = "acctvn"
    address_space = ["10.0.0.0/16"]
    location = "West US"
    resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_subnet" "test" {
    name = "acctsub"
    resource_group_name = "${azurerm_resource_group.test.name}"
    virtual_network_name = "${azurerm_virtual_network.test.name}"
    address_prefix = "10.0.2.0/24"
}

resource "azurerm_network_interface" "test" {
    count = "${var.counts}"
    name = "acctni${count.index}"
    location = "West US"
    resource_group_name = "${azurerm_resource_group.test.name}"

    ip_configuration {
        name = "testconfiguration1"
        subnet_id = "${azurerm_subnet.test.id}"
        private_ip_address_allocation = "dynamic"
    }
}

resource "azurerm_storage_account" "test" {
    count = "${var.counts}"
    name = "accsai${count.index}"
    resource_group_name = "${azurerm_resource_group.test.name}"
    location = "westus"
    account_type = "Standard_LRS"

    tags {
        environment = "staging"
    }
}

resource "azurerm_storage_container" "test" {
    count = "${var.counts}"
    name = "vhds"
    resource_group_name = "${azurerm_resource_group.test.name}"
    storage_account_name = "${azurerm_storage_account.test.*.name[count.index]}"
    container_access_type = "private"
}

resource "azurerm_virtual_machine" "test" {
    count = "${var.counts}"
    name = "acctvm${count.index}"
    location = "West US"
    resource_group_name = "${azurerm_resource_group.test.name}"
    network_interface_ids = ["${azurerm_network_interface.test.*.id[count.index]}"]
    vm_size = "Standard_A0"

    storage_image_reference {
        publisher = "Canonical"
        offer = "UbuntuServer"
        sku = "14.04.2-LTS"
        version = "latest"
    }

    storage_os_disk {
        name = "myosdisk1"
        vhd_uri = "${azurerm_storage_account.test.*.primary_blob_endpoint[count.index]}${azurerm_storage_container.test.*.name[count.index]}/myosdisk1.vhd"
        caching = "ReadWrite"
        create_option = "FromImage"
    }

    os_profile {
        computer_name = "hostname${count.index}"
        admin_username = "testadmin"
        admin_password = "Password1234!"
    }

    os_profile_linux_config {
        disable_password_authentication = false
    }

    tags {
        environment = "staging"
    }
}

Debug Output

The terraform plan output
https://gist.github.com/djsly/639fc1b039db89fce5bcafe3fc53a165
The terraform apply output
https://gist.github.com/djsly/eaf7e1e3786915dd2a7e285db0f0b7c0

Expected Behavior

Only new resources should be created, while the existing resources remain untouched.

Actual Behavior

The VM0 object gets recreated and fails while trying to recreate the osdisk0 since it already exists.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. TF_VAR_count=1 terraform plan apply
  2. TF_VAR_count=2 terraform plan apply

References

Could be related to (unsure)

azurerm_container_service.agent_pool_profile is limited to 1

This issue was originally opened by @privatwolke as hashicorp/terraform#12652. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.8.8

Affected Resource(s)

  • azurerm_container_service

Terraform Configuration Files

resource "azurerm_container_service" "kubernetes" {
  name                   = "myacsname"
  location               = "West Europe"
  resource_group_name    = "someresourcegroup"
  orchestration_platform = "Kubernetes"

  master_profile {
    count      = 1
    dns_prefix = "xxxmaster"
  }

  linux_profile {
    admin_username = "xxxx"

    ssh_key {
      key_data = "${file("authorized_keys")}"
    }
  }

  agent_pool_profile {
    name       = "default"
    count      = 2
    dns_prefix = "xxxagent"
    vm_size    = "Standard_F1s"
  }

  agent_pool_profile {
    name       = "default2"
    count      = 2
    dns_prefix = "xxxagent2"
    vm_size    = "Standard_A0"
  }

  service_principal {
    client_id     = "xxx"
    client_secret = "xxx"
  }

  diagnostics_profile {
    enabled = false
  }
}

Output

azurerm_container_service.kubernetes: agent_pool_profile: attribute supports 1 item maximum, config has 2 declared

Expected Behavior

Multiple agent pool profiles should be supported.

Actual Behavior

Multiple agent pool profiles don't seem to be supported.
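
For reference, the container service resource was later superseded by azurerm_kubernetes_cluster, where additional pools are managed as separate resources; a minimal sketch of adding a second pool to an existing cluster (the cluster reference and sizes are illustrative):

resource "azurerm_kubernetes_cluster_node_pool" "default2" {
  name                  = "default2"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS2_v2"
  node_count            = 2
}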

Steps to Reproduce

  1. terraform plan

References

azurerm - error with terraform destroy

This issue was originally opened by @isamuelson as hashicorp/terraform#7396. It was migrated here as part of the provider split. The original body of the issue is below.


  • azurerm_storage_container.container.1: Error deleting storage container "eventstore1" from storage account "jetstorage1": storage: service returned error: StatusCode=412, ErrorCode=LeaseIdMissing, ErrorMessage=There is currently a lease on the container and no lease ID was specified in the request.
    RequestId:
    Time:2016-06-28T16:46:03.9674269Z, RequestId=e38d167a-001c-0044-675c-d1a4ea000000, QueryParameterName=, QueryParameterValue=

resource "azurerm_virtual_machine" "vm" {
count = "${var.count}"
name = "virtual-machine${count.index}"
location = "${azurerm_resource_group.compute.location}"
resource_group_name = "${azurerm_resource_group.compute.name}"
network_interface_ids = ["${element(azurerm_network_interface.nic.*.id, count.index)}"]
availability_set_id = "${azurerm_availability_set.as.id}"
vm_size = "Standard_DS2"

storage_image_reference {
    publisher = "Canonical"
    offer = "UbuntuServer"
    sku = "14.04.4-LTS"
    version = "14.04.201605160"
}

storage_os_disk {
    name = "osdisk"
    vhd_uri = "https://${var.storage_account_name}${count.index}.blob.core.windows.net/${var.name}${count.index}/osdisk.vhd"
    # vhd_uri = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/osdisk.vhd"
    caching = "ReadWrite"
    create_option = "FromImage"
}

storage_data_disk {
    name = "datadisk0"
    vhd_uri = "https://${var.storage_account_name}${count.index}.blob.core.windows.net/${var.name}${count.index}/datadisk0.vhd"
    create_option = "Empty"
    disk_size_gb = "${var.datadisk_size}"
    lun = 0
}

os_profile {
    computer_name = "${var.name}${count.index}"
    admin_username = "${var.admin_username}"
    admin_password = "${var.admin_password}"
}

os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      path = "/home/${var.ssh_username}/.ssh/authorized_keys"
      key_data = "${file("~/.ssh/id_rsa.pub")}"
    }
}

}

0.6.16 - azurerm - unable to obtain public IP

This issue was originally opened by @Tasquith as hashicorp/terraform#6634. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

0.6.16

Affected Resource(s)

azurerm_public_ip

Terraform Configuration Files

resource "azurerm_public_ip" "ap-service-discovery-server-0" {
    name = "${var.node_type}-0-public-ip"
    location = "${var.location}"
    resource_group_name = "${var.resource_group_name}"
    public_ip_address_allocation = "dynamic"
}

resource "azurerm_network_interface" "ap-service-discovery-server-0" {
    name = "${var.node_type}-0-nic"
    location = "${var.location}"
    resource_group_name = "${var.resource_group_name}"
    network_security_group_id = "${azurerm_network_security_group.service_discovery_server.id}"

    ip_configuration {
        name = "${var.node_type}-0-ip-config"
        subnet_id = "${element(split(",", var.subnet_id), 0)}"
        private_ip_address_allocation = "dynamic"
        public_ip_address_id = "${azurerm_public_ip.ap-service-discovery-server-0.id}"
    }
}

resource "azurerm_virtual_machine" "ap-service-discovery-server-0" {
    name = "${var.node_type}-0"
    resource_group_name = "${var.resource_group_name}"
    location = "${var.location}"
    vm_size = "${var.instance_type}"
    network_interface_ids = ["${azurerm_network_interface.ap-service-discovery-server-0.id}"]
    availability_set_id = "${azurerm_availability_set.ap-service-discovery-availability-set.id}"


    storage_os_disk {
        name = "${var.node_type}-0-osdisk"
        #Source VHD as reference
        image_uri = "${var.source_vhd_path}"
        #Destination vhd to create
        vhd_uri = "${var.vhd_storage_base_uri}${var.node_type}-0.vhd"
        create_option = "fromImage"
        os_type = "linux"
    }

    os_profile {
        computer_name = "${var.node_type}-0"
        admin_username = "${var.ssh_username}"
        admin_password = "${var.ssh_password}"
        # Custom data must be base64 and lt 87380 chars
        custom_data = "${base64encode(template_file.user_data_0.rendered)}"
    }

    os_profile_linux_config {
       disable_password_authentication = true
        ssh_keys {
          path = "/home/${var.ssh_username}/.ssh/authorized_keys"
          key_data = "${file("${var.ssh_key_path}")}"
        }
    }
}

output "public_ips" { value = "${azurerm_public_ip.ap-service-discovery-server-0.ip_address}" }

Expected Behavior

The public ip should be retrievable as per the docs.

Actual Behavior

  • Resource 'azurerm_public_ip.ap-service-discovery-server-0' does not have attribute 'ip_address' for variable 'azurerm_public_ip.ap-service-discovery-server-0.ip_address'

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Create a VM with a public IP, then try to access that public IP from another resource or as module output.
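
A dynamic public IP only receives an address once it is attached to a running NIC/VM, so the attribute is empty at creation time. One workaround is to read the address back through the azurerm_public_ip data source after the VM exists; a minimal sketch based on the names above (the data source assumes a newer provider than the 0.6.16 mentioned here):

data "azurerm_public_ip" "ap-service-discovery-server-0" {
  name                = "${azurerm_public_ip.ap-service-discovery-server-0.name}"
  resource_group_name = "${var.resource_group_name}"

  # Read the address only after the VM (and therefore the NIC attachment) exists.
  depends_on = ["azurerm_virtual_machine.ap-service-discovery-server-0"]
}

output "public_ips" {
  value = "${data.azurerm_public_ip.ap-service-discovery-server-0.ip_address}"
}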

AzureRM delegated user access?

This issue was originally opened by @glenjamin as hashicorp/terraform#12208. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

terraform 0.8.7

Affected Resource(s)

Azure Provider

I've been using terraform to configure a few things with the azurerm provider, which is broadly working well.

The part I'm not entirely keen on is that I have to create an Azure AD application with appropriate permissions to perform resource operations for each user who I want to have that ability - and, annoyingly, Azure apps can't be added to groups for permissions.

Clearly this isn't really terraform's fault. Azure AD applications are also allowed to run in a delegated resource mode, where they perform commands as the user in question - this is how the NodeJS, Python CLI apps and the Powershell Cmdlets work. The python code samples are relatively easy to follow here: https://github.com/AzureAD/azure-activedirectory-library-for-python

Currently terraform is effectively using the "Acquire Token with Client Credentials" method whereas the other CLI tools use "Acquire Token with device code".

Is this something that terraform could be extended to support?

Annoyingly there's no existing go code in the AzureAD org, but the pieces it uses all seem to be standard parts of OAuth2. The other downside is that the token only lasts for 1 hour, but does provide a refresh token that can be used to get a fresh one.

Perhaps an "easy" extension would be to allow terraform to accept the fully resolved azure bearer token, and then it would be possible for users to use their own mechanism to get and keep a valid bearer token if they wanted?

Intended method of configuring an Azure VM for WinRM?

This issue was originally opened by @DavidR91 as hashicorp/terraform#9961. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

0.7.8

Affected Resource(s)

  • azurerm_virtual_machine
  • azurerm_key_vault
  • chef

Terraform Configuration Files

N/A

Expected Behavior

It should be possible to provision an Azure VM using azurerm_virtual_machine and have it use a WinRM listener which is in a state for future provisioners (e.g. Chef) to use.

Actual Behavior

  • WinRM on most stock images is disabled by default
  • Even enabling it yields Basic + unencrypted by default
  • It is possible to enable HTTPS using winrm in the virtual machine resource. But, the URL is supposed to be a KeyVault URL (certificate_url)
  • Terraform can provision a keyvault (azurerm_key_vault) but it (from what I can tell) cannot add or modify this vault in any way (so even if you could dynamically do tls_self_signed_cert each time as the WinRM cert you cannot do anything with it)
  • WinRM certificates could be created ahead of time, but ideally the certificates are bound to the machine's name/hostname, which kind of prevents us from doing this

Overall

What is the intended method/use case for an HTTPS WinRM listener on Azure?

  • Are there add-to-keyvault methods I have missed (?)
  • Should there just be a single fixed certificate in a KeyVault (manually added ahead of time) and insecure specified on each connection?
  • Is there an obvious way to exploit additionalUnattendContent to do WinRM config steps without requiring a first logon?
  • Should I be dropping back to local-exec to create and upload a certificate specific to the host being provisioned? (Ideally no)
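
One pattern that answers the overall question, using the legacy azurerm_virtual_machine schema, is to reference a Key Vault certificate from both the VM's secrets and the WinRM listener. A partial sketch only, with the key vault, certificate and the remaining required VM arguments assumed to exist and omitted:

resource "azurerm_virtual_machine" "example" {
  # ... name, location, resource group, network interface, size, storage and
  #     os_profile arguments as in any Windows VM definition ...

  os_profile_secrets {
    source_vault_id = azurerm_key_vault.example.id

    vault_certificates {
      certificate_url   = azurerm_key_vault_certificate.winrm.secret_id
      certificate_store = "My"
    }
  }

  os_profile_windows_config {
    provision_vm_agent = true

    winrm {
      protocol        = "HTTPS"
      certificate_url = azurerm_key_vault_certificate.winrm.secret_id
    }
  }
}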

Feature Request: Auto Scale Settings

This issue was originally opened by @tonipepperoniuk as hashicorp/terraform#12889. It was migrated here as part of the provider split. The original body of the issue is below.


Would it be possible to include support for Azure VM auto scale rules in Terraform? I'm able to deploy scale sets but it would be excellent to be able to set autoscale rules using Terraform. I've seen this mentioned a couple of times with regards to App Gateway but I'd like to request general support for autoscale settings.
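
Autoscale rules later became available as a dedicated resource; a minimal sketch targeting an existing scale set with a fixed-capacity profile (names and capacities are illustrative, and metric-based rules can be added inside the profile):

resource "azurerm_monitor_autoscale_setting" "example" {
  name                = "example-autoscale"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  target_resource_id  = azurerm_virtual_machine_scale_set.example.id

  profile {
    name = "default"

    capacity {
      default = 2
      minimum = 2
      maximum = 10
    }
  }
}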

azurerm_route does not delete routes associated with the table when removed

This issue was originally opened by @geofffranks as hashicorp/terraform#11747. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade because your issue may have already been fixed.

terraform v0.8.5

Affected Resource(s)

Please list the resources as a list, for example:

  • azurerm_route

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

resource "azurerm_route_table" "external" {
    name = "${var.resource_group_name}-external"
    location = "${var.azure_region}"
    resource_group_name = "${azurerm_resource_group.default.name}"

    route {
        name = "internet"
        address_prefix = "0.0.0.0/0"
        next_hop_type = "internet"
    }
}

Expected Behavior

When removing the route from the above definition, terraform should remove the route from the routing table in Azure.

Actual Behavior

Terraform says nothing will change during plan or apply phases

Steps to Reproduce


  1. Deploy above route table in sample config
  2. Delete the route from the config
  3. terraform plan + apply
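
A hedged workaround for this class of problem is to manage each route as a standalone azurerm_route resource instead of an inline route block, so removing it from the configuration produces an explicit delete; a sketch under that assumption:

resource "azurerm_route_table" "external" {
    name                = "${var.resource_group_name}-external"
    location            = "${var.azure_region}"
    resource_group_name = "${azurerm_resource_group.default.name}"
}

# Tracked as its own resource, so deleting this block deletes the route in Azure.
resource "azurerm_route" "internet" {
    name                = "internet"
    resource_group_name = "${azurerm_resource_group.default.name}"
    route_table_name    = "${azurerm_route_table.external.name}"
    address_prefix      = "0.0.0.0/0"
    next_hop_type       = "Internet"
}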

Connection isn't configured for provisioners to use inside of azurerm_virtual_machine

This issue was originally opened by @colemickens as hashicorp/terraform#7122. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform v0.7.0-rc1 (301da85f30239e87b30db254a25706a6d41c2522)

  • azurerm_virtual_machine
  • remote_exec

Expected Behavior

It would use the information given during configuration to SetConnInfo so that provisioners can connect automatically.

Actual Behavior

This isn't implemented and so it can't connect and just fails in an infinite loop.

azurerm_virtual_machine.master_vm (remote-exec): Connecting to remote host via SSH...
azurerm_virtual_machine.master_vm (remote-exec):   Host:
azurerm_virtual_machine.master_vm (remote-exec):   User: root
azurerm_virtual_machine.master_vm (remote-exec):   Password: false
azurerm_virtual_machine.master_vm (remote-exec):   Private key: false
azurerm_virtual_machine.master_vm (remote-exec):   SSH Agent: false
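
Until the resource sets connection info itself, a hedged workaround is to declare the connection block explicitly on the provisioner; the public IP resource and variables below are illustrative.

resource "azurerm_virtual_machine" "master_vm" {
  # ... VM arguments elided ...

  provisioner "remote-exec" {
    inline = ["echo connected"]

    # Explicit connection details, since the provider does not populate them.
    connection {
      type        = "ssh"
      host        = "${azurerm_public_ip.master.ip_address}"
      user        = "${var.admin_username}"
      private_key = "${file(var.ssh_private_key_path)}"
    }
  }
}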

azurerm_route_table/Subnet association is dropped on update.

This issue was originally opened by @TStraub-rms as hashicorp/terraform#11226. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

terraform version 0.8.4

Affected Resource(s)

  • azurerm_route_table

Terraform Configuration Files

variable "tenant_id" {}
variable "client_id" {}
variable "client_secret" {}
variable "subscription_id" {}
variable "location" {}
variable "module_name" {}
variable "vnet_address_space" {}
variable "stack_subnet1" {}
variable "tags" {
  description = "(Optional) Tags to be assigned to every resource in the module."
  type        = "map"
  default     = {}
}

provider "azurerm" {
  tenant_id       = "${var.tenant_id}"
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
}

resource "azurerm_resource_group" "module" {
  name     = "${var.module_name}-Network-Infrastructure"
  location = "${var.location}"
  tags     = "${var.tags}"
}

resource "azurerm_virtual_network" "module" {
  name                = "${var.module_name}-Vnet1"
  resource_group_name = "${azurerm_resource_group.module.name}"
  address_space       = ["${var.vnet_address_space}"]
  location            = "${var.location}"
  tags                = "${var.tags}"
}

resource "azurerm_subnet" "subnet1" {
  name                 = "${var.module_name}-SubNet1"
  resource_group_name  = "${azurerm_resource_group.module.name}"
  virtual_network_name = "${azurerm_virtual_network.module.name}"
  address_prefix       = "${var.stack_subnet1}"
  route_table_id       = "${azurerm_route_table.module.id}"
}

resource "azurerm_route_table" "module" {
  name                = "${var.module_name}-RouteTable"
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.module.name}"
  tags                = "${var.tags}"
}

resource "azurerm_route" "route_a" {
  name                = "Test Route A"
  resource_group_name = "${azurerm_resource_group.module.name}"
  route_table_name    = "${azurerm_route_table.module.name}"

  address_prefix         = "10.100.0.0/14"
  next_hop_type          = "VirtualAppliance"
  next_hop_in_ip_address = "10.10.1.1"
}

Expected Behavior

Updating the route table should keep or update the subnet association of the table.

Actual Behavior

The initial terraform apply will create all the resources without an issue.

When updating the route table (by updating the tags for instance), the subnet association is dropped.

Steps to Reproduce

  1. terraform apply a route table with at least one route.
  2. update the module to include new or different tags.
  3. terraform plan will show that the only modification is the tag update.
  4. terraform apply will update the tags, but remove the subnet association from the route table.
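
A hedged workaround, and the shape later provider versions settled on, is to manage the association as its own resource rather than through route_table_id on the subnet, so a route table update cannot silently drop it:

resource "azurerm_subnet_route_table_association" "subnet1" {
  subnet_id      = azurerm_subnet.subnet1.id
  route_table_id = azurerm_route_table.module.id
}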

Azure - MS documentation referenced implies that provision_vm_agent is true by default but this is not true within Terraform itself

This issue was originally opened by @DavidR91 as hashicorp/terraform#9995. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

0.7.8

Affected Resource(s):

  • azurerm_virtual_machine

Terraform Configuration Files

N/A

Expected Behavior

When provisioning an Azure virtual machine, the VM agent should be provisioned by default because:

  • This is consistent with Microsoft docs on how the parameter behaves [when not explicitly supplied]
  • The agent is required in order to do certain operations (e.g. install extensions) - but importantly it is essential for getting meaningful errors out of provisioning failures (esp. ones shown in the Azure portal)

Actual Behavior

The VM agent is not provisioned by default. Changing this property after the machine is first made is not permitted.

Steps to Reproduce

Perform apply with any .tf that uses azurerm_virtual_machine where os_profile_windows_config\provision_vm_agent has not been explicitly set to a value

(You can verify it lacks the agent by viewing 'Extensions' on the portal. There is a banner at the top stating "The virtual machine agent is not installed")

Important Factoids

The Terraform docs link to https://msdn.microsoft.com/en-us/library/mt163591.aspx#Anchor_2 wherein os_profile_windows_config\provision_vm_agent (provisionVMAgent in the MS docs) is specifically documented as

When this property is not specified in the request body, default behavior is to set it to true. This will ensure that VM Agent is installed on the VM so that extensions can be added to the VM later.

Terraform does not match this behaviour (despite referencing this documentation) presumably because an explicit false is supplied unless you override the provision_vm_agent property
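
Until the default changes, a hedged workaround is to opt in explicitly when the machine is first created (the property cannot be changed afterwards):

resource "azurerm_virtual_machine" "example" {
  # ... other VM arguments elided ...

  os_profile_windows_config {
    # Set explicitly, since Terraform otherwise sends false instead of omitting the field.
    provision_vm_agent = true
  }
}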

azurerm_storage_container.pcfsc: Error creating container

This issue was originally opened by @tariqsiddiqui as hashicorp/terraform#10636. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,


Terraform Version

Terraform v0.7.13

Affected Resource(s)


  • azurerm_storage_container

Terraform Configuration Files

pcf-azure-init.txt

Debug Output

https://gist.github.com/tariqsiddiqui/0955ef0e4a6a9e0c8ef7de22173e612b

Panic Output

I don't see any crash log

Expected Behavior

Should create Storage Container

Actual Behavior

azurerm_storage_container.pcfsc: Error creating container "pcfsc" in storage account "pcfsa": storage: service returned error: StatusCode=403, ErrorCode=AuthenticationFailed, ErrorMessage=Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:58b77730-0001-00b5-252a-52e0fd000000
Time:2016-12-09T14:41:30.4894704Z, RequestId=58b77730-0001-00b5-252a-52e0fd000000, QueryParameterName=, QueryParameterValue=

Steps to Reproduce

  1. terraform apply

Important Factoids

I am trying to create PCF prerequisite infrastructure on Azure


AzureRM provider doesn't support setting CORS for blob storage

This issue was originally opened by @joaocc as hashicorp/terraform#8825. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.7.3

Affected Resource(s)

azurerm_storage_account

Expected Behavior

There is currently no way to configure CORS via Terraform.

Reference here: https://msdn.microsoft.com/en-us/library/azure/dn535601.aspx

This means that storage accounts created via terraform cannot be used for things as simple as serving images or js scripts to be used in a website.

Actual Behavior

Cloud storage doesn't allow requests from a different origin.
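
For reference, later versions of the provider expose CORS rules on the storage account through a blob_properties block; a hedged sketch with illustrative values:

resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  blob_properties {
    # Allow a website on another origin to fetch blobs (e.g. images, JS).
    cors_rule {
      allowed_origins    = ["https://www.example.com"]
      allowed_methods    = ["GET", "HEAD"]
      allowed_headers    = ["*"]
      exposed_headers    = ["*"]
      max_age_in_seconds = 3600
    }
  }
}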

azurerm_storage_share should return the key\password

This issue was originally opened by @holmesb as hashicorp/terraform#13093. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

v0.9.1

Affected Resource(s)

azurerm_storage_share

Expected Behavior

Some way to access the key\password needed for mapping. In other words, a "key" export attribute is required in addition to the existing id and url attributes.

Actual Behavior

Can't programmatically access the key\password needed for mapping\accessing a share. Hence mapping (net use ...) must be a manual task.
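
The credential used when mapping a file share is the storage account access key, which the provider already exposes on azurerm_storage_account; a hedged sketch of surfacing it next to the share:

resource "azurerm_storage_share" "example" {
  name                 = "example-share"
  storage_account_name = "${azurerm_storage_account.example.name}"
  quota                = 50
}

output "share_mapping_key" {
  # net use against <account>.file.core.windows.net\example-share needs the account key.
  value     = "${azurerm_storage_account.example.primary_access_key}"
  sensitive = true
}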

azurerm - storage account resource has no support for custom domains

This issue was originally opened by @Tasquith as hashicorp/terraform#7800. It was migrated here as part of the provider split. The original body of the issue is below.


I've been looking at the storage account resource and I can't seem to find a way to add a custom domain name, which is supported by the UI.

I've had a look at the API reference, and have found that the required functionality is there (customDomain and useSubDomainName parameters). Are there any plans to implement this?

https://msdn.microsoft.com/en-us/library/azure/mt163564.aspx

Thanks,
Tom
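
For reference, the storage account resource did later grow a custom_domain block mapping to the customDomain/useSubDomainName parameters referenced above; a hedged sketch:

resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  custom_domain {
    name          = "blobs.example.com"
    # use_subdomain opts into the indirect (CNAME-based) validation flow.
    use_subdomain = false
  }
}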

Azure RM VM with Subnets issue

This issue was originally opened by @dennisrd587 as hashicorp/terraform#10744. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,


Terraform Version

Terraform v0.7.13

Affected Resource(s)

azurerm_subnet


Terraform Configuration Files


Panic Output

I've attached the crash log

Expected Behavior

I'm attempting to build out a small environment in Azure:

  • VNet with subnets
  • resource group
  • storage account
  • Windows server

Actual Behavior

I've tried to work around this by commenting out the subnets, and it seems to get past terraform plan, but when I uncomment them, I get the crash log.

Steps to Reproduce

  1. terraform plan

Important Factoids

Trying to build in an Azure RM EA subscription

References

crash.log.txt: https://github.com/hashicorp/terraform/files/653739/crash.log.txt

azurerm_virtual_machine_extension cannot update/create

This issue was originally opened by @TamasSzerb as hashicorp/terraform#10443. It was migrated here as part of the provider split. The original body of the issue is below.


Hello,

Terraform Version

Affected Resource(s)

azurerm_virtual_machine_extension.xxx: Creating...
location: "" => "westus"
name: "" => "docker-engine"
publisher: "" => "Microsoft.OSTCExtensions"
resource_group_name: "" => "xxx"
settings: "" => " {\n "commandToExecute": "DEBIAN_FRONTEND=noninteractive apt install -qy docker.io"\n }\n"
tags.%: "" => "1"
tags.environment: "" => "dev"
type: "" => "CustomScriptForLinux"
type_handler_version: "" => "1.2"
virtual_machine_name: "" => "xxx"
azurerm_virtual_machine_extension.xxx: Still creating... (10s elapsed)
azurerm_virtual_machine_extension.xxx: Still creating... (20s elapsed)
azurerm_virtual_machine_extension.xxx: Still creating... (30s elapsed)

and it never ends or times out. From my experience, if the command is very short (e.g. hostname), the extension is created when provisioning a new VM in a new environment.

Terraform Configuration Files

# https://github.com/Azure/custom-script-extension-linux
resource "azurerm_virtual_machine_extension" "xxx" {
    name = "docker-engine"
    location = "West US"
    resource_group_name = "${azurerm_resource_group.xxx.name}"
    virtual_machine_name = "${azurerm_virtual_machine.xxx.name}"
    publisher = "Microsoft.OSTCExtensions"
    type = "CustomScriptForLinux"
    type_handler_version = "1.2"

    settings = <<EOF
    {
        "commandToExecute": "DEBIAN_FRONTEND=noninteractive apt install -qy docker.io"
    }
EOF

    tags {
        environment = "dev"
    }
}

Expected Behavior

Create/modify the OS Extension on Azure

Actual Behavior

It never ends, times out, or reports that the object does not exist

Steps to Reproduce

  1. terraform apply

Important Factoids

Microsoft Azure in ARM (new) mode

Thanks,

Tamas
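
As an aside on the configuration above: with current Terraform syntax the settings payload can be built with jsonencode() instead of a hand-escaped heredoc, which removes one source of malformed-JSON failures (it does not address the hang itself); a hedged sketch against the current provider schema:

resource "azurerm_virtual_machine_extension" "xxx" {
  name                 = "docker-engine"
  virtual_machine_id   = azurerm_virtual_machine.xxx.id
  publisher            = "Microsoft.OSTCExtensions"
  type                 = "CustomScriptForLinux"
  type_handler_version = "1.2"

  # jsonencode() produces valid JSON without manual quoting or escaping.
  settings = jsonencode({
    commandToExecute = "DEBIAN_FRONTEND=noninteractive apt install -qy docker.io"
  })

  tags = {
    environment = "dev"
  }
}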

azurerm_dns_srv_record should support dynamic count or list of targets

This issue was originally opened by @hh as hashicorp/terraform#13386. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

0.9.2 / current stable

Affected Resource(s)

  • azurerm_dns_srv_record

Terraform Configuration Files

Creating A records via azurerm_dns_a_record with count of 3 results in multiple records:

resource "azurerm_dns_a_record" "A-point-to-three-ips" {
  name = "mynodes"
  zone_name = "myzone"
  resource_group_name = "mygroup"
  ttl = "300"
  # records = [ "${ var.mynode-ips }" ]
  # aka:
  records = [
    "10.10.10.1",
    "10.10.10.2",
    "10.10.10.3"
  ]
}

When we query the A record the DNS server for the zone responds with the IP list:

mynodes.myzone => [ 10.10.10.1, 10.10.10.2, 10.10.10.3]

In order to have a single SRV record with multiple entries we should somehow programmatically provide azurerm_dns_srv_record with a list.

We tried count with SRV records, but because the SRV name does not change, only a single record/entry is created (the last one in the loop).

resource "azurerm_dns_srv_record" "SRV-point-to-three-records" {
  name = "_mysrv._tcp"
  zone_name = "myzone"
  resource_group_name = "mygroup"
  ttl = "300"
  count = "3"

  record {
    priority = 0
    weight = 0
    port = 2379
    target = "srv${count.index}.myzone"
  }
}

Using dig at this point for _mysrv._tcp.myzone results in only one record.

_mysrv._tcp.myzone => [ srv3.myzone ]

We should support a list for record targets or support count within the record definition:

resource "azurerm_dns_srv_record" "SRV-count-to-three" {
  name = "_mysrv._tcp"
  zone_name = "myzone"
  resource_group_name = "mygroup"
  ttl = "300"

  records {
    priority = 0
    weight = 0
    port = 2379

    ### we could support count here
    count = 3
    ### and use count in target
    target = "srv${count.index}.mytld"


    #### or we support a list via targets
    targets = [
       "srv1.mytld",
       "srv2.mytld",
       "srv3.mytld"
    ]
  }
}

Expected Behavior

To provision SRV records for high service availability we need to add multiple records per azurerm_dns_srv_record pointing to dynamic VMs using count or supplying a list of targets.

_mysrv._tcp.myzone => [ srv1.myzone, srv2.myzone, srv3.myzone ]

Actual Behavior

When using count, only one SRV record survives pointing to the last one looped through.

_mysrv._tcp.myzone => [ srv3.myzone ]

Steps to Reproduce


  1. Use the SRV-point-to-three-records hcl tf config
  2. terraform apply
  3. dig @azure-dns-server-for-zone-ip _mysrv._tcp.myzone will return a single entry
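
For what it's worth, a single azurerm_dns_srv_record resource can hold several record blocks, which gives the multi-target record requested here without count; a hedged sketch:

resource "azurerm_dns_srv_record" "mysrv" {
  name                = "_mysrv._tcp"
  zone_name           = "myzone"
  resource_group_name = "mygroup"
  ttl                 = 300

  # One record block per target; all are published under the same SRV name.
  record {
    priority = 0
    weight   = 0
    port     = 2379
    target   = "srv1.myzone"
  }

  record {
    priority = 0
    weight   = 0
    port     = 2379
    target   = "srv2.myzone"
  }

  record {
    priority = 0
    weight   = 0
    port     = 2379
    target   = "srv3.myzone"
  }
}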

Cannot authenticate to Azure

This issue was originally opened by @030 as hashicorp/terraform#10550. It was migrated here as part of the provider split. The original body of the issue is below.


I have followed these instructions to authenticate to Azure and configured the following:

provider "azurerm" {
  subscription_id = ""
  client_id       = ""
  client_secret   = ""
  tenant_id       = ""
}

resource "azurerm_resource_group" "example" {
  name = "exampleGroup"
  location = "westus"
}

but it results in:

Error refreshing state: 1 error(s) occurred:

* Credentials for accessing the Azure Resource Manager API are likely to be incorrect, or
  the service principal does not have permission to use the Azure Service Management
  API.

when terraform plan or terraform apply is issued.

Terraform version

Terraform v0.7.13

OS

Centos 7

http://serverfault.com/questions/818406/how-to-connect-to-azure-using-terraform

Authentication was successful when a subscription file was used. However, the actions were executed on the Classic portal instead of the new one. The following snippet created a network in the Classic Portal.

provider "azure" {
  publish_settings = "${file("file.publishsettings")}"
}

resource "azure_virtual_network" "default" {
  name = "vNet01"
  address_space = ["10.0.0.1/24"]
  location = "North Europe"
  subnet {
  name = "Subnet1"
  address_prefix = "10.0.0.1/25"
 }
}  

Enhancement: Allow more than one IP Configuration on azurerm_network_interface

This issue was originally opened by @carinadigital as hashicorp/terraform#9874. It was migrated here as part of the provider split. The original body of the issue is below.


Allow one or more IP configurations to be attached to an azurerm_network_interface.

Terraform Version

0.7.8

Affected Resource(s)

azurerm_network_interface

Important Factoids

Some challenges that I've seen.

  1. TODO notice in code for the ip configuration loop https://github.com/hashicorp/terraform/blob/master/builtin/providers/azurerm/resource_arm_network_interface_card.go#L266
  2. Schema for top level private_ip_address can only store one value.
    https://github.com/hashicorp/terraform/blob/master/builtin/providers/azurerm/resource_arm_network_interface_card.go#L54
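
For reference, the network interface resource did later accept several ip_configuration blocks, with exactly one marked primary; a hedged sketch:

resource "azurerm_network_interface" "example" {
  name                = "example-nic"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "primary"
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
    primary                       = true
  }

  ip_configuration {
    name                          = "secondary"
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}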

azurerm_template_deployment parameters from acs-engine generated parameters file

This issue was originally opened by @Globegitter as hashicorp/terraform#13241. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,


Terraform Version

Terraform version: 0.9.2 6365269541c8e3150ebe638a5c555e1424071417+CHANGES
Go runtime version: go1.8

Affected Resource(s)


  • azurerm_template_deployment

Terraform Configuration Files

resource "azurerm_template_deployment" "test" {
  name                = "${var.cluster_name}"
  resource_group_name = "${var.resource_group_name}"

  template_body = "${file("azuredeploy.json")}"
  parameters = "${file("azuredeploy.parameters.json")}"
  deployment_mode = "Incremental"
  depends_on      = ["null_resource.run_acs_engine"]
}

Debug Output

1 error(s) occurred:

* azurerm_template_deployment.test: parameters: should be a map

Expected Behavior

Terraform should have created that deployment

Actual Behavior

It failed

Steps to Reproduce


  1. Download the files from: https://gist.github.com/Globegitter/4ea17f9ed24ebdb38ce0a725fde01795
  2. terraform apply

Important Factoids

I am using acs-engine to create the template file as shown in this article: http://danielrhoades.com/2017/03/20/docker-azure-terra/

When I manually converted azuredeploy.parameters.json into a map, it seemed to have issues with agentpool1Count = 1 (maybe because it is an int?); once I removed that parameter and let the template use its default, the deployment started creating.
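
The "parameters: should be a map" error is consistent with the schema: parameters takes a flat map of string values, not the raw ARM parameters file (which wraps each value in a parameters object). A hedged sketch of the map form, plus the parameters_body argument that later provider versions added for raw JSON parameters:

resource "azurerm_template_deployment" "test" {
  name                = var.cluster_name
  resource_group_name = var.resource_group_name
  deployment_mode     = "Incremental"
  depends_on          = [null_resource.run_acs_engine]

  template_body = file("azuredeploy.json")

  # Flat map of strings; numeric values such as agentpool1Count are passed as strings.
  parameters = {
    agentpool1Count = "1"
  }

  # Later provider versions also accept parameters_body, using the ARM-style
  # { "name": { "value": ... } } shape instead of the flat map above, e.g.:
  # parameters_body = jsonencode({ agentpool1Count = { value = 1 } })
}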

public ip not getting allocated in azure when running the creation of multiple host using count

This issue was originally opened by @retheshnair as hashicorp/terraform#11718. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,

Terraform Version

terraform -v
Terraform v0.8.5

Affected Resource(s)


  • azurerm_public_ip


Terraform Configuration Files

# create public IP
resource "azurerm_public_ip" "bastionserverpubip" {
    count               = "${var.azure_bastionservercount}"
    name                = "${var.azure_bastionserver}${format("pubip%02d", count.index + 1)}"
    location            = "${var.azure_location01}"
    resource_group_name = "${azurerm_resource_group.resourcegroup01.name}"
    public_ip_address_allocation = "static"
    domain_name_label = "${var.azure_bastionserver}${format("%02d", count.index + 1)}-${var.azure_project01_name}"
    reverse_fqdn = "${var.azure_bastionserver}${format("%02d", count.index + 1)}.rethesh.in"

    tags {
             environment = "${var.azure_project01_name}"
    }
}



resource "azurerm_availability_set" "bastionavailabilityset" {
     name = "${var.azure_bastionserver}AvailabilitySet01"
     location = "${var.azure_location01}"
     resource_group_name = "${azurerm_resource_group.resourcegroup01.name}"
     tags {
         environment = "${var.azure_project01_name}"
     }
 }



#Network Interface
resource "azurerm_network_interface" "bastion_netinterface" {
    count               = "${var.azure_bastionservercount}"
    name                = "${var.azure_bastionserver}${format("nic%02d", count.index + 1)}"
    location            = "${azurerm_resource_group.resourcegroup01.location}"
    resource_group_name = "${azurerm_resource_group.resourcegroup01.name}"
    network_security_group_id = "${azurerm_network_security_group.networksecuritygroup01.id}"

  ip_configuration {
        name            = "ipconfig1"
        subnet_id       = "${azurerm_subnet.vnet01_subnet0.id}"
        private_ip_address_allocation = "dynamic"
        public_ip_address_id = "${element(azurerm_public_ip.bastionserverpubip.*.id, count.index )}"

            }
  depends_on          = ["azurerm_public_ip.bastionserverpubip","azurerm_virtual_network.vnet01","azurerm_network_security_group.networksecuritygroup01"]
  tags {
    environment = "${var.azure_project01_name}"
  }
}
 
#Create VM
resource "azurerm_virtual_machine" "bastionserver_vms" {
    count               = "${var.azure_bastionservercount}"
    name                = "${var.azure_bastionserver}${format("%02d", count.index + 1)}"
    location            = "${azurerm_resource_group.resourcegroup01.location}"
    resource_group_name = "${azurerm_resource_group.resourcegroup01.name}"
    network_interface_ids = ["${element(azurerm_network_interface.bastion_netinterface.*.id, count.index)}"]
    vm_size = "${var.azure_bastion_vm_size}"
    availability_set_id = "${azurerm_availability_set.bastionavailabilityset.id}"
    delete_os_disk_on_termination = "${var.azure_bastionserver_os_disk_deletion}"
    delete_data_disks_on_termination = "${var.azure_bastionserver_data_disk_deletion}"


    storage_os_disk {
        os_type         = "${var.azure_os_type_lin}"
        name            = "${var.azure_bastionserver}${format("%02d_osdisk01", count.index + 1)}"
        vhd_uri         = "${azurerm_storage_account.storageaccount01.primary_blob_endpoint}${azurerm_storage_container.storagecontainer01.name}/${var.azure_bastionserver}${format("%02d_osdisk01.vhd", count.index + 1)}"
        caching         = "ReadWrite"
        create_option   = "FromImage"
        image_uri       = "${var.azure_custom_server_image01}"

    }

  #  storage_image_reference {
  #       publisher       = "${var.azure_os_publisher01}"
  #       offer           = "${var.azure_os_image_offer01}"
  #       sku             = "${var.azure_os_image_sku01}"
  #       version         = "${var.azure_image_version01}"
  #   }


  os_profile {
        computer_name   = "${var.azure_bastionserver}${format("%02d", count.index + 1)}"
        admin_username  = "${var.azure_os_admin_user}"
        admin_password  = "${var.azure_os_admin_password}"
    }
os_profile_linux_config {
  disable_password_authentication = true
   ssh_keys {
     path = "/home/${var.azure_os_admin_user}/.ssh/authorized_keys"
     key_data = "${file(var.azure_linux_ssh_key_pub)}"

   }
}
    depends_on          = ["azurerm_network_interface.bastion_netinterface", "azurerm_network_security_group.networksecuritygroup01","azurerm_storage_container.storagecontainer01","azurerm_availability_set.bastionavailabilityset"]
  tags {
    Name = "${var.azure_bastionserver}${format("%02d", count.index + 1)}"
    Environment = "${var.azure_environment}"
    Role        = "${var.azure_bastionserver_role}"
  }
   tags {
    environment = "${var.azure_project01_name}"
  }
}

output "bastionservers" {
  value = ["${azurerm_virtual_machine.bastionserver_vms.*.name}"]
}

output "bastionserver-publicip" {
  value =   ["${azurerm_public_ip.bastionserverpubip.*.ip_address}"]
}

output "bastionserver-dns" {
  value =   ["${azurerm_public_ip.bastionserverpubip.*.domain_name_label}"]
}

Debug Output

terraform : Error creating plan: 1 error(s) occurred:
At line:1 char:1

  • terraform apply
  •   + CategoryInfo          : NotSpecified: (Error creating plan: 1 error(s) occurred::String) [], RemoteException
      + FullyQualifiedErrorId : NativeCommandError

  • Resource 'azurerm_public_ip.bastionserverpubip' not found for variable
    'azurerm_public_ip.bastionserverpubip.ip_address'

Panic Output

No panic output

Expected Behavior

A public IP should be allocated for each host.

Actual Behavior

Public IPs are not allocated when the VM count is more than one.

Steps to Reproduce


  1. terraform apply


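
The "Resource ... not found for variable" error is the usual symptom of referencing a counted resource without an index: once count is set, any reference outside the splat form needs element() or an explicit index. A hedged illustration:

# Fails once count > 1:
#   value = "${azurerm_public_ip.bastionserverpubip.ip_address}"
# Index the resource instead:
output "first_bastionserver_publicip" {
  value = "${element(azurerm_public_ip.bastionserverpubip.*.ip_address, 0)}"
}

output "all_bastionserver_publicips" {
  value = ["${azurerm_public_ip.bastionserverpubip.*.ip_address}"]
}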

Azure Provider. A lease conflict occurs while using a custom image

This issue was originally opened by @nikitashalnov as hashicorp/terraform#11941. It was migrated here as part of the provider split. The original body of the issue is below.


Hello.

I just tried to play with terraform and create a test VM with the Linux Debian Jessie OS from a custom image. The custom image was captured from an existing virtual machine via the old Azure portal (https://manage.windowsazure.com).

But I'm getting an error:

I see that this image is in LEASE STATE Leased and LEASE STATUS Locked, but this doesn't affect anything, because I can create a VM with the Azure CLI as usual:
azure vm create mretailer-test-service jessie-template-2 nikita -l "West Europe" --ssh --no-ssh-password --ssh-cert /home/n.shalnov/azure_ssh.pem -w net -z ExtraSmall -v -n mretailer-test-service -b subnet-1 -S 10.11.11.68

Terraform successfully creates the Cloud Service (resource azure_hosted_service) and fails at the stage of creating the VM.

Here is more info.

Terraform Version

Terraform v0.8.6

Affected Resource

azure_instance

Terraform Configuration Files

# Configure the Azure Provider
provider "azure" {
  publish_settings = "${file("Azure.publishsettings")}"
  subscription_id = "XXXX"
}

resource "azure_hosted_service" "mret-test-service" {
    name = "mret-test-service"
    location = "West Europe"
    ephemeral_contents = false
    description = "Hosted service created by Terraform."
    label = "mr-test-01"
}

resource "azure_instance" "test-web-server" {
    name = "mret-test"
    hosted_service_name = "${azure_hosted_service.mret-test-service.name}"
    storage_service_name = "storage-account"
    image = "jessie-template-2"
    size = "ExtraSmall"
    location = "West Europe"
    username = "nikita"
    password = "XXX"
    virtual_network = "net"
    subnet = "subnet-1"
}

Here is my image:

n.shalnov@nikita-backupVB:~/azure/terraform$ azure storage blob show -a storage-account vm-images jessie-template-2.vhd -k {KEY}
info:    Executing command storage blob show
+ Getting storage blob information                                             
data:    Property       Value                                 
data:    -------------  --------------------------------------
data:    container      vm-images                             
data:    name           jessie-template-2.vhd                 
data:    blobType       PageBlob                              
data:    contentLength  32212255232                           
data:    contentType    application/octet-stream Charset=UTF-8
data:    contentMD5     undefined                             
info:    storage blob show command OK

n.shalnov@nikita-backupVB:~/azure/terraform$ azure vm image show jessie-template-2
info:    Executing command vm image show
+ Fetching VM image                                                            
data:    category "User"
data:    label "jessie-template-2"
data:    location 0 "West Europe"
data:    logicalSizeInGB 30
data:    mediaLinkUri "https://storage-account.blob.core.windows.net/vm-images/jessie-template-2.vhd"
data:    name "jessie-template-2"
data:    operatingSystemType "Linux"
data:    isPremium false
data:    iOType "Standard"
info:    vm image show command OK

It looks like a bug or maybe I'm wrong.
If you need more information, or something I have written is unclear, please just tell me.

Feature Request: Support for Azure Stack

This issue was originally opened by @Tasquith as hashicorp/terraform#7299. It was migrated here as part of the provider split. The original body of the issue is below.


Hi,

I'm looking into Azure Stack and terraform/packer at the moment. I've not found a way of configuring the API endpoint to access anything other than the default public cloud via these tools.

I can see that the azure-sdk-for-go supports this, I'm not sure about riviera though.

Azure/azure-sdk-for-go#330

Are there any timescales for adding configurable endpoint support? I understand Azure Stack is not generally available until November, but it would be nice to prove this against the technical preview.

Regards,

Tom
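
For reference, endpoint selection did become configurable on the AzureRM provider via the environment argument (Azure Stack itself ended up being served by a separate azurestack provider); a hedged sketch against a recent provider version:

provider "azurerm" {
  # Selects the cloud environment / ARM endpoints; "public" is the default.
  # Documented values include "usgovernment" and "china".
  environment = "usgovernment"

  features {}
}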

Create a virtual_network_gateway with azurerm

This issue was originally opened by @dpnl87 as hashicorp/terraform#8372. It was migrated here as part of the provider split. The original body of the issue is below.


I would like to have the possibility to create virtual network gateways with Terraform and the AzureRM provider. Currently there is no support for this, so we still need to perform manual actions in Azure.

It's used to link a network interface to an ExpressRoute.

If you need any additional information or require testing assistance, I'm more than happy to help out.
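
For reference, this did later arrive as azurerm_virtual_network_gateway; a hedged sketch of an ExpressRoute-type gateway (the gateway must live in a subnet named GatewaySubnet, and the circuit is then linked via a separate gateway connection resource):

resource "azurerm_virtual_network_gateway" "example" {
  name                = "example-er-gateway"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  type = "ExpressRoute"
  sku  = "Standard"

  ip_configuration {
    name                          = "gatewayConfig"
    public_ip_address_id          = azurerm_public_ip.example.id
    private_ip_address_allocation = "Dynamic"
    subnet_id                     = azurerm_subnet.gateway.id # the GatewaySubnet
  }
}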
