
flexibleenginecloud / terraform-provider-flexibleengine


Terraform flexibleengine provider

Home Page: https://www.terraform.io/docs/providers/flexibleengine/

License: Mozilla Public License 2.0

Makefile 0.04% Go 99.89% Shell 0.07%
flexibleengine terraform terraform-provider

terraform-provider-flexibleengine's People

Contributors

appilon, azrod, berendt, chengxiangdong, dependabot[bot], dupuy, edisonxiang, fatmcgav, freesky-edward, garyxia, ggiamarchi, grubernaut, jason-zhang9309, jrperritt, jtopjian, julienvey, khdegraaf, liwanting0517, mcanevet, mitchellh, niuzhenguo, radeksimko, sheile, shichangkuo, stack72, takaishi, zengchen1024, zhongjun2, zhukun-huawei, zippo-wang


terraform-provider-flexibleengine's Issues

AS Group : security group number is limited to only 1

Terraform Version

Terraform v0.11.7

Affected Resource(s)

flexibleengine_as_group_v1

Terraform Configuration Files

  scaling_group_name = "public_reverse_proxy"
  scaling_configuration_id = "${flexibleengine_as_configuration_v1.public_reverse_proxy.id}"
  desire_instance_number = 2
  min_instance_number = 0
  max_instance_number = 3
  networks          = [{id = "${flexibleengine_vpc_subnet_v1.subnet_internal.id}"}]
  security_groups   = [
    {id = "${flexibleengine_networking_secgroup_v2.public_reverse_proxy.id}"},
    {id = "${flexibleengine_networking_secgroup_v2.admin.id}"}
  ]
  vpc_id            = "${flexibleengine_vpc_v1.vpc.id}"
  lb_listener_id    = "${flexibleengine_elb_listener.listener_public_HTTP.id},${flexibleengine_elb_listener.listener_public_HTTPS.id}"
  delete_instances  = "yes"
}

Expected Behavior

Creation of an AS group

Actual Behavior

{"error":{"code":"AS.2025","message":"The number of security groups in the AS group exceeds the upper limit."}}
It seems that only one security group can be used per AS group. However, the documentation says "An array of one or more security group IDs to associate with the group".
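
Until the limit is lifted, keeping a single security group in the block should avoid the AS.2025 error; a minimal sketch derived from the configuration above (the opening line and resource names are assumptions, since the original snippet is truncated):

resource "flexibleengine_as_group_v1" "public_reverse_proxy" {
  scaling_group_name       = "public_reverse_proxy"
  scaling_configuration_id = "${flexibleengine_as_configuration_v1.public_reverse_proxy.id}"
  desire_instance_number   = 2
  min_instance_number      = 0
  max_instance_number      = 3
  networks                 = [{id = "${flexibleengine_vpc_subnet_v1.subnet_internal.id}"}]

  # Only one security group, to stay under the current AS group limit.
  security_groups = [
    {id = "${flexibleengine_networking_secgroup_v2.public_reverse_proxy.id}"}
  ]

  vpc_id           = "${flexibleengine_vpc_v1.vpc.id}"
  lb_listener_id   = "${flexibleengine_elb_listener.listener_public_HTTP.id},${flexibleengine_elb_listener.listener_public_HTTPS.id}"
  delete_instances = "yes"
}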

Add security group attribute when creating CCE2 Nodes

Terraform Version

Terraform v0.11.11

Affected Resource(s)

  • CCE2 Nodes auto-generated Security groups

Expected Behavior

Export a security group attribute (id/name) when creating CCE2 cluster nodes, so that the auto-generated security group can be retrieved.

Actual Behavior

No information about the nodes' security groups is exposed, either in resources or in data sources.

Steps to Reproduce

Deploy a CCE2 cluster with nodes

feature request : Specifying health port on Enhanced Load Balancer

Hello,

Currently it is not possible in the Terraform provider to specify a protocol_port for the health check (resource: flexibleengine_lb_monitor_v2).

It is possible in the GUI when creating a "Backend Server Group": https://docs.prod-cloud-ocb.orange-business.com/en-us/usermanual/elb/en-us_topic_0052569729.html

It is also documented in the API (section "Health Check / Configuring a Health Check"); I did not test it.

Could you please add this parameter to the Terraform resource? It would help a lot.

Terraform Version

Terraform v0.11.13

  • provider.flexibleengine v1.4.0

Affected Resource(s)

  • flexibleengine_lb_monitor_v2

Thanks
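
For reference, a flexibleengine_lb_monitor_v2 block as it can be written today (mirroring the monitor configuration shown further down this page); the health-check port would need a new, currently hypothetical argument:

resource "flexibleengine_lb_monitor_v2" "monitor" {
  pool_id        = "${flexibleengine_lb_pool_v2.pool.id}"
  type           = "HTTP"
  url_path       = "/"
  expected_codes = "200"
  delay          = 20
  timeout        = 10
  max_retries    = 5

  # Hypothetical argument requested by this issue; it does not exist in provider v1.4.0.
  # monitor_port = 8080
}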

EIP Binding on ECS is not working

Hi there,

Not sure if I'm doing something wrong, but my Terraform plan does not bind the EIP to my network interface. If I run terraform a second time it detects no change, and if I bind the EIP manually, Terraform also detects no change.

Terraform Version

Terraform v0.11.13
+ provider.flexibleengine v1.4.0

Affected Resource(s)

  • flexibleengine_vpc_eip_v1

Terraform Configuration Files

resource "flexibleengine_vpc_eip_v1" "eip_bastion" {
  publicip {
    type = "5_bgp"
    port_id = "${flexibleengine_compute_instance_v2.ecs_bastion.network.0.port}"
  }
  bandwidth {
    name = "bastion"
    size = 1
    share_type = "PER"
    charge_mode = "traffic"
  }
}

Does anyone have experience with this?

Best regards,

Adding endpoints to flexibleengine_cce_cluster_v3 Data source

Hi there,

It seems that the endpoint information (the IPs exposed to reach the CCE Kubernetes cluster APIs) is not part of the cce_cluster_v3 data source. Do you think it would be possible to add it? It is already exposed on the platform API side and would be really useful (today we have to connect to the console to get this information).

Affected Resource(s)

Please list the resources as a list, for example:

  • cce cluster

Expected Behavior

Get the private IP of the CCE cluster in order to access the Kubernetes APIs, as described here: https://docs.prod-cloud-ocb.orange-business.com/en-us/api2/cce/cce_02_0238.html

Actual Behavior

Not provided: https://www.terraform.io/docs/providers/flexibleengine/d/cce_cluster_v3.html

Kind regards,
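
For illustration, a lookup with the existing data source (assuming a lookup by name) and the kind of output this issue asks for; the endpoint attribute below is an assumption, not an existing output:

data "flexibleengine_cce_cluster_v3" "cluster" {
  name = "backend"
}

# Hypothetical: expose the cluster endpoint once the data source provides it.
# output "cce_internal_endpoint" {
#   value = "${data.flexibleengine_cce_cluster_v3.cluster.internal}"
# }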

[PROPOSAL] Switch to Go Modules

As part of the preparation for Terraform v0.12, we would like to migrate all providers to use Go Modules. We plan to continue checking dependencies into vendor/ to remain compatible with existing tooling/CI for a period of time; however, Go Modules will be used for management. Go Modules is the official solution for the Go programming language. We understand some providers might not want this change yet, but we encourage providers to begin looking towards the switch, as this is how we will be managing all Go projects in the future.

Would maintainers please react with 👍 for support, or 👎 if you wish to have this provider omitted from the first wave of pull requests. If your provider is in support, we would ask that you avoid merging any pull requests that mutate the dependencies while the Go Modules PR is open (in fact a total code freeze would be even more helpful), otherwise we will need to close that PR and re-run go mod init. Once merged, dependencies can be added or updated as follows:

$ GO111MODULE=on go get github.com/some/module@master
$ GO111MODULE=on go mod tidy
$ GO111MODULE=on go mod vendor

GO111MODULE=on might be unnecessary depending on your environment. This example will fetch a module @ master and record it in your project's go.mod and go.sum files. It's a good idea to tidy up afterward and then copy the dependencies into vendor/. To remove dependencies from your project, simply remove all usage from your codebase and run:

$ GO111MODULE=on go mod tidy
$ GO111MODULE=on go mod vendor

Thank you sincerely for all your time, contributions, and cooperation!

Private zone - Update DNS Record if IP change failed

Hi there,

Currently, if we update a DNS record with a new IP address (after an instance re-creation for example), there is a conflict because the FQDN already exists. This issue is also opened on the huaweicloud provider : https://github.com/terraform-providers/terraform-provider-huaweicloud/issues/54

Terraform Version

Terraform v0.11.10

provider.flexibleengine v1.2.1
provider.huaweicloud v1.2.0
provider.template v1.0.0

Affected Resource(s)

huaweicloud_dns_recordset_v2

Terraform Configuration Files

resource "huaweicloud_dns_recordset_v2" "ecs_arti_app01" {
  zone_id = "${var.dns_zone_id}"
  name = "${flexibleengine_compute_instance_v2.ecs_arti_app01.name}.local."
  ttl = 3600
  type = "A"
  records = ["${flexibleengine_compute_instance_v2.ecs_arti_app01.access_ip_v4}"]
}

-/+ flexibleengine_compute_instance_v2.ecs_arti_app01 (new resource required)
      id:                        "6bcac7fe-be6f-4c93-baf7-c9b3678eaf0c" => <computed> (forces new resource)
      access_ip_v4:              "192.168.0.43" => <computed>
      access_ip_v6:              "" => <computed>
      all_metadata.%:            "0" => <computed>
      availability_zone:         "eu-west-0b" => "eu-west-0b"
      flavor_id:                 "s3.xlarge.2" => <computed>
      flavor_name:               "s3.xlarge.2" => "s3.xlarge.2"
      image_id:                  "d9bd237a-c298-47a7-aca8-3664e3debb44" => <computed>
      image_name:                "OBS_U_Ubuntu_16.04" => "OBS_U_Ubuntu_16.04"
      key_pair:                  "keypair_ansible" => "keypair-xnrqxx24" (forces new resource)
      name:                      "dt-artiextapp-z01fe-prp" => "dt-artiextapp-z01fe-prp"
      network.#:                 "1" => "1"
      network.0.access_network:  "false" => "false"
      network.0.fixed_ip_v4:     "192.168.0.43" => <computed>
      network.0.fixed_ip_v6:     "" => <computed>
      network.0.floating_ip:     "" => <computed>
      network.0.mac:             "fa:16:3e:b5:86:88" => <computed>
      network.0.name:            "14bce6dd-a2ce-4402-99af-a4905bfb18bf" => <computed>
      network.0.port:            "" => <computed>
      network.0.uuid:            "33e2e14a-288c-4068-83f9-9d9a664fa4" => "33e2e14a-288c-4068-83f9-9d9a664fa4"
      region:                    "eu-west-0" => <computed>
      security_groups.#:         "1" => "1"
      security_groups.584365606: "secgroup_artifactory_prp" => "secgroup_artifactory_prp"
      stop_before_destroy:       "false" => "false"


  ~ huaweicloud_dns_recordset_v2.ecs_arti_app01
      records.#:                 "1" => <computed>

Debug Output

[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: 2018/12/11 14:58:01 [DEBUG] Create Options: huaweicloud.RecordSetCreateOpts{CreateOpts:recordsets.CreateOpts{Name:"dt-artiextapp-z01fe-prp.local.", Description:"", Records:[]string{"192.168.0.194"}, TTL:3600, Type:"A"}, ValueSpecs:map[string]string{}}
...
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: 2018/12/11 14:58:01 [DEBUG] HuaweiCloud Request Body: {
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: "name": "dt-artiextapp-z01fe-prp.local.",
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: "records": [
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: "192.168.0.194"
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: ],
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: "ttl": 3600,
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: "type": "A"
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: }
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: 2018/12/11 14:58:01 [DEBUG] Openstack Response Code: 400
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: 2018/12/11 14:58:01 [DEBUG] Openstack Response Headers:
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: Content-Length: 117
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: Content-Type: application/json
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: Date: Tue, 11 Dec 2018 13:58:11 GMT
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: Request_id: 75c2c0a7-9032-4d23-8750-89ee06a03624
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: Set-Cookie: ***
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: 2018/12/11 14:58:01 [DEBUG] HuaweiCloud Response Body: {
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: "code": "DNS.0312",
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: "message": "Attribute 'name' conflicts with Record Set 'dt-artiextapp-z01fe-prp.local.' type 'A'."
[DEBUG] plugin.terraform-provider-huaweicloud_v1.2.0_x4: }

Expected Behavior

Re-creation or update of the DNS record.

Actual Behavior

error (see debug output)

CCE flexibleengine_cce_cluster_v3 : cluster_version argument isn't optional

Hi there,

For the deployment of a CCE cluster (flexibleengine_cce_cluster_v3), the cluster_version argument isn't optional as described in the documentation.
If you omit it, you get the following error message:
{"errorCode":"E.CFE.4000407","reason":"Version is too low","message":"lowest version is v1.7.3-r10"}

Terraform Version

└─ $ ▶ terraform -v
Terraform v0.11.11

  • provider.flexibleengine v1.4.0

Affected Resource(s)

flexibleengine_cce_cluster_v3

Terraform Configuration Files

resource "flexibleengine_cce_cluster_v3" "ccecluster1" {
  name = "${var.cce_cluster_name}"
  cluster_type = "${var.cce_cluster_type}"
  #cluster_version = "${var.cce_cluster_version}"
  flavor_id = "${var.cce_cluster_flavorid}"
  vpc_id = "${local.vpc_id}"
  subnet_id = "${local.subnet1a_id}"
  container_network_type = "${var.cce_container_network_type}"
  description = "${var.cce_description}"
}

Debug Output

Error: Error applying plan:

1 error(s) occurred:

2019-01-29T17:21:34.244+0100 [DEBUG] plugin.terraform-provider-flexibleengine_v1.4.0_x4: 2019/01/29 17:21:34 [ERR] plugin: plugin server: accept unix /tmp/plugin807992753: use of closed network connection
* flexibleengine_cce_cluster_v3.ccecluster1: 1 error(s) occurred:

* flexibleengine_cce_cluster_v3.ccecluster1: Error creating flexibleengine Cluster: Bad request with: [POST https://cce.eu-west-0.prod-cloud-ocb.orange-business.com/api/v3/projects/a9b00e7dc984480d9e61ed2a70dff8f3/clusters], error message: {"errorCode":"E.CFE.4000407","reason":"Version is too low","message":"lowest version is v1.7.3-r10"}

Expected Behavior

Without the cluster_version argument, the CCE cluster should be created with the latest supported CCE version.

Actual Behavior

Without the cluster_version argument, the CCE cluster isn't created because of the default version used ({"errorCode":"E.CFE.4000407","reason":"Version is too low","message":"lowest version is v1.7.3-r10"}).
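
As a workaround, pinning cluster_version explicitly avoids the error; a minimal sketch, reusing the configuration above and a version string that appears elsewhere on this page:

resource "flexibleengine_cce_cluster_v3" "ccecluster1" {
  name                   = "${var.cce_cluster_name}"
  cluster_type           = "${var.cce_cluster_type}"
  cluster_version        = "v1.9.7-r1"   # pinned explicitly until the default is fixed
  flavor_id              = "${var.cce_cluster_flavorid}"
  vpc_id                 = "${local.vpc_id}"
  subnet_id              = "${local.subnet1a_id}"
  container_network_type = "${var.cce_container_network_type}"
  description            = "${var.cce_description}"
}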

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply

Important Factoids

n/a

References

n/a

Feature request : Elastic Volume Storage data source request

Hello,

Feature request:
There is no data source available for querying the details of a single EVS disk. Such a data source would make it possible to attach an existing volume with flexibleengine_compute_volume_attach_v2 without having to create one.

Today we can't, because the only way to get a volume_id for flexibleengine_compute_volume_attach_v2 is to create a volume with the flexibleengine_blockstorage_volume_v2 resource.

It would be great to add a data source counterpart to flexibleengine_blockstorage_volume_v2.
The details about the API request are available here: https://docs.prod-cloud-ocb.orange-business.com/en-us/doc/pdf/20190321/20190321100742_49a1f8388a.pdf
Reference: section 8.x
Nearly every API request in section 6.x seems to be deprecated.

Use Case:
A shared disk has already been created on Flexible Engine with data on it, and we want to share this disk between multiple compute instances.

Thanks a lot !

PS:
If nobody has time to spend on this feature, I will look at it myself, as I need it.
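
For illustration, a hypothetical sketch of the requested data source (it does not exist in the provider today; the data source arguments and the names used here are assumptions):

# Hypothetical data source requested by this issue.
data "flexibleengine_blockstorage_volume_v2" "shared_disk" {
  name = "shared-data-disk"
}

resource "flexibleengine_compute_volume_attach_v2" "attach_shared" {
  instance_id = "${flexibleengine_compute_instance_v2.ecs_app.id}"
  volume_id   = "${data.flexibleengine_blockstorage_volume_v2.shared_disk.id}"
}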

The "ELB" module creates a "Classic" load balancer and not an "Enhanced" one

Hi there,


Terraform Version

Terraform v0.11.9
+ provider.flexibleengine v1.2.0

Your version of Terraform is out of date! The latest version
is 0.11.10. You can update by downloading from www.terraform.io/downloads.html

Affected Resource(s)

Please list the resources as a list, for example:

  • flexibleengine_elb_loadbalancer

Terraform Configuration Files

resource "flexibleengine_elb_loadbalancer" "elb_app_prp" {
  name = "elb_app_prp"
  type = "External"
  vpc_id         = "${data.flexibleengine_vpc_v1.vpc_tools_prp.id}"
  admin_state_up = 0
  bandwidth      = 300
#   vip_address    = "${flexibleengine_vpc_eip_v1.eip_app_prp.publicip.0.ip_address}"
}

Debug Output

Verification in the FE HMI

Expected Behavior

Creation of an "Enhanced" Elastic Load Balance

Actual Behavior

The module creates a "classic type" ELB
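
For comparison, the Enhanced load balancer is exposed through the separate flexibleengine_lb_loadbalancer_v2 resource; a minimal sketch based on the enhanced LB configuration shown further down this page (the subnet reference is an assumption):

resource "flexibleengine_lb_loadbalancer_v2" "elb_app_prp_enhanced" {
  name           = "elb_app_prp_enhanced"
  vip_subnet_id  = "${flexibleengine_networking_subnet_v2.subnet.id}"
  admin_state_up = "true"
}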

Steps to Reproduce

  1. terraform apply


Assign Fixed IP to RDS

Hi Team,

Is there a way to assign a fixed IP to RDS? I checked the Terraform Flexible Engine specifications, but you can only specify subnet information there.

Thanks

Feature: add AK/SK support

This is a feature proposal to add AK/SK configuration support.

*** background ***

As the README.md [1] describes, this provider does not support AK/SK authentication yet, even though Flexible Engine has supported AK/SK for a long time.

*** Usecase ***

A tenant admin may generate a temporary AK and SK, then delegate resource maintenance to others via Terraform using that AK/SK. The admin can also set the validity period of the AK/SK. This is a safer approach that many users expect.

*** what changes ***

SDK changes
There are two SDKs in this repository: gophercloud, which accesses Flexible Engine services in an OpenStack-compatible way, and huaweicloud/golangsdk, which accesses the services that gophercloud cannot support. However, gophercloud does not have the AK/SK feature.

So we have to move all services onto huaweicloud/golangsdk and use only that repository, so that the AK/SK feature is added to huaweicloud/golangsdk and gophercloud is removed from the dependencies.

Configuration changes
Currently the provider configuration already has AK/SK options, but they are only used to access the OBS service; username/password are still required.

After this PR, if the AK/SK configuration is present, the username/password will no longer be used. In other words, AK/SK takes priority.
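
For illustration, the intended AK/SK-only provider configuration (mirroring the provider block shown later on this page); whether username/password can really be omitted is exactly what this proposal changes:

provider "flexibleengine" {
  access_key = "MY_ACCESS_KEY"
  secret_key = "MY_SECRET_KEY"
  auth_url   = "https://iam.eu-west-0.prod-cloud-ocb.orange-business.com/v3"
  region     = "eu-west-0"
}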

Terraform Version
any

Affected Resource(s)
Please list the resources as a list, for example:
N/A

References
[1]https://github.com/terraform-providers/terraform-provider-flexibleengine/blob/master/README.md#quick-start

Error on flexibleengine_cce_cluster_v3 argument : subnet_id

Hello,

It seems that there is an error with the "subnet_id" argument of flexibleengine_cce_cluster_v3: a network ID is expected instead.

Terraform Version

Terraform v0.11.13

Affected Resource(s)

flexibleengine_cce_cluster_v3

Terraform Configuration Files

resource "flexibleengine_cce_cluster_v3" "cce_cluster" {
  name                   = "${var.cluster_name}"
  cluster_type           = "${var.cluster_type}"
  cluster_version        = "${var.cluster_version}"
  flavor_id              = "${var.flavor_id}"
  vpc_id                 = "${var.vpc_id}"
  subnet_id              = "${var.subnet_id}"
  container_network_type = "${var.container_network_type}"
  description            = "${var.cluster_desc}"
}

Expected Behavior

CCE cluster creation

Actual Behavior

TF error message: "internal server error"

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply using existing subnet_id

Important Factoids

It seems that the value we should use is the network_id and not the subnet_id.
The current workaround is to use a "flexibleengine_networking_network_v2" data source to get the network_id, but it is quite dirty.
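
A sketch of that workaround, assuming the subnet's backing network can be looked up by name (the name used here is an assumption):

# Look up the underlying network to obtain the network ID the CCE API actually expects.
data "flexibleengine_networking_network_v2" "cce_net" {
  name = "my-cce-subnet"   # assumed: the name of the VPC subnet's backing network
}

resource "flexibleengine_cce_cluster_v3" "cce_cluster" {
  name                   = "${var.cluster_name}"
  cluster_type           = "${var.cluster_type}"
  cluster_version        = "${var.cluster_version}"
  flavor_id              = "${var.flavor_id}"
  vpc_id                 = "${var.vpc_id}"
  subnet_id              = "${data.flexibleengine_networking_network_v2.cce_net.id}"
  container_network_type = "${var.container_network_type}"
  description            = "${var.cluster_desc}"
}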

[CCE nodes] : node label not taken into account

Hi there,

Terraform Version

0.11.14

Affected Resource(s)

  • flexibleengine_cce_nodes_v3

Expected Behavior

When I declare a label on a node, the label is present on the node in the FE console.

Actual Behavior

The label is not present on the node in the FE console. The label declared in the Terraform template is not taken into account.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply
  2. Template
resource "flexibleengine_cce_node_v3" "node_az_admin" {
  ...
  labels {
    admin-node = "true"
  }
  ...

backend of load-balancer will be created again when launching the same script

When re-executing the load-balancer script, Terraform deletes the old backend and creates a new one. This is not what we want.

Terraform Version

terraform v0.11.0

Affected Resource(s)

Please list the resources as a list, for example:
• load-balancer

Terraform Configuration Files

resource "flexibleengine_lb_loadbalancer_v2" "loadbalancer" {
  count          = "${var.instance_count}"
  name           = "${var.project}-loadbalancer"
  vip_subnet_id  = "${flexibleengine_networking_subnet_v2.subnet.id}"
  admin_state_up = "true"
  depends_on     = ["flexibleengine_networking_router_interface_v2.interface"]
}

resource "flexibleengine_lb_listener_v2" "listener" {
  name            = "${var.project}-listener"
  count           = "${var.instance_count}"
  protocol        = "HTTP"
  protocol_port   = 80
  loadbalancer_id = "${flexibleengine_lb_loadbalancer_v2.loadbalancer.id}"
  admin_state_up  = "true"
  #connection_limit = "-1"
}

resource "flexibleengine_lb_pool_v2" "pool" {
  protocol    = "HTTP"
  count       = "${var.instance_count}"
  lb_method   = "ROUND_ROBIN"
  listener_id = "${flexibleengine_lb_listener_v2.listener.id}"
}

resource "flexibleengine_lb_member_v2" "member" {
  count         = "${var.instance_count}"
  address       = "${element(flexibleengine_compute_instance_v2.webserver.*.access_ip_v4, count.index)}"
  pool_id       = "${flexibleengine_lb_pool_v2.pool.id}"
  subnet_id     = "${flexibleengine_networking_subnet_v2.subnet.id}"
  protocol_port = 80
}

resource "flexibleengine_lb_monitor_v2" "monitor" {
  pool_id        = "${flexibleengine_lb_pool_v2.pool.id}"
  count          = "${var.instance_count}"
  type           = "HTTP"
  url_path       = "/"
  expected_codes = "200"
  delay          = 20
  timeout        = 10
  max_retries    = 5
}

Debug Output

-/+ module.compute.orangecloud_elb_backend.backend[0] (new resource required)
id: "d804cdf38baf4db7b689f0aec329ee67" => (forces new resource)
address: "100.66.5.33" => "10.36.251.231" (forces new resource)
listener_id: "c27e7da374bc4361ac060554dee117ea" => "c27e7da374bc4361ac060554dee117ea"
server_id: "7905524d-4dfc-4d8b-bd8b-3d4ac38edd12" => "7905524d-4dfc-4d8b-bd8b-3d4ac38edd12"

-/+ module.compute.orangecloud_elb_backend.backend[1] (new resource required)
id: "4accc42a5a304a958661a0dcd9206eb0" => (forces new resource)
address: "100.66.7.65" => "10.36.251.131" (forces new resource)
listener_id: "c27e7da374bc4361ac060554dee117ea" => "c27e7da374bc4361ac060554dee117ea"
server_id: "2f15c923-c496-4dba-a366-1d9451c25a9f" => "2f15c923-c496-4dba-a366-1d9451c25a9f"

-/+ module.compute.orangecloud_elb_backend.backend[2] (new resource required)
id: "44534271142a4c56b913ec973214394b" => (forces new resource)
address: "100.66.6.210" => "10.36.251.20" (forces new resource)
listener_id: "c27e7da374bc4361ac060554dee117ea" => "c27e7da374bc4361ac060554dee117ea"
server_id: "a3bdc926-3913-4b76-a591-e8d3e7c48549" => "a3bdc926-3913-4b76-a591-e8d3e7c48549"

Expected Behavior

The backend should not be changed.

Actual Behavior

The backend is deleted and a new one is created.

flexibleengine_cce_cluster_v3 documentation is wrong

Terraform Version

NA

Affected Resource(s)

flexibleengine_cce_cluster_v3

Terraform Configuration Files

   variable "cluster_id" { }
   variable "ssh_key" { }
   variable "availability_zone" { }

   resource "flexibleengine_cce_node_v3" "node_1" {
    cluster_id="${var.cluster_id}"
    name = "node1"
    flavor_id="s1.medium"
    iptype="5_bgp"
    availability_zone= "${var.availability_zone}"
    key_pair="${var.ssh_key}"
    root_volume = {
     size= 40,
     volumetype= "SATA"
    }
    sharetype= "PER"
    bandwidth_size= 100,
    data_volumes = [
     {
      size= 100,
      volumetype= "SATA"
     },
    ]
  }

Debug Output

error message: {"errorCode":"E.CFE.4000407","reason":"Version is too low","message":"lowest version is v1.7.3-r10"}

Expected Behavior

cluster_type is supposed to be optional:
"cluster_type - (Required) Cluster Type, Changing this parameter will create a new cluster resource."

Actual Behavior

We get an error message because the default value seems to be a "too low" version.

flexibleengine_dns_zone_v2 cannot be attached to 2 VPCs

Hi there,

I think this is a bug in the APIs but I open it here to keep tracks.

When creating a DNS zone attached to 2 different VPCs, there is no error, but only the first one is really attached. Name resolution works in the first VPC but not in the second one.

Terraform Version

Terraform v0.11.13

  • provider.flexibleengine v1.4.0

Affected Resource(s)

  • flexibleengine_dns_zone_v2

Terraform Configuration Files

resource "flexibleengine_dns_zone_v2" "local" {
  name        = "local."
  email       = "[email protected]"
  description = "DNS Zone for POC subscription"
  ttl         = 3000
  zone_type   = "private"

  router = [{
    router_region = "eu-west-0"
    router_id     = "${flexibleengine_vpc_v1.vpc_tools.id}"
  },
    {
      router_region = "eu-west-0"
      router_id     = "${flexibleengine_vpc_v1.vpc_bastion.id}"
    }
  ]
}

Terraform show Output

flexibleengine_dns_zone_v2.local:
  id = ff808082678a2c9a0169b5bbc99d6fe7
  description = DNS Zone for POC subscription
  email = [email protected]
  masters.# = 0
  name = local.
  region = eu-west-0
  router.# = 2
  router.1241275220.router_id = 3eee0a99-1193-474a-a245-9c5f07c9c082
  router.1241275220.router_region = eu-west-0
  router.2802086617.router_id = 9b709168-fc32-4505-abbe-9487cbd95410
  router.2802086617.router_region = eu-west-0
  ttl = 3000
  zone_type = private

Actual Behavior

In the GUI, only the first VPC is attached. On instances in the second VPC, name resolution for this zone fails.

Cheers

ECS auto recovery

Hello,

When I use the Flexible Engine provider, my ECS servers are provisioned with auto-recovery disabled.
Is there a way to configure this property when creating an ECS with the Terraform provider?

The API exposes "extendparam/support_auto_recovery" to do that.

Thanks for your help.
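
For illustration, what such a knob might look like on the compute resource (purely hypothetical: the auto_recovery argument does not exist in the provider discussed here, and the other values are reused from a configuration elsewhere on this page):

resource "flexibleengine_compute_instance_v2" "ecs_app" {
  name        = "ecs-app-01"
  image_name  = "OBS_U_Ubuntu_16.04"
  flavor_name = "s3.xlarge.2"

  # Hypothetical argument mapping to the API field extendparam/support_auto_recovery.
  # auto_recovery = true

  network {
    uuid = "${data.flexibleengine_vpc_subnet_v1.subnet_tools.id}"
  }
}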

ccev3: Empty attributes

Terraform Version

terraform -v Terraform v0.11.7

Affected Resource(s)

flexibleengine_cce_cluster_v3

Terraform Configuration Files

  name              = "backend"
  cluster_type      = "VirtualMachine"
  flavor_id         = "${var.cce_flavorId}"
  vpc_id            = "${var.vpcId}"
  subnet_id         = "${var.subnetId}"
  container_network_type  = "overlay_l2"
  cluster_version   = "v1.9.7-r1"
}

Expected Behavior

Based on the Flexible Engine provider documentation, the following attributes should be populated:

  • internal - The internal network address.
  • external - The external network address.
  • external_otc - The endpoint of the cluster to be accessed through API Gateway.

Actual Behavior

The previous fields are empty, see the following state

  id = 757270cd-256a-11e9-86b8-0255ac101b11
  billing_mode = 0
  cluster_type = VirtualMachine
  cluster_version = v1.9.7-r1
  container_network_cidr = 172.16.0.0/16
  container_network_type = overlay_l2
  description =
  external =
  external_otc =
  flavor_id = cce.s1.large
  highway_subnet_id =
  internal =
  name = backend
  region = eu-west-0
  status = Available
  subnet_id = fe1cb32a-e998-4675-b3fe-5eb2c21c4434
  vpc_id = f828fc31-0207-4244-87ce-5c1973a14d53

flexibleengine_csbs_backup_policy_v1 resource wants to make changes when none have been made

Hi there,

At every terraform apply, flexibleengine_csbs_backup_policy_v1 wants to make changes, even when no changes have been made to the infrastructure.

Terraform Version

terraform -v 
Terraform v0.11.11
+ provider.flexibleengine v1.5.0
+ provider.null v2.1.2
+ provider.openstack v1.18.0

Affected Resource(s)

Please list the resources as a list, for example:

  • flexibleengine_csbs_backup_policy_v1

Terraform Configuration Files

   resource "flexibleengine_csbs_backup_policy_v1" "backup_policy_k8s_node" {
   name  = "backup-${flexibleengine_compute_instance_v2.k8s_node.*.name[count.index]}"
   count = "${ var.enable_backup ? var.number_of_k8s_nodes : 0 }"
   resource {
     id = "${flexibleengine_compute_instance_v2.k8s_node.*.id[count.index]}"
     type = "OS::Nova::Server"
     name = "${flexibleengine_compute_instance_v2.k8s_node.*.name[count.index]}"
   }
   scheduled_operation {
     enabled = true
     operation_type = "backup"
     retention_duration_days = "14"
     trigger_pattern = "BEGIN:VEVENT\nRRULE:FREQ=DAILY;INTERVAL=1;BYHOUR=19;BYMINUTE=00\nEND:VEVENT"
   }
 }

Debug Output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:
  ~ module.compute.flexibleengine_csbs_backup_policy_v1.backup_policy_k8s_node[0]
      scheduled_operation.1400495449.description:             "" => ""
      scheduled_operation.1400495449.enabled:                 "true" => "false"
      scheduled_operation.1400495449.max_backups:             "0" => "0"
      scheduled_operation.1400495449.operation_type:          "backup" => ""
      scheduled_operation.1400495449.retention_duration_days: "14" => "0"
      scheduled_operation.1400495449.trigger_pattern:         "BEGIN:VEVENT\nRRULE:FREQ=DAILY;INTERVAL=1;BYHOUR=19;BYMINUTE=00\nEND:VEVENT" => ""
      scheduled_operation.809681942.description:              "" => ""
      scheduled_operation.809681942.enabled:                  "" => "true"
      scheduled_operation.809681942.id:                       "" => <computed>
      scheduled_operation.809681942.max_backups:              "" => ""
      scheduled_operation.809681942.name:                     "" => <computed>
      scheduled_operation.809681942.operation_type:           "" => "backup"
      scheduled_operation.809681942.permanent:                "" => <computed>
      scheduled_operation.809681942.retention_duration_days:  "" => "14"
      scheduled_operation.809681942.trigger_id:               "" => <computed>
      scheduled_operation.809681942.trigger_name:             "" => <computed>
      scheduled_operation.809681942.trigger_pattern:          "" => "BEGIN:VEVENT\nRRULE:FREQ=DAILY;INTERVAL=1;BYHOUR=19;BYMINUTE=00\nEND:VEVENT"
      scheduled_operation.809681942.trigger_type:             "" => <computed>

  ~ module.compute.flexibleengine_csbs_backup_policy_v1.backup_policy_k8s_node[1]
      scheduled_operation.1400495449.description:             "" => ""
      scheduled_operation.1400495449.enabled:                 "true" => "false"
      scheduled_operation.1400495449.max_backups:             "0" => "0"
      scheduled_operation.1400495449.operation_type:          "backup" => ""
      scheduled_operation.1400495449.retention_duration_days: "14" => "0"
      scheduled_operation.1400495449.trigger_pattern:         "BEGIN:VEVENT\nRRULE:FREQ=DAILY;INTERVAL=1;BYHOUR=19;BYMINUTE=00\nEND:VEVENT" => ""
      scheduled_operation.809681942.description:              "" => ""
      scheduled_operation.809681942.enabled:                  "" => "true"
      scheduled_operation.809681942.id:                       "" => <computed>
      scheduled_operation.809681942.max_backups:              "" => ""
      scheduled_operation.809681942.name:                     "" => <computed>
      scheduled_operation.809681942.operation_type:           "" => "backup"
      scheduled_operation.809681942.permanent:                "" => <computed>
      scheduled_operation.809681942.retention_duration_days:  "" => "14"
      scheduled_operation.809681942.trigger_id:               "" => <computed>
      scheduled_operation.809681942.trigger_name:             "" => <computed>
      scheduled_operation.809681942.trigger_pattern:          "" => "BEGIN:VEVENT\nRRULE:FREQ=DAILY;INTERVAL=1;BYHOUR=19;BYMINUTE=00\nEND:VEVENT"
      scheduled_operation.809681942.trigger_type:             "" => <computed>

  ~ module.compute.flexibleengine_csbs_backup_policy_v1.backup_policy_k8s_node[2]
      scheduled_operation.1400495449.description:             "" => ""
      scheduled_operation.1400495449.enabled:                 "true" => "false"
      scheduled_operation.1400495449.max_backups:             "0" => "0"
      scheduled_operation.1400495449.operation_type:          "backup" => ""
      scheduled_operation.1400495449.retention_duration_days: "14" => "0"
      scheduled_operation.1400495449.trigger_pattern:         "BEGIN:VEVENT\nRRULE:FREQ=DAILY;INTERVAL=1;BYHOUR=19;BYMINUTE=00\nEND:VEVENT" => ""
      scheduled_operation.809681942.description:              "" => ""
      scheduled_operation.809681942.enabled:                  "" => "true"
      scheduled_operation.809681942.id:                       "" => <computed>
      scheduled_operation.809681942.max_backups:              "" => ""
      scheduled_operation.809681942.name:                     "" => <computed>
      scheduled_operation.809681942.operation_type:           "" => "backup"
      scheduled_operation.809681942.permanent:                "" => <computed>
      scheduled_operation.809681942.retention_duration_days:  "" => "14"
      scheduled_operation.809681942.trigger_id:               "" => <computed>
      scheduled_operation.809681942.trigger_name:             "" => <computed>
      scheduled_operation.809681942.trigger_pattern:          "" => "BEGIN:VEVENT\nRRULE:FREQ=DAILY;INTERVAL=1;BYHOUR=19;BYMINUTE=00\nEND:VEVENT"
      scheduled_operation.809681942.trigger_type:             "" => <computed>

  ~ module.compute.flexibleengine_csbs_backup_policy_v1.backup_policy_k8s_node[3]
      scheduled_operation.1400495449.description:             "" => ""
      scheduled_operation.1400495449.enabled:                 "true" => "false"
      scheduled_operation.1400495449.max_backups:             "0" => "0"
      scheduled_operation.1400495449.operation_type:          "backup" => ""
      scheduled_operation.1400495449.retention_duration_days: "14" => "0"
      scheduled_operation.1400495449.trigger_pattern:         "BEGIN:VEVENT\nRRULE:FREQ=DAILY;INTERVAL=1;BYHOUR=19;BYMINUTE=00\nEND:VEVENT" => ""
      scheduled_operation.809681942.description:              "" => ""
      scheduled_operation.809681942.enabled:                  "" => "true"
      scheduled_operation.809681942.id:                       "" => <computed>
      scheduled_operation.809681942.max_backups:              "" => ""
      scheduled_operation.809681942.name:                     "" => <computed>
      scheduled_operation.809681942.operation_type:           "" => "backup"
      scheduled_operation.809681942.permanent:                "" => <computed>
      scheduled_operation.809681942.retention_duration_days:  "" => "14"
      scheduled_operation.809681942.trigger_id:               "" => <computed>
      scheduled_operation.809681942.trigger_name:             "" => <computed>
      scheduled_operation.809681942.trigger_pattern:          "" => "BEGIN:VEVENT\nRRULE:FREQ=DAILY;INTERVAL=1;BYHOUR=19;BYMINUTE=00\nEND:VEVENT"
      scheduled_operation.809681942.trigger_type:             "" => <computed>


Expected Behavior

Do not make changes when none are needed.

Actual Behavior

At every terraform apply, flexibleengine_csbs_backup_policy_v1 wants to make changes, even when no changes have been made to the infrastructure.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply

Important Factoids

Is there anything atypical about your account that we should know? For example: which version of OrangeCloud? Tight ACLs?
No

References

No
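
A possible stopgap (not a fix) is to suppress the spurious diff with a lifecycle block, assuming the scheduled_operation block never needs in-place updates:

resource "flexibleengine_csbs_backup_policy_v1" "backup_policy_k8s_node" {
  # ... arguments as in the configuration above ...

  # Stopgap only: ignore the attribute that produces the spurious diff.
  lifecycle {
    ignore_changes = ["scheduled_operation"]
  }
}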

documentation errors in flexibleengine_nat_gateway_v2

Hello

In the documentation of flexibleengine_nat_gateway_v2 it is stated:
internal_network_id - (Optional) ID of the network this nat gateway connects to.

However, this argument seems to be required.

Terraform Version

$ terraform -v
Terraform v0.11.10

  • provider.flexibleengine v1.2.1

Affected Resource(s)

  • flexibleengine_nat_gateway_v2

Terraform Configuration Files

resource "flexibleengine_nat_gateway_v2" "nat_gateway_example" {
  name = "nat_gateway_example"
  spec = "1"
  router_id = "${flexibleengine_vpc_v1.vpc_example.id}"

  # internal_network_id = "${flexibleengine_vpc_subnet_v1.subnet_example.id}"
}

Expected Behavior

The NAT gateway is created.

Actual Behavior

$ terraform apply

Error: flexibleengine_nat_gateway_v2.nat_gateway_example: "internal_network_id": required field is not set

DNS - Your query returned no results.

Hi,

I am trying to create 2 empty DNS zones and I get an error.

Terraform Version

Terraform v0.11.13
2019/07/16 11:29:30 [DEBUG] plugin: waiting for all plugin processes to complete...
+ provider.flexibleengine v1.6.0

Affected Resource(s)

Please list the resources as a list, for example:

  • flexibleengine_dns_zone_v2

Terraform Configuration Files

data "flexibleengine_dns_zone_v2" "mydomain" {
  name = "mydomain.com"
  zone_type = "public"
}

data "flexibleengine_dns_zone_v2" "myseconddomain_com" {
  name = "myseconddomain.com"
  zone_type = "public"
}

Debug Output


data.flexibleengine_dns_zone_v2.mydomain_com: Refreshing state...
2019/07/16 11:26:52 [TRACE] root: eval: *terraform.EvalReadDataApply
2019-07-16T11:26:52.914+0200 [DEBUG] plugin.terraform-provider-flexibleengine_v1.6.0_x4: 2019/07/16 11:26:52 [DEBUG] FlexibleEngine Region is: eu-west-0
2019-07-16T11:26:52.914+0200 [DEBUG] plugin.terraform-provider-flexibleengine_v1.6.0_x4: 2019/07/16 11:26:52 [DEBUG] FlexibleEngine Region is: eu-west-0
data.flexibleengine_dns_zone_v2.myseconddomain_com: Refreshing state...
2019/07/16 11:26:53 [ERROR] root: eval: *terraform.EvalReadDataApply, err: data.flexibleengine_dns_zone_v2.myseconddomain_com: Your query returned no results. Please change your search criteria and try again.
2019/07/16 11:26:53 [ERROR] root: eval: *terraform.EvalSequence, err: data.flexibleengine_dns_zone_v2.myseconddomain_com: Your query returned no results. Please change your search criteria and try again.
2019/07/16 11:26:53 [TRACE] [walkRefresh] Exiting eval tree: data.flexibleengine_dns_zone_v2.myseconddomain_com

2019/07/16 11:26:53 [ERROR] root: eval: *terraform.EvalReadDataApply, err: data.flexibleengine_dns_zone_v2.mydomain_com: Your query returned no results. Please change your search criteria and try again.
2019/07/16 11:26:53 [ERROR] root: eval: *terraform.EvalSequence, err: data.flexibleengine_dns_zone_v2.mydomain_com: Your query returned no results. Please change your search criteria and try again.
2019/07/16 11:26:53 [TRACE] [walkRefresh] Exiting eval tree: data.flexibleengine_dns_zone_v2.mydomain_com
2019/07/16 11:26:53 [TRACE] dag/walk: upstream errored, not walking "provider.flexibleengine (close)"
2019/07/16 11:26:53 [TRACE] dag/walk: upstream errored, not walking "root"
2019/07/16 11:26:53 [DEBUG] plugin: waiting for all plugin processes to complete...

Best regards,
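
If the intent is really to create the zones rather than look up existing ones, a resource block (as shown elsewhere on this page) is needed instead of a data block; a minimal sketch, with the domain name and email as placeholders:

resource "flexibleengine_dns_zone_v2" "mydomain_com" {
  name      = "mydomain.com."
  email     = "hostmaster@mydomain.com"   # placeholder
  zone_type = "public"
  ttl       = 3000
}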

feature request : L7 forwarding policy on Enhanced Load Balancer

Hello

Currently it is not possible in the Terraform provider to specify an L7 forwarding policy. It is possible in the GUI and documented in the API (section "Forwarding Policy"; I did not test it).

Terraform Version

$ terraform -v
Terraform v0.11.10

  • provider.flexibleengine v1.2.1

Affected Resource(s)

  • flexibleengine_lb_loadbalancer_v2

Thanks in advance.
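
For illustration, a hypothetical sketch of what an L7 forwarding policy resource could look like (no such resource exists in provider v1.2.1; the resource name and arguments are assumptions modelled on the listener/pool resources used elsewhere on this page):

# Hypothetical resource requested by this issue.
resource "flexibleengine_lb_l7policy_v2" "forward_api" {
  action           = "REDIRECT_TO_POOL"
  listener_id      = "${flexibleengine_lb_listener_v2.listener.id}"
  redirect_pool_id = "${flexibleengine_lb_pool_v2.pool.id}"
}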

Extend blockstorage volume = new volume

Hi there,

Changing the size of the blockstorage in terraform will recreate the volume and recreate the attachment of the volume to the instance

Terraform Version

terraform -v
Terraform v0.11.10

  • provider.flexibleengine v1.2.1
  • provider.huaweicloud v1.2.0
  • provider.template v1.0.0

Affected Resource(s)

  • flexibleengine_blockstorage_volume_v2
  • flexibleengine_compute_volume_attach_v2
  • flexibleengine_compute_instance_v2

Terraform Configuration Files

resource "flexibleengine_blockstorage_volume_v2" "bs_ecs_arti_app01" {
  name = "ecs_arti_app01_var_opt"
  availability_zone = "${flexibleengine_compute_instance_v2.ecs_arti_app01.availability_zone}"
  volume_type = "SSD"
  size = 25
}
resource "flexibleengine_compute_volume_attach_v2" "attached_bs_ecs_arti_app01" {
  instance_id = "${flexibleengine_compute_instance_v2.ecs_arti_app01.id}"
  volume_id = "${flexibleengine_blockstorage_volume_v2.bs_ecs_arti_app01.id}"
}

resource "flexibleengine_compute_instance_v2" "ecs_arti_app01" {
  name              = "dt-artiextapp-z01fe-${var.environment}"
  image_name        = "OBS_U_Ubuntu_16.04"
  flavor_name       = "s3.xlarge.2"
  key_pair          = "keypair_ansible"
  security_groups   = ["secgroup_artifactory_prp"]
  availability_zone = "eu-west-0b"

  network {
    uuid = "${data.flexibleengine_vpc_subnet_v1.subnet_tools.id}"
  }
}

Expected Behavior

The size of the blockstorage should be increased

Actual Behavior

A new volume is created with the size requested

Maybe you need to use this API?
https://docs.prod-cloud-ocb.orange-business.com/en-us/api/evs/en-us_topic_0058626625.html

Another point, about AZs:
At the moment availability_zone is not mandatory. The problem is that block storage is not multi-AZ, so if you have an instance in AZ B and the block storage in AZ A, you get an error.
Even in debug mode, the error message is not clear.

Thank you!

Additionally, it would also be very useful to get the IPs of worker nodes, as is available with the low-level API: https://docs.prod-cloud-ocb.orange-business.com/en-us/api2/cce/cce_02_0243.html


"status": {
        "phase": "Active",
        "serverId": "99de97f0-a10a-4215-ace7-817de0136ff5",
        "privateIP": "192.168.0.218",
        "publicIP": "10.154.50.127"
    }

Regards

Originally posted by @SebBlin in https://github.com/terraform-providers/terraform-provider-flexibleengine/issues/198#issuecomment-508369671

Adding a network to an instance forces recreation

Terraform Version

Terraform v0.11.11
+ provider.flexibleengine v1.2.1
+ provider.openstack v1.12.0

Affected Resource(s)

flexibleengine_compute_instance_v2

Terraform Configuration Files

resource "flexibleengine_compute_instance_v2" "vm_pa" {
  name = "vm-pa${count.index+1}"
  count = 2
  image_name = "PA-VM-KVM-8.0.5"
  flavor_name = "s3.large.4"
  key_pair = "fgzx6022"
  security_groups = ["${flexibleengine_networking_secgroup_v2.secgroup_pa.name}"]

  network {
      uuid = "${flexibleengine_vpc_subnet_v1.vpc-transit-subnet.id}"
  }
  
  network {
      uuid = "${flexibleengine_vpc_subnet_v1.vpc-transit-ha-subnet.id}"
  }
}

Debug Output

-/+ flexibleengine_compute_instance_v2.vm_pa[0] (new resource required)
      id:                        "872f98f8-3187-4626-b66d-7bbbb2e43fc1" => <computed> (forces new resource)
      access_ip_v4:              "172.16.1.84" => <computed>
      access_ip_v6:              "" => <computed>
      all_metadata.%:            "0" => <computed>
      availability_zone:         "eu-west-0b" => <computed>
      flavor_id:                 "s3.large.4" => <computed>
      flavor_name:               "s3.large.4" => "s3.large.4"
      image_id:                  "dbef5e67-5564-4cef-9e2c-71be3fc91ac5" => <computed>
      image_name:                "PA-VM-KVM-8.0.5" => "PA-VM-KVM-8.0.5"
      key_pair:                  "fgzx6022" => "fgzx6022"
      name:                      "vm-pa1" => "vm-pa1"
      network.#:                 "1" => "2" (forces new resource)
      network.0.access_network:  "false" => "false"
      network.0.fixed_ip_v4:     "172.16.1.84" => <computed>
      network.0.fixed_ip_v6:     "" => <computed>
      network.0.floating_ip:     "" => <computed>
      network.0.mac:             "fa:16:3e:a4:70:e8" => <computed>
      network.0.name:            "ddfa4776-b670-41cd-ad93-ce500882420f" => <computed>
      network.0.port:            "" => <computed>
      network.0.uuid:            "4b6e562c-cd79-408e-9f47-445d7457fd0a" => "4b6e562c-cd79-408e-9f47-445d7457fd0a"
      network.1.access_network:  "" => "false"
      network.1.fixed_ip_v4:     "" => <computed>
      network.1.fixed_ip_v6:     "" => <computed>
      network.1.floating_ip:     "" => <computed>
      network.1.mac:             "" => <computed>
      network.1.name:            "" => <computed>
      network.1.port:            "" => <computed>
      network.1.uuid:            "" => "aaeb255a-88ca-4e37-b11d-99b156e0dce9" (forces new resource)
      region:                    "eu-west-0" => <computed>
      security_groups.#:         "1" => "1"
      security_groups.779857138: "secgroup_pa" => "secgroup_pa"
      stop_before_destroy:       "false" => "false"

-/+ flexibleengine_compute_instance_v2.vm_pa[1] (new resource required)
      id:                        "e50a0a04-c9c9-412a-982f-13774a3df125" => <computed> (forces new resource)
      access_ip_v4:              "172.16.1.95" => <computed>
      access_ip_v6:              "" => <computed>
      all_metadata.%:            "0" => <computed>
      availability_zone:         "eu-west-0b" => <computed>
      flavor_id:                 "s3.large.4" => <computed>
      flavor_name:               "s3.large.4" => "s3.large.4"
      image_id:                  "dbef5e67-5564-4cef-9e2c-71be3fc91ac5" => <computed>
      image_name:                "PA-VM-KVM-8.0.5" => "PA-VM-KVM-8.0.5"
      key_pair:                  "fgzx6022" => "fgzx6022"
      name:                      "vm-pa2" => "vm-pa2"
      network.#:                 "1" => "2" (forces new resource)
      network.0.access_network:  "false" => "false"
      network.0.fixed_ip_v4:     "172.16.1.95" => <computed>
      network.0.fixed_ip_v6:     "" => <computed>
      network.0.floating_ip:     "" => <computed>
      network.0.mac:             "fa:16:3e:61:7c:76" => <computed>
      network.0.name:            "ddfa4776-b670-41cd-ad93-ce500882420f" => <computed>
      network.0.port:            "" => <computed>
      network.0.uuid:            "4b6e562c-cd79-408e-9f47-445d7457fd0a" => "4b6e562c-cd79-408e-9f47-445d7457fd0a"
      network.1.access_network:  "" => "false"
      network.1.fixed_ip_v4:     "" => <computed>
      network.1.fixed_ip_v6:     "" => <computed>
      network.1.floating_ip:     "" => <computed>
      network.1.mac:             "" => <computed>
      network.1.name:            "" => <computed>
      network.1.port:            "" => <computed>
      network.1.uuid:            "" => "aaeb255a-88ca-4e37-b11d-99b156e0dce9" (forces new resource)
      region:                    "eu-west-0" => <computed>
      security_groups.#:         "1" => "1"
      security_groups.779857138: "secgroup_pa" => "secgroup_pa"
      stop_before_destroy:       "false" => "false"

Expected Behavior

Network should be added to the instance, without forcing recreation.

Actual Behavior

Terraform wants to create new instance.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply

AK/SK not yet implemented in 1.1.0

Hi there,

README.md shows how to configure the Flexible Engine provider with the AccessKey/SecretKey, whereas this feature is not yet GA.

Terraform Version

$ terraform -v
Terraform v0.11.8
+ provider.flexibleengine v1.1.0
+ provider.openstack v1.9.0

Affected Resource(s)

Please list the resources as a list, for example:

  • provider.flexibleengine

Terraform Configuration Files

provider.tf

provider "flexibleengine" {
  access_key = "MY_ACCESS_KEY"
  secret_key = "MY_SECRET_KEY"
  auth_url    = "https://iam.eu-west-0.prod-cloud-ocb.orange-business.com/v3"
  region    = "eu-west-0"
}

Output

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

Error: Error running plan: 1 error(s) occurred:

* provider.flexibleengine: You must provide a password to authenticate

Expected Behavior

Plan should output the resources to be created/deleted/modified.

Actual Behavior

Terraform asks for a user/password.

Steps to Reproduce

  1. Create provider.tf with the above content and your AK/SK.
  2. export TF_LOG=DEBUG
  3. terraform plan

anti-affinity ECS Group when creating compute instances is not implemented!!!

Hello

Unless I missed something, when creating compute instances via Terraform it is possible to specify an anti-affinity ECS group for the instances, but the policy is not applied.

Expected Behavior
As specified in the documentation for anti-affinity ECS groups:

ECSs in this group must be deployed on different hosts.

Actual Behavior
The anti-affinity is not effective. The ECSs are launched on the same host.

openstack server show frontend-01
OS-EXT-SRV-ATTR:host | pod2.eu-west-0a
OS-EXT-SRV-ATTR:hypervisor_hostname | nova007@3
OS-EXT-SRV-ATTR:instance_name | instance-000d75c8
name | frontend-01
hostId | 551ec59027d385c0183b11673467ae71dcf0394dc061d854c4cbd64d
id | f1bcc1ea-ca2d-4704-8229-b87d495dfb28

openstack server show frontend-02
OS-EXT-SRV-ATTR:host | pod2.eu-west-0a
OS-EXT-SRV-ATTR:hypervisor_hostname | nova007@9
OS-EXT-SRV-ATTR:instance_name | instance-000d75c9
name | frontend-02
hostId | 551ec59027d385c0183b11673467ae71dcf0394dc061d854c4cbd64d (the Host )
id | 90432777-e814-45b0-a3c7-bd1fee0aaa44 (the instance created)

Terraform Version

$ terraform -v
Terraform v0.11.11

  • provider.flexibleengine v1.4.0

Affected Resource(s)

flexibleengine_compute_servergroup_v2",
flexibleengine_compute_instance_v2

Steps to Reproduce

Create an anti-affinity ECS Group via Terraform,

resource "flexibleengine_compute_servergroup_v2" "ecs_frontend_Group" {
  name     = "ecs_frontend_Group"
  policies = ["anti-affinity"]
}

Create two ECS with this anti-affinity ECS Group name.

resource "flexibleengine_compute_instance_v2" "frontend" {
  count           = "2"
  ........
  scheduler_hints {
    group = "${flexibleengine_compute_servergroup_v2.ecs_frontend_Group.id}"
  }
  ........
}

Thanks !

TF will force-new the share when re-executing apply without configuration changes

After I created an SFS resource, TF re-creates the share when running terraform apply again with nothing changed.

Terraform Version

Affected Resource(s)

Please list the resources as a list, for example:

  • sfs

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

resource "flexibleengine_sfs_file_system_v2" "share-file" {
  size         = 50
  name         = "myfile"
  access_to    = "${flexibleengine_networking_router_v2.schindler_vpc_01.id}"
  access_level = "rw"
  description  = "this is my test file"

  metadata = {
    "type" = "nfs"
  }
}

Debug Output

-/+ flexibleengine_sfs_file_system_v2.share-file (new resource required)
      id:                 "6b190f9f-2365-4a69-93fb-08e29eaa5436" => <computed> (forces new resource)
      access_level:       "rw" => "rw"
      access_rule_status: "active" => <computed>
      access_to:          "9029f166-831f-41bd-9877-95b8ff0f0b9e" => "9029f166-831f-41bd-9877-95b8ff0f0b9e"
      access_type:        "cert" => "cert"
      availability_zone:  "eu-west-0a" => "" (forces new resource)
      description:        "this is my test file" => "this is my test file"
      export_location:    "sfs-nas1.eu-west-0.prod-cloud-ocb.orange-business.com:/share-05dc1e79" => <computed>
      host:               "VPHOXVP28SFSDJ01@30bf0fac-8626-4d7d-b588-b6e44f3f095d#30bf0fac-8626-4d7d-b588-b6e44f3f095d" => <computed>
      metadata.%:         "1" => "1"
      metadata.type:      "nfs" => "nfs"
      name:               "myfile" => "myfile"
      region:             "eu-west-0" => <computed>
      share_access_id:    "e0b714cb-516f-431b-b5a5-54df242d8a4c" => <computed>
      share_proto:        "NFS" => "NFS"
      size:               "50" => "50"
      status:             "available" => <computed>
      volume_type:        "default" => <computed>

Panic Output

nothing

Expected Behavior

nothing to be done.

Actual Behavior

The old resource is deleted and a new one is created.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply
  2. terraform apply

References

N/A
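
The plan above shows the replacement is driven by availability_zone going from "eu-west-0a" to empty. Pinning it explicitly should keep the diff clean, assuming the argument is settable on this resource and the share really lives in eu-west-0a:

resource "flexibleengine_sfs_file_system_v2" "share-file" {
  size              = 50
  name              = "myfile"
  availability_zone = "eu-west-0a"   # pin to match the existing share and avoid the forced replacement
  access_to         = "${flexibleengine_networking_router_v2.schindler_vpc_01.id}"
  access_level      = "rw"
  description       = "this is my test file"

  metadata = {
    "type" = "nfs"
  }
}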

Error when creating NAT at first run

Hi,

Terraform Version

Terraform v0.11.13

Affected Resource(s)

  • flexibleengine_nat_gateway_v2.nat

Error

At the first run I get this error, and at the second run it works. I think the NAT gateway depends on the subnet, not on the network, but the API wants the network as a parameter. I'm unable to modify my stack to add a depends_on because it is not supported on modules.

module.subnet_private.flexibleengine_networking_network_v2.nwk: Creating...
  admin_state_up: "" => "true"
  name:           "" => "vpnssl-d-prv"
  region:         "" => "<computed>"
  shared:         "" => "<computed>"
  tenant_id:      "" => "<computed>"
module.subnet_public.flexibleengine_networking_network_v2.nwk: Creating...
  admin_state_up: "" => "true"
  name:           "" => "vpnssl-d-pub"
  region:         "" => "<computed>"
  shared:         "" => "<computed>"
  tenant_id:      "" => "<computed>"
module.subnet_private.flexibleengine_networking_network_v2.nwk: Creation complete after 7s (ID: bade2cd9-ce3e-46ad-9e13-f6130b369c30)
module.subnet_private.flexibleengine_networking_subnet_v2.snet: Creating...
  allocation_pools.#:         "" => "<computed>"
  cidr:                       "" => "192.168.100.0/27"
  dns_nameservers.#:          "" => "2"
  dns_nameservers.213311194:  "" => "100.126.0.41"
  dns_nameservers.2317528180: "" => "100.125.0.41"
  enable_dhcp:                "" => "true"
  gateway_ip:                 "" => "192.168.100.1"
  ip_version:                 "" => "4"
  name:                       "" => "vpnssl-d-prv"
  network_id:                 "" => "bade2cd9-ce3e-46ad-9e13-f6130b369c30"
  region:                     "" => "<computed>"
  tenant_id:                  "" => "<computed>"
module.subnet_public.flexibleengine_networking_network_v2.nwk: Creation complete after 7s (ID: 4501aafb-1292-4924-aba9-f2e174da1be8)
module.subnet_public.flexibleengine_networking_subnet_v2.snet: Creating...
  allocation_pools.#:         "" => "<computed>"
  cidr:                       "" => "192.168.100.32/27"
  dns_nameservers.#:          "" => "2"
  dns_nameservers.213311194:  "" => "100.126.0.41"
  dns_nameservers.2317528180: "" => "100.125.0.41"
  enable_dhcp:                "" => "true"
  gateway_ip:                 "" => "192.168.100.33"
  ip_version:                 "" => "4"
  name:                       "" => "vpnssl-d-pub"
  network_id:                 "" => "4501aafb-1292-4924-aba9-f2e174da1be8"
  region:                     "" => "<computed>"
  tenant_id:                  "" => "<computed>"
module.nat.flexibleengine_nat_gateway_v2.nat: Creating...
  description:         "" => "<computed>"
  internal_network_id: "" => "4501aafb-1292-4924-aba9-f2e174da1be8"
  name:                "" => "vpnssl-d-nat"
  region:              "" => "<computed>"
  router_id:           "" => "9a3e92e7-4475-45f4-8eda-1d6aebec6247"
  spec:                "" => "4"
  tenant_id:           "" => "<computed>"
module.subnet_public.flexibleengine_networking_subnet_v2.snet: Creation complete after 7s (ID: 084f52fa-b64f-475b-986a-f83c49f3eed9)
module.router.flexibleengine_networking_router_interface_v2.rt-pub: Creating...
  port_id:   "" => "<computed>"
  region:    "" => "<computed>"
  router_id: "" => "9a3e92e7-4475-45f4-8eda-1d6aebec6247"
  subnet_id: "" => "084f52fa-b64f-475b-986a-f83c49f3eed9"
module.subnet_private.flexibleengine_networking_subnet_v2.snet: Creation complete after 7s (ID: b5440154-e3d7-4a1d-ae19-020ec13c9157)
module.router.flexibleengine_networking_router_interface_v2.rt-prv: Creating...
  port_id:   "" => "<computed>"
  region:    "" => "<computed>"
  router_id: "" => "9a3e92e7-4475-45f4-8eda-1d6aebec6247"
  subnet_id: "" => "b5440154-e3d7-4a1d-ae19-020ec13c9157"
module.router.flexibleengine_networking_router_interface_v2.rt-pub: Creation complete after 8s (ID: 2dcd9850-2a29-4619-a31c-4af080af31f9)
module.router.flexibleengine_networking_router_interface_v2.rt-prv: Creation complete after 9s (ID: 6bddf328-af00-4389-91f8-3e2cc14dfe1f)

Error: Error applying plan:

1 error(s) occurred:

* module.nat.flexibleengine_nat_gateway_v2.nat: 1 error(s) occurred:

* flexibleengine_nat_gateway_v2.nat: Error creatting Nat Gateway: Bad request with: [POST https://nat.eu-west-0.prod-cloud-ocb.orange-business.com/v2.0/nat_gateways], error message: {"NeutronError": {"message": "Network 4501aafb-1292-4924-aba9-f2e174da1be8 does not contain any IPv4 subnet", "type": "NetworkHasNoSubnet", "detail": ""}}

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Terraform Configuration Files

variable "project" {}
variable "project_code" {}
variable "environment" {}
variable "environment_code" {}
variable "subnet_public" {}
variable "subnet_private" {}
variable "dns_primary" {}
variable "dns_secondary" {}

module "subnet_public" {
  source = "./subnet_public"

  name = "${var.project_code}-${var.environment_code}-pub"
  cidr = "${var.subnet_public}"
  dns_nameservers = ["${var.dns_primary}", "${var.dns_secondary}"]
}

module "subnet_private" {
  source = "./subnet_private"

  name   = "${var.project_code}-${var.environment_code}-prv"
  cidr   = "${var.subnet_private}"
  dns_nameservers = ["${var.dns_primary}", "${var.dns_secondary}"]
}

module "router" {
  source = "./router"

  name = "${var.project_code}-${var.environment_code}-vpc"
  subnet_public_id = "${module.subnet_public.subnet_public_id}"
  subnet_private_id = "${module.subnet_private.subnet_private_id}"
}

module "nat" {
  source = "./nat"

  name = "${var.project_code}-${var.environment_code}-nat"
  network_public_id  = "${module.subnet_public.network_public_id}"
  vpc_id  = "${module.router.vpc_id}"
}

output "network_private_id" {
  value = "${module.subnet_private.network_private_id}"
}

subnet_public/subnet_public.tf

variable "name" {}
variable "cidr" {}
variable "dns_nameservers" { type = "list" }

resource "flexibleengine_networking_network_v2" "nwk" {
  name = "${var.name}"
  admin_state_up = "true"
}

resource "flexibleengine_networking_subnet_v2" "snet" {
  name = "${var.name}"
  network_id = "${flexibleengine_networking_network_v2.nwk.id}"
  cidr = "${var.cidr}"
  gateway_ip = "${cidrhost(var.cidr, 1)}"
  ip_version = 4
  enable_dhcp = "true"
  dns_nameservers = "${var.dns_nameservers}"
}

output "subnet_public_cidr" { value = "${var.cidr}" }
output "subnet_public_id" { value = "${flexibleengine_networking_subnet_v2.snet.id}"}
output "network_public_id" { value = "${flexibleengine_networking_network_v2.nwk.id}"}

nat/nat.tf

variable "name" {}
variable "vpc_id" {}
variable "network_public_id" {}

resource "flexibleengine_nat_gateway_v2" "nat" {
  name   = "${var.name}"
  spec = "4"
  router_id = "${var.vpc_id}"
  internal_network_id = "${var.network_public_id}"
}
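
Judging from the timestamps in the apply output above, one possible cause is a race: the NAT gateway only depends on the network ID, so it is created in parallel with the subnet and the API still sees a network without any IPv4 subnet. Routing the network ID through the subnet resource forces the right ordering — a hedged sketch of the adjusted output (same value, different dependency):

# subnet_public/subnet_public.tf
output "network_public_id" {
  # Same network ID as before, but referenced via the subnet so that the
  # NAT gateway waits for the subnet to exist before being created.
  value = "${flexibleengine_networking_subnet_v2.snet.network_id}"
}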

name of the arguments required for flexibleengine_nat_gateway_v2

Hello

The argument names for flexibleengine_nat_gateway_v2 are misleading.

spec is named "type" in the GUI, and the values can be Small, Medium, Large, Extra-Large (not "1", "2", "3", "4")
router_id is actually the VPC id.
internal_network_id is actually the subnet ID

It would be easier for users if the arguments had the same names in the GUI and in the Terraform provider.
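
For reference, a minimal sketch of the current arguments, with comments mapping them to the GUI wording (the VPC v1 resource names are only illustrative):

resource "flexibleengine_nat_gateway_v2" "nat" {
  name                = "example-nat"
  # "spec" is called "Type" in the GUI: "1" = Small, "2" = Medium,
  # "3" = Large, "4" = Extra-Large.
  spec                = "1"
  # "router_id" is in fact the VPC ID.
  router_id           = "${flexibleengine_vpc_v1.vpc.id}"
  # "internal_network_id" is in fact the subnet ID.
  internal_network_id = "${flexibleengine_vpc_subnet_v1.subnet.id}"
}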

cheers

ELB Pool "Still creating" when using TCP and TERMINATED_HTTPS

Hi,

When I do this, Terraform gets stuck in "Still creating":

resource "flexibleengine_lb_listener_v2" "listener_https" {
  name = "https"
  protocol = "TERMINATED_HTTPS"
  protocol_port = 443
  loadbalancer_id = "${flexibleengine_lb_loadbalancer_v2.elb.id}"
  default_tls_container_ref = "${var.certificate_id}"
}

resource "flexibleengine_lb_pool_v2" "pool_https" {
  protocol    = "TCP"
  lb_method   = "ROUND_ROBIN"
  listener_id = "${flexibleengine_lb_listener_v2.listener_https.id}"
  admin_state_up = true
}

This is expected, because the API returns this error:

{
  "NeutronError": {
    "message": "Listener protocol TERMINATED_HTTPS and pool protocol TCP are not compatible.",
    "type": "ListenerPoolProtocolMismatch",
    "detail": ""
  }
}

The provider should probably validate this protocol combination up front instead of hanging.
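
For reference, a TERMINATED_HTTPS listener terminates TLS and forwards plain HTTP, so the pool apparently needs protocol HTTP rather than TCP — a minimal sketch of a compatible pool:

resource "flexibleengine_lb_pool_v2" "pool_https" {
  protocol       = "HTTP"
  lb_method      = "ROUND_ROBIN"
  listener_id    = "${flexibleengine_lb_listener_v2.listener_https.id}"
  admin_state_up = true
}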

Best regards,

Feature Request: Adding count to resource block for flexibleengine_csbs_backup_policy_v1

Hi there,

It would be great to have a way to use count inside the nested resource block of the flexibleengine_csbs_backup_policy_v1 resource.

For now, when using count on the flexibleengine_compute_instance_v2 resource, we also need to use count on flexibleengine_csbs_backup_policy_v1. This creates as many backup policies as there are instances, instead of a single policy covering all instances.

Being able to use count (or a list of instances) inside the nested resource block would be one way to resolve this; see the sketch at the end of this issue.

Terraform Version

terraform -v 
Terraform v0.11.11
+ provider.flexibleengine v1.5.0
+ provider.null v2.1.2
+ provider.openstack v1.18.0

Affected Resource(s)

Please list the resources as a list, for example:

  • flexibleengine_csbs_backup_policy_v1

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

   resource "flexibleengine_csbs_backup_policy_v1" "backup_policy_k8s_node" {
   name  = "backup-${flexibleengine_compute_instance_v2.k8s_node.*.name[count.index]}"
   count = "${ var.enable_backup ? var.number_of_k8s_nodes : 0 }"
   resource {
     id = "${flexibleengine_compute_instance_v2.k8s_node.*.id[count.index]}"
     type = "OS::Nova::Server"
     name = "${flexibleengine_compute_instance_v2.k8s_node.*.name[count.index]}"
   }
   scheduled_operation {
     enabled = true
     operation_type = "backup"
     retention_duration_days = "14"
     trigger_pattern = "BEGIN:VEVENT\nRRULE:FREQ=DAILY;INTERVAL=1;BYHOUR=19;BYMINUTE=00\nEND:VEVENT"
   }
 }

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply
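
As a possible interim approach, assuming the provider accepts several nested resource blocks in one policy (the schema exposes them as a list), a single policy covering two instances might look like this — a hedged sketch:

resource "flexibleengine_csbs_backup_policy_v1" "backup_policy_k8s_nodes" {
  name = "backup-k8s-nodes"

  # One nested block per instance to protect; repeated blocks are assumed
  # to be accepted because the schema treats them as a list.
  resource {
    id   = "${flexibleengine_compute_instance_v2.k8s_node.0.id}"
    type = "OS::Nova::Server"
    name = "${flexibleengine_compute_instance_v2.k8s_node.0.name}"
  }

  resource {
    id   = "${flexibleengine_compute_instance_v2.k8s_node.1.id}"
    type = "OS::Nova::Server"
    name = "${flexibleengine_compute_instance_v2.k8s_node.1.name}"
  }

  scheduled_operation {
    enabled                 = true
    operation_type          = "backup"
    retention_duration_days = "14"
    trigger_pattern         = "BEGIN:VEVENT\nRRULE:FREQ=DAILY;INTERVAL=1;BYHOUR=19;BYMINUTE=00\nEND:VEVENT"
  }
}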

ECS SystemDisk Size parameter

Hello,

By default the system disk size is set to 40 GB.
From the portal we can choose the size of this disk at build time.
Is it possible to do this from the Terraform provider? I didn't find anything in the documentation.
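
One possible workaround, assuming flexibleengine_compute_instance_v2 supports the OpenStack-style block_device block, is to boot the instance from a volume with an explicit size — a minimal sketch (flavor, key pair and image ID are placeholders):

resource "flexibleengine_compute_instance_v2" "example" {
  name        = "example"
  flavor_name = "s3.medium.2"
  key_pair    = "my-keypair"

  # Boot from a volume instead of the default 40 GB system disk.
  block_device {
    uuid                  = "<image-id>"
    source_type           = "image"
    destination_type      = "volume"
    volume_size           = 80
    boot_index            = 0
    delete_on_termination = true
  }
}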

Thanks

Jeff

flexibleengine_elb_listener documentation is wrong

Terraform Version

> terraform -v
Terraform v0.11.7

Affected Resource(s)

flexibleengine_elb_listener

Terraform Configuration Files

resource "flexibleengine_elb_listener" "listener_public_HTTP" {
  name = "listener_public_http"
  protocol = "HTTP"
  backend_protocol = "HTTP"
  protocol_port = 80
  backend_port = 80
  lb_algorithm = "roundrobin"
  loadbalancer_id = "${flexibleengine_elb_loadbalancer.elb_public.id}"
  session_sticky = "true"
  sticky_session_type = "insert"
  cookie_timeout = 1
}

Expected Behavior

Creation of the ressource

Actual Behavior

sticky_session_type is an unknown key

Steps to Reproduce

terraform apply

Solution

use session_sticky_type instead of sticky_session_type
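
With that key, the listener block from above would read as follows (a minimal sketch):

resource "flexibleengine_elb_listener" "listener_public_HTTP" {
  name                = "listener_public_http"
  protocol            = "HTTP"
  backend_protocol    = "HTTP"
  protocol_port       = 80
  backend_port        = 80
  lb_algorithm        = "roundrobin"
  loadbalancer_id     = "${flexibleengine_elb_loadbalancer.elb_public.id}"
  session_sticky      = "true"
  session_sticky_type = "insert"
  cookie_timeout      = 1
}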

MRS component_list serialization issue ?

Hi there,

Terraform Version

Terraform v0.10.7

Affected Resource(s)

flexibleengine_mrs_cluster_v1

Terraform Configuration Files

# Configure the FlexibleEngine Provider with AK/SK
# This will work with a single defined/default network, otherwise you need to specify network
# to fix errors about multiple networks found.
provider "flexibleengine" {
  region      = "eu-west-0"
}

# Create a MRS cluster
resource "flexibleengine_mrs_cluster_v1" "cluster1" {
  cluster_name = "mrs-cluster-sbenewFromTerraform"
  region = "eu-west-0"
  billing_type = 12
  master_node_num = 2
  core_node_num = 3
  master_node_size = "s1.4xlarge.linux.mrs"
  core_node_size = "s1.xlarge.linux.mrs"
  available_zone_id = "eu-west-0a"
  vpc_id = "3d8f211c-f5ef-4756-b0ea-0c7d064c30cd"
  subnet_id = "4cb41d69-d560-4836-a230-003b6ec77379"
  cluster_version = "MRS 1.3.0"
  volume_type = "SSD"
  volume_size = 100
  safe_mode = 0
  cluster_type = 0
  node_public_cert_name = "keyPairSB"
  cluster_admin_secret = ""
  component_list {
      component_name = "Hadoop"
  }
  component_list {
      component_name = "Spark"
  }
  component_list {
      component_name = "Hive"
  }
}

Debug Output

https://gist.github.com/nsabernierdev1/582b453d71957ec90570384b67621353#file-terraform-log

Expected Behavior

The MRS cluster should be created

Actual Behavior

The JSON request is not well formed:
Actual:
"component_list":[]interface {}{map[string]interface {}{"component_name":"Hadoop"}, map[string]interface {}{"component_name":"Spark"}, map[string]interface {}{"component_name":"Hive"}}
Should be:
"component_list":[
{"component_name":"Hadoop"},
{"component_name":"Spark"},
{"component_name":"Hive"}
],

Steps to Reproduce

  1. Build master release of flexibleengine provider.
  2. terraform apply

CCE : "cluster_version" field and service pack

Hello,
I have a problem with CCE cluster provisioning concerning the cluster_version field.
I use version "v1.11.3-r1". After provisioning, FE changes the version to "v1.11.3-r1.sp1".
If I then want to modify part of my stack, Terraform destroys my CCE cluster and creates a new one, because the version mismatch forces a new resource.
Is there a way to fix this in the provider source code, or is it a problem with the API?
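
Until the provider normalizes the returned version, a possible workaround is to ignore drift on that field with Terraform's standard lifecycle block — a hedged sketch (other cluster arguments omitted):

resource "flexibleengine_cce_cluster_v3" "cluster" {
  # ... other cluster arguments ...
  cluster_version = "v1.11.3-r1"

  lifecycle {
    # Ignore the ".sp1" suffix appended by the API after provisioning,
    # so the cluster is not destroyed and recreated on the next apply.
    ignore_changes = ["cluster_version"]
  }
}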
Thanks

Retrieving data makes Terraform crash with AK/SK authentication method

Hi there,

Terraform Version

$ terraform -v
Terraform v0.11.13
+ provider.flexibleengine v1.4.0

Affected Resource(s)

Please list the resources as a list, for example:

  • data flexibleengine_vpc_v1

Terraform Configuration Files

main.tf

data "flexibleengine_vpc_v1" "vpc" {
  name = "vpc-main"
}

provider.tf

provider "flexibleengine" {
  insecure = true
  access_key = "XXXXXXXXXXX"
  secret_key = "YYYYYYYYYYYYY"
  auth_url   = "https://iam.eu-west-0.prod-cloud-ocb.orange-business.com/v3"
  region     = "eu-west-0"
}

Panic Output

https://gist.github.com/osaluden/d282d520cc84eea7f03cc837437fee5f

Expected Behavior

Terraform plan should display the ID of the VPC named "vpc-main".

Actual Behavior

Terraform crashes at "plan" step

Steps to Reproduce

  1. terraform init
  2. terraform plan

Important Factoids

  • Terraform does not crash when run with the OpenStack user_name/password/tenant_id/domain_id method.
  • With the AK/SK authentication method, Terraform crashes only on "data" resources.
  • Other accounts within the same FE tenant do not crash with the AK/SK method.
  • I generated a new AK/SK pair; same issue.

Provisioner file copy is not working, throwing connection error

Terraform Version

Terraform v0.11.8

  • provider.flexibleengine v1.4.0
  • provider.template v2.1.0

Affected Resource(s)

Please list the resources as a list, for example:

  • provisioner copy file

Terraform Configuration Files

resource "flexibleengine_compute_instance_v2" "optisam_bastion_proxy" {
  name = "${var.platform_server_prefix}_optisam_bastion_proxy"
  some more lines

  provisioner "file" {
    source      = "yum.repo"
    destination = "/etc/yum.repos.d"

    connection {
      type        = "ssh"
      user        = "user"
      private_key = "${file("~/.ssh/key.pem")}"
    }
  }
}

Debug Output

Dial tcp :22: connectex: No connection could be
made because the target machine actively refused it.
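
A common cause of this error is that the provisioner does not know which address to connect to, or connects before the instance is reachable. Setting host explicitly on the connection block — for example to a floating IP associated with the instance — is worth trying. A hedged sketch (the floating IP resource name is only illustrative):

provisioner "file" {
  source      = "yum.repo"
  destination = "/etc/yum.repos.d/yum.repo"

  connection {
    type        = "ssh"
    user        = "user"
    private_key = "${file("~/.ssh/key.pem")}"
    # Point the provisioner at an address that is reachable from where
    # Terraform runs, e.g. the instance's floating IP.
    host        = "${flexibleengine_networking_floatingip_v2.fip.address}"
  }
}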

feature request: ability to specify the AZ of the standby node on RDS instances

Hello

Unless I missed something, when creating an RDS instance via Terraform it is not possible to specify the Availability Zone of the standby instance.
It is required to specify the availability zone of the RDS instance, but this applies to both the master and the standby instance, and they end up in the same AZ.

This makes the service unsuitable for a production environment, as there is no HA when the resource has been created with Terraform.

Terraform Version

$ terraform -v
Terraform v0.11.10

  • provider.flexibleengine v1.2.0

Affected Resource(s)

flexibleengine_rds_instance_v1

It would be great to see this implemented. For now we use Terraform for almost every other resource, but we deploy RDS manually.

Thanks !
