opennebula / terraform-provider-opennebula
Terraform provider for OpenNebula
Home Page: https://www.terraform.io/docs/providers/opennebula/
License: Mozilla Public License 2.0
I have an issue where the disk I want to use for booting the VM is not being used; the second disk I have added is used instead.
How do I force or ensure that OpenNebula uses a specific disk to boot from when using the provider?
resource "opennebula_virtual_machine" "vm" {
count = "${var.vm_count}"
name = "${var.vm_name}-${format("%02d", count.index+1)}"
permissions = "660"
template_id = "${opennebula_template.vm_template.id}"
cpu = "${var.cpu}"
vcpu = "${var.cpu}"
memory = "${var.memory}"
os {
arch = "x86_64"
boot = "disk0"
}
disk {
image_id = "${var.os_disk_imageid}"
size = "${var.os_disk_size}"
driver = "raw"
}
dynamic "disk" {
for_each = var.extra_disks
content {
image_id = disk.value["image_id"]
size = disk.value["size"]
driver = "raw"
}
}
nic {
network_id = 0
}
}
It seems the second disk is being assigned disk ID 0, even though it is declared after the disk I want to boot from.
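A hedged workaround (not verified against the provider): point the os.boot entry at whichever disk ID OpenNebula actually reports for the OS image, rather than assuming declaration order. If the extra disk ends up with disk_id 0 and the OS disk with disk_id 1, the os block would become:

```hcl
os {
  arch = "x86_64"
  # "diskN" refers to the DISK_ID OpenNebula assigned, not to the
  # declaration order in the Terraform config; confirm the actual
  # IDs with `onevm show <id>` before hardcoding this value.
  boot = "disk1"
}
```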
Context is not well managed on Update, nor on Read.
It should be possible to configure security groups for each Address range.
Provider version: 0.3
I have a simple resource:
resource "opennebula_virtual_machine" "test1" {
name = "bobrov-test1"
cpu = 1
vcpu = 1
memory = 1024
group = "mygroup"
template_id = data.opennebula_template.centos7.id
nic {
network_id = data.opennebula_virtual_network.mynetwork.id
}
}
The first terraform apply creates the VM successfully. But if I run apply a second time, it deletes my disk:
# opennebula_virtual_machine.test1 will be updated in-place
~ resource "opennebula_virtual_machine" "test1" {
id = "1990584"
name = "bobrov-test1"
# (15 unchanged attributes hidden)
- disk {
- computed_driver = "raw" -> null
- computed_size = 10240 -> null
- computed_target = "vda" -> null
- disk_id = 0 -> null
- image_id = 13305 -> null
- size = 0 -> null
}
# (2 unchanged blocks hidden)
}
When trying to use the context block in an OpenNebula template resource, it returns an error. From what I understand, this should be a new feature added in 0.2.0/0.2.1, so I'm confused as to why this is happening.
https://www.terraform.io/docs/providers/opennebula/r/template.html#context
When running terraform plan:
Error: Unsupported block type
on main.tf line 72, in resource "opennebula_template" "nginx_template":
72: context {
Blocks of type "context" are not expected here. Did you mean to define argument "context"? If so, use the equals sign to assign it a value.
Extract from main.tf file:
resource "opennebula_template" "nginx_template" {
name = "nginx_template"
cpu = 4
memory = 4096
group = "terraform"
permissions = "660"
context {
NETWORK = "YES"
HOSTNAME = "hostname"
USERNAME = "username"
}
graphics {
type = "VNC"
listen = "0.0.0.0"
}
os {
arch = "x86_64"
boot = ""
}
...
}
Output of terraform -v:
Terraform v0.12.28
+ provider.opennebula v0.2.1
If any more detail is needed let me know, thanks for all the work!
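As the error message hints, in this provider version context on the template resource is a map argument rather than a block, so it must be assigned with an equals sign. A sketch based on the config above:

```hcl
context = {
  NETWORK  = "YES"
  HOSTNAME = "hostname"
  USERNAME = "username"
}
```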
In case there are several zones on an OpenNebula cluster, it would be nice to support several zones (they must be part of an OpenNebula federation).
Computed is useless for tags.
Resource updates are marked as applied in the plan even when not applied on the cloud provider.
We don't check whether a given part can be updated or not, and there is no documentation.
Here's the quote from a team member on the problem:
For some resources modified post-spawn, like `tags` or context params like `START_SCRIPT[_BASE64]`,
the ONE provider indicates that changes have been applied with success, but in fact nothing appears in Sunstone.
It should inform the user that this resource is immutable,
or force-replace to apply it effectively (like AWS does with `user_data`) instead of silently failing.
Terraform offers the possibility to tag resources. Tags are fully custom.
Because it is possible to add custom attributes to some OpenNebula resources, this feature could be very interesting for anyone who wants to use OpenNebula custom attributes with Terraform tagging.
Hi,
when deploying VMs with the following snippet, the CONTEXT of the VM is gone: only what I have defined through Terraform is present, and the defaults from the "VM template" I am using are gone.
# data "template_file" "cloudinit" {
# template = "${file("cloud-init.yaml")}"
#}
resource "opennebula_virtual_machine" "demo" {
count = 1
name = "kubespray1-${count.index}"
cpu = 2
vcpu = 2
memory = 2024
permissions = "660"
template_id = 2
context = {
NETWORK = "YES"
SET_HOSTNAME = "$NAME"
USER_DATA = "${data.template_file.cloudinit.rendered}"
START_SCRIPT = "yum install -y git"
}
graphics {
type = "VNC"
listen = "0.0.0.0"
}
os {
arch = "x86_64"
boot = "disk0"
}
disk {
image_id = 1
size = 10000
target = "vda"
driver = "qcow2"
}
nic {
# model = "virtio-pci-net"
network_id = 1
}
}
output "vm_ips" {
value = "${join(",", opennebula_virtual_machine.demo.*.ip)}"
}
During a terraform plan, if a resource has some parameters set, the data source expects to have them as well, because the data source and the resource share the "Read" function.
Terraform detects a change because quotas is set as a Set with empty elements in the Terraform state, even if no quotas are defined in the user or groups.
It would be very useful to have the ability to handle services with Terraform.
It is not listed in either the resources or the limitations in the README.
Is it already working?
Under HTML doc there are still:
terraform 0.13.5
opennebula provider 0.2.2
Template ID 0 is valid from Sunstone and onetemplate, but when used as template_id = 0 in opennebula_virtual_machine, it fails. A similar error happens with disk image_id = 0.
With larger virtual machine images, or poorly performing storage, the VM instantiation process seems to time out during the file-copy part of the process at around 3 minutes. This seems an extremely short timeout for VM provisioning (I believe the equivalent in OpenStack and AWS is ~15 minutes) and causes a terraform apply to fail. Re-running will, of course, continue successfully as the file has already been copied.
Would it be possible to extend the default timeout, or expose it as an argument?
Thanks!
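A sketch of what an exposed timeout could look like; the timeout argument below (in minutes) is an assumption modeled on what later provider releases expose, not something guaranteed in the version discussed here:

```hcl
resource "opennebula_virtual_machine" "vm" {
  # ... existing arguments ...

  # Assumed argument: raise the provisioning wait from the default
  # ~3 minutes to 15 minutes to survive slow image copies.
  timeout = 15
}
```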
To track VM state changes we rely on polling, calling the RPC method one.vm.info.
This method returns all VM attributes, but we only need the VM state; this is wasteful, as a VM may have a lot of attributes (history records, disks, NICs...).
Instead, pool methods allow retrieving a subset of all VM data and filtering VMs by a range of IDs.
There is no other way to restrict the amount of information we want to retrieve.
So, in the provider, instead of doing
vms, err := ctrl.VM(vmID).Info(false)
We could do:
vms, err := ctrl.VMs().Info(parameters.PoolWhoAll, vmID, vmID, -1)
The trick is to use the same VM ID at start and end of the range.
Sometimes errors are not well managed.
Tainting a network security group with VMs attached causes terraform apply to fail with:
Error: OpenNebula error [ACTION]: [one.secgroup.delete] Cannot delete security group. The security group has VMs using it
Surely the expected behavior would be that Terraform understands the dependency graph, and taints the VMs, causing them to be redeployed with the security group?
I'm using the template resource and have found it impossible to get the resource to pass through a terraform apply without updating.
I've updated to the latest changes in master, and at first I experienced whitespace issues which were giving false positives. After fixing those, the diff shows no actual differences in the template but still requires an update.
I even used a template data source and passed in the very same template but it still updates each time.
Looking at the code, it's clear that it is just a simple string comparison, so it is no real surprise that it doesn't work reliably.
I noticed there was talk in another issue of creating a schema for the template which would help, but if passing a string in as a template is to be retained, there should be a way of normalising the template to avoid whitespace differences affecting the comparison.
Allow attaching/detaching disks to a running/poweroff VM.
The implementation takes its decision based on a comparison of the image_id argument between the newly configured disks and the disks in the Terraform state.
Operations are done in this order:
- if an image_id is found in the state but not in the new config, the disk is considered attached, so it will be detached based on the disk_id attribute (which is the disk number);
- if an image_id is found in the new config but not in the state, that image ID will be attached.
Between each operation, wait for the state to come back to Running/Poweroff, and check disk presence (the attach method is async).
These are the states from which it's possible to call attach/detach actions.
So, by the way, I updated and refactored the method that waits on VM state and, for consistency, the method that waits on image state.
PR: #59
Feel free to drop a comment if you see some problem with this behavior
Terraform crashes when using hold_size and ip_hold.
It seems that the CI of the repository doesn't work anymore.
Here are some filtered logs.
$ rvmsudo /usr/share/one/install_gems --yes
...
Using nokogiri 1.10.9
...
Bundle complete! 44 Gemfile dependencies, 95 gems now installed.
Use `bundle info [gemname]` to see where a bundled gem is installed
$ source .travis/before_script/oned.sh
Ignoring augeas-0.6.4 because its extensions are not built. Try: gem pristine augeas --version 0.6.4
...
Ignoring nokogiri-1.10.9 because its extensions are not built. Try: gem pristine nokogiri --version 1.10.9
...
Error: nokogiri gem not installed.
The command "source .travis/before_script/oned.sh" failed and exited with 255 during .
Full logs are here: https://travis-ci.org/github/OpenNebula/terraform-provider-opennebula/builds/733708888
Update methods don't all work the same way; there is some refactoring to do.
Some of them use the Partial and SetPartial methods, some others don't.
As a side note, SetPartial is deprecated in later versions of the Terraform SDK (but we don't use the SDK for now): https://www.terraform.io/docs/extend/guides/v2-upgrade-guide.html#removal-of-helper-schema-resourcedata-setpartial
Some of them don't call the Read method at the end of the update.
I modified the VM update method in a previous PR (NIC update), but I still have some points to figure out around partial update of the state in case of update failure.
Is it possible to have VMs in the pending state when they are deployed?
Currently the provider only waits for the running state, even if the VM resource is defined to stay in the pending state when created. Terraform eventually times out after 3m.
When I define a virtual network like the following:
resource "opennebula_virtual_network" "team_21_isp" {
name = "team_21_isp"
permissions = "660"
group = "opennebula-admins"
physical_device = "vlan1121"
type = "bridge"
gateway = "10.72.21.1"
mtu = 9000
security_groups = [ 0 ]
clusters = [ 103 ]
tags = {
environment = "team21"
}
ar = [ {
ar_type = "IP4",
size = 253,
ip4 = "10.72.22.2"
} ]
}
I receive the following:
Error: Unsupported argument
on main.tf line 41, in resource "opennebula_virtual_network" "team_21_isp":
41: ar = [ {
An argument named "ar" is not expected here. Did you mean "id"?
My Terraform version is 0.13.4, with OpenNebula provider version 0.2.2. I see in the following provider code that ar is supposed to be a valid parameter, so I'm really not sure what's going on here. https://github.com/OpenNebula/terraform-provider-opennebula/blob/v0.2.2/opennebula/resource_opennebula_virtual_network.go#L179
Regression caused by: b071b27.
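A possible workaround, assuming this release only accepts the repeated-block syntax (the form used by the working opennebula_virtual_network example later in this document): declare each address range as its own ar block instead of a list attribute:

```hcl
ar {
  ar_type = "IP4"
  size    = 253
  ip4     = "10.72.22.2"
}
```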
A Terraform user suggested making all arguments of the template disk schema optional (for now, only image_id is required).
Here is the original message:
The idea behind this is to have a map that can be defined, for example, as:
disk {}
or
disk {
image_id = null
size = 0
}
that returns {} too, without error, during the instance creation process. I think that will give us more flexibility on VM creation.
We need to undeploy VMs instantiated in OpenNebula, and we would like to be able to use Terraform to do this.
As mentioned in OpenNebula's docs, one can use onevm to resume from the following states: STOPPED, SUSPENDED, UNDEPLOYED, POWEROFF, UNKNOWN.
A suggestion would be to add a desired_state attribute, which supports all the states which can be resumed from. To change between inactive states, the virtual machine could be restarted and then the relevant action can be called.
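A sketch of that proposal; the desired_state attribute below is hypothetical and does not exist in the provider:

```hcl
resource "opennebula_virtual_machine" "vm" {
  # ... existing arguments ...

  # Hypothetical attribute: leave the VM in an inactive state after
  # apply; any resumable state above (e.g. UNDEPLOYED) could be a value.
  desired_state = "POWEROFF"
}
```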
Hey guys, I was wondering how we can specify the CPU MODEL while using opennebula_virtual_machine. Can you please help with the same?
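A hedged sketch of an answer, assuming the provider exposes a cpumodel block (mapping to OpenNebula's CPU_MODEL/MODEL template attribute in recent releases; verify against the docs of the version in use):

```hcl
resource "opennebula_virtual_machine" "vm" {
  # ... existing arguments ...

  # Assumed block: forwarded as CPU_MODEL = [ MODEL = "..." ]
  cpumodel {
    model = "host-passthrough"
  }
}
```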
Regression introduced by 7f2d51e
As proposed in this comment: #24 (comment)
We could add some new schemas parts for VM and template resources.
Draft proposal:
We could add the field description on both (but also on images... etc.)
Add to Template resource:
hypervisor
sched_requirements
user_inputs
EDIT: a user asked for another placement section attribute:
sched_ds_requirements
Feel free to drop a comment to update this list
At root of VM schema, there is an ip attribute defined:
"ip": {
Type: schema.TypeString,
Computed: true,
Description: "Primary IP address assigned by OpenNebula",
},
It is filled with the first IP address of the list of NICs (in two places):
for i, nic := range nics {
...
if i == 0 {
d.Set("ip", nicRead["computed_ip"])
}
}
This was added in initial code of the provider and we may want to remove it in later code refactoring.
For a Virtual Machine resource, if several disks or NICs are attached, they are attached in a random order.
This is because the disk and nic parameters are of type TypeSet. Because the order can be important, change the type of these parameters to TypeList.
This issue might be related to #1
It would be very interesting to support VM disk resizing, to hot-resize a VM disk.
I can't thank you all enough for your hard work on this provider - it's truly invaluable.
I fully appreciate the decision to focus on VMs and networking, as client workloads generate far more operational effort than managing the cluster itself.
Are there any plans to add vRouter support?
At the moment I am having to build vRouters using null-resource and firing onevrouter commands, which seems more than a little primitive given the elegance of using a TF provider.
Is there a 'features roadmap' anywhere?
I see some mention of a roadmap/timeline in https://github.com/terraform-providers/terraform-provider-opennebula/issues/4 so I assume the work has been on the features, not the future?
Thank you again, team - this provider really adds so much value to OpenNebula.
Hi guys,
I have tested with both Terraform 11 and 12.
I have tested with network and image resources and the same happens.
I am using MacOS Mojave 10.14.6.
Actually it creates the resource, but it shows an error message at the end, which is strange.
opennebula_virtual_network.vnet: Destroying... [id=17]
opennebula_virtual_network.vnet: Destruction complete after 1s
opennebula_virtual_network.vnet: Creating...
Error: resource not found
on main.tf line 7, in resource "opennebula_virtual_network" "vnet":
7: resource "opennebula_virtual_network" "vnet" {
This is the code:
provider "opennebula" {
endpoint = "${var.one_endpoint}"
username = "${var.one_username}"
password = "${var.one_password}"
}
resource "opennebula_virtual_network" "vnet" {
name = "4F_bridge_network"
permissions = "660"
group = "terraform"
bridge = "br0"
physical_device = "enp8s0"
type = "bridge"
mtu = 1500
ar {
ar_type = "IP4"
size = 16
ip4 = "192.168.0.30"
}
dns = "192.168.0.1"
gateway = "192.168.0.1"
security_groups = [ 0 ]
}
It creates the resource, but because of the error, I cannot send the output to another module.
I could not find anything related to this.
Could you help here?
When setting one or more specific cluster ids without the default cluster in the list (typically 0), the default cluster gets implicitly added.
Given the example:
resource "opennebula_virtual_network" "vnet" {
name = "tarravnet"
permissions = "660"
group = "${opennebula_group.group.name}"
bridge = "br0"
physical_device = "eth0"
type = "fw"
mtu = 1500
ar = [ {
ar_type = "IP4",
size = 16
ip4 = "172.16.100.101"
} ]
dns = "172.16.100.1"
gateway = "172.16.100.1"
security_groups = [ 0 ]
clusters = [100]
}
This will result in:
clusters = [
0,
100,
]
The sunstone frontend also shows that this default cluster was added implicitly
I believe this is caused by the upstream virtual network controller in the api adding the default cluster automatically if none is specified:
https://github.com/OpenNebula/one/blob/master/src/oca/go/src/goca/virtualnetwork.go#L122-L130
Then this module just adds the list of clusters to the network after creation, without clearing the default first. Compounding this, it seems that this resource doesn't refresh the list of clusters on update either, so the list is never checked.
Previously created here: #42
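Until the provider clears or refreshes the cluster list, one hedged workaround is to declare the default cluster explicitly, so the configuration matches what OpenNebula actually sets (this does not remove cluster 0, it only keeps state and reality consistent):

```hcl
# List the implicitly added default cluster (0) alongside the
# intended one so the config reflects what the frontend shows.
clusters = [0, 100]
```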
Whenever I run terraform plan/apply, Terraform always wants to update the VM in place even though I have made no changes.
I removed the graphics section completely from my template and from my resource, but Terraform still wants to change the VM in place (graphics):
~ resource "opennebula_virtual_machine" "vm_" {
cpu = 4
gid = 0
gname = "oneadmin"
id = "154"
instance = "test-slv-07"
ip = "10.71.155.13"
lcmstate = 3
memory = 4096
name = "liveops-test-slv-07"
pending = false
permissions = "600"
state = 3
template_id = 26
uid = 4
uname = "deployer"
vcpu = 4
disk {
driver = "raw"
image_id = 4
size = 102400
target = "vda"
}
- graphics {}
nic {
ip = "10.71.155.13"
mac = "XX"
network = "Network"
network_id = 0
nic_id = 0
security_groups = [
0,
]
}
os {
arch = "x86_64"
boot = "disk0"
}
Also, I am seeing the same issue when creating templates: it updates the templates in place even when there are no changes.
For the template resource, the template attribute is more than a TypeString (it can be considered a string, but some limitations appear due to string formatting).
It can contain a collection of elements that follow one of these formats:
pairs: key = value
vector: key = [ key1 = value1, key2 = value2 ]
For reference, the OpenNebula lex file is here.
Some of these elements are well known, such as the NIC, DISK, and CONTEXT vectors, but custom elements can be added as well.
The existing schema from the VM could be partly reused for these parts.
We should improve the schema to handle this template part.
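To illustrate the two formats, here is a hedged sketch of a raw template string passed through the template attribute discussed above (the attribute name and heredoc usage are assumptions based on the resource as described here, not a verified example):

```hcl
resource "opennebula_template" "example" {
  name = "example"

  # CUSTOM_KEY is a pair element; CONTEXT is a vector element.
  template = <<-EOT
    CUSTOM_KEY = "a pair element"
    CONTEXT = [
      NETWORK = "YES",
      SET_HOSTNAME = "$NAME" ]
  EOT
}
```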
Firstly, congratulations on making this an official Terraform provider!
I am going to be using this for my clusters and lab, and there are a few resources that I would like to see that are currently missing.
The most important ones to me are Hosts, Clusters & Datastores.
I wondered if these are planned for the near future, and if they are, what kind of timeline is anticipated?
This issue is just about some code cleaning:
This PR added ACL methods to terraform provider: https://github.com/terraform-providers/terraform-provider-opennebula/pull/25
Later, these methods were added to Goca, with this PR: OpenNebula/one#4980
Now, these ACL methods should be removed from the provider side, the provider should only use the ACL methods from Goca.
Ran Terraform 0.13.5 with required_providers source = "opennebula/openebula"; it failed. However, it works with terraform-providers/opennebula, although with a warning.
Initializing the backend...
Initializing provider plugins...
Error: Failed to install provider
Error while installing opennebula/opennebula v0.2.2: provider
registry.terraform.io/opennebula/opennebula 0.2.2 is not available for
linux_amd64
Initializing the backend...
Initializing provider plugins...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
Warning: registry.terraform.io: For users on Terraform 0.13 or greater, this provider has moved to OpenNebula/opennebula. Please update your source in required_providers.
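Following that warning, a corrected required_providers block would look like this (note both the moved namespace OpenNebula/opennebula and the full spelling, since "openebula" in the failing config drops an "n"; the version constraint is illustrative):

```hcl
terraform {
  required_providers {
    opennebula = {
      # Namespace taken from the registry warning above.
      source  = "OpenNebula/opennebula"
      version = "~> 0.2"
    }
  }
}
```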
Issue originally opened by @maartendq here: #22
I'm looking for a way to add a VM GROUP (name or ID + role) when instantiating a virtual machine (in order to set anti-affinity). From what I can see in your code, it's currently not possible, and looking at the Goca code in OpenNebula/one, it looks like it's not implemented there yet either.
Any ideas on how I could set or do this at the moment?
Same as #64, but for NICs.
To prevent accidental operations like resource deletion, OpenNebula allows locking resources.
We could enable this in Terraform via a new schema argument, and lock the resource at creation (or only via updates).
Then, destroying a locked resource would be done in two steps:
Some PRs add/remove the ExpectNonEmptyPlan flag on the tests, sometimes just to make them pass.
We are probably missing something; this should be investigated.
Some terraform doc for reference
To illustrate, here are some PRs that switch these flags at the same place in the template and virtual machine tests:
Terraform version = 0.13.5
When I try to use this provider on macOS, I get this error:
variable "one_endpoint" {}
variable "one_username" {}
variable "one_password" {}
provider "opennebula" {
endpoint = var.one_endpoint
username = var.one_username
password = var.one_password
}
# Create a new group of users to the OpenNebula cluster
resource "opennebula_group" "group" {
# ...
}
terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of opennebula/opennebula...
Error: Failed to install provider
Error while installing opennebula/opennebula v0.2.2: provider
registry.terraform.io/opennebula/opennebula 0.2.2 is not available for
darwin_amd64
For VM resources, we could allow updating configuration parts.
Note: for VMs, there are two RPC VM update methods, depending on the part to update.
Managing OpenNebula users via Terraform can help administrators automate the full scope.
/!\ Only members of the "oneadmin" group will be able to use this resource.