kreuzwerker / terraform-provider-docker
Terraform Docker provider
License: Mozilla Public License 2.0
Requires Terraform ≥ 0.12.25
The provider can deploy a docker container using the SSH protocol:
provider "docker" {
host = "ssh://user@remote-host:22"
}
However, there seems to be no explicit way to provide the path to an SSH identity file (private key).
The only implicit workaround seems to be creating an SSH config file on the machine running Terraform to specify the identity file.
Add two options:
identity_file
passphrase
Use this information in the SSH system call.
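A sketch of what the proposal could look like in provider configuration (identity_file and passphrase are hypothetical attribute names from this request, not currently supported by the provider):

```hcl
provider "docker" {
  host = "ssh://user@remote-host:22"

  # Hypothetical options proposed in this issue -- not implemented yet:
  # identity_file = "~/.ssh/id_ed25519"
  # passphrase    = var.ssh_key_passphrase
}
```

Until something like this exists, the identity file can only be supplied implicitly via an SSH config entry on the machine running Terraform.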
Issue in archived repo: hashicorp/terraform-provider-docker#268
Related: #18
This issue was originally opened by @pumpingiron as hashicorp/terraform-provider-docker#299. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
The boolean option referenced in the title doesn't work: even when set to true, I get two networks with the same name but different IDs.
Have you got a clue?
This issue was originally opened by @stoically as hashicorp/terraform-provider-docker#295. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
Modifying labels should recreate the container
Modifying labels tries to modify the running container
This issue was originally opened by @viceice as hashicorp/terraform-provider-docker#282. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
Terraform v0.12.29
+ provider.docker v2.7.1
+ provider.helm v1.2.4
+ provider.kubernetes v1.11.4
terraform {
required_providers {
docker = "2.7.1"
}
}
provider "docker" {
host = "ssh://root@docker-worker2"
}
resource "docker_image" "nginx" {
name = "nginx:1.18.0-alpine@sha256:29dc24ed982665eb88598e0129e4ec88c2049fafc63125a4a640dd67529dc6d4"
}
resource "docker_container" "nginx" {
name = "nginx-example"
image = "nginx:1.18.0-alpine@sha256:29dc24ed982665eb88598e0129e4ec88c2049fafc63125a4a640dd67529dc6d4"
restart = "unless-stopped"
start = true
}
https://gist.github.com/viceice/b1a3b6a462988cf53c49a247b4d090da
pull docker image and create container
fails with error
Unable to read Docker image into resource: Unable to find or pull image nginx:1.18.0-alpine@sha256:29dc24ed982665eb88598e0129e4ec88c2049fafc63125a4a640dd67529dc6d4
Unable to create container with image nginx:1.18.0-alpine@sha256:29dc24ed982665eb88598e0129e4ec88c2049fafc63125a4a640dd67529dc6d4: Unable to find or pull image nginx:1.18.0-alpine@sha256:29dc24ed982665eb88598e0129e4ec88c2049fafc63125a4a640dd67529dc6d4
terraform apply -auto-approve -parallelism=1
Simple docker install on ubuntu 18.04 with btrfs
Server:
Containers: 13
Running: 0
Paused: 0
Stopped: 13
Images: 82
Server Version: 19.03.12
Storage Driver: btrfs
Build Version: Btrfs v4.15.1
Library Version: 102
Logging Driver: journald
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.3.0-1032-azure
Operating System: Ubuntu 18.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.793GiB
Name: XXXX
ID: XXXXX
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Hi team.
I have the following feature proposal:
Resource Docker Container
Provider
Thanks for your time!
This issue was originally opened by @mattwelke as hashicorp/terraform-provider-docker#303. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
Terraform v0.12.29 (using old version intentionally because I'm following a tutorial that references particular modules that don't yet support 0.13)
n/a
versions.tf:
terraform {
required_version = "~> 0.12"
required_providers {
google = "~> 2.16"
random = "~> 2.2"
docker = "~> 2.3"
}
}
providers.tf:
provider "google" {
credentials = file("account.json")
project = var.gcp.project_id
region = var.gcp.region
}
provider "docker" {
host = "tcp://127.0.0.1:2375/"
}
variables.tf:
variable "gcp" {
type = object({
project_id = string
region = string
})
}
terraform.tfvars:
gcp = {
project_id = "REDACTED"
region = "us-east1"
}
outputs.tf:
output "addresses" {
value = {
gcp1 = module.gcp1.network_address
gcp2 = module.gcp2.network_address
loadbalancer = module.loadbalancer.network_address
}
}
main.tf:
module "gcp1" {
source = "scottwinkler/vm/cloud//modules/gcp"
project_id = var.gcp.project_id
environment = {
name = "GCP 1"
background_color = "red"
}
}
module "gcp2" {
source = "scottwinkler/vm/cloud//modules/gcp"
project_id = var.gcp.project_id
environment = {
name = "GCP 2"
background_color = "blue"
}
}
module "loadbalancer" {
source = "scottwinkler/vm/cloud//modules/loadbalancer"
addresses = [
module.gcp1.network_address,
module.gcp2.network_address,
]
}
Please provide a link to a GitHub Gist containing the complete debug output: https://gist.github.com/mattwelke/ce34d58c1281d49930f81caaa257800e
n/a
A docker container being created.
An error applying Terraform config when it tried to use the Docker provider.
docker ps
terraform apply
I ensured I had Docker set up to be usable from within WSL 2 first. I was able to run commands like docker ps:
> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
But then, when running terraform apply, it displayed that error, saying it couldn't reach the daemon. I tried using port 2376 instead of 2375 in the Terraform config, but that didn't work. I also tried enabling this option in Docker Desktop on Windows, but that also made no difference (even when using port 2375 in the Terraform config).
I'm using Ubuntu 20.10 in WSL 2.
When troubleshooting, I tried the steps in the issue hashicorp/terraform-provider-docker#210, but it didn't help.
This issue was originally opened by @taiidani as hashicorp/terraform-provider-docker#302. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
0.12.29
New resource request, for a docker_secret data source.
My team uses an external automation to load Secrets into our Swarm. However, we're also trying to use the docker_service resource. That resource requires the secret_id, which is an output of the docker_secret resource...but that resource does not support importing.
Expected behavior of the provider would be a docker_secret data source to match the docker_secret resource. The data source would be able to take the name of the secret and output its id.
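A sketch of what the requested data source could look like (hypothetical syntax, not implemented; the names db_password and app are illustrative, and the secrets block mirrors the existing docker_service schema):

```hcl
# Hypothetical data source proposed in this issue:
data "docker_secret" "db_password" {
  name = "db_password"
}

resource "docker_service" "app" {
  name = "app"
  task_spec {
    container_spec {
      image = "myimage:latest"
      secrets {
        secret_id   = data.docker_secret.db_password.id
        secret_name = "db_password"
        file_name   = "/run/secrets/db_password"
      }
    }
  }
}
```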
We have to either not use Secrets in our Swarm when using this provider, or manually look up and then hardcode their ids into our docker_service definitions.
Attempt to define a docker_service resource that uses a secret that was not created by a docker_secret resource.
I'd be happy to give a PR a shot if there is an interest from the maintainers!
Hi there,
Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.
Terraform v0.13.5
+ provider registry.terraform.io/digitalocean/digitalocean v2.2.0
+ provider registry.terraform.io/kreuzwerker/docker v2.8.0
docker_container
data "docker_registry_image" "container" {
name = "${var.registry_address}/xxxxxxx/container"
}
resource "docker_image" "container" {
name = "${data.docker_registry_image.container.name}@${data.docker_registry_image.container.sha256_digest}"
keep_locally = true
pull_triggers = [data.docker_registry_image.container.sha256_digest]
}
resource "docker_container" "container" {
name = "container"
image = docker_image.container.latest
}
docker_container.container must be replaced
-/+ resource "docker_container" "container" {
attach = false
+ bridge = (known after apply)
~ command = [
- "npm",
- "run",
- "--prefix",
- "./packages/container/backend",
- "start",
] -> (known after apply)
+ container_logs = (known after apply)
- cpu_shares = 0 -> null
- dns = [] -> null
- dns_opts = [] -> null
- dns_search = [] -> null
~ entrypoint = [
- "docker-entrypoint.sh",
] -> (known after apply)
~ env = [] -> (known after apply)
+ exit_code = (known after apply)
~ gateway = "172.18.0.1" -> (known after apply)
- group_add = [] -> null
~ hostname = "81d8ace62633" -> (known after apply)
~ id = "81d8ace62633e78d620321b7168d46059af6011864b34958eab2614c3d89e0c5" -> (known after apply)
image = "sha256:b7653b3e02763582647ea37bc6220db767478e8b484a47702114d8d8c256b560"
~ ip_address = "172.18.0.3" -> (known after apply)
~ ip_prefix_length = 16 -> (known after apply)
~ ipc_mode = "private" -> (known after apply)
- links = [] -> null
log_driver = "json-file"
log_opts = {
"max-file" = "10"
"max-size" = "100m"
}
logs = false
- max_retry_count = 0 -> null
- memory = 0 -> null
- memory_swap = 0 -> null
must_run = true
name = "container"
~ network_data = [
- {
- gateway = "172.18.0.1"
- global_ipv6_address = ""
- global_ipv6_prefix_length = 0
- ip_address = "172.18.0.3"
- ip_prefix_length = 16
- ipv6_gateway = ""
- network_name = "network"
},
] -> (known after apply)
- network_mode = "default" -> null
- privileged = false -> null
- publish_all_ports = false -> null
read_only = false
remove_volumes = true
restart = "no"
rm = false
~ shm_size = 64 -> (known after apply)
start = true
- sysctls = {} -> null
- tmpfs = {} -> null
- working_dir = "/workdir" -> null # forces replacement
}
The fact that I am not defining "working_dir" in "docker_container" should not always trigger a forced replacement.
Every time I run terraform apply, the container gets replaced because I did not define "working_dir" in "docker_container".
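A possible workaround sketch until the diff behavior is fixed, assuming the image's WORKDIR really is /workdir as the plan output above suggests: set the attribute explicitly so the planned value matches the running container and no replacement is forced.

```hcl
resource "docker_container" "container" {
  name  = "container"
  image = docker_image.container.latest

  # Workaround sketch: mirror the image's WORKDIR so Terraform
  # does not see a drift from "/workdir" to null.
  working_dir = "/workdir"
}
```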
Hi there,
Looks like the "env" and "user" vars are forcing a container replacement each time, even though they don't change.
Terraform v0.14.2
In brief:
docker_container.ork-haproxy must be replaced
-/+ resource "docker_container" "ork-haproxy" {
+ bridge = (known after apply)
~ command = [
- "haproxy",
- "-f",
- "/usr/local/etc/haproxy/haproxy.cfg",
] -> (known after apply)
+ container_logs = (known after apply)
- cpu_shares = 0 -> null
- dns = [] -> null
- dns_opts = [] -> null
- dns_search = [] -> null
~ entrypoint = [
- "/docker-entrypoint.sh",
] -> (known after apply)
+ env = (known after apply) # forces replacement
+ exit_code = (known after apply)
~ gateway = "172.19.0.1" -> (known after apply)
- group_add = [] -> null
~ hostname = "b20122e5bc19" -> (known after apply)
~ id = "b20122e5bc1925196208191ecf4d117749ae50dd95b41135112b5e367358f06b" -> (known after apply)
~ ip_address = "172.19.0.5" -> (known after apply)
~ ip_prefix_length = 16 -> (known after apply)
~ ipc_mode = "shareable" -> (known after apply)
- links = [] -> null
- log_opts = {} -> null
- max_retry_count = 0 -> null
- memory = 0 -> null
- memory_swap = 0 -> null
name = "haproxy"
~ network_data = [
- {
- gateway = "172.19.0.1"
- global_ipv6_address = ""
- global_ipv6_prefix_length = 0
- ip_address = "172.19.0.5"
- ip_prefix_length = 16
- ipv6_gateway = ""
- network_name = "mail_internal"
},
] -> (known after apply)
- network_mode = "default" -> null
- privileged = false -> null
- publish_all_ports = false -> null
+ remove_volumes = true
~ shm_size = 64 -> (known after apply)
- sysctls = {} -> null
- tmpfs = {} -> null
- user = "haproxy" -> null # forces replacement
golangci-lint detects many lint errors, so I propose fixing them and introducing golangci-lint in CI.
$ golangci-lint run
docker/resource_docker_network.go:252:6: `suppressIfIPAMConfigWithIpv6Changes` is unused (deadcode)
func suppressIfIPAMConfigWithIpv6Changes() schema.SchemaDiffSuppressFunc {
^
docker/resource_docker_service_funcs.go:1411:2: `longestState` is unused (deadcode)
longestState int
^
docker/resource_docker_network_test.go:320:6: `testAccNetworkLabel` is unused (deadcode)
func testAccNetworkLabel(network *types.NetworkResource, name string, value string) resource.TestCheckFunc {
^
docker/resource_docker_volume_test.go:95:6: `testAccVolumeLabel` is unused (deadcode)
func testAccVolumeLabel(volume *types.Volume, name string, value string) resource.TestCheckFunc {
^
docker/data_source_docker_network.go:99:7: Error return value of `d.Set` is not checked (errcheck)
d.Set("name", network.Name)
^
docker/data_source_docker_network.go:100:7: Error return value of `d.Set` is not checked (errcheck)
d.Set("scope", network.Scope)
^
docker/data_source_docker_network.go:101:7: Error return value of `d.Set` is not checked (errcheck)
d.Set("driver", network.Driver)
^
docker/resource_docker_container_funcs.go:562:33: Error return value of `resourceDockerContainerDelete` is not checked (errcheck)
resourceDockerContainerDelete(d, meta)
^
docker/resource_docker_container_funcs.go:571:32: Error return value of `resourceDockerContainerDelete` is not checked (errcheck)
resourceDockerContainerDelete(d, meta)
^
docker/resource_docker_container_test.go:670:16: Error return value of `fbuf.ReadFrom` is not checked (errcheck)
fbuf.ReadFrom(tr)
^
docker/resource_docker_container_test.go:732:16: Error return value of `fbuf.ReadFrom` is not checked (errcheck)
fbuf.ReadFrom(tr)
^
docker/resource_docker_container_test.go:834:17: Error return value of `fbuf.ReadFrom` is not checked (errcheck)
fbuf.ReadFrom(tr)
^
docker/resource_docker_image_funcs.go:42:12: Error return value of `m.Display` is not checked (errcheck)
m.Display(buf, false)
^
docker/resource_docker_image_funcs.go:221:14: Error return value of `buf.ReadFrom` is not checked (errcheck)
buf.ReadFrom(out)
^
docker/resource_docker_image_test.go:209:18: Error return value of `ioutil.WriteFile` is not checked (errcheck)
ioutil.WriteFile(dfPath, []byte(testDockerFileExample), 0o644)
^
docker/resource_docker_registry_image_funcs.go:265:14: Error return value of `hasher.Write` is not checked (errcheck)
hasher.Write(s)
^
docker/resource_docker_registry_image_funcs.go:298:17: Error return value of `json.Unmarshal` is not checked (errcheck)
json.Unmarshal(streamBytes, &errorMessage)
^
docker/resource_docker_service_test.go:32:16: Error return value of `checkAttribute` is not checked (errcheck)
checkAttribute(t, "Username", foundAuthConfig.Username, "myuser")
^
docker/resource_docker_service_test.go:33:16: Error return value of `checkAttribute` is not checked (errcheck)
checkAttribute(t, "Password", foundAuthConfig.Password, "mypass")
^
docker/resource_docker_service_test.go:34:16: Error return value of `checkAttribute` is not checked (errcheck)
checkAttribute(t, "Email", foundAuthConfig.Email, "")
^
docker/data_source_docker_network.go:113:2: ineffectual assignment to `err` (ineffassign)
err = d.Set("ipam_config", ipam)
^
docker/resource_docker_image_funcs.go:129:3: ineffectual assignment to `imageName` (ineffassign)
imageName = imageName + ":latest"
^
docker/resource_docker_registry_image_funcs.go:180:22: ineffectual assignment to `err` (ineffassign)
dockerBuildContext, err := os.Open(dockerContextTarPath)
^
docker/resource_docker_registry_image_funcs.go:403:15: ineffectual assignment to `err` (ineffassign)
oauthResp, err := client.Do(req)
^
docker/resource_docker_service_funcs.go:1332:2: ineffectual assignment to `auth` (ineffassign)
auth := types.AuthConfig{}
^
docker/resource_docker_container_migrate.go:100:46: S1019: should use make(map[string]interface{}) instead (gosimple)
outputPort := make(map[string]interface{}, 0)
^
docker/resource_docker_network_funcs.go:209:29: S1019: should use make([]interface{}, len(in)) instead (gosimple)
out := make([]interface{}, len(in), len(in))
^
docker/structures_service.go:59:29: S1019: should use make([]interface{}, 0) instead (gosimple)
out := make([]interface{}, 0, 0)
^
docker/structures_service.go:72:29: S1019: should use make([]interface{}, 0) instead (gosimple)
out := make([]interface{}, 0, 0)
^
docker/structures_service.go:89:29: S1019: should use make([]interface{}, 0) instead (gosimple)
out := make([]interface{}, 0, 0)
^
docker/structures_service.go:181:29: S1019: should use make([]interface{}, 1) instead (gosimple)
out := make([]interface{}, 1, 1)
^
docker/structures_service.go:185:35: S1019: should use make([]interface{}, 1) instead (gosimple)
credSpec := make([]interface{}, 1, 1)
^
docker/structures_service.go:193:41: S1019: should use make([]interface{}, 1) instead (gosimple)
seLinuxContext := make([]interface{}, 1, 1)
^
docker/structures_service.go:208:29: S1019: should use make([]interface{}, len(in)) instead (gosimple)
out := make([]interface{}, len(in), len(in))
^
docker/structures_service.go:217:52: S1019: should use make(map[string]interface{}) instead (gosimple)
bindOptionsItem := make(map[string]interface{}, 0)
^
docker/structures_service.go:229:54: S1019: should use make(map[string]interface{}) instead (gosimple)
volumeOptionsItem := make(map[string]interface{}, 0)
^
docker/structures_service.go:283:29: S1019: should use make([]interface{}, len(in)) instead (gosimple)
out := make([]interface{}, len(in), len(in))
^
docker/config.go:41:5: S1009: should omit nil check; len() for nil slices is defined as zero (gosimple)
if caPEMCert == nil || len(caPEMCert) == 0 {
^
docker/structures_service.go:397:5: S1009: should omit nil check; len() for nil slices is defined as zero (gosimple)
if in != nil && len(in) > 0 {
^
docker/structures_service.go:454:5: S1009: should omit nil check; len() for nil slices is defined as zero (gosimple)
if in == nil || len(in) == 0 {
^
docker/structures_service.go:558:5: S1009: should omit nil check; len() for nil maps is defined as zero (gosimple)
if in == nil || len(in) == 0 {
^
docker/validators.go:52:10: S1034: assigning the result of this type assertion to a variable (switch v := v.(type)) could eliminate type assertions in switch cases (gosimple)
switch v.(type) {
^
docker/resource_docker_service_funcs.go:1138:4: SA4004: the surrounding loop is unconditionally terminated (staticcheck)
return &logDriver, nil
^
docker/structures_service.go:28:5: SA4003: every value of type uint64 is >= 0 (staticcheck)
if in.ForceUpdate >= 0 {
^
docker/resource_docker_registry_image_funcs.go:213:2: SA4006: this value of `err` is never used (staticcheck)
err = filepath.Walk(buildContext, func(file string, info os.FileInfo, err error) error {
^
docker/resource_docker_registry_image_funcs.go:181:2: SA5001: should check returned error before deferring dockerBuildContext.Close() (staticcheck)
defer dockerBuildContext.Close()
^
docker/resource_docker_network_test.go:262:6: func `testAccNetworkIPv6` is unused (unused)
To start, we can enable the default rules.
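To illustrate two of the recurring findings above, here is a small self-contained sketch (flattenLabels is an illustrative helper, not code from the provider) showing the gosimple fixes for S1019 (redundant capacity arguments to make) and S1009 (redundant nil check before len, since len of a nil slice or map is defined as zero):

```go
package main

import "fmt"

// flattenLabels converts a label map into the []interface{} shape the
// provider-style flatten functions produce, with the lint fixes applied.
func flattenLabels(in map[string]string) []interface{} {
	// S1009 fix: `if in == nil || len(in) == 0` simplifies to `if len(in) == 0`,
	// because len(nil) is defined as zero for slices and maps.
	if len(in) == 0 {
		return nil
	}
	// S1019 fix: `make([]interface{}, len(in), len(in))` carries a redundant
	// capacity argument; pass the capacity once (here with length 0 for append).
	out := make([]interface{}, 0, len(in))
	for k, v := range in {
		// S1019 fix: `make(map[string]interface{}, 0)` -> no size hint needed.
		entry := make(map[string]interface{})
		entry["label"] = k
		entry["value"] = v
		out = append(out, entry)
	}
	return out
}

func main() {
	fmt.Println(len(flattenLabels(map[string]string{"a": "1"}))) // prints 1
}
```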
This issue was originally opened by @SebastianFaulborn as hashicorp/terraform-provider-docker#313. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
When creating a docker swarm service, and the container happens to run on the DOCKER_HOST (parameter host in the provider), the container started as part of the service receives an immediate SIGKILL after terraform destroy rather than stop_signal (and optionally SIGKILL after stop_grace_period if the container has not shut down by then). For containers which happen to be created on worker nodes the code works as expected (and likely also for containers created on manager nodes which are not the DOCKER_HOST).
This is a showstopper for all database applications and containers which contain databases within (such as Nexus3), and can lead to catastrophic data loss, as we have learned the hard way.
Terraform v0.13.5
variable "docker_host" {
type = string
}
variable "node" {
type = string
}
terraform {
required_providers {
docker = {
source = "terraform-providers/docker"
version = "2.7.2"
}
}
}
provider "docker" {
host = var.docker_host
}
resource "docker_service" "test" {
name = "test"
task_spec {
container_spec {
image = "ubuntu"
stop_signal = "SIGTERM"
stop_grace_period = "30s"
command = [ "/bin/bash" ]
args = [ "-c", "echo test; trap 'echo SIGTERM && exit 0' SIGTERM; while true; do sleep 1; done" ]
}
placement {
constraints = [
"node.role==${var.node}"
]
}
}
}
none
none
Upon shutdown, the container first receives SIGTERM:
xyz$ docker service logs -f test
test.1.z243xk5gqgk0@ci-jct-swarm-cluster-worker-node-0 | test
test.1.z243xk5gqgk0@ci-jct-swarm-cluster-worker-node-0 | SIGTERM
When the container runs on the DOCKER_HOST, it immediately receives SIGKILL:
xyz$ docker service logs -f test
test.1.z243xk5gqgk0@ci-jct-swarm-cluster-manager-node-0 | test
Create a docker swarm cluster with just 1 manager and 1 worker (this just makes it easy to get the test case working). Then:
terraform init
terraform apply -var 'node=manager' -var 'docker_host=tcp://<place here the DOCKER_HOST ip>:2375'
DOCKER_HOST=tcp://<place here the DOCKER_HOST ip>:2375 docker service logs -f test
terraform destroy -var 'node=manager' -var 'docker_host=tcp://<place here the DOCKER_HOST ip>:2375'
To run the working test case:
Do the same as before but replace -var 'node=manager' with -var 'node=worker'.
Looking at the source code (I am not a go expert!) I found the following:
https://github.com/terraform-providers/terraform-provider-docker/blob/master/docker/resource_docker_service_funcs.go
Line 270ff
func deleteService(serviceID string, d *schema.ResourceData, client *client.Client)
Line 297 actually removes the service (and I believe it is the only line needed; everything further down is just for historic reasons, since Docker now does all that is needed by itself):
if err := client.ServiceRemove(context.Background(), serviceID); err != nil {
Note: you can see in the docker daemon debug log of the DOCKER_HOST a line such as:
time="2020-11-18T18:05:10.871048408Z" level=debug msg="Calling DELETE /v1.40/services/so1zejuqgruzyrsz9c5bz1isn"
... which is the only REST API call made when doing a docker service rm so1zejuqgruzyrsz9c5bz1isn using the docker CLI.
Line 309 is supposed to wait until the container has been removed - but actually always returns immediately:
exitCode, _ := client.ContainerWait(ctx, containerID, container.WaitConditionRemoved)
2020-11-19T16:46:45.960+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:45 [INFO] Found container ['running'] for destroying: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736'
2020-11-19T16:46:45.960+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:45 [INFO] Deleting service: 'i1b30zzgweff3he63r35ky3i3'
2020-11-19T16:46:45.994+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:45 [INFO] Waiting for container: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736' to exit: max 30s
2020-11-19T16:46:46.027+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:46 [INFO] Container exited with code [0xc000094660]: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736'
2020-11-19T16:46:46.027+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:46 [INFO] Removing container: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736'
Note: between waiting and removing container passes just 33ms.
Line 318 is supposed to remove the container (why? Docker does this by itself!).
if err := client.ContainerRemove(context.Background(), containerID, removeOpts); err != nil {
There are 2 problems here:
a) This line has the same effect as docker container rm --force <containerID>, which as a result of --force immediately sends a SIGKILL to the container. Due to the first problem (no waiting is happening), this happens before Docker has had a chance to remove the service and send SIGTERM to all the containers. Hence the effect of killing the containers on the manager node.
time="2020-11-18T17:56:36.644210735Z" level=debug msg="Calling DELETE /v1.40/services/dll5o060pagmxr0sg9lasjit3"
time="2020-11-18T17:56:36.689439754Z" level=debug msg="Calling POST /v1.40/containers/b00490988356851e84099901a2011264c22024f3bed977b1059fbe0591aa6b5a/wait?condition=remo
ved"
time="2020-11-18T17:56:36.769538131Z" level=debug msg="Calling DELETE /v1.40/containers/b00490988356851e84099901a2011264c22024f3bed977b1059fbe0591aa6b5a?force=1&v=1"
time="2020-11-18T17:56:36.769598227Z" level=debug msg="Sending kill signal 9 to container b00490988356851e84099901a2011264c22024f3bed977b1059fbe0591aa6b5a"
Note: have a look at the timestamps!
b) The code does not seem to anticipate that when we issue container commands (rather than service commands), they have to be sent to the exact node which runs the container. This is the reason why this only happens on the DOCKER_HOST: the workers (or any other node; this I haven't checked) will simply never receive the ContainerRemove command, and the DOCKER_HOST says that this container is unknown to it.
Disclaimer: these are just my findings from looking hard at the source code; I wanted to share them with you in the hope that they are useful and maybe save some time.
none
go generate from https://github.com/hashicorp/terraform-provider-scaffolding to have an example implementation
ExactlyOneOf property -> #6
I thought about something simple like:
Those are to evaluate later:
changelog generation via one of the tools
locals {
image_drone_runner = "drone/drone-runner-docker:1.6.1" # renovate: depName=drone-runner-docker
image_docuum = "stephanmisc/docuum:0.16.0" # renovate: depName=docuum
}
resource "docker_container" "drone_runner" {
name = "drone_runner"
image = "harbor.visualon.de/docker-hub/${local.image_drone_runner}"
restart = "unless-stopped"
env = [
"DRONE_RPC_PROTO=https",
"DRONE_RPC_HOST=drone.visualon.de",
"DRONE_RPC_SECRET=${var.passwords.drone.rpc_sk}",
"DRONE_RUNNER_CAPACITY=${var.drone.capacity}",
"DRONE_RUNNER_NAME=${var.hostname}"
]
volumes {
container_path = "/var/run/docker.sock"
host_path = "/var/run/docker.sock"
}
}
Variables don't change on run.
# module.docker_d02.docker_container.drone_runner must be replaced
-/+ resource "docker_container" "drone_runner" {
      + bridge            = (known after apply)
      ~ command           = [] -> (known after apply)
      + container_logs    = (known after apply)
      - cpu_shares        = 0 -> null
      - dns               = [] -> null
      - dns_opts          = [] -> null
      - dns_search        = [] -> null
      ~ entrypoint        = [
          - "/bin/drone-runner-docker",
        ] -> (known after apply)
      + exit_code         = (known after apply)
      ~ gateway           = "172.17.0.1" -> (known after apply)
      - group_add         = [] -> null
      ~ hostname          = "67ac24aa10c1" -> (known after apply)
      ~ id                = "67ac24aa10c1e28702f7c30e1688e87b25bb8c2d5722604efdfbacfe1c17f3e5" -> (known after apply)
      ~ image             = "sha256:d70629b463daa6ae84d62fa5f7b42e5e04d58be75792f984feeab2fa3413a103" -> "harbor.visualon.de/docker-hub/drone/drone-runner-docker:1.6.1" # forces replacement
      ~ ip_address        = "172.17.0.4" -> (known after apply)
      ~ ip_prefix_length  = 16 -> (known after apply)
      ~ ipc_mode          = "private" -> (known after apply)
      - links             = [] -> null
      - log_opts          = {} -> null
      - max_retry_count   = 0 -> null
      - memory            = 0 -> null
      - memory_swap       = 0 -> null
        name              = "drone_runner"
      ~ network_data      = [
          - {
              - gateway                   = "172.17.0.1"
              - global_ipv6_address       = ""
              - global_ipv6_prefix_length = 0
              - ip_address                = "172.17.0.4"
              - ip_prefix_length          = 16
              - ipv6_gateway              = ""
              - network_name              = "bridge"
            },
        ] -> (known after apply)
      - network_mode      = "default" -> null
      - privileged        = false -> null
      - publish_all_ports = false -> null
      ~ shm_size          = 64 -> (known after apply)
      - sysctls           = {} -> null
      - tmpfs             = {} -> null
        # (10 unchanged attributes hidden)

      + labels {
          + label = (known after apply)
          + value = (known after apply)
        }

      - volumes {
          - container_path = "/var/run/docker.sock" -> null
          - host_path      = "/var/run/docker.sock" -> null
          - read_only      = false -> null
        }
      + volumes {
          + container_path = "/var/run/docker.sock"
          + host_path      = "/var/run/docker.sock"
        }
    }
If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.
Simply do not recreate the docker container on every terraform apply.
Docker containers are recreated on every terraform apply
Please list the steps required to reproduce the issue, for example:
terraform apply
Not sure. Default docker installation on ubuntu focal (btrfs and xfs backing stores tested).
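One pattern that often avoids this class of diff (a sketch, not a confirmed fix for this report) is to reference the image through a docker_image resource, so the container stores the resolved sha256 digest that docker_image.latest exposes instead of the mutable name that keeps forcing replacement in the plan above:

```hcl
resource "docker_image" "drone_runner" {
  name = "harbor.visualon.de/docker-hub/${local.image_drone_runner}"
}

resource "docker_container" "drone_runner" {
  name  = "drone_runner"
  image = docker_image.drone_runner.latest
  # ... remaining arguments as in the original config
}
```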
When expanding the plan for module.dns-lan-backup.docker_container.container
to include new values learned so far during apply, provider
"registry.terraform.io/terraform-providers/docker" produced an invalid new
value for .networks_advanced: block set length changed from 1 to 2.
^ this error happens when the networks are being created
Re-applying (with the networks already created from the first run) will then work fine.
I'm using this inside a docker_container:
dynamic "networks_advanced" {
for_each = local.container_networks
content {
name = networks_advanced.key
ipv4_address = networks_advanced.value
}
}
And the bottom line is: this happens when I pass in multiple networks (macvlan + bridge).
@mavogel usually I figure my problems out and submit a patch but this one is out of my league.
Any idea where I should start with this?
Hi there,
Terraform v0.12.24
resource "docker_image" "app" {
name = "appondocker"
build {
path = "."
}
}
Build a local Dockerfile.
Error: Unsupported block type
on myfirstdocker.tf line 17, in resource "docker_image" "app":
17: build {
Please list the steps required to reproduce the issue, for example:
terraform plan
From the old repository: https://github.com/terraform-providers/terraform-provider-docker/issues
Example:
-/+ resource "docker_network" "dubo-vlan" {
attachable = false
check_duplicate = true
driver = "ipvlan"
~ id = "2e5dda2e5a90ce693456bad29cc3c5afd6c8f9a58568a72325e6569d375cea78" -> (known after apply)
- ingress = false -> null
internal = false
ipam_driver = "default"
ipv6 = false
name = "dubo-lan-vlan"
options = {
"ipvlan_mode" = "l2"
"parent" = "wlan0"
}
~ scope = "local" -> (known after apply)
+ ipam_config { # forces replacement
+ aux_address = {
+ "dns" = "10.0.4.42"
+ "link" = "10.0.4.43"
}
+ gateway = "10.0.4.1"
+ ip_range = "10.0.4.42/31"
+ subnet = "10.0.4.1/24"
}
- ipam_config { # forces replacement
- aux_address = {} -> null
- gateway = "10.0.4.1" -> null
- ip_range = "10.0.4.42/31" -> null
- subnet = "10.0.4.1/24" -> null
}
}
Here is the existing docker network that was created earlier:
pi@nightingale:~ $ docker inspect dubo-lan-vlan
[
{
"Name": "dubo-lan-vlan",
"Id": "2e5dda2e5a90ce693456bad29cc3c5afd6c8f9a58568a72325e6569d375cea78",
"Created": "2020-12-02T03:01:25.262796547Z",
"Scope": "local",
"Driver": "ipvlan",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.4.1/24",
"IPRange": "10.0.4.42/31",
"Gateway": "10.0.4.1",
"AuxiliaryAddresses": {
"dns": "10.0.4.42",
"link": "10.0.4.43"
}
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"ipvlan_mode": "l2",
"parent": "wlan0"
},
"Labels": {}
}
]
This issue was originally opened by @project0 as hashicorp/terraform-provider-docker#279. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
Terraform v0.12.28
+ provider.docker v2.7.1
resource "docker_network" "server" {
name = local.network_name
check_duplicate = true
driver = "bridge"
ipv6 = true
ipam_config {
subnet = var.subnets.ipv4_cidr
}
ipam_config {
subnet = var.subnets.ipv6_cidr
}
}
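A commonly suggested mitigation for this kind of perpetual diff is to pin in the configuration the gateway the daemon would otherwise assign on its own, so the refreshed state matches the config. This is a sketch, under the assumption (suggested by the plan output below, e.g. 172.16.10.1) that the daemon picks the first host address of each subnet:

```hcl
  ipam_config {
    subnet = var.subnets.ipv4_cidr
    # Assumption: the daemon assigns the first host address as gateway.
    gateway = cidrhost(var.subnets.ipv4_cidr, 1)
  }

  ipam_config {
    subnet  = var.subnets.ipv6_cidr
    gateway = cidrhost(var.subnets.ipv6_cidr, 1)
  }
```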
# module.base.docker_network.server must be replaced
-/+ resource "docker_network" "server" {
- attachable = false -> null
check_duplicate = true
driver = "bridge"
~ id = "f565f7f1a0db0c184f69c24e8f113ce7d88c12e43ae4bf862b0f51c15940a633" -> (known after apply)
- ingress = false -> null
~ internal = false -> (known after apply)
ipam_driver = "default"
ipv6 = true
name = "server-vagrant"
~ options = {} -> (known after apply)
~ scope = "local" -> (known after apply)
- ipam_config { # forces replacement
- aux_address = {} -> null
- gateway = "172.16.10.1" -> null
- subnet = "172.16.10.0/24" -> null
}
- ipam_config { # forces replacement
- aux_address = {} -> null
- gateway = "2001:1:6d:99f:400::1" -> null
- subnet = "2001:1:006d:099f:0100:0000:0000:0000/80" -> null
}
+ ipam_config { # forces replacement
+ subnet = "172.16.10.0/24"
}
+ ipam_config { # forces replacement
+ subnet = "2001:1:006d:099f:0100:0000:0000:0000/80"
}
}
docker inspect after creation:
{
"Name": "server-vagrant",
"Id": "7347683ef27d04299465172f017fc89b352e7a1546ba50ec2bd7b3223a9721fa",
"Created": "2020-07-09T16:53:23.763556354Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": true,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.16.10.0/24"
},
{
"Subnet": "2001:1:006d:099f:0100:0000:0000:0000/80"
}
]
}
}
docker inspect after reboot:
{
"Name": "server-vagrant",
"Id": "f565f7f1a0db0c184f69c24e8f113ce7d88c12e43ae4bf862b0f51c15940a633",
"Created": "2020-07-06T18:43:47.35749164Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": true,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.16.10.0/24",
"Gateway": "172.16.10.1"
},
{
"Subnet": "2001:1:006d:099f:0100:0000:0000:0000/80",
"Gateway": "2001:1:6d:99f:100::1"
}
]
}
}
Ubuntu-latest workflows will use Ubuntu-20.04 soon. For more details, see https://github.com/actions/virtual-environments/issues/1816
Terraform Version
0.13.5
Affected Resource(s)
docker_plugin
Expected Behavior
Configure and manage docker plugins on a daemon
Actual Behavior
Not implemented
References
Benefit: to keep the repository and its issues clean, we could use the help of bots, as the goreleaser project does.
This issue was originally opened by @stoically as hashicorp/terraform-provider-docker#301. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
Terraform v0.13.3
data "docker_registry_image" "traefik" {
name = "traefik:latest"
}
resource "docker_image" "traefik" {
name = data.docker_registry_image.traefik.name
pull_triggers = [data.docker_registry_image.traefik.sha256_digest]
}
Should always silently upgrade the image / container
Error: Unable to remove Docker image: Error response from daemon: conflict: unable to delete 1a3f0281f41e (must be forced) - image is referenced in multiple repositories
Unfortunately not sure how to reproduce
Would it be safe to always force-remove images?
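Possibly related: later releases of the provider document a force_remove argument on docker_image. A hedged sketch (whether force_remove is available depends on your provider version; verify in the provider docs):

```hcl
resource "docker_image" "traefik" {
  name          = data.docker_registry_image.traefik.name
  pull_triggers = [data.docker_registry_image.traefik.sha256_digest]

  # Assumption: force_remove exists in your provider version; it asks the
  # daemon to force-delete the image on destroy/replace.
  force_remove = true
}
```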
This issue was originally opened by @wereinse as hashicorp/terraform-provider-docker#267. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
Terraform v0.12.21
Please list the resources as a list, for example:
Docker image version
If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.
# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.
The changes made are below:
This is from : https://github.com/retaildevcrews/helium-iac/blob/master/src/modules/db/main.tf
resource "azurerm_cosmosdb_sql_container" "cosmosdb-movies" {
for_each = var.INSTANCES
name = "movies"
resource_group_name = var.COSMOS_RG_NAME
account_name = azurerm_cosmosdb_account.cosmosdb.name
database_name = azurerm_cosmosdb_sql_database.cosmosdb-imdb[each.key].name
partition_key_path = "/partitionKey"
}
data "docker_registry_image" "imdb-import" {
name = "retaildevcrew/imdb-import"
}
resource "docker_image" "imdb-import" {
name = data.docker_registry_image.imdb-import.name
pull_triggers = ["${data.docker_registry_image.imdb-import.sha256_digest}"]
Please provide a link to a GitHub Gist containing the complete debug output: https://www.terraform.io/docs/internals/debugging.html. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.
What should have happened?
resource "docker_image" "imdb-import" should have pulled the latest image
What actually happened?
The above command pulled a cached image instead of going back for the original.
Please list the steps required to reproduce the issue, for example:
terraform apply
with the resource "docker_image" "imdb-import" command in place. Is there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? Custom version of OpenStack? Tight ACLs?
Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:
This issue was originally opened by @johnlane as hashicorp/terraform-provider-docker#294. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
The image argument of v2.6.0 and v2.7.2.
The image specification shown below works with the 2.6.0 Docker provider but not with version 2.7.2.
resource "docker_container" "portainer" {
image = "portainer/portainer:1.23.0"
...
The documentation now shows having a reference to an image resource:
image = "${docker_image.ubuntu.latest}"
and configuring that image resource to specify the image name.
With 2.7.2 there is a perpetual mismatch between the image name specified in the config and the image id that terraform plan and apply identify.
I can't find any documentation covering this breaking change between versions, and I don't know whether the break is intentional or something that will be fixed. I see many examples illustrating the succinct way of referencing an image that works in version 2.6.0. It would be good if specifying the image directly in image continued to work rather than requiring an additional image resource block containing the image name.
If the form that worked in 2.6.0 is not supported any more, then an error during plan or apply to prevent its use would make this clear. Currently it's accepted as valid but causes resource replacement on every apply.
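For reference, the pattern the 2.7.x documentation describes, sketched for the portainer example from above (only the relevant arguments shown):

```hcl
resource "docker_image" "portainer" {
  name = "portainer/portainer:1.23.0"
}

resource "docker_container" "portainer" {
  # "latest" here is the ID of the pulled image, not the "latest" tag.
  image = docker_image.portainer.latest
  name  = "portainer"
}
```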
Terraform 0.13.3 with Docker provider 2.6.0 and 2.7.2
docker_container
The resource should be matched with previous state so unnecessary changes are not made.
The resource is detected as a change because the given image value is matched against SHA and therefore is always detected as a change, requiring replacement of the resource.
terraform apply
Also hashicorp/terraform-provider-docker#291 is similar.
This issue was originally opened by @Morodar as hashicorp/terraform-provider-docker#277. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
v0.12.28 on Windows 10 Pro Build 18363
provider "docker" {
host = "tcp://127.0.0.1:2375/"
version = "2.7.1"
}
resource "docker_image" "nginx_alpine" {
name = "nginx:1.19.0-alpine"
}
resource "docker_container" "reverse_proxy" {
image = docker_image.nginx_alpine.latest
name = "reverse_proxy"
ports {
internal = 80
external = var.port
}
volumes {
container_path = "/etc/nginx/conf.d"
host_path = abspath(path.root)
read_only = true
}
}
What should have happened?
# module.docker_reverse_proxy_setup.docker_container.reverse_proxy will be created
+ resource "docker_container" "reverse_proxy" {
[...](omitted)
+ volumes {
+ container_path = "/etc/nginx/conf.d"
+ host_path = "C:/absolute/root/path/"
+ read_only = true
}
}
Error: "volumes.0.host_path" must be an absolute path
on modules\docker-nginx\resources.tf line 7, in resource "docker_container" "reverse_proxy":
7: resource "docker_container" "reverse_proxy" {
terraform apply
On windows:
host_path = "G:\\ABC" # --> absolute path
host_path = "/G/ABC" # --> absolute path
host_path = abspath(path.root) # --> not an absolute path
host_path = path.root # --> not an absolute path
host_path = G:/ABC # --> not an absolute path
abspath(path.root) returns C:/path/to/root, which is an absolute path on Windows and in Terraform, but not for the docker_container resource.
Am I missing something or is this a bug?
I try to write terraform code which should run both on Windows and Linux.
But I fail to handle absolute paths on Windows correctly.
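Since the reporter observed that /G/ABC is accepted as absolute, one hypothetical workaround is to rewrite the drive-letter form that abspath() returns into that slash-prefixed form. This is an untested sketch; the regex rewrite is my assumption, not from the original report:

```hcl
locals {
  # Turn "C:/absolute/root/path" into "/C/absolute/root/path", which the
  # provider's absolute-path validation appears to accept.
  root_unix_style = replace(abspath(path.root), "/^([A-Za-z]):/", "/$1")
}

# Then, inside the container resource:
# volumes {
#   container_path = "/etc/nginx/conf.d"
#   host_path      = local.root_unix_style
#   read_only      = true
# }
```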
terraform 0.13upgrade command
This issue was originally opened by @alexborisov as hashicorp/terraform-provider-docker#259. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
This issue was originally opened by @michcio1234 as hashicorp/terraform-provider-docker#273. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
I want to make a certain Docker image from my ECR repository present on an EC2 instance, using Terraform.
Terraform version:
Terraform v0.12.24
+ provider.aws v2.64.0
+ provider.docker v2.7.0
+ provider.null v2.1.2
I have configured permissions (so that the instance profile role can log in and pull images) and installed the credentials helper on the instance. I put {"credsStore": "ecr-login"} in /home/ec2-user/.docker/config.json. I can SSH into the instance and do docker pull image:tag - this works, no need to do docker login.
I can also do the same using my local docker client by doing docker -H ssh://[email protected]:22 pull image:tag - the image gets pulled onto the instance.
I was trying to do it in Terraform using Docker provider. Here's what I have:
provider "docker" {
version = "~> 2.7"
host = "ssh://ec2-user@${aws_instance.main.public_dns}:22"
}
data "docker_registry_image" "backend" {
name = "image:tag"
}
resource "docker_image" "remote_backend" {
name = data.docker_registry_image.backend.name
pull_triggers = [data.docker_registry_image.backend.sha256_digest]
}
IIUC, this should pull the image onto the remote machine. However, Terraform exits with this error:
Error: Got error when attempting to fetch image version from registry: Bad credentials: 401 Unauthorized
on swarm.tf line 1, in data "docker_registry_image" "backend":
1: data "docker_registry_image" "backend" {
I verified that Terraform actually connects to the remote Docker host (I could create docker service) - it just won't authenticate to the registry.
I also tried defining config like this (not sure if this is a good approach since I want to use credentials helper on a remote machine, not my local one):
provider "docker" {
version = "~> 2.7"
host = "ssh://ec2-user@${aws_instance.main.public_dns}:22"
registry_auth {
address = local.docker_registry_url
config_file_content = "{\"credsStore\": \"ecr-login\"}"
}
}
But this in turn results in a following error:
Error: Error loading registry auth config: Error parsing docker registry config json: json: cannot unmarshal string into Go value of type docker.dockerConfig
How can I make this work, so that an image is pulled on the remote machine, using credentials provided by the ecr-login helper which runs on that machine?
Or maybe it's a bug in the Docker Terraform provider?
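One workaround that sidesteps the credential-helper parsing problem (though it runs the helper logic locally rather than on the remote machine) is to fetch an ECR token on the machine running Terraform and pass it as a plain username/password. A sketch, assuming the AWS provider's aws_ecr_authorization_token data source is available in your AWS provider version:

```hcl
data "aws_ecr_authorization_token" "ecr" {}

provider "docker" {
  host = "ssh://ec2-user@${aws_instance.main.public_dns}:22"

  registry_auth {
    address  = local.docker_registry_url
    username = data.aws_ecr_authorization_token.ecr.user_name
    password = data.aws_ecr_authorization_token.ecr.password
  }
}
```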
This issue was originally opened by @brandonbumgarner as hashicorp/terraform-provider-docker#258. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
0.11.14
terraform {
backend "s3" {
bucket = "bucket"
region = "region"
profile = "profile"
shared_credentials_file = "credentials"
}
}
provider "docker" {
version = "~> 2.7.0"
host = "tcp://${var.environment}.${var.domain}:9999/"
}
variable "domain" {
default = "domain"
}
variable "environment" {}
resource "docker_service" "service" {}
https://gist.github.com/brandonbumgarner/163a3145f7d1613a4d55dadc5dd0d0f2
The service should be imported into a state file stored in S3.
Terraform crashes. It seems like the state is captured, but when the state is refreshed, it crashes. The crash.log shows the docker inspect command output (stripped from the gist, but in the actual log it is there). So it seems that it is communicating with the docker node and getting the data.
Please keep in mind I have stripped the service id and any "sensitive" information from the logs and output. In the actual logs and output it has the specific environment, domain, service id, etc. For example: service-id is actually the id of a service that exists in that environment
terraform import -var "environment=env" docker_service.service service-id
docker_service.service: Importing from ID "service-id"...
docker_service.service: Import complete!
Imported docker_service (ID: service-id)
docker_service.service: Refreshing state... (ID: service-id)
Error: docker_service.service (import id: service-id): 1 error occurred:
* import docker_service.service result: service-id: docker_service.service: unexpected EOF
terraform init -var "environment=env" -backend-config="key=folder/service.state"
terraform import -var "environment=env" docker_service.service service-id
Right now I have the Docker provider pinned to 2.7.0, but have also tried 2.6.0. This had been working for most of our other services, but stopped working last week without any changes to the tf file.
This issue was originally opened by @rolandcrosby as hashicorp/terraform-provider-docker#275. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
Terraform v0.12.26
+ provider.docker v2.7.1
Docker provider
locals {
docker_registry_url = "example.com"
}
provider "docker" {
registry_auth {
address = local.docker_registry_url
config_file_content = jsonencode({
"credsStore" = "ecr-login"
})
}
}
resource "docker_image" "hello_world" {
name = "${local.docker_registry_url}/hello_world:latest"
}
n/a
The Docker provider should parse the Docker auth config file and fetch the appropriate credentials the same way the native Docker CLI does. Specifically:
- If a credsStore helper is set in the config file contents, the provider should use that creds store to fetch authentication data for the registry address specified in the configuration, regardless of whether there is anything in the auths key in the config file contents.
- If credHelpers are specified as described in the Docker documentation here, the appropriate helper should be detected and used to fetch credentials for the specified registry, again without regard to whether the registry is present in auths.
If a configuration like the above one is passed (where credsStore is present but auths is empty or not present), the provider does not do anything with the credsStore property (see lines 295-303 of provider.go). Instead, if the provider sees no auths, it will attempt to parse the configuration as if it were in the following legacy format:
{
"some-registry-url": {"auth": "some credential", "email": "some credential here"}
}
When this fails, the user is presented with the following confusing and misleading error: json: cannot unmarshal string into Go value of type docker.dockerConfig (see #273).
(The credHelpers section of Docker's config.json doesn't work at all with this provider and also leads to the same misleading error; this fact is not documented anywhere.)
In a directory with the above Terraform file present, run terraform init to download the Docker provider, then terraform apply.
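Until helper support is implemented, a hedged workaround is to inline a static credential in the auths form that the provider does read (the registry name and credential below are placeholders):

```hcl
provider "docker" {
  registry_auth {
    address = "example.com"
    config_file_content = jsonencode({
      auths = {
        # Placeholder credential: base64 of "user:password".
        "example.com" = { auth = base64encode("user:password") }
      }
    })
  }
}
```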
This issue was originally opened by @captn3m0 as hashicorp/terraform-provider-docker#257. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
v0.12.23
Please list the resources as a list, for example:
docker_container
resource docker_container a {
name = ""
gpus = "all"
}
docker_container.a: : invalid or unknown key: gpus
The --gpus flag should be supported.
The --gpus flag is not supported.
Try to run a container with NVIDIA Docker support, which is now natively available in Docker 19.03
This issue was originally opened by @betaboon as hashicorp/terraform-provider-docker#260. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
This issue was originally opened by @hongkongkiwi as hashicorp/terraform-provider-docker#290. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
Terraform v0.13.0
+ provider registry.terraform.io/terraform-providers/docker v2.7.2
For example, this will fail with an error during terraform validate:
Error: missing provider provider["registry.terraform.io/hashicorp/docker"].foo
provider "docker" {
alias = "foo"
host = "tcp://127.0.0.1:2376/"
}
module "mycustommodule" {
source = "./test2"
providers = {
docker = docker.foo
}
}
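The usual Terraform 0.13 fix is to declare, inside the child module, which source address the local name docker refers to, so it no longer defaults to hashicorp/docker. A minimal sketch for ./test2 (assuming the terraform-providers/docker source used elsewhere in this report; the file name is hypothetical):

```hcl
# ./test2/versions.tf
terraform {
  required_providers {
    docker = {
      source = "terraform-providers/docker"
    }
  }
}
```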
0.13.5
docker_network
resource "docker_network" "vlan" {
ipam_config {
aux_address = {
foo: "1.2.3.4"
}
}
}
I can't do that right now.
The config is massive and does leak a lot of important info that I would have to scrub out first.
The bug is easy to reproduce though.
Not crash.
Crash.
terraform apply
I'm out of my depth here.
It feels to me, though, that the configuration above (^) is problematic.
This issue was originally opened by @martinezhenry as hashicorp/terraform-provider-docker#304. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
When running the terraform import command for a docker service using a docker_service resource, the provider throws a crash.
module.services.module.novotrans.docker_service.create_replicated_service["jenkins"]: Importing from ID "o0i2rg24dflqequp6pz3pivm9"...
module.services.module.novotrans.docker_service.create_replicated_service["jenkins"]: Import prepared!
Prepared docker_service for import
module.services.module.novotrans.docker_service.create_replicated_service["jenkins"]: Refreshing state... [id=o0i2rg24dflqequp6pz3pivm9]
Error: rpc error: code = Unavailable desc = transport is closing
data "docker_registry_image" "reg_image_replicated" {
for_each = {for image in local.images: replace(image, ":", "") => image}
name = each.value
}
resource "docker_image" "image_replicated" {
for_each = {for data in data.docker_registry_image.reg_image_replicated: replace(data.name, ":", "") => data}
name = each.value.name
pull_triggers = [
each.value.sha256_digest]
}
resource "docker_service" "create_replicated_service" {
for_each = {for stack in var.replicated_services: stack.name => stack}
name = each.value.name
task_spec {
container_spec {
image = join(":", [
each.value.image,
each.value.version])
hostname = each.value.hostname
dynamic "mounts" {
for_each = [for mount in each.value.mounts: {
source = mount.source
target = mount.target
type = mount.type
external = mount.external
}]
content {
source = mounts.value.external ? mounts.value.source : join(".", [each.key, mounts.value.source])
target = mounts.value.target
type = mounts.value.type
}
}
dynamic "hosts" {
for_each = [for host in each.value.hosts: {
host = host.host
ip = host.ip
}]
content {
host = hosts.value.host
ip = hosts.value.ip
}
}
env = {for environment in each.value.environments: environment.name => environment.value }
}
resources {
limits {
#nano_cpus = 1000000
memory_bytes = each.value.memory
}
}
restart_policy = var.restart_policy
}
mode {
replicated {
replicas = each.value.replicas_nr
}
}
endpoint_spec {
dynamic "ports" {
for_each = [for port in each.value.ports: {
published = port.published
target = port.target
}]
content {
target_port = ports.value.target
published_port = ports.value.published
}
}
}
}
resource "docker_service" "create_global_service" {
for_each = {for stack in var.global_services: stack.name => stack}
name = each.value.name
task_spec {
container_spec {
image = join(":", [
each.value.image,
each.value.version])
hostname = each.value.hostname
dynamic "mounts" {
for_each = [for mount in each.value.mounts: {
source = mount.source
target = mount.target
type = mount.type
external = mount.external
}]
content {
source = mounts.value.external ? mounts.value.source : join(".", [each.key, mounts.value.source])
target = mounts.value.target
type = mounts.value.type
}
}
dynamic "hosts" {
for_each = [for host in each.value.hosts: {
host = host.host
ip = host.ip
}]
content {
host = hosts.value.host
ip = hosts.value.ip
}
}
env = {for environment in each.value.environments: environment.name => environment.value }
}
networks = [for network in each.value.networks: network.name]
resources {
limits {
#nano_cpus = 1000000
memory_bytes = each.value.memory
}
}
}
mode {
global = each.value.global
}
endpoint_spec {
dynamic "ports" {
for_each = [for port in each.value.ports: {
published = port.published
target = port.target
}]
content {
target_port = ports.value.target
published_port = ports.value.published
}
}
}
}
https://gist.github.com/martinezhenry/2584b772cf25abcd4388c83e3ece43b4
Import a docker service that is currently running.
Terraform crashes and the import fails.
Please list the steps required to reproduce the issue, for example:
terraform import 'module.services.module.novotrans.docker_service.create_replicated_service["jenkins"]' $(docker service inspect --format="{{.ID}}" jenkins)
N/A
N/A
This issue was originally opened by @hashibot[bot] as hashicorp/terraform-provider-docker#280. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
I have a terraform file named main.tf whose only resource is a docker_container.
my terraform version is
Terraform v0.12.28
main.tf file is:
provider "docker" {
}
#data "docker_registry_image" "baresip" {
# name = "registry.gitlab.com/greenmns/test"
#}
#resource "docker_image" "baresip" {
# name = "registry.gitlab.com/greenmns/test"
# pull_triggers = ["${data.docker_registry_image.baresip.sha256_digest}"]
#}
resource "docker_container" "baresipcallee" {
name = "baresipcallee"
image = "registry.gitlab.com/greenmns/test"
command = ["1"]
rm = true
}
The image registry.gitlab.com/greenmns/test is available locally on my computer.
When I run terraform apply to create a container, I can see with docker ps that my container is up, but when I run terraform destroy the test container keeps running and is not destroyed.
I think this is a bug in terraform.
In a shell:
+ devices { # forces replacement
+ host_path = "/dev/snd"
}
- devices { # forces replacement
- container_path = "/dev/snd" -> null
- host_path = "/dev/snd" -> null
- permissions = "rwm" -> null
}
This issue was originally opened by @b4nst as hashicorp/terraform-provider-docker#261. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
Terraform v0.12.24
The resource should accept an entrypoint option, as specified in the docker service create documentation.
The resource does not accept an entrypoint option.
Hi there,
Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.
Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade because your issue may have already been fixed.
❯ terraform version
Terraform v0.14.2
+ provider registry.terraform.io/kreuzwerker/docker v2.8.0
Please list the resources as a list, for example:
If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.
terraform {
required_providers {
docker = {
source = "kreuzwerker/docker"
version = "2.8.0"
}
}
}
variable "image" {
default = "ghcr.io/andyshinn/wbld:latest"
}
variable "github_token" {
description = "GitHub token for GHCR auth"
}
variable "discord_token" {
description = "Discord auth token"
}
provider "docker" {
host = "tcp://localhost:2375"
registry_auth {
address = "ghcr.io"
username = "andyshinn"
password = var.github_token
}
}
resource "docker_container" "wbld" {
name = "wbld"
image = docker_image.wbld.latest
command = ["python3", "-m", "wbld.bot"]
env = ["DISCORD_TOKEN=${var.discord_token}"]
volumes {
volume_name = "wbld_platformio"
container_path = "/root/.platformio"
}
volumes {
volume_name = "wbld_buildcache"
container_path = "/root/.buildcache"
}
}
data "docker_registry_image" "wbld" {
name = var.image
}
resource "docker_image" "wbld" {
name = data.docker_registry_image.wbld.name
pull_triggers = [data.docker_registry_image.wbld.sha256_digest]
keep_locally = true
}
Please provide a link to a GitHub Gist containing the complete debug output: https://www.terraform.io/docs/internals/debugging.html. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
https://gist.github.com/andyshinn/2638d4ad091910b55d055b01301bc69a
If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.
The Docker image should not be removed on destroy. Since the image is in use by the container, it fails trying to remove the image.
The Docker image is being removed even though keep_locally is specified.
Please list the steps required to reproduce the issue, for example:
terraform apply
Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? Custom version of OpenStack? Tight ACLs?
Running through GitHub Actions. But tested locally and same behavior occurs.
Hi,
When creating a docker swarm service, and the container happens to run on the DOCKER_HOST (parameter host in the provider), the container started as part of the service receives, after terraform destroy, a SIGKILL immediately rather than stop_signal (and optionally SIGKILL after stop_grace_period if the container has not shut down until then). For containers which happen to be created on worker nodes the code works as expected (and likely also for containers which are created on manager nodes which are not the DOCKER_HOST).
Note: the docker CLI (i.e. docker service rm) does work as expected.
This is a show stopper for all database applications and those containers which contain databases within (such as Nexus3) and can lead to catastrophic data loss as we have learned the hard way.
I have already written here, but did not receive any answers at all (maybe it was the wrong place to post issues):
hashicorp/terraform-provider-docker#313
Terraform v0.13.5
docker = {
source = "terraform-providers/docker"
version = "2.7.2"
}
Note: kreuzwerker/terraform-provider-docker V2.8.0 also has this problem.
docker_service
variable "docker_host" {
type = string
}
variable "node" {
type = string
}
terraform {
required_providers {
docker = {
source = "terraform-providers/docker"
version = "2.7.2"
}
}
}
provider "docker" {
host = var.docker_host
}
resource "docker_service" "test" {
name = "test"
task_spec {
container_spec {
image = "ubuntu"
stop_signal = "SIGTERM"
stop_grace_period = "30s"
command = [ "/bin/bash" ]
args = [ "-c", "echo test; trap 'echo SIGTERM && exit 0' SIGTERM; while true; do sleep 1; done" ]
}
placement {
constraints = [
"node.role==${var.node}"
]
}
}
}
none
none
The container first receives SIGTERM upon shutdown:
xyz$ docker service logs -f test
test.1.z243xk5gqgk0@ci-jct-swarm-cluster-worker-node-0 | test (Note: output after start)
test.1.z243xk5gqgk0@ci-jct-swarm-cluster-worker-node-0 | SIGTERM (Note: output after receiving SIGTERM signal)
When the container runs on the DOCKER_HOST it immediately receives SIGKILL:
xyz$ docker service logs -f test
test.1.z243xk5gqgk0@ci-jct-swarm-cluster-manager-node-0 | test (Note: output after start. Output after receiving SIGTERM is missing)
Create a docker swarm cluster with just 1 manager and 1 worker (this just makes it easy to get the test case working). Then:
terraform init
terraform apply -var 'node=manager' -var 'docker_host=tcp://<place here the DOCKER_HOST ip>:2375'
DOCKER_HOST=tcp://<place here the DOCKER_HOST ip>:2375 docker service logs -f test
terraform destroy -var 'node=manager' -var 'docker_host=tcp://<place here the DOCKER_HOST ip>:2375'
To run the working test case:
Do the same as before but replace -var 'node=manager'
with -var 'node=worker'
Looking at the source code (I am not a go expert!) I found the following:
https://github.com/terraform-providers/terraform-provider-docker/blob/master/docker/resource_docker_service_funcs.go
Line 270ff
func deleteService(serviceID string, d *schema.ResourceData, client *client.Client)
Line 297 actually removes the service (and, I believe, it is the only line needed; everything further down is just there for historic reasons - docker now does everything needed by itself):
if err := client.ServiceRemove(context.Background(), serviceID); err != nil {
Note: you can see in the docker daemon debug log of the DOCKER_HOST a line such as:
time="2020-11-18T18:05:10.871048408Z" level=debug msg="Calling DELETE /v1.40/services/so1zejuqgruzyrsz9c5bz1isn"
... which is the only REST API call made when doing a docker service rm so1zejuqgruzyrsz9c5bz1isn using the docker CLI.
Line 309 is supposed to wait until the container has been removed - but actually always returns immediately:
exitCode, _ := client.ContainerWait(ctx, containerID, container.WaitConditionRemoved)
2020-11-19T16:46:45.960+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:45 [INFO] Found container ['running'] for destroying: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736'
2020-11-19T16:46:45.960+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:45 [INFO] Deleting service: 'i1b30zzgweff3he63r35ky3i3'
2020-11-19T16:46:45.994+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:45 [INFO] Waiting for container: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736' to exit: max 30s
2020-11-19T16:46:46.027+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:46 [INFO] Container exited with code [0xc000094660]: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736'
2020-11-19T16:46:46.027+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:46 [INFO] Removing container: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736'
Note: between waiting and removing container passes just 33ms.
I don't know why it always returned immediately.
Line 318 is supposed to remove the container (why? docker does this by itself!).
if err := client.ContainerRemove(context.Background(), containerID, removeOpts); err != nil {
There are two problems here:
a) This line has the same effect as docker container rm --force <containerID>, which, because of --force, immediately sends a SIGKILL to the container. Due to the first problem (no waiting actually happens), this occurs before Docker has had a chance to remove the service and send SIGTERM to all the containers. Hence the containers on the manager node get killed.
time="2020-11-18T17:56:36.644210735Z" level=debug msg="Calling DELETE /v1.40/services/dll5o060pagmxr0sg9lasjit3"
time="2020-11-18T17:56:36.689439754Z" level=debug msg="Calling POST /v1.40/containers/b00490988356851e84099901a2011264c22024f3bed977b1059fbe0591aa6b5a/wait?condition=removed"
time="2020-11-18T17:56:36.769538131Z" level=debug msg="Calling DELETE /v1.40/containers/b00490988356851e84099901a2011264c22024f3bed977b1059fbe0591aa6b5a?force=1&v=1"
time="2020-11-18T17:56:36.769598227Z" level=debug msg="Sending kill signal 9 to container b00490988356851e84099901a2011264c22024f3bed977b1059fbe0591aa6b5a"
Note: have a look at the time code!
b) The code does not seem to anticipate that container commands (as opposed to service commands) have to be sent to the exact node that runs the container. This is why the problem only shows up on the DOCKER_HOST: the workers (or any other node; this I haven't checked) simply never receive the ContainerRemove command, and the DOCKER_HOST reports that the container is unknown to it.
Disclaimer: these are just my findings from looking hard at the source code; I wanted to share them in the hope that they are useful and maybe save you some time.
This issue was originally opened by @ragurakesh as hashicorp/terraform-provider-docker#291. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.
terraform:0.12.29
provider-docker version: 2.5.0
provider "docker" {
version = "~> 2.5.0"
alias = "default"
}
resource "docker_service" "foo" {
name = "foo-service"
task_spec {
container_spec {
image = "nginx:latest"
}
}
endpoint_spec {
ports {
target_port = "8080"
}
}
}
$ terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# docker_service.foo will be created
+ resource "docker_service" "foo" {
+ id = (known after apply)
+ labels = (known after apply)
+ name = "foo-service"
+ endpoint_spec {
+ mode = (known after apply)
+ ports {
+ protocol = "tcp"
+ publish_mode = "ingress"
+ target_port = 8080
}
}
+ mode {
+ global = (known after apply)
+ replicated {
+ replicas = (known after apply)
}
}
+ task_spec {
+ force_update = (known after apply)
+ restart_policy = (known after apply)
+ runtime = (known after apply)
+ container_spec {
+ image = "nginx:latest"
+ isolation = "default"
+ stop_grace_period = (known after apply)
+ dns_config {
+ nameservers = (known after apply)
+ options = (known after apply)
+ search = (known after apply)
}
+ healthcheck {
+ interval = (known after apply)
+ retries = (known after apply)
+ start_period = (known after apply)
+ test = (known after apply)
+ timeout = (known after apply)
}
}
+ placement {
+ constraints = (known after apply)
+ prefs = (known after apply)
+ platforms {
+ architecture = (known after apply)
+ os = (known after apply)
}
}
+ resources {
+ limits {
+ memory_bytes = (known after apply)
+ nano_cpus = (known after apply)
+ generic_resources {
+ discrete_resources_spec = (known after apply)
+ named_resources_spec = (known after apply)
}
}
+ reservation {
+ memory_bytes = (known after apply)
+ nano_cpus = (known after apply)
+ generic_resources {
+ discrete_resources_spec = (known after apply)
+ named_resources_spec = (known after apply)
}
}
}
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
docker_service.foo: Creating...
docker_service.foo: Creation complete after 5s [id=dgyjey68z60ke8wploo9jpfme]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
$
Once the above configuration is applied, the Docker service should run the nginx container, and applying the same Terraform configuration again should not cause the running container to be recycled.
The plan shows a change to the image configured in the docker_service. If applied again, it causes the running container to be recycled.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
docker_service.foo: Refreshing state... [id=dgyjey68z60ke8wploo9jpfme]
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# docker_service.foo will be updated in-place
~ resource "docker_service" "foo" {
id = "dgyjey68z60ke8wploo9jpfme"
labels = {}
name = "foo-service"
endpoint_spec {
mode = "vip"
ports {
protocol = "tcp"
publish_mode = "ingress"
published_port = 0
target_port = 8080
}
}
mode {
global = false
replicated {
replicas = 1
}
}
~ task_spec {
force_update = 0
networks = []
restart_policy = {
"condition" = "any"
"max_attempts" = "0"
}
runtime = "container"
~ container_spec {
args = []
command = []
env = {}
groups = []
~ image = "nginx:latest@sha256:b0ad43f7ee5edbc0effbc14645ae7055e21bc1973aee5150745632a24a752661" -> "nginx:latest"
isolation = "default"
labels = {}
read_only = false
stop_grace_period = "0s"
dns_config {}
healthcheck {
interval = "0s"
retries = 0
start_period = "0s"
test = []
timeout = "0s"
}
}
placement {
constraints = []
prefs = []
platforms {
architecture = "amd64"
os = "linux"
}
}
resources {
}
}
}
Plan: 0 to add, 1 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
$
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
07522dc88570 nginx:latest "/docker-entrypoint.…" 23 seconds ago Up 19 seconds 80/tcp foo-service.1.v7t3mqux9w3h4xdwft8hxpx9p
8e2cc03cf8e1 nginx:latest "/docker-entrypoint.…" 9 minutes ago Exited (137) 22 seconds ago foo-service.1.4humubt05iongzdxf5xvjsm79
$
Steps to reproduce:
1. terraform apply the above configuration
2. terraform apply again with the same configuration
3. docker ps -a
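The perpetual diff in the plan above comes from the refresh step storing the resolved reference nginx:latest@sha256:... while the configuration only says nginx:latest, so a plain string comparison always reports a change and recycles the container. A hypothetical normalization that strips the digest before comparing (the stripDigest helper is illustrative, not provider code) shows how a diff-suppress check could treat the two forms as equal:

```go
package main

import (
	"fmt"
	"strings"
)

// stripDigest removes a trailing "@sha256:..." digest from an image
// reference, so "nginx:latest@sha256:abc" compares equal to "nginx:latest".
func stripDigest(image string) string {
	if i := strings.Index(image, "@"); i >= 0 {
		return image[:i]
	}
	return image
}

func main() {
	stored := "nginx:latest@sha256:b0ad43f7ee5edbc0effbc14645ae7055e21bc1973aee5150745632a24a752661"
	configured := "nginx:latest"
	fmt.Println(stripDigest(stored) == configured) // true
}
```

Note that suppressing the diff this way trades off the ability to detect that a mutable tag like latest now points at a newer digest, which is presumably why the provider keeps the digest in state.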
Hi everyone!
Can someone show a working example that builds and pushes an image to ECR using the docker_registry_image resource?
Terraform v0.13.5
Please list the resources as a list, for example:
provider "docker" {
registry_auth {
address = "152235879155.dkr.ecr.us-east-1.amazonaws.com"
username = data.aws_ecr_authorization_token.repo.user_name
password = data.aws_ecr_authorization_token.repo.password
config_file_content = jsonencode({
"auths" = {
"152235879155.dkr.ecr.us-east-1.amazonaws.com" = {}
}
"credHelpers" = {
"152235879155.dkr.ecr.us-east-1.amazonaws.com" = "ecr-login"
}
})
}
}
/*
# Also tried this config
config_file_content = jsonencode({
"auths" = {
"152235879155.dkr.ecr.us-east-1.amazonaws.com" = {
"auth": data.aws_ecr_authorization_token.repo.authorization_token,
"email": ""
}
}
"credsStore" = "ecr-login"
})
*/
data "aws_ecr_authorization_token" "repo" {}
resource "docker_registry_image" "helloworld" {
name = "something-amazing/helloworld:2.0"
build {
context = "context"
}
}
An image seems to be building correctly but it can't be pushed to ECR.
Error: Error pushing docker image: Error response from daemon: Bad parameters and missing X-Registry-Auth: EOF
Hi there, I am writing with a feature request to support network aliases in the docker_service resource. Currently docker_service only supports a list of network IDs. Since docker_container already supports the network_advanced block, I think it would be a simple code change to support aliases in docker_service as well.