
Docker Image Resource

Tracks and builds Docker images.

Note: the Docker registry must be v2.

Source Configuration

  • repository: Required. The name of the repository, e.g. concourse/docker-image-resource.

    Note: When configuring a private registry which requires a login, the registry's address must contain at least one '.' (e.g. registry.local) or contain the port (e.g. registry:443 or registry:5000). Otherwise, Docker Hub will be used.

    Note: When configuring a private registry using a non-root CA, you must include the port (e.g. :443 or :5000) even though the docker CLI does not require it.

  • tag: Optional. The tag to track. Defaults to latest.

  • username: Optional. The username to authenticate with when pushing.

  • password: Optional. The password to use when authenticating.

  • additional_private_registries: Optional. An array of objects with the following format:

    additional_private_registries:
    - registry: example.com/my-private-docker-registry
      username: my-username
      password: ((my-secret:my-secret))
    - registry: example.com/another-private-docker-registry
      username: another-username
      password: ((another-secret:another-secret))

    Each entry specifies a private docker registry and credentials to be passed to docker login. This is used when a Dockerfile contains a FROM instruction referring to an image hosted in a docker registry that requires a login.

  • aws_access_key_id: Optional. AWS access key to use for acquiring ECR credentials.

  • aws_secret_access_key: Optional. AWS secret key to use for acquiring ECR credentials.

  • aws_session_token: Optional. AWS session token (assumed role) to use for acquiring ECR credentials.

  • insecure_registries: Optional. An array of CIDRs or host:port addresses to whitelist for insecure access (either http or unverified https). This option overrides any entries in ca_certs with the same address.

  • registry_mirror: Optional. A URL pointing to a docker registry mirror service.

    Note: registry_mirror is ignored if repository contains an explicitly-declared registry-hostname prefix, such as my-registry.com/foo/bar; in that case the registry named in the repository value is used instead of registry_mirror.

  • ca_certs: Optional. An array of objects with the following format:

    ca_certs:
    - domain: example.com:443
      cert: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    - domain: 10.244.6.2:443
      cert: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----

    Each entry specifies the x509 CA certificate for the trusted docker registry residing at the specified domain. This is used to validate the certificate of the docker registry when the registry's certificate is signed by a custom authority (or itself).

    The domain should match the first component of repository, including the port. If the registry specified in repository does not use a custom cert, adding ca_certs will break the check script. This option is overridden by entries in insecure_registries with the same address or a matching CIDR.

  • client_certs: Optional. An array of objects with the following format:

    client_certs:
    - domain: example.com
      cert: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----
    - domain: 10.244.6.2
      cert: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----

    Each entry specifies the x509 certificate and key to use for authenticating against the docker registry residing at the specified domain. The domain should match the first component of repository.

  • max_concurrent_downloads: Optional. Maximum concurrent downloads.

    Limits the number of concurrent download threads.

  • max_concurrent_uploads: Optional. Maximum concurrent uploads.

    Limits the number of concurrent upload threads.
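Putting several of these options together, a source configuration for a private registry signed by a custom CA might look like the following (the registry address and credential names are illustrative):

resources:
- name: my-image
  type: docker-image
  source:
    repository: registry.example.com:5000/my-team/my-image
    tag: latest
    username: my-username
    password: ((registry-password))
    ca_certs:
    - domain: registry.example.com:5000
      cert: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----

Note that the port is included in repository, so the private registry is used rather than Docker Hub, and the ca_certs domain matches the first component of repository including the port.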

Behavior

check: Check for new images.

The current image digest is fetched from the registry for the given tag of the repository.

in: Fetch the image from the registry.

Pulls down the repository image by the requested digest.

The following files will be placed in the destination:

  • /image: If save is true, the docker saved image will be provided here.
  • /repository: The name of the repository that was fetched.
  • /tag: The tag of the repository that was fetched.
  • /image-id: The fetched image ID.
  • /digest: The fetched image digest.
  • /rootfs.tar: If rootfs is true, the contents of the image will be provided here.
  • /metadata.json: Collects custom metadata. Contains the container env variables and running user.
  • /docker_inspect.json: Output of the docker inspect on image_id. Useful if collecting LABEL metadata from your image.

Parameters

  • save: Optional. Place a docker saved image in the destination.
  • rootfs: Optional. Place a .tar file of the image in the destination.
  • skip_download: Optional. Skip docker pull of image. Artifacts based on the image will not be present.
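For example, a get step that saves the image tarball and also extracts the rootfs might look like this (the resource name is illustrative):

get: some-image
params:
  save: true
  rootfs: true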

As with all Concourse resources, to modify the params of the implicit get step that runs after each put step, you may also set these parameters under the put step's get_params. For example:

put: foo
params: {...}
get_params: {skip_download: true}

out: Push an image, or build and push a Dockerfile.

Push a Docker image to the source's repository and tag. The resulting version is the image's digest.

Parameters

  • additional_tags: Optional. Path to a file containing a whitespace-separated list of tags. The Docker build will additionally be pushed with those tags.

  • build: Optional. The path of a directory containing a Dockerfile to build.

  • build_args: Optional. A map of Docker build-time variables. These will be available as environment variables during the Docker build.

    While not stored in the image layers, build args are stored in the image metadata, so it is recommended to avoid using them to pass secrets into the build context. In multi-stage builds, ARGs in earlier stages will not be copied to later stages, nor will they appear in the metadata of the final stage.

    The build metadata environment variables provided by Concourse will be expanded in the values (the syntax is $SOME_ENVVAR or ${SOME_ENVVAR}).

    Example:

    build_args:
      DO_THING: true
      HOW_MANY_THINGS: 2
      EMAIL: [email protected]
      CI_BUILD_ID: concourse-$BUILD_ID
  • build_args_file: Optional. Path to a JSON file containing Docker build-time variables.

    Example file contents:

    { "EMAIL": "[email protected]", "HOW_MANY_THINGS": 1, "DO_THING": false }
  • secrets: Optional. A map of Docker build-time secrets. These will be available as mounted paths only during the docker build phase.

    Secrets are not stored in any metadata or layers, so they are safe to use for access tokens and the like during the build.

    Example:

    secrets:
      secret1: 
        env: BUILD_ID
      secret2:
        source: /a/secret/file.txt
  • cache: Optional. Default false. When the build parameter is set, first pull image:tag from the Docker registry (so as to use cached intermediate images when building). This will cause the resource to fail if it is set to true and the image does not exist yet.

  • cache_from: Optional. An array of images to consider as cache, in order to reuse build steps from a previous build. The array elements are paths to directories generated by a get step with save: true. This has a similar aim to cache, but it loads the images from disk instead of pulling them from the network, so that Concourse resource caching can be used. It also allows more than one image to be specified, which is useful for multi-stage Dockerfiles. If you want to cache an image used in a FROM step, put it in load_bases instead.

  • cache_tag: Optional. Default tag. The specific tag to pull before building when cache parameter is set. Instead of pulling the same tag that's going to be built, this allows picking a different tag like latest or the previous version. This will cause the resource to fail if it is set to a tag that does not exist yet.

  • dockerfile: Optional. The path of the Dockerfile in the directory if it's not at the root of the directory.

  • docker_buildkit: Optional. Set to 1 to enable Docker BuildKit builds.

  • import_file: Optional. A path to a file to docker import and then push.

  • labels: Optional. A map of labels that will be added to the image.

    Example:

    labels:
      commit: b4d4823
      version: 1.0.3
  • labels_file: Optional. Path to a JSON file containing the image labels.

    Example file contents:

    { "commit": "b4d4823", "version": "1.0.3" }
  • load: Optional. The path of a directory containing an image that was fetched using this same resource type with save: true.

  • load_base: Optional. A path to a directory containing an image to docker load before running docker build. The directory must have image, image-id, repository, and tag present, i.e. the tree produced by /in.

  • load_bases: Optional. Same as load_base, but takes an array to load multiple images.

  • load_file: Optional. A path to a file to docker load and then push. Requires load_repository.

  • load_repository: Optional. The repository of the image loaded from load_file.

  • load_tag: Optional. Default latest. The tag of the image loaded from load_file.

  • pull_repository: Optional. DEPRECATED. Use get and load instead. A path to a repository to pull down, and then push to this resource.

  • pull_tag: Optional. DEPRECATED. Use get and load instead. Default latest. The tag of the repository to pull down via pull_repository.

  • tag: Optional. DEPRECATED. Use tag_file instead.

  • tag_file: Optional. The value should be a path to a file containing the name of the tag. When not set, the Docker build will be pushed with tag value set by tag in source configuration.

  • tag_as_latest: Optional. Default false. If true, the pushed image will be tagged as latest in addition to whatever other tag was specified.

  • tag_prefix: Optional. If specified, the tag read from the file will be prepended with this string. This is useful for adding v in front of version numbers.

  • target_name: Optional. Specify the name of the target build stage. Only supported for multi-stage Docker builds.
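As an illustrative sketch combining several of these parameters (the resource and path names are hypothetical):

put: my-image
params:
  build: source-code
  dockerfile: source-code/ci/Dockerfile
  build_args:
    CI_BUILD_ID: concourse-$BUILD_ID
  tag_file: version/number
  tag_prefix: v
  tag_as_latest: true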

Example

resources:
- name: git-resource
  type: git
  source: # ...

- name: git-resource-image
  type: docker-image
  source:
    repository: concourse/git-resource
    username: username
    password: password

- name: git-resource-rootfs
  type: s3
  source: # ...

jobs:
- name: build-rootfs
  plan:
  - get: git-resource
  - put: git-resource-image
    params: {build: git-resource}
    get_params: {rootfs: true}
  - put: git-resource-rootfs
    params: {file: git-resource-image/rootfs.tar}

Development

Prerequisites

  • golang is required - version 1.9.x is tested; earlier versions may also work.
  • docker is required - version 17.06.x is tested; earlier versions may also work.

Running the tests

The tests are embedded in the Dockerfile, ensuring that the testing environment is consistent across any Docker-enabled platform. When the Docker image builds, the tests run inside the container; on failure, they stop the build.

Run the tests with the following command:

docker build -t docker-image-resource --build-arg base_image=paketobuildpacks/run-jammy-base:latest .

To use the newly built image, push it to a docker registry that's accessible to Concourse and configure your pipeline to use it:

resource_types:
- name: docker-image-resource
  type: docker-image
  privileged: true
  source:
    repository: example.com:5000/docker-image-resource
    tag: latest

resources:
- name: some-image
  type: docker-image-resource
  ...

Contributing

Please make all pull requests to the master branch and ensure tests pass locally.


{"timestamp":"1482106844.950472116","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddIndexesToABunchOfStuff","session":"2.47"}}
{"timestamp":"1482106844.954310417","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"3.817523ms","migration":"AddIndexesToABunchOfStuff","session":"2.47"}}
{"timestamp":"1482106844.954331398","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"DropLocks","session":"2.48"}}
{"timestamp":"1482106844.955250740","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"907.387µs","migration":"DropLocks","session":"2.48"}}
{"timestamp":"1482106844.955268860","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddBaggageclaimURLToWorkers","session":"2.49"}}
{"timestamp":"1482106844.958194256","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"2.906294ms","migration":"AddBaggageclaimURLToWorkers","session":"2.49"}}
{"timestamp":"1482106844.958223581","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddContainers","session":"2.50"}}
{"timestamp":"1482106844.960414171","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"2.175941ms","migration":"AddContainers","session":"2.50"}}
{"timestamp":"1482106844.960435629","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddNameToWorkers","session":"2.51"}}
{"timestamp":"1482106844.962730646","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"2.28213ms","migration":"AddNameToWorkers","session":"2.51"}}
{"timestamp":"1482106844.962751150","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddLastScheduledToBuilds","session":"2.52"}}
{"timestamp":"1482106844.966593981","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"3.829099ms","migration":"AddLastScheduledToBuilds","session":"2.52"}}
{"timestamp":"1482106844.966645479","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddCheckTypeAndCheckSourceToContainers","session":"2.53"}}
{"timestamp":"1482106844.967392921","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"722.041µs","migration":"AddCheckTypeAndCheckSourceToContainers","session":"2.53"}}
{"timestamp":"1482106844.967412949","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddStepLocationToContainers","session":"2.54"}}
{"timestamp":"1482106844.970236063","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"2.809739ms","migration":"AddStepLocationToContainers","session":"2.54"}}
{"timestamp":"1482106844.970255375","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddVolumesAndCacheInvalidator","session":"2.55"}}
{"timestamp":"1482106844.974221230","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"3.95204ms","migration":"AddVolumesAndCacheInvalidator","session":"2.55"}}
{"timestamp":"1482106844.974241018","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddCompositeUniqueConstraintToVolumes","session":"2.56"}}
{"timestamp":"1482106844.975455284","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"1.201277ms","migration":"AddCompositeUniqueConstraintToVolumes","session":"2.56"}}
{"timestamp":"1482106844.975474834","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddWorkingDirectoryToContainers","session":"2.57"}}
{"timestamp":"1482106844.975799799","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"308.957µs","migration":"AddWorkingDirectoryToContainers","session":"2.57"}}
{"timestamp":"1482106844.975822926","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"MakeContainerWorkingDirectoryNotNull","session":"2.58"}}
{"timestamp":"1482106844.976839542","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"995.964µs","migration":"MakeContainerWorkingDirectoryNotNull","session":"2.58"}}
{"timestamp":"1482106844.976860046","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddEnvVariablesToContainers","session":"2.59"}}
{"timestamp":"1482106844.979813099","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"2.93797ms","migration":"AddEnvVariablesToContainers","session":"2.59"}}
{"timestamp":"1482106844.979831219","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddModifiedTimeToVersionedResourcesAndBuildOutputs","session":"2.60"}}
{"timestamp":"1482106844.985285282","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"5.439687ms","migration":"AddModifiedTimeToVersionedResourcesAndBuildOutputs","session":"2.60"}}
{"timestamp":"1482106844.985327721","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"ReplaceStepLocationWithPlanID","session":"2.61"}}
{"timestamp":"1482106844.986026287","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"681.493µs","migration":"ReplaceStepLocationWithPlanID","session":"2.61"}}
{"timestamp":"1482106844.986050844","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddTeamsColumnToPipelinesAndTeamsTable","session":"2.62"}}
{"timestamp":"1482106844.992112160","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"6.045483ms","migration":"AddTeamsColumnToPipelinesAndTeamsTable","session":"2.62"}}
{"timestamp":"1482106844.992139339","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"CascadePipelineDeletes","session":"2.63"}}
{"timestamp":"1482106845.007105589","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"14.95037ms","migration":"CascadePipelineDeletes","session":"2.63"}}
{"timestamp":"1482106845.007128000","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddTeamIDToPipelineNameUniqueness","session":"2.64"}}
{"timestamp":"1482106845.008432388","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"1.290541ms","migration":"AddTeamIDToPipelineNameUniqueness","session":"2.64"}}
{"timestamp":"1482106845.008459091","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"MakeVolumesExpiresAtNullable","session":"2.65"}}
{"timestamp":"1482106845.008754015","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"276.577µs","migration":"MakeVolumesExpiresAtNullable","session":"2.65"}}
{"timestamp":"1482106845.008772850","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddAuthFieldsToTeams","session":"2.66"}}
{"timestamp":"1482106845.009393930","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"607.325µs","migration":"AddAuthFieldsToTeams","session":"2.66"}}
{"timestamp":"1482106845.009412050","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddAdminToTeams","session":"2.67"}}
{"timestamp":"1482106845.012938499","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"3.511082ms","migration":"AddAdminToTeams","session":"2.67"}}
{"timestamp":"1482106845.012960911","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"MakeContainersLinkToPipelineIds","session":"2.68"}}
{"timestamp":"1482106845.014106035","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"1.128477ms","migration":"MakeContainersLinkToPipelineIds","session":"2.68"}}
{"timestamp":"1482106845.014125586","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"MakeContainersLinkToResourceIds","session":"2.69"}}
{"timestamp":"1482106845.015459538","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"1.320947ms","migration":"MakeContainersLinkToResourceIds","session":"2.69"}}
{"timestamp":"1482106845.015481234","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"MakeContainersBuildIdsNullable","session":"2.70"}}
{"timestamp":"1482106845.016091347","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"596.226µs","migration":"MakeContainersBuildIdsNullable","session":"2.70"}}
{"timestamp":"1482106845.016112804","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"MakeContainersLinkToWorkerIds","session":"2.71"}}
{"timestamp":"1482106845.022173882","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"6.045328ms","migration":"MakeContainersLinkToWorkerIds","session":"2.71"}}
{"timestamp":"1482106845.022202969","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"RemoveVolumesWithExpiredWorkers","session":"2.72"}}
{"timestamp":"1482106845.022863626","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"640.069µs","migration":"RemoveVolumesWithExpiredWorkers","session":"2.72"}}
{"timestamp":"1482106845.022884607","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddWorkerIDToVolumes","session":"2.73"}}
{"timestamp":"1482106845.026160955","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"3.262195ms","migration":"AddWorkerIDToVolumes","session":"2.73"}}
{"timestamp":"1482106845.026182175","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"RemoveWorkerIds","session":"2.74"}}
{"timestamp":"1482106845.032323599","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"6.126783ms","migration":"RemoveWorkerIds","session":"2.74"}}
{"timestamp":"1482106845.032345295","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddAttemptsToContainers","session":"2.75"}}
{"timestamp":"1482106845.032691002","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"327.023µs","migration":"AddAttemptsToContainers","session":"2.75"}}
{"timestamp":"1482106845.032710075","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddStageToContainers","session":"2.76"}}
{"timestamp":"1482106845.036273718","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"3.550022ms","migration":"AddStageToContainers","session":"2.76"}}
{"timestamp":"1482106845.036294460","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddImageResourceVersions","session":"2.77"}}
{"timestamp":"1482106845.040320158","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"4.007277ms","migration":"AddImageResourceVersions","session":"2.77"}}
{"timestamp":"1482106845.040347338","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"MakeContainerIdentifiersUnique","session":"2.78"}}
{"timestamp":"1482106845.040361881","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"333ns","migration":"MakeContainerIdentifiersUnique","session":"2.78"}}
{"timestamp":"1482106845.040374756","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"CleanUpMassiveUniqueConstraint","session":"2.79"}}
{"timestamp":"1482106845.040713310","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"324.79µs","migration":"CleanUpMassiveUniqueConstraint","session":"2.79"}}
{"timestamp":"1482106845.040734053","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddPipelineBuildEventsTables","session":"2.80"}}
{"timestamp":"1482106845.051132917","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"10.384582ms","migration":"AddPipelineBuildEventsTables","session":"2.80"}}
{"timestamp":"1482106845.051156521","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddBuildPreparation","session":"2.81"}}
{"timestamp":"1482106845.054059982","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"2.883749ms","migration":"AddBuildPreparation","session":"2.81"}}
{"timestamp":"1482106845.054080248","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"DropCompletedFromBuildPreparation","session":"2.82"}}
{"timestamp":"1482106845.054534197","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"439.569µs","migration":"DropCompletedFromBuildPreparation","session":"2.82"}}
{"timestamp":"1482106845.054553747","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddInputsSatisfiedToBuildPreparation","session":"2.83"}}
{"timestamp":"1482106845.057541609","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"2.973019ms","migration":"AddInputsSatisfiedToBuildPreparation","session":"2.83"}}
{"timestamp":"1482106845.057569027","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddOrderToVersionedResources","session":"2.84"}}
{"timestamp":"1482106845.062045336","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"4.45286ms","migration":"AddOrderToVersionedResources","session":"2.84"}}
{"timestamp":"1482106845.062072992","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddImageResourceTypeAndSourceToContainers","session":"2.85"}}
{"timestamp":"1482106845.062751293","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"662.637µs","migration":"AddImageResourceTypeAndSourceToContainers","session":"2.85"}}
{"timestamp":"1482106845.062772036","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddUserToContainer","session":"2.86"}}
{"timestamp":"1482106845.066073656","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"3.280631ms","migration":"AddUserToContainer","session":"2.86"}}
{"timestamp":"1482106845.066095114","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"ResetPendingBuilds","session":"2.87"}}
{"timestamp":"1482106845.066563368","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"454.054µs","migration":"ResetPendingBuilds","session":"2.87"}}
{"timestamp":"1482106845.066582680","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"ResetCheckOrder","session":"2.88"}}
{"timestamp":"1482106845.066959143","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"362.621µs","migration":"ResetCheckOrder","session":"2.88"}}
{"timestamp":"1482106845.066978455","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddTTLToContainers","session":"2.89"}}
{"timestamp":"1482106845.070401192","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"3.407975ms","migration":"AddTTLToContainers","session":"2.89"}}
{"timestamp":"1482106845.070429325","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddOriginalVolumeHandleToVolumes","session":"2.90"}}
{"timestamp":"1482106845.070825100","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"380.544µs","migration":"AddOriginalVolumeHandleToVolumes","session":"2.90"}}
{"timestamp":"1482106845.070849419","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"DropNotNullResourceConstraintsOnVolumes","session":"2.91"}}
{"timestamp":"1482106845.071451187","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"586.697µs","migration":"DropNotNullResourceConstraintsOnVolumes","session":"2.91"}}
{"timestamp":"1482106845.071471453","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddOutputNameToVolumes","session":"2.92"}}
{"timestamp":"1482106845.071826458","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"341.474µs","migration":"AddOutputNameToVolumes","session":"2.92"}}
{"timestamp":"1482106845.071850300","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"CreateResourceTypes","session":"2.93"}}
{"timestamp":"1482106845.076033354","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"4.164597ms","migration":"CreateResourceTypes","session":"2.93"}}
{"timestamp":"1482106845.076059580","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddLastCheckedAndCheckingToResourceTypes","session":"2.94"}}
{"timestamp":"1482106845.081540108","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"5.464789ms","migration":"AddLastCheckedAndCheckingToResourceTypes","session":"2.94"}}
{"timestamp":"1482106845.081563473","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddHttpProxyHttpsProxyNoProxyToWorkers","session":"2.95"}}
{"timestamp":"1482106845.082071066","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"485.848µs","migration":"AddHttpProxyHttpsProxyNoProxyToWorkers","session":"2.95"}}
{"timestamp":"1482106845.082091093","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddModifiedTimeToBuildInputs","session":"2.96"}}
{"timestamp":"1482106845.086103439","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"3.992949ms","migration":"AddModifiedTimeToBuildInputs","session":"2.96"}}
{"timestamp":"1482106845.086130619","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddPathToVolumes","session":"2.97"}}
{"timestamp":"1482106845.086511374","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"360.687µs","migration":"AddPathToVolumes","session":"2.97"}}
{"timestamp":"1482106845.086531401","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddHostPathVersionToVolumes","session":"2.98"}}
{"timestamp":"1482106845.086880207","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"326.715µs","migration":"AddHostPathVersionToVolumes","session":"2.98"}}
{"timestamp":"1482106845.086899519","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddBestIfUsedByToContainers","session":"2.99"}}
{"timestamp":"1482106845.087269068","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"351.09µs","migration":"AddBestIfUsedByToContainers","session":"2.99"}}
{"timestamp":"1482106845.087288141","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddStartTimeToWorkers","session":"2.100"}}
{"timestamp":"1482106845.087649345","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"343.081µs","migration":"AddStartTimeToWorkers","session":"2.100"}}
{"timestamp":"1482106845.087668180","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddReplicatedFromToVolumes","session":"2.101"}}
{"timestamp":"1482106845.088038445","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"351.498µs","migration":"AddReplicatedFromToVolumes","session":"2.101"}}
{"timestamp":"1482106845.088057041","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddSizeToVolumes","session":"2.102"}}
{"timestamp":"1482106845.092068911","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"3.991924ms","migration":"AddSizeToVolumes","session":"2.102"}}
{"timestamp":"1482106845.092097044","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddFirstLoggedBuildIDToJobsAndReapTimeToBuildsAndLeases","session":"2.103"}}
{"timestamp":"1482106845.100163221","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"8.044603ms","migration":"AddFirstLoggedBuildIDToJobsAndReapTimeToBuildsAndLeases","session":"2.103"}}
{"timestamp":"1482106845.100190401","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddMissingInputReasonsToBuildPreparation","session":"2.104"}}
{"timestamp":"1482106845.103264332","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"3.031142ms","migration":"AddMissingInputReasonsToBuildPreparation","session":"2.104"}}
{"timestamp":"1482106845.103297234","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"MakeVolumeSizeBigint","session":"2.105"}}
{"timestamp":"1482106845.107950449","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"4.632296ms","migration":"MakeVolumeSizeBigint","session":"2.105"}}
{"timestamp":"1482106845.107976437","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"MakeContainersExpiresAtNullable","session":"2.106"}}
{"timestamp":"1482106845.108308554","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"312.082µs","migration":"MakeContainersExpiresAtNullable","session":"2.106"}}
{"timestamp":"1482106845.108334064","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddContainerIDToVolumes","session":"2.107"}}
{"timestamp":"1482106845.114335060","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"5.982749ms","migration":"AddContainerIDToVolumes","session":"2.107"}}
{"timestamp":"1482106845.114355087","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddOnDeleteSetNullToFKeyContainerId","session":"2.108"}}
{"timestamp":"1482106845.115856409","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"1.477158ms","migration":"AddOnDeleteSetNullToFKeyContainerId","session":"2.108"}}
{"timestamp":"1482106845.115880251","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddUAAAuthToTeams","session":"2.109"}}
{"timestamp":"1482106845.116235495","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"337.938µs","migration":"AddUAAAuthToTeams","session":"2.109"}}
{"timestamp":"1482106845.116259813","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddTeamIDToBuilds","session":"2.110"}}
{"timestamp":"1482106845.117881298","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"1.604335ms","migration":"AddTeamIDToBuilds","session":"2.110"}}
{"timestamp":"1482106845.117900372","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddPublicToPipelines","session":"2.111"}}
{"timestamp":"1482106845.122563362","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"4.645519ms","migration":"AddPublicToPipelines","session":"2.111"}}
{"timestamp":"1482106845.122589588","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddTeamIDToWorkers","session":"2.112"}}
{"timestamp":"1482106845.123508215","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"898.756µs","migration":"AddTeamIDToWorkers","session":"2.112"}}
{"timestamp":"1482106845.123534441","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddTeamIDToContainers","session":"2.113"}}
{"timestamp":"1482106845.124430656","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"876.62µs","migration":"AddTeamIDToContainers","session":"2.113"}}
{"timestamp":"1482106845.124461889","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddTeamIDToVolumes","session":"2.114"}}
{"timestamp":"1482106845.125264645","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"777.135µs","migration":"AddTeamIDToVolumes","session":"2.114"}}
{"timestamp":"1482106845.125284433","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddNextBuildInputs","session":"2.115"}}
{"timestamp":"1482106845.141204119","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"15.900754ms","migration":"AddNextBuildInputs","session":"2.115"}}
{"timestamp":"1482106845.141245842","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddCaseInsenstiveUniqueIndexToTeamsName","session":"2.116"}}
{"timestamp":"1482106845.142685413","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"1.406725ms","migration":"AddCaseInsenstiveUniqueIndexToTeamsName","session":"2.116"}}
{"timestamp":"1482106845.142706871","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddNonEmptyConstraintToTeamName","session":"2.117"}}
{"timestamp":"1482106845.143285036","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"558.897µs","migration":"AddNonEmptyConstraintToTeamName","session":"2.117"}}
{"timestamp":"1482106845.143304586","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddGenericOAuthToTeams","session":"2.118"}}
{"timestamp":"1482106845.143684387","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"359.468µs","migration":"AddGenericOAuthToTeams","session":"2.118"}}
{"timestamp":"1482106845.143702507","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"MigrateFromLeasesToLocks","session":"2.119"}}
{"timestamp":"1482106845.150577068","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"6.855473ms","migration":"MigrateFromLeasesToLocks","session":"2.119"}}
{"timestamp":"1482106845.150604248","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddTeamNameToPipe","session":"2.120"}}
{"timestamp":"1482106845.152346849","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"1.722645ms","migration":"AddTeamNameToPipe","session":"2.120"}}
{"timestamp":"1482106845.152368307","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"AddConfigToJobsResources","session":"2.121"}}
{"timestamp":"1482106845.167191505","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"14.791682ms","migration":"AddConfigToJobsResources","session":"2.121"}}
{"timestamp":"1482106845.167261839","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"CascadeTeamDeletes","session":"2.122"}}
{"timestamp":"1482106845.175655603","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"8.371956ms","migration":"CascadeTeamDeletes","session":"2.122"}}
{"timestamp":"1482106845.175718307","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"CascadeTeamDeletesOnPipes","session":"2.123"}}
{"timestamp":"1482106845.177376032","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"1.6299ms","migration":"CascadeTeamDeletesOnPipes","session":"2.123"}}
{"timestamp":"1482106845.177399158","source":"atc","message":"atc.db.migrations.migrating.starting-migration","log_level":1,"data":{"migration":"RemoveResourceCheckingFromJobsAndAddManualyTriggeredToBuilds","session":"2.124"}}
{"timestamp":"1482106845.183085918","source":"atc","message":"atc.db.migrations.migrating.finishing-migration","log_level":1,"data":{"duration":"5.664395ms","migration":"RemoveResourceCheckingFromJobsAndAddManualyTriggeredToBuilds","session":"2.124"}}
sslmode is invalid
{"timestamp":"1482106847.561433315","source":"atc","message":"atc.db.migrations.migration-lock-acquired","log_level":1,"data":{"session":"2"}}
sslmode is invalid
{"timestamp":"1482106850.000019312","source":"atc","message":"atc.db.migrations.migration-lock-acquired","log_level":1,"data":{"session":"2"}}
sslmode is invalid
{"timestamp":"1482106852.598994732","source":"atc","message":"atc.db.migrations.migration-lock-acquired","log_level":1,"data":{"session":"2"}}
sslmode is invalid
{"timestamp":"1482106855.583802938","source":"atc","message":"atc.db.migrations.migration-lock-acquired","log_level":1,"data":{"session":"2"}}
sslmode is invalid

This used to work, so presumably the postgresql container has changed to prohibit disabling SSL?

Put should fail when `cat: can't open 'yourimagename.tar': No such file or directory` occurs

waiting for docker to come up...
Login Succeeded
cat: can't open 'input_volume_path/rootfs_file_name.tar': No such file or directory
sha256:1d285d20fe4bef24bd89924b5558bb34f2eee0eceb08d46db1ec8b5b5c7bd099
The push refers to a repository [docker.io/our-dockerhub-org/our-image-name]
e3b0c44298fc: Preparing
e3b0c44298fc: Pushing
e3b0c44298fc: Pushed
our.tag.number: digest: sha256:681229f14a0c2be154fa9e40357f2ecee63ecfed7310520b037c6cb1cf033e17 size: 500

We're not sure why this happened (the output from the previous step being mounted as an empty volume is the most likely culprit), but the rootfs tar archive that the docker commands were referring to didn't exist. The resource put should error out in this case, because when it doesn't, it pushes a 23-byte broken docker image to our repository.

Make logging distinct

This resource is often used as the source for another resource when using resource types. When there is an error, it is not always clear whether it is the docker-image-resource failing or an error message from the actual resource.

Maybe a prefix could be added to the docker resource log/error output to make this distinction clearer?

Upgrade to docker 1.8.1

I'm not sure if the resources are built on every release of concourse, or if it's a manual process. If it's manual, please rebuild the image to get the latest docker. Thanks!

What happened to server_args?

The network range for the servers where I'm looking to run concourse falls in the magic 172.17.0.0/16 range, so I need to specify the '--bip' option to the docker daemon on startup, or my build scripts can't access any of my other servers.

I see in #6 this was added, but then disappeared without any mention that I can find.

Adding docker daemon commands specific to my infrastructure to the build seems a little strange, but without it I can't get this working at all.

Support for docker repository cache

We have the need to use a caching docker repository proxy so that we can speed up our builds by pulling docker images from our local repository instead of getting them from the official docker hub each time.

To accomplish this, the docker daemon must be started with --registry-mirror=https://my-docker-cache (see "Run a local registry mirror" in the Docker documentation).

To facilitate this, we could add a new param called registry_mirror to the docker-image-resource out and in, and have it conditionally add this flag to the docker daemon command in start_docker().

Not able to use custom build base image with private registry

I'm building several docker images that use a common base image. I push all these images to a private registry on my local network. Building the base image and pushing it to the registry works like a charm, but when building the derived image (cross-compiler-darwin-x64) I'm unable to fetch the base image. It always tries to pull the base image from docker.io, regardless of me docker loading the base image upfront.

Is there an error in my configuration?

This is my configuration:

resources:
- name: cross-compiler-base
  type: docker-image
  source:
    repository: 192.168.1.128:5000/baszalmstra/cross-compiler-base
    insecure_registries: [
      "192.168.1.128:5000"
    ]

- name: cross-compiler-darwin-x64
  type: docker-image
  source:
    repository: 192.168.1.128:5000/baszalmstra/cross-compiler-darwin-x64
    insecure_registries: [
      "192.168.1.128:5000"
    ]

jobs:
- name: build-base
  plan:
  - get: cross-compiler-source
    trigger: true
  - put: cross-compiler-base
    params: { build: cross-compiler-source }

- name: build-darwin-x64
  plan:
  - aggregate:
    - get: cross-compiler-source
      trigger: true
      passed: [build-base]
    - get: cross-compiler-base
      passed: [build-base]
      trigger: true
      params: {save: true}
  - put: cross-compiler-darwin-x64
    params:
      build: cross-compiler-source/darwin-x64
      load_base: cross-compiler-base
    get_params: {rootfs: true}

This is the error I get:

Sending build context to Docker daemon  5.12 kB
Step 1 : FROM baszalmstra/cross-compiler-base
Pulling repository docker.io/baszalmstra/cross-compiler-base
Error: image baszalmstra/cross-compiler-base not found

skip_download not working

I have found that skip_download is not working. The issue appears to be that even if the image is not downloaded, 'docker inspect' is still called. But there is no image to inspect, so it fails.

Protocol port of source config

Would you guys be opposed to being able to select a protocol as part of source config?

My issue right now is that I'm using a private docker repo that uses https (not http)

Using resource in "get" doesn't populate /tag correctly

Hi,

First I have a file called image_tag with the following content:

123abc

I'm using the resource as a put in one job and then as a get in another; here's a simplified example:

jobs:
  - name: build
    plan:
      - task: build
        file: build.yml
      - put: my-image
        params:
          build: build
          tag: build/image_tag
  - name: deploy
    plan:
      - get: my-image
        passed: [build]
        trigger: true    
        params:
          skip-download: true
      - task: deploy
        file: deploy.yml

According to the README:

/tag: The tag of the repository that was fetched.

However when I fetch it I see this:

root@f688e206-4802-4915-4082-7ad25f7dfa2e:/tmp/build/b0d51b9f/my-image# cat tag
latest

But if you look at my jobs, and especially at the put params, you see I don't use latest. Instead I'm using my own tag file with the content 123abc.

I assume that with get, the content of /tag should also be 123abc, but right now it's not.

Am I missing something, or is /tag always latest?

Stops working if digest disappears from Artifactory registry

I am finding that the check script fails if the previous digest can't be found. This can happen if, for example, you are looking for 'some_repo:latest' and a new image is pushed to 'some_repo:latest' outside of the control of concourse.
In this case the old image layers can get garbage collected, and so the manifest can't be retrieved using the previous digest, and a fatal error occurs. The check job never recovers.

example in README.md is incorrect

The example talks about another resource type, not one with the word "docker" in it. At the very least it's confusing, and it undermines any claim to correctness the README might be trying to impress upon the reader.

Parsing Repository Bug

When specifying a repository URL in a concourse task, I was getting this error:

resource script '/opt/resource/check []' failed: exit status 1

stderr:
malformed repository url

From investigating further, I found the parsing code can only handle up to 1 folder deep. See https://github.com/concourse/docker-image-resource/blob/master/cmd/check/main.go#L258-L276

WORKS: registry.domain.com/1/image
FAILS: registry.domain.com/1/2/image

Unless there is a requirement that docker image registries can only go one folder deep?

Caching is broken with docker 1.10

By ellerycrane in slack:

Because of the changes in docker 1.10, the only way to preserve the build cache is to use docker save on the image _from the place it was built_, and then use docker load on the output of the former when building it again elsewhere.
the docker-image-resource does the docker build step in the /out script,
and has an option to use save=true in the /in script
However, in the context of /in, it will do a fresh docker pull which will not contain the build cache we want to preserve. So instead, I'd like to take the image built as part of the /out script and use it to update the latest version of the resource, rather than doing /in

See more: moby/moby#20316

Support .dockerignore via params

Would be nice for occasions like this:

- name: build-atc-ci
  public: true
  serial: true
  plan:
  - aggregate:
    - get: concourse-atc-ci
      trigger: true
    - get: selenium-firefox
      trigger: true
      params: {save: true}
    - get: golang-1.7
      trigger: true
  - put: concourse-atc-ci-image
    params:
      load_base: selenium-firefox
      build: .
      dockerfile: concourse-atc-ci/Dockerfile # needs golang-*
      ignore: [selenium-firefox]

Number literals in tag field cause tasks to fail

Using an image resource defined in my task like this:

image_resource:
  type: docker-image
  source:
    repository: java
    tag: 8

I get the following error when my task runs:

resource script '/opt/resource/check []' failed: exit status 1

stderr:
failed to read request: json: cannot unmarshal number into Go value of type string

Works fine if I surround the 8 with quotes so it's inferred to be a string rather than a number. As far as I'm aware, however, a docker tag will always be a string, so it would be nice if the resource didn't trip over this.

Docker Cache

Hello,

Can you please add another example, in particular how to use the docker layer cache when (re)building an image?

Currently my pipeline is this:

resources:
- name: python2.7_source
  type: git
  source:
    uri: http://concourseci:[email protected]_domain.lan/group_xxx/python.2.7.git
    branch: master
    paths: [Dockerfile]

- name: python2.7_image
  type: docker-image
  source:
    insecure_registries:
      - registry.my_domain.lan:5000
    repository: registry.my_domain.lan:5000/my.python2.7

jobs:
- name: python2.7_build
  plan:
    - get: python2.7_source
      trigger: true
    - put: python2.7_image
      params: {build: python2.7_source, cache: true }

But as the docs say, if the image doesn't exist yet the job fails... how do I overcome this?

Thanks in advance

Support pushing both `latest` and a specific version tag in a single `put`

It would be handy to be able to push multiple tags with one put. My specific use case is pushing a version tag and a latest tag.

At the moment I am doing this, which seems to be quite inefficient:

 - do:
   - put: docker-image-latest
     resource: docker-image
     params:
       build: package
   - put: docker-image-versioned
     resource: docker-image
     params:
       tag_prefix: v
       tag: version/number
       build: package

The main use case I can see for this is pushing a latest and a specific version, which could perhaps be configured with an extra resource param. Something like this?

   - put: docker-image
     params:
       tag_prefix: v
       tag: version/number
       also_tag_latest: true
       build: package

Manifest fetch fails with 406 error

I'm getting `failed to fetch digest: 406 Not Acceptable`, which I've narrowed down to the check command. The current theory is that our registry is returning type application/json while check is looking for application/vnd.docker.distribution.manifest.v2+json. See this line: https://github.com/concourse/docker-image-resource/blob/master/cmd/check/main.go#L49

If the header could be made configurable, or application/json simply added to it, that would be great.

Our private registry is an Artifactory Docker-type repository, which may explain why the type is different from what's expected.

Support pulling images by sha id

Unfortunately, some images on docker hub only publish a latest tag. It would be nice if we could specify an image by SHA id to allow more reproducible builds in this case, without requiring me to use my own registry.

Fails to push image to ecr

Hello there. Thank you for the good work on concourse.

I am trying to use this resource to build and push an image to aws ecr using concourse 1.6.0 and I am getting

Removing intermediate container c08ba7172930
Successfully built b963e8fe2ad4
The push refers to a repository [515225352273.dkr.ecr.us-east-1.amazonaws.com/my-image-name]
...
102fca64f924: Preparing
no basic auth credentials
_

My resource config looks like this (variables correctly replaced):

- name: sensor-registry-image
  type: docker-image
  source:
    repository: 911315152731.dkr.ecr.us-east-1.amazonaws.com/utraffik/sensor-registry
    aws_access_key_id: {{aws_access_key_id}}
    aws_secret_access_key: {{aws_secret_access_key}}

And my job looks like this:

- name: build-image
  plan:
  - get: sensor-registry
    passed: [integration]
    trigger: false
  - put: sensor-registry-image
    params: {build: sensor-registry}
  - task: debug
    file: sensor-registry/scripts/deploy.yml

Questions

  • Is there something I am doing wrong?
  • Is there a way I can debug what the resource is doing, for instance seeing what is in ~/.docker/config.json after the resource gets login from aws?
  • How do I know which version of concourse the documentation for this resource works with?

Concourse version= 1.6.0

Help will be greatly appreciated. Thank you.

build image error

Hi,
I found that when I build an image whose FROM image in the Dockerfile comes from a private registry, and that registry is different from the one being pushed to, concourse tells me no basic auth credentials.
Furthermore, even when I get the FROM image in a prior task, docker build still needs to download it from the registry.

Below is the sample config.

pipeline.yml:

resources:
    - name: image need be build and push
      type: docker-image
      source:
        repository: registry1
    - name: the FROM IMAGE
      type: docker-image
      source:
        repository: registry2

jobs:
    - name: build image and push
      plan:
        - get: the FROM IMAGE
        - put: image need be build and push
          params:
            dockerfile: Dockerfile

Dockerfile:

FROM registry2
...

Resource doesn't show orange on pipeline view when check fails

We observe a failed check in a docker-image-resource in the resource view, but we don't see the resource go "orange" in the pipeline view.

This may be an issue with this docker-image-resource, or could be a more general Concourse failure. I was unable to assess whether I could reproduce a similar error with another resource type.

(screenshot: 2016-03-11 at 6:03 PM)

Artificial restriction on docker repository URIs?

Hello,

I'm trying out concourse and I came across a strange issue. I was trying a docker URI that looks something like this:

repo_host/project/ci/img_name

Unfortunately, this was failing even though I knew this was a valid URI. Looking at the code, I found this:

func parseRepository(repository string) (string, string) {
    segs := strings.Split(repository, "/")

    switch len(segs) {
    case 3:
        return segs[0], segs[1] + "/" + segs[2]
    case 2:
        if strings.Contains(segs[0], ":") || strings.Contains(segs[0], ".") {
            return segs[0], segs[1]
        } else {
            return officialRegistry, segs[0] + "/" + segs[1]
        }
    case 1:
        return officialRegistry, "library/" + segs[0]
    }

    fatal("malformed repository url")
    panic("unreachable")
}

For some reason, this expects at most two "/" characters in the URI. Why is this? It seems very arbitrary and restrictive, since Docker supports deeper hierarchies.
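
The restriction looks easy to lift. A sketch of a generalized parseRepository (a hypothetical fix, not the project's code) keeps the existing two-segment heuristic but joins everything after the registry host instead of capping the depth:

```go
package main

import (
	"fmt"
	"strings"
)

const officialRegistry = "registry-1.docker.io"

// parseRepository splits a repository reference into (registry, name),
// allowing arbitrarily deep name hierarchies such as repo_host/project/ci/img_name.
func parseRepository(repository string) (string, string) {
	segs := strings.Split(repository, "/")

	switch {
	case len(segs) >= 3:
		// first segment is the registry host, the rest is the (nested) name
		return segs[0], strings.Join(segs[1:], "/")
	case len(segs) == 2:
		if strings.Contains(segs[0], ":") || strings.Contains(segs[0], ".") {
			return segs[0], segs[1]
		}
		return officialRegistry, segs[0] + "/" + segs[1]
	default:
		return officialRegistry, "library/" + segs[0]
	}
}

func main() {
	fmt.Println(parseRepository("repo_host/project/ci/img_name")) // prints: repo_host project/ci/img_name
}
```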

Split repository into org and image name

It would be nice if this:

 name: git-resource-image
  type: docker-image
  source:
    repository: concourse/git-resource
    email: [email protected]
    username: username
    password: password

was like this

 name: git-resource-image
  type: docker-image
  source:
    org: concourse
    image_name: git-resource
    email: [email protected]
    username: username
    password: password

so I could do this

 name: git-resource-image
  type: docker-image
  source:
    org: {{org-name}}
    image_name: git-resource
    email: [email protected]
    username: username
    password: password

and easily run someone else's pipeline without needing their creds.

Amazon ECR Support Proposal

The Scope of the Problem

I've been playing with Concourse for some time. As it turns out there's just a single thing that keeps me from bringing it into our CI process: I couldn't find a way to provide ECR credentials to the docker-image resource.

The problem lies in the fact that ECR uses temporary tokens to provide access to repos. One would naïvely try to obtain the token manually and provide it to the resource as the password. But then the token expires in 12 hours and the pipeline fails to push the image. In this state the whole idea of Concourse as a CI system is kind of pointless, as one needs to update the credentials in the pipeline.yml file and then push the pipeline via fly every 12 hours.

How This Could Be Improved Upon

When the repository param for the resource is set to an ECR repo like XXXX.dkr.ecr.REGION.amazonaws.com/REPO_NAME, the resource may treat the username and password fields as the AWS access and secret keys. It would then use these to fetch an ECR token from AWS and use that for further docker calls.

Do you think this would be an adequate strategy for tackling the problem?

I would be happy to try to implement this and make a PR if this all makes sense.

Not a valid repository/tag @for

Get the following error when trying to "put" a docker image into an insecure registry (note: I haven't tried this with a proper repo).

Pulling my.insecure.registry:5000/my/application@for (attempt 3 of 3)...
Error parsing reference: "my.insecure.registry:5000/my/application@for" is not a valid repository/tag
Pulling my.insecure.registry:5000/my/styleguide@for (attempt 3 of 3)...
Error parsing reference: "my.insecure.registry:5000/my/styleguide@for" is not a valid repository/tag

Here are the resources:

  - name: application-service
    type: docker-image
    source:
      repository: "my.insecure.registry:5000/my/application"
      insecure_registries: [ "my.insecure.registry:5000" ]

  - name: styleguide-service
    type: docker-image
    source:
      repository: "my.insecure.registry:5000/my/styleguide"
      insecure_registries: [ "my.insecure.registry:5000" ]

Ensure - If does not exist put option

I have a scenario where a build of the same docker image can be triggered from many different pipelines. I would like to reduce the time this task takes if the image already exists.

I know that this does not seem to make a lot of sense if the existing image is not exactly the same as the one that would be produced. I am handling this outside, but a build-if-needed option could also be considered for the resource (it is not as hard if you design the Dockerfile with such a feature in mind).

Anyway, what I would like to see is that, with this option enabled, the put command first checks whether a docker image with that tag already exists at the repo and, if so, simply exits with success.

Resource tries to pull image with incorrect digest

Hi!
I am running fly execute, but it fails while pulling a docker image. From the output I can see that it tries to pull a version of the image that has never existed in the repository (a private HTTP repository).

Here is the task.yml part:

...
image_resource:
  type: docker-image
  source:
   repository: 1.2.3.4:5000/my-image
   insecure_registries: ["1.2.3.4:5000"]
   email: [email protected]
   username: martins
   password: pass
...

Fly command:

fly -t lc execute -c task.yml

Output of the fly command:

...
initializing
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded
Pulling 1.2.3.4:5000/my-image@sha256:59795eb87abd170f1b97f82f5838db5637890962d7bd623f5ce04c7833ef29f0...
Error response from daemon: manifest unknown: manifest unknown

Pulling 1.2.3.4:5000/my-image@sha256:59795eb87abd170f1b97f82f5838db5637890962d7bd623f5ce04c7833ef29f0 (attempt 2 of 3)...
Error response from daemon: manifest unknown: manifest unknown

Pulling 1.2.3.4:5000/my-image@sha256:59795eb87abd170f1b97f82f5838db5637890962d7bd623f5ce04c7833ef29f0 (attempt 3 of 3)...
Error response from daemon: manifest unknown: manifest unknown


Failed to pull image 1.2.3.4:5000/my-image@sha256:59795eb87abd170f1b97f82f5838db5637890962d7bd623f5ce04c7833ef29f0.resource script '/opt/resource/in [/tmp/build/get]' failed: exit status 1
errored

The docker registry doesn't contain a version of this image that has sha256 like 59795eb87abd. I also checked the directory where the images are stored to make sure.

Private registries with no port are not properly handled

Hello,

This issue can probably be seen as an extension of #17.

The current project I am working on makes use of JFrog Bintray's hosted docker repositories, and whilst it generally works, I'm also experiencing some problems.

Sample bintray repositories:

companyrepos-docker-deploy-dev.bintray.io/containername
companyrepos-docker-deploy-dev.bintray.io/containername:latest

Whilst I have approximately 10 Bintray resources defined (which work), I also rely on a public image being pulled from Docker Hub.

As per cmd/check/main.go, the above sample repositories match the second case in the code, and as such the resource tries to look for the images on Docker Hub, which hits the following code path:

return officialRegistry, segs[0] + "/" + segs[1].

This, along with the check resource type scripts, means that Docker Hub gets hit multiple times for non-Docker-Hub-hosted images, which results in rate limiting and eventually 429 Too Many Requests error messages when the pipeline tries to pull the Docker Hub hosted image. If I pause the pipeline for an hour, I'm able to resume docker push/pull operations against Docker Hub as normal.

The exact error message I get is:

resource script '/opt/resource/check []' failed: exit status 1

stderr:
failed to fetch manifest: Get https://registry-1.docker.io/v2/companyrepos-docker-deploy-dev.bintray.io/api/manifests/latest: token auth attempt for registry: https://auth.docker.io/token?account=username&scope=repository%3Acompanyrepos-docker-deploy-dev.bintray.io%2Fapi%3Apull&service=registry.docker.io request failed with status: 429 Too Many Requests

The relevant parsing code:

const officialRegistry = "registry-1.docker.io"

func parseRepository(repository string) (string, string) {
    segs := strings.Split(repository, "/")

    switch len(segs) {
    case 3:
        return segs[0], segs[1] + "/" + segs[2]
    case 2:
        if strings.Contains(segs[0], ":") {
            return segs[0], segs[1]
        } else {
            return officialRegistry, segs[0] + "/" + segs[1]
        }
    case 1:
        return officialRegistry, "library/" + segs[0]
    }

    fatal("malformed repository url")
    panic("unreachable")
}

Are there any plans to add better support for popular hosted repositories like bintray.io, quay.io?

Perhaps something like:

 name: git-resource-image
  type: docker-image
  source:
    provider: bintray.io
    repository: companyrepos-docker-deploy-dev.bintray.io/containername
    email: [email protected]
    username: username
    password: password

Or anything else you feel is appropriate. Locally I'm trying to work around the issue by adding a list of common docker repositories and checking whether what's provided in repository is in the list; if found, just return that repo. It's not complete though, as I'm learning Go at the same time, and I would have much preferred to provide a patch rather than raise this issue.

There is a bintray-resource (https://github.com/jamiemonserrate/bintray-resource), though this only seems to be for non-docker images.

Multiline Build Arguments Break Docker Build/Put

We frequently inject ssh-keys into containers at build time (for example to allow rubygems or npm access to private git repos during build) using --build-arg. When attempting to do this in Concourse, two different error conditions appear.

  - put: docker
    params:
      build: master
      tag: version/version
      build_args:
        KEY: {{ssh-key}}

Results in the following error:

bad flag syntax: -----END

Using | notation to denote a multiline value results in slightly different behavior.

  - put: docker
    params:
      build: master
      tag: version/version
      build_args:
        KEY: |
          {{ssh-key}}

Error:

"docker build" requires exactly 1 argument(s).
See 'docker build --help'.

Usage:  docker build [OPTIONS] PATH | URL | -

Build an image from a Dockerfile

`import_file` configuration should support globs

I'm using it to push out assets that are typically named blah-(semver).tar. I want to be able to specify blah-*.tar in the `import_file` configuration. The current workaround is to use a separate task to rename the tar to a stable name.

Doesn't cope with private registry with no '.' in hostname

If your private registry does not contain a '.', then due to the logic in the following code, the function assumes that you are not using a private registry and attempts to use docker hub.

if echo "${registry}" | fgrep -q '.' ; then

Scenarios like this can crop up when concourse and your registry are on the same network, say within a docker network.

I'm using nexus for my private docker registry/proxy, but if I use the docker FQDN when logging in I get an HTTP 400; yes this is a Nexus issue, but the assumption in the code above is still wrong.

assets/in script does not handle images built using "scratch" repo

Given this pipeline configuration, builds always fail because the source image uses scratch as a FROM, so there is no /etc/passwd file.

Pipeline configuration:

jobs:
  - name: docker-images
    plan:
      - get: nats-image-in
      - put: nats-image-out

resources:
  - name: nats-image-in
    type: docker-image
    source:
      repository: nats
      tag: 0.7.2

  - name: nats-image-out
    type: docker-image
    source:
      insecure_registries: ["192.168.41.159:5000"]
      repository: 192.168.41.159:5000/my-repo/nats
      tag: mytag

Bug is around here:

docker run \
--cidfile=/tmp/container.cid \
-v /opt/resource/print-metadata:/tmp/print-metadata \
--entrypoint /tmp/print-metadata \
"$image_name" > ${destination}/metadata.json

print-metadata should handle this scenario gracefully, these are perfectly valid Docker images.

docker-image-resource doesnt play well with private registry

I have a private registry (basically docker pull registry && docker run -p 5000:5000 registry) and I would like docker-image-resource to play nicely with it.

In the registry I keep some of my base images, which I use for all commands, tasks, etc. One is basically ubuntu:14.04 with a bunch of stuff pre-loaded (like git, curl, etc.). Whenever I add a task with image: docker://192.168.59.103:5000/base-ubuntu it works. But inside a concourse job I would like to build another image out of that base image of mine. Assuming that I have a folder with a Dockerfile starting with FROM base-ubuntu, from a shell it is easy to do:

docker pull 192.168.59.103:5000/base-ubuntu
docker tag 192.168.59.103:5000/base-ubuntu base-ubuntu
docker build -t my-image .
docker tag my-image 192.168.59.103:5000/my-image
docker push 192.168.59.103:5000/my-image

This sequence isn't really possible now with docker-image-resource. Having the folder produced in a previous step, my original approach was:

- put: my-image
  params:
    build: my-folder/foo

And that basically has 2 problems:

  1. registry is insecure
  2. base-image is not found

I tried to add:

  - get: base-image
    resource: base-image

with the appropriate resource, but it didn't help, again due to the insecure registry.

I'm not sure if get should be here at all, maybe not. Either way, I'd love to see docker-image-resource support my scenarios, as basically the only alternative now is to have custom docker scripts running against a custom docker server.

resource not working when running on coreos

This is a followup to my questions in the concourse slack channel.

I'm trying to get concourse running in docker/kubernetes on CoreOS. I'm having problems getting concourse worker to run properly in a privileged docker container. While I managed to get the worker to start, the docker-image resource is not able to start the docker daemon.
The get and put steps get stuck in a loop waiting for docker to come up....
When hijacking into the resource container /tmp/setup_graph.log shows that it failed to mount the loop device:

[...]
+ mount /dev/loop1 /var/lib/docker
mount: mounting /dev/loop1 on /var/lib/docker failed: No such device or address

This error boggles my mind. For an unknown reason the resource container is unable to mount the loop device it successfully configured with losetup. There are more than enough loop devices available, and losetup -a shows the loop device (e.g. /dev/loop8) as being present with the /tmp/docker.img.XXXX. I'm able to mount this loop device from the host system and from inside the privileged docker container running the worker, but not from within the garden container.

The docker daemon itself then fails to start with a rather short and non-descriptive error:

time="2016-02-29T13:39:02.815050686Z" level=warning msg="devmapper: Udev sync is not supported. This will lead to unexpected behavior, data loss and errors. For more information, see https://docs.docker.com/reference/commandline/daemon/#daemon-storage-driver-option"
time="2016-02-29T13:39:02.854841326Z" level=fatal msg="Error starting daemon: error initializing graphdriver: devicemapper: Error running deviceCreate (CreatePool) dm_task_run failed"

I think it might suffer from the same underlying problem, namely that the devmapper driver is somehow unable to set up its loop devices, but I'm not sure.

I noticed that /sys is mounted ro in the resource container but remounting it rw unfortunately does nothing to fix the problem.

I've spent quite some time trying to figure this out but I'm pretty much out of ideas at this point. I would greatly appreciate it if somebody could help me debug this. I'm pretty excited about the idea of having concourse running in kubernetes and would really like to get this working.

I cobbled together a small vagrant based project to reproduce the problem:

git clone https://github.com/databus23/coreos-concourse.git
cd coreos-concourse/
./setup.sh
# When concourse comes up and the first build has been triggered, the container can be hijacked with
./fly -t vagrant hijack -j test/test /bin/sh

Note: This project contains a rebuilt v0.74.0 concourse binary with a patched setup.sh so that garden-linux does not trip over cgroup subsystems mounted as pairs (e.g. cpu,cpuacct). It also contains the latest docker-image-resource to address similar issues in the docker-image resource. The build instructions can be found in the build/ subdirectory.

The error happens on CoreOS stable, beta and alpha, which range from kernel 4.2.2 to 4.4.1 and docker 1.8.3 to 1.10.1.

/cc @vito

Pulling docker images behind a proxy

In concourse v1.1.0 the groundcrew job accepts http and https proxy settings (via http_proxy_url and https_proxy_url respectively).

Please make sure the Docker daemon is started with the http_proxy, https_proxy and no_proxy env variables in order for the Docker client to be able to fetch images when running behind a proxy.
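For a Docker daemon managed by systemd (i.e. on the host, outside Concourse), the usual way to set these variables is a drop-in unit file; inside a resource container the same variables simply need to be present in the daemon's environment before it starts. A sketch, assuming a proxy at proxy.example.com:3128 (substitute your own address):

```
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After adding the drop-in, run systemctl daemon-reload and systemctl restart docker for the daemon to pick up the new environment.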

`check` should support discovering older digests of the image

currently it just fetches the tag's current digest and blindly emits it.

instead, it should:

  1. read version from the request
  2. if present, fetch d1 as the given digest
  3. fetch d2 as the digest at the configured tag
  4. if not the same, emit {d1, d2}.
  5. if they're the same, or no version was given, emit {d2}.
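The steps above could be sketched as a small shell function. This is only an illustration with hypothetical names: the digests are passed in as arguments here, whereas a real check script would read the requested version from the check request on stdin and fetch the tag's current digest from the registry.

```shell
#!/bin/sh
# Sketch of the proposed check logic: emit the requested (older) digest
# followed by the current one when they differ, otherwise just the
# current digest.
emit_versions() {
  requested_digest="$1"   # digest from the incoming version; may be empty
  current_digest="$2"     # digest currently at the configured tag

  if [ -n "$requested_digest" ] && [ "$requested_digest" != "$current_digest" ]; then
    # an older digest was requested and the tag has moved: emit both, oldest first
    printf '[{"digest":"%s"},{"digest":"%s"}]\n' "$requested_digest" "$current_digest"
  else
    # no version given, or the tag is unchanged: emit just the current digest
    printf '[{"digest":"%s"}]\n' "$current_digest"
  fi
}

emit_versions "sha256:aaa" "sha256:bbb"
# → [{"digest":"sha256:aaa"},{"digest":"sha256:bbb"}]
emit_versions "" "sha256:bbb"
# → [{"digest":"sha256:bbb"}]
```

Emitting the older digest first matches Concourse's convention that check returns versions in chronological order.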

"concourse-web" container is failing and "concourse-worker" container cannot be removed.

I'm trying to test out concourse on an ubuntu 14.04 ec2 instance. I am attempting to use the containerized version of the software with the docker-compose example shown here in the documentation. However, on every attempt the concourse-web container fails after about 15 seconds.

More info:

Here is the script I am using to get it up and running:

mkdir concourse
cd concourse
mkdir -p keys/web keys/worker

ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''

ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''

cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
cp ./keys/web/tsa_host_key.pub ./keys/worker

# for ec2
export CONCOURSE_EXTERNAL_URL=$(wget -q -O - http://instance-data/latest/meta-data/public-ipv4)

#creating docker compose file
echo 'concourse-db:
  image: postgres:9.5
  environment:
    POSTGRES_DB: concourse
    POSTGRES_USER: concourse
    POSTGRES_PASSWORD: changeme
    PGDATA: /database

concourse-web:
  image: concourse/concourse
  links: [concourse-db]
  command: web
  ports: ["8080:8080"]
  volumes: ["./keys/web:/concourse-keys"]
  environment:
    CONCOURSE_BASIC_AUTH_USERNAME: concourse
    CONCOURSE_BASIC_AUTH_PASSWORD: changeme
    CONCOURSE_EXTERNAL_URL: "${CONCOURSE_EXTERNAL_URL}"
    CONCOURSE_POSTGRES_DATA_SOURCE: |
      postgres://concourse:changeme@concourse-db:5432/concourse?sslmode=disable

concourse-worker:
  image: concourse/concourse
  privileged: true
  links: [concourse-web]
  command: worker
  volumes: ["./keys/worker:/concourse-keys"]
  environment:
    CONCOURSE_TSA_HOST: concourse-web' > docker-compose.yml

docker-compose up -d

However, about 15 seconds after running docker-compose up -d, the concorse_concourse-web_1 container stops running and I cannot connect to it through a browser at any point. Here are the docker logs of the container right when it fails:

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x5e093a]

goroutine 1 [running]:
panic(0xfba6c0, 0xc820016070)
        /usr/local/go/src/runtime/panic.go:481 +0x3e6
github.com/concourse/atc/atccmd.(*ATCCommand).constructAPIHandler(0xc82023c608, 0x7ff484d1b5d0, 0xc8200501e0, 0xc82026f0e0, 0xc8202c9300, 0x7ff484d1d858, 0xc82030c5c0, 0x7ff484d1d980, 0xc8202afda0, 0x7ff484d1d958, ...)
        /tmp/build/9674af12/concourse/src/github.com/concourse/atc/atccmd/command.go:787 +0x121a
github.com/concourse/atc/atccmd.(*ATCCommand).Runner(0xc82023c608, 0xc820270d30, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0)
        /tmp/build/9674af12/concourse/src/github.com/concourse/atc/atccmd/command.go:221 +0xe44
main.(*WebCommand).Execute(0xc82023c608, 0xc820270d30, 0x0, 0x1, 0x0, 0x0)
        /tmp/build/9674af12/gopath/src/github.com/concourse/bin/cmd/concourse/web.go:54 +0x297
github.com/concourse/bin/vendor/github.com/vito/twentythousandtonnesofcrudeoil.installEnv.func2(0x7ff484d0b5e0, 0xc82023c608, 0xc820270d30, 0x0, 0x1, 0x0, 0x0)
        /tmp/build/9674af12/gopath/src/github.com/concourse/bin/vendor/github.com/vito/twentythousandtonnesofcrudeoil/environment.go:30 +0x81
github.com/concourse/bin/vendor/github.com/jessevdk/go-flags.(*Parser).ParseArgs(0xc8200512c0, 0xc82000a150, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
        /tmp/build/9674af12/gopath/src/github.com/concourse/bin/vendor/github.com/jessevdk/go-flags/parser.go:312 +0xa34
github.com/concourse/bin/vendor/github.com/jessevdk/go-flags.(*Parser).Parse(0xc8200512c0, 0x0, 0x0, 0x0, 0x0, 0x0)
        /tmp/build/9674af12/gopath/src/github.com/concourse/bin/vendor/github.com/jessevdk/go-flags/parser.go:185 +0x9b
main.main()
        /tmp/build/9674af12/gopath/src/github.com/concourse/bin/cmd/concourse/main.go:29 +0x10d

Also, after trying to stop and remove the containers, the concorse_concourse-worker_1 container cannot be removed and shows up in docker ps -a as Dead. The following error occurs when attempting to remove it:

ubuntu@ip-172-31-59-167:~/concorse$ docker rm a005503d568b
Error response from daemon: Driver aufs failed to remove root filesystem a005503d568b4931f860334e95ff37265dc0913083d3592f0291e023275bbf20: rename /var/lib/docker/aufs/diff/9bcff3a39934ea3525bf8a06ef900bf9dfba59a5187747beb65e9ba5709ebf75 /var/lib/docker/aufs/diff/9bcff3a39934ea3525bf8a06ef900bf9dfba59a5187747beb65e9ba5709ebf75-removing: device or resource busy

put should support build-arg

Docker 1.9 introduced build-arg; it would be nice to support this feature.

Something like this (similar to ca_certs):

build-arg: Optional. An array of objects with the following format:

build-arg:
- key: maintainer
  value: [email protected]
- key: ruby-gem-mirror
  value: https://my-mirror.com/
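Internally, each key/value pair in the proposed params would presumably translate into a --build-arg flag on docker build. A minimal sketch of that translation (the helper name and the example values are hypothetical):

```shell
#!/bin/sh
# Sketch: turn key/value pairs (as they would be parsed from the proposed
# build-arg params) into docker build --build-arg flags.
build_arg_flags() {
  out=""
  while [ "$#" -ge 2 ]; do
    # append "--build-arg key=value", space-separated
    out="${out:+$out }--build-arg $1=$2"
    shift 2
  done
  printf '%s\n' "$out"
}

build_arg_flags maintainer dev@example.com ruby-gem-mirror https://my-mirror.com/
# → --build-arg maintainer=dev@example.com --build-arg ruby-gem-mirror=https://my-mirror.com/
```

The resulting flags would then be appended to the docker build invocation in the put script, e.g. docker build $(build_arg_flags ...) -t "$tag" "$build_dir".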
