
concourse-bosh-release's Introduction

concourse-bosh-release

A BOSH release for the concourse binary.

This repository contains the official BOSH release of Concourse. It packages up the concourse binary and exposes all flags via properties on the web and worker jobs. These jobs represent the web node and the worker node, respectively.


Usage

Check out the concourse-bosh-deployment repository for a stub manifest and various ops files.

If you're not familiar with BOSH, you may want to check out the BOSH documentation first.

If you're just looking to get Concourse running quickly and kick the tires, you may want to try the Quick Start instead.

Developing

To add new Concourse flags/env vars to one of the job specs, do the following:

  1. Update the spec file located in the relevant jobs/<job>/ directory
  2. Run ./scripts/generate-job-templates to add the flags to the job template(s)

Note about default values

When adding a new Concourse flag, don't define a default value that mirrors a default set by the Concourse binary.

Instead, mention the default in the description. This prevents the possibility of drift if the Concourse binary default value changes.

containerd.max_containers:
    env: CONCOURSE_CONTAINERD_MAX_CONTAINERS
    description: |
      Maximum container capacity. 0 means no limit. Defaults to 250.

We understand that the comment stating the binary's default can become stale. The current solution is suboptimal; it may be improved in the future by generating the list of default values from the binary itself.
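
An operator then sets the property in the deployment manifest as usual; for the example above, that might look like this (a minimal sketch, with an illustrative value):

properties:
  containerd:
    max_containers: 500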

concourse-bosh-release's Issues

Patch release v5.2.1?

Would it be possible to cut a patch release containing the fix from #31? It would be highly appreciated.

Go 1.15 breaks LDAP integration with AD controllers due to CN x509 field deprecation

Bug Report

Following an upgrade from 6.4.0 to 6.6.0, we're now getting the following error because of our LDAP integration with our Active Directory servers:

"level":"error","source":"atc","message":"atc.dex.event","data":{"fields":{},"message":"Failed to login user: failed to connect: LDAP Result Code 200 \"Network Error\": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0","session":"15"}}

The error is correct in that our AD certificates do not have a SAN entry that matches the CommonName, so we depend on the CN alone. Because the AD servers, and the PKI that issues certificates to them, are out of our control, we have no easy way to remedy the issue ourselves at the moment.

The error is caused by the following breaking change in Go 1.15:
https://golang.org/doc/go1.15#commonname

Could we please introduce an additional env: entry in the bpm.yml file that lets us override the deprecation in Go's x509 package by setting (or adding to) the following environment variable:

GODEBUG='x509ignoreCN=0'
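
For illustration, the requested override might end up looking something like this in the web job's bpm.yml (a sketch following BPM's process env schema, not the actual template; merging with the release's existing env vars is assumed):

processes:
- name: web
  env:
    GODEBUG: x509ignoreCN=0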


  • Concourse version: 6.6.0
  • Deployment type (BOSH/Docker/binary): BOSH
  • Infrastructure/IaaS: vSphere
  • Browser (if applicable): N/A
  • Did this used to work? Yes

Upgrade registry-image resource to 0.10.0

Feature Request

What challenge are you facing?

The registry-image resource appears to be out of date and does not support the new STS authentication method (aws_session_token):

run check step: run check step: check: resource script '/opt/resource/check []' failed: exit status 1

stderr:
ERRO[0000] invalid payload: json: unknown field "aws_session_token"

A Modest Proposal

We should be able to use aws_session_token to push Docker images to AWS ECR, but the resource packaged with this BOSH release appears to be out of date.

concourse/registry-image-resource#161
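
For reference, once the bundled resource is bumped to 0.10.0+, a pipeline should be able to pass the session token like this (a sketch; the repository and credential names are illustrative):

resources:
- name: my-image
  type: registry-image
  source:
    repository: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo
    aws_region: us-east-1
    aws_access_key_id: ((aws_access_key_id))
    aws_secret_access_key: ((aws_secret_access_key))
    aws_session_token: ((aws_session_token))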

Increase systemd TasksMax to prevent "fork rejected by pids controller" errors

We're seeing fork rejected by pids controller happen on workers running many processes. We see this issue discussed in the past: concourse/concourse#480 and concourse/concourse#2296

I feel like maybe this was fixed in the past with some change to garden init files, but now that garden is gone, it's come back?

Would it make sense to add TasksMax=infinity to: https://github.com/concourse/concourse-bosh-release/blob/master/jobs/worker/templates/concourse.service?

Or if we don't want to change it for everybody, expose a new property in the spec file?
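
For reference, the proposed change amounts to a single directive in the unit's [Service] section (a sketch of the suggested edit):

[Service]
TasksMax=infinity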

Allow configuration of `web.noop`

Feature Request

What challenge are you facing?

When facing a disruption in a Concourse installation, it's useful for operators to be able to halt scheduling while performing maintenance, i.e., to leverage the --noop option:

      -n, --noop            Don't actually do any automatic scheduling or checking. [$CONCOURSE_NOOP]

A Modest Proposal

Make the noop flag configurable.
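
Once exposed, operators could flip it with an ops file along these lines (a sketch; the property name and instance group path are assumptions):

- type: replace
  path: /instance_groups/name=web/jobs/name=web/properties/noop?
  value: true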

The following can also be handy:

  • Concourse version: 5.2.1
  • Deployment type (BOSH/Docker/binary): BOSH
  • Infrastructure/IaaS: ~
  • Browser (if applicable): ~

UAA Oauth via LDAP for main_team.oauth.groups?

Bug Report

Hello! We are unable to use UAA as an OAuth2 provider to log into a Concourse BOSH deployment on 5.2.1. We were previously on 4.2.1, where we were able to allow all users access to the main team. Our UAA provider runs on our BOSH director, which uses LDAP as a source and stores users in a UAA DB. When attempting to log in, we receive a 401 error after the redirect back to Concourse, along with the following log entry:

{"timestamp":"2019-08-12T23:10:28.881203774Z","level":"error","source":"atc","message":"atc.sky.callback.failed-to-issue-concourse-token","data":{"error":"user doesn't belong to any team","session":"5.10"}}

The previous output from the web logs shows the user as having no groups (groups=[]). How do we expose these groups to concourse to specify in main_team.oauth.groups?

  • Concourse version: 5.4.1
  • Deployment type : bosh
  • Did this used to work? Yes, when we could allow all users access to MAIN team

concourse_tls_ca_cert issue while upgrading from v6.7.5 to v7.1.0

Hi there!

Bug Report

We upgraded from Concourse v6.7.5 to v7.1.0, and after the upgrade, accessing concourse-web returned the following error:
didn't accept your login certificate, or one may not have been provided
Try contacting the system admin
ERR_BAD_SSL_CLIENT_AUTH_CERT

Looking into concourse-bosh-release commit 2bce683, we removed CONCOURSE_TLS_CA_CERT from /var/vcap/jobs/web/config/bpm.yml and restarted the web process ('monit restart web'). After the restart, we were able to access Concourse.

The following errors were seen in the web node's stderr.log:
2021/03/26 10:34:55 http: TLS handshake error from 172.26.126.205:52072: tls: client didn't provide a certificate
2021/03/26 10:34:56 http: TLS handshake error from 172.26.126.205:52073: tls: client didn't provide a certificate
2021/03/26 10:34:58 http: TLS handshake error from 172.26.126.205:52074: tls: client didn't provide a certificate
2021/03/26 10:34:59 http: TLS handshake error from 172.26.126.205:52075: tls: client didn't provide a certificate
2021/03/26 10:35:00 http: TLS handshake error from 172.26.126.205:52076: tls: client didn't provide a certificate
2021/03/26 10:35:01 http: TLS handshake error from 172.26.126.205:52077: tls: client didn't provide a certificate
2021/03/26 10:35:02 http: TLS handshake error from 172.26.126.205:52079: tls: client didn't provide a certificate

NOTE: We did not set up anything about client cert authentication in the config.

The following can also be handy:

  • Concourse version: v7.1.0
  • Deployment type (BOSH/Docker/binary): BOSH
  • Infrastructure/IaaS: vSphere
  • Browser (if applicable): Chrome
  • Did this used to work?: Yes

Gdn assets are not updated on upgrade

Bug Report

When upgrading to 6.6, folks have observed the following error:

runc run: exit status 1: container_linux.go:349: starting container process caused "process_linux.go:439: container init caused \"process_linux.go:405: setting cgroup config for procHooks process caused \\\"failed to write \\\\\\\"c 5:1 rwm\\\\\\\" to \\\\\\\"/sys/fs/cgroup/devices/system.slice/concourse.service/garden/a206550f-f6dd-4609-4f13-0a11afd3fd93/devices.allow\\\\\\\": write /sys/fs/cgroup/devices/system.slice/concourse.service/garden/a206550f-f6dd-4609-4f13-0a11afd3fd93/devices.allow: operation not permitted\\\"\""

Context

BOSH does not update items under /var/gdn/assets if the directory already exists, causing the old runc to stick around. That's why the BOSH recreate helped.

Workarounds

When you do the BOSH deploy, you can use the --recreate flag. This recreates everything in the deployment and, depending on the canary configuration, can happen on a rolling basis, so you shouldn't see much downtime. Alternatively, updating the stemcell would have the same effect.
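
For example (the deployment name and manifest path are illustrative):

bosh -d concourse deploy concourse.yml --recreate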

The following can also be handy:

  • Concourse version: 6.6
  • Deployment type (BOSH/Docker/binary): BOSH
  • Infrastructure/IaaS: n/a
  • Browser (if applicable): n/a
  • Did this used to work? Yes

Load balancing on web servers

Hi, we are having issues trying to create a cluster for our web servers. When we try to access the web server through the GUI, it is only successful on one of them. Here is an example of a test run:

  1. Manipulate the host file to point our URL at web server 1.
  2. Type the URL into the browser and successfully reach the “welcome to concourse” page.
  3. Click login and log in with my LDAP user successfully.
  4. Manipulate the host file to point our URL at web server 2.
  5. Type the URL into the browser and successfully reach the “welcome to concourse” page.
  6. Click login and log in with my LDAP user, but now I get this message: {"error":"invalid_client","error_description":"Invalid client credentials."}

Also, if I try a monit restart on the web servers, only the most recently restarted server accepts logins.

We also have one error in the log on the postgresql server:

“FATAL: database "vcap" does not exist”

prod-db-backup pipeline is failing

Description

The prod-db-backup pipeline has not run for almost a month.
https://ci.concourse-ci.org/teams/main/pipelines/prod-db-backup

It appears not to be pointing at the ci branch.

Task List

Created a branch here: https://github.com/concourse/ci/tree/clean-bbr-test-pipelines

Independent release cycle

We should give this release its own release cycle and its own release notes; cramming them into the concourse-ci.org release notes feels wrong, and depending on new versions of Concourse feels pretty limiting when it comes to BOSH-specific bugs like #18 and #19.

We'll need to come up with a versioning scheme that isn't coupled to Concourse versions, and be careful to note which Concourse version is embedded.

Setting base_resource_type_defaults prevents Concourse from starting

Bug Report

  • Concourse version: v6.7.1
  • Deployment type (BOSH/Docker/binary): BOSH
  • Infrastructure/IaaS: AWS
  • Browser (if applicable): nop

The following can also be handy

We added the following ops file to work around Docker Hub's pull rate limit:

- type: replace
  path: /instance_groups/name=concourse/jobs/name=web/properties?/base_resource_type_defaults
  value:
    registry-image:
      registry_mirror:
        host: https://<<docker-proxy-url>>

The following errors are output in the Concourse log:

+ [[ vcap != '' ]]
+ chown vcap:vcap /var/vcap/jobs/web/config/env/CONCOURSE_TSA_HOST_KEY
+ chmod 0600 /var/vcap/jobs/web/config/env/CONCOURSE_TSA_HOST_KEY
error: invalid argument for flag `--base-resource-type-defaults' (expected flag.File): stat registry-image:/var/vcap/jobs/web/config/env/CONCOURSE_BASE_RESOURCE_TYPE_DEFAULTS_registry-image: no such file or directory

The content of bpm.yml looks like this, with a registry-image:/ prefix:

CONCOURSE_BASE_RESOURCE_TYPE_DEFAULTS: "registry-image:/var/vcap/jobs/web/config/env/CONCOURSE_BASE_RESOURCE_TYPE_DEFAULTS_registry-image"

I'd like to know whether the content of the ops file is wrong or whether this is a bug.

Garden properties are always included even when using containerd runtime

Bug Report

The env.sh script generated by BOSH (via the Go templating -> ERB templating) always contains the following flags (since they have defaults defined in the spec):

  • CONCOURSE_GARDEN_ALLOW_HOST_ACCESS
  • CONCOURSE_GARDEN_MAX_CONTAINERS
  • CONCOURSE_GARDEN_NETWORK_POOL

This causes an issue when selecting runtime: containerd as Concourse won't allow Garden options when using the containerd runtime.

The following can also be handy:

  • Concourse version: 6.4.1 (actually, master of this repo)
  • Deployment type (BOSH/Docker/binary): BOSH (obviously? 😁)
  • Infrastructure/IaaS: AWS
  • Browser (if applicable): N/A
  • Did this used to work? New feature.

NB: I'm aware that this is a very new feature, so I don't expect it to work perfectly, but since I was taking it for a spin I figured that pointing out the issues I came across would be helpful! I was looking at fixing it by adding some ERB bits to only include the relevant flags based on which runtime is selected, but the Go templating isn't really set up for selecting only certain properties from the spec file, and I didn't want to be too opinionated about changes without discussing with the team first.

file permissions and ownership under BPM of `CONCOURSE_POSTGRES_CLIENT_KEY`

Ahoy! I'm seeing an issue with file permissions and ownership when:

  • running Concourse 5.0.0
  • via concourse-bosh-deployment
  • using the ops file concourse-bosh-deployment/cluster/operations/external-postgres-client-cert.yml.

The TL;DR is that the file CONCOURSE_POSTGRES_CLIENT_KEY is created with perms 0644 and ownership by root, which causes problems because:

  • the library pq requires 0600 permissions on private keys
  • BPM runs the process as vcap, meaning that if 0600 is required, then file ownership must be set to vcap.

There are a few ways we could fix this, and so I'm seeking guidance on what y'all would like me to put into a PR.

What I'm seeing

Out of the box, here's what happens when concourse-bosh-deployment tries to use CONCOURSE_POSTGRES_CLIENT_KEY:

==> /var/vcap/sys/log/web/web.stderr.log <==
failed to migrate database: pq: Private key file has group or world access. Permissions should be u=rw (0600) or less

To level-set, this is what the file permissions look like for the env files:

# ls -lt /var/vcap/data/jobs/web/*/config/env/*
-rw-r--r-- 1 root root 1675 Mar  9 01:51 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_TSA_HOST_KEY
-rw-r--r-- 1 root root 1188 Mar  9 01:51 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_POSTGRES_CLIENT_CERT
-rw-r--r-- 1 root root 1679 Mar  9 01:51 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_POSTGRES_CLIENT_KEY
-rw-r--r-- 1 root root 1675 Mar  9 01:51 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_SESSION_SIGNING_KEY
-rw-r--r-- 1 root root  381 Mar  9 01:51 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_TSA_AUTHORIZED_KEYS_0
-rw-r--r-- 1 root root 1147 Mar  9 01:51 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_POSTGRES_CA_CERT

The code raising this error is lib/pq/ssl_permissions.go, which is called from lib/pq/ssl.go. There doesn't seem to be any way to omit this permissions check.

What I've tried, Part 1

Let's set the file permissions to 0600 as pq demands:

# chmod go-r /var/vcap/data/jobs/web/*/config/env/CONCOURSE_POSTGRES_CLIENT_KEY
# ls -lt /var/vcap/data/jobs/web/*/config/env/*
-rw-r--r-- 1 root root 1675 Mar  9 01:57 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_TSA_HOST_KEY
-rw-r--r-- 1 root root  381 Mar  9 01:57 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_TSA_AUTHORIZED_KEYS_0
-rw-r--r-- 1 root root 1147 Mar  9 01:57 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_POSTGRES_CA_CERT
-rw-r--r-- 1 root root 1188 Mar  9 01:57 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_POSTGRES_CLIENT_CERT
-rw------- 1 root root 1679 Mar  9 01:57 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_POSTGRES_CLIENT_KEY
-rw-r--r-- 1 root root 1675 Mar  9 01:57 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_SESSION_SIGNING_KEY

Now we get this error:

==> /var/vcap/sys/log/web/web.stderr.log <==
failed to migrate database: open /var/vcap/jobs/web/config/env/CONCOURSE_POSTGRES_CLIENT_KEY: permission denied

which, of course, is because BPM runs the process as vcap, which no longer has permission to read this file.

What I've tried, Part 2

So now let's also set the file ownership to vcap:

# chown vcap /var/vcap/data/jobs/web/*/config/env/CONCOURSE_POSTGRES_CLIENT_KEY
# ls -lt /var/vcap/data/jobs/web/*/config/env/*
-rw-r--r-- 1 root root  381 Mar  9 02:47 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_TSA_AUTHORIZED_KEYS_0
-rw-r--r-- 1 root root 1675 Mar  9 02:47 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_TSA_HOST_KEY
-rw-r--r-- 1 root root 1188 Mar  9 02:47 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_POSTGRES_CLIENT_CERT
-rw------- 1 vcap root 1679 Mar  9 02:47 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_POSTGRES_CLIENT_KEY
-rw-r--r-- 1 root root 1675 Mar  9 02:47 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_SESSION_SIGNING_KEY
-rw-r--r-- 1 root root 1147 Mar  9 02:47 /var/vcap/data/jobs/web/452353ea17eb7a4afbc61f44531c3c0135262640/config/env/CONCOURSE_POSTGRES_CA_CERT

And now the web job is able to connect and migrate and everything works.

Request for Guidance

My instinct is to fix this by ensuring that the env_file_writer helper in tmpl/create_env_files.erb.tmpl creates all files with 0600 perms and vcap as owner and group.

Would that approach be acceptable if I submitted a PR?
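
Concretely, the generated script could follow each heredoc with ownership and mode fixes, e.g. (a sketch of the generated shell; key material elided):

cat > /var/vcap/jobs/web/config/env/CONCOURSE_POSTGRES_CLIENT_KEY <<"ENVGEN_EOF"
...
ENVGEN_EOF
chown vcap:vcap /var/vcap/jobs/web/config/env/CONCOURSE_POSTGRES_CLIENT_KEY
chmod 0600 /var/vcap/jobs/web/config/env/CONCOURSE_POSTGRES_CLIENT_KEY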

BOSH stop under high load fails - baggageclaim

Bug Report

Under high load, BOSH deploy command fails due to baggageclaim stop failing.

We have observed this twice on Wings already.

Incident 1
We observed that HTTP response rates were extremely slow on Wings. To rectify the issue, we decided to restart the system and bump the stemcell.

Doing so via a BOSH deploy resulted in the baggageclaim stop script failing, which in turn failed the deploy.

Manually SSHing onto the VM and issuing a monit restart didn't deterministically resolve the issue on a subsequent BOSH deploy.

To resolve the failed deploy, all the workers had to be stopped via stop --hard, which took multiple tries to succeed (each try, a few more workers would be stopped). Finally, another BOSH deploy got the system back into a working state.

  • Concourse version: 4.2.2
  • Deployment type (BOSH/Docker/binary): BOSH
  • Infrastructure/IaaS: GCP

Steps to Reproduce

Unfortunately, there isn't a consistent way to reproduce the issue. What we observed was that the system was under high load (workers had ~200 containers) and was executing builds and resource checks.

Expected Results

BOSH commands such as deploy, stop, and start shouldn't fail because of baggageclaim; such failures prevent restoring the system via BOSH and require increasingly destructive actions to eventually bring it back to a working state.

Multiple worker_gateway.authorized_keys passed in as an array results in none being used by the TSA process

Bug Report

Hi, we (Redis for PCF) have a Concourse deployment that includes both internal workers (which all share one worker key) and external workers (which use a different worker key). We recently started work on upgrading from 4.2.3 to 5.0.0 and found that the way we are templating our web authorized keys:

properties:
  worker_gateway:
    authorized_keys: [((external_worker_key.public_key)), ((internal_worker_key.public_key))]

does not work despite the suggestion in the spec that it should:

Public keys to authorize for SSH connections. Either a string with one
public key per line, or an array of public keys.

It appears that the issue revolves around the fact that the env_file_writer func,

def env_file_writer(v, env)
  path = "/var/vcap/jobs/web/config/env/#{env}"
  case v
  when Array
    v.collect.with_index do |v, i|
      "cat > #{path}_#{i} <<\"ENVGEN_EOF\"\n#{env_file_content(v)}ENVGEN_EOF\n"
    end.join("\n\n")
puts values into separate files when given an array and then passes through a comma-separated list of files:
CONCOURSE_TSA_TEAM_AUTHORIZED_KEYS: <%= env_file_flag(v, "CONCOURSE_TSA_TEAM_AUTHORIZED_KEYS").to_json %>
while the TSA command expects to be passed a single file path: https://github.com/concourse/concourse/blob/c0422a90aed2ff18b8411b9f5dcaf87b784802b8/tsa/tsacmd/command.go#L35

  • Concourse version: 5.0.0
  • Deployment type (BOSH/Docker/binary): BOSH
  • Infrastructure/IaaS: GCP
  • Browser (if applicable): N/A
  • Did this used to work? Yes

Note: as a workaround, we have moved to passing the authorized keys line-separated in a single string, which works:

properties:
  worker_gateway:
    authorized_keys: |
      ((external_worker_key.public_key))
      ((internal_worker_key.public_key))

cc @jplebre

Release 5.0.1 not in latest release tar/zip

Hi there!

Bug Report

When trying to reference the 5.0+ release artifacts for a BOSH-deployed Concourse, no 5.0+ releases exist and the deployment fails.

  • Concourse version: 5.0+
  • Deployment type (BOSH/Docker/binary): BOSH
# ls -lrt ~/concourse-bosh-release-5.0.1/releases/concourse/concourse-5*
ls: cannot access /root/concourse-bosh-release-5.0.1/releases/concourse/concourse-5*: No such file or directory
# ls -lrt ~/concourse-bosh-release-5.0.1/releases/concourse/concourse-4*
-rw-rw-r-- 1 root root 8191 Mar 22 15:53 /root/concourse-bosh-release-5.0.1/releases/concourse/concourse-4.2.3.yml
-rw-rw-r-- 1 root root 8191 Mar 22 15:53 /root/concourse-bosh-release-5.0.1/releases/concourse/concourse-4.2.2.yml
-rw-rw-r-- 1 root root 8191 Mar 22 15:53 /root/concourse-bosh-release-5.0.1/releases/concourse/concourse-4.2.1.yml
-rw-rw-r-- 1 root root 8191 Mar 22 15:53 /root/concourse-bosh-release-5.0.1/releases/concourse/concourse-4.2.0.yml
-rw-rw-r-- 1 root root 7983 Mar 22 15:53 /root/concourse-bosh-release-5.0.1/releases/concourse/concourse-4.1.0.yml
-rw-rw-r-- 1 root root 7984 Mar 22 15:53 /root/concourse-bosh-release-5.0.1/releases/concourse/concourse-4.0.0.yml

Add baggageclaim monit restart when healthcheck fails

The monit config for the baggageclaim job relies on a pid file to determine whether the job is alive. This can lead to false positives when judging system health (we ran into a scenario where baggageclaim's stderr logged 2018/10/11 20:32:16 http: Accept error: accept tcp 0.0.0.0:7788: accept4: too many open files; retrying in 5ms and closed its socket, but monit still listed the process as running).

We might want to consider giving monit a health endpoint so it can handle the lifecycle successfully. Here is an example from the UAA-Release:

      if failed port <%= active_uaa_port %> protocol http
        request "/healthz"
        with timeout 60 seconds for 64 cycles
      then restart
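
Adapted for baggageclaim, that might look like the following (a sketch; the port comes from the log above, and the health endpoint is an assumption):

      if failed port 7788 protocol http
        request "/volumes"
        with timeout 60 seconds for 64 cycles
      then restart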

Make it easier to co-locate prepackaged resource types

It would be nice if we could co-locate BOSH releases that provide resource types. This would be useful for PCF customers so that we can prepackage the cf resource and other resources they need after they're extracted from the core set.

Failed to open connector cf, unknown connector type

Describe the bug

error: server: Failed to open connector cf: failed to open connector: unknown connector type "cf"

We encountered this error in 7.9.1, and it was stated that a fix would be pushed in 7.9.2; however, we are still seeing the error when attempting to deploy 7.10.0.

Reproduction steps

  1. use cf_auth
  2. deploy 7.10.0
  3. get error from web.stderr.log
    ...

Expected behavior

That we can continue to use Cloud Foundry for authentication.

Additional context

The only place we reference cf is in our team configuration YAMLs. I went ahead and updated all of the team configs to have only the local admin user as a member; however, I still encountered the error when trying to deploy.

Is there anything else I should be changing?

unable to authenticate with Hashicorp Vault using approle

Describe the bug

I have my manifest set up as follows:

      vault:
        auth:
          backend: approle
        params: {role_id: <redacted>, secret_id: <redacted>}
        insecure_skip_verify: true
        path_prefix: /concourse
        url: http://vault.xxxx.net:8200

However, the log shows:

web.stdout.log:{"timestamp":"2024-02-20T21:59:11.292528068Z","level":"error","source":"atc","message":"atc.credential-manager.login.failed","data":{"error":"Error making API request.\n\nURL: PUT http://vault.xxx.net:8200/v1/auth/approle/login\nCode: 500. Errors:\n\n* failed to determine alias name from login request","name":"vault","session":"7.1"}}

If I manually run this from the web instance,

curl --url http://vault.ag6hq.net:8200/v1/auth/approle/login \
  --data '{"role_id": "<redacted>", "secret_id": "<redacted>"}' \
  --request PUT

I get a token, and I can read my secrets.
If my config is wrong, great, just let me know.

The error message is quite vague.

Reproduction steps

  1. Create a Vault instance
  2. Create a policy for concourse
  3. Create an approle for concourse and retrieve the role_id and secret_id
  4. Attach the policy to the approle
  5. Create a kv engine for concourse
  6. Create a concourse manifest, which includes the vault keys described above
  7. Create a simple pipeline that reads a secret from the concourse kv engine
    ...

Expected behavior

  • Successfully retrieve a token from Vault
  • Successfully retrieve a secret value from the vault kv engine for concourse

Additional context

This is a lab with no firewalls, and all the services are on one subnet.

Failed to open connector cf, unknown connector type

Bug Report

The recent 7.9.1 release introduced an issue with a connector for us when deploying it via BOSH.

Whenever we make use of the web job's cf_auth property, the web instance's web job will fail with the following error:
error: server: Failed to open connector cf: failed to open connector: unknown connector type "cf"

When reverting back to 7.9.0, all is fine. When removing the cf_auth property, the 7.9.1 deployment also succeeds without issues.

This commit might look, to my untrained eyes, somewhat relevant?

Configure a default drain timeout

Feature Request

What challenge are you facing?

It's somewhat common to forget to set a value for the drain timeout, and then your deployment can end up stuck waiting forever for builds to complete. We could use a saner default here.

A Modest Proposal

Let's just set it to an hour.
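
Until a default lands, operators can set it explicitly; a sketch of an ops file (the property name and path are assumptions):

- type: replace
  path: /instance_groups/name=worker/jobs/name=worker/properties/drain_timeout?
  value: 1h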

Concourse team creation in Bosh manifest

Hi there!

Feature Request

What challenge are you facing?

It seems I am not able to manage Concourse teams and their associated GitHub authentication rules in a BOSH manifest. I believe I am only able to authenticate GitHub users/teams to the "main" Concourse team.

If I were able to do this in the manifest, I could use version control to manage it.

A Modest Proposal

Allow users to define Concourse teams for their deployment in the bosh manifest.

Thanks!!
-Kevin

No published 4.2.3

Bug Report

Concourse 4.2.3 has shipped, yet I can't seem to find a BOSH release for it. The URL to 4.2.3's BOSH release here leads to a 404 on bosh.io.

Unable to deploy to docker-cpi via create-env

Issue

Cross-posting from cloudfoundry/bosh-docker-cpi-release#13; not sure what the root cause is.
I'm trying to deploy Concourse to the docker CPI via create-env. On both Docker for Mac and Docker on Ubuntu, garden isn't starting and the deploy fails.

Versions

Stemcell

bosh.io/stemcells:img-ed853648-e0a9-47cb-4c1b-be7e210e382e

garden-runc

https://bosh.io/d/github.com/cloudfoundry/garden-runc-release?v=1.16.8

log output

bosh/0:/var/vcap/sys/log/monit# cat ./garden.err.log
Cache read/write disabled: interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.)
Warning: unable to find a suitable fs in /proc/mounts, is it mounted?
Use --subdomainfs to override.
Cache read/write disabled: interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.)
Warning: unable to find a suitable fs in /proc/mounts, is it mounted?
Use --subdomainfs to override.

`http_proxy` settings for web

Feature Request

What challenge are you facing?

We have no way of injecting http_proxy variables into the web nodes.

A Modest Proposal

We should probably add them.
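
Following the spec conventions used elsewhere in this release, the additions might look like this (a sketch; the property names are assumptions):

http_proxy:
  env: http_proxy
  description: |
    Proxy URL to use for outgoing HTTP requests from the web node.

no_proxy:
  env: no_proxy
  description: |
    Comma-separated list of hosts that should bypass the proxy.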

database backup through bbr feature fails randomly with concourse 7.2

Bug Report

  • Concourse version: 7.2.0
  • Deployment type (BOSH/Docker/binary): BOSH
  • Infrastructure/IaaS: Openstack and vSphere
  • It was working every time with Concourse 6

We have a backup configured with BBR (BOSH Backup and Restore).
Randomly it fails with the message:
pg_dump: [archiver (db)] query failed: ERROR: relation "public.build_event_id_seq_15739788" does not exist
LINE 1: SELECT last_value, is_called FROM public.build_event_id_seq_...
^
pg_dump: [archiver (db)] query was: SELECT last_value, is_called FROM public.build_event_id_seq_15739788
2021/06/08 02:15:19 You may need to delete the artifact-file that was created before re-running.

Support --emit-to-logs

Feature Request

What challenge are you facing?

We'd like Concourse metrics in our favorite monitoring tool, Stackdriver. Stackdriver supports deriving metrics from log files, and we have a BOSH add-on to push logs to Stackdriver.

A Modest Proposal

If we could enable the --emit-to-logs flag on the ATC, we should be able to get useful Concourse metrics in Stackdriver.

Prepare repo for patch releases

This set of tasks applies only to the 4.2.x branch. If required, we will address the 3.13.x branch separately.
Tasks

  • Create release branch(es) on concourse-bosh-release
  • Remove temporary bucket for 3.13.99
  • Ensure the src dir in concourse-bosh-release matches the corresponding src dir in the concourse/concourse repo
  • Ensure the pipeline yml in concourse-bosh-release matches the corresponding pipeline yml in the concourse/concourse repo
  • Update hotfix.yml pipeline config to only use concourse-bosh-release instead of concourse/concourse

Remaining work

  • Validate shipit and subsequent publish jobs generate the desired artifacts at the appropriate locations
    • finalize release metadata committed to concourse-bosh-release with the commit tagged
    • concourse release tarball in concourse-releases bucket
    • commit to 4.2.x branch of concourse-bosh-deployment with new versions of deps with the commit tagged

Clean-up work (we had temporarily forked repos and bumped versions to exercise the pipeline):

  • Reset version to 4.2.4 (last released version was 4.2.3)
  • Switch concourse-bosh-release & concourse-bosh-deployment refs to concourse org from chenbh

dev/vbox doesn't work

VBox not working on local workstations

Problem

VirtualBox has stopped working on our local workstations when trying to use the scripts in concourse-bosh-release/dev/vbox/.

VirtualBox version: 6.0.4 & 6.0.8
macOS: 10.12.6

Steps to Reproduce / Details

cd ~/workspace/concourse-bosh-release
./dev/vbox/setup

I just tried this on three workstations. Setup failed on two and worked on one. I'm not sure why it works on only one of the three workstations.

The error we get on the two failing workstations:

Deploying:
  Creating instance 'bosh/0':
    Updating instance disks:
      Updating disks:
        Deploying disk:
          Attaching disk in the cloud:
            CPI 'attach_disk' method responded with error: CmdError{"type":"Bosh::Clouds::CloudError","message":"Attaching disk '{{disk-d513549a-b20f-4c62-52fe-aaa4e5263a5b}}' to VM '{{vm-31139fb9-d9da-4a10-5e84-ef0e4b707d4a}}': Reconfiguring agent after attaching disk: Retried '30' times: Running command: 'VBoxManage controlvm vm-31139fb9-d9da-4a10-5e84-ef0e4b707d4a pause', stdout: '', stderr: 'VBoxManage: error: Machine 'vm-31139fb9-d9da-4a10-5e84-ef0e4b707d4a' is not currently running\n': exit status 1","ok_to_retry":false}

Exit code 1

We've also seen a TLS timeout error, but that eventually stopped happening and now we only get this error.

If we use the VirtualBox GUI we'll see there's one VM with the state aborted. If you start that VM you'll see the following error a few seconds after reaching the login prompt:

audit: kauditd hold queue overflow

After seeing this error, clicking anywhere on the window will crash the vm.

Some troubleshooting steps we've tried that haven't worked:

  • Deleting bosh's cache in ~/.bosh/
  • In the bosh-deployment submodule, checking out an earlier commit from before a batch of stemcell changes
  • Restarting our machine
  • Deleting every item we see in the VirtualBox GUI
  • Upgrading VirtualBox to 6.0.8

bbr-atcdb job doesn't produce a valid bbr-sdk config.json when using bosh links

Bug Report

The bbr-atcdb job can optionally consume BOSH links for the database:

consumes:
- name: postgres
  type: database
  optional: true
- name: concourse_db
  type: concourse_db
  optional: true

However, it doesn't read the password from the postgres link:

if postgres_role_password.empty?
  if_link("concourse_db") do |cdb|
    postgres_role_password = cdb.p("postgresql.role.password", "")
  end
  if postgres_role_password.empty?
    raise "postgres.role.password not found through either properties or links"
  end
end

The password is provided by the postgres release here:
https://github.com/cloudfoundry/postgres-release/blob/e19399db5e3913d4f9facbfc25b56b59e8e52f50/jobs/postgres/spec#L33-L40

This results in a config.json file without a password, and backup and restore fails for me.

  • Concourse version: see below
  • Deployment type (BOSH/Docker/binary): BOSH
  • Infrastructure/IaaS: Any

BOSH releases on my director (though it looks like this still applies to the newer releases)

Name                    Version  Commit Hash
backup-and-restore-sdk  1.17.2*  f7138d2
concourse               4.2.1*   0631bd9
garden-runc             1.16.3*  a19258c+
postgres                30*      0cd50c00+

ability to set garden address

It seems that in v5.0 the ability to set the garden address was removed. Would it be possible to add this back?

For BUCC we have multiple jobs that use garden on one BOSH VM (Concourse, CredHub, etc.), so we share one garden-runc.

Bug: incorrect variable name for bbr-atcdb job config

@cirocosta and I found that the password in the config file for bbr-atcdb was empty on the db/0 VM. When we looked at the template for the config, we found a typo (postgressql should be postgresql). Please see the code below:

if_link("concourse_db") do |cdb|
postgres_host = cdb.p("postgressql.host", "")
postgres_port = cdb.p("postgressql.port")
postgres_role_name = cdb.p("postgressql.role.name")
postgres_role_password = cdb.p("postgressql.role.password")
postgres_database = cdb.p("postgressql.database")
end if postgres_host.empty?

expose baggageclaim bind_ip property to support p2p streaming

Feature Request

What challenge are you facing?

I'm trying to enable the new p2p volume streaming feature. However, with the baggageclaim default bind_ip it listens on localhost, so other workers can't reach it to do the streaming. I need to be able to open up the baggageclaim server to bind on 0.0.0.0.

A Modest Proposal

Expose baggageclaim's bind-ip option as a spec property on the worker job so that it can be set manually.
Either that, or automatically set it to 0.0.0.0 when the p2p streaming option is enabled.
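
A sketch of the proposed spec entry, following this release's convention of documenting binary defaults in the description (the env var name comes from the concourse binary's worker flags; the default is an assumption):

baggageclaim.bind_ip:
  env: CONCOURSE_BAGGAGECLAIM_BIND_IP
  description: |
    IP address on which baggageclaim should listen. Defaults to 127.0.0.1.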

Check that windows workers have config parity with linux workers

Throwing an issue in here so this doesn't get forgotten:

We've added support for some new (and old) worker configuration flags to the Linux worker spec. Not all of those features may be supported in the Greenhouse worker runtime, but it's worth quickly checking that whatever we add to the Linux worker spec, we also add for Windows where possible.

bbr-atcdb job on postgres node error filling in template config.json.erb

Bug Report

We're getting this error when deploying Concourse:

Task 11959 | 15:09:17 | Preparing deployment: Preparing deployment (00:00:02)
Task 11959 | 15:09:19 | Preparing deployment: Rendering templates (00:00:01)
                     L Error: Unable to render instance groups for deployment. Errors are:
  - Unable to render jobs for instance group 'db'. Errors are:
    - Unable to render templates for job 'bbr-atcdb'. Errors are:
      - Error filling in template 'config.json.erb' (line 61: Can't find property '["postgresql.ca_cert"]')

The manifest for the db instance group is:

releases:
- name: concourse
  version: 5.2.0
  url: https://bosh.io/d/github.com/concourse/concourse-bosh-release?v=5.2.0
  sha1: e7c813d35e70e1a3a5334b977136ba736fae05e1
- name: bpm
  version: 1.0.4
  url: https://bosh.io/d/github.com/cloudfoundry-incubator/bpm-release?v=1.0.4
  sha1: 41df19697d6a69d2552bc2c132928157fa91abe0
- name: postgres
  version: 37
  url: https://bosh.io/d/github.com/cloudfoundry/postgres-release?v=37
  sha1: 0bffec6b98df81219a18ec8da0e19721be799eed
- name: backup-and-restore-sdk
  version: 1.15.1
  url: https://bosh.io/d/github.com/cloudfoundry-incubator/backup-and-restore-sdk-release?v=1.15.1
  sha1: 364838c384f2edec80866b4abf2397c4c5d15c62
- name: haproxy
  version: 8.9.0
  url: https://github.com/cloudfoundry-incubator/haproxy-boshrelease/releases/download/v8.9.0/haproxy-8.9.0.tgz
  sha1: 0a135d9f5ce4e32dc9f1afd9a0e93baeff71c62d

stemcells:
- alias: xenial
  os: ubuntu-xenial
  version: 315.13

instance_groups:
- name: db
  instances: 1
  persistent_disk_type: ((db_persistent_disk_type))
  vm_type: ((db_vm_type))
  stemcell: xenial
  azs: [z1]
  networks:
  - name: tools-vsphere 
    static_ips: ((db_ip))
  jobs:
  - release: postgres
    name: postgres
    properties:
      databases:
        port: 5432
        databases:
        - name: atc
        roles:
        - name: atc
          password: ((atc-db-password))
  - release: backup-and-restore-sdk
    name: database-backup-restorer
  - release: concourse
    name: bbr-atcdb

I’ve tried several different combinations of properties on bbr-atcdb and haven’t been able to find one that doesn’t cause this error in the template.

Line 61 of config.json.erb is this:

58:    if_link("concourse_db") do |cdb|
59:      postgres_tls_enabled = cdb.p("postgresql.sslmode", false)
60:      if postgres_tls_enabled
61:        postgres_tls_ca = cdb.p("postgresql.ca_cert")
62:        postgres_tls_public_cert = cdb.p("postgresql.client_cert.certificate")
63:        postgres_tls_private_key = cdb.p("postgresql.client_cert.private_key")
64:      end
65:    end

I tried setting postgresql.sslmode to disable and to false, but that didn’t affect the error.

Bosh env

Version            269.0.1 (00000000)
Director Stemcell  ubuntu-xenial/315.13
CPI                vsphere_cpi

Thanks in advance for any response.

Concourse version 6.0.0 upgrade error

Hi there!

Bug Report

When trying to upgrade Concourse from 5.7.0 to 6.0.0, we get the error below.

L Error: Action Failed get_task: Task accae7b5-af7d-4c5e-528d-b8181fde67ad result: Compiling package concourse: Compressing compiled package: Shelling out to tar: Running command: 'tar czf /var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressSpecificFilesInDir612291706 -C /var/vcap/data/packages/concourse/b57b501220dbababf27531d2a2a2a4071003e9ce .', stdout: '', stderr: '

Yet we have enough disk space:

bosh/0:/var/vcap/sys/log/director$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
tmpfs 3.8G 20M 3.8G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/nvme1n1p1 2.9G 1.5G 1.4G 53% /
/dev/nvme0n1p2 41G 1.2G 38G 4% /var/vcap/data
tmpfs 1.0M 52K 972K 6% /var/vcap/data/sys/run
/dev/nvme2n1p1 63G 106M 60G 1% /var/vcap/store

When we tried version 6.1.0 with the same disk space, it succeeded.
Please let us know why the upgrade to version 6.0.0 fails.

  • Concourse version: 6.0.0
  • Deployment type (BOSH/Docker/binary): BOSH
  • Infrastructure/IaaS: AWS
  • Browser (if applicable):
  • Did this used to work? Never tried before.

Thanks,
Umesh Soni

Add contributing guide

Feature Request

What challenge are you facing?

There are certain steps that we need to go through before submitting a PR to this repository, e.g., ensuring that the erb templates were properly generated.

Currently, that's "tribal knowledge" spread across the team that is not openly communicated (and is sometimes forgotten by us 😅).

A Modest Proposal

Have a CONTRIBUTING.md file, just like we do in concourse/concourse (see contributing.md).

Explore Concourse running on Jammy jellyfish

There is an early version of Ubuntu Jammy available on bosh.io.

Try adapting it in our CI and estimate:

  1. What work needs to be done so that Concourse is fully compatible with Ubuntu Jammy?
  2. Does that work qualify for a patch, minor, or major release?

Draining of concourse worker taking very long

Hi there!

Bug Report

When deploying an update of Concourse (via BOSH), the update/recreation of the first worker had not finished after more than 4 hours.
BOSH showed the respective worker as failing (via bosh instances).
Logging in to the worker via bosh ssh, we observed:

So for some reason, the drain behaviour described at https://concourse-ci.org/concourse-worker.html#gracefully-removing-a-worker seems not to work.

We then manually issued another USR2 signal to the worker process using kill -USR2 <worker pid>.
This made the worker finish its running jobs and shut down.
You can see the log below; the worker recreation took 4h29min:

12:53:26 Task 1708127 | 10:53:20 | Preparing deployment: Preparing deployment (00:00:03)
12:53:26 Task 1708127 | 10:53:23 | Preparing deployment: Rendering templates (00:00:02)
12:53:26 Task 1708127 | 10:53:25 | Preparing package compilation: Finding packages to compile (00:00:00)
12:53:26 Task 1708127 | 10:53:25 | Compiling packages: btrfs_tools/797f8df53d2881f034366408b3f043a57f8f4c51
12:53:26 Task 1708127 | 10:53:25 | Compiling packages: postgres-9.6.10/04ecac16e7e53e17d1a1799c0fe874f262f1960ba37514da1b3a30d1c58c13c0
12:53:26 Task 1708127 | 10:53:25 | Compiling packages: postgres-common/9e812f515167406f22e2f983a6c325b0a54e1bd6128aa44e1b8f8bc44034d01f
12:54:22 Task 1708127 | 10:53:25 | Compiling packages: postgres-11.3/c0604a42bdaa3ce61d1b13f7b1017005794c18bb1307cabb30cacb49f30b36ac
12:54:24 Task 1708127 | 10:53:25 | Compiling packages: concourse/faaac11289457bdd4fb8d177051a7d8f03d9ff63
12:56:31 Task 1708127 | 10:54:22 | Compiling packages: postgres-common/9e812f515167406f22e2f983a6c325b0a54e1bd6128aa44e1b8f8bc44034d01f (00:00:57)
12:57:45 Task 1708127 | 10:54:24 | Compiling packages: btrfs_tools/797f8df53d2881f034366408b3f043a57f8f4c51 (00:00:59)
12:57:56 Task 1708127 | 10:56:31 | Compiling packages: concourse/faaac11289457bdd4fb8d177051a7d8f03d9ff63 (00:03:06)
12:58:31 Task 1708127 | 10:57:45 | Compiling packages: postgres-9.6.10/04ecac16e7e53e17d1a1799c0fe874f262f1960ba37514da1b3a30d1c58c13c0 (00:04:20)
12:58:31 Task 1708127 | 10:57:55 | Compiling packages: postgres-11.3/c0604a42bdaa3ce61d1b13f7b1017005794c18bb1307cabb30cacb49f30b36ac (00:04:30)
12:58:31 Task 1708127 | 10:58:31 | Updating instance worker-maintenance: worker-maintenance/d8c3c95a-0353-45d8-85ad-43ba00809758 (0) (canary)
12:58:32 Task 1708127 | 10:58:31 | Updating instance db: db/e768317d-f8f0-4462-a1c3-c9c51c76385e (0) (canary)
13:00:55 Task 1708127 | 10:58:31 | Updating instance web: web/f9fe22ad-e517-4816-8495-41dd69a92e4e (0) (canary)
13:01:18 Task 1708127 | 10:58:31 | Updating instance worker: worker/e1e957d6-355e-43a6-8cde-3610b98fb1dd (0) (canary)
13:01:18 Task 1708127 | 11:00:54 | Updating instance db: db/e768317d-f8f0-4462-a1c3-c9c51c76385e (0) (canary) (00:02:23)
13:01:19 Task 1708127 | 11:01:18 | Updating instance web: web/f9fe22ad-e517-4816-8495-41dd69a92e4e (0) (canary) (00:02:47)
13:01:19 Task 1708127 | 11:01:18 | Updating instance web: web/f9fe22ad-e517-4816-8495-41dd69a92e4e (0) (canary) (00:02:47)
16:01:38 Task 1708127 | 11:01:18 | Updating instance web: web/c10c5e72-0d53-474a-9a75-be897df157df (1)
17:28:03 Task 1708127 | 11:01:18 | Updating instance web: web/c10c5e72-0d53-474a-9a75-be897df157df (1) (00:03:29)
17:28:03 Task 1708127 | 14:01:37 | Updating instance worker-maintenance: worker-maintenance/d8c3c95a-0353-45d8-85ad-43ba00809758 (0) (canary) (03:03:06)
17:31:32 Task 1708127 | 15:28:03 | Updating instance worker: worker/e1e957d6-355e-43a6-8cde-3610b98fb1dd (0) (canary) (04:29:32)
  • Concourse version: v5.2.0
  • Deployment type (BOSH/Docker/binary): BOSH
  • Infrastructure/IaaS: AWS/GCP/Openstack

unable to set vault-max-lease

Hi there!

Bug Report

We are unable to set vault-max-lease in a BOSH deployment manifest. This is apparently a flag that can be set on the ATC but is not available in the BOSH deployment.

  • Concourse version: 4.2.1
  • Deployment type (BOSH/Docker/binary): BOSH
  • Infrastructure/IaaS: Softlayer
  • Browser (if applicable): N/A
  • Did this used to work? NO

Suggestions for generating bcrypted passwords with bosh deploy/credhub?

add_local_users:
  description: |
    List of local concourse users to add with their bcrypted passwords.
    bcrypted password must have a strength of 10 or higher or the user will not be able to login
  default: []
  example:
  - some-user:$2a$10$sKZelZprWWcBAWbp28rB1uFef0Ybxsiqh05uo.H8EIm0sWc6IZGJu
  - some-other-user:$2a$10$.YIYH.5EWQcCvfE49xH/.OhIhGFiNtn.tQq.4pznpcrqZvoLxuKeC

The method for specifying local users is a list of strings that include a bcrypted password. OK. How do I generate a password and get its bcrypted version from CredHub during bosh deploy?

E.g., an ops file like:

- type: replace
  path: /instance_groups/name=web/jobs/name=atc/properties/add_local_users?/-
  value: "myteam-admin:((myteam-password.bcrypt))"

- type: replace
  path: /variables/-
  value:
    name: myteam-password
    type: password

Or is there a different path for this?
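
As an aside, one common way to produce a bcrypt hash out of band (assuming Apache's htpasswd is available; as far as I know, CredHub itself has no bcrypt output) is:

htpasswd -bnBC 10 "" 'my-password' | tr -d ':\n'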
