naorlivne / terraformize

Apply\Destroy Terraform modules via a simple REST API endpoint.

License: GNU Lesser General Public License v3.0


terraformize's Introduction

IMPORTANT - Due to HashiCorp's licensing change, Terraformize will no longer be able to update to new Terraform versions. As a result I'm pausing all work on Terraformize. This is done in part due to legal concerns, as it's unclear if an open-source project would be considered a "competitive offer", and in part because I don't wish to donate my time to the HCP ecosystem now that they have stopped giving back to the FOSS community.

Terraformize

Apply\Destroy Terraform modules via a simple REST API endpoint.

Github actions CI unit tests & auto dockerhub push status: CI/CD

Code coverage: codecov

Features

  • REST API to run:
    • terraform apply
    • terraform destroy
    • terraform plan
  • No code changes needed, supports 100% of all terraform modules unmodified
  • Built in support for multiple terraform workspaces
  • Can pass variables to the terraform run via the request body (passed as a -var arg to the terraform apply or terraform destroy command)
  • Supports multiple module directories
  • Automatically runs terraform init before changes
  • Returned response includes all the logs of stdout & stderr of terraform for easy debugging
  • Stateless (requires you use a non local terraform backend)
  • Containerized
  • Health check endpoint included
  • Supports all terraform backends that support multiple workspaces
  • No DB needed, all data stored at the terraform backend of your choosing
  • terraformize scales out as much as you need risk-free (requires you use a backend that supports state locking)
  • AMD64, Arm & Arm64 support
  • (Optional) Ability to have Terraform run on a separate thread and have it return the terraform run result to a (configurable per request) webhook in a non-blocking way
  • (Optional) Ability to work off a RabbitMQ queue and have it return the terraform run result to a different queue

Possible use cases

  • Setting up SaaS clusters\products\etc. for clients in a fully automatic way
    • Each customer gets their own workspace and you just run the needed modules to create that customer's products
    • Devs don't have to know how the terraform module works, they just point to a REST API & know it creates everything they need
  • CI/CD integration
    • You can consider it a terraform worker that is very easy to trigger via your CI/CD tool, as it's just a REST request away
  • Automatic system creation and\or scaling
    • Easy to integrate with autoscalers, as it can run the same module with different variables passed to the terraform apply command via the request body to scale services up\down as needed
    • Easy to give your company employees a self-service endpoint and have them create and\or remove infrastructure themselves when it's just an API request away
  • Easily separate who writes the terraform modules from who uses them

Running

Running Terraformize is as simple as running a docker container

docker run -d -p 80:80 -v /path/to/my/terraform/module/dir:/www/terraform_modules/ naorlivne/terraformize

Feel free to skip to the end of the document for a working example that will explain how to use Terraformize

Configuration options

Terraformize uses sane defaults but they can all be easily changed:

value envvar default value notes
basic_auth_user BASIC_AUTH_USER None Basic auth username to use
basic_auth_password BASIC_AUTH_PASSWORD None Basic auth password to use
auth_token AUTH_TOKEN None bearer token to use
terraform_binary_path TERRAFORM_BINARY_PATH None The path to the terraform binary, if None will use the default OS PATH to find it
terraform_modules_path TERRAFORM_MODULES_PATH /www/terraform_modules The path to the parent directory where all terraform module directories will be stored at as subdirs
parallelism PARALLELISM 10 The number of parallel resource operations
rabbit_url_connection_string RABBIT_URL_CONNECTION_STRING None The URL parameters string to connect to RabbitMQ with, if unset RabbitMQ will not be used and only the API will be possible
rabbit_read_queue RABBIT_READ_QUEUE terraformize_read_queue Name of the queue to read messages from
rabbit_reply_queue RABBIT_REPLY_QUEUE terraformize_reply_queue Name of the queue to respond with the run result to
CONFIG_DIR /www/config The path to the directory where configuration files are stored at
HOST 0.0.0.0 The IP for gunicorn to bind to
PORT 80 The port for gunicorn to bind to
WORKER_CLASS sync The gunicorn class to use
WORKERS 1 Number of gunicorn workers
THREADS 1 Number of gunicorn threads
PRELOAD False If gunicorn should preload the code
LOG_LEVEL error The log level for gunicorn
TIMEOUT 600 The timeout for gunicorn, if your terraform run takes longer you will need to increase it

The easiest way to change a default value is to pass the envvar key\value to the docker container with the -e CLI arg, but if you want you can also create a configuration file with the settings you wish (in any standard format you desire) & place it in the /www/config folder inside the container.

Most providers also allow setting their configuration (access keys, etc.) via envvars, so using -e CLI args to configure them is ideal as well, but should you prefer a configuration file you can easily mount\copy it into the container.
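The defaults-with-override behaviour described above can be sketched as follows. This is a hypothetical illustration, not Terraformize's actual code; the keys and default values are taken from the table above:

```python
import os

# Sketch (not Terraformize's actual code) of how the documented envvar defaults
# resolve: an envvar overrides the built-in default when set.
DEFAULTS = {
    "TERRAFORM_MODULES_PATH": "/www/terraform_modules",
    "PARALLELISM": "10",
    "RABBIT_READ_QUEUE": "terraformize_read_queue",
    "TIMEOUT": "600",
}

def resolve(key: str) -> str:
    """Return the envvar value if set, otherwise the documented default."""
    return os.environ.get(key, DEFAULTS[key])
```

Passing `-e PARALLELISM=25` to `docker run` would therefore make `resolve("PARALLELISM")` return "25" instead of the default "10".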

Authentication

Terraformize supports 3 authentication methods:

  • Basic auth - requires you to pass an Authorization Basic your_user_pass_base64_combo header, with your_user_pass_base64_combo being the base64 encoding of the basic_auth_user & basic_auth_password configured in Terraformize
  • Bearer auth - requires you to pass an Authorization Bearer your_token header, with your_token being the same as the auth_token configured in Terraformize
  • No auth - used if both Basic auth & Bearer auth are disabled; note that the /v1/health health-check endpoint never requires authentication
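Building these headers follows the standard HTTP auth schemes; a small hypothetical Python helper (the function names are illustrative, not part of Terraformize):

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    """Standard HTTP Basic auth: base64-encode "user:password"."""
    combo = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {combo}"}

def bearer_auth_header(token: str) -> dict:
    """Standard HTTP Bearer auth: pass the token as-is."""
    return {"Authorization": f"Bearer {token}"}

print(basic_auth_header("admin", "secret"))
# {'Authorization': 'Basic YWRtaW46c2VjcmV0'}
```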

Endpoints

  • POST /v1/module_folder_name/workspace_name
    • runs terraform apply for you
    • takes care of auto approval of the run, auto init & workspace switching as needed
    • takes variables which are passed to terraform apply as JSON in the body of the message in the format of {"var_key1": "var_value1", "var_key2": "var_value2"}
    • Returns a 200 HTTP status code if everything is ok, 404 if you gave it a non-existing module_folder_name path & 400 if the terraform apply ran but failed to make all needed modifications
    • Also returns a JSON body of {"init_stdout": "...", "init_stderr": "...", "stderr": "...", "stdout": "..."} with the stderr & stdout of the terraform apply & terraform init run
    • If you pass a webhook URL parameter with the address of the webhook, terraformize will return a 202 HTTP code with a body of {"request_uuid": "ec743bc4-0724-4f44-9ad3-5814071faddx"} to the request, then work behind the scenes to run terraform in a non-blocking way; the result of the terraform run will be sent to the webhook address you configured along with the UUID of the request, so you know which request the result relates to
  • DELETE /v1/module_folder_name/workspace_name
    • runs terraform destroy for you
    • takes care of auto approval of the run, auto init & workspace switching as needed
    • takes variables which are passed to terraform destroy as JSON in the body of the message in the format of {"var_key1": "var_value1", "var_key2": "var_value2"}
    • Returns a 200 HTTP status code if everything is ok, 404 if you gave it a non-existing module_folder_name path & 400 if the terraform destroy ran but failed to make all needed modifications
    • Also returns a JSON body of {"init_stdout": "...", "init_stderr": "...", "stderr": "...", "stdout": "..."} with the stderr & stdout of the terraform destroy & terraform init run
    • In order to preserve the history of terraform runs in your backend the workspace is not deleted automatically, only the infrastructure is destroyed
    • If you pass a webhook URL parameter with the address of the webhook, terraformize will return a 202 HTTP code with a body of {"request_uuid": "ec743bc4-0724-4f44-9ad3-5814071faddx"} to the request, then work behind the scenes to run terraform in a non-blocking way; the result of the terraform run will be sent to the webhook address you configured along with the UUID of the request, so you know which request the result relates to
  • POST /v1/module_folder_name/workspace_name/plan
    • runs terraform plan for you
    • takes care of auto approval of the run, auto init & workspace switching as needed
    • takes variables which are passed to terraform plan as JSON in the body of the message in the format of {"var_key1": "var_value1", "var_key2": "var_value2"}
    • Returns a 200 HTTP status code if everything is ok, 404 if you gave it a non-existing module_folder_name path & 400 if the terraform plan ran but failed to plan all needed modifications
    • Also returns a JSON body of {"init_stdout": "...", "init_stderr": "...", "stderr": "...", "stdout": "...", "exit_code": 0} with the stderr & stdout of the terraform plan & terraform init run
    • If you pass a webhook URL parameter with the address of the webhook, terraformize will return a 202 HTTP code with a body of {"request_uuid": "ec743bc4-0724-4f44-9ad3-5814071faddx"} to the request, then work behind the scenes to run terraform in a non-blocking way; the result of the terraform run will be sent to the webhook address you configured along with the UUID of the request, so you know which request the result relates to
  • GET /v1/health
    • Returns 200 HTTP status code
    • Also returns a JSON body of {"healthy": true}
    • Never needs auth
    • Useful for monitoring the health of the Terraformize service
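A caller can branch on the status codes documented above. The sketch below is a hypothetical client-side helper (the sample bodies are abbreviated versions of the documented response formats):

```python
import json

# Hypothetical sample bodies matching the response formats documented above
# (output values truncated here for brevity).
sync_body = '{"init_stdout": "...", "init_stderr": "", "stderr": "", "stdout": "Apply complete!"}'
async_body = '{"request_uuid": "ec743bc4-0724-4f44-9ad3-5814071faddx"}'

def summarize(status_code: int, body: str) -> str:
    """Sketch of how a caller might branch on the documented status codes."""
    data = json.loads(body)
    if status_code == 202:      # webhook mode: result arrives later, keyed by request_uuid
        return "queued as " + data["request_uuid"]
    if status_code == 200:      # synchronous run finished successfully
        return data["stdout"]
    return data.get("stderr", "unknown error")      # 400 / 404 error paths

print(summarize(200, sync_body))    # Apply complete!
```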

RabbitMQ queue

If you prefer using RabbitMQ instead of the API, you'll need to configure rabbit_url_connection_string (examples can be seen at https://pika.readthedocs.io/en/stable/examples/using_urlparameters.html#using-urlparameters). Terraformize will then use 2 queues on RabbitMQ (defined by the rabbit_read_queue & rabbit_reply_queue params); you don't have to create the queues manually, if need be they will be created.

Now all you need to do in order to have a terraform run is to publish a message to the rabbit_read_queue with the following format:

{
  "module_folder": "module_folder_name",
  "workspace": "workspace_name",
  "uuid": "unique_uuid_you_created_to_identify_the_request",
  "run_type": "apply/destroy/plan",
  "run_variables": {
    "var_to_pass_to_terraform_key": "var_to_pass_to_terraform_value",
    "another_var_to_pass_to_terraform_key": "another_var_to_pass_to_terraform_value"
  }
}

Terraformize will then run terraform for you and will return the result of the terraform run to the rabbit_reply_queue queue in the following format:

{
  "uuid": "unique_uuid_you_created_to_identify_the_request",
  "init_stdout": "...", 
  "init_stderr": "...", 
  "stderr": "...", 
  "stdout": "...", 
  "exit_code": 0
}

It's up to you to ensure the uuid you pass is indeed unique.
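Building the request message (and a unique uuid to match the reply with) can be sketched as follows; `build_run_message` is a hypothetical helper, not part of Terraformize:

```python
import json
import uuid

def build_run_message(module_folder, workspace, run_type, run_variables):
    """Build the request body shown above; the caller keeps the UUID so it can
    match the reply that later appears on rabbit_reply_queue to this request."""
    request_uuid = str(uuid.uuid4())    # uuid4 makes accidental collisions practically impossible
    message = {
        "module_folder": module_folder,
        "workspace": workspace,
        "uuid": request_uuid,
        "run_type": run_type,
        "run_variables": run_variables,
    }
    return request_uuid, json.dumps(message)

req_id, body = build_run_message("terraformize_test", "my_workspace", "apply", {"test_var": "hello-world"})
# `body` is ready to publish to rabbit_read_queue, e.g. via pika's channel.basic_publish
```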

Example

  1. First we will need a terraform module so create a folder named terraformize_test:
    mkdir terraformize_test
    Make sure not to cd into the folder as we will be mounting it into the container from the parent folder in a couple of steps
  2. Now we need a valid terraform configuration in it, if it works in terraform it will work with terraformize but for this example we will keep it simple with a single terraformize_test/test.tf file:
    resource "null_resource" "test" {
      count   = 1
    }
    
    variable "test_var" {
      description = "an example variable"
      default = "my_variable_default_value"
    }
    
    output "test" {
      value = var.test_var
    }
    
  3. We will also need to add the folder we created into the Terraformize container. This can be done in many different ways (for example, creating a container that copies our modules into a new image with the FROM base image being the Terraformize base image), but for this example we will simply mount the folder path into the container as we run it:
    docker run -d -p 80:80 -v `pwd`:/www/terraform_modules naorlivne/terraformize
    
  4. Now we can run the terraform module by simply calling it which will run terraform apply for us (notice how we are passing variables in the body):
    curl -X POST \
      http://127.0.0.1/v1/terraformize_test/my_workspace \
      -H 'Content-Type: application/json' \
      -H 'cache-control: no-cache' \
      -d '{
        "test_var": "hello-world"
    }'
  5. And let's create another copy of the same module's infrastructure in another workspace:
    curl -X POST \
      http://127.0.0.1/v1/terraformize_test/my_other_workspace \
      -H 'Content-Type: application/json' \
      -H 'cache-control: no-cache' \
      -d '{
        "test_var": "hello-world"
    }'
  6. Now that we are done, let's delete them both (this will run terraform destroy for us):
    curl -X DELETE \
      http://127.0.0.1/v1/terraformize_test/my_workspace \
      -H 'Content-Type: application/json' \
      -H 'cache-control: no-cache' \
      -d '{
        "test_var": "hello-world"
    }' 
    curl -X DELETE \
      http://127.0.0.1/v1/terraformize_test/my_other_workspace \
      -H 'Content-Type: application/json' \
      -H 'cache-control: no-cache' \
      -d '{
        "test_var": "hello-world"
    }'
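The curl calls above can also be issued from Python. The sketch below only builds the request objects (a hypothetical helper; the base URL assumes the container is published on port 80 as in step 3):

```python
import json
from urllib import request

BASE_URL = "http://127.0.0.1/v1"    # adjust to wherever the container is published

def terraformize_call(module, workspace, variables, method="POST"):
    """Build the same request the curl examples above send (POST = apply, DELETE = destroy)."""
    return request.Request(
        url=f"{BASE_URL}/{module}/{workspace}",
        data=json.dumps(variables).encode(),
        headers={"Content-Type": "application/json"},
        method=method,
    )

req = terraformize_call("terraformize_test", "my_workspace", {"test_var": "hello-world"})
# request.urlopen(req) would execute it against a running container
```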

terraformize's People

Contributors

chasebolt, dependabot-preview[bot], dependabot[bot], fernandomiguel, naorlivne, tdurieux


terraformize's Issues

support for terraform validate and json output

Hi Team, first of all thank you for this useful project. It is not a bug. Only a question from our side.

We are currently considering using Terraformize for our project eclipse-xpanse as we understand that this will help us scale our application easily.
Currently from our application, we run terraform validate -output json to validate the module and also to return validation response in json format so that it can be parsed.
Is there a way to achieve this using Terraformize?
Thanks again.

Expected/Wanted Behavior

Support for "terraform validate with json output"

Actual Behavior

NA

Steps to Reproduce the Problem

NA

Specifications

Python version:
(3.5 & higher required, lower versions may work but will not be tested against)

terraformize version: NA

OS type & version: NA

Add plan endpoint

Expected/Wanted Behavior

There should be a way to run terraform plan without applying it

Actual Behavior

There's none

pyyaml version 6 not available

Expected/Wanted Behavior

Actual Behavior

PyYAML-6.0.tar.gz#sha256=68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2 (from https://pypi.org/simple/pyyaml/) (requires-python:>=3.6). Command errored out with exit status 1: /usr/local/bin/python /usr/local/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /tmp/tmp_kxbgucj Check the logs for full command output.
#0 12.97 ERROR: Could not find a version that satisfies the requirement PyYAML==6.0 (from versions: 3.10, 3.11, 3.12, 3.13b1, 3.13rc1, 3.13, 4.2b1, 4.2b2, 4.2b4, 5.1b1, 5.1b3, 5.1b5, 5.1, 5.1.1, 5.1.2, 5.2b1, 5.2, 5.3b1, 5.3, 5.3.1, 5.4b1, 5.4b2, 5.4, 5.4.1, 6.0b1, 6.0, 6.0.1)
#0 12.97 ERROR: No matching distribution found for PyYAML==6.0
#0 13.14 WARNING: You are using pip version 21.1.2; however, version 23.2.1 is available.
#0 13.14 You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.

Dockerfile:26

24 |
25 | COPY requirements.txt /www/requirements.txt
26 | >>> RUN pip3 install --no-cache-dir -r /www/requirements.txt
27 |
28 | # copy the codebase

ERROR: failed to solve: process "/bin/sh -c pip3 install --no-cache-dir -r /www/requirements.txt" did not complete successfully: exit code: 1

Steps to Reproduce the Problem

build the docker file

Specifications

Python version:
PyYAML-6.0.tar.gz#sha256=68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2 (from https://pypi.org/simple/pyyaml/) (requires-python:>=3.6). Command errored out with exit status 1: /usr/local/bin/python /usr/local/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /tmp/tmp_kxbgucj Check the logs for full command output.
#0 12.97 ERROR: Could not find a version that satisfies the requirement PyYAML==6.0 (from versions: 3.10, 3.11, 3.12, 3.13b1, 3.13rc1, 3.13, 4.2b1, 4.2b2, 4.2b4, 5.1b1, 5.1b3, 5.1b5, 5.1, 5.1.1, 5.1.2, 5.2b1, 5.2, 5.3b1, 5.3, 5.3.1, 5.4b1, 5.4b2, 5.4, 5.4.1, 6.0b1, 6.0, 6.0.1)
#0 12.97 ERROR: No matching distribution found for PyYAML==6.0
#0 13.14 WARNING: You are using pip version 21.1.2; however, version 23.2.1 is available.
#0 13.14 You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.

Dockerfile:26

24 |
25 | COPY requirements.txt /www/requirements.txt
26 | >>> RUN pip3 install --no-cache-dir -r /www/requirements.txt
27 |
28 | # copy the codebase

ERROR: failed to solve: process "/bin/sh -c pip3 install --no-cache-dir -r /www/requirements.txt" did not complete successfully: exit code: 1

Credentials for GCP provider

Expected/Wanted Behavior

Run terraform with a GCP provider

Actual Behavior

The GCP provider is unable to authenticate and I get an error message "could not find default credentials"

Steps to Reproduce the Problem

1.Use GCP provider in a tf file
2.Execute a POST command
3.

Specifications

I have a "connections.tf" file in my module that points to a credentials file located on the container. When I run POST using the api, it fails with the error message above.
When I ssh to the container and execute "terraform init" locally, I get the same error message.
To try and workaround this, I run "export GOOGLE_APPLICATION_CREDENTIALS="[PATH]"" locally on the container and I am able to execute "terraform init" locally. But the REST call still fails with the same error message.
Python version:
(3.5 & higher required, lower versions may work but will not be tested against)
Python 3.7.5
terraformize version:
latest
OS type & version:
Mac Catalina Version 10.15.2

concurrency issue

Expected/Wanted Behavior

As a user I expect to run a single module in parallel, but with different workspaces.

Actual Behavior

There is a race condition when making multiple curl calls to a single module but different workspaces. If workspaceA (wA) performs a workspace select and then a 2nd API call is triggered for the same module but for workspaceB (wB), we can run into a race condition where the select for wB overwrites the previous workspace select performed for wA. This will cause wA to write into the state file for wB during its apply.

Steps to Reproduce the Problem

Run two curl calls in the background

curl -X POST \
 http://127.0.0.1/v1/test-module/client1 \
 -H 'Content-Type: application/json' \
 -H 'cache-control: no-cache' \
 -d '{}' | jq & \
curl -X POST \
 http://127.0.0.1/v1/test-module/client2 \
 -H 'Content-Type: application/json' \
 -H 'cache-control: no-cache' \
 -d '{}' | jq &

Do this enough times and you will find the state file becomes incorrect.

Also, you can see here when switching workspaces, this file is updated with the workspace name and is used by subsequent terraform commands.

$ cat /www/terraform_modules/test-module/.terraform/environment
client1

Specifications

docker terraformize version: v151

webhook response of long apply/delete calls once run completes

There should be an option to poll long requests: said requests would initially respond with a 202 and work behind the scenes, and there would be some other endpoint to check the status of the request. This will likely mean having to add some sort of DB to keep the open requests' status/progress while still keeping the scale-out mentality this project is built on.

Thinking this should be an optional way of using terraformize, so as to give people who don't need it the option to avoid having to set up a needed DB.

500 Internal Server Error

Expected/Wanted Behavior

Apply terraform files

Actual Behavior

500 Internal Server Error

Steps to Reproduce the Problem

  1. Followed the instructions

Specifications

Python version:
(3.5 & higher required, lower versions may work but will not be tested against)
3.7.5
terraformize version:
latest
OS type & version:
Mac Catalina 10.15.2

init before workspace create is required with remote backends

Expected/Wanted Behavior

When using a fresh remote backend (gcs in my case) it should properly create the workspace.

Actual Behavior

On the first run it will store the state in default.tfstate, subsequent runs it will properly store it in the <workspace>.tfstate

When using a remote backend (gcs), you can not run workspace create before init. Currently on the first run with a fresh backend bucket, the workspace create will fail and then run init, causing it to create a default.tfstate instead of <workspace>.tfstate. Once a default.tfstate has been established, on subsequent runs the workspace create will succeed.

Steps to Reproduce the Problem

  1. Setup a remote backend and use example provided in README
  2. curl terraformize (creates all resources)
  3. View remote backend to see a default.tfstate was created instead of <workspace>.tfstate
  4. curl terraformize a 2nd time (creates all resources again)
  5. View remote backend to see a <workspace>.tfstate and a default.tfstate.

Specifications

Using docker container. FROM naorlivne/terraformize:latest

Logs

/www/terraform_modules/droplets # terraform workspace new client1
Backend reinitialization required. Please run "terraform init".
Reason: Initial configuration of the requested backend "gcs"

The "backend" is the interface that Terraform uses to store state,
perform operations, etc. If this message is showing up, it means that the
Terraform configuration you're using is using a custom configuration for
the Terraform backend.

Changes to backend configurations require reinitialization. This allows
Terraform to setup the new configuration, copy existing state, etc. This is
only done during "terraform init". Please run that command now then try again.

If the change reason above is incorrect, please verify your configuration
hasn't changed and try again. At this point, no changes to your existing
configuration or state have been made.


Error: Initialization required. Please see the error message above.

Race condition

Expected/Wanted Behavior

I want to be able to scale out with multiple copies of terraformize behind a load balancer without sticky sessions

Actual Behavior

Similar to #17, but when running on a scale-out architecture behind an LB that isn't set to have its sessions sticky, every once in a while (depending on architecture and LB config) a request will fail because it will try working against the default workspace rather than the real workspace it's requesting, due to it passing multiple API calls. The first creates/uses the workspace in

self.tf.create_workspace(workspace=workspace)
&
self.tf.set_workspace(workspace=workspace)

but then only runs the apply in

return_code, stdout, stderr = self.tf.apply(no_color=IsFlagged, var=variables, skip_plan=True,

so it's possible that the first create/use API calls land on one container/copy of terraformize and the apply API call lands on another, which will then assume it's working against the default workspace and fail as a result.

same is true for delete requests.

Steps to Reproduce the Problem

  1. Have 2 copies (or more) of terraformize behind an LB
  2. Spam the copies with apply/delete requests
  3. ???
  4. Profit

Add message queue option

Expected/Wanted Behavior

An option to have terraform runs triggered via messages sent to a RabbitMQ queue (alongside the HTTP server, as an optional method) and then have the response sent to another queue (on the same server) would be usable in some cases. This will likely also mean that the user will need to generate the UUID on their own (rather than having terraformize generate it for them) and attach it to the original request, as they won't have any response through which the UUID could be returned upon the original request submission.

Actual Behavior

Only HTTP requests are an option

arm dockerfile

I'm not sure what your use of the arm Dockerfile is, and I don't see any ARCH anywhere.
Given terraform is a statically linked Go binary, the same Dockerfile should work for both ARCHs

add k8s Yaml example

Expected/Wanted Behavior

Feature request - add to the example also a k8s YAML example

Well done

This is not an issue, but I want to actually publically say this is awesome, and well done on creating this!

I plan to use it in the coming months!

Keep it up!
