nienbo / cache-buildkite-plugin
Tarball, Rsync & S3 Cache Kit for Buildkite. Supports Linux, macOS and Windows.
Home Page: https://buildkite.com/plugins
License: MIT License
I've started using this plugin as a proof of concept versus multi-stage Docker builds. So far it's working pretty well. One thing that can be slow is the compression of assets; on our Node project this takes about 3 minutes. I did some digging into how to speed that up, and it seems pigz is the multithreaded equivalent of gzip. I see that gzip is currently hardcoded; it would be great to have the option of using pigz.
Ideally this would be an additional parameter in the plugin configuration.
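For illustration, a sketch of how such a parameter might look in a step (the compress-program key mirrors a comment in a later example on this page; the version tag is illustrative):

plugins:
- gencer/cache#v2.x:
    backend: tarball
    key: "web-deps-cache-{{ checksum 'yarn.lock' }}"
    compress: true
    compress-program: pigz # hypothetical here: multithreaded drop-in for gzip
    paths:
      - node_modules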
Would it be possible to add support for multiple restore keys with priority order? I'm thinking of something along the lines of restore-keys in GitHub's cache Action: https://github.com/actions/cache/blob/main/examples.md#macos-and-ubuntu
In the linked example, in the event of a cache miss on ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}, the runner will instead restore the newest cache with the prefix ${{ runner.os }}-node-. It can help when you're iterating on new dependencies, but the really killer feature it enables is incremental build caching: by backing up, say, node_modules/.cache with a key of node-cache-${GIT_BRANCH}-${COMMIT_SHA}, and then using node-cache-${GIT_BRANCH}- and then node-cache-master- as restore keys, you can make sure that each build restores the closest available cache: the same commit first, then the same branch, then master.
For the filesystem-backed cache backends, I'd think this could be accomplished pretty easily by ls-ing on a glob; for S3 and the future Google Cloud backend, I'm pretty sure those bucket stores both have "list by prefix" APIs. If I were handier with Bash I'd just make a PR for it, but I figured I'd at least ask. Thanks for making this plugin available for all of us!
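A sketch of what the prefix lookup could look like for the S3 backend (bucket name and key prefix are illustrative; aws s3api list-objects-v2 does support --prefix):

# Sketch: find the newest cache object whose key starts with a restore-key prefix.
aws s3api list-objects-v2 \
  --bucket "$BUCKET" \
  --prefix "node-cache-${BUILDKITE_BRANCH}-" \
  --query 'sort_by(Contents, &LastModified)[-1].Key' \
  --output text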
The behavior of tar cf when the destination file already exists is to open the file and overwrite the contents. This means that if another agent is concurrently attempting to read this file, it may fail with an error like tar: Unexpected EOF in archive.
To prevent this, tar cf should use a file created by mktemp and then mv the file to replace the existing archive atomically.
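A minimal sketch of the proposed write path (paths illustrative):

# Sketch: write the archive to a temp file in the same directory, then
# rename it into place; rename is atomic within a filesystem, so readers
# never see a partially written archive.
tmp=$(mktemp /tmp/buildkite-cache/cache.XXXXXX)
tar cf "$tmp" node_modules/
mv "$tmp" /tmp/buildkite-cache/web-deps-cache.tar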
Just thought I'd report this. Hoping it can be solved by installing findutils. Great plugin btw!
find: unrecognized: -ignore_readdir_race
BusyBox v1.28.4 (2018-12-31 18:05:13 UTC) multi-call binary.
Is it possible to define cache dirs based on glob pattern?
I am trying to do it like this:
plugins:
- gencer/cache#v2.3.0:
backend: tarball
key: "web-deps-cache-{{ checksum 'yarn.lock' }}"
tarball:
path: '/tmp/buildkite-cache'
max: 7 # Optional. Removes tarballs older than 7 days.
paths: [ "node_modules/", "packages/*/node_modules/" ]
But apparently it's being interpreted literally:
🔍 Locating cache: web-deps-cache-cb1335083a124ebb7ca0a007ccfb41b95a113c1a
🗑️ Deleting backups older than 7 day(s)...
tar: packages/*/node_modules: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
🚨 Error: The plugin cache post-command hook exited with status 2
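A possible workaround in the meantime (a sketch, not plugin behavior): expand the glob patterns in the shell before tar sees them, since tar treats its path arguments literally.

# Sketch: expand glob patterns into concrete paths before archiving.
shopt -s nullglob # unmatched patterns expand to nothing instead of themselves
paths=(node_modules/ packages/*/node_modules/)
[ "${#paths[@]}" -gt 0 ] && tar cf /tmp/buildkite-cache/web-deps-cache.tar "${paths[@]}"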
Thanks for the plugin! I was wondering if there is a working example of using this plugin alongside the docker-compose plugin. Specifically, is it possible to use the cache plugin when building an image via the docker-compose plugin?
I see that both docker and docker-compose are listed in the README, but I wasn't able to find a full example using both the cache plugin and docker-compose. Does the directory that is being cached need to be mounted via a volume in the docker-compose.yml file?
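In case it helps, a sketch of the volume idea from the question (an assumption about how the pieces fit, not a documented example): the cache hooks run on the host, so a directory populated inside the container has to be mounted back into the checkout for the post-command hook to archive it.

# docker-compose.yml (sketch; service name and paths illustrative)
services:
  app:
    image: node:14
    volumes:
      - ./node_modules:/app/node_modules # cached path must be visible on the host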
Is it possible to do something like this?
key: v1-assets-compile-cache-{{ checksum './{app|backdraft|client|components|test}' }}
For a command that is responsible for populating the cache, it would be helpful to skip the command entirely on an exact (non-restore-key) cache hit. This can help optimize a flow when the command populating the cache takes some time (e.g. in a Docker container, where the pull/startup takes time).
Something like:
steps:
- label: Populate Cache
command: ./populate_cache.sh
plugins:
- gencer/cache#v2.4.11:
...
skip_command_on_cache_hit: true
Sometimes it's useful to prepare a directory for caching by removing some bits that should not be cached. This could be done directly in the command, but it seems better to group it with the caching configuration.
The specific motivating example is the target directory for Rust, which contains both third-party dependencies, which are of the highest value to cache, and incremental compilation output for local crates, which is large and of low value (or sometimes broken) to cache. Inspiration from the https://github.com/Swatinem/rust-cache GitHub Actions plugin.
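A sketch of what such a pre-cache cleanup could run for Rust (the pruned paths follow Cargo's layout; the hook itself is hypothetical, not existing plugin behavior):

# Sketch: drop incremental compilation output before archiving target/.
# Third-party dependency artifacts stay; local-crate incremental state goes.
rm -rf target/debug/incremental target/release/incremental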
Consider the following step:
steps:
- label: Build
plugins:
- gencer/cache#v2.3.6:
backend: tarball
key: "v1-cache-{{ runner.os }}-{{ checksum 'rust/Cargo.toml' }}"
tarball:
path: '/tmp/buildkite-cache-rust'
max: 7
compress: true
paths:
- '.cargo'
- 'target'
- docker#v3.2.0:
image: amd64/rust
command: |
rustup toolchain install stable
rustup default stable
rustup component add rustfmt
export CARGO_HOME="$PWD/.cargo"
export CARGO_TARGET_DIR="$PWD/target"
cargo build
I am getting
~~~ Running plugin cache pre-command hook
$ /usr/local/var/buildkite-agent/plugins/github-com-gencer-cache-buildkite-plugin-v2-3-6/hooks/pre-command
~~~ :bash: Cache Buildkite Plugin v2.3.6
🔍 Looking for v1-cache-macOS-090a069aa27940db9c0f7afcd20b33393f432de3
🚨 Error: The plugin cache pre-command hook exited with status 1
^^^ +++
Any ideas what is causing the status 1?
Hi folks, thanks for the plugin :)
I'm a bit stuck trying to use this with yarn + the Buildkite docker plugin
First build:
- runs yarn install inside the container (node:14)
- uploads node_modules to S3
Second build:
- node_modules already exist on the host, owned by root, not the buildkite-agent user
I have tried:
- the docker plugin's propagate-uid-gid setting, so that we act as the buildkite-agent user in the docker container
- the ~/.cache dir
- the node:14 image's built-in node user
- the node_modules file
My pipeline is pretty straightforward. The second build immediately has trouble untarring the cache because node_modules on the host weren't cleaned up due to permission issues.
cache_plugin_config: &cache_plugin_config
id: gencer-cache-node-modules
backend: s3
key: "v1-cache-{{ runner.os }}-{{ checksum 'yarn.lock' }}"
restore-keys:
- 'v1-cache-{{ id }}-{{ runner.os }}-'
- 'v1-cache-{{ id }}-'
s3:
bucket: "buildkite-node-modules-cache"
paths:
- node_modules
steps:
- name: ':jest: Test'
command:
- "yarn install"
- "yarn run test"
plugins:
- gencer/cache#v2.4.8: *cache_plugin_config
- docker#v3.8.0:
image: "node:14"
There are cache folders generated by NextJS that we can't set a key for. They use external dependencies that may have changed, but we don't have control over that. Could you add the ability to always upload the cache?
Is it possible to use multiple files for the checksum? I am trying to add caching to a repository that has multiple sub-packages, and I would like the cache to update if any of the package-lock files change.
Example:
node-cache: &node-cache
id: node
key: "v1-cache-{{ id }}-{{ runner.os }}-{{ checksum 'package-lock.json' 'src/example-1/package-lock.json' 'src/example-2/package-lock.json' }}"
restore-keys:
- 'v1-cache-{{ id }}-{{ runner.os }}-'
- 'v1-cache-{{ id }}-'
paths:
- node_modules
- src/**/node_modules
The cache seems to still work but says that it is unable to find the files.
Result:
find: ‘package-lock.json src/example-1/package-lock.json src/example-2/package-lock.json ’: No such file or directory
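The error output suggests the whole space-separated list reaches find as a single quoted path. A sketch of how a multi-file checksum can be computed, mirroring the plugin's find/sha1sum pipeline (a hypothetical helper, not the plugin's code):

# Sketch: hash several lockfiles into one stable key component.
# Each file must reach sha1sum as its own argument, hence the array.
files=(package-lock.json src/example-1/package-lock.json src/example-2/package-lock.json)
sha1sum "${files[@]}" | sort -k 2 | sha1sum | awk '{print $1}'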
We want to manage caching of the .eslintcache file in Node.js projects. It's different because it's a file that isn't committed, since it causes merge conflicts.
In the pre-hook, we'd like Buildkite to load the most recent version, and in the post-hook we want to upload the new version (ideally only pushing if the file changed).
We've tried a few different setups, and this is the closest to what we want:
eslint-cache: &eslint-cache
id: eslint
key: "v1-cache-{{ id }}-$BUILDKITE_COMMIT"
backend: s3
restore-keys:
- 'v1-cache-{{ id }}-'
paths:
- .eslintcache
Here is the flow we expect:
1. The current commit is 222. A cache currently exists for commit id 111. The app workdir is /app.
2. (pre-hook) We look for a cache matching the current commit (222). There should be no match, so we use restore-keys to load a previous cache.
3. (pre-hook) We find the previous cache from commit 111, v1-cache-eslint-111.tar, available.
4. (pre-hook) v1-cache-eslint-111.tar is loaded into the workdir as /app/v1-cache-eslint-111.tar and extracted.
5. (command) We run yarn lint. This looks at the extracted .eslintcache file, uses it to speed up the lint command, and slightly updates the .eslintcache to match the current files in the app.
6. (post-hook) We look for a tar matching the current commit (222), /app/v1-cache-eslint-222.tar. The file doesn't exist, so we compress the local .eslintcache into a tar with that name and upload it to S3.
What actually happens: the v1-cache-eslint-111.tar file is pulled but saved locally as /app/v1-cache-eslint-222.tar. This makes step 6 believe we don't have any updates, and it therefore skips the upload.
We can force this to work as we'd like by changing the linting step from yarn lint to yarn lint && rm v1-cache-eslint-*. This forces step 6 to not see any existing cache, and therefore to upload the new version. But it feels a little hacky to us.
The situation seems to be caused by the TAR_FILE var being overwritten in https://github.com/gencer/cache-buildkite-plugin/blob/master/lib/backends/s3.bash#L89, and I don't think it's intentional behavior. What are your thoughts?
Hi,
I'm attempting to use this plugin alongside the docker plugin. We are using docker to build the container, and we then wish to cache the node_modules dir from docker. We're using the yarn.lock checksum to trigger the cache, but yarn.lock is not found, even though it exists if you pull the image locally. Is it a limitation of this plugin that it does not work alongside a docker build?
I've tried having the cache plugin above or below the docker plugin with same results, eg:
- id: "js-builder-container"
label: ":docker: Build JS Builder Container"
timeout_in_minutes: 10
commands:
- "docker buildx create --use --name ci || true"
- "docker buildx build --ssh default --build-arg BUILDKITE_BUILD_NUMBER=${BUILDKITE_BUILD_NUMBER} \
--build-arg COMMIT_HASH=${BUILDKITE_COMMIT} \
--cache-from=type=registry,ref=ci:js-builder-cache-master \
--cache-from=type=registry,ref=ci:js-builder-cache-${BUILDKITE_BRANCH} \
--cache-to=type=registry,ref=ci:js-builder-cache-${BUILDKITE_BRANCH},mode=max \
-t ci:js-builder-${BUILDKITE_BUILD_NUMBER} \
--progress=plain \
--push -f .buildkite/Dockerfile-js-builder ."
artifact_paths:
- "website/js/schema.json"
- "website/js/settings.json"
plugins:
- docker-login#v2.0.1:
username: bob
- gencer/cache#v2.4.10:
id: js-builder
key: "v1-cache-{{ id }}-{{ checksum 'yarn.lock' }}"
compress: true
restore-keys:
- 'v1-cache-{{ id }}-'
backend: s3
paths:
- 'node_modules'
excerpt from debug logs:
++++ TARGET=yarn.lock
+++++ find yarn.lock -type f -exec sha1sum '{}' ';'
+++++ sort -k 2
+++++ sha1sum
+++++ awk '{print $1}'
find: ‘yarn.lock’: No such file or directory
Inside the container:
root@61420a41d2d0:/usr/src/app/js# find yarn.lock -type f -exec sha1sum '{}' ';'
0673457b361e7b98ea37dae3163bb4498c01604d yarn.lock
Not sure what else to try, any ideas?
Many thanks.
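For what it's worth, the checksum appears to run in the plugin's pre-command hook on the agent host, relative to the checkout, not inside the image. If the lockfile is also committed in the repo (the website/js path below is an assumption based on the artifact paths above), pointing the key at the repo-relative path may be enough:

key: "v1-cache-{{ id }}-{{ checksum 'website/js/yarn.lock' }}" # sketch; adjust to wherever yarn.lock lives in the checkout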
Howdy 🤠
Very excited to dig into using this plugin; I believe it solves some very fundamental challenges with Buildkite, especially as projects grow!
We are hoping to cache our node_modules directory between builds, shipping it up to S3. I've had a poke around trying to get this working, but I believe the documentation is lacking some solid real-life examples.
Question(s)
We're using the S3 backend and experiencing an issue where corrupted .tar.gz files are being downloaded and extracted but tar dies because it can't decompress the file.
I think it might be happening if the aws s3 cp command fails part way through (our files are ~400MB, so perhaps the connection sometimes breaks), and then the script proceeds anyway to copy it into the local cache location:
cache-buildkite-plugin/lib/backends/s3.bash
Line 115 in 9df04eb
Do we need something like a || exit 1?
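A sketch of the kind of guard that could go around the copy (variable names illustrative):

# Sketch: bail out of the restore when the download fails or the archive
# doesn't list cleanly, rather than installing a partial file.
if ! aws s3 cp "s3://${BUCKET}/${KEY}.tar.gz" "$TMP_FILE"; then
  echo "cache download failed; skipping restore"; exit 0
fi
tar -tzf "$TMP_FILE" >/dev/null || { echo "corrupt cache archive; skipping restore"; exit 0; }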
I have a situation where it would be useful to cache data, but the key needs to be a checksum of a file that isn't on the filesystem, yet.
An arbitrary example:
- label: "Run Cached Step"
command:
- wget -O remote-config https://example.com
- ./script-that-does-something.sh
plugins:
- gencer/cache#v2.4.10:
key: "{{ id }}-{{ checksum 'remote-config' }}"
paths:
- .cache-dir
Running the wget command as a separate step won't work, as the file system gets cleaned, and even if it weren't, we run multiple agents.
Would it be possible (or make sense) to have a pre-run-command argument, or perhaps a way to invoke the plugin manually?
Examples:
- label: "Run Cached Step"
command:
- ./script-that-does-something.sh
plugins:
- gencer/cache#v2.4.10:
key: "{{ id }}-{{ checksum 'remote-config' }}"
pre-restore-command: wget -O remote-config https://example.com
paths:
- .cache-dir
or perhaps:
- label: "Run Cached Step"
command:
- wget -O remote-config https://example.com
- gencer-cache-restore
- ./script-that-does-something.sh
plugins:
- gencer/cache#v2.4.10:
key: "{{ id }}-{{ checksum 'remote-config' }}"
defer_restore: true
paths:
- .cache-dir
This is probably a very niche/weird request.
Option to be able to read the cache, but disable saving or uploading.
The plugin skips caching if the exit status is not 0 (https://github.com/gencer/cache-buildkite-plugin/blob/v2.4.8/hooks/post-command#L28).
We have steps in our pipeline that may exit with a non-zero code but that we would still like to cache anyway, such as when linting fails; the linting cache is still useful for the next run.
By editing Buildkite's git-clean-flags we can exclude cache files or directories from being removed during checkout. If so, this plugin could check whether the cache files/folders already exist and skip restoring them. This could be done with a key like skip_restore_existing or something.
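The agent-side half of this idea is just configuration; a sketch (git clean's --exclude flag is standard Git, while the skip_restore_existing key above remains the hypothetical part):

# Sketch: keep node_modules through checkout by excluding it from git clean.
export BUILDKITE_GIT_CLEAN_FLAGS="-ffxdq --exclude=node_modules"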
When using S3 with save-cache: true and the local cache hits, we seem to then upload the local cache to S3 in the post-command step. That doesn't seem necessary, as a local cache hit implies the same entry already exists in S3 and was previously downloaded.
For comparison, when the local cache misses and we download from S3, there is no 10-second upload.
In one of my CI jobs, I noticed that the plugin was able to upload an empty/non-existent file/folder to S3.
[2023-06-27T23:17:34Z] [typescript-cache] - 🔍 Locating cache: my_project/some_cacheable_folder
[2023-06-27T23:17:34Z] tar: my_project/some_cacheable_folder: Warning: Cannot stat: No such file or directory
upload: ./cacheable_folder-v1-checksum123.tar.gz to s3://bucket/caches/cacheable_folder-v1-checksum123.tar.gz
This has consequences for CI jobs that have the same cache key because those jobs end up downloading and using an empty cache. For example, this might lead to slower build times (e.g. imagine a bundler like Webpack trying to read from an empty cache which causes bundling time to be slower compared to not using a cache).
Ideally the build would complete but not upload the cache if the cache file is not found. Maybe we can add an option to do this, e.g.:
nienbo/cache#v2.4.14:
id: node-modules-cache
backend: s3
...
paths:
- "my_project/some_cacheable_folder"
upload: "always" # (default) | "skip-if-tarfile-empty" | "skip"
Reading this, I wasn't sure where BK agents would rsync files to. In the example, it specifies a path that seems local to the agent, but I thought rsync would sync files to remote servers?
rsync:
path: '/tmp/buildkite-cache' # Defaults to /tmp with v2.4.10+
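For reference, rsync accepts purely local paths as well as remote ones, which is what the default path above implies; a quick sketch of both forms:

# Local-to-local copy (no network involved):
rsync -a node_modules/ /tmp/buildkite-cache/node_modules/
# A remote destination would use a host: prefix instead (illustrative hostname):
rsync -a node_modules/ cache-host:/var/cache/buildkite/node_modules/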
common:
- assume-ops-services-platform-role: &assume-ops-services-platform-role
role-arn: $CI_ROLE_ARN
- node-cache: &node-cache
id: node
backend: s3
key: "v1-cache-{{ id }}-{{ runner.os }}-{{ checksum 'pnpm-lock.yaml' }}"
restore-keys:
- 'v1-cache-{{ id }}-{{ runner.os }}-'
- 'v1-cache-{{ id }}-'
# compress: true # Optional. Create tar.gz instead of .tar (Compressed) Defaults to `false`.
# compress-program: gzip # Optional. Allows for custom compression backends - i.e. pigz. Defaults to `gzip`.
s3:
bucket: e2-dev-buildkite-cache
# save-cache: true # Optional. Saves the cache on temp folder and keep between builds/jobs on the same machine.
paths:
- node_modules
- .npm-cache
- apps/**/node_modules
steps:
- group: ':open-pull-request: PR Checks'
if: build.branch != "main"
steps:
- label: ':broom: All the things!'
key: 'pr-checks'
plugins:
- *permissions
- aws-assume-role-with-web-identity: *assume-ops-services-platform-role
- nienbo/cache#v2.4.15: *node-cache
commands:
- '.buildkite/scripts/slack-message-user.sh'
- 'apk add --no-cache tar' # Required for cache plugin to work correctly
- 'apk add --update npm'
- 'npm install -g pnpm'
- 'pnpm install --no-prefer-frozen-lockfile'
- 'pnpm exec nx affected -t prisma-generate ${NX_AFFECTED_PR_OPTS}'
- 'pnpm exec nx affected -t lint ${NX_AFFECTED_PR_OPTS}'
- 'pnpm exec nx affected -t test ${NX_AFFECTED_PR_OPTS}'
- 'pnpm exec nx affected -t build ${NX_AFFECTED_PR_OPTS}'
# TODO: Add step to check serverless package build
- wait: ~
- label: 'npm or pnpm work?'
key: 'pr-checks2'
plugins:
- *permissions
- aws-assume-role-with-web-identity: *assume-ops-services-platform-role
- nienbo/cache#v2.4.15: *node-cache
commands:
- 'ls -a'
- 'apk add --update npm'
- 'pnpm list'
In some cases, the local file from a previous save-cache step may be corrupted (unexpected EOFs, etc.).
The plugin should be able to ignore the local file and revert to the default behavior (i.e. download the file and extract it).
The default behavior would also take care of overwriting the corrupted local file for next time.
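A sketch of the validation the restore path could perform (variable names illustrative):

# Sketch: trust the locally saved archive only if it lists cleanly;
# otherwise re-download, which also replaces the corrupt copy.
if [ -f "$LOCAL_TAR" ] && tar -tf "$LOCAL_TAR" >/dev/null 2>&1; then
  tar -xf "$LOCAL_TAR"
else
  aws s3 cp "s3://${BUCKET}/${KEY}.tar" "$LOCAL_TAR"
  tar -xf "$LOCAL_TAR"
fi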
Hi, thanks for continuing the work on this plugin.
It seems that on macOS this command doesn't exist. Any ideas? Thanks.
Attempting to use the tar plugin on a macOS machine fails:
Usage:
  List:    tar -tf <archive-filename>
  Extract: tar -xf <archive-filename>
  Create:  tar -cf <archive-filename> [filenames...]
  Help:    tar --help
🚨 Error: The plugin cache post-command hook exited with status 1
Hi,
In your README you mention that the args argument gets passed to the s3 cp command, but in the code it's also being passed to the aws s3api head-object command.
This breaks a lot of arguments that someone might want to pass (in my case --acl), since the two commands don't share the same options. For example, when I try to set --acl, the cache plugin fails once it tries to check for an existing cache:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

Unknown options: --acl, bucket-owner-full-control
I'm happy to open a PR to fix this if you let me know how you'd prefer to handle it. The PR that added custom args to the head-object command mentions needing them to set the endpoint, but that's already configurable with the endpoint argument, so I think it makes the most sense to just remove the custom args from the head-object call.
Hello,
Thanks for the plugin!
I am interested in using the pr arg to not upload new cache back to S3 in PRs. However, I still want to pull in the existing cache from our protected branch, since that is the baseline.
Could we have a flag to skip the post-command?
Suppose we have N jobs running at the same time using the same cache key and that there is no cache saved yet. The job to finish last will overwrite the cache saved by the ones before.
job_a [0% -----------------------100%] -> cache (will overwrite cache saved from b and c)
job_b [0% ------------100%] -> cache (will overwrite cache saved from c)
job_c [0% -------- 100%] -> cache
It seems like the plugin only checks for the S3 object in restore() but not in cache().
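A sketch of the existence check cache() could perform before uploading (bucket and key variables illustrative):

# Sketch: skip the upload if another job already saved this exact key.
if aws s3api head-object --bucket "$BUCKET" --key "${KEY}.tar" >/dev/null 2>&1; then
  echo "cache ${KEY} already exists; skipping upload"
else
  aws s3 cp "${KEY}.tar" "s3://${BUCKET}/${KEY}.tar"
fi

This narrows the race window rather than eliminating it, which may be acceptable given that all N jobs produce equivalent archives for the same key.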
We have some golang projects that store their cache in docker at /go/pkg/mod. We can't find a good way to cache this folder using the plugin. If we set a path of /go/pkg/mod, the tar command removes the leading / and we end up with no files being cached.
The only possible workaround is to copy the /go/pkg/mod folder into our working directory, but this is a bit janky.
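For reference, GNU tar can archive absolute paths either by changing directory with -C or by keeping leading slashes with -P; a sketch of both:

# Archive relative to /, so paths stay portable inside the tar:
tar cf go-mod-cache.tar -C / go/pkg/mod
# Or keep absolute names outright (extraction then needs -P too):
tar cPf go-mod-cache.tar /go/pkg/mod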
I have a use case whereby I would like to allow a variable in the S3 bucket name. For example:
steps:
- label: Step 1
env:
ENVIRONMENT: dev
plugins:
- nienbo/cache#v2.4.15:
backend: s3
id: python-312
key: v1-{{ id }}-{{ checksum "pyproject.toml" }}
restore-keys:
- v1-{{ id }}
compress: true
s3:
bucket: s3-cache-$ENVIRONMENT
save-cache: true
paths:
- .cache/pip
I have tried using $ENVIRONMENT and $$ENVIRONMENT, as well as {{ env.ENVIRONMENT }}, but none of these options seem to work, unfortunately.
Is this supported through some other means? Or, if not supported, could it be added?
Hi!
First off, love this plugin, thanks for all the hard work on it!
We've run into an issue recently using the new {{ git.branch }} cache key template, as some members of my team use /'s in their git branch names, e.g. fix/some-bug-we-found, feat/something-cool.
Since using {{ git.branch }} means BUILDKITE_BRANCH is added directly into the CACHE_KEY, and the S3 backend then uses ${CACHE_KEY}.tar as a mv target, we're getting errors like this in our builds:
mv: cannot move ‘/tmp/tmp.7G7Oz0l31L’ to ‘v1-cache-e3205e875e8a4ae8ed103242460f2bc0f41be2d8-fix/some-bug-we-found.tar’: No such file or directory
I imagine it's trying to copy into a non-existent sub-directory?
Example pipeline.yml that will produce this issue:
- gencer/cache#v2.4.6:
backend: s3
key: "v1-node-cache-{{ checksum 'yarn.lock' }}-{{ git.branch }}"
s3:
bucket: 'some-bucket-name'
paths:
- 'node_modules'
Happy to raise a PR to fix this given some guidance on how we want to fix it. Thoughts?
Thanks!
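One possible direction (a sketch; the parameter expansion is plain bash, not existing plugin code): sanitize the branch component before it becomes part of the tar filename.

# Sketch: replace path separators so the filename stays a single path component.
SAFE_BRANCH="${BUILDKITE_BRANCH//\//-}" # fix/some-bug-we-found -> fix-some-bug-we-found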
S3 supports several storage classes. While Glacier and Deep Archive absolutely do not fit a caching mechanism, supporting the classes in between would be great, especially ONEZONE_IA and INTELLIGENT_TIERING:
steps:
- plugins:
- gencer/cache#upcoming...:
backend: s3
key: "v1-cache-{{ runner.os }}-{{ checksum 'Podfile.lock' }}"
s3:
profile: "my-s3-profile"
bucket: "my-unique-s3-bucket-name"
compress: true # Create tar.gz instead of .tar (Compressed) Defaults to `false`.
class: 'ONEZONE_IA' # Defaults to `STANDARD` <-- HERE
paths:
- 'Pods/'
- 'Rome/'
or set by environment:
export BUILDKITE_PLUGIN_CACHE_S3_CLASS="ONEZONE_IA"
I'm trying to use a ** glob in this plugin and I get the following error.
Configuration:
plugins:
- gencer/cache#v2.4.10:
id: node_modules
backend: s3
key: "v2-cache-{{ NODE_MODULES_ARTIFACT_HASH }}"
restore-keys:
- "v2-cache-{{ NODE_MODULES_ARTIFACT_HASH }}"
compress: true
s3:
# bucket: "{{ BUILDKITE_ARTIFACT_BUCKET }}"
bucket: "redacted"
paths:
- "node_modules"
- "./workspaces/*/node_modules"
- docker#v3.5.0:
image: "{{ DOCKER_REPOSITORY }}/redacted:{{ PLATFORM_IMAGE_TAG_COMMIT }}"
mount-checkout: true
Error:
[node_modules] - 🔍 Locating source: v2-cache-677a66f676e9203508bc6caf71f45d9e19cfe25a
[node_modules] - 🔍 Locating cache: node_modules ./workspaces/*/node_modules **/node_modules
tar: **/node_modules: Warning: Cannot stat: No such file or directory
Any advice would be appreciated
Is it possible that files lose permissions when cached (tarball with compression)?
I have a run where, after restoring a cached file, it became non-executable: https://buildkite.com/test-811/example/builds/73#3de87972-da8a-487c-ab48-005d369a2ac3
I am trying to understand whether this might come from this plugin or from something else.