gunrock / io

Input (scripts, etc.) and output (scripts, performance results, etc.) for Gunrock and other graph engines

Languages: HTML 97.56%, Jupyter Notebook 1.35%, Python 1.02%, Shell 0.04%, MATLAB 0.02%, Makefile 0.01%

Topics: gunrock, visualization, vega, vega-lite, altair, json, graphs

io's Introduction

Gunrock: CUDA/C++ GPU Graph Analytics


Gunrock1 is a CUDA library for graph processing designed specifically for the GPU. It uses a high-level, bulk-synchronous/asynchronous, data-centric abstraction focused on operations on vertex or edge frontiers. Gunrock achieves a balance between performance and expressiveness by coupling high-performance GPU computing primitives and optimization strategies, particularly fine-grained load balancing, with a high-level programming model that lets programmers quickly develop new graph primitives that scale from one to many GPUs on a node, with small code size and minimal GPU programming knowledge.

Branch  | Purpose                                                                                          | Version  | Status
main    | Default branch, ported from gunrock/essentials; serves as the official release branch.          | ≥ 2.x.x  | Active
develop | Development feature branch, ported from gunrock/essentials.                                      | ≥ 2.x.x  | Active
master  | Previous release branch for the gunrock/gunrock 1.x.x interface; preserves all commit history.  | ≤ 1.x.x  | Deprecated
dev     | Previous development branch for gunrock/gunrock; all changes now merged into master.            | ≤ 1.x.x  | Deprecated

Quick Start Guide

Before building Gunrock, make sure you have the CUDA Toolkit2 installed on your system. Other external dependencies, such as NVIDIA/thrust, NVIDIA/cub, etc., are automatically fetched using CMake.

git clone https://github.com/gunrock/gunrock.git
cd gunrock
mkdir build && cd build
cmake .. 
make sssp # or for all algorithms, use: make -j$(nproc)
bin/sssp ../datasets/chesapeake/chesapeake.mtx

Implementing Graph Algorithms

For a detailed explanation, please see the full documentation. The following example shows the simple APIs of Gunrock's data-centric, bulk-synchronous programming model by implementing breadth-first search on the GPU. This example skips the setup phase of creating the problem_t and enactor_t structs and jumps straight into the actual algorithm.

We first prepare our frontier with the initial source vertex to begin push-based BFS traversal. A simple f->push_back(source) places the initial vertex we will use for our first iteration.

void prepare_frontier(frontier_t* f,
                      gcuda::multi_context_t& context) override {
  auto P = this->get_problem();
  f->push_back(P->param.single_source);
}

We then begin our iterative loop, which iterates until a convergence condition has been met. If no condition has been specified, the loop converges when the frontier is empty.

void loop(gcuda::multi_context_t& context) override {
  auto E = this->get_enactor();   // Pointer to enactor interface.
  auto P = this->get_problem();   // Pointer to problem (data) interface.
  auto G = P->get_graph();        // Graph that we are processing.

  auto single_source = P->param.single_source;  // Initial source node.
  auto distances = P->result.distances;         // Distances array for BFS.
  auto visited = P->visited.data().get();       // Visited map.
  auto iteration = this->iteration;             // Iteration we are on.

  // Following lambda expression is applied on every source,
  // neighbor, edge, weight tuple during the traversal.
  // Our intent here is to find and update the minimum distance when found.
  // And return which neighbor goes in the output frontier after traversal.
  auto search = [=] __host__ __device__(
                      vertex_t const& source,    // ... source
                      vertex_t const& neighbor,  // neighbor
                      edge_t const& edge,        // edge
                      weight_t const& weight     // weight (tuple).
                      ) -> bool {
    auto old_distance =
      math::atomic::min(&distances[neighbor], iteration + 1);
    return (iteration + 1 < old_distance);
  };

  // Execute advance operator on the search lambda expression.
  // Uses load_balance_t::block_mapped algorithm (try others for perf. tuning.)
  operators::advance::execute<operators::load_balance_t::block_mapped>(
    G, E, search, context);
}

The complete implementation lives in include/gunrock/algorithms/bfs.hxx.

How to Cite Gunrock & Essentials

Thank you for citing our work.

@article{Wang:2017:GGG,
  author =	 {Yangzihao Wang and Yuechao Pan and Andrew Davidson
                  and Yuduo Wu and Carl Yang and Leyuan Wang and
                  Muhammad Osama and Chenshan Yuan and Weitang Liu and
                  Andy T. Riffel and John D. Owens},
  title =	 {{G}unrock: {GPU} Graph Analytics},
  journal =	 {ACM Transactions on Parallel Computing},
  year =	 2017,
  volume =	 4,
  number =	 1,
  month =	 aug,
  pages =	 {3:1--3:49},
  doi =		 {10.1145/3108140},
  ee =		 {http://arxiv.org/abs/1701.01170},
  acmauthorize = {https://dl.acm.org/doi/10.1145/3108140?cid=81100458295},
  url =		 {http://escholarship.org/uc/item/9gj6r1dj},
  code =	 {https://github.com/gunrock/gunrock},
  ucdcite =	 {a115},
}
@InProceedings{Osama:2022:EOP,
  author =	 {Muhammad Osama and Serban D. Porumbescu and John D. Owens},
  title =	 {Essentials of Parallel Graph Analytics},
  booktitle =	 {Proceedings of the Workshop on Graphs,
                  Architectures, Programming, and Learning},
  year =	 2022,
  series =	 {GrAPL 2022},
  month =	 may,
  pages =	 {314--317},
  doi =		 {10.1109/IPDPSW55747.2022.00061},
  url =          {https://escholarship.org/uc/item/2p19z28q},
}

Copyright & License

Gunrock is copyright The Regents of the University of California. The library, examples, and all source code are released under Apache 2.0.

Footnotes

  1. This repository has been moved from https://github.com/gunrock/essentials, and the previous history is preserved with tags and under the master branch. Read more about gunrock and essentials in our vision paper: Essentials of Parallel Graph Analytics.

  2. CUDA v11.5.1 or higher is recommended for its support of stream-ordered memory allocators.

io's People

Contributors

agalup, crozhon, ffarhour, huanzhang12, jdwapman, jowens, knavely, laurawly, neoblizz, samtruong, sgpyc, shariiiyy


io's Issues

MapGraph output in 'topc' -- only 2 of the runs have 'elapsed' at all, making it hard to get meaningful results

$ pwd;ls;grep elapsed *
/Users/jowens/Documents/working/gunrock-io/MapGraph-output/topc
BFS-hollywood-2009.json    PageRank-hollywood-2009.json
BFS-indochina-2004.json    PageRank-indochina-2004.json
BFS-rgg_n24_0.000548.json  PageRank-rgg_n24_0.000548.json
BFS-rmat_n22_e64.json	   PageRank-rmat_n22_e64.json
BFS-rmat_n23_e32.json	   PageRank-rmat_n23_e32.json
BFS-rmat_n24_e16.json	   PageRank-rmat_n24_e16.json
BFS-road_usa.json	   PageRank-road_usa.json
BFS-soc-LiveJournal1.json  PageRank-soc-LiveJournal1.json
BFS-soc-orkut.json	   PageRank-soc-orkut.json
CC-hollywood-2009.json	   SSSP-hollywood-2009.json
CC-indochina-2004.json	   SSSP-indochina-2004.json
CC-rgg_n24_0.000548.json   SSSP-rgg_n24_0.000548.json
CC-rmat_n22_e64.json	   SSSP-rmat_n22_e64.json
CC-rmat_n23_e32.json	   SSSP-rmat_n23_e32.json
CC-rmat_n24_e16.json	   SSSP-rmat_n24_e16.json
CC-road_usa.json	   SSSP-road_usa.json
CC-soc-LiveJournal1.json   SSSP-soc-LiveJournal1.json
CC-soc-orkut.json	   SSSP-soc-orkut.json
BFS-hollywood-2009.json:    "elapsed": 0.0,
BFS-road_usa.json:    "elapsed": 1000.0,
BFS-soc-LiveJournal1.json:    "elapsed": 0.0,
CC-hollywood-2009.json:    "elapsed": 0.0,
PageRank-hollywood-2009.json:    "elapsed": 0.0,
PageRank-road_usa.json:    "elapsed": 0.0,
PageRank-soc-LiveJournal1.json:    "elapsed": 0.0,
SSSP-hollywood-2009.json:    "elapsed": 1000.0,

Gunrock BFS on kron21

Our runs below result in substantially different performance numbers than those reported by the Groute authors.

Single-GPU runs:

Gunrock BFS on soc-LiveJournal1

We believe the following measurements are consistent with our concern that the Groute paper's characterization of Gunrock's performance on BFS on soc-LiveJournal1 was not representative of its actual performance.

Non-idempotent, not direction optimized:

We note the following runs we did on BFS with soc-LiveJournal1, all "on original input" (MatrixMarket):

Idempotent, not direction optimized:

Yuechao notes that he fixed a correctness bug in idempotence mode on 4 October 2016 (gunrock/gunrock@23490d3). For our testing in idempotence mode only, we measured Gunrock versions both immediately before and immediately after this bug was fixed ("the performance differences were very small"). We believe running on any July-October Gunrock build would give similar performance results. Anyway, for idempotence:

DOBFS

Multi-GPU DOBFS was added to the BFS primitive, and single-GPU direction_optimizing_bfs was removed as of 26 April 2016 (gunrock/gunrock@1fbbc85). Gunrock's DOBFS behaves differently from Groute's (or anyone else's) BFS, which makes performance differences challenging to explain.

Allow multiple 'inputpath' settings

Sometimes one wants to be able to make comparisons between different engines. Having multiple 'inputpath' settings would allow this: 'inputpaths' could be a list passed to the program.
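A minimal sketch (not the script's actual interface) of how repeated --inputpath flags could be collected with argparse's append action:

import argparse

# Sketch only: argparse's 'append' action gathers repeated flags into a list.
parser = argparse.ArgumentParser()
parser.add_argument("--inputpath", action="append", default=[],
                    help="may be given once per engine's output directory")

args = parser.parse_args(["--inputpath", "gunrock-output",
                          "--inputpath", "MapGraph-output"])
print(args.inputpath)  # ['gunrock-output', 'MapGraph-output']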

Generating JSON results gunrock/io naming conventions and tags

  • use better tags: why the test was conducted should go under tags.

  • include the branch somewhere in the JSON: master, dev; which branch did it run on?

  • allow multiple tags: --tag=x --tag=y could output an [x, y] vector in tags.

  • naming convention for directories (a small name-builder sketch follows this list), either

    [# of apps]Apps.[operating_system]_[gpu]x[# of gpus]_[branch]
    e.g., 5Apps.ubuntu16.04_v100x1_master

    or

    [app].[operating_system]_[gpu]x[# of gpus]_[branch]
    e.g., BC.ubuntu14.04_K40cx1
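For the first form, a tiny hypothetical helper (names and signature are illustrative, not existing code):

# Sketch of the first convention above.
def run_dirname(apps, operating_system, gpu, num_gpus, branch):
    return f"{len(apps)}Apps.{operating_system}_{gpu}x{num_gpus}_{branch}"

print(run_dirname(["BC", "BFS", "CC", "PR", "SSSP"],
                  "ubuntu16.04", "v100", 1, "master"))
# 5Apps.ubuntu16.04_v100x1_master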

Change create_graph into a modularized script

So have it as a standalone file that can also be imported and run from other Python scripts.

It would need a call that takes a dictionary as an argument, such that the dictionary contains all the equivalent command-line arguments that would normally be passed to the script.

Purpose: to be able to build a bunch of graphs at the same time.
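A minimal sketch of what that could look like (option names and defaults are illustrative, not the script's actual arguments):

import argparse

DEFAULTS = {"inputpath": ".", "outputtype": "svg", "conds": {}}

def create_graph(opts):
    """Build one graph from a dict holding the equivalent of the CLI args.

    Sketch only: the body stands in for the existing load-JSONs /
    build-spec / write-output logic.
    """
    settings = dict(DEFAULTS, **opts)
    return settings

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--inputpath", default=".")
    parser.add_argument("--outputtype", default="svg")
    create_graph(vars(parser.parse_args()))  # the CLI becomes a thin wrapper

if __name__ == "__main__":
    main()

Other scripts could then do from create_graph import create_graph and call it in a loop to build a bunch of graphs at once.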

SSSP road_usa on v0.4, help me out?

External request from an author who wants to compare against SSSP, road_usa.

We found 2 JSONs:
https://github.com/gunrock/io/blob/master/gunrock-output/topc/CentOS7.2_XXx1_topc_arch/SSSP_road_usa_Thu%20Dec%20%201%20102801%202016.json
https://github.com/gunrock/io/blob/master/gunrock-output/topc/SSSP_road_usa_Fri%20Nov%2018%20004204%202016.json

It looks like the exact same GitHub version (and the same command line), but the timings are different enough to make me scratch my (big bald) head. K40c vs. K40m (so effectively the same GPU). One is consistently 9.5 seconds, one is 11 seconds. The faster one (hsw216) has

    "edges_queued" : 27709447,
    "edges_redundance" : 0,

and the slower one (luigi) has

    "edges_queued" : 620523505,
    "edges_redundance" : 975.26997178099407,

Any wisdom as to what's going on here?

how to derive the right alpha and beta from our results

Background: We can exhaustively run dobfs with different values of alpha/beta. How do we pick the right alpha/beta given a particular graph? I am not good at this, so I asked @hafen a few months back, who gave me great guidance, then I didn't do anything with it, so I'm posting it here.

@ffarhour I'm assigning it to you, but if you don't get to it this spring, no worries. I just need to write it down here.

Thanks @hafen!


For the simple question, if the simulation is deterministic (same rate for same alpha and beta every time) and you don’t expect any variability in the results, the obvious thing to do of course is to choose the pair of parameters that gives the minimum metric (geometric mean sounds good).

However, if there is variability or if you want to get some insight into how different parameter settings are affecting the result, I'd recommend making some plots. For example, I'd plot rate vs. alpha faceted on beta, with points colored by data set, giving 19 panels. If you can squeeze all 19 panels into one row and still see what is going on, that would be good. When examining a single panel, this will help you see, for a given beta, whether the minimum occurs at the edges or within the range of alpha values, etc. You can also see how much variability there is across data sets within each panel. When examining across panels, you can see how the rate behaves in general for different beta. You can make the same plots with the roles of alpha and beta reversed.

If you see enough variability in the plots, you may determine that simply choosing the minimum metric might not be a stable approach (outliers could be chosen as the minimum when the true minimum appears to be somewhere else upon visual inspection, etc.). You can use the plots to help determine whether there should be some smoothing prior to computing the metric. For example, if for a given value of beta, the rate looks like a smooth function of alpha, but the data exhibits a smooth curve plus noise, you can smooth out the noise and use the resulting smooth curve as your data. This could be per data set or across all data sets depending on what the plot looks like.

Hopefully that makes some sense and is going after what you were asking.
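As a concrete starting point for those plots, here is a minimal sketch, assuming the sweep results are collected in a long-format table; the file and column names (dataset, alpha, beta, rate) are illustrative:

import pandas as pd
import seaborn as sns
from scipy.stats import gmean

# One row per (dataset, alpha, beta) run.
df = pd.read_json("dobfs_sweep.json")  # columns: dataset, alpha, beta, rate

# Rate vs. alpha, one panel per beta, points colored by dataset.
g = sns.relplot(data=df, x="alpha", y="rate",
                col="beta", hue="dataset", height=2.5)
g.savefig("rate_vs_alpha_by_beta.png")

# Per the "minimum metric" suggestion: geometric-mean rate per (alpha, beta).
best = df.groupby(["alpha", "beta"])["rate"].apply(gmean).idxmin()
print(best)  # (alpha, beta) pair minimizing the geometric-mean metric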

For the more complex thing, if I understand correctly, you’d like to, for a given set of characteristics you know about a data set, be able to choose the appropriate alpha and beta without running the simulation. In this case, you can use the 10 data sets you have to build a model. The inputs will be the 10 sets of vertex and edge counts, and the outputs will be the results based on whatever procedure you have followed above to find the best alpha and beta. And you want to train a model on these 10 observations that predicts alpha and beta for a new vertex and edge count. This too will be easiest to approach with some simple plots. For example, plots of vertex count vs. alpha, vertex count vs. beta, edge count vs. alpha, edge count vs. beta. This will help you start to see if there is a clear relationship between pairs of the inputs and outputs and whether it appears that alpha and beta might be modeled independently, and will help determine what kind of model might be appropriate (do relationships look linear?, etc.). At the simplest end of the spectrum, you might find that you can fit a simple model independently for alpha and beta. But it is also possible to model alpha and beta jointly with a multiple dependent variable model. A big issue will be whether the model you fit will be valid when extrapolated beyond the inputs the model has been trained on. You may need more data to train on - perhaps spanning a grid of edge and vertex count values you are interested in.

That’s a bit long winded. Hard to tell you what the right thing is to do without seeing the data, but these are some guidelines.
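To make the model-building half concrete, a toy sketch of the simplest version described above: independent linear fits of alpha and beta against log vertex and edge counts. Every number below is made up for illustration.

import numpy as np

V = np.log([1e6, 4e6, 2e7, 5e7])        # vertex counts per dataset (made up)
E = np.log([3e7, 6e7, 5e8, 1e9])        # edge counts per dataset (made up)
X = np.column_stack([np.ones(4), V, E])  # design matrix with intercept
best_alpha = np.array([12.0, 15.0, 20.0, 22.0])  # illustrative sweep winners
best_beta = np.array([18.0, 24.0, 30.0, 33.0])

# Independent least-squares fits for alpha and beta.
coef_a, *_ = np.linalg.lstsq(X, best_alpha, rcond=None)
coef_b, *_ = np.linalg.lstsq(X, best_beta, rcond=None)

x_new = np.array([1.0, np.log(8e6), np.log(1.2e8)])  # a new graph
print(x_new @ coef_a, x_new @ coef_b)  # predicted alpha, beta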

Parser: Galois library

So, I am not sure how I can parse the last line in the following output to extract the TotalTime.

STATTYPE,LOOP,CATEGORY,n,sum,T0,T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15
STAT,(NULL),Conflicts,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
STAT,(NULL),Iterations,16,1069173,67912,70279,69517,67931,64756,61527,63911,63153,72772,71319,68362,72007,64750,64354,64194,62429
STAT,(NULL),LoopTime,16,41,41,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
STAT,(NULL),MeminfoPost,16,144,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9
STAT,(NULL),MeminfoPre,16,144,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9
STAT,(NULL),Threads,16,16,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
STAT,(NULL),Time,16,42,42,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
STAT,(NULL),TotalTime,16,261,261,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0

GPUEngineOutputParserGalois:

class GPUEngineOutputParserGalois(GPUEngineOutputParserBase):
        def __init__(self, input_path):
                super(GPUEngineOutputParserGalois, self).__init__(input_path)
                self.regex_array = [{   "regex": re.compile("Read ([0-9]+) nodes"),
                                        "keys" : [{ "name" : "vertices_visited", "type" : "int"},
                                                 ]
                                    },
                                    {   "regex": re.compile("STAT,(NULL),TotalTime,([0-9]+),([0-9]+)"),
                                        "keys" : [{ "name" : "iterations", "type" : "int"},
                                                  { "name" : "elapsed", "type" : "int"}]
                                    },
                                    {   "regex": re.compile("INFO: Hostname (.+)"),
                                        "keys" : [{ "name" : "sysinfo", "type" : "dict(nodename={})"}]
                                    }
                ]
                self.engine = "Galois"

@jowens @huanzhang12 do you have any suggestion on how to improve it? Currently, this is all I get as output:

{
    "algorithm": "BFS",
    "dataset": "hollywood-2009",
    "engine": "Galois",
    "rawfile": "/data/Compare/TOPC-results/Galois/bfs/hollywood-2009.txt",
    "sysinfo": {
        "nodename": "mario"
    },
    "time": "Sun Nov 20 09:03:46 2016\n",
    "vertices_visited": 1139905
}
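A likely culprit: in the TotalTime entry, the parentheses around NULL in re.compile("STAT,(NULL),TotalTime,...") form a regex capture group, so the pattern looks for the literal text STAT,NULL,TotalTime and never matches the log line; the extra group would also misalign the two configured keys. A minimal check with the parentheses escaped (a sketch, not a patch against the repo):

import re

line = "STAT,(NULL),TotalTime,16,261,261,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0"

# Escaping the literal parentheses makes the pattern match and leaves
# exactly two capture groups to line up with the two configured keys.
pattern = re.compile(r"STAT,\(NULL\),TotalTime,([0-9]+),([0-9]+)")
print(pattern.search(line).groups())  # ('16', '261')

Note that the first capture is the 'n' column (16, the thread count per the header line) rather than an iteration count, so the "iterations" key name may be misleading.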

needs an updated readme

The current readme.md explains how to write Python code that does the things we want. It needs a new section directed at people who only want to use the code and never want to look inside it. You might have more than one .md file and link them together; that's fine. Here are the kinds of things we want it to cover:

  1. Overall flow: simulation -> JSON stats -> vega-lite -> graphs -> {prototype, file, Gunrock docs}. How do we implement each of those arrows? Each of these (except the first) is presumably mapped to a python file. (You should note in the docs which python files you run and which ones are support files; maybe the latter should be in a src subdirectory.) Also, what are the config files, how should they be named, what is in them?
  2. From graphs, we probably want three different arrows.
    • One is to prototype graphs: I'm designing a graph, I presumably have some JSON stats files, I want to make changes to graph parameters, look at the output, change something, look again. How does a user do this?
    • Two is to have a file output for a graph (I think you support svg and png).
    • Three is to push graphs to the Gunrock docs. I'm presuming we want a standalone python file that does this. I don't think we have one. Certainly we want a description of how it should be done.
  3. Testing out different pieces of this pipeline. For instance, you have test_json2vega.py to go from JSON-stats to vega-lite (vega?) JSON. How do you call this? What arguments? What are the input files, what is the output, how do you look at the output?

You can then reuse most of what you already have in a more detailed section that provides pointers on how to use/customize/develop the pieces you describe above.

Input JSON conflicts

Sometimes input files are the same but come from different dates, or they have different-sized datasets; either way, they lead to conflicts in the data being analyzed.

We need to filter JSONs that are repeated. This could be done with a command-line argument such as:
--resolveConflicts={string}
where string might be earliest, latest, smallest, or largest.
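A hedged sketch of the flag and a dedup pass (field names such as 'algorithm', 'dataset', and 'time' are illustrative, and 'time' is assumed to be sortable):

import argparse

# The policy names come straight from this issue.
parser = argparse.ArgumentParser()
parser.add_argument("--resolveConflicts",
                    choices=["earliest", "latest", "smallest", "largest"],
                    help="how to pick one JSON when duplicates conflict")

def resolve(records, policy):
    """Keep one record per (algorithm, dataset) key; sketch for two policies."""
    keep = {}
    for r in records:
        key = (r["algorithm"], r["dataset"])
        if key not in keep:
            keep[key] = r
        elif policy == "latest" and r["time"] > keep[key]["time"]:
            keep[key] = r
        elif policy == "earliest" and r["time"] < keep[key]["time"]:
            keep[key] = r
    return list(keep.values())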

how to get html tables out

Let's say I want to output an HTML table instead of a graph. Much of the machinery here can be reused; basically I just want to dump the pandas dataframe. But it's a little kludgey to do this.

For instance, I could have an output type case entry of table() (as an alternative to vegajson(), html(), etc.), and it seems like using --outputtype makes sense here from create_graph. The graph data structure should be available in the output type case entry, and I could just call print(graph.to_html()). But graph is only a dataframe in the VegaGraphBase base class; for the subclasses (Bar and Scatter), it instead returns a JSON object. I'd like to get the dataframe back.
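A minimal sketch of what a table() output type could boil down to, assuming the dataframe is still reachable (for example, stashed on the base class before the Bar/Scatter subclasses convert it to a vega JSON object); pandas' to_html does the work:

import pandas as pd

# Sketch only: names are illustrative, not the existing output-type API.
def output_table(df: pd.DataFrame, path: str) -> None:
    with open(path, "w") as f:
        f.write(df.to_html(index=False))

Keeping the dataframe on VegaGraphBase before the subclass conversion would give an --outputtype=table case the handle it needs.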

CMD Args: --conds conditions problem

If a condition in the --conds dictionary does not match a JSON, it needs to be ignored.

Once done, document the fact that a --conds condition is ignored if it does not match the JSON.
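A sketch of the requested behavior (not the script's code), under one reading of "does not match": a condition is skipped when its key is absent from the JSON record under test.

def matches(record, conds):
    """True if record satisfies every condition it can be tested against."""
    return all(record[k] == v for k, v in conds.items() if k in record)

# The 'gpu' condition is ignored here because the record has no 'gpu' field.
print(matches({"algorithm": "BFS", "dataset": "road_usa"},
              {"algorithm": "BFS", "gpu": "K40c"}))  # True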

pdf output in script

Can test_json2vega have a PDF (or SVG) output option? Again, all the complexity can/should be hidden under the hood, but that's a nice feature.

Methodology for our comparisons vs. Groute

As we noted in our email communications, we think the fairest comparisons to make between two graph frameworks are those that offer the best available performance for each at the time the comparisons were made. For Gunrock today, that would be the 0.4 release (10 November 2016). We recognize this version was not available at the time the Groute paper was submitted (although it would have been appropriate for camera-ready), so we ran comparisons against a Gunrock version dated July 11, 2016 (6eb6db5d09620701bf127c5acb13143f4d8de394). Yuechao notes that to build this version, we "need to comment out the lp related includes in tests/pr/test_pr.cu, line 33 to line 35, otherwise the build will fail".

In our group, we generally run primitives multiple times within a single binary launch and report the average time (Graph500 does this, for instance). We think the most important aspect is simply to run it more than once to mitigate any startup effects. In our comparisons, we use --iteration-num=32.

By default, we use a source vertex of 0, and depending on the test, we have used both 0-source and random-source in our publications. Getting good performance on a randomized source is harder, but avoids overtuning. In our comparisons, we use source 0, as Groute does.

Parser: CuSha's output has changed in the updated version

In the updated CuSha (which is a lot cleaner and is what we used for the new comparison tests), the generated output differs from what it was before. I have added the new txt2json code below for the parser; the commented segments are the old patterns. Would you like me to push this change, or do we want to stay with the old one? @jowens

# Parser class for parsing CuSha output
class GPUEngineOutputParserCuSha(GPUEngineOutputParserBase):
        def __init__(self, input_path):
                super(GPUEngineOutputParserCuSha, self).__init__(input_path)
                self.regex_array = [{   "regex": re.compile(r"Input graph collected with ([0-9]+) vertices and ([0-9]+) edges."),
                                        # re.compile(r"Graph is populated with ([0-9]+) vertices and ([0-9]+) edges."),
                                        "keys" : [{ "name" : "vertices_visited", "type" : "int"},
                                                  { "name" : "edges_visited", "type" : "int"},
                                                 ]
                                    },
                                    {   "regex": re.compile(r"Processing finished in (\d+(?:\.\d+)?) \(ms\)."),
                                         # re.compile(r"Processing finished in : (\d+(?:\.\d+)?) \(ms\)"),
                                        "keys" : [{ "name" : "elapsed", "type" : "float"}]
                                    }
                ]
                self.engine = "CuSha"

CMD Args: -h (help command) commands listed under "optional"

A default behavior of Python's argparse module: when arguments have a flag (a double dash, or a single dash followed by a letter or word, e.g. -o or --output), they are assumed to be "optional" arguments. Even when they are forced to be required, they are still listed under "optional" arguments in the help output.
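A well-known argparse workaround is to place required flagged arguments in their own argument group, so -h stops listing them under "optional arguments"; a minimal sketch:

import argparse

parser = argparse.ArgumentParser(add_help=False)
required = parser.add_argument_group("required arguments")
required.add_argument("-o", "--output", required=True, help="output file")
optional = parser.add_argument_group("optional arguments")
optional.add_argument("-h", "--help", action="help",
                      help="show this help message and exit")
parser.print_help()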

Plot before / after cuda arch changes

I uploaded data for before and after the change I made to our launch-bounds calculations. Could they be plotted to see whether the change made any difference?

Before we were always assuming the compute capability was 300. This collection of JSON compares before and after the commits fixing that issue. I tagged them as

Gunrock-9c8102fa-before-use-cuda-arch
Gunrock-6ddbd33-after-use-cuda-arch

I added an INFO file with this info as well. It's all under gunrock-output/cuda_arch_comparison separated by problem and then dataset.

X and Y axis continuous data display

For some reason the x-axis was not displayed correctly when presented with continuous floating-point data.
Does this still occur for the x-axis?
Does the same happen for the y-axis?

PR only: Comparing push vs. pull

I would like to see a comparison of search depth (this may be the same), runtime/MTEPS, edges queued, and nodes queued.

Allow ability to specify axes scales

Implement --yscale and --xscale command-line arguments that add a scale such as log or linear to the vega-lite spec. This can be achieved using the mechanism from issue #21: merging two JSONs.

The --xscale and --yscale flags would generate a "wedge.json", which could then be merged with the main vega-lite JSON generated from the inputs.
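A sketch of how the wedge could be generated and merged (the helper names and the deep-merge policy are assumptions; the scale fragment itself is standard vega-lite):

import json

def scale_wedge(xscale=None, yscale=None):
    """Fragment that --xscale/--yscale would contribute."""
    wedge = {"encoding": {}}
    if xscale:
        wedge["encoding"]["x"] = {"scale": {"type": xscale}}
    if yscale:
        wedge["encoding"]["y"] = {"scale": {"type": yscale}}
    return wedge

def deep_merge(base, overlay):
    """Recursively merge overlay into base; overlay wins on conflicts."""
    for k, v in overlay.items():
        if isinstance(v, dict) and isinstance(base.get(k), dict):
            deep_merge(base[k], v)
        else:
            base[k] = v
    return base

spec = {"mark": "point",
        "encoding": {"x": {"field": "alpha", "type": "quantitative"},
                     "y": {"field": "rate", "type": "quantitative"}}}
print(json.dumps(deep_merge(spec, scale_wedge(yscale="log")), indent=2))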

CMD Args: --algorithm should not be required

The --algorithm argument should NOT be required; the algorithm input belongs in the 'conds' dictionary instead.

For instance, if I want to see MTEPS for every algorithm on one particular dataset, I should be able to specify just the dataset in the --conds dictionary and have the script work as expected.
