pmodels / oshmpi

OpenSHMEM Implementation on MPI

Home Page: https://pmodels.github.io/oshmpi-www/

License: Other


Introduction

The OSHMPI project provides an OpenSHMEM 1.4 implementation on top of MPI-3. MPI is the standard communication library for HPC parallel programming. OSHMPI provides a lightweight OpenSHMEM implementation on top of the portable MPI-3 interface and thus can utilize various high-performance MPI libraries on HPC systems.

Getting Started

1. Installation

The simplest way to configure OSHMPI is with the following command line:

./configure --prefix=/your/oshmpi/installation/dir CC=/path/to/mpicc CXX=/path/to/mpic++

Once you are done configuring, you can build the source and install it using:

make && make install

2. Test

OSHMPI contains some simple test programs under the tests/ directory. To check whether OSHMPI was built correctly:

cd tests/ && make check

For comprehensive testing, we recommend external test suites such as the SOS test suite.

3. Compile an OpenSHMEM program with OSHMPI

OSHMPI supports C and C++ programs. For example, to compile the C program test.c:

/your/oshmpi/installation/dir/bin/oshcc -o test test.c

To compile the C++ program test.cpp:

/your/oshmpi/installation/dir/bin/oshc++ -o test test.cpp

4. Execute an OSHMPI program

OSHMPI relies on the MPI library's startup command, mpiexec or mpirun, to execute programs. For example, to run the binary test:

/path/to/mpiexec -np 2 ./test

For more information about the MPI startup command, please check the MPI library's documentation.
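
For a quick end-to-end check, the following minimal OpenSHMEM program (a sketch; any valid OpenSHMEM 1.4 program works) can be compiled with oshcc and launched with mpiexec as shown above:

/* hello.c: minimal OpenSHMEM program */
#include <stdio.h>
#include <shmem.h>

int main(void)
{
    shmem_init();                   /* initialize the OpenSHMEM runtime */
    int me = shmem_my_pe();         /* this PE's number */
    int npes = shmem_n_pes();       /* total number of PEs */
    printf("Hello from PE %d of %d\n", me, npes);
    shmem_finalize();               /* shut down the runtime */
    return 0;
}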

Configure Options

Below are some commonly used configure options. For a detailed explanation, please check ./configure --help.

  • Default configuration
./configure --prefix=/your/oshmpi/installation/dir CC=/path/to/mpicc CXX=/path/to/mpic++
  • With fast build, no multi-threading, and all active messages disabled.
./configure --prefix=/your/oshmpi/installation/dir CC=/path/to/mpicc CXX=/path/to/mpic++ \
    --enable-fast --enable-threads=single --enable-amo=direct --enable-rma=direct
  • With MPICH/OFI version 3.4b1 or newer, enable the dynamic window enhancement for a scalable memory space implementation.
./configure --prefix=/your/oshmpi/installation/dir CC=/path/to/mpicc CXX=/path/to/mpic++ \
    --enable-win-type=dynamic_win
  • With CUDA GPU memory space support
./configure --prefix=/your/oshmpi/installation/dir CC=/path/to/mpicc CXX=/path/to/mpic++ \
    --with-cuda=/path/to/cuda/installation

Examples

Examples using the memory space extension and the CUDA GPU memory kind can be found in the built-in test suite:

  tests/space.c
  tests/space_int_amo.c
  tests/space_int_put.c
  tests/space_ctx_int_put.c

Environment Variables

OpenSHMEM Standard Environment Variables

  • SHMEM_SYMMETRIC_SIZE (default 128 MiB) Number of bytes to allocate for the symmetric heap. The value is either a plain byte count or a number scaled with one of the following suffixes (see the example after this list):

    • "K" or "k" for kibibytes (1024 bytes)
    • "M" or "m" for mebibytes (1024 KiB)
    • "G" or "g" for gibibytes (1024 MiB)
    • "T" or "t" for tebibytes (1024 GiB)
  • SHMEM_DEBUG (0|1, default 0) Enables debugging messages from the OpenSHMEM runtime. It has no effect when OSHMPI is configured with --enable-fast.

  • SHMEM_VERSION (0|1, default 0) Prints the OpenSHMEM library version at start_pes(), shmem_init(), or shmem_init_thread().

  • SHMEM_INFO (0|1, default 0) Prints the list of OpenSHMEM environment variables to stdout.
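
For example, to run with a 1 GiB symmetric heap and print the variable list:

SHMEM_SYMMETRIC_SIZE=1G SHMEM_INFO=1 /path/to/mpiexec -np 2 ./test

(Depending on the launcher, you may need its own mechanism to forward environment variables to the ranks, e.g., mpiexec -genv with MPICH's Hydra.)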

OSHMPI Environment Variables

  • OSHMPI_VERBOSE (0|1, default 0) Prints both the standard and the OSHMPI environment variables to stdout.

  • OSHMPI_AMO_OPS (comma-separated operations, default "all") Defines the set of AMO operations used in the OpenSHMEM program. If all PEs issue concurrent AMOs with only the same operation, or the same operation plus fetch, then OSHMPI can directly use MPI accumulate operations. This is because MPI guarantees the atomicity of accumulate operations only under same_op or same_op_no_op. The default value of OSHMPI_AMO_OPS is "cswap,finc,inc,fadd,add,fetch,set,swap,fand,and,for,or,fxor,xor" (identical to "all"). Any combination of these operations can be given at execution time.

    This variable takes effect only when OSHMPI is configured with --enable-amo=auto.

  • OSHMPI_MPI_GPU_FEATURES (comma-separated features, default "all") Defines the list of GPU-aware communication functions provided by the underlying MPI library. The default value is "pt2pt,put,get,acc" (identical to "all"). Any combination of these features can be given at execution time. If none of the features is supported, specify "none".

  • OSHMPI_ENABLE_ASYNC_THREAD (0|1, default 0) Enables the asynchronous progress thread at runtime when MPI point-to-point based active messages are used internally (see the example after this list). Both AMO and RMA may use the active-message-based approach:

    • When an AMO cannot be directly translated to MPI accumulates (see OSHMPI_AMO_OPS), each AMO operation is issued via active messages.
    • When a GPU buffer is used in an RMA operation but the MPI library does not support GPU awareness in the RMA mode (see OSHMPI_MPI_GPU_FEATURES), each RMA operation is issued via active messages.

    This variable takes effect only when OSHMPI is configured with --enable-async-thread=auto and --enable-threads=multiple.
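
For example, to tell OSHMPI that the program issues only fetch-add and add AMOs (enabling direct MPI accumulates) and to enable the progress thread for any remaining active-message paths (illustrative values; adjust to your application):

OSHMPI_AMO_OPS=fadd,add OSHMPI_ENABLE_ASYNC_THREAD=1 /path/to/mpiexec -np 2 ./test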

Debugging Options

  • Enable the debugging flag with the configure option --enable-g.
  • Set the environment variable SHMEM_DEBUG=1 to print OSHMPI internal debugging messages.

Test Framework

OSHMPI uses the SOS test suite for correctness testing.

  • Tested platforms: CentOS, macOS (compilation only)
  • Tested MPI implementations:
    • MPICH-3.4rc1 (with CH3/TCP, CH4/OFI, or CH4/UCX)
    • MPICH main branch (with CH3/TCP, CH4/OFI, or CH4/UCX)
    • OpenMPI 4.0.5 (with UCX)

Known Issues

  1. Some OpenSHMEM features are not supported in OSHMPI.

    • Context: OSHMPI cannot create a separate or shared context on top of the MPI interfaces. This feature may become possible if the MPI endpoints concept is accepted by the MPI Forum and implemented in MPI libraries. The current version always returns the SHMEM_NO_CTX error from shmem_ctx_create(); shmem_ctx_destroy() is a no-op.
  2. OSHMPI does not provide Fortran APIs.

  3. OSHMPI may not work on 32-bit platforms. This is because some internal routines rely on 64-bit integers, e.g., shmem_set_lock(), shmem_clear_lock(), and shmem_test_lock().

  4. OSHMPI may not correctly initialize symmetric data on macOS platforms.

Support

If you have problems or need assistance with OSHMPI installation and usage, please contact the [email protected] mailing list.

Bug Reporting

If you have found a bug in OSHMPI, please contact the [email protected] mailing list or create an issue ticket on GitHub: https://github.com/pmodels/oshmpi. If possible, please try to reproduce the error with a smaller application or benchmark and include that in your bug report.


Issues

Context: Adapt RMA and AMO with team/context

Description: Once a team is created, the user can create a context from the team via SHMEM_TEAM_CREATE_CTX. A context represents a set of local communication resources, similar to an MPI VCI. The context is then used with RMA or atomic (AMO) operations. When a team context is used, the destination PE of each RMA or AMO operation must be interpreted as a team-relative PE, similar to the win-relative target_rank in MPI RMA.

Suggested implementation

# Update the comm-based implementation posted in #36.
# For each team, create `num_contexts` (>= 0) windows at team creation time.
# It is safe to call win_create here because team creation is also collective.
SHMEM_TEAM_SPLIT_STRIDED
SHMEM_TEAM_SPLIT_2D

SHMEM_TEAM_CREATE_CTX -> return a previously created win (local op)
SHMEM_CTX_GET_TEAM    -> return the associated team object (local op)

TODO

  • Implement SHMEM_TEAM_CREATE_CTX. Can skip optimization with SHMEM_CTX_SERIALIZED | SHMEM_CTX_PRIVATE | SHMEM_CTX_NOSTORE for now.
  • Update remote PE in all RMA and AMO operations by using team-relative PE.

Misc: develop non-blocking AMO

Description: The existing atomic operations (AMO) are blocking (i.e., they return when the operation is locally complete). They are implemented using the corresponding MPI accumulate operation plus MPI_Win_flush_local. The new nonblocking versions can be translated directly to MPI accumulate operations, as sketched after the list below.

SHMEM_ATOMIC_FETCH_NBI
SHMEM_ATOMIC_COMPARE_SWAP_NBI
SHMEM_ATOMIC_SWAP_NBI
SHMEM_ATOMIC_FETCH_INC_NBI
SHMEM_ATOMIC_FETCH_ADD_NBI
SHMEM_ATOMIC_FETCH_AND_NBI
SHMEM_ATOMIC_FETCH_OR_NBI
SHMEM_ATOMIC_FETCH_XOR_NBI
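
A minimal sketch of one such routine, assuming hypothetical internal helper names (OSHMPI_translate_disp, OSHMPI_oppool_alloc_int, and OSHMPI_global.win are illustrative, not the actual names in src/internal/amo_impl.h):

#include <mpi.h>

/* Sketch of shmem_int_atomic_fetch_add_nbi: issue the fetch-add but defer
 * completion to shmem_quiet() instead of flushing here. */
void shmem_int_atomic_fetch_add_nbi(int *fetch, int *dest, int value, int pe)
{
    /* Hypothetical helper: translate a symmetric address into a window displacement. */
    MPI_Aint disp = OSHMPI_translate_disp(dest);
    /* Hypothetical helper: keep the origin value alive until the window is flushed. */
    int *op_buf = OSHMPI_oppool_alloc_int(value);

    MPI_Fetch_and_op(op_buf, fetch, MPI_INT, pe, disp, MPI_SUM, OSHMPI_global.win);
    /* Unlike the blocking variant, no MPI_Win_flush_local here. */
}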

Starting point

  • Section 9.7.2 Nonblocking Atomic Memory Operations in OpenSHMEM v1.5 spec
  • src/internal/amo_impl.h in OSHMPI

TODO

  • Implement functions
  • Add 1 test in tests/

Intel GPU memory kind support

  • Prototype code for supporting Intel GPU memkind
  • Merge into OSHMPI/master

(The PR is temporarily closed because the Intel GPU infrastructure is not ready.)

Strided: Analyze overhead of strided datatype creation and decoding

Description: Analyze the overheads of strided datatype creation and decoding in the strided RMA path. Configure with --disable-strided-cache to disable the datatype cache optimization in OSHMPI.

Starting point:

  • Look into the OSHMPI_create_strided_dtype function in OSHMPI.
  • Can use tests/iput.c as the test program.

Hints: The datatype created in OSHMPI is always a resized vector with blocklength = 1.
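
For reference, a sketch of how such a datatype might be constructed (illustrative only; study OSHMPI_create_strided_dtype for the actual code):

#include <mpi.h>

/* Build a resized vector with blocklength = 1: nelems elements spaced
 * `stride` base elements apart. */
MPI_Datatype create_strided_dtype(MPI_Datatype base, int stride, int nelems)
{
    MPI_Datatype vec, resized;
    MPI_Aint lb, extent;

    MPI_Type_vector(nelems, 1, stride, base, &vec);
    MPI_Type_get_extent(base, &lb, &extent);
    /* Resize so that consecutive uses of the type start one stride apart. */
    MPI_Type_create_resized(vec, 0, stride * extent, &resized);
    MPI_Type_commit(&resized);
    MPI_Type_free(&vec);
    return resized;
}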

Evaluation platform: LCRC/Bebop Broadwell and KNL are preferred

Estimated effort: 3 days to 1 week

TODOs:

  • Overhead breakdown of strided datatype creation in OSHMPI/MPICH/Yaksa path on CPU
  • Overhead breakdown of strided datatype creation in OSHMPI/MPICH/dataloop path on CPU (optional)
  • Overhead analysis of strided datatype decoding in PUT in OSHMPI/MPICH path (assume yaksa and dataloop are the same) on CPU
  • Overhead breakdown of strided datatype creation in OSHMPI/MPICH/Yaksa path on KNL
  • Overhead breakdown of strided datatype creation in OSHMPI/MPICH/dataloop path on KNL (optional)
  • Overhead analysis of strided datatype decoding in PUT in OSHMPI/MPICH path (assume yaksa and dataloop are the same) on KNL

Problems running oshmpi with Fujitsu MPI on Fugaku

I can build, but this is what I see on a compute node. Any idea?

(gdb) cont
Continuing.
[New Thread 0x4000025ff010 (LWP 14149)]
[New Thread 0x4000029ff010 (LWP 14150)]

Thread 1 "a.out" received signal SIGSEGV, Segmentation fault.
ompi_mfh_base_real_t_cvar_write () at pcvar_write.c:43
43	pcvar_write.c: No such file or directory.
(gdb) bt
#0  ompi_mfh_base_real_t_cvar_write () at pcvar_write.c:43
#1  ompi_mfh_ptl_t_cvar_write ()
    at ../../../../src/ompi/mca/mfh/ptl/mfh_ptl_call.h:692
#2  PMPI_T_cvar_write ()
    at ../../../../src/ompi/mca/mfh/base/mfh_base_func_defs.h:13523
#3  0x00004000000a62a8 in set_mpit_cvar (cvar_name=<optimized out>,
    val=<optimized out>) at ../oshmpi-git/src/internal/setup_impl.c:698
#4  0x00004000000a6354 in initialize_mpit ()
    at ../oshmpi-git/src/internal/setup_impl.c:708
#5  0x00004000000a65d4 in OSHMPI_initialize_thread (required=<optimized out>,
    provided=<optimized out>) at ../oshmpi-git/src/internal/setup_impl.c:780
#6  0x00004000000b24a0 in shmem_init () at ../oshmpi-git/src/shmem/setup.c:13
#7  0x0000000000400ee4 in main () at hello.c:64
(gdb) q
A debugging session is active.

	Inferior 1 [process 14143] will be detached.

Quit anyway? (y or n) y
Detaching from program: /vol0004/ra010008/XXXXXX/shmem/openshmem-examples/c/a.out, process 14143
[Inferior 1 (process 14143) detached]
[c34-0003c:14143] *** Process received signal ***
[c34-0003c:14143] Signal: Segmentation fault (11)
[c34-0003c:14143] Signal code: Address not mapped (1)
[c34-0003c:14143] Failing at address: 0x1
[c34-0003c:14143] [ 0] linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0x40000006066c]
[c34-0003c:14143] [ 1] /opt/FJSVxtclanga/tcsds-1.2.30a/lib64/libmpi.so.0(PMPI_T_cvar_write+0x54)[0x40000023d574]
[c34-0003c:14143] [ 2] /home/ra010008/XXXXXX/opt/oshmpi/git/lib/liboshmpi.so.0(+0x162a8)[0x4000000a62a8]
[c34-0003c:14143] [ 3] /home/ra010008/XXXXXXopt/oshmpi/git/lib/liboshmpi.so.0(+0x16354)[0x4000000a6354]
[c34-0003c:14143] [ 4] /home/ra010008/XXXXXX/opt/oshmpi/git/lib/liboshmpi.so.0(OSHMPI_initialize_thread+0x270)[0x4000000a65d4]
[c34-0003c:14143] [ 5] /home/ra010008/XXXXXX/opt/oshmpi/git/lib/liboshmpi.so.0(shmem_init+0x24)[0x4000000b24a0]
[c34-0003c:14143] [ 6] ./a.out[0x400ee4]
[c34-0003c:14143] [ 7] /lib64/libc.so.6(__libc_start_main+0xe4)[0x400001030be4]
[c34-0003c:14143] [ 8] ./a.out[0x400dfc]
[c34-0003c:14143] *** End of error message ***

openpa: replacement of openpa

The openpa Git repository might be deleted at some point, as it is no longer used in MPICH. We might want to find a replacement for it.

Misc: minimal support of profiling interface

Similar to the MPI profiling interface (PMPI), OpenSHMEM 1.5 defines pshmem_ wrappers and a control function, shmem_pcontrol.

Starting point

  • Section 10 OpenSHMEM Profiling Interface in OpenSHMEM spec v1.5
  • Many type/size-specific functions are generated by autogen.sh. Also see the template files (.tpl) under src/shmem. We may autogenerate the pshmem_ wrappers the same way.
  • configure.ac already detects USE_WEAK_SYMBOLS (see the weak-symbol sketch after this list).
  • Can reuse the PMPI_ implementation approach.
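
For reference, the usual weak-symbol pattern might look like this sketch (the real guards and bodies would live in OSHMPI's generated sources):

#ifdef USE_WEAK_SYMBOLS
/* Make shmem_quiet a weak alias for the profiling symbol pshmem_quiet, so a
 * tool can interpose shmem_quiet and itself call pshmem_quiet. */
#pragma weak shmem_quiet = pshmem_quiet
#endif

void pshmem_quiet(void)
{
    /* ... actual implementation ... */
}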

TODO

  • Implementation of pshmem_ weak symbol for all functions
  • Implementation of shmem_pcontrol (NO-OP is fine)

Misc: develop shmem_malloc_with_hints

shmem_malloc_with_hints was added in OpenSHMEM 1.5, with the hints SHMEM_MALLOC_ATOMICS_REMOTE and SHMEM_MALLOC_SIGNAL_REMOTE.

Basic implementation

  • Reuse existing shmem_malloc

Optional optimizations

  • Develop equivalent hints for MPI_Win_allocate and use them to implement shmem_malloc_with_hints?
    • A potential drawback is the window lookup overhead at each RMA/AMO operation

Switch between RMA-based and AM-based RMA

  • Prototype the OSHMPI hint OSHMPI_MPI_GPU_FEATURES and select AM-based or RMA-based PUT/GET according to the MPI features and the per-message buffer type
  • PR for OSHMPI/master

PR #68

Coll: Develop team-based collectives

Description: OpenSHMEM 1.5 adds team-based collectives. They translate straightforwardly to MPI collectives using the corresponding MPI communicator.

Depends on #36

Starting point:

  • Section 9.9 Collective Routines in OpenSHMEM v1.5 spec
  • The deprecated active-set-based collectives in OSHMPI (search for the OSHMPI_* functions in src/internal/coll_impl.h)

TODO

  • Basic implementation of team based collectives
  • Add 1-2 tests in tests/

shmem_calloc: Bad allocation call

The following code is trivially wrong (see the BUG comment inline):

void *shmem_calloc(size_t count, size_t size)
{
    void *ptr = NULL;

    OSHMPI_NOINLINE_RECURSIVE()
        ptr = OSHMPI_malloc(size);
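    /* BUG: allocates only size bytes; the memset below then writes count * size */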
    memset(ptr, 0, count * size);

    return ptr;
}

It should use ptr = OSHMPI_malloc(count * size); instead.

Web: prepare a student projects page

To better coordinate student internships, it would be useful to have a page listing all small R&D projects (e.g., projects a student can complete in three months).

Misc: develop signal operations

SHMEM_PUT_SIGNAL
SHMEM_PUT_SIGNAL_NBI
SHMEM_SIGNAL_FETCH

Starting point

  • Section 9.8 Signaling Operations in OpenSHMEM v1.5 spec
  • Can use MPI_Accumulate with MPI_REPLACE as the put, plus MPI_Fetch_and_op as the signal, as sketched below. Because MPI guarantees the ordering of accumulate operations, completion of the FOP ensures completion of the preceding accumulate.
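
A minimal sketch of this scheme for a blocking put-with-signal (OSHMPI_translate_disp is a hypothetical helper; the real implementation would also handle contexts and window selection):

#include <mpi.h>
#include <stdint.h>

void put_signal_sketch(void *dest, const void *src, size_t nbytes,
                       uint64_t *sig_addr, uint64_t signal, int pe, MPI_Win win)
{
    MPI_Aint dest_disp = OSHMPI_translate_disp(dest);      /* hypothetical helper */
    MPI_Aint sig_disp = OSHMPI_translate_disp(sig_addr);
    uint64_t old;   /* previous signal value; unused here */

    /* Put the payload as an accumulate with MPI_REPLACE ... */
    MPI_Accumulate(src, (int) nbytes, MPI_BYTE, pe, dest_disp,
                   (int) nbytes, MPI_BYTE, MPI_REPLACE, win);
    /* ... then set the signal.  Per the ordering argument above, the signal
     * cannot become visible before the payload. */
    MPI_Fetch_and_op(&signal, &old, MPI_UINT64_T, pe, sig_disp, MPI_REPLACE, win);
    MPI_Win_flush_local(pe, win);   /* local completion for the blocking variant */
}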

TODO

  • Implementation
  • 1 test program in tests/

Team: Develop teams concept by using MPI comm

Description: OpenSHMEM 1.5 added the teams concept, which is very similar to MPI groups (i.e., a group of processes). We will develop the team management routines in OSHMPI using MPI group/communicator routines.

Strictly speaking, a team conceptually equals an MPI group and a context equals an MPI communicator. But because teams are used directly by the collective path, we need to create an MPI communicator at team creation. Thus, in practice, team == MPI comm.

Suggestion for basic implementation: Please feel free to optimize if you have a better idea :-)

SHMEM_TEAM_MY_PE -> MPI_Comm_rank
SHMEM_TEAM_N_PES -> MPI_Comm_size
SHMEM_TEAM_TRANSLATE_PE -> MPI_Group_translate_ranks

## OpenSHMEM requires all PEs in the parent group to make the creation call
SHMEM_TEAM_SPLIT_STRIDED -> MPI_Group_range_incl + MPI_Comm_create
SHMEM_TEAM_SPLIT_2D -> likely MPI_Group_range_incl + MPI_Comm_create

SHMEM_TEAM_DESTROY -> MPI_Comm_destroy

## predefined symbols
SHMEM_TEAM_WORLD -> a predefined integer internally maps to dup of MPI_COMM_WORLD
SHMEM_TEAM_SHARED -> a predefined integer internally maps to comm created by
              MPI_Comm_split_type with MPI_COMM_TYPE_SHARED
SHMEM_TEAM_INVALID -> MPI_COMM_NULL
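
A minimal sketch of the strided split under this mapping (illustrative names, not OSHMPI's actual internals):

#include <mpi.h>

int team_split_strided_sketch(MPI_Comm parent, int start, int stride, int size,
                              MPI_Comm *new_team)
{
    MPI_Group parent_group, new_group;
    int range[1][3] = { { start, start + stride * (size - 1), stride } };

    MPI_Comm_group(parent, &parent_group);
    MPI_Group_range_incl(parent_group, 1, range, &new_group);
    /* OpenSHMEM requires all PEs in the parent team to make this call, so the
     * collective MPI_Comm_create is safe here.  Non-members get MPI_COMM_NULL,
     * matching SHMEM_TEAM_INVALID. */
    int err = MPI_Comm_create(parent, new_group, new_team);
    MPI_Group_free(&new_group);
    MPI_Group_free(&parent_group);
    return err;
}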

Starting point:

  • Section 9.4 Team Management Routines in OpenSHMEM v1.5 spec
  • Look into the coll_acquire_comm function in OSHMPI, where we already create an MPI_Comm for each OpenSHMEM collective call.

Related tasks: team-based collectives, team-context-based RMA and AMO

Estimated effort: 2 weeks

TODOs:

  • Basic implementation of all above routines
  • 1-2 test programs added to tests/ for correctness validation
  • Try team management related tests in SOS test suite

Wait: extend wait_until|test with all|any|some_vector

Wait on or test elements in an array, with a mask (status) specifying whether each element is excluded from the wait set:

size_t shmem_wait_until_some(TYPE *ivars, size_t nelems, size_t *indices, const int *status,
             int cmp, TYPE cmp_value);
shmem_wait_until_any_vector
shmem_wait_until_some_vector
shmem_test_all_vector
shmem_test_any_vector
shmem_test_some_vector

Starting point

  • Section 9.10 Point-To-Point Synchronization Routines in OpenSHMEM spec 1.5
  • Can use MPI_Accumulate to get the values of all elements in ivars at once, as ivars is always a contiguous buffer.

TODO

  • Implementation
  • 1-3 tests in tests/
  • Ensure correctness with SOS test suite

doc: document use of OSHMPI+Casper

To ensure full progress in OSHMPI, the user has to either enable the asynchronous progress thread or use Casper. The use of OSHMPI + Casper is not well documented.

Misc: multipliers in the SHMEM_SYMMETRIC_SIZE environment variable

See Section 8 in OpenSHMEM spec 1.5.

The current version supports only an integer value. The new version also allows a floating-point value with an optional character suffix (e.g., k, m, t).

Specifies the size (in bytes) of the symmetric heap memory per PE. The resulting size is implementation-defined and must be at least as large as the integer ceiling of the product of the numeric prefix and the scaling factor. The allowed character suffixes for the scaling factor are as follows:

  • k or K multiplies by 2^10 (kibibytes)
  • m or M multiplies by 2^20 (mebibytes)
  • g or G multiplies by 2^30 (gibibytes)
  • t or T multiplies by 2^40 (tebibytes)

For example, the string "20m" is equivalent to the integer value 20971520, or 20 mebibytes. Similarly, the string "3.1M" is equivalent to the integer value 3250586. Only one multiplier is recognized and any characters following the multiplier are ignored, so "20kk" will not produce the same result as "20m". Usage of the string ".5m" will yield the same result as the string "0.5m".

An invalid value for SHMEM_SYMMETRIC_SIZE is an error, which the OpenSHMEM library shall report by either returning a nonzero value from shmem_init_thread or causing program termination.
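
A sketch of a parser implementing these rules (illustrative; the actual OSHMPI code may differ):

#include <ctype.h>
#include <math.h>
#include <stdlib.h>

/* Returns the heap size in bytes, or 0 for an invalid value. */
size_t parse_symmetric_size(const char *str)
{
    char *end = NULL;
    double val = strtod(str, &end);     /* accepts "20", "3.1", ".5", ... */
    if (end == str || val < 0)
        return 0;                       /* no numeric prefix */

    double scale = 1.0;
    switch (tolower((unsigned char) *end)) {
        case 'k': scale = 1024.0; break;
        case 'm': scale = 1048576.0; break;
        case 'g': scale = 1073741824.0; break;
        case 't': scale = 1099511627776.0; break;
        default: break;                 /* plain byte count; trailing chars ignored */
    }
    /* Integer ceiling of (numeric prefix * scaling factor), per the spec. */
    return (size_t) ceil(val * scale);
}

For example, parse_symmetric_size("3.1M") yields 3250586, matching the spec text above.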

Progress mismatching

  • Progress engine refactoring to get rid of function pointer access
  • Dynamically adjust the progress polling for AM in MPICH
  • Ensure progress polling is skipped in flush* routines within the OSHMPI context.
  • Set required CVAR in OSHMPI

Relevant PR in OSHMPI: #74
Relevant PR in MPICH: #4830

IPO static inlining for OSHMPI+MPICH

  • Investigate ways to statically inline the OSHMPI and MPICH libraries
  • Analyze instruction overhead with statically inlined put and quiet
  • Add a compile option in OSHMPI to statically inline the program together with the OSHMPI fast path
  • Add the required MPI compilation instructions to the OSHMPI README.

PR: #58

Strided: Optimize simple datatype creation

Description: OSHMPI uses only resized vector datatypes with blocklength = 1. Can we do any optimization for such simple datatypes in MPICH to reduce both creation and decoding overhead?

Depends on #34

Estimated effort: 1-2 days after finishing #34

amo: apply fast-path analysis and optimization

Direct AMO mode still shows about 2x the overhead of SOS.

Benchmark: osu_oshm_atomics_all2one (shmem_int_finc -> MPI FOP)

              #SOS    #direct-amo
Theta/np=2    14.88   39.22
Cori/np=2     2.67    4.21

The current atomics check in OFI may be the cause. We need to analyze the atomics path and apply optimizations similar to the previous RMA work (IPO inlining, reducing instructions on the AMO path).

Wait: extend wait_until|test with all|any|some

Similar to the MPI_Wait|Test family, these routines accept an array of elements and atomically check for updates to each element. They cannot be translated to MPI_Wait|Test{any|all|some} because the OpenSHMEM routines guarantee atomicity with respect to remote AMO operations.

shmem_wait_until_all
shmem_wait_until_any
shmem_wait_until_some
shmem_test_all
shmem_test_any
shmem_test_some

Starting point

  • Section 9.10 Point-To-Point Synchronization Routines in OpenSHMEM spec v1.5
  • Extend src/internal/p2p_impl.h. Use MPI_Accumulate to read all elements at a time, as sketched below.
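
A minimal sketch for the "all" variant with SHMEM_CMP_EQ, using MPI_Get_accumulate with MPI_NO_OP as the reading counterpart of the accumulate approach above (helper names OSHMPI_translate_disp and OSHMPI_progress_poll are hypothetical):

#include <mpi.h>
#include <stdlib.h>

/* me: this PE's rank; win: the window covering ivars. */
void wait_until_all_sketch(int *ivars, size_t nelems, int cmp_value, int me, MPI_Win win)
{
    int *snapshot = (int *) malloc(nelems * sizeof(int));
    MPI_Aint disp = OSHMPI_translate_disp(ivars);   /* hypothetical helper */

    for (;;) {
        /* Atomically read the whole contiguous array in one operation. */
        MPI_Get_accumulate(NULL, 0, MPI_INT, snapshot, (int) nelems, MPI_INT,
                           me, disp, (int) nelems, MPI_INT, MPI_NO_OP, win);
        MPI_Win_flush_local(me, win);   /* make the snapshot valid locally */

        size_t i;
        for (i = 0; i < nelems && snapshot[i] == cmp_value; i++);
        if (i == nelems)
            break;                      /* every element satisfies the condition */
        OSHMPI_progress_poll();         /* hypothetical: drive AM progress */
    }
    free(snapshot);
}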

TODO

  • Implementation
  • 1-3 tests in tests/
