
zdevito / aten

662 stars · 31 watchers · 124 forks · 20.83 MB

ATen: A TENsor library for C++11

CMake 5.51% C 14.40% C++ 56.71% Cuda 20.00% Python 1.83% Shell 0.22% Assembly 1.33%
Topics: tensor, torch7, pytorch

aten's People

Contributors

apaszke, bddppq, bwasti, colesbury, cpuhrsch, ezyang, gchanan, goldsborough, ifedan, iotamudelta, izdeby, jerryzh168, killeent, orionr, peterjc123, pietern, slayton58, smessmer, soumith, ssnl, suo, syed-ahmed, t-vi, vishwakftw, vitalyfedyunin, xuhdev, yangqing, zasdfgbnm, zdevito, zou3519

aten's Issues

Failing to configure on a CUDA-less laptop (Ubuntu 16.04)

Lots of little CMake problems.

  1. Policy issue
CMake Warning (dev) at lib/THNN/CMakeLists.txt:61 (LINK_DIRECTORIES):
  This command specifies the relative path  as a link directory.
  Policy CMP0015 is not set: link_directories() treats paths relative to the
  source dir.  Run "cmake --help-policy CMP0015" for policy details.  Use the
  cmake_policy command to set the policy and suppress this warning.
This warning is for project developers.  Use -Wno-dev to suppress it.
  2. Disabling CUDA seems to work
CUDA_TOOLKIT_ROOT_DIR not found or specified
-- Could NOT find CUDA (missing:  CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY) (Required is at least version "5.5")
-- CUDA not found: disabling THC

but fails later

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_CUDART_LIBRARY (ADVANCED)
    linked by target "atest" in directory /home/leonb/ATen/src/ATen
-- Configuring incomplete, errors occurred!

Workaround and possible fix

--- a/src/ATen/CMakeLists.txt
+++ b/src/ATen/CMakeLists.txt
@@ -186,7 +186,9 @@ INSTALL(TARGETS ATen
 
 add_executable(atest test/atest.cpp)
 target_link_libraries(atest ATen)
-target_link_libraries(atest ${CUDA_LIBRARIES})
+IF(CUDA_FOUND)
+  target_link_libraries(atest ${CUDA_LIBRARIES})
+ENDIF()
 
 FOREACH(HEADER ${base_h})
   INSTALL(FILES ${HEADER} DESTINATION ${TENSOR_LIB_INSTALL_INCLUDE_DIR}/ATen)

OS X contbuild

We recently had two bugs that broke OS X builds for downstream users. It would be great to set up an OS X contbuild (continuous build) for ATen so we can catch these regressions more quickly.

Size-checking utility functions.

Similar to the size language of the original aten, but implemented as C++ utility functions that match the dimensions of tensors to variables and report an error if a variable is bound to different sizes in two different Tensors.
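A minimal sketch of what such a utility could look like, assuming a simple scheme where each dimension is named by a single character (the SizeChecker type and its names are hypothetical, not the eventual ATen API):

#include <cstdint>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// Binds single-character dimension variables to concrete sizes and
// reports an error if a variable is bound differently in two Tensors.
struct SizeChecker {
  std::map<char, int64_t> bound;

  void check(const std::string& name, const std::vector<int64_t>& sizes,
             const std::string& vars) {
    if (sizes.size() != vars.size())
      throw std::runtime_error(name + ": expected " +
                               std::to_string(vars.size()) + " dimensions");
    for (size_t i = 0; i < vars.size(); ++i) {
      auto it = bound.find(vars[i]);
      if (it == bound.end())
        bound[vars[i]] = sizes[i];  // first use defines the variable
      else if (it->second != sizes[i])
        throw std::runtime_error(name + ": dimension '" + vars[i] +
                                 "' is " + std::to_string(sizes[i]) +
                                 " but was bound to " +
                                 std::to_string(it->second) + " elsewhere");
    }
  }
};

With this, checker.check("input", {8, 3, 32, 32}, "bchw") followed by checker.check("target", {8, 10}, "bk") would throw if the two 'b' sizes differed.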

Compilation issues

I'm trying to build ATen on an Ubuntu machine with GCC 5.4. When trying to link test-meter, I get a lot of missing STL symbols, e.g.

../ATen/libATen.so.1: undefined reference to `std::__cxx11::basic_stringstream<char, std::char_traits<char>, std::allocator<char> >::~basic_stringstream()'
../ATen/libATen.so.1: undefined reference to `std::runtime_error::runtime_error(char const*)'
libxtmeter.so: undefined reference to `std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Alloc_hider::_Alloc_hider(char*, std::allocator<char> const&)'
../../lib/THCUNN/libTHCUNN.so: undefined reference to `std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::c_str() const'

Build intermittently fails with "explicit type is missing"

The error on failure:

/private/home/ezyang/ATen/lib/THC/generic/THCTensorMathPairwise.cu(47): error: explicit type is missing ("int" assumed)

/private/home/ezyang/ATen/lib/THC/generic/THCTensorMathPairwise.cu(58): error: explicit type is missing ("int" assumed)

2 errors detected in the compilation of "/tmp/tmpxft_000076f6_00000000-7_THCTensorMathPairwise.cpp1.ii".
CMake Error at THC_generated_THCTensorMathPairwise.cu.o.cmake:267 (message):
  Error generating file
  /private/home/ezyang/ATen/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorMathPairwise.cu.o

It goes away if you run make again, but it happens fairly reproducibly on a fresh build.

CMake problem: running make causes complete rebuild every time

I have noticed that after building the repository, changing some files, and running make again, everything gets rebuilt from scratch.

I think it might be because a lot of cpp files are generated by Python during the build, which might fool make into thinking they have changed as well.

I haven't figured out how to tackle this yet.

Tensor cuda stream information

We need the CUDA stream information for tensors to be used in c2isl. The only way to get it right now is via the TH state.

Scalars behave weirdly

This always prints out an integer:

auto x = at::CPU(at::kFloat).rand({500});
std::cout << x.sum().to<float>() << '\n';

This doesn't compile:

auto y = 100.0f * x.sum();

I can add to this issue if I find any more.

Feature request: Operations on scalars

Currently, Scalar operations are not overloaded for C types. This makes it quite inconvenient to do things like losses.sum() / n_tot simply to print the losses, and it slows down development.

error: no match for ‘operator*’ (operand types are ‘at::Scalar’ and ‘float’)
   auto asdf = weights.sum() * 100.f;
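A minimal sketch of the kind of overloads being requested (hypothetical free functions, assuming Scalar exposes toFloat(); a real implementation would cover more operators and types):

#include <ATen/ATen.h>

// Hypothetical: let Scalar interoperate with C arithmetic types.
inline at::Scalar operator*(const at::Scalar& s, float v) {
  return at::Scalar(s.toFloat() * v);
}
inline at::Scalar operator*(float v, const at::Scalar& s) { return s * v; }

// With overloads like these, the failing expression above would compile:
//   auto asdf = weights.sum() * 100.f;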

Deal with potential clobber between PyTorch's copy of ATen and a standalone ATen

Both PyTorch and ATen (standalone) produce libATen.so files. This is hazardous: if they are not ABI compatible (and they probably are not), you will get extremely hard-to-diagnose errors when one clobbers the other.

Unfortunately, on the PyTorch side, libATen.so cannot be statically linked into Torch's main _C.so, because cffi plugins for PyTorch may wish to interact with ATen directly. (Actually, how easy is it to actually do this? As ATen is a C++ library, any FFI code must be very careful to get C++ ABI compatibility...)

One possibility is to give PyTorch and standalone ATen distinct symbol names and library names.

Another possibility is to "deprecate" standalone ATen installation: to get ATen, you must install PyTorch (solving the duplicate ATen problem.) TH headers get installed to $CONDA_PREFIX/lib/python3.6/site-packages/torch/lib/include/ which can be used by users (albeit with some difficulty.) One hazard is that you must still make sure your C++ compiler is ABI compatible with the ATen build.

1-dimensional tensors can be "expanded" to 0-dimensional ones

This issue was mentioned in #49, but I pulled out the related changes to have a minimal commit there. Because TH/THC sees both of these as 1-dimensional, the "expansion" works even though it is clearly not an expansion. I think other similar functions (like resize) are okay, because they don't imply "expansion", but we should probably do a thorough check.

Linking against ATen requires some work

Even with the Conda build of ATen at https://anaconda.org/ezyang/aten, it still takes some work to actually link against ATen. In the end, this incantation was sufficient for me:

LD_LIBRARY_PATH=$CUDA_HOME/lib64:$CONDA_PREFIX/lib g++ -I$CONDA_PREFIX/include -L$CONDA_PREFIX/lib -lATen -std=c++11 myprog.cpp

(though I don't claim that this is the "best" way to do it.)

We should either document this or make it easier.

Handle Advanced Indexing in ATen

@killeent @colesbury @gchanan
This issue is just to track the progress on advanced indexing plans in ATen. To summarize our discussion, the initial idea is to add 'low-level' support for advanced indexing in the form of an operator like:

Tensor index(const char * format, TensorList inputs);

format is a string that describes the indexing with tensors, ellipses, and colons:
".tt:" describes an indexing expression like [..., a_broadcastable_tensor, another_tensor, :] in Python. Each use of 't' consumes one entry in inputs.

We will probably need to eventually support:

  • t a tensor, in TensorList
  • . ellipsis in python ...
  • : colon in python :
  • r a range specifier a:b, encoded as a tensor with 2 (3?) elements.

This will be the version of indexing wrapped by other interfaces, like PyTorch. We may eventually create a wrapper on top of this that uses objects more idiomatic to C++.
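For illustration, a call using the proposed operator might look like the sketch below (hypothetical usage; whether index ends up as a method or a free function is still open):

// Hypothetical: the C++ analogue of Python's x[..., mask, idx, :].
// '.' stands for '...', each 't' consumes one tensor from inputs,
// and ':' is a full slice.
Tensor result = x.index(".tt:", {mask, idx});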

Print ATen tensors

It would be really handy if it were possible to print ATen tensors, in the same way PyTorch tensors can be printed. Currently, the toString method only returns the type of the tensor.

Formatting.cpp doesn't handle zero-element tensors

The following code crashes:

auto tensor = CPU(kFloat).ones({0});
// prints 0 -9223372036854775808
std::cout << tensor.numel() << " " << tensor.dim() << std::endl; 
// throws std::bad_alloc
std::cout << tensor << std::endl;

Formatting.cpp handles tensors whose pImpl is NULL, but not tensors with zero elements (i.e., tensor.dim() returns kUndefinedDimensions).

question about THLongStorageView(ArrayRef<int64_t> ref, bool zero_dim_to_one)

Hello,

Could you explain how this conversion is supposed to work:

storage.data = (long*)(ref.data());

ref is a pointer to int64_t; storage.data is a pointer to long.

In the example below:

int64_t* ref = new int64_t[2];
ref[0] = 3; ref[1] = 4;
long* data = (long*)ref;

"data" will contain:
3 == data[0]
0 == data[1]

Is this the intended behavior?

Best,
Oleg.
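(A note on reproducing this, not part of the original question: the interleaving shown above occurs only when sizeof(long) is 4 bytes, e.g. on 32-bit or LLP64 platforms; on LP64 Linux/macOS, long is 8 bytes and the cast reads the same layout back intact. A minimal check:)

#include <cstdint>
#include <cstdio>

int main() {
  int64_t src[2] = {3, 4};
  long* data = (long*)src;  // only matches the layout when sizeof(long) == 8
  // LP64 (sizeof(long) == 8): data[0] == 3, data[1] == 4.
  // 4-byte long, little-endian: data[0] == 3 and data[1] == 0 -- the
  // high half of src[0] -- which matches the behavior reported above.
  std::printf("sizeof(long)=%zu data[0]=%ld data[1]=%ld\n",
              sizeof(long), data[0], data[1]);
  return 0;
}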

how to concatenate tensors

I have two images, both converted to tensors of shape 1 x 3 x h x w. Is there any function that can directly concatenate these two tensors into a tensor of shape 2 x 3 x h x w?
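(For reference, assuming ATen mirrors torch.cat as at::cat taking a TensorList and a dimension, the call would look like this:)

#include <ATen/ATen.h>

int main() {
  int64_t h = 64, w = 64;
  auto a = at::CPU(at::kFloat).rand({1, 3, h, w});
  auto b = at::CPU(at::kFloat).rand({1, 3, h, w});
  auto both = at::cat({a, b}, 0);  // shape: 2 x 3 x h x w
  return 0;
}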

[Scalars] Remove Scalar from return types of functions.

We want to unite functions like Scalar sum(Tensor & tensor); with the other variants of sum. To do this, we must return a Tensor in these cases.

This can be done in two parts:
[ ] Force the creation of Tensors from Scalar objects now.
[ ] Change TH so that in cases where scalars exist on the GPU, we don't bother to copy back and instead create a Tensor directly.

Note: the first part depends on changing how 0-dim tensors behave in all places where they are valid.

Improve error messages for undefined tensors

Tensor objects do not have to be defined:

Tensor foo; // undefined tensor reference
auto zeros = CPU(kFloat).zeros({3,4});
// will crash because foo is not defined.
add_out(zeros,zeros,foo);

At the very least, the auto-generated operators should check for this case and report a nice error.
Should output-style operators also automatically initialize the Tensor?
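A sketch of the guard the auto-generated operators could emit (the helper name is hypothetical; Tensor::defined() is assumed to report whether the reference is defined):

#include <ATen/ATen.h>
#include <stdexcept>
#include <string>

// Hypothetical check that generated operators could run on each argument.
static void check_defined(const at::Tensor& t, const char* arg,
                          const char* op) {
  if (!t.defined()) {
    throw std::runtime_error(std::string(op) + ": argument '" + arg +
                             "' is an undefined Tensor");
  }
}

// e.g. the add_out call above would first run:
//   check_defined(foo, "other", "add_out");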

Fix include file ordering issues.

Currently when you include ATen/Tensor.h in a compilation unit, you also need to include ATen/TensorMethods.h. This is handled by including ATen/ATen.h, but since we do no checks about what clients include, sometimes only Tensor.h gets included.

We can fix this by restricting what clients can include to only ATen/ATen.h. I am not sure what the right C++ mechanism is for enforcing this.

We should also figure out how to install only the user-facing header files (Tensor.h, Type.h) and not the implementation-only header files (CPUFloatTensor.h) which are only used in the build process.
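One common C++ mechanism for restricting includes (a sketch of one possibility; the macro name is made up) is a guard that only the umbrella header defines:

// In ATen/ATen.h, the umbrella header:
#define ATEN_INSIDE_UMBRELLA
#include "ATen/Tensor.h"
#include "ATen/TensorMethods.h"
#undef ATEN_INSIDE_UMBRELLA

// At the top of ATen/Tensor.h and other non-user-facing headers:
#ifndef ATEN_INSIDE_UMBRELLA
#error "Do not include ATen/Tensor.h directly; include ATen/ATen.h instead."
#endif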

code_template.py substitution string still fails on Python 2.7.5

Here is a self-contained repro:

[[email protected] ~/local/c2isl/third-party/pytorch] python --version
Python 2.7.5
[[email protected] ~/local/c2isl/third-party/pytorch] python foo.py
Traceback (most recent call last):
  File "foo.py", line 3, in <module>
    re.compile(x, re.MULTILINE)
  File "/usr/lib64/python2.7/re.py", line 190, in compile
    return _compile(pattern, flags)
  File "/usr/lib64/python2.7/re.py", line 242, in _compile
    raise error, v # invalid expression
sre_constants.error: nothing to repeat
[[email protected] ~/local/c2isl/third-party/pytorch] cat foo.py
import re
x = "(^[^\n\S]*)?\$([^\d\W][a-zA-Z0-9_]*|\{,?[^\d\W][a-zA-Z0-9_]*\,?})"
re.compile(x, re.MULTILINE)

The problem is that this version of Python does not understand (x*)?.

Scalars optionally backed by 0-dim Tensor.

Scalar objects should be able to be backed by a 0-dim Tensor as well as by an immediate in-place value. This will allow smart APIs to keep the Scalar on the device when possible.

Consider adding a nontrivial TensorRef type

Semantics are similar to const TensorImpl&.

Motivation:

  • It avoids passing const Tensor& which is technically a pointer to a pointer to the actual tensor
  • You can get const-correctness right on it (#27), because TensorRef would NOT support copy construction
  • It solves the problem mentioned in 9c05ed3

Copy doesn't support broadcasting

In PyTorch and NumPy, the source is broadcasted to the size of the destination. For example:

matrix = torch.randn(7, 5)
vec = torch.randn(5)
matrix.copy_(vec)  # each row of matrix contains vec

Currently, ATen resizes the destination tensor to the size of the source. We should change this behavior to match PyTorch and NumPy.
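(In ATen terms, the desired behavior is sketched below, assuming Type::randn mirrors the PyTorch call; today the last line resizes matrix rather than broadcasting vec.)

auto matrix = at::CPU(at::kFloat).randn({7, 5});
auto vec = at::CPU(at::kFloat).randn({5});
matrix.copy_(vec);  // desired: broadcast vec into each row of matrix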

0-dim Tensor support.

Port Xt-style wrappers around TH/THC that allow Tensors to behave as if they were 0-dim.

invalid static_cast

Please see
https://build.pytorch.org/job/pytorch-master-py3-linux/401/console

00:07:06.411 /home/jenkins/buildbot/workspace/pytorch-master-py3-linux/builder/pytorch/torch/lib/build/ATen/ATen/CPUByteType.cpp:1893:39: error: invalid static_cast from type ‘const at::CPUByteType*’ to type ‘at::Type*’
00:07:06.411          return static_cast<Type*>(this)->add_out(self, value, SparseTensor(other), result);
00:07:06.411                                        ^
00:07:06.411 /home/jenkins/buildbot/workspace/pytorch-master-py3-linux/builder/pytorch/torch/lib/build/ATen/ATen/CPUByteType.cpp: In member function ‘virtual at::Tensor at::CPUByteType::s_add(const at::Tensor&, at::Scalar, const at::Tensor&) const’:
00:07:06.411 /home/jenkins/buildbot/workspace/pytorch-master-py3-linux/builder/pytorch/torch/lib/build/ATen/ATen/CPUByteType.cpp:1905:39: error: invalid static_cast from type ‘const at::CPUByteType*’ to type ‘at::Type*’
00:07:06.411          return static_cast<Type*>(this)->add(self, value, SparseTensor(other));
00:07:06.411                           

Difficult to add hard-coded floats

Something like this will fail:

at::CPU(at::kFloat).zeros({5}) + 1e-6

With the message

libc++abi.dylib: terminating with uncaught exception of type std::domain_error: value cannot be losslessly represented in type Float: 0.000001

Seems quite inconvenient for anyone trying to add a small delta.
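(A workaround, assuming the error comes only from the lossy double-to-float narrowing of the literal: use a float literal, which round-trips exactly.)

// 1e-6 is a double literal and narrows lossily to float, triggering the
// domain_error; a float literal avoids the narrowing (sketch).
auto t = at::CPU(at::kFloat).zeros({5}) + 1e-6f;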

[dlpack] Pass deleter function to tensorFromBlob

For dlpack memory management to work, we need to pass a deleter function to tensorFromBlob() that is called only when the underlying storage is no longer needed. We should also clear the RESIZABLE flag on this storage to prevent crashes caused by a resize() on the shared storage. The changes will need to be made all the way down to the TH/THC allocators.
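(A hypothetical shape for such an API, for illustration only; the deleter parameter and its placement are assumptions, not the committed design:)

// Hypothetical extension of tensorFromBlob with a deleter callback that
// ATen would invoke when the storage's last reference is released.
Tensor tensorFromBlob(void* data, IntList sizes,
                      const std::function<void(void*)>& deleter);

// Usage sketch with a DLPack DLManagedTensor* named `managed`:
auto t = at::CPU(at::kFloat).tensorFromBlob(
    managed->dl_tensor.data, sizes,
    [managed](void*) { managed->deleter(managed); });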

Ability to mark ATen tensor non-resizable

If a user decides to convert an ATen tensor into a dlpack/numpy one, sharing underlying data, we MUST NOT resize the tensor. Unfortunately, there is no way in the current ATen API to set TH_STORAGE_RESIZABLE.

CMake Error at CMakeLists.txt:31 (add_subdirectory)

Hello, I am a beginner and I want to install ATen, but when I build it I run into a problem. I just use cmake .. -DCMAKE_INSTALL_PREFIX=/home/package/Aten/

CMake Error at CMakeLists.txt:31 (add_subdirectory):
add_subdirectory given source "lib/TH" which is not an existing directory.

CMake Error at CMakeLists.txt:46 (add_subdirectory):
add_subdirectory given source "lib/THNN" which is not an existing
directory.

CMake Error at CMakeLists.txt:47 (add_subdirectory):
add_subdirectory given source "lib/THS" which is not an existing directory.

CMake Error at CMakeLists.txt:53 (add_subdirectory):
add_subdirectory given source "lib/THC" which is not an existing directory.

CMake Error at CMakeLists.txt:54 (add_subdirectory):
add_subdirectory given source "lib/THCUNN" which is not an existing
directory.

CMake Error at CMakeLists.txt:55 (add_subdirectory):
add_subdirectory given source "lib/THCS" which is not an existing
directory.

CMake Error at CMakeLists.txt:70 (add_subdirectory):
add_subdirectory given source "src/ATen" which is not an existing
directory.

CMake Error at CMakeLists.txt:74 (add_subdirectory):
add_subdirectory given source "src/ATen/test" which is not an existing
directory.

CMake Error at CMakeLists.txt:75 (add_subdirectory):
add_subdirectory given source "src/data" which is not an existing
directory.

CMake Error at CMakeLists.txt:76 (add_subdirectory):
add_subdirectory given source "src/meter" which is not an existing
directory.

Please help me...

Defaults to Debug build

An ATen build with default cmake arguments will be a debug build. I don't think we should default to this.

Const-correctness

ATen presently has no const modifiers, even for methods that are obviously const (e.g. virtual bool isCuda() in Type). We should make it more const-correct.

Method to allocate CUDA tensors on specific device

PyTorch CUDA tensor constructors have an undocumented keyword argument device which allows you to specify what GPU device the tensor should be allocated on. Looking at Type in ATen (the documented method for allocating tensors), there does not seem to be any way to specify the device when allocating new tensors this way. There should be!

[Scalars] Remove Tensor + Scalar overloads, and dispatch correctly in Tensor + Tensor

Currently we have two overloads for operations where 0-dim tensors can occur:

Tensor + Tensor
Tensor + Scalar

Instead we should only have Tensor + Tensor. However, we still need to maintain good performance.
This means that in the case where we have GPUTensor + CPUScalar, we should call the THC_addc method and not the generic add.

In addition to modifying ATen to remove the Tensor + Scalar variant, this change needs to add the appropriate casting behavior for 0-dim tensors:

In the places we modify, we need the concept of a relocatable 0-dim Tensor. Unlike in other locations, in these special cases a 0-dim tensor will first be automatically converted to the right backend, and to the right type if it is not already. The rest of the function should then proceed as if the 0-dim Tensor had already been the right type/backend.

Document static linking mode

Apparently, it is possible to link libATen.so statically against TH and friends. This makes deployment easier in many situations, so we should document how to actually do it.

Use -Wno-missing-braces

ATen uses {true, true, true} for boolean array literals, but unfortunately Clang has a bug, https://bugs.llvm.org/show_bug.cgi?id=21689, which makes it spuriously warn that more braces are needed. We should turn off this warning.

I tried editing some obvious places in ATen's build system to apply this flag but it didn't stick. Maybe someone who is more familiar can fix it?

Allow tensors to view externally managed data

Currently Tensors are always backed by internal allocators.

We should have a way to create a Tensor from external data. Attempts to resize the underlying data from ATen should fail with an exception, but otherwise it should behave like a normal Tensor.

Unsqueeze doesn't work properly on zero-dim Tensors

Unsqueeze transforms a 0d tensor into a 2d tensor. It should instead make a 1d Tensor.

auto t = Scalar(1).toTensor();
std::cout << t.sizes() << std::endl; // []
std::cout << t.unsqueeze(0).sizes() << std::endl; // [1, 1] but should be [1]

methods marked override when they aren't

00:07:53.924 torch/csrc/autograd/generated/VariableType.h:592:16: error: ‘virtual void torch::autograd::VariableType::SpatialConvolutionMM_accGradParameters(const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, int, int, int, int, int, int, at::Scalar)’ marked ‘override’, but does not override

See https://build.pytorch.org/job/pytorch-PR-py3-linux/911//console
and pytorch/pytorch#2820
cc: @gchanan

[Scalars] scalar check tests needed for complicated functions

I did a quick scan of the generated code and here are functions that looked complicated enough that I wasn't confident that their scalar checks were correct. Let's think through these and add tests:

Tensor CPUIntType::index_select(const Tensor & self, int64_t dim, const Tensor & index) const {
Tensor & CPUIntType::m_index_copy_(Tensor & self, int64_t dim, const Tensor & index, const Tensor & source) const {
Tensor & CPUIntType::m_index_add_(Tensor & self, int64_t dim, const Tensor & index, const Tensor & source) const {
Tensor & CPUIntType::m_index_fill_(Tensor & self, int64_t dim, const Tensor & index, Scalar value) const {
Tensor & CPUIntType::m_scatter_(Tensor & self, int64_t dim, const Tensor & index, const Tensor & src) const {
Tensor & CPUIntType::m_scatter_add_(Tensor & self, int64_t dim, const Tensor & index, const Tensor & src) const {
Tensor CPUIntType::gather(const Tensor & self, int64_t dim, const Tensor & index) const {
Tensor & CPUIntType::m_addmv_(Tensor & self, const Tensor & mat, const Tensor & vec, Scalar beta, Scalar alpha) const {
Tensor CPUIntType::s_addr(const Tensor & self, const Tensor & vec1, const Tensor & vec2, Scalar beta, Scalar alpha) const {
CPUIntType::s_ger
Tensor CPUIntType::m_narrow(const Tensor & self, int64_t dimension, int64_t start, int64_t length) const {
Tensor CPUIntType::m_unfold(const Tensor & self, int64_t dimension, int64_t size, int64_t step) const {

btrifact() may dereference nullptr

Here is the generated code:

std::tuple<Tensor,Tensor> CPUFloatType::btrifact(const Tensor & info, bool pivot, const Tensor & self) {
    auto result_ = new CPUFloatTensor(context);
    auto result = Tensor(result_,false);
    auto pivots_ = new CPUIntTensor(context);
    auto pivots = Tensor(pivots_,false);
    auto info_ = checked_cast<CPUIntTensor>(info.pImpl,"info",1, true);
    auto self_ = checked_cast<CPUFloatTensor>(self.pImpl,"self",3, false);
    THFloatTensor_btrifact(result_->tensor, pivots_->tensor, info_ ? info_->tensor : NULL, pivot, self_->tensor);
    bool maybe_scalar = info.dim() == 0 && self.dim() == 0;
    result_->maybeScalar(maybe_scalar);
    pivots_->maybeScalar(maybe_scalar);
    return std::tuple<Tensor, Tensor>(result, pivots);
}

The maybe_scalar calculation calls info.dim() but info is allowed to be an undefined tensor. I don't think info should affect the maybe_scalar calculation at all.
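(Following that reasoning, a hypothetical fix for the generator would base the check on self alone:)

// Hypothetical fix: don't query info.dim(), since info may be undefined.
bool maybe_scalar = self.dim() == 0;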
