
fly's Introduction

We use the template from https://github.com/ashleve/lightning-hydra-template. Please read the instructions there to understand the repo structure.

Implementation & Experiments

An example of Scatterbrain implementation (combining local attention and Performer) is in the file src/models/modules/attention/sblocal.py.

T2T-ViT inference on ImageNet

To run the T2T-ViT inference on ImageNet experiment:

  1. Download the pretrained weights from the [T2T-ViT repo](https://github.com/yitu-opensource/T2T-ViT/releases):
mkdir -p checkpoints/t2tvit
cd checkpoints/t2tvit
wget https://github.com/yitu-opensource/T2T-ViT/releases/download/main/81.7_T2T_ViTt_14.pth.tar
  2. Convert the weights to the format compatible with our implementation of T2T-ViT:
# cd to scatterbrain path
python scripts/convert_checkpoint_t2t_vit.py checkpoints/t2tvit/81.7_T2T_ViTt_14.pth.tar
  3. Download the ImageNet dataset (just the validation set will suffice). Below, /path/to/imagenet refers to the directory that contains the train and val directories.
  4. Run the inference experiments:
python run.py experiment=imagenet-t2tvit-eval.yaml model/t2tattn_cfg=full datamodule.data_dir=/path/to/imagenet/ eval.ckpt=checkpoints/t2tvit/81.7_T2T_ViTt_14.pth.tar  # 81.7% acc
python run.py experiment=imagenet-t2tvit-eval.yaml model/t2tattn_cfg=local datamodule.data_dir=/path/to/imagenet/ eval.ckpt=checkpoints/t2tvit/81.7_T2T_ViTt_14.pth.tar  # 80.6% acc
python run.py experiment=imagenet-t2tvit-eval.yaml model/t2tattn_cfg=performer datamodule.data_dir=/path/to/imagenet/ eval.ckpt=checkpoints/t2tvit/81.7_T2T_ViTt_14.pth.tar  # 77.8-79.0% acc (there's randomness)
python run.py experiment=imagenet-t2tvit-eval.yaml model/t2tattn_cfg=sblocal datamodule.data_dir=/path/to/imagenet/ eval.ckpt=checkpoints/t2tvit/81.7_T2T_ViTt_14.pth.tar  # 81.1% acc

MLP-Mixer-B with Pixelfly on ImageNet

With 8 GPUs, each with at least 32 GB of memory:

python run.py experiment=imagenet/mixer/mixerb-cutmix-fbbflylr datamodule.data_dir=/path/to/imagenet model.channel_mlp_cfg.linear1_cfg.sparse_cfg.sparsity_config.butterfly_size=8 model.channel_mlp_cfg.linear1_cfg.sparse_cfg.sparsity_config.n_factors=2 model.channel_mlp_cfg.linear1_cfg.sparse_cfg.sparsity_config.block=32 

Requirements

Python 3.8+, PyTorch 1.9+, torchvision, torchtext, pytorch-fast-transformers, munch, einops, timm, hydra-core, hydra-colorlog, python-dotenv, rich, pytorch-lightning, lightning-bolts, triton.

We provide a Dockerfile that lists all the required packages.

Citation

If you use this codebase, or otherwise find our work valuable, please cite:

@inproceedings{chen2021scatterbrain,
  title={Scatterbrain: Unifying Sparse and Low-rank Attention},
  author={Beidi Chen and Tri Dao and Eric Winsor and Zhao Song and Atri Rudra and Christopher R\'{e}},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2021}
}
@inproceedings{chen2021pixelated,
  title={Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models},
  author={Chen, Beidi and Dao, Tri and Liang, Kaizhao and Yang, Jiaming and Song, Zhao and Rudra, Atri and R{\'e}, Christopher},
  booktitle={International Conference on Learning Representations},
  year={2021}
}
@inproceedings{dao2022monarch,
  title={Monarch: Expressive structured matrices for efficient and accurate training},
  author={Dao, Tri and Chen, Beidi and Sohoni, Nimit S and Desai, Arjun and Poli, Michael and Grogan, Jessica and Liu, Alexander and Rao, Aniruddh and Rudra, Atri and R{\'e}, Christopher},
  booktitle={International Conference on Machine Learning},
  year={2022},
  organization={PMLR}
}

fly's People

Contributors

keroro824, tridao


fly's Issues

Pixelfly is a binary hypercube!

Okay, so imagine the full pixelfly matrix with 2^n blocks

[image: the full pixelfly matrix with 2^n blocks]

Let's give each input block a number 0... 2^n-1
Then, the pixelfly matrix can be defined as such:

  • for every pair of input blocks i, j
  • if binary(i) xor binary(j) is all zeros except a single one, then matrix[i, j] is a non-zero block
    • informally, if binary codes for i and j are equal in all but one bit, "connect" them with weights

This is the same condition that defines a binary cube in n dimensions:
[image: binary hypercube in n dimensions]

Ergo, pixelfly neurons actually form a cube and "connect" over the edges of said cube.
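
For concreteness, here's a tiny pure-torch sketch of that condition (my own illustration of the rule stated above, not the repo's mask generator; the actual pixelfly layout may also keep diagonal blocks):

import torch

def hypercube_block_mask(n: int) -> torch.Tensor:
    # 2^n input/output blocks; block (i, j) is nonzero iff binary(i) and
    # binary(j) differ in exactly one bit, i.e. popcount(i ^ j) == 1 --
    # exactly the edge set of an n-dimensional hypercube.
    num_blocks = 2 ** n
    idx = torch.arange(num_blocks)
    xor = idx[:, None] ^ idx[None, :]
    popcount = torch.zeros_like(xor)
    for b in range(n):  # count the set bits of each pairwise xor
        popcount += (xor >> b) & 1
    return popcount == 1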

p.s. not my original thought, discovered in discussion with @ostroumova-la

p.p.s. if that is the case, are there other geometric shapes we could try?
So, for instance, a fully connected matrix can be viewed as an n-dimensional simplex (triangle -> tetrahedron -> simplex) because all blocks connect to all other blocks. Then comes the hypercube of pixelfly. Then what?

Triton blocksparse matmul

Hi! First of all, thanks for an awesome paper :)

Can you please help me with an implementation question?

In blocksparse_linear.py, there is an import statement:
https://github.com/HazyResearch/pixelfly/blob/7c3f233cd3b1b165ba66942e38eee3702aafea8d/src/models/modules/layers/blocksparse_linear.py#L17-L21

However, the file it is attempting to import appears to be missing in the models.modules.attention package:
https://github.com/HazyResearch/pixelfly/tree/7c3f233cd3b1b165ba66942e38eee3702aafea8d/src/models/modules/attention

As a result, I'm getting "triton not supported" even though I have triton installed. What is the best way to overcome this?

About the ViT-Pixelfly Implementation

Hi @tridao,
Thanks for the great work.
As you have provided the training script for MLP-Mixer-B with Pixelfly on ImageNet, I wonder if you could provide the counterpart training script for the ViT-Pixelfly implementation?

Looking forward to your reply.

Error when running wiki103 gpt2-m and gpt2-l baseline pretraining experiments

Hi,
When running wiki103 gpt2-m and gpt2-l baseline pretraining experiments,
python run.py experiment=wt103/gpt2m
and
python run.py experiment=wt103/gpt2l
fail with a non-convergence error.
The only workaround we found is to change the default precision from 16 to 32. Is there a way to keep precision 16 and still converge? I am also curious which precision you used to report the baseline results in the paper.
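
For reference, the workaround can be applied from the command line with a Hydra override along these lines (assuming the trainer config exposes a precision field, as in the lightning-hydra-template; if it doesn't, prefix the override with +):

python run.py experiment=wt103/gpt2m trainer.precision=32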

We are using machines with 8x NVIDIA A100 80GB GPUs.

Thanks for any help!

Monarch in pure pytorch. [Yes, it's also a hypercube]

Hi, @tridao . Yes, it's me again.
First, thanks for the monarch paper, it was quite a read ;)

I've been looking through the repo, but could not find the official Monarch implementation yet. So I've built an unofficial one :)
https://gist.github.com/justheuristic/499ec116f1f353dfd3314de87f310f80 (warning: pure torch, please don't use for speed evaluation)

And yes, it's also a hypercube. Lemme explain)

Imagine a small monarch layer with 4 input units and 4 output units.

Here's what the paper says should happen:
[image: figure from the Monarch paper]

Consider a matrix with N=4 input units, and hence, M=2.
The permutation layer for 4 units is defined as x_new = [x[0], x[2], x[1], x[3]].

Hence, the first batch matrix multiplication has two blocks: [x[0], x[2]] and [x[1], x[3]].
And the second matrix multiplication is unpermuted, the blocks are: [x[0], x[1]] and [x[2], x[3]].

Now let's naively reshape this tensor into a 2x2 square:

x[0] --- x[1]
  |        |
x[2] --- x[3]

As you can see, the two batch matrix products go over the square's columns (L matrix) and rows (R matrix).

This intuition holds for any valid N, M: for instance, N=1024, M=32 results in a 32-by-32 square lattice, and the column-to-row order stays the same.
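
For concreteness, here's a minimal pure-torch sketch of the square case just described (my own shapes and einsum conventions, not the repo's API):

import torch

def monarch_multiply(x, w1, w2):
    # Square Monarch multiply: x is (batch, n) with n = m * m;
    # w1 and w2 are (m, m, m), i.e. m blocks of size m x m each.
    # Implements the permute -> bmm -> unpermute -> bmm structure above.
    b, n = x.shape
    m = w1.shape[0]
    assert n == m * m
    x = x.view(b, m, m)                      # grid: x[b, i, j] = x_flat[i * m + j]
    # first bmm goes over the grid's columns: the strided groups [x[0], x[2], ...]
    x = torch.einsum('jki,bij->bjk', w1, x)  # one m-by-m product per column j
    # second bmm goes over the grid's rows: the contiguous groups [x[0], x[1], ...]
    x = torch.einsum('klj,bjk->bkl', w2, x)  # one m-by-m product per row k
    return x.reshape(b, n)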

This leads to a few obvious considerations:

  • we can change this square into an uneven rectangle, e.g. 32-by-64, which yields several ways to define non-square Monarch
  • we can add more dimensions! i.e. go from square to cube to hypercube, etc.

On adding more dimensions: consider a GPT-3 layer with 12288 units.
We can view this as a 3d lattice of shape [16, 24, 32], since 16 * 24 * 32 = 12288

  • Round 1 goes over the first dimension. It needs 16x16 weight matrices, and the number of such matrices is 768 (32*24).
  • Round 2 similarly uses 24x24 matrices, 512 of them to be specific (32*16)
  • Round 3 multiplies over the last dimension, using a total of 384 matrices (16*24), each of size 32x32 (see the quick check below).
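
A quick sanity check of those counts:

dims = [16, 24, 32]
total = 1
for d in dims:
    total *= d  # 16 * 24 * 32 = 12288 units
for i, d in enumerate(dims):
    print(f"Round {i + 1}: {total // d} matrices of size {d}x{d}")
# Round 1: 768 matrices of size 16x16
# Round 2: 512 matrices of size 24x24
# Round 3: 384 matrices of size 32x32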

Using the code above, you can define this specific grid as follows:
[image: code snippet defining the [16, 24, 32] grid]

This, in turn, raises several questions:

  1. On memory requirements of Monarch: when done naively, Monarch requires storing one additional tensor of activations for backprop for every additional dimension -- or recomputing them via gradient checkpointing. Is there a more efficient strategy for backprop through Monarch?
  2. On relation to tensor decompositions: when viewed from this angle, Monarch sounds vaguely related (though not equivalent) to some popular tensor decompositions, such as TensorTrain or TensorRing. Is Monarch universally better or are there special cases where I should use either one?

p.s. the perspective from this question is not my own, we stumbled into it in discussions with @ostroumova-la , @TimDettmers , @KhrulkovV

Error when running baseline inference experiments

Hi,

When running the first inference experiment in the README

python run.py experiment=imagenet-t2tvit-eval.yaml model/t2tattn_cfg=full datamodule.data_dir=/home/dan/ILSVRC/ eval.ckpt=checkpoints/t2tvit/81.7_T2T_ViTt_14.pth.tar

I get the following error:

  File "run.py", line 53, in main
    return evaluate(config)
  File "/home/dan/frameworks/pixelfly/src/eval.py", line 57, in evaluate
    trained_model: LightningModule = hydra.utils.instantiate(config.task, cfg=config,
  File "/home/dan/frameworks/pixelfly/env/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 180, in instantiate
    return instantiate_node(config, *args, recursive=_recursive_, convert=_convert_)
  File "/home/dan/frameworks/pixelfly/env/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 249, in instantiate_node
    return _call_target(target, *args, **kwargs)
  File "/home/dan/frameworks/pixelfly/env/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 64, in _call_target
    raise type(e)(
  File "/home/dan/frameworks/pixelfly/env/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 62, in _call_target
    return target(*args, **kwargs)
  File "/home/dan/frameworks/pixelfly/src/tasks/seq.py", line 41, in __init__
    self.model = hydra.utils.instantiate(self.model_cfg, _recursive_=False)
  File "/home/dan/frameworks/pixelfly/env/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 180, in instantiate
    return instantiate_node(config, *args, recursive=_recursive_, convert=_convert_)
  File "/home/dan/frameworks/pixelfly/env/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 240, in instantiate_node
    target = _resolve_target(node.get(_Keys.TARGET))
  File "/home/dan/frameworks/pixelfly/env/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 104, in _resolve_target
    return _locate(target)
  File "/home/dan/frameworks/pixelfly/env/lib/python3.8/site-packages/hydra/_internal/utils.py", line 577, in _locate
    raise ImportError(
ImportError: Error instantiating 'src.tasks.seq.SequenceModel' : Encountered error: `cannot import name 'BlockSparseLRLinear' from 'src.models.modules.layers.blocksparse_linear' (/home/dan/frameworks/pixelfly/src/models/modules/layers/blocksparse_linear.py)` when loading module 'src.models.vit.t2t_vit.t2t_vit_t_14'

I am attaching my `pip freeze` output in case it helps:

absl-py==1.0.0
aiohttp==3.8.1
aiosignal==1.2.0
alembic==1.7.5
antlr4-python3-runtime==4.8
anyio==3.4.0
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
async-timeout==4.0.1
attrs==21.2.0
autopage==0.4.0
Babel==2.9.1
backcall==0.2.0
backports.entry-points-selectable==1.1.1
black==21.12b0
bleach==4.1.0
cachetools==4.2.4
certifi==2021.10.8
cffi==1.15.0
cfgv==3.3.1
charset-normalizer==2.0.9
click==8.0.3
cliff==3.10.0
cmaes==0.8.2
cmd2==2.3.3
colorama==0.4.4
colorlog==6.6.0
commonmark==0.9.1
configparser==5.2.0
cycler==0.11.0
debugpy==1.5.1
decorator==5.1.0
defusedxml==0.7.1
distlib==0.3.4
docker-pycreds==0.4.0
einops==0.3.2
entrypoints==0.3
filelock==3.4.0
flake8==4.0.1
fonttools==4.28.3
frozenlist==1.2.0
fsspec==2021.11.1
future==0.18.2
gitdb==4.0.9
GitPython==3.1.24
google-auth==2.3.3
google-auth-oauthlib==0.4.6
greenlet==1.1.2
grpcio==1.42.0
hydra-colorlog==1.1.0
hydra-core==1.1.0
hydra-optuna-sweeper==1.1.0
identify==2.4.0
idna==3.3
importlib-metadata==4.8.2
importlib-resources==5.4.0
iniconfig==1.1.1
ipykernel==6.6.0
ipython==7.30.1
ipython-genutils==0.2.0
isort==5.10.1
jedi==0.18.1
Jinja2==3.0.3
joblib==1.1.0
json5==0.9.6
jsonschema==4.2.1
jupyter-client==7.1.0
jupyter-core==4.9.1
jupyter-server==1.13.1
jupyterlab==3.2.5
jupyterlab-pygments==0.1.2
jupyterlab-server==2.8.2
kiwisolver==1.3.2
Mako==1.1.6
Markdown==3.3.6
MarkupSafe==2.0.1
matplotlib==3.5.1
matplotlib-inline==0.1.3
mccabe==0.6.1
mistune==0.8.4
multidict==5.2.0
mypy-extensions==0.4.3
nbclassic==0.3.4
nbclient==0.5.9
nbconvert==6.3.0
nbformat==5.1.3
nest-asyncio==1.5.4
nodeenv==1.6.0
notebook==6.4.6
numpy==1.21.4
oauthlib==3.1.1
omegaconf==2.1.1
optuna==2.4.0
packaging==21.3
pandas==1.3.5
pandocfilters==1.5.0
parso==0.8.3
pathspec==0.9.0
pathtools==0.1.2
pbr==5.8.0
pexpect==4.8.0
pickleshare==0.7.5
Pillow==8.4.0
platformdirs==2.4.0
pluggy==1.0.0
pre-commit==2.16.0
prettytable==2.4.0
prometheus-client==0.12.0
promise==2.3
prompt-toolkit==3.0.24
protobuf==3.19.1
psutil==5.8.0
ptyprocess==0.7.0
pudb==2021.2.2
py==1.11.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycodestyle==2.8.0
pycparser==2.21
pyDeprecate==0.3.1
pyflakes==2.4.0
Pygments==2.10.0
pyparsing==3.0.6
pyperclip==1.8.2
pyrsistent==0.18.0
pytest==6.2.5
python-dateutil==2.8.2
python-dotenv==0.19.2
pytorch-lightning==1.5.5
pytorch-lightning-bolts==0.3.2
pytz==2021.3
PyYAML==6.0
pyzmq==22.3.0
requests==2.26.0
requests-oauthlib==1.3.0
rich==10.16.0
rsa==4.8
scikit-learn==1.0.1
scipy==1.7.3
seaborn==0.11.2
Send2Trash==1.8.0
sentry-sdk==1.5.0
sh==1.14.2
shortuuid==1.0.8
six==1.16.0
smmap==5.0.0
sniffio==1.2.0
SQLAlchemy==1.4.28
stevedore==3.5.0
subprocess32==3.5.4
tensorboard==2.7.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
termcolor==1.1.0
terminado==0.12.1
testpath==0.5.0
threadpoolctl==3.0.0
timm==0.4.12
toml==0.10.2
tomli==1.2.2
torch==1.10.0
torchmetrics==0.6.1
torchvision==0.11.1
tornado==6.1
tqdm==4.62.3
traitlets==5.1.1
typing-extensions==4.0.1
urllib3==1.26.7
urwid==2.1.2
urwid-readline==0.13
virtualenv==20.10.0
wandb==0.12.7
wcwidth==0.2.5
webencodings==0.5.1
websocket-client==1.2.3
Werkzeug==2.0.2
yarl==1.7.2
yaspin==2.1.0
zipp==3.6.0

Thanks,
Dan

Help understanding the different layers

Hi, I am very interested in your work and want to try its applications. I have figured out the usage of monarch_linear.py, but I am still confused about the other layers with similar names.

Could you please briefly introduce them to help me better understand your code? Thanks in advance.

Pure torch implementation of pixelfly (for TPUs, CPUs and custom blocks)

I'm not sure if it's appropriate to create issues like this, feel free to close it without warning. Otherwise, I'd request this to stay open for some time in case somebody is interested.

Why?: The two existing backends for pixelfly use either huggingface blocksparse or triton. However, these are not always available, such as when training on TPUs or when using custom parameters (e.g. triton offers only a couple of block sizes).

What?: Below you can find a (limited) re-implementation of pixelfly in pure PyTorch. Instead of block-sparse kernels, this implementation exploits the fact that the butterfly layout has an equal number of nonzero blocks in each row, using a two-stage procedure:

  1. compute all blocks using regular (dense) matmul with [in_features, (block_size * blocks_per_input)] weights
  2. aggregate blocks according to butterfly layout using F.embedding_bag(..., mode='sum')

Here's the implementation: https://gist.github.com/justheuristic/9e4fb81381451a4bc8cbfee0a5100eba
It's heavily inspired by the original code and reuses parts of blocksparse_linear.py.

It's a single file, requires only pytorch and einops, and is compatible with TPUs. The speed-ups are comparable (see example_and_tests), plus it supports custom block sizes, tf32, autocast, etc. You can also easily rewrite this in TensorFlow using tfa.EmbeddingBag.
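
For intuition, here's a condensed sketch of that two-stage procedure (names and shapes are illustrative only -- see the gist for the actual API; index_add_ stands in here for the F.embedding_bag(mode='sum') trick):

import torch

def pixelfly_linear(x, weight, out_block_index, block_size):
    # x:               (batch, nblocks_in * block_size)
    # weight:          (nblocks_in, block_size, fan_out * block_size), i.e. each
    #                  input block's contribution to its fan_out output blocks
    # out_block_index: (nblocks_in * fan_out,) long tensor assigning each computed
    #                  candidate block to an output block, per the butterfly layout
    batch = x.shape[0]
    nblocks_in = weight.shape[0]
    fan_out = weight.shape[2] // block_size
    nblocks_out = int(out_block_index.max().item()) + 1
    # stage 1: a single dense matmul computes every nonzero block's contribution
    xb = x.view(batch, nblocks_in, block_size)
    cand = torch.einsum('bip,ipq->biq', xb, weight)
    cand = cand.reshape(batch, nblocks_in * fan_out, block_size)
    # stage 2: sum candidate blocks into their assigned output blocks
    out = x.new_zeros(batch, nblocks_out, block_size)
    out.index_add_(1, out_block_index, cand)
    return out.reshape(batch, nblocks_out * block_size)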

Feel free to use for whatever :)

Monarch Projection Step for rectangular blocks and where the number of blocks != sqrt(input dimension)

Hi, I had a question regarding the projection step from dense to Monarch butterfly matrices in the general case, where the blocks are rectangular or nblocks != sqrt(m). I'm having trouble finding the code for this case and extending the existing implementations to cover it.

I found multiple projection files/functions: blockdiag_butterfly_projection.py and blockdiag_butterfly_einsum.py.

As an example, if I use code roughly as follows:

import torch
from src.models.layers.blockdiag_butterfly_multiply import blockdiag_butterfly_multiply

x = torch.eye(8)
nblocks = 2
bfly_dim = 4

w1_bfly = torch.randn(nblocks, bfly_dim, bfly_dim)
w2_bfly = torch.randn(nblocks, bfly_dim, bfly_dim)

bfly_matrix = blockdiag_butterfly_multiply(x, w1_bfly, w2_bfly)

print(bfly_matrix.shape)  # torch.Size([8, 8])

The resulting shape of the output will be 8x8, which is essentially the full matrix used for transformation.

However, if I use the projection function (which is meant for square matrices) from blockdiag_butterfly_projection.py to try and recover the butterfly matrices from this matrix, I run into the issue that it expects the matrix to decompose as follows: M_permuted_batched = rearrange(M, '(p k) (r s) -> k r p s', k=sizes[1], r=sizes[0]), while in our case r = 4 and s = 4, making it incompatible with the matrix dimensions.

Meanwhile, the einsum functions in blockdiag_butterfly_einsum.py gave different results from the original blockdiag_butterfly_multiply (comparing the forward multiplication step not the projection step). (see this colab)

In the paper, I did see the original derivation for Algorithm 1 [image: Algorithm 1 from the paper], but I was unclear on how to actually perform the decomposition step when we can't decompose the tensor into an m x m x m x m shape.

Monarch & PixelFly based MLP layer efficiency testing

Here I post some efficiency-testing numbers for a Monarch-based MLP vs. a vanilla nn.Linear-based MLP. I found that Monarch is best suited for MLPs in Transformer architectures, which generally have a large hidden size and batch size. In recommendation-focused MLPs, the MLP is usually small (e.g., 10000x1024x512, where the first number is the feature input dim) and, importantly, a small batch size (say 10) is often used for serving concurrent online requests. The following numbers are provided as a reference for anyone with similar tasks.

| | Train (Fwd+Bwd), GPU P100 | Test (Fwd only), GPU P100 | Test (Fwd only), CPU |
|---|---|---|---|
| Batch_size=1000 | | | |
| MLP (10000x1024x512) | 2.95 ms | 0.16 ms | 26.57 ms |
| Monarch (nblk=4) | 1.85 ms | 0.57 ms | 10.29 ms |
| Monarch (nblk=16) | 1.37 ms | 0.55 ms | 5.67 ms |
| Batch_size=10 | | | |
| MLP (10000x1024x512) | 0.48 ms | 0.13 ms | 0.59 ms |
| Monarch (nblk=4) | 1.34 ms | 0.54 ms | 1.16 ms |
| Monarch (nblk=16) | 1.31 ms | 0.52 ms | 1.37 ms |
| Batch_size=10000 | | | |
| MLP (1024x1024x512) | 4.86 ms | 0.13 ms | 46.99 ms |
| Monarch (nblk=4) | 6.87 ms | 0.53 ms | 47.55 ms |
| Monarch (nblk=16) | 6.04 ms | 0.51 ms | 39.66 ms |
| Batch_size=1000 | | | |
| MLP (1024x1024x512) | 0.74 ms | 0.16 ms | 5.35 ms |
| Monarch (nblk=4) | 1.42 ms | 0.53 ms | 4.17 ms |
| Monarch (nblk=16) | 1.38 ms | 0.52 ms | 3.84 ms |
| Batch_size=10 | | | |
| MLP (1024x1024x512) | 0.46 ms | 0.13 ms | 0.27 ms |
| Monarch (nblk=4) | 1.29 ms | 0.53 ms | 1.15 ms |
| Monarch (nblk=16) | 1.27 ms | 0.51 ms | 0.84 ms |
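
In case anyone wants to reproduce measurements like these, here is a minimal timing harness of the sort one could use (my own sketch, not the exact script behind the table):

import time
import torch

def bench_ms(fn, x, warmup=10, iters=100):
    # average wall-clock milliseconds per call, synchronizing around GPU work
    for _ in range(warmup):
        fn(x)
    if x.is_cuda:
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        fn(x)
    if x.is_cuda:
        torch.cuda.synchronize()
    return (time.time() - start) / iters * 1e3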

I will post the numbers for pixelfly later.

No requirements file

No "requirements.txt" that is needed by Dockerfile:

COPY requirements.txt .
RUN pip install -r requirements.txt \
    && rm -f requirements.txt

Minor questions about the paper and code

Thanks a lot for the interesting work!
I am really enjoying reading the paper and the code.
I actually have a few minor questions. Any hints would be greatly appreciated:

  1. I notice that, in Section 5.1, Pixelfly is only applied to the projection step of the attention and the MLP, without sparsifying the attention (score) matrix, while in T2T-ViT, Pixelfly is only applied to the attention matrix, without sparsifying the MLP and projections. Are there any reasons for this? Also, are there any experimental results where Pixelfly is applied to all layers (MLP and attention matrix)?
  2. I saw there are many options for /model/t2tattn_cfg with T2T-ViT, such as sblocal and performer. It seems like sblocal uses sparse + low-rank. May I know which one I should choose if I want to use flat butterfly + low-rank?
  3. In the experiment folder under config, it seems like only the scripts for MLP-Mixer and T2T-ViT are provided. Do you have plans to release the scripts for the other experiments, such as ViT, GPT, etc.?
