reedscot / icml2016

Generative Adversarial Text-to-Image Synthesis

Home Page: http://arxiv.org/abs/1605.05396

License: MIT License

Lua 95.65% Shell 4.35%

icml2016's Introduction

### Generative Adversarial Text-to-Image Synthesis

Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee

This is the code for our ICML 2016 paper on text-to-image synthesis using conditional GANs. You can use it to train and sample from text-to-image models. The code is adapted from the excellent dcgan.torch.

#### Setup Instructions

You will need to install Torch, CuDNN, and the display package.

#### How to train a text-to-image model:

  1. Download the birds and flowers and COCO caption data in Torch format.
  2. Download the birds and flowers and COCO image data.
  3. Download the text encoders for birds and flowers and COCO descriptions.
  4. Modify the CONFIG file to point to your data and text encoder paths.
  5. Run one of the training scripts, e.g. ./scripts/train_cub.sh

#### How to generate samples:

  • For flowers: ./scripts/demo_flowers.sh. Add text descriptions to scripts/flowers_queries.txt.
  • For birds: ./scripts/demo_cub.sh.
  • For COCO (more general images): ./scripts/demo_coco.sh.
  • An HTML file will be generated with the results.

#### Pretrained models:

#### How to train a text encoder from scratch:

  • You may want to do this if you have your own new dataset of text descriptions.
  • For flowers and birds: follow the instructions here.
  • For MS-COCO: ./scripts/train_coco_txt.sh.

#### Citation

If you find this useful, please cite our work as follows:

@inproceedings{reed2016generative,
  title={Generative Adversarial Text-to-Image Synthesis},
  author={Scott Reed and Zeynep Akata and Xinchen Yan and Lajanugen Logeswaran and Bernt Schiele and Honglak Lee},
  booktitle={Proceedings of The 33rd International Conference on Machine Learning},
  year={2016}
}

icml2016's People

Contributors: reedscot

icml2016's Issues

What is the opt.large parameter used for?

Hi Scott,

I am curious what the opt.large parameter in main_cls.lua is used for.

Also, in the attached figure (screenshot omitted):

What exactly are lines 109-130 for? I see that they downsample from 1024 to 256 channels and then upsample back, but I don't understand the motive.
Similarly, the next instance goes 512 -> 128 -> 128 -> 512. Is there a specific purpose for doing this?

Thanks.
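For readers with the same question: one common reason for this reduce-then-expand pattern is that it keeps the expensive 3x3 convolutions at a lower channel count, which makes the block much cheaper than convolving at full width while still adding depth. A rough parameter-count comparison in plain nn (a sketch for illustration only, not the repository's exact layers, which also include batch normalization and LeakyReLU):

```lua
require 'nn'

-- Count the learnable parameters of a module.
local function nparams(m)
  local p = m:getParameters()
  return p:nElement()
end

-- Bottleneck: 1024 -> 256 (1x1) -> 256 (3x3) -> 1024 (3x3), as in the reduce/expand pattern above.
local bottleneck = nn.Sequential()
  :add(nn.SpatialConvolution(1024, 256, 1, 1))
  :add(nn.SpatialConvolution(256, 256, 3, 3, 1, 1, 1, 1))
  :add(nn.SpatialConvolution(256, 1024, 3, 3, 1, 1, 1, 1))

-- A single 3x3 convolution at full width, for comparison.
local direct = nn.Sequential()
  :add(nn.SpatialConvolution(1024, 1024, 3, 3, 1, 1, 1, 1))

print('bottleneck params: ' .. nparams(bottleneck))  -- roughly 3.2M
print('direct 3x3 params: ' .. nparams(direct))      -- roughly 9.4M
```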

When should we update the RNN?

Hi, I am going to reimplement the end-to-end training in TensorFlow.
I wonder when I should update the RNN variables.

Thank you in advance.

Using a new dataset

Hi Scott,
Thank you for sharing your code. I have a different captioning dataset similar to MS-COCO. It would be great if you could give some hints about how to convert the data into the Torch format required by your code.

Is there code available for converting raw MS-COCO data to the format of the files in the train2014_ex_t7/ folder?

Thanks again

Train another dataset

Hi Scott, nice job and thanks for sharing your code!
I have a sign language dataset (digits 0-9) and I want to train on it so that I can use the model (.mlmodel) in my iOS app.
Could you please tell me how I can change your code so that I can train on my dataset?
Thanks.

purpose of `ConcatTable` in Generator and Discriminator

I read your paper; it's wonderful work.
While trying to implement your network in Chainer, I'm curious about the purpose of ConcatTable in the Generator and Discriminator.
What is it for? Redundancy?
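For context: nn.ConcatTable feeds the same input to several branches and returns their outputs as a table; paired with nn.CAddTable it implements a residual (skip) connection, where a transformed version of the input is added back to the input itself. A minimal self-contained sketch, assuming only the nn package:

```lua
require 'nn'

-- Residual block sketch: an identity branch and a conv branch, summed elementwise.
local branches = nn.ConcatTable()
branches:add(nn.Identity())                                    -- skip path
branches:add(nn.SpatialConvolution(64, 64, 3, 3, 1, 1, 1, 1))  -- transform path

local block = nn.Sequential()
block:add(branches)        -- produces the table {x, conv(x)}
block:add(nn.CAddTable())  -- elementwise sum: x + conv(x)

local x = torch.randn(1, 64, 8, 8)
print(block:forward(x):size())  -- 1x64x8x8, same shape as the input
```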

About the coco text encoder

Hi! Thanks for sharing your code; I am really interested in your work. But I can't find the checkpoint for COCO_NET_TXT. Would you mind sharing the file referenced by "COCO_NET_TXT=/home/reedscot/checkpoints/coco_gru18_bs64_cls0.5_ngf128_ndf128_a10_c512_80_net_T.t7"?

Any details when converting txt to t7 format?

Hi,
We want to run your model with other datasets. To do this, according to the instructions provided, we need to convert our text into t7 format. We wrote a conversion script and tested it on a txt sample from the COCO dataset, but it produced results different from the sample's corresponding t7 data that you provided.
Here is our conversion script:

```lua
require 'image'
require 'nn'
require 'nngraph'
require 'cunn'
require 'cutorch'
require 'cudnn'
require 'lfs'
torch.setdefaulttensortype('torch.FloatTensor')

local alphabet = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/\\|_@#$%^&*~`+-=<>()[]{} "
local dict = {}
for i = 1, #alphabet do
  dict[alphabet:sub(i,i)] = i
end
ivocab = {}
for k, v in pairs(dict) do
  ivocab[v] = k
end

opt = {
  filenames = '',
  dataset = 'cub',
  batchSize = 16,        -- number of samples to produce
  noisetype = 'normal',  -- type of noise distribution (uniform / normal)
  imsize = 1,            -- used to produce larger images. 1 = 64px, 2 = 80px, 3 = 96px, ...
  noisemode = 'random',  -- random / line / linefull1d / linefull
  gpu = 1,               -- gpu mode. 0 = CPU, 1 = GPU
  display = 0,           -- display image: 0 = false, 1 = true
  nz = 100,
  doc_length = 201,
  queries = 'test-caption.txt',
  checkpoint_dir = '',
  net_gen = '',
  net_txt = '',
}

for k, v in pairs(opt) do opt[k] = tonumber(os.getenv(k)) or os.getenv(k) or opt[k] end
print(opt)
if opt.display == 0 then opt.display = false end

noise = torch.Tensor(opt.batchSize, opt.nz, opt.imsize, opt.imsize)
net_gen = torch.load(opt.checkpoint_dir .. '/' .. opt.net_gen)
net_txt = torch.load(opt.net_txt)
if net_txt.protos ~= nil then net_txt = net_txt.protos.enc_doc end

net_gen:evaluate()
net_txt:evaluate()

-- Extract all text features.
local fea_txt = torch.Tensor(5, 1024)
idx = 1
-- Decode text for sanity check.
local raw_txt = {}
local raw_img = {}
for query_str in io.lines(opt.queries) do
  local txt = torch.zeros(1, opt.doc_length, #alphabet)
  for t = 1, opt.doc_length do
    local ch = query_str:sub(t,t)
    local ix = dict[ch]
    if ix ~= 0 and ix ~= nil then
      txt[{1,t,ix}] = 1
    end
  end
  raw_txt[#raw_txt+1] = query_str
  txt = txt:cuda()
  print('idx = ', idx, 'txt size', txt:size())
  print('query_str = ', query_str)
  tmp = net_txt:forward(txt):float():clone()
  fea_txt[idx] = tmp
  idx = idx + 1
  print('tmp size', tmp:size())
end

torch.save('fea-txt.t7', fea_txt)
```

Why can't we get exactly the same t7 result as the one you provided? Did we miss some detail? Any directions or hints would be appreciated.

Inconsistencies in /cub_icml. Updating CUB class name.

Recently, I downloaded the latest CUB dataset and found some inconsistencies between the caption classes and the CUB image classes. These classes have been renamed as follows:
009.Brewers_Blackbird -> 009.Brewer_Blackbird
022.Chuck_wills_Widow -> 022.Chuck_will_Widow
023.Brandts_Cormorant -> 023.Brandt_Cormorant
061.Heermanns_Gull -> 061.Heermann_Gull
067.Annas_Hummingbird -> 067.Anna_Hummingbird
093.Clarks_Nutcracker -> 093.Clark_Nutcracker
098.Scotts_Oriole -> 098.Scott_Oriole
113.Bairds_Sparrow -> 113.Baird_Sparrow
115.Brewers_Sparrow -> 115.Brewer_Sparrow
122.Harriss_Sparrow -> 122.Harris_Sparrow
123.Henslows_Sparrow -> 123.Henslow_Sparrow
124.Le_Contes_Sparrow -> 124.Le_Conte_Sparrow
125.Lincolns_Sparrow -> 125.Lincoln_Sparrow
126.Nelson_Sparrow -> 126.Nelson_Sharp_tailed_Sparrow
178.Swainsons_Warbler -> 178.Swainson_Warbler
180.Wilsons_Warbler -> 180.Wilson_Warbler
193.Bewicks_Wren -> 193.Bewick_Wren

Please rename these folders in cub_icml.
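If it helps, the folders can also be renamed programmatically; a hedged sketch using LuaFileSystem (the base path and the abbreviated rename table are placeholders to adjust):

```lua
require 'lfs'

-- Old caption-folder names mapped to the names used by the current CUB release.
local renames = {
  ['009.Brewers_Blackbird'] = '009.Brewer_Blackbird',
  ['022.Chuck_wills_Widow'] = '022.Chuck_will_Widow',
  -- ... add the remaining pairs from the list above ...
}

local base = 'cub_icml'  -- placeholder: path to the caption data
for old, new in pairs(renames) do
  local from, to = base .. '/' .. old, base .. '/' .. new
  if lfs.attributes(from, 'mode') == 'directory' then
    assert(os.rename(from, to))
    print('renamed ' .. from .. ' -> ' .. to)
  end
end
```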

help

Hi, I'm sorry to bother you; I'm sure you are very busy.
I'm an art student and I would like to use your text-to-image system in a photography project. I'm totally lost with the code: is there an easier app I could download (sorry for the silly question), or would it be possible to work together on this project?

Regards
Laurent

How to run the code in cpu mode properly?

First of all, thanks for your great work! I am trying to run the code and have already changed the value of gpu in main_cls.lua and main_cls_int.lua to zero. But when I run train_coco.sh, I still get this error: /Users/yobichi/torch/install/bin/luajit: main_cls.lua:56: attempt to call field 'setDevice' (a nil value). The full message is below:

{
  img_dir : "/Users/yobichi/icml2016/resource/train2014"
  name : "coco_nc3_nt128_nz100_bs64_cls0.5_ngf196_ndf196"
  txtSize : 1024
  niter : 200
  batchSize : 64
  ndf : 196
  nz : 100
  numCaption : 3
  gpu : 3
  filenames : ""
  decay_every : 40
  cls_weight : 0.5
  noise : "normal"
  ntrain : inf
  beta1 : 0.5
  nThreads : 12
  lr_decay : 0.5
  init_g : ""
  fineSize : 64
  loadSize : 76
  print_every : 4
  ngf : 196
  use_cudnn : 1
  init_d : ""
  checkpoint_dir : "/Users/yobichi/icml2016/checkpoints"
  lr : 0.0002
  dataset : "coco"
  data_root : "/Users/yobichi/icml2016/resource/train2014_ex_t7"
  save_every : 5
  large : 0
  doc_length : 201
  nt : 128
  display_id : 103
  display : 1
}
/Users/yobichi/torch/install/bin/luajit: main_cls.lua:56: attempt to call field 'setDevice' (a nil value)
stack traceback:
    main_cls.lua:56: in main chunk
    [C]: in function 'dofile'
    ...ichi/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
    [C]: at 0x0105c15ad0

Could you tell me what's wrong and how I can fix it? Thanks in advance!
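For background on the error itself: on a CPU-only Torch install cutorch is never loaded, so cutorch.setDevice is nil regardless of the gpu option, and the GPU-specific calls have to be skipped entirely. A hedged sketch of the usual guard pattern (netG/netD stand in for whatever networks the script builds; this is not a drop-in patch for main_cls.lua):

```lua
-- Guard GPU-only code so the script can also run on a CPU-only Torch install.
-- Convention as in the options above: gpu = 0 means CPU, gpu > 0 selects a GPU.
if opt.gpu > 0 then
  require 'cutorch'
  require 'cunn'
  cutorch.setDevice(opt.gpu)
  netG:cuda()
  netD:cuda()
else
  netG:float()
  netD:float()
end
```

Note also that the options dump above still shows gpu : 3, which suggests an environment variable or script setting is overriding the value edited in the .lua file.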

Reg: Backprop in generator network

Hi,

I have a question regarding the differential that is propagated back through the generator network. The first element of the df_dg table, df_dg[1], is passed to the backward function of the generator. Is this the differential of the loss function with respect to the input image to the discriminator (its first input)? If so, df_dg[2] would be the differential of the loss function with respect to the discriminator's second input, the text attribute vector, right?

Thanks!
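As general Torch background that may help here: when a module takes a table of inputs, backward returns a table of gradients with the same structure, one entry per input, so indexing that table selects the gradient with respect to the corresponding input. A small self-contained sketch (not the repository's code):

```lua
require 'nn'

-- A toy two-input "discriminator": joins an image feature and a text feature and scores them.
local D = nn.Sequential()
  :add(nn.JoinTable(2))        -- concatenate the two inputs along dimension 2
  :add(nn.Linear(10 + 4, 1))

local img_feat = torch.randn(3, 10)
local txt_feat = torch.randn(3, 4)
local out = D:forward({img_feat, txt_feat})

local grad_out = torch.ones(out:size())
local grad_in = D:backward({img_feat, txt_feat}, grad_out)

print(#grad_in)           -- 2: one gradient per input
print(grad_in[1]:size())  -- 3x10, gradient w.r.t. the first input (image branch)
print(grad_in[2]:size())  -- 3x4,  gradient w.r.t. the second input (text branch)
```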

text encoder for birds and flowers datasets

In main_cls_int.lua, the text encoder is as follows:
```lua
netR = nn.Sequential()
if opt.replicate == 1 then
  netR:add(nn.Reshape(opt.batchSize / opt.numCaption, opt.numCaption, opt.txtSize))
  netR:add(nn.Transpose({1,2}))
  netR:add(nn.Mean(1))
  netR:add(nn.Replicate(opt.numCaption))
  netR:add(nn.Transpose({1,2}))
  netR:add(nn.Reshape(opt.batchSize, opt.txtSize))
else
  netR:add(nn.Reshape(opt.batchSize, opt.numCaption, opt.txtSize))
  netR:add(nn.Transpose({1,2}))
  netR:add(nn.Mean(1))
end
```

However, in the paper you used a hybrid CNN-RNN network; is it OK to use this network instead of the one described in the paper?
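For what it's worth, as far as one can tell from the snippet, the netR block above is not a text encoder by itself: it only averages already-extracted txtSize-dimensional caption embeddings over the numCaption captions belonging to each batch entry (and, when replicate == 1, broadcasts the average back to the original batch layout), so it sits on top of whatever text encoder produced those embeddings rather than replacing the CNN-RNN. A quick sketch of the effect for the replicate == 0 branch, with made-up sizes:

```lua
require 'nn'

local batchSize, numCaption, txtSize = 8, 4, 6
-- Pretend these are precomputed caption embeddings: numCaption captions per batch entry,
-- stacked into a (batchSize * numCaption) x txtSize tensor.
local txt = torch.randn(batchSize * numCaption, txtSize)

local netR = nn.Sequential()
netR:add(nn.Reshape(batchSize, numCaption, txtSize))
netR:add(nn.Transpose({1, 2}))
netR:add(nn.Mean(1))

local avg = netR:forward(txt)
print(avg:size())  -- batchSize x txtSize: one averaged embedding per batch entry
```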

How can I download this checkpoint?

This file is missing; I didn't find it: coco_gru18_bs64_cls0.5_ngf128_ndf128_a10_c512_80_net_T.t7

When I run this script: ./scripts/demo_coco.sh, I get this error:

Found Environment variable CUDNN_PATH = /usr/local/cuda/lib64/libcudnn.so
{
gpu : 1
filenames : ""
queries : "scripts/coco_queries.txt"
noisemode : "random"
dataset : "coco"
noisetype : "normal"
batchSize : 16
net_txt : "/home/reedscot/checkpoints/coco_gru18_bs64_cls0.5_ngf128_ndf128_a10_c512_80_net_T.t7"
imsize : 1
net_gen : "coco_fast_t70_nc3_nt128_nz100_bs64_cls0.5_ngf196_ndf196_100_net_G.t7"
nz : 100
checkpoint_dir : "/home/reedscot/checkpoints"
doc_length : 201
display : 0
}
/home/dieaa/torch/install/bin/luajit: cannot open </home/reedscot/checkpoints/coco_gru18_bs64_cls0.5_ngf128_ndf128_a10_c512_80_net_T.t7> in mode r at /home/dieaa/torch/pkg/torch/lib/TH/THDiskFile.c:673
stack traceback:
[C]: at 0x7f29b3ea2440
[C]: in function 'DiskFile'
/home/dieaa/torch/install/share/lua/5.1/torch/File.lua:405: in function 'load'
txt2img_demo.lua:44: in main chunk
[C]: in function 'dofile'
...ieaa/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

What is skipD in main_cls.lua and main_cls_int.lua?

Hi Reed,
In your code there is "skipD"; what does this mean? Is it a typo (maybe conv is right)? Thanks.
```lua
-- state size: (ndf*8) x 4 x 4
local conc = nn.ConcatTable()
local conv = nn.Sequential()
conv:add(SpatialConvolution(ndf * 8, ndf * 2, 1, 1, 1, 1, 0, 0))
conv:add(SpatialBatchNormalization(ndf * 2)):add(nn.LeakyReLU(0.2, true))
conv:add(SpatialConvolution(ndf * 2, ndf * 2, 3, 3, 1, 1, 1, 1))
conv:add(SpatialBatchNormalization(ndf * 2))
conv:add(nn.LeakyReLU(0.2, true))
conv:add(SpatialConvolution(ndf * 2, ndf * 8, 3, 3, 1, 1, 1, 1))
conv:add(SpatialBatchNormalization(ndf * 8))
conc:add(nn.Identity())
conc:add(skipD)
convD:add(conc)
convD:add(nn.CAddTable())
```

"cannot open" or "unknown object" when training coco

I've modified CONFIG and main_cls.lua as necessary,
and ran train_coco.sh.

I got a bunch of output that looks like the following:

/home/ms_coco/train2014/COCO_train2014_000000035230.jpg
/home/ms_coco/train2014/COCO_train2014_000000163761.jpg
/home/torch/install/share/lua/5.1/torch/File.lua:375: unknown object
/home/ms_coco/train2014/COCO_train2014_000000058429.jpg
/home/ms_coco/train2014/COCO_train2014_000000198382.jpg
/home/torch/install/share/lua/5.1/torch/File.lua:375: unknown object
/home/ms_coco/train2014/COCO_train2014_000000212091.jpg
/home/ms_coco/train2014/COCO_train2014_000000215679.jpg
/home/torch/install/share/lua/5.1/torch/File.lua:375: unknown object

or

/home/ms_coco/train2014/COCO_train2014_000000178793.jpg
/home/ms_coco/train2014/COCO_train2014_000000183212.jpg
cannot open </home/ms_coco/train2014/COCO_train2014_000000498615.jpg> in mode r at /tmp/luarocks_torch-scm-1-3096/torch7/lib/TH/THDiskFile.c:649
/home/ms_coco/train2014/COCO_train2014_000000498615.jpg
/home/ms_coco/train2014/COCO_train2014_000000246971.jpg
cannot open </home/ms_coco/train2014/COCO_train2014_000000220148.jpg> in mode r at /tmp/luarocks_torch-scm-1-3096/torch7/lib/TH/THDiskFile.c:649

I've checked many times that the paths are correct and the files exist.
What could have gone wrong?
If everything goes right, what kind of printed output should I be seeing?
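Two general notes that may help narrow this down: "cannot open ... in mode r" means the file could not be read at all (missing file, bad path, or permissions), while Torch's "unknown object" error usually means a .t7 file references a class from a package that was not loaded, or was saved in an incompatible format. A hedged standalone check you could run over a data directory before training (the directory path is a placeholder):

```lua
require 'torch'
require 'image'
require 'lfs'

-- Try to load every .t7 and every .jpg under a directory and report the ones that fail.
local data_root = '/home/ms_coco/train2014_ex_t7'  -- placeholder: caption or image directory
local bad = 0
for f in lfs.dir(data_root) do
  local path = data_root .. '/' .. f
  if f:match('%.t7$') then
    local ok, err = pcall(torch.load, path)
    if not ok then bad = bad + 1; print('BAD t7:  ' .. path .. ' (' .. tostring(err) .. ')') end
  elseif f:match('%.jpg$') then
    local ok, err = pcall(image.load, path)
    if not ok then bad = bad + 1; print('BAD jpg: ' .. path .. ' (' .. tostring(err) .. ')') end
  end
end
print(bad .. ' unreadable files')
```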

What does "self.nfg" stand for?

Hi,

I am reproducing this code with another dataset that requires the image size to be 224x224.
I am changing the size of the CNN, and I am wondering whether the abbreviation "nfg" means the output image size.

I am in a hurry, and thank you in advance!

Cheers,
nfg

pytorch version

I want to know if anyone has translated the project to a PyTorch version.

CONFIG: not found

When I run the scripts, like ./scripts/train_cub.sh, I always get the error 'CONFIG: not found'.
What happened? The CONFIG file really is there.

Download caption data for COCO

I was following the link given below to get the caption data for COCO, but noticed that the flowers caption dataset has a lot of additional folders and files like valclasses.txt, trainclasses.txt, etc., while COCO doesn't have these. Is there a way to get these files for COCO as well?

Download the birds and flowers and COCO caption data in Torch format.

out of memory on demo_coco

Hello there,
We tried to test your model on a computer with the following configuration:

Ubuntu 16.04 with CUDA 8, CuDNN 5 & Torch 7

The demo_coco.sh script failed with the following error:

THCudaCheck FAIL file=/home/sku/torch/extra/cutorch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
/home/sku/torch/install/bin/luajit: /home/sku/torch/install/share/lua/5.1/torch/File.lua:351: cuda runtime error (2) : out of memory at /home/sku/torch/extra/cutorch/lib/THC/generic/THCStorage.cu:66
stack traceback:
    [C]: in function 'read'
    /home/sku/torch/install/share/lua/5.1/torch/File.lua:351: in function </home/sku/torch/install/share/lua/5.1/torch/File.lua:245>
    [C]: in function 'read'
    /home/sku/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
    /home/sku/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
    /home/sku/torch/install/share/lua/5.1/nn/Module.lua:192: in function 'read'
    /home/sku/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
    /home/sku/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
    /home/sku/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
    /home/sku/torch/install/share/lua/5.1/nn/Module.lua:192: in function 'read'
    /home/sku/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
    /home/sku/torch/install/share/lua/5.1/torch/File.lua:409: in function 'load'
    txt2img_demo.lua:43: in main chunk
    [C]: in function 'dofile'
    ...sku/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
    [C]: at 0x00405d50

The error remains the same when changing batch_size from 16 to 8 or 1 in txt2img_demo.lua.
Does anyone have an idea how to solve this problem?
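One thing worth knowing here: the failure happens inside torch.load while the CUDA-saved checkpoint is being materialized on the GPU, so lowering batchSize in txt2img_demo.lua cannot help; the model weights themselves have to fit in free GPU memory. A quick diagnostic sketch (assuming cutorch is installed; the device index is an example):

```lua
require 'cutorch'

-- Print free / total memory for the device the demo will use (device 1 here).
local free, total = cutorch.getMemoryUsage(1)
print(string.format('GPU 1: %.0f MiB free of %.0f MiB', free / 2^20, total / 2^20))
```

If very little memory is free, closing other GPU processes (or using a larger GPU) may be the only practical fix.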

bad argument #4 to 'v' (cannot convert 'struct THCudaLongTensor *' to 'struct THCudaTensor *')

rzai@rzai00:~/prj/icml2016$ bash scripts/train_coco_txt.sh
{
img_dir : "/media/rzai/ai_data/VQA-ALL/mscoco.org-visualqa.org/train2014"
beta1 : 0.5
nThreads : 6
txtSize : 1024
niter : 200
batchSize : 256
lr_decay : 0.5
fineSize : 64
use_cudnn : 1
init_t : ""
numCaption : 1
loadSize : 76
print_every : 4
encoder : "gru18"
name : "coco_gru18_bs256_c512"
gpu : 1
checkpoint_dir : "checkpoints"
dataset : "coco_txt"
filenames : ""
lr : 0.0002
ntrain : inf
decay_every : 50
save_every : 5
data_root : "/home/rzai/_reedscot/de_coco_icml.tar.gz/train2014_ex_t7"
doc_length : 201
cnn_dim : 512
display_id : 101
display : 0
}
Random Seed: 3243
Starting donkey with id: 1 seed: 3244
Starting donkey with id: 5 seed: 3248
Starting donkey with id: 4 seed: 3247
Starting donkey with id: 2 seed: 3245
Starting donkey with id: 3 seed: 3246
Starting donkey with id: 6 seed: 3249
Dataset: coco_txt Size: 82783
Warning: cudnn.convert does not work with nngraph yet. Ignoring nn.gModule
Warning: cudnn.convert does not work with nngraph yet. Ignoring nn.gModule
/home/rzai/torch/install/bin/luajit: /home/rzai/torch/install/share/lua/5.1/nn/Container.lua:67:
In 3 module of nn.Sequential:
/home/rzai/torch/install/share/lua/5.1/nn/THNN.lua:110: bad argument #4 to 'v' (cannot convert 'struct THCudaLongTensor *' to 'struct THCudaTensor *')
stack traceback:
[C]: in function 'v'
/home/rzai/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'TemporalMaxPooling_updateOutput'
...ai/torch/install/share/lua/5.1/nn/TemporalMaxPooling.lua:19: in function <...ai/torch/install/share/lua/5.1/nn/TemporalMaxPooling.lua:12>
[C]: in function 'xpcall'
/home/rzai/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/rzai/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'func'
/home/rzai/torch/install/share/lua/5.1/nngraph/gmodule.lua:345: in function 'neteval'
/home/rzai/torch/install/share/lua/5.1/nngraph/gmodule.lua:380: in function 'forward'
main_txt_coco.lua:181: in function 'opfunc'
/home/rzai/torch/install/share/lua/5.1/optim/adam.lua:37: in function 'adam'
main_txt_coco.lua:207: in main chunk
[C]: in function 'dofile'
...rzai/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670

WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
/home/rzai/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
/home/rzai/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'func'
/home/rzai/torch/install/share/lua/5.1/nngraph/gmodule.lua:345: in function 'neteval'
/home/rzai/torch/install/share/lua/5.1/nngraph/gmodule.lua:380: in function 'forward'
main_txt_coco.lua:181: in function 'opfunc'
/home/rzai/torch/install/share/lua/5.1/optim/adam.lua:37: in function 'adam'
main_txt_coco.lua:207: in main chunk
[C]: in function 'dofile'
...rzai/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670
rzai@rzai00:~/prj/icml2016$

Running pretrained models on CPU

Hi,
Is it possible to use the pretrained models that you provide on CPU only?
It seems like it is not, since the models were trained on GPU, but I would like confirmation from you, as I am not that familiar with Torch.
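Background that may help: checkpoints saved from a GPU session contain CUDA (and possibly cudnn) tensor types, so a CPU-only Torch cannot deserialize them directly. If you have access to any machine with a GPU, a common workaround is to load the checkpoint there, convert it to CPU types, and re-save it. A hedged sketch (file names are placeholders):

```lua
-- Run once on a GPU machine to produce a CPU-loadable copy of a checkpoint.
require 'nn'
require 'nngraph'
require 'cutorch'
require 'cunn'
require 'cudnn'

local net = torch.load('some_gpu_checkpoint_net_G.t7')  -- placeholder file name
net = cudnn.convert(net, nn)  -- swap cudnn layers for their nn equivalents
                              -- (note: cudnn.convert does not handle nngraph gModules and will skip them)
net:float()                   -- move all weights to CPU float tensors
torch.save('some_cpu_checkpoint_net_G.t7', net)
```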

Questions about the file names used in your source code

I notice there are some default file names you use in order to train your model, for example 'allids.txt' in main_cls_int.lua on line 26. What is this file? Can you give me some details? I have downloaded the CUB dataset, but I cannot find this file.

encoding captions to .t7 format

I'd like to train with a new caption dataset,
but I can't figure out how to encode the captions into the required .t7 format.
Has anyone figured out how?
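One practical way to reverse-engineer the expected layout is to load one of the provided caption .t7 files and print its structure, then mirror that structure for the new dataset. A hedged sketch (the file path is a placeholder; point it at any file from the downloaded caption data):

```lua
require 'torch'

-- Inspect a provided caption .t7 file to see what the data loader expects.
local sample = torch.load('cub_icml/example_class.t7')  -- placeholder path

if torch.isTensor(sample) then
  print(torch.type(sample), sample:size())
elseif type(sample) == 'table' then
  for k, v in pairs(sample) do
    if torch.isTensor(v) then
      print(k, torch.type(v), v:size())
    else
      print(k, type(v))
    end
  end
else
  print(type(sample))
end
```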
