sachin19 / seq2seq-con

Implementation of "Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs"

License: Other

Python 95.11% Shell 0.35% Perl 4.54%

seq2seq-con's People

Contributors

sachin19, ytsvetko

seq2seq-con's Issues

Got strange results while training translation from zh to en.

Hi,

I'm trying to use your "langvar" branch to translate from Chinese to English, but I'm getting strange statistics and results.

Statistics:

[2021-09-01 14:16:18,792 INFO] Step 50/150000; acc:  83.52; oth_acc:   0.00; ppl:  0.00; xent: -427.01; lr: 0.00015; 2888/4087 tok/s;     90 sec
[2021-09-01 14:16:39,127 INFO] Step 100/150000; acc:  58.23; oth_acc:   0.00; ppl:  0.00; xent: -427.74; lr: 0.00030; 12860/18451 tok/s;    110 sec
[2021-09-01 14:16:59,991 INFO] Step 150/150000; acc:  55.61; oth_acc:   0.00; ppl:  0.00; xent: -427.82; lr: 0.00030; 12801/17890 tok/s;    131 sec
[2021-09-01 14:17:21,238 INFO] Step 200/150000; acc:  55.08; oth_acc:   0.00; ppl:  0.00; xent: -427.84; lr: 0.00030; 12558/17354 tok/s;    152 sec
[2021-09-01 14:17:42,579 INFO] Step 250/150000; acc:  54.64; oth_acc:   0.00; ppl:  0.00; xent: -427.86; lr: 0.00030; 12691/17512 tok/s;    174 sec

As you can see, the acc is decreasing and the perplexity is always zero.

When I use the trained model to translate, it always translates every Chinese token into "the".


Below is my own training process:
First, use the Moses scripts for tokenization and truecasing.

/path/to/moses/scripts/tokenizer/tokenizer.perl -l zh -a -no-escape -threads 20 < train.zh > train.tok.zh
/path/to/moses/scripts/tokenizer/tokenizer.perl -l en -a -no-escape -threads 20 < train.en > train.tok.en
#repeat similar steps for tokenizing val and test sets

/path/to/moses/scripts/recaser/train-truecaser.perl --model truecaser.model.zh --corpus train.tok.zh
/path/to/moses/scripts/recaser/train-truecaser.perl --model truecaser.model.en --corpus train.tok.en

/path/to/moses/scripts/recaser/truecase.perl --model truecaser.model.zh < train.tok.zh > train.tok.true.zh
/path/to/moses/scripts/recaser/truecase.perl --model truecaser.model.en < train.tok.en > train.tok.true.en
#repeat similar steps for truecasing val and test sets (using the same truecasing model learnt from train)

Second, use fastBPE to learn and apply BPE codes.

# learning BPE codes
/path/to/fastBPE/fast learnbpe 24000 train.tok.true.zh > zh.bpecodes
/path/to/fastBPE/fast learnbpe 24000 train.tok.true.en > en.bpecodes

# applying BPE codes
/path/to/fastBPE/fast applybpe train.zh.bpetok train.tok.true.zh zh.bpecodes 
/path/to/fastBPE/fast applybpe train.en.bpetok train.tok.true.en en.bpecodes 

# get vocab
/path/to/fastBPE/fast getvocab train.zh.bpetok > vocab.zh
/path/to/fastBPE/fast getvocab train.en.bpetok > vocab.en

#repeat similar steps for tokenizing the target monolingual corpus, validation set, and test set, using the vocab.lang.
/path/to/fastBPE/fast applybpe valid.zh.bpetok valid.tok.true.zh zh.bpecodes vocab.zh
/path/to/fastBPE/fast applybpe valid.en.bpetok valid.tok.true.en en.bpecodes vocab.en

Third, train fastText using the hyperparameters mentioned in #11.

./fasttext skipgram -input valid.en.bpetok -output emb/en -dim 300 -thread 8

Fourth, use preprocess.py to binarize the data; the one difference from the README is that I also pass -src_vocab and -tgt_vocab.

python preprocess.py \
    -train_src train.zh.bpetok \
    -train_tgt train.en.bpetok \
    -valid_src valid.zh.bpetok \
    -valid_tgt valid.en.bpetok \
    -save_data bin \
    -tgt_emb emb/en.vec \
    -src_vocab vocab.zh \
    -tgt_vocab vocab.en \
    -src_seq_length 175 \
    -tgt_seq_length 175 \
    -num_threads 32 \
    -overwrite

Finally, use train.py to train with the same hyperparameters as in the README.

python train.py \
    -accum_count 2 \
    -adam_beta2 0.9995 \
    -batch_size 4000 \
    -batch_type tokens \
    -decay_method linear \
    -decoder_type transformer \
    -dropout 0.1 \
    -encoder_type transformer \
    -generator_function continuous-linear \
    -generator_layer_norm \
    -heads 4 \
    -label_smoothing 0.1 \
    -lambda_vmf 0.2 \
    -layers 6 \
    -learning_rate 1 \
    -loss nllvmf \
    -max_generator_batches 2 \
    -max_grad_norm 5 \
    -min_lr 1e-09 \
    -normalization tokens \
    -optim radam \
    -param_init 0 \
    -param_init_glorot \
    -position_encoding \
    -rnn_size 512 \
    -save_checkpoint_steps 5000 \
    -share_decoder_embeddings \
    -train_steps 150000 \
    -transformer_ff 1024 \
    -valid_batch_size 4 \
    -valid_steps 5000 \
    -warmup_end_lr 0.0007 \
    -warmup_init_lr 1e-08 \
    -warmup_steps 1 \
    -weight_decay 1e-05 \
    -word_vec_size 512 \
    -world_size 1 \
    -data bin \
    -save_model ckpts/ \
    -gpu_ranks 0

I want to know whether there are any mistakes in my training process; your response would be appreciated!

Thank you!

Extra term in `LogCmk` forward

Hi,

It seems that you might have an extra term in this line, which is calculating the log partition function. scipy.special.ive is already the exponentially scaled version of the modified Bessel function, so the -k looks like an extra term, and it gets backpropagated through, which might give wrong gradients. I think this can be fixed either by getting rid of the -k or by simply changing scipy.special.ive to scipy.special.iv. Did I get that right?
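
For reference, here is a quick numerical check of the iv/ive relationship this relies on (a sketch with illustrative values, not the repository's code):

# Sketch: relation between scipy.special.iv and its exponentially scaled
# variant ive, i.e. ive(v, k) = iv(v, k) * exp(-k) for real k > 0,
# hence log I_v(k) = log(ive(v, k)) + k.
import numpy as np
from scipy.special import iv, ive

v, k = 149.0, 40.0  # illustrative Bessel order (m/2 - 1 for m = 300) and kappa
print(np.log(iv(v, k)))       # direct, can over/underflow for extreme kappa
print(np.log(ive(v, k)) + k)  # scaled version, numerically safer; same value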

Thanks in advance,
Amanjit

weird value of LogCmk

Hi,

I'm trying to apply the NLLvMF loss to language modelling based on your code and paper, but I got some strange results, very similar to the issue in #2.

  1. The value of log(C_m(k)) I get is around 400, which means C_m(k) is actually greater than e^400. Is that normal? (See the sanity-check sketch right after this list.)

  2. I also tried the approximation of log(C_m(k)), because my model's output sometimes has a small norm (i.e., small kappa). The approximation gives a result of around -600, which is quite different from the exact calculation in (1). I then noticed that in lines 42 and 43 of your code, the exact calculation and the approximation differ in sign: the approximation doesn't have a negative sign in front. Is this a typo, or is it actually intended?
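
As a sanity check on the magnitudes above, here is a small sketch that evaluates the standard vMF log normalizer exactly with scipy, plus its kappa-independent small-kappa limit obtained from the leading term of the Bessel series (my own check, not the exact or approximate computation in the repository):

# Sketch: log C_m(kappa) = (m/2 - 1) log kappa - (m/2) log(2*pi) - log I_{m/2-1}(kappa),
# evaluated with the exponentially scaled Bessel function for stability, and its
# small-kappa limit from I_v(k) ~ (k/2)^v / Gamma(v + 1). Not the repository's code.
import numpy as np
from scipy.special import ive, gammaln

m = 300
v = m / 2 - 1

def log_cmk_exact(kappa):
    log_bessel = np.log(ive(v, kappa)) + kappa  # log I_v(kappa), numerically stable
    return v * np.log(kappa) - (m / 2) * np.log(2 * np.pi) - log_bessel

def log_cmk_small_kappa_limit():
    # substituting the leading Bessel term makes the kappa dependence cancel
    return v * np.log(2.0) - (m / 2) * np.log(2 * np.pi) + gammaln(m / 2)

for kappa in (10.0, 40.0, 100.0):
    print(kappa, log_cmk_exact(kappa))
print("small-kappa limit:", log_cmk_small_kappa_limit())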

I really appreciate your idea, and the whole concept makes sense to me. It's just this vMF calculation that really confuses me.

Thanks a lot!
Chauncey

How to use evaluate.sh?

Hi,
Thanks for sharing the code. I was able to follow the instructions to train and decode, but it seems that evaluate.sh takes a few inputs that were not specified. Could you give a working example for evaluate.sh, please? Thank you!

Regarding the training speed of the model

Hi, thanks for the paper and the code.
I have a few questions:

  1. Is the 2.5x training-speed gain of the model trained with continuous outputs measured against a seq2seq model with a regular softmax, or against an adaptive/sampling-based softmax (since adaptive softmax has been shown to be 3 to 5 times faster than a normal softmax), while BLEU remains considerably good?

  2. Does the beam size affect speed in the same way as in softmax-based models, or does it behave a bit differently in this approach? For example, does a beam size of 5 with continuous outputs take only 1 to 1.5 times as long as a beam size of 5 with softmax?

  3. Regarding inference speed, is it lower than that of the softmax-based approach, and by what factor, when comparing the two models with a reasonable batch size on CPU (given that the 2.5x training speedup accounts for convergence together with the continuous outputs, not the model alone)?

Thanks.

pickle error when saving checkpoints

When I run train.py

python ~/seq2seq-con/train.py -gpus 0 -data ./data/data.pt.train.pt -layers 2 -rnn_size 512 -word_vec_size 512 -output_emb_size 300 -brnn -loss nllvmf -epochs 2 -optim adam -dropout 0.0 -learning_rate 0.0005 -log_interval 100 -save_model my_nll -batch_size 64 -tie_emb

I get an error like this

Namespace(batch_size=64, brnn=True, brnn_merge='concat', curriculum=False, data='./data/data.pt.train.pt', dropout=0.0, epochs=2, extra_shuffle=False, fix_src_emb=False, gpus=[0], input_feed=1, layers=2, learning_rate=0.0005, learning_rate_decay=0.5, log_interval=100, loss='nllvmf', max_generator_batches=32, max_grad_norm=5, nonlin_gen=False, optim='adam', output_emb_size=300, param_init=0.1, pre_word_vecs_dec=None, pre_word_vecs_enc=None, pre_word_vecs_output=None, rnn_size=512, save_all_epochs=False, save_model='my_nll', start_decay_at=8, start_epoch=1, tie_emb=True, train_anew=False, train_from='', train_from_state_dict='', word_vec_size=512)
Loading data from './data/data.pt.train.pt'
 * vocabulary size. source = 50004; target = 50000
 * number of training sentences. 234036
 * maximum batch size. 64
Building model...
* number of trainable parameters: 35101484
NMTModel(
  (encoder): Encoder(
    (word_lut): Embedding(50004, 512, padding_idx=0)
    (rnn): LSTM(512, 256, num_layers=2, bidirectional=True)
  )
  (decoder): Decoder(
    (word_lut): Embedding(50000, 300)
    (emb2input): Linear(in_features=300, out_features=512, bias=True)
    (rnn): StackedLSTM(
      (dropout): Dropout(p=0.0)
      (layers): ModuleList(
        (0): LSTMCell(1024, 512)
        (1): LSTMCell(512, 512)
      )
    )
    (attn): GlobalAttention(
      (linear_in): Linear(in_features=512, out_features=512, bias=False)
      (sm): Softmax()
      (linear_out): Linear(in_features=1024, out_features=512, bias=False)
      (tanh): Tanh()
    )
    (dropout): Dropout(p=0.0)
  )
  (generator): Sequential(
    (0): Linear(in_features=512, out_features=300, bias=True)
  )
)
/home/yiu/seq2seq-con/onmt/Dataset.py:68: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  b = Variable(b, volatile=self.volatile)
/home/yiu/seq2seq-con/loss.py:16: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  outputs = Variable(outputs.data, requires_grad=(not eval), volatile=eval)
Validation per step loss: -427.31
Validation per step other loss: 0.99006

/home/yiu/seq2seq-con/onmt/Optim.py:35: UserWarning: torch.nn.utils.clip_grad_norm is now deprecated in favor of torch.nn.utils.clip_grad_norm_.
  clip_grad_norm(self.params, self.max_grad_norm)
Epoch  1,   100/ 3657; lps: -428.01404; mse_lps: 0.56763; 12928 src tok/s; 12733 tgt tok/s; 668 sample/s;     10 s elapsed
Epoch  1,   200/ 3657; lps: -428.17770; mse_lps: 0.49556; 14156 src tok/s; 13699 tgt tok/s; 631 sample/s;     21 s elapsed
Epoch  1,   300/ 3657; lps: -428.24899; mse_lps: 0.46586; 14040 src tok/s; 13645 tgt tok/s; 641 sample/s;     31 s elapsed
Epoch  1,   400/ 3657; lps: -428.29535; mse_lps: 0.44727; 13836 src tok/s; 13412 tgt tok/s; 670 sample/s;     40 s elapsed
Epoch  1,   500/ 3657; lps: -428.33740; mse_lps: 0.43049; 14208 src tok/s; 13678 tgt tok/s; 595 sample/s;     51 s elapsed
Epoch  1,   600/ 3657; lps: -428.38971; mse_lps: 0.40920; 14407 src tok/s; 14027 tgt tok/s; 630 sample/s;     61 s elapsed
Epoch  1,   700/ 3657; lps: -428.43015; mse_lps: 0.39302; 13909 src tok/s; 13532 tgt tok/s; 622 sample/s;     71 s elapsed
Epoch  1,   800/ 3657; lps: -428.47525; mse_lps: 0.37523; 13679 src tok/s; 13319 tgt tok/s; 659 sample/s;     81 s elapsed
Epoch  1,   900/ 3657; lps: -428.47098; mse_lps: 0.37668; 14284 src tok/s; 13836 tgt tok/s; 613 sample/s;     91 s elapsed
Epoch  1,  1000/ 3657; lps: -428.51727; mse_lps: 0.35823; 13618 src tok/s; 13353 tgt tok/s; 675 sample/s;    101 s elapsed
Epoch  1,  1100/ 3657; lps: -428.51468; mse_lps: 0.35920; 14259 src tok/s; 13923 tgt tok/s; 629 sample/s;    111 s elapsed
Epoch  1,  1200/ 3657; lps: -428.54184; mse_lps: 0.34848; 13945 src tok/s; 13549 tgt tok/s; 626 sample/s;    121 s elapsed
Epoch  1,  1300/ 3657; lps: -428.54544; mse_lps: 0.34690; 14336 src tok/s; 13912 tgt tok/s; 620 sample/s;    132 s elapsed
Epoch  1,  1400/ 3657; lps: -428.55545; mse_lps: 0.34304; 14013 src tok/s; 13677 tgt tok/s; 632 sample/s;    142 s elapsed
Epoch  1,  1500/ 3657; lps: -428.59375; mse_lps: 0.32786; 13999 src tok/s; 13636 tgt tok/s; 645 sample/s;    152 s elapsed
Epoch  1,  1600/ 3657; lps: -428.58691; mse_lps: 0.33046; 14409 src tok/s; 13871 tgt tok/s; 593 sample/s;    162 s elapsed
Epoch  1,  1700/ 3657; lps: -428.60376; mse_lps: 0.32374; 14253 src tok/s; 13859 tgt tok/s; 609 sample/s;    173 s elapsed
Epoch  1,  1800/ 3657; lps: -428.61404; mse_lps: 0.31958; 14372 src tok/s; 13931 tgt tok/s; 620 sample/s;    183 s elapsed
Epoch  1,  1900/ 3657; lps: -428.62067; mse_lps: 0.31684; 14278 src tok/s; 13859 tgt tok/s; 621 sample/s;    194 s elapsed
Epoch  1,  2000/ 3657; lps: -428.62711; mse_lps: 0.31431; 14500 src tok/s; 14006 tgt tok/s; 612 sample/s;    204 s elapsed
Epoch  1,  2100/ 3657; lps: -428.64532; mse_lps: 0.30712; 13907 src tok/s; 13592 tgt tok/s; 637 sample/s;    214 s elapsed
Epoch  1,  2200/ 3657; lps: -428.65009; mse_lps: 0.30508; 14166 src tok/s; 13633 tgt tok/s; 615 sample/s;    225 s elapsed
Epoch  1,  2300/ 3657; lps: -428.67178; mse_lps: 0.29664; 12135 src tok/s; 11900 tgt tok/s; 636 sample/s;    235 s elapsed
Epoch  1,  2400/ 3657; lps: -428.66647; mse_lps: 0.29864; 13683 src tok/s; 13340 tgt tok/s; 657 sample/s;    244 s elapsed
Epoch  1,  2500/ 3657; lps: -428.69794; mse_lps: 0.28635; 13363 src tok/s; 13059 tgt tok/s; 716 sample/s;    253 s elapsed
Epoch  1,  2600/ 3657; lps: -428.68912; mse_lps: 0.28969; 13630 src tok/s; 13356 tgt tok/s; 687 sample/s;    263 s elapsed
Epoch  1,  2700/ 3657; lps: -428.68018; mse_lps: 0.29306; 14002 src tok/s; 13687 tgt tok/s; 666 sample/s;    272 s elapsed
Epoch  1,  2800/ 3657; lps: -428.67957; mse_lps: 0.29320; 14343 src tok/s; 13920 tgt tok/s; 622 sample/s;    282 s elapsed
Epoch  1,  2900/ 3657; lps: -428.69455; mse_lps: 0.28743; 13715 src tok/s; 13484 tgt tok/s; 680 sample/s;    292 s elapsed
Epoch  1,  3000/ 3657; lps: -428.68796; mse_lps: 0.28988; 13785 src tok/s; 13397 tgt tok/s; 655 sample/s;    302 s elapsed
Epoch  1,  3100/ 3657; lps: -428.69406; mse_lps: 0.28747; 14149 src tok/s; 13791 tgt tok/s; 647 sample/s;    312 s elapsed
Epoch  1,  3200/ 3657; lps: -428.70630; mse_lps: 0.28273; 13777 src tok/s; 13421 tgt tok/s; 666 sample/s;    321 s elapsed
Epoch  1,  3300/ 3657; lps: -428.70053; mse_lps: 0.28483; 14535 src tok/s; 14166 tgt tok/s; 627 sample/s;    331 s elapsed
Epoch  1,  3400/ 3657; lps: -428.71848; mse_lps: 0.27790; 13991 src tok/s; 13669 tgt tok/s; 652 sample/s;    341 s elapsed
Epoch  1,  3500/ 3657; lps: -428.71857; mse_lps: 0.27776; 13829 src tok/s; 13620 tgt tok/s; 678 sample/s;    351 s elapsed
Epoch  1,  3600/ 3657; lps: -428.71576; mse_lps: 0.27881; 14597 src tok/s; 14136 tgt tok/s; 618 sample/s;    361 s elapsed
Epoch  1, 234036 samples,    367 s
Train per step loss: -428.563
Validation per step loss: -428.625
Validation per step other loss: 0.312244
Traceback (most recent call last):
  File "/home/yiu/seq2seq-con/train.py", line 437, in <module>
    main()
  File "/home/yiu/seq2seq-con/train.py", line 434, in main
    trainModel(model, trainData, validData, dataset, target_embeddings, optim)
  File "/home/yiu/seq2seq-con/train.py", line 290, in trainModel
    '%s_latest.pt' % (opt.save_model))
  File "/home/yiu/.conda/envs/yiuenv/lib/python3.7/site-packages/torch/serialization.py", line 224, in save
    return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
  File "/home/yiu/.conda/envs/yiuenv/lib/python3.7/site-packages/torch/serialization.py", line 149, in _with_file_like
    return body(f)
  File "/home/yiu/.conda/envs/yiuenv/lib/python3.7/site-packages/torch/serialization.py", line 224, in <lambda>
    return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
  File "/home/yiu/.conda/envs/yiuenv/lib/python3.7/site-packages/torch/serialization.py", line 297, in _save
    pickler.dump(obj)
AttributeError: Can't pickle local object 'Optim.set_parameters.<locals>.<lambda>'

The error seems to come from the params attribute of the optim object, which is a lambda function.
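
A minimal sketch (not the repository's code) that reproduces the same failure with a hypothetical Optim-like class holding a lambda, plus the usual workaround of saving state dicts instead of whole objects:

# Sketch: pickle cannot serialize locally defined lambdas, so any object that
# stores one (as the traceback suggests Optim does) fails inside torch.save.
import pickle

class FakeOptim:  # hypothetical stand-in, not onmt's Optim
    def set_parameters(self, params):
        self.params = lambda: list(params)  # the unpicklable attribute

opt = FakeOptim()
opt.set_parameters([1, 2, 3])
try:
    pickle.dumps(opt)
except (AttributeError, pickle.PicklingError) as e:
    print(e)  # Can't pickle local object 'FakeOptim.set_parameters.<locals>.<lambda>'

# Typical workaround when checkpointing: save state_dict()s (or drop/replace the
# lambda attribute) instead of pickling the optimizer object itself.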

Hyperparameter in NLL-VMF loss

Why are the hyperparameters in the NLL-VMF loss different from the ones in your paper? Do they achieve better empirical results?

Hyper-parameters to train word2vec/FastText

This question is a bit off-topic, but I would like to know what hyper-parameters you used to train the word2vec/fastText models.

You mention "which were trained using the hyper-parameters recommended by the authors" in your paper, but not all of the relevant parameters are actually specified in the authors' papers.
If possible, could you provide the scripts/code used to train the embedding models?
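
Not an answer from the authors, but for reference, here is a sketch of fastText skipgram training via the official Python bindings using the library's documented defaults, with only dim changed to 300 to match the embedding size used elsewhere in this repo; these are not a claim about the settings actually used for the paper:

# Sketch only: fastText's documented default hyper-parameters for skipgram,
# with dim=300; NOT necessarily the settings used for the paper.
import fasttext

model = fasttext.train_unsupervised(
    "train.en.bpetok",  # illustrative path to the BPE-tokenized target corpus
    model="skipgram",
    dim=300,            # embedding size used with the continuous outputs
    ws=5, epoch=5, minCount=5, minn=3, maxn=6, neg=5, lr=0.05,  # library defaults
)
model.save_model("emb/en.bin")  # illustrative output path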

Thanks!

VMF loss problems

Hi,

I was trying to reproduce the von Mises-Fisher loss from the paper following your code, and I got some unexpected results in experiments with fastText embeddings (m=300). Most of the problems come from the first loss term, logcmk.

  1. The exact loss value is around -400. I suppose there is no requirement for loss values to be non-negative, since we are searching for a minimum, but it seems pretty weird. Also, the approximated loss value is around 290.
  2. Regardless of the similarity between the output and target vectors, the gradients after loss.backward() lie in a narrow range. For example, if we calculate the loss between output = target and the target, the gradient values are ~2e-05, while a completely different output and target gives gradient values of ~6e-05. The usual cosine loss for the same example gives ~1e-21 and ~5e-05, respectively (a minimal sketch of this cosine baseline follows right after this list).
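
For reference, a minimal PyTorch sketch of the cosine-loss part of this comparison (my own check, not your vMF implementation):

# Sketch: gradient magnitude of a cosine loss when the output equals the target
# vs. a random output (the baseline numbers quoted in point 2). Not the vMF loss.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
target = F.normalize(torch.randn(300), dim=0)

def max_grad(output_init):
    output = output_init.clone().requires_grad_(True)
    loss = 1.0 - F.cosine_similarity(output.unsqueeze(0), target.unsqueeze(0)).squeeze()
    loss.backward()
    return output.grad.abs().max().item()

print("output == target :", max_grad(target))            # ~0 (numerical noise)
print("random output    :", max_grad(torch.randn(300)))  # noticeably larger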

I've tried playing with the regularization and some other parameters, but haven't gotten rid of the problems above. Is this the correct behavior for the vMF loss? What could be the problem? I'd really appreciate your help.

Also, I haven't figured out from the paper: is there a reason for using ||e_hat|| as kappa?

Thanks a lot!
Anna

error running translate.py

I ran translate.py with the source and target files generated by prepare_data.py as
python ~/seq2seq-con/translate.py -loss nllvmf -gpu 0 -model ./my_nll_bestmodel.pt -src ./data/dev.tok.true.fr -tgt ./data/dev.tok.true.en -replace_unk -output ./data/predictions -batch_size 512 -beam_size 1

and I got this error

Namespace(batch_size=512, beam_size=1, cuda=True, gpu=0, lookup_dict=None, loss='nllvmf', max_sent_length=100, model='./my_nll_bestmodel.pt', n_best=1, output='./data/predictions', replace_unk=True, saved_lm='', src='./data/dev.tok.true.fr', tgt='./data/dev.tok.true.en', tgt_dict=None, use_lm=False, verbose=False)
/home/yiu/seq2seq-con/onmt/Dataset.py:68: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  b = Variable(b, volatile=self.volatile)
/home/yiu/seq2seq-con/onmt/Translator.py:258: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  input_var = Variable(input_, volatile=True)
/home/yiu/seq2seq-con/onmt/Translator.py:311: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  .view(*newSize), volatile=True)
Traceback (most recent call last):
  File "/home/yiu/seq2seq-con/translate.py", line 152, in <module>
    main()
  File "/home/yiu/seq2seq-con/translate.py", line 100, in main
    predBatch, predScore, knntime_ = translator.translate(srcBatch, tgtBatch)
  File "/home/yiu/seq2seq-con/onmt/Translator.py", line 351, in translate
    [self.buildTargetTokens(pred[b][n], srcBatch[b], attn[b][n]) for n in range(self.opt.n_best) ]
  File "/home/yiu/seq2seq-con/onmt/Translator.py", line 351, in <listcomp>
    [self.buildTargetTokens(pred[b][n], srcBatch[b], attn[b][n]) for n in range(self.opt.n_best) ]
  File "/home/yiu/seq2seq-con/onmt/Translator.py", line 151, in buildTargetTokens
    if isNumeral(tokens[i]):
  File "/home/yiu/seq2seq-con/onmt/Translator.py", line 12, in isNumeral
    x = re.findall(r'[0-9]+[\.,][0-9]+',s)
  File "/home/yiu/.conda/envs/yiuenv/lib/python3.7/re.py", line 223, in findall
    return _compile(pattern, flags).findall(string)
TypeError: expected string or bytes-like object
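
For what it's worth, here is a sketch of a defensive guard around the regex call in the traceback; it assumes the offending token can arrive as a non-string object, and it is not necessarily the right fix for the repository:

# Sketch of a guard for onmt/Translator.py's isNumeral; assumes `s` may be a
# non-string object when -replace_unk copies tokens back from the source.
import re

def isNumeral(s):
    if not isinstance(s, str):  # avoid "expected string or bytes-like object"
        return False
    return bool(re.findall(r'[0-9]+[\.,][0-9]+', s))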
