Benchmarks about imagecaptioning.pytorch HOT 71 OPEN

ruotianluo avatar ruotianluo commented on August 19, 2024 12
Benchmarks

Comments (71)

ruotianluo avatar ruotianluo commented on August 19, 2024 3

Finetuning is actually worse. It's about how you extract the features; check the Self-Critical Sequence Training paper.
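
For context, the features in question are the pooled "fc" vector and the spatial "att" map taken from a pretrained CNN, which the repo extracts offline (e.g. into cocotalk_fc and cocotalk_att). Below is a minimal sketch of extracting both with torchvision's ResNet-101; the input size, preprocessing, and average pooling are illustrative assumptions, not the repo's exact prepro script.

import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Pretrained ResNet-101 with the average-pool and fc head removed,
# so the output is the last spatial convolutional feature map.
resnet = models.resnet101(pretrained=True)
resnet.eval()
backbone = torch.nn.Sequential(*list(resnet.children())[:-2])

preprocess = transforms.Compose([
    transforms.Resize((448, 448)),  # assumed input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open('example.jpg').convert('RGB')).unsqueeze(0)
with torch.no_grad():
    conv_feats = backbone(img)                          # (1, 2048, 14, 14)
    att_feat = conv_feats.squeeze(0).permute(1, 2, 0)   # (14, 14, 2048) spatial "att" feature
    fc_feat = conv_feats.mean(dim=[2, 3]).squeeze(0)    # (2048,) pooled "fc" feature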

mojesty avatar mojesty commented on August 19, 2024 2

Hello! I have some questions about the pretrained models' performance.
I tested the top-down, fully connected (FC), and att2in models on several random images from the Internet and found that they cannot describe the images correctly (although the top-down and att2in models produced syntactically correct sentences, e.g. "a woman sitting on a chair with a dog"). I also visualized the attention maps, and they look more or less random for every model as well.
So either my method of testing the models is flawed or the models themselves are not that good; I want to discuss this.
Also, @upccpu, could you please provide me a trained model?

ruotianluo avatar ruotianluo commented on August 19, 2024 1

They didn't fine-tune in either phase. And finetuning may not work as well with attention-based models.

miracle24 avatar miracle24 commented on August 19, 2024 1

I did not train the attention-based model, but I will try. Thank you for your code; I will start learning PyTorch with it.

ruotianluo avatar ruotianluo commented on August 19, 2024 1

@dmitriy-serdyuk it's using ResNet-101, and FC stands for the FC model in the Self-Critical Sequence Training paper, which can be regarded as a variant of ShowTell.

YuanEZhou avatar YuanEZhou commented on August 19, 2024 1

@jamiechoi1995 I use the default options.

SJTUzhanglj avatar SJTUzhanglj commented on August 19, 2024

Is there any code, or are there options, showing how to train any of these models (topdown, etc.) with the self-critical algorithm? @ruotianluo

ruotianluo avatar ruotianluo commented on August 19, 2024

It's in another repository of mine.

miracle24 avatar miracle24 commented on August 19, 2024

Did you fine-tune the CNN when training the model with cross-entropy loss?

ruotianluo avatar ruotianluo commented on August 19, 2024

No.

miracle24 avatar miracle24 commented on August 19, 2024

Wow, that's unbelievable. I cannot achieve such a high score without fine-tuning when I train my own captioning model under cross-entropy loss. Most papers I have read fine-tune the CNN when training the model with cross-entropy loss. Are there any tips for training the model with cross-entropy?

miracle24 avatar miracle24 commented on August 19, 2024

I think they mean they did not do fine-tuning when training the model under the RL loss, while they did not mention whether they fine-tuned the CNN when training the model under cross-entropy loss.

miracle24 avatar miracle24 commented on August 19, 2024

I fine-tuned the CNN under cross-entropy loss as in neuraltalk2 (Lua version) and got a CIDEr of 0.91 on the validation set without beam search. Then I trained the self-critical model without fine-tuning, starting from the best pretrained model, and finally got a CIDEr result close to the self-critical paper.

ahkarami avatar ahkarami commented on August 19, 2024

Dear @ruotianluo,
Thank you for your fantastic code. Would you please tell me all of the parameters you used to run train.py? (In fact, I used your code following the guidance in the ReadMe file, but when I tested the trained model, I got the same result, i.e., the same caption, for all of my different test images.) It is worth noting that I used --language_eval 0; maybe this wrong parameter caused these results, am I correct?

ruotianluo avatar ruotianluo commented on August 19, 2024

Can you try downloading the pretrained model and evaluating it on your test images? That would help me narrow down the problem.

ahkarami avatar ahkarami commented on August 19, 2024

Yes, I can download the pre-trained models and use them. The results from the pre-trained models were appropriate and nice; however, the results from my own trained models were the same for all of the images. It seems something is wrong with the parameters I used for training, and the trained model produced the same caption for every given image.

ruotianluo avatar ruotianluo commented on August 19, 2024

You should be able to reproduce my results by following my instructions; this is really weird.
Anyway, which options are unclear (most of the options are explained in opts.py)?

ahkarami avatar ahkarami commented on August 19, 2024

Thank you very much for your help. The problem has been solved. In fact, I had trained your code on another, synthetic dataset, and that is where the error occurred. When I used your code on the MS-COCO dataset, the training process had no problem.
Just as another question, would you please kindly tell me appropriate values of the parameters for training? I mean appropriate values for parameters such as beam_size, rnn_size, num_layers, rnn_type, learning_rate, learning_rate_decay_every, and scheduled_sampling_start.

ruotianluo avatar ruotianluo commented on August 19, 2024

@ahkarami is the previous problem related to my code?
I think it varies from dataset to dataset. Beam size could be 5. The numbers I set are the same as in the readme.

ahkarami avatar ahkarami commented on August 19, 2024

Dear @ruotianluo,
No, the previous problem was related to my dataset; your code is correct. In fact, my dataset contains many repeated words. Moreover, the length of the sentences varies from ~15 up to 90 words. I changed the parameters of prepro_labels.py to --max_length 50 & --word_count_threshold 2; then, after about 40 epochs, the produced results are no longer the same for every given image. However, the results were bad and not appropriate. I think my parameters for training & preprocessing the labels are still not appropriate.

xyy19920105 avatar xyy19920105 commented on August 19, 2024

Hi @ruotianluo,
Thank you for your code and benchmark. Did you test adaptive attention with your code? Could you share the adaptive attention results?
Thank you again.

ruotianluo avatar ruotianluo commented on August 19, 2024

Actually no. I didn't spend much time on that model.

xyy19920105 avatar xyy19920105 commented on August 19, 2024

Thanks for your reply.
Do you think the adaptive attention model is not good enough as a baseline?

ruotianluo avatar ruotianluo commented on August 19, 2024

It's good; I just couldn't get it to work well.

dmitriy-serdyuk avatar dmitriy-serdyuk commented on August 19, 2024

Could you clarify which features are used for the results above? ResNet-152? And does fc stand for ShowTell?

chynphh avatar chynphh commented on August 19, 2024

Thank you for your fantastic code. I am a beginner, and it has helped me a lot.
I have a question about the 'LSTMCore' class in FCModel.py. Why don't you use the official LSTM module and run it step by step, or use LSTMCell and add a dropout layer on top of it? Is there any difference between your code and those?

ruotianluo avatar ruotianluo commented on August 19, 2024

The in gate is different.
https://github.com/ruotianluo/ImageCaptioning.pytorch/blob/master/models/FCModel.py#L34
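
For readers comparing against nn.LSTM: if memory serves, the linked LSTMCore keeps the usual sigmoid input/forget/output gates but replaces the tanh cell candidate with a maxout over two linear projections. A from-scratch sketch of such a cell, written for illustration rather than copied from FCModel.py:

import torch
import torch.nn as nn

class MaxoutLSTMCell(nn.Module):
    # LSTM cell whose cell candidate is a maxout of two linear projections
    # instead of the usual tanh transform (illustrative sketch).
    def __init__(self, input_size, rnn_size):
        super().__init__()
        self.rnn_size = rnn_size
        # 3 gates + 2 candidate projections = 5 * rnn_size outputs
        self.i2h = nn.Linear(input_size, 5 * rnn_size)
        self.h2h = nn.Linear(rnn_size, 5 * rnn_size)

    def forward(self, x, state):
        h_prev, c_prev = state
        sums = self.i2h(x) + self.h2h(h_prev)
        gates = torch.sigmoid(sums[:, :3 * self.rnn_size])
        in_gate = gates[:, :self.rnn_size]
        forget_gate = gates[:, self.rnn_size:2 * self.rnn_size]
        out_gate = gates[:, 2 * self.rnn_size:]
        # Maxout candidate: element-wise max of two linear projections.
        cand = torch.max(sums[:, 3 * self.rnn_size:4 * self.rnn_size],
                         sums[:, 4 * self.rnn_size:])
        c = forget_gate * c_prev + in_gate * cand
        h = out_gate * torch.tanh(c)
        return h, c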

chynphh avatar chynphh commented on August 19, 2024

OK, got it. But why did you make this change? Is there any paper or research about this?

ruotianluo avatar ruotianluo commented on August 19, 2024

Self-critical Sequence Training for Image Captioning
https://arxiv.org/abs/1612.00563

chynphh avatar chynphh commented on August 19, 2024

Thank you very much!

eriche2016 avatar eriche2016 commented on August 19, 2024

I am wondering whether you used only the 80K training set to get such high performance on the validation set, or the 110K set? I am experimenting on the Karpathy split with the 80K training set, but I only get 0.72 CIDEr when using the train set alone. If you used 110K, can you give me some tips on training the net?

eriche2016 avatar eriche2016 commented on August 19, 2024

BTW, I am using the show-attend model for my experiment.

ruotianluo avatar ruotianluo commented on August 19, 2024

@eriche2016 I use 110k.

eriche2016 avatar eriche2016 commented on August 19, 2024

Okay, I got it. Thank you very much for your quick reply.

jamiechoi1995 avatar jamiechoi1995 commented on August 19, 2024

I used the att2in2 pre-trained model with ResNet-101 CNN features, and the evaluation result is:

Bleu_1: 0.752
Bleu_2: 0.588
Bleu_3: 0.448
Bleu_4: 0.339
computing METEOR score...
METEOR: 0.264
computing Rouge score...
ROUGE_L: 0.551
computing CIDEr score...
CIDEr: 1.058
loss: 12.9450276334
{'CIDEr': 1.0579511410971039, 'Bleu_4': 0.33850444932429163, 'Bleu_3': 0.4475539789958938, 'Bleu_2': 0.588021344462357, 'Bleu_1': 0.7524049671248727, 'ROUGE_L': 0.5509140488261475, 'METEOR': 0.2637079091201445}

I am confused about the loss, it seems too high.

ruotianluo avatar ruotianluo commented on August 19, 2024

@jamiechoi1995
That's the cross-entropy loss; that's expected.

jamiechoi1995 avatar jamiechoi1995 commented on August 19, 2024

@ruotianluo so the pre-trained models include self critical training?
I thought they only include MLE training, sorry.

ruotianluo avatar ruotianluo commented on August 19, 2024

There are, but they're in other folders.

miracle24 avatar miracle24 commented on August 19, 2024

Hi. Can you give more details about how to run att2in2 with self-critical training? For example, how many epochs did you pretrain att2in2 with the XE loss, and after that, how many epochs did you train it with self-critical? If possible, could you provide the training script? Thanks a lot.

ruotianluo avatar ruotianluo commented on August 19, 2024

Check out https://github.com/ruotianluo/self-critical.pytorch

miracle24 avatar miracle24 commented on August 19, 2024

I have read that. python train.py --id fc_rl --caption_model fc --input_json data/cocotalk.json --input_fc_dir data/cocotalk_fc --input_att_dir data/cocotalk_att --input_label_h5 data/cocotalk_label.h5 --batch_size 10 --learning_rate 5e-5 --start_from log_fc_rl --checkpoint_path log_fc_rl --save_checkpoint_every 6000 --language_eval 1 --val_images_use 5000 --self_critical_after 30. But how many epochs did you train the model with self-critical?

ruotianluo avatar ruotianluo commented on August 19, 2024

I see. You can actually train for as long as you want. I think I trained for an additional 30 epochs.
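
For reference, the objective in that self-critical stage is the REINFORCE-style update from the SCST paper linked earlier in this thread, which uses the model's own greedy-decoded caption as the baseline. With w^s a sampled caption, \hat{w} the greedy (argmax) caption, and r(.) the CIDEr reward:

\nabla_\theta L(\theta) \approx -\big(r(w^s) - r(\hat{w})\big)\,\nabla_\theta \log p_\theta(w^s)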

miracle24 avatar miracle24 commented on August 19, 2024

Ok, I see. Thanks a lot.

upccpu avatar upccpu commented on August 19, 2024

I implemented the ideas from my paper on top of your code, with ResNet-152 and cross-entropy loss, without ensembling. The evaluation result is:
Bleu_1: 0.759
Bleu_2: 0.595
Bleu_3: 0.454
Bleu_4: 0.344
computing METEOR score...
METEOR: 0.268
computing Rouge score...
ROUGE_L: 0.556
computing CIDEr score...
CIDEr: 1.090
These results exceed the top-down model by a large margin in the same environment, especially the CIDEr score (1.090 >> 1.051). It is beyond my expectation.

YuanEZhou avatar YuanEZhou commented on August 19, 2024

opt.id = 'topdown'
opt.caption_model = 'topdown'
opt.rnn_size = 1000
opt.input_encoding_size = 1000

opt.batch_size = 100
Other configurations follow this repository.

Cross_entropy loss:
ce_wo_constrain

Cross_entropy+self-critical: slightly better than the result reported in original paper.
ce sc argmax

jamiechoi1995 avatar jamiechoi1995 commented on August 19, 2024

opt.id = 'topdown'
opt.caption_model = 'topdown'
opt.rnn_size = 1000
opt.input_encoding_size = 1000

opt.batch_size = 100
Other configurations follow this repository.

Cross_entropy loss:
ce_wo_constrain

Cross_entropy+self-critical: slightly better than the result reported in original paper.
ce sc argmax

@YuanEZhou which features did you use? The default ResNet-101 features or the bottom-up features?

YuanEZhou avatar YuanEZhou commented on August 19, 2024

bottom up feature

jamiechoi1995 avatar jamiechoi1995 commented on August 19, 2024

bottom up feature

@YuanEZhou may I ask how you used these features?
Because I have a similar question in this issue: ruotianluo/self-critical.pytorch#66

Did you modify the code to incorporate bounding-box information, or did you just use the default options?

jamiechoi1995 avatar jamiechoi1995 commented on August 19, 2024

Adaptive Attention model
learning rate 1e-4
batch size 32
trained for 100 epochs
I use the code in self-critical repo

{'CIDEr': 1.0295328576254532, 'Bleu_4': 0.32367107232015596, 'Bleu_3': 0.4308636494026319, 'Bleu_2': 0.5710839754137301, 'Bleu_1': 0.7375622419883233, 'ROUGE_L': 0.5415854013591195, 'METEOR': 0.2603669044858015, 'SPICE': 0.19360318734522747}

fawazsammani avatar fawazsammani commented on August 19, 2024

@YuanEZhou can you please share the results.json file you got from the coco caption code which includes all the image ids with their predictions for the validation images? I urgently need it. Your help is highly appreciated

YuanEZhou avatar YuanEZhou commented on August 19, 2024

Hi @fawazsammani , I am sorry that I have lost the file.

2033329616 avatar 2033329616 commented on August 19, 2024

When I use the att2in2 pre-trained model to evaluate on the COCO dataset, the decoder always outputs similar sentences and the metrics are very bad. Why?
(screenshots: wrong_info4, wrong_info3)

fawazsammani avatar fawazsammani commented on August 19, 2024

@2033329616 maybe the mistake is in your images. Yesterday, I ran the att2in2 model on the COCO Karpathy-split validation images; you can run them through coco-caption and see the results, and they are identical to the ones posted. (I've already pre-processed the file to include the image ids for evaluation purposes, so you may just run the coco-caption code on it directly.)
att2in2_results.zip
Regards

YuanEZhou avatar YuanEZhou commented on August 19, 2024

@2033329616 You need to download the pretrained ResNet model from the link in this project.

2033329616 avatar 2033329616 commented on August 19, 2024

@fawazsammani @YuanEZhou, thanks for your reply. I downloaded "att2in2_results.zip" and ran the coco metrics code, and it gets a good result. I have already used the pretrained att2in2 model in this project and tested it on the Karpathy-split COCO test set, but I can't get the correct result. I notice the output sentences are the same no matter how I change the image or the fc and att features. I have no idea how to solve this problem.

akashprakas avatar akashprakas commented on August 19, 2024

Is there a pretrained model in which self-attention was used?

kakazl avatar kakazl commented on August 19, 2024

@fawazsammani @YuanEZhou, thanks for your reply. I downloaded "att2in2_results.zip" and ran the coco metrics code, and it gets a good result. I have already used the pretrained att2in2 model in this project and tested it on the Karpathy-split COCO test set, but I can't get the correct result. I notice the output sentences are the same no matter how I change the image or the fc and att features. I have no idea how to solve this problem.

I met the same problem. Have you solved it?

fawazsammani avatar fawazsammani commented on August 19, 2024

Hi @2033329616 and @kakazl . I'm not sure exactly what's the problem in your case. Maybe you used different settings? This is the command i run: pytorch-0.4:py2 "python eval.py --model '/data/att2in2/model-best.pth' --infos_path '/data/att2in2/infos_a2i2-best.pkl' --image_folder '/captiondata' --num_images -1 --beam_size 3 --dump_path 1"
Make sure you place all the images in the folder 'captiondata'. Or create a new folder and change the name in the command. Hope that helps

sssilence avatar sssilence commented on August 19, 2024

Hi @2033329616 and @kakazl . I'm not sure exactly what's the problem in your case. Maybe you used different settings? This is the command i run: pytorch-0.4:py2 "python eval.py --model '/data/att2in2/model-best.pth' --infos_path '/data/att2in2/infos_a2i2-best.pkl' --image_folder '/captiondata' --num_images -1 --beam_size 3 --dump_path 1"
Make sure you place all the images in the folder 'captiondata'. Or create a new folder and change the name in the command. Hope that helps

Sorry, when I run python eval.py --model 'self_cirtical/att2in2/model-best.pth' --infos_path 'self_cirtical/att2in2/infos_a2i2-best.pkl' --image_folder 'data/coco/images/val2014/' --num_images 10,
I always get the error TypeError: 'int' object is not callable in AttModel, line 165: batch_size = fc_feats.size(0).
I don't know why. Thank you!

fawazsammani avatar fawazsammani commented on August 19, 2024

@sssilence are you using Python 2 or 3? I just ran it again and it works. According to your error, your fc_feats is an integer. Are you sure you extracted the features correctly and didn't modify anything in the code?
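
As an aside, this particular TypeError usually means fc_feats is still a NumPy array rather than a torch tensor: NumPy's .size is a plain integer attribute, so fc_feats.size(0) ends up calling an int. A tiny demonstration of the failure mode (not the repo's code):

import numpy as np
import torch

fc_np = np.zeros((10, 2048), dtype=np.float32)
fc_t = torch.from_numpy(fc_np)

print(fc_t.size(0))   # 10 -- torch tensors expose size() as a method
print(fc_np.size)     # 20480 -- NumPy's .size is an int attribute
try:
    fc_np.size(0)     # calling that int raises the error seen above
except TypeError as err:
    print(err)        # 'int' object is not callable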

sssilence avatar sssilence commented on August 19, 2024

@sssilence are you using Python 2 or 3? I just ran it again and it works. According to your error, your fc_feats is an integer. Are you sure you extracted the features correctly and didn't modify anything in the code?

Yeah, I used Python 2. I didn't modify anything else in the code, and I used ResNet-101 to extract the features. Then I modified some code in eval_utils.py: tmp = [torch.from_numpy(_).cuda() if _ is not None else _ for _ in tmp], and now I can run python eval.py, but I can't run python train.py successfully.
Besides, when I finish running eval.py, I only get this output:
cp "data/coco/images/val2014/COCO_val2014_000000316715.jpg" vis/imgs/img40508.jpg
image 4: a group of traffic lights on a city street
cp "data/coco/images/val2014/COCO_val2014_000000278350.jpg" vis/imgs/img40509.jpg
image 5: a man standing in the water with a frisbee
cp "data/coco/images/val2014/COCO_val2014_000000557573.jpg" vis/imgs/img40510.jpg
image 6: a close up of a flower in a street
evaluating validation preformance... 5/40504 (0.000000)
loss: 0.0
There is nothing in eval_results, and there aren't any scores.

Sun-WeiZhen avatar Sun-WeiZhen commented on August 19, 2024

Dear @ruotianluo,
Thank you for your fantastic code. Would you please help me with the following problem? Thank you. I have downloaded the pretrained models as described in the readme.
usage: eval.py [-h] --model MODEL [--cnn_model CNN_MODEL] --infos_path
INFOS_PATH [--batch_size BATCH_SIZE] [--num_images NUM_IMAGES]
[--language_eval LANGUAGE_EVAL] [--dump_images DUMP_IMAGES]
[--dump_json DUMP_JSON] [--dump_path DUMP_PATH]
[--sample_max SAMPLE_MAX] [--beam_size BEAM_SIZE]
[--temperature TEMPERATURE] [--image_folder IMAGE_FOLDER]
[--image_root IMAGE_ROOT] [--input_fc_dir INPUT_FC_DIR]
[--input_att_dir INPUT_ATT_DIR]
[--input_label_h5 INPUT_LABEL_H5] [--input_json INPUT_JSON]
[--split SPLIT] [--coco_json COCO_JSON] [--id ID]
eval.py: error: unrecognized arguments: python eval.py

AnupKumarGupta avatar AnupKumarGupta commented on August 19, 2024

Hi everyone. Thanks and kudos to this great repository. I am just a newbie and this repo has helped me a lot. I want to mimic the results of ShowAndTell, ShowAttendAndTell. I have provided the path to the model as mle/fc/model-best.pth but an exception is raised Exception: Caption model not supported: newfc.

I changed the name of caption_model from new_fc to fc, but yet again I encounter an error. Any help will be highly appreciated.

@dmitriy-serdyuk it's using ResNet-101, and FC stands for the FC model in the Self-Critical Sequence Training paper, which can be regarded as a variant of ShowTell.

Mollylulu avatar Mollylulu commented on August 19, 2024

(screenshot of the error)
Hello, I downloaded the resnet101 folder and moved the model.pth & infos.pkl files into the directory where eval.py lives. Then, when I run the eval command as you describe, it just reports the error shown in the screenshot. Could you help me figure out where I made a mistake?

ruotianluo avatar ruotianluo commented on August 19, 2024

@Willowlululu I guess you are using Python 3? This repo only supports Python 2. Try self-critical.pytorch.

anuragrpatil avatar anuragrpatil commented on August 19, 2024

Hi @ruotianluo, thank you for the great repo! I was wondering, is there a pretrained transformer model in the Drive link?

ruotianluo avatar ruotianluo commented on August 19, 2024

There is; check out the self-critical.pytorch repo's model zoo.

anuragrpatil avatar anuragrpatil commented on August 19, 2024

@ruotianluo Thank you for the quick response! To check my understanding: fc_nsc, fc_rl, and att2in2 are from the self-critical paper, and updown is from the Anderson et al. paper. Apologies if I am missing anything here.

Screenshot 2020-04-18 at 1 27 34 PM

ruotianluo avatar ruotianluo commented on August 19, 2024

https://github.com/ruotianluo/self-critical.pytorch/blob/master/MODEL_ZOO.md

ydyrx-ldm avatar ydyrx-ldm commented on August 19, 2024

@jamiechoi1995

Adaptive Attention model
learning rate 1e-4
batch size 32
trained for 100 epochs
I use the code in self-critical repo

{'CIDEr': 1.0295328576254532, 'Bleu_4': 0.32367107232015596, 'Bleu_3': 0.4308636494026319, 'Bleu_2': 0.5710839754137301, 'Bleu_1': 0.7375622419883233, 'ROUGE_L': 0.5415854013591195, 'METEOR': 0.2603669044858015, 'SPICE': 0.19360318734522747}

Hi, I also want to use Adaptive Attention. What was your training command at the time? Looking forward to your answer.
