eric-yyjau / pytorch-superpoint

SuperPoint implemented in PyTorch: https://arxiv.org/abs/1712.07629

License: MIT License

deep-learning keypoints-detector descriptor kitti superpoint hpatches ms-coco

pytorch-superpoint's Introduction

pytorch-superpoint

This is a PyTorch implementation of "SuperPoint: Self-Supervised Interest Point Detection and Description" by Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich (arXiv 2018). The code is partially based on the TensorFlow implementation https://github.com/rpautrat/SuperPoint.

Please consider starring this repo if it helps your research. This repo is a by-product of our paper DeepFEPE (IROS 2020).

Differences between our implementation and original paper

  • Descriptor loss: We tested the descriptor loss with different methods, including a dense method (as in the paper, but slightly different) and a sparse method. We observe that the sparse loss converges more efficiently while giving similar performance, so the sparse method is the default setting here; see the sketch below.
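For intuition, here is a minimal sketch of the sparse idea (a simplified, hypothetical helper; the actual implementation lives in utils/loss_functions/sparse_loss.py and differs in details): sample descriptors at matched keypoint pairs plus random non-matches, and apply a hinge loss, instead of computing the dense loss over every pair of descriptor cells.

import torch
import torch.nn.functional as F

def sparse_descriptor_loss(desc_a, desc_b, uv_a, uv_b, num_non_matches=100,
                           pos_margin=1.0, neg_margin=0.2):
    # desc_*: (D, Hc, Wc) descriptor maps; uv_*: (N, 2) matched cell coords (x, y), long tensors.
    d_a = desc_a[:, uv_a[:, 1], uv_a[:, 0]]  # (D, N) descriptors at matches in image a
    d_b = desc_b[:, uv_b[:, 1], uv_b[:, 0]]  # (D, N) corresponding descriptors in image b
    sim_pos = (F.normalize(d_a, dim=0) * F.normalize(d_b, dim=0)).sum(0)
    loss_pos = torch.clamp(pos_margin - sim_pos, min=0).mean()  # pull matches together
    # Random cells in image b act as non-matches for each match in image a.
    Hc, Wc = desc_b.shape[1:]
    n = uv_a.shape[0] * num_non_matches
    rand_x = torch.randint(0, Wc, (n,))
    rand_y = torch.randint(0, Hc, (n,))
    d_neg = desc_b[:, rand_y, rand_x]                      # (D, N * num_non_matches)
    d_a_rep = d_a.repeat_interleave(num_non_matches, dim=1)
    sim_neg = (F.normalize(d_a_rep, dim=0) * F.normalize(d_neg, dim=0)).sum(0)
    loss_neg = torch.clamp(sim_neg - neg_margin, min=0).mean()  # push non-matches apart
    return loss_pos + loss_neg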

Results on HPatches

Task                                       | Homography estimation           | Detector metric     | Descriptor metric
                                           | Eps=1     Eps=3     Eps=5       | Repeatability  MLE  | NN mAP  Matching Score
Pretrained model                           | 0.44      0.77      0.83        | 0.606          1.14 | 0.81    0.55
SIFT (subpixel accuracy)                   | 0.63      0.76      0.79        | 0.51           1.16 | 0.70    0.27
superpoint_coco_heat2_0_170k_hpatches_sub  | 0.46      0.75      0.81        | 0.63           1.07 | 0.78    0.42
superpoint_kitti_heat2_0_50k_hpatches_sub  | 0.44      0.71      0.77        | 0.56           0.95 | 0.78    0.41
  • The pretrained model is from SuperPointPretrainedNetwork.
  • The evaluation is done with our evaluation scripts.
  • The COCO and KITTI pretrained models are included in this repo.

Installation

Requirements

  • python == 3.6
  • pytorch >= 1.1 (tested with 1.3.1)
  • torchvision >= 0.3.0 (tested with 0.4.2)
  • cuda (tested with cuda10)
conda create --name py36-sp python=3.6
conda activate py36-sp
pip install -r requirements.txt
pip install -r requirements_torch.txt # install pytorch

Path setting

  • Paths for datasets ($DATA_DIR) and logs are set in settings.py; a sketch of what that file holds is below.
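For orientation, a hypothetical settings.py along these lines (the variable names are assumptions for illustration; check the actual file in the repo):

# Hypothetical settings.py sketch -- names are assumptions, not necessarily the repo's exact ones.
DATA_PATH = 'datasets'  # $DATA_DIR: contains COCO/, HPatches/, KITTI/, synthetic_shapes/
EXPER_PATH = 'logs'     # checkpoints and exported predictions are written here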

Dataset

Datasets should be downloaded into $DATA_DIR. The Synthetic Shapes dataset will also be generated there. The folder structure should look like:

datasets/ ($DATA_DIR)
|-- COCO
|   |-- train2014
|   |   |-- file1.jpg
|   |   `-- ...
|   `-- val2014
|       |-- file1.jpg
|       `-- ...
|-- HPatches
|   |-- i_ajuntament
|   `-- ...
|-- synthetic_shapes  # will be automatically created
`-- KITTI (accumulated folders from raw data)
    |-- 2011_09_26_drive_0020_sync
    |   |-- image_00/
    |   `-- ...
    |-- ...
    |-- 2011_09_28_drive_0001_sync
    |   |-- image_00/
    |   `-- ...
    |-- ...
    |-- 2011_09_29_drive_0004_sync
    |   |-- image_00/
    |   `-- ...
    |-- ...
    |-- 2011_09_30_drive_0016_sync
    |   |-- image_00/
    |   `-- ...
    |-- ...
    `-- 2011_10_03_drive_0027_sync
        |-- image_00/
        `-- ...

Run the code

  • Notes:
    • You can start from any step (1-4) by downloading the corresponding intermediate results.
    • Training usually takes 8-10 hours on one NVIDIA 2080 Ti.
    • Training is currently supported on the COCO dataset (as in the original paper) and the KITTI dataset.
  • Tensorboard:
    • Log files are saved under runs/<export_task>/...

tensorboard --logdir=./runs/ [--host <static_ip_address>] [--port <port, e.g. 6008>]

1) Training MagicPoint on Synthetic Shapes

python train4.py train_base configs/magicpoint_shapes_pair.yaml magicpoint_synth --eval

You don't need to download the synthetic data; it is generated on the first run and exported to ./datasets. You can change this path in settings.py.

2) Exporting detections on MS-COCO / KITTI

This step runs homography adaptation (HA) to export pseudo ground truth for joint training; a sketch of the idea follows.
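As a rough sketch of the idea (assumptions: detector returns a heatmap the same size as the image, and sample_homography / warp_image are hypothetical helpers standing in for the repo's utilities in export.py and utils/):

import torch

def homography_adaptation(img, detector, sample_homography, warp_image, num=100):
    # img: (1, 1, H, W); aggregate detections over `num` random viewpoints.
    agg = detector(img)                  # heatmap under the identity homography
    count = torch.ones_like(agg)
    for _ in range(num - 1):
        H = sample_homography()          # random 3x3 homography
        heat = detector(warp_image(img, H))
        heat = warp_image(heat, torch.inverse(H))                   # unwarp heatmap back
        mask = warp_image(torch.ones_like(img), torch.inverse(H))   # valid region mask
        agg = agg + heat
        count = count + mask
    return agg / count  # averaged heatmap; threshold/NMS gives the pseudo labels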

  • Make sure the pretrained model in the config file is correct.
  • Make sure the COCO dataset is in $DATA_DIR (defined in settings.py).
  • Config file:
export_folder: <'train' | 'val'>  # set export for training or validation

General command:

python export.py <export task> <config file> <export folder> [--outputImg  # output images for visualization (space-inefficient)]

Export COCO (on the training set)

python export.py export_detector_homoAdapt configs/magicpoint_coco_export.yaml magicpoint_synth_homoAdapt_coco

Export COCO (on the validation set)

  • Edit 'export_folder' to 'val' in 'magicpoint_coco_export.yaml'
python export.py export_detector_homoAdapt configs/magicpoint_coco_export.yaml magicpoint_synth_homoAdapt_coco

Export KITTI

  • Config:
    • Check the 'root' in the config file.
    • Train/val split files are included in datasets/kitti_split/.
python export.py export_detector_homoAdapt configs/magicpoint_kitti_export.yaml magicpoint_base_homoAdapt_kitti

3) Training SuperPoint on MS-COCO / KITTI

You need pseudo ground truth labels to train the detector. Labels can be exported from step 2) or downloaded from the link. Then, as usual, you need to set the config file before training.

  • Config file:
    • root: specify your labels root
    • root_split_txt: where you put the train.txt / val.txt split files (not needed for COCO, needed for KITTI)
    • labels: the exported labels from homography adaptation
    • pretrained: specify the pretrained model (you can also train from scratch)
  • 'eval': turn on evaluation during training

General command

python train4.py <train task> <config file> <export folder> --eval

COCO

python train4.py train_joint configs/superpoint_coco_train_heatmap.yaml superpoint_coco --eval --debug

KITTI

python train4.py train_joint configs/superpoint_kitti_train_heatmap.yaml superpoint_kitti --eval --debug
  • Set your batch size (originally 1).
  • Refer to 'train_tutorial.md' for details.

4) Export / Evaluate the metrics on HPatches

  • Use the pretrained model or specify your model in the config file.
  • ./run_export.sh will run export and then evaluation.

Export

  • Download the HPatches dataset (link above) and put it in $DATA_DIR.
  • General command: python export.py <export task> <config file> <export folder>
  • Export keypoints, descriptors, and matches:
python export.py export_descriptor  configs/magicpoint_repeatability_heatmap.yaml superpoint_hpatches_test

Evaluate

python evaluation.py <path to npz files> [-r, --repeatibility | -o, --outputImg | -homo, --homography ]

  • Evaluate homography estimation / repeatability / matching scores:
python evaluation.py logs/superpoint_hpatches_test/predictions --repeatibility --outputImg --homography --plotMatching
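For intuition, here is a rough sketch of the repeatability metric (my paraphrase of the standard definition; the repo's evaluation.py may differ in details): a keypoint counts as repeated if, after mapping through the ground-truth homography, some keypoint from the other image lies within dist_thresh pixels.

import numpy as np

def repeatability(kpts_a, kpts_b, H, dist_thresh=3):
    # kpts_*: (N, 2) arrays of (x, y); H: 3x3 ground-truth homography from image a to b.
    pts = np.concatenate([kpts_a, np.ones((len(kpts_a), 1))], axis=1)  # homogeneous coords
    warped = (H @ pts.T).T
    warped = warped[:, :2] / warped[:, 2:3]  # project back to (x, y)
    dists = np.linalg.norm(warped[:, None, :] - kpts_b[None, :, :], axis=2)
    return (dists.min(axis=1) <= dist_thresh).mean()  # fraction of repeated keypoints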

5) Export/ Evaluate repeatability on SIFT

# export detection, description, matching
python export_classical.py export_descriptor configs/classical_descriptors.yaml sift_test --correspondence

# evaluate (use 'sift' flag)
python evaluation.py logs/sift_test/predictions --sift --repeatibility --homography 
  • Specify the pretrained model in the config file.

Pretrained models

Current best model

  • COCO dataset logs/superpoint_coco_heat2_0/checkpoints/superPointNet_170000_checkpoint.pth.tar
  • KITTI dataset logs/superpoint_kitti_heat2_0/checkpoints/superPointNet_50000_checkpoint.pth.tar

model from magicleap

pretrained/superpoint_v1.pth

Jupyter notebook

# show images saved in the folders
jupyter notebook
notebooks/visualize_hpatches.ipynb 

Updates (year.month.day)

  • 2020.08.05:
    • Updated PyTorch NMS (from #19).
    • Updated and tested the KITTI dataloader and labels on Google Drive (should fit the KITTI raw format).
    • Updated and tested SIFT evaluation in step 5.

Known problems

  • Test step 5: evaluation on SIFT.
  • Export the COCO dataset in low resolution (240x320) instead of high resolution (480x640).
  • Step 1 was done a long time ago; we are still testing it again along with steps 2-4. Please refer to our pretrained model or exported labels, or let us know how the whole pipeline works for you.
  • Warnings from TensorBoard.

Work in progress

  • Release notebooks with unit testing.
  • Dataset: ApolloScape/ TUM.

Citations

Please cite the original paper.

@inproceedings{detone2018superpoint,
  title={Superpoint: Self-supervised interest point detection and description},
  author={DeTone, Daniel and Malisiewicz, Tomasz and Rabinovich, Andrew},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops},
  pages={224--236},
  year={2018}
}

Please also cite our DeepFEPE paper.

@misc{2020_jau_zhu_deepFEPE,
  author = {You-Yi Jau and Rui Zhu and Hao Su and Manmohan Chandraker},
  title = {Deep Keypoint-Based Camera Pose Estimation with Geometric Constraints},
  year = {2020},
  eprint = {arXiv:2007.15122},
}

Credits

This implementation is developed by You-Yi Jau and Rui Zhu. Please contact You-Yi for any problems. Again, the work is based on the TensorFlow implementation by Rémi Pautrat and Paul-Edouard Sarlin and on the official SuperPointPretrainedNetwork. Thanks to Daniel DeTone for help during the implementation.

Posts

What have I learned from the implementation of deep learning paper?

pytorch-superpoint's People

Contributors

eric-yyjau, guppykang, saunair, valeyards


pytorch-superpoint's Issues

improve results

Hi @eric-yyjau,

Thanks for your great work reproducing SuperPoint in PyTorch.

I followed your training scripts many times, but always get bad results. The first GIF below is the result of my model; the second is the result of your released model.
[out.gif]
[out2.gif]

I exactly followed the three training steps of this repository. My SuperPoint model is trained on COCO data. Your model is from logs/superpoint_coco_heat2_0/checkpoints/superPointNet_170000_checkpoint.pth.tar.

I am confused about why my model detects far fewer keypoints than yours.

Do you have any suggestions for me? Thanks!

Error in the first step

2021-08-12 20:11:57 DESKTOP-LHM6IOU root[7360] INFO Running command TRAIN_BASE
2021-08-12 20:11:57 DESKTOP-LHM6IOU root[7360] INFO train on device: cuda
Traceback (most recent call last):
  File "C:\Users\User\anaconda3\envs\py36-sp\lib\site-packages\tensorboardX\record_writer.py", line 47, in directory_check
    factory = REGISTERED_FACTORIES[prefix]
KeyError: 'runs/train_base/magicpoint_synth_2021-08-12_20'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train4.py", line 142, in <module>
    args.func(config, output_dir, args)
  File "train4.py", line 41, in train_base
    return train_joint(config, output_dir, args)
  File "train4.py", line 62, in train_joint
    exper_name=args.exper_name, date=True))
  File "C:\Users\User\anaconda3\envs\py36-sp\lib\site-packages\tensorboardX\writer.py", line 299, in __init__
    self._get_file_writer()
  File "C:\Users\User\anaconda3\envs\py36-sp\lib\site-packages\tensorboardX\writer.py", line 354, in _get_file_writer
    **self.kwargs)
  File "C:\Users\User\anaconda3\envs\py36-sp\lib\site-packages\tensorboardX\writer.py", line 106, in __init__
    logdir, max_queue, flush_secs, filename_suffix)
  File "C:\Users\User\anaconda3\envs\py36-sp\lib\site-packages\tensorboardX\event_file_writer.py", line 104, in __init__
    directory_check(self._logdir)
  File "C:\Users\User\anaconda3\envs\py36-sp\lib\site-packages\tensorboardX\record_writer.py", line 51, in directory_check
    os.makedirs(path)
  File "C:\Users\User\anaconda3\envs\py36-sp\lib\os.py", line 220, in makedirs
    mkdir(name, mode)
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: 'runs/train_base/magicpoint_synth_2021-08-12_20:11:57'

Loader.py error

Hi, I am currently trying to use the synthetic dataset for the homography adaptation task and I am getting the error below:
Traceback (most recent call last):
  File "export.py", line 409, in <module>
    args.func(config, output_dir, args)
  File "/home/hmahmad/.local/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "export.py", line 243, in export_detector_homoAdapt_gpu
    data = dataLoader(config, dataset=task, export_task=export_task)
  File "/home/hmahmad/Desktop/Emin/pytorch-superpoint-master/utils/loader.py", line 105, in dataLoader_test
    from datasets.SyntheticDataset import SyntheticDataset
ImportError: cannot import name 'SyntheticDataset'
There is no file called SyntheticDataset inside the datasets folder. Any help is appreciated.

Thanks.

Step 4 error

hi, @eric-yyjau great repo.
I got an error in step 4, can you give me a hint? What thing I am doing wrong?


Traceback (most recent call last):
  File "train4.py", line 141, in <module>
    args.func(config, output_dir, args)
  File "train4.py", line 93, in train_joint
    train_agent.train()
  File "/home/users/jemllerena/Desktop/pytorch-superpoint/Train_model_frontend.py", line 277, in train
    loss_out = self.train_val_sample(sample_train, self.n_iter, True)
  File "/home/users/jemllerena/Desktop/pytorch-superpoint/Train_model_heatmap.py", line 315, in train_val_sample
    **self.desc_params
  File "/home/users/jemllerena/Desktop/pytorch-superpoint/utils/loss_functions/sparse_loss.py", line 243, in batch_descriptor_loss_sparse
    homographies[i].type(torch.float32), **options)
  File "/home/users/jemllerena/Desktop/pytorch-superpoint/utils/loss_functions/sparse_loss.py", line 212, in descriptor_loss_sparse
    num_masked_non_matches_per_match=num_masked_non_matches_per_match)
  File "/home/users/jemllerena/Desktop/pytorch-superpoint/utils/loss_functions/sparse_loss.py", line 107, in get_non_matches_corr
    img_b_mask=None)
  File "/home/users/jemllerena/Desktop/pytorch-superpoint/utils/correspondence_tools/correspondence_finder.py", line 261, in create_non_correspondences
    diffs_0_flattened = diffs_0.view(-1,1)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.


Which loss did you use? Dense or sparse?

Hi, there

I find it confusing that there are both a dense loss and a sparse loss. The config you provided uses the dense loss function, while the original paper used a sparse loss.

Which one did you use for the provided weight?

By the way, what does heat2.0 mean in the model name?

Cheers

how to use it with libtorch

Hello, I want to use this model with libtorch. Can you tell me how to convert the .pth.tar file to a .pt file?

training parameters

Thanks for the work !

I am training my own network, but I cannot get the same results as yours.
When I train MagicPoint, the precision is not steady: it could be 0.15 over one 2k iterations, then 0.25 over the next 2k iterations.
Could you please provide your training parameters?
Do you change your learning rate during training? And do you use the same image size, 240x320, for export and training with COCO?

Cheers!

SuperPointNet_gauss2

Hello eric-yyjau,

Thank you for your amazing work. Your code works without any problems. But I did not get the idea behind using SuperPointNet_gauss2 instead of the original SuperPointNet given by MagicLeap. Can you please share the relevant literature behind this implementation? It would be very helpful.

Homographic adapatation time is too long

Hi Eric,

The detector exporting time for homography adaptation on the MS-COCO dataset is taking too long. Is there any possibility to parallelize the code here? The GPU is not fully utilized during this process.

Thanks a lot for help.

Superpoint model on mobile devices

Hi, thank you for your good work. Have you considered how to improve the speed and apply it to mobile devices, such as replacing VGG with MobileNet? If you have tried this or have ideas about it, please tell me. Thank you, sincerely.

Code does not support torch 1.1

Hi, thank you for your good work. I tested your code and encountered an error:
pytorch-superpoint/utils/utils.py", line 351, in inv_warp_image_batch
    warped_img = F.grid_sample(img, src_pixel_coords, mode=mode, align_corners=True)
TypeError: grid_sample() got an unexpected keyword argument 'align_corners'

I find that align_corners is only available in PyTorch 1.3 and later:
torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=None)
But my cluster only supports CUDA 9.0, which does not support PyTorch 1.3.

I tried to change your code and remove align_corners from the grid_sample calls under PyTorch 1.1, but there is another error:
  args.func(config, output_dir, args)
  File "train4.py", line 42, in train_base
    return train_joint(config, output_dir, args)
  File "train4.py", line 95, in train_joint
    train_agent.train()
  File "/home/test/pytorch-superpoint/Train_model_frontend.py", line 277, in train
    loss_out = self.train_val_sample(sample_train, self.n_iter, True)
  File "/home/test/pytorch-superpoint/Train_model_heatmap.py", line 385, in train_val_sample
    loss.backward()
  File "/home/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/tensor.py", line 107, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Function AddBackward0 returned an invalid gradient at index 0 - expected type torch.cuda.FloatTensor but got torch.cuda.LongTensor

Do you have code that supports lower versions, or how can I solve the above problems? Thank you.

about kitti dataset train

@eric-yyjau
Thank you for your great work!
I notice that in Kitti_inh.py there are two files called "cam.txt" and "imu_pose_matrixs.txt" which are not in the KITTI dataset. Could you offer me these?
Looking forward to your reply. Thanks!

evaluation score

Hi, thank you for your great work!
I finished SuperPoint training with your code.
Thank you very much.

I understand that your code is based on https://github.com/rpautrat/SuperPoint,
and so I would like to ask 3 questions.

1. I tried to use your evaluation code and the pretrained model (by MagicLeap),
   with the HPatches-v (only viewpoint change) evaluation dataset.
   [screenshot 2020-08-30 21:50:12]
   Why is this homography estimation score (threshold 3 px) lower than the score of the pretrained model under the evaluation code of https://github.com/rpautrat/SuperPoint?
   [screenshot 2020-09-04 10:09:56]
   I think the HPatches-all (viewpoint and illumination change) score may show the same trend.

2. Is there a difference between your training procedure and https://github.com/rpautrat/SuperPoint?
   For example, the number of steps of the homographic adaptation procedure, MS-COCO data augmentation…
   I would like to know the precautions when switching from the code of https://github.com/rpautrat/SuperPoint to your code.

3. What is the difference between the SuperPointNet_gauss2 model and the SuperPointNet model?

Best,

Heatmap gets blur after trained with the warped detector loss

Hi, thank you for your excellent work!
I am using this project to train on my personal satellite dataset, and I have encountered a problem: after I train with the warped detector loss, the model's inferences on the training data become less precise; compare the two heatmaps 1_pretrain.jpg and 2_with_warp.jpg below. The second one (after careful training) finds more points, but each point covers a larger area and the center is not as confident and bright.

I feel the training is not sufficient, but I am not sure how to improve it. I wonder if I could get some suggestions from you; here is an introduction to how I use this repo.

How I train the model
I start from the original model (SuperPointNet_pretrained.py in your project) and load its original weights (superpoint_v1.pth).

Step 2: I crop my high-resolution images to 360x360 pieces as train/val data, and export the labels with the aforementioned model weights, based on 100 homographic aggregations.

Step 3: Data is kept at (360, 360) during training, and I also load the pretrained model as the initial weights. I wanted to use the original dense loss, but it seems memory-costly, so I keep the sparse loss. Then I train directly at step 3 for 3000 iterations (I use a small dataset for the experiment: only 400 images, bs=16) with lr=0.0001, then another 3000 iterations with lr=0.00001.

Observations
When I export labels for the second round, I find that the heatmaps on the same image are blurry and not so bright (a grey area instead of just one shiny point).

Alternatives
I tried disabling the warped loss and the descriptor loss to let it "overfit" on its own exported labels, and I do get an overfitted result: the points are bright and normally cover just 1 pixel (as desired); check 3_without_warp.jpg.

Please let me know if you have encountered anything similar, or could you give me some suggestions about training tricks? Thank you very much in advance!

1_pretrain.jpg [heatmap image]

2_with_warp.jpg [heatmap image]

3_without_warp.jpg [heatmap image]

about output HPatches image

Hello.
Thank you for your great work.

I tried to train SuperPoint and run the HPatches evaluation with your code,
and I checked the output matching images.

I find keypoints detected at places that are clearly not keypoints (figures 1 & 2).
[figures: 5cv_b, 15cv_b]

Did you get similar output images?
And do you have any idea how to improve this?

Diff between SuperPointNet and SuperPointNet_gauss2

Hi, firstly I would like to thank you for your work on this repository.

SuperPointNet
[network structure image]

SuperPointNet_gauss2
[network structure image]

Could you explain:
1. Why do you use SuperPointNet rather than SuperPointNet_gauss2?
2. What is the difference between SuperPointNet and SuperPointNet_gauss2? The above pictures show that there is no difference in terms of network structure.

Error in running on GPUs

When we run the command below:
python train4.py train_joint configs/superpoint_coco_train_heatmap.yaml superpoint_coco --eval --debug

we get the errors below:

File "train4.py", line 145, in args.func(config, output_dir, args) File "train4.py", line 95, in train_joint train_agent.train() File "/mnt/disks/user/project/Train_model_frontend.py", line 277, in train loss_out = self.train_val_sample(sample_train, self.n_iter, True) File "/mnt/disks/user/project/Train_model_heatmap.py", line 315, in train_val_sample **self.desc_params File "/mnt/disks/user/project/utils/loss_functions/sparse_loss.py", line 243, in batch_desc
Traceback (most recent call last): File "train4.py", line 145, in args.func(config, output_dir, args) File "train4.py", line 95, in train_joint train_agent.train() File "/mnt/disks/user/project/Train_model_frontend.py", line 277, in train loss_out = self.train_val_sample(sample_train, self.n_iter, True) File "/mnt/disks/user/project/Train_model_heatmap.py", line 315, in train_val_sample **self.desc_params File "/mnt/disks/user/project/utils/utils.py", line 815,

server shut down when runing export.py

Hi. Really great work. I just finished the pretraining part and began the export on COCO. Everything goes well until it has loaded several pictures, and then the server suddenly shuts down. I tried several times. It cannot be GPU overheating, because it shuts down at the very beginning. Do you have any idea why this happens? Maybe it's a CUDA version problem?

Inference module for custom trained model

Hello, thanks for your great work. After I trained SuperPoint on my dataset, I couldn't find a way to extract feature points on my own database. I tried to simply load images with OpenCV and compute pred with those images, but got the error 'Expected 4-dimensional input for 4-dimensional weight [64, 1, 3, 3], but got 3-dimensional input of size [992, 744, 3] instead'. So I used your dataloader and modified it into Infer.py, and for configuring the dataset path I changed 'path' to an option in the yaml.
Thanks again for this code, but I still have many problems with your dataloader; it seems it's not simply loading images. I'd appreciate it if you could share more details about it.

Custom Dataset

How do I prepare the custom dataset for training this model?

Why is the benchmark different from SuperPoint paper

Hi! Thank you for your great work!

I noticed that the SIFT results in your benchmark are quite different from those in the original SuperPoint paper.
The results of SIFT in the original paper and in your benchmark are, respectively:

                    E = 1   E = 3   E = 5   Rep.   MLE    NN mAP   MS
SIFT in the paper:  0.42    0.68    0.76    0.50   0.83   0.69     0.31   (480x640, top 1000)
SIFT in this work:  0.60    0.75    0.80    0.47   1.13   0.71     0.31   (480x640, top 1000)

P.S. To get the top-1000 result, I changed classical_detectors_descriptors.py line 50
from "sift = cv2.xfeatures2d.SIFT_create(contrastThreshold=1e-5)"
to "sift = cv2.xfeatures2d.SIFT_create(1000, contrastThreshold=1e-5)".

Could you please explain what causes these differences, especially for the MLE and homography? Thanks a lot!

An error occurred while running the first step

Traceback (most recent call last):
  File "train4.py", line 141, in <module>
    args.func(config, output_dir, args)
  File "train4.py", line 41, in train_base
    return train_joint(config, output_dir, args)
  File "train4.py", line 93, in train_joint
    train_agent.train()
  File "/home/hyl/program/pytorch-superpoint/Train_model_frontend.py", line 275, in train
    for i, sample_train in tqdm(enumerate(self.train_loader)):
  File "/home/hyl/anaconda3/envs/py36-sp/lib/python3.6/site-packages/tqdm/std.py", line 1129, in __iter__
    for obj in iterable:
  File "/home/hyl/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 819, in __next__
    return self._process_data(data)
  File "/home/hyl/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data
    data.reraise()
  File "/home/hyl/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/_utils.py", line 369, in reraise
    raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/hyl/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/hyl/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/hyl/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/hyl/program/pytorch-superpoint/datasets/SyntheticDataset_gaussian.py", line 443, in __getitem__
    img.squeeze(), inv_homography, mode="bilinear"
  File "/home/hyl/program/pytorch-superpoint/utils/utils.py", line 370, in inv_warp_image
    warped_img = inv_warp_image_batch(img, mat_homo_inv, device, mode)
  File "/home/hyl/program/pytorch-superpoint/utils/utils.py", line 351, in inv_warp_image_batch
    warped_img = F.grid_sample(img, src_pixel_coords, mode=mode, align_corners=True)
TypeError: grid_sample() got an unexpected keyword argument 'align_corners'

I have some questions I'd like to ask and hope you can answer them for me. The above error occurred when I was running the first step; train_joint() is also used in the next few parts. In addition, how do I install roi_pool from the training tutorial? Hoping for your reply!

Index error when I train superpoint

Hello, Eric.
When I train SuperPoint in the third step,
I get an error after 1188 iterations:
Traceback (most recent call last):
  File "train4.py", line 141, in <module>
    args.func(config, output_dir, args)
  File "train4.py", line 93, in train_joint
    train_agent.train()
  File "/home/lijt/PycharmProjects/pytorch-superpoint-master/Train_model_frontend.py", line 275, in train
    for i, sample_train in tqdm(enumerate(self.train_loader)):
  File "/home/lijt/anaconda3/envs/python36/lib/python3.6/site-packages/tqdm/std.py", line 1178, in __iter__
    for obj in iterable:
  File "/home/lijt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/lijt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1179, in _next_data
    return self._process_data(data)
  File "/home/lijt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
    data.reraise()
  File "/home/lijt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/_utils.py", line 429, in reraise
    raise self.exc_type(msg)
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/lijt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/lijt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/lijt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/lijt/PycharmProjects/pytorch-superpoint-master/datasets/Coco.py", line 294, in __getitem__
    labels = points_to_2D(pnts, H, W)
  File "/home/lijt/PycharmProjects/pytorch-superpoint-master/datasets/Coco.py", line 234, in points_to_2D
    labels[pnts[:, 1], pnts[:, 0]] = 1
IndexError: index -9223372036854775808 is out of bounds for axis 0 with size 240
I trained many times from scratch and it happens every time.
I don't know how to solve the problem.

training_model_frontend

Traceback (most recent call last):
  File "train4.py", line 144, in <module>
    args.func(config, output_dir, args)
  File "train4.py", line 44, in train_base
    return train_joint(config, output_dir, args)
  File "train4.py", line 96, in train_joint
    train_agent.train()
  File "C:\Users\User\Desktop\python\STUDY_CNN_imagematching\pytorch-superpoint-master-venv\Train_model_frontend.py", line 275, in train
    for i, sample_train in tqdm(enumerate(self.train_loader)):
  File "C:\Users\User\Desktop\python\STUDY_CNN_imagematching\pytorch-superpoint-master-venv\venv\lib\site-packages\torch\utils\data\dataloader.py", line 352, in __iter__
    return self._get_iterator()
  File "C:\Users\User\Desktop\python\STUDY_CNN_imagematching\pytorch-superpoint-master-venv\venv\lib\site-packages\torch\utils\data\dataloader.py", line 294, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\User\Desktop\python\STUDY_CNN_imagematching\pytorch-superpoint-master-venv\venv\lib\site-packages\torch\utils\data\dataloader.py", line 801, in __init__
    w.start()
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\multiprocessing\pool.py", line 528, in __reduce__
    'pool objects cannot be passed between processes or pickled'
NotImplementedError: pool objects cannot be passed between processes or pickled

(venv) C:\Users\User\Desktop\python\STUDY_CNN_imagematching\pytorch-superpoint-master-venv>Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Pretraining the MagicPoint with GPU

First of all, thank you for the amazing work translating TensorFlow to PyTorch.

I'm currently training the MagicPoint network with the Synthetic Shapes dataset.
When MagicPoint starts training based on your code, I find that it doesn't use any GPU power.

Does this code support GPU training?

export kitti error

Hello, when I execute the export KITTI command, it shows FileNotFoundError: [Errno 2] No such file or directory: 'logs/magicpoint_base_homoAdapt_kitti/predictions/train/2011_10_03_drive_0027_sync_02/0000004074.npz', and then it gets stuck. All the previous programs run fine. My magicpoint_kitti_export.yaml is:

dataset: 'Kitti_inh' # 'coco' 'hpatches', 'Kitti', ''
export_folder: 'train'
alteration: 'all' # 'all' 'i' 'v'
root: 'datasets/kitti_wVal' # root for dataset
root_split_txt: 'datasets/kitti_split' # split file provided in datasets/kitti_split ,

Have you ever been in this situation? I'm looking forward to your reply.

ValueError: need at least one array to stack

2931it [11:44, 4.18it/s][01/13/2021 21:05:38 INFO] name: COCO_train2014_000000456425
outputs: torch.Size([1, 240, 320])
pts: (3, 0)
heatmap: (244, 324)
2931it [11:46, 4.15it/s]
Traceback (most recent call last):
  File "export.py", line 408, in <module>
    args.func(config, output_dir, args)
  File "/home/mems/anaconda3/envs/pytorch-superpoint/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "export.py", line 318, in export_detector_homoAdapt_gpu
    pts = fe.soft_argmax_points([pts])
  File "/home/mems/new_home/superpoint/pytorch-superpoint-Test/models/model_wrap.py", line 215, in soft_argmax_points
    patches = np.stack(patches)
  File "<__array_function__ internals>", line 6, in stack
  File "/home/mems/anaconda3/envs/pytorch-superpoint/lib/python3.6/site-packages/numpy/core/shape_base.py", line 423, in stack
    raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack

PR to kornia?

Hi,

Thank you for the great repo! Would you be interested in making your SuperPoint pretrained model available via kornia? https://github.com/kornia/kornia

Kornia is "OpenCV in PyTorch," and we already have classical descriptors like SIFT, plus pretrained HardNet and SOSNet from the authors. I believe that could make more people use your model and cite your work.

Best, Dmytro.

Which mask_3d_flattened is used?

Hello, I can't think of a better name for this, but oh well. Here is the case:

File Train_model_heatmap.py, line 301: here you're using mask_3D_flattened as mask_desc. Though, if the bool value if_warp == True, then you're replacing the value of this mask_3D_flattened (which was previously defined as mask_3D_flattened = self.getMasks(mask_2D, self.cell_size, device=self.device)) with a new value at line 278, mask_3D_flattened = self.getMasks(mask_warp_2D, self.cell_size, device=self.device). So, in the further calculations, you're using the warped mask instead of the normal one. Is that algorithmically right?

Thanks in advance for explanations.

Sub-pixel

Hi,
I've been implementing SuperPoint and got good results. However, I found your repo now and I've seen that you use sub-pixel refinement. Are you using sub-pixel accuracy in the coordinates of the detected keypoints? Could you briefly explain how you do that?
Have you noticed improvements in homography estimation errors?

Best regards,
Pedro
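(For context, a minimal sketch of the soft-argmax idea as I understand it; this is a simplified, hypothetical helper, not the repo's soft_argmax_points: each detected keypoint is refined by the probability-weighted average of coordinates in a small heatmap patch around it.)

import torch

def soft_argmax_refine(heatmap, pts, patch_size=5):
    # heatmap: (H, W); pts: (N, 2) integer keypoints (x, y), assumed away from the border.
    r = patch_size // 2
    ys, xs = torch.meshgrid(torch.arange(-r, r + 1, dtype=torch.float32),
                            torch.arange(-r, r + 1, dtype=torch.float32))
    refined = []
    for x, y in pts.long():
        patch = heatmap[y - r:y + r + 1, x - r:x + r + 1]
        w = patch / patch.sum()                   # normalize patch to a distribution
        dx, dy = (w * xs).sum(), (w * ys).sum()   # expected sub-pixel offset
        refined.append(torch.stack([x + dx, y + dy]))
    return torch.stack(refined)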

about export.py

Hi!
Thank you for your great work!!
I have a question about procedure (2).

I tried exporting detections on MS-COCO with the GPU, but it failed.
Do you know why it doesn't work?
There is no error, but the tqdm bar doesn't progress.

[screenshots 2020-08-14]

For now, I can export with the CPU.

Loading pretrained weight

Hello,
Thank you for your effort to make it!

I found something strange in Train_model_frontend.py:

mode = "" if path[:-3] == ".pth" else "full"

I guess it checks whether the suffix is '.pth' or 'tar.gz'. If that's correct, the above code should be changed to:

mode = "" if path[-4:] == ".pth" else "full"

Thanks again, please let me know if I miss something!
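(A quick illustrative check of the slicing, using a made-up filename:)

path = "superPointNet.pth"
print(path[:-3] == ".pth")    # False: path[:-3] drops the last 3 chars; it never equals ".pth"
print(path[-4:] == ".pth")    # True: compares the actual 4-character suffix
print(path.endswith(".pth"))  # True: the idiomatic suffix test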

Confusion in the way descriptors of the keypoints are extracted

Hi,

Thank you for the pytorch implementation of SuperPoint.
Reference code:

def desc_to_sparseDesc(self):
        # pts_nms_batch = [self.getPtsFromHeatmap(h) for h in heatmap_np]
        desc_sparse_batch = [self.sample_desc_from_points(self.outs['desc'], pts) for pts in self.pts_nms_batch]
        self.desc_sparse_batch = desc_sparse_batch
        return desc_sparse_batch

In the file Val_model_heatmap.py, I see that you are using the function desc_to_sparseDesc to get the descriptors of the relevant keypoints, but pts seems to be taken from a heatmap of size (H, W), while outs['desc'] here is (D, H/8, W/8), and there is no upsampling operation being done. How does this get the descriptors at the relevant keypoints?

Thanks
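(For reference, a minimal sketch of how descriptors are commonly sampled from the coarse map at keypoint locations via bilinear interpolation, as in the original MagicLeap demo code; this is an illustration, not necessarily this repo's exact sample_desc_from_points.)

import torch
import torch.nn.functional as F

def sample_desc_from_points(coarse_desc, pts, H, W):
    # coarse_desc: (1, D, H/8, W/8); pts: (N, 2) keypoints as (x, y) in pixel coords.
    samp = torch.as_tensor(pts, dtype=torch.float32).clone()
    samp[:, 0] = samp[:, 0] / (W / 2.0) - 1.0  # map x to [-1, 1]
    samp[:, 1] = samp[:, 1] / (H / 2.0) - 1.0  # map y to [-1, 1]
    grid = samp.view(1, 1, -1, 2)              # grid_sample expects (N, Hout, Wout, 2)
    desc = F.grid_sample(coarse_desc, grid, mode='bilinear', align_corners=True)
    desc = desc.view(coarse_desc.shape[1], -1) # (D, N): one column per keypoint
    return F.normalize(desc, dim=0)            # L2-normalize each descriptor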

pytorch version

Thanks for the awesome repo!

torch.nn.functional.grid_sample doesn't have align_corners in torch < 1.3.0, so you might want to change the requirements or change the code :)
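(A hedged compatibility sketch, my suggestion rather than anything in the repo: before torch 1.3.0, grid_sample behaved as align_corners=True by default, so falling back without the keyword keeps the behavior consistent.)

import torch.nn.functional as F

def grid_sample_compat(img, grid, mode='bilinear'):
    # torch >= 1.3.0 accepts align_corners; older versions raise TypeError.
    try:
        return F.grid_sample(img, grid, mode=mode, align_corners=True)
    except TypeError:
        # torch < 1.3.0: no align_corners keyword; old default matched align_corners=True
        return F.grid_sample(img, grid, mode=mode)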

Binary cross entropy loss

Hi,

Thanks for this great repo.
I have a question about the detector loss. I noticed that you used a binary cross-entropy loss while dividing the labels of the interest points by the norm of each bin. In the paper and in the TensorFlow implementation, they used a cross-entropy loss while randomly choosing one of the interest points in each bin.
Can you explain why you used this?

Is this a normal result?

Thank you for your great work!
I just ran the first training step, "Training MagicPoint on Synthetic Shapes," and looked at the result in TensorBoard. The train precision is 0.25 and the train recall is 0.4. Is this a normal result?

About load the Coco dataset when export(export.py)

Hello, I am sorry to bother you again, but I have met new problems. When exporting the COCO dataset, the dataset is 'Coco' in the config file, while the dataset is 'coco' in the dataset loader. The file in ./datasets is also 'Coco.py'. Looking forward to your reply! Thanks!

data:
  dataset: 'Coco'  # 'coco' 'hpatches'

if task == "coco":

if task == "coco" or "Kitti":

elif dataset == 'coco':
    from datasets.coco import Coco
    test_set = Coco(

About the descriptor loss

Hi, thank you for your great work! What is the difference between the dense loss and the sparse loss?
I am looking forward to your reply. Thanks!
I am looking forward to your reply.Thanks!

why need to crop to the same length?

hi, eric-yyjau

Why is it necessary to crop to the same length?

crop to the same length

shuffle = True
if not shuffle: print("shuffle: ", shuffle)
choice = crop_or_pad_choice(uv_b_matches.shape[0], num_matching_attempts, shuffle=shuffle)
choice = torch.tensor(choice, dtype=torch.bool)  # .type(torch.bool)
uv_a = uv_a[choice]
uv_b_matches = uv_b_matches[choice]

by the way, here is the error about it:

Traceback (most recent call last):
  File "train4.py", line 150, in <module>
    args.func(config, output_dir, args)
  File "train4.py", line 102, in train_joint
    train_agent.train()
  File "C:\Users\User\Desktop\python\STUDY_CNN_imagematching\pytorch-superpoint-master-venv\Train_model_frontend.py", line 277, in train
    loss_out = self.train_val_sample(sample_train, self.n_iter, True)
  File "C:\Users\User\Desktop\python\STUDY_CNN_imagematching\pytorch-superpoint-master-venv\Train_model_heatmap.py", line 315, in train_val_sample
    **self.desc_params
  File "C:\Users\User\Desktop\python\STUDY_CNN_imagematching\pytorch-superpoint-master-venv\utils\loss_functions\sparse_loss.py", line 243, in batch_descriptor_loss_sparse
    homographies[i].type(torch.float32), **options)
  File "C:\Users\User\Desktop\python\STUDY_CNN_imagematching\pytorch-superpoint-master-venv\utils\loss_functions\sparse_loss.py", line 186, in descriptor_loss_sparse
    uv_a = uv_a[choice]
IndexError: The shape of the mask [1000] at index 0 does not match the shape of the indexed tensor [873, 2] at index 0

cuDNN error: CUDNN_STATUS_MAPPING_ERROR

Hi! Thank you for your great work!
I'm sorry to have disturbed you. I ran into a problem while running step 4 that has been bothering me for a long time; I hope you can give me some suggestions.
I'm running on an NVIDIA RTX 3090. My experimental environment is as follows.
Requirements:

  • python == 3.6.2
  • pytorch == 1.4.0
  • torchvision == 0.5.0
  • cuda 10.0 + cudnn 7.6.3
    The problem is shown in the figure below.
    Thank you in advance!

Error in the first step.

Hi @eric-yyjau. Thanks for the great repo.

I actually have a problem when I run the first step. It seems to be the data loader, but I don't know. Can you give me a hint?

Error:


Traceback (most recent call last):
  File "train4.py", line 141, in <module>
    args.func(config, output_dir, args)
  File "train4.py", line 41, in train_base
    return train_joint(config, output_dir, args)
  File "train4.py", line 93, in train_joint
    train_agent.train()
  File "/home/jeffri/Desktop/pytorch-superpoint/Train_model_frontend.py", line 275, in train
    for i, sample_train in tqdm(enumerate(self.train_loader)):
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/tqdm/std.py", line 1171, in __iter__
    for obj in iterable:
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
    return self._process_data(data)
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
    data.reraise()
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/_utils.py", line 428, in reraise
    raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/numpy/lib/format.py", line 583, in _read_array_header
    d = safe_eval(header)
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/numpy/lib/utils.py", line 1007, in safe_eval
    return ast.literal_eval(source)
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/ast.py", line 48, in literal_eval
    node_or_string = parse(node_or_string, mode='eval')
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/ast.py", line 35, in parse
    return compile(source, filename, mode, PyCF_ONLY_AST)
  File "<unknown>", line 1
    {'descr': '<f8', 'fortran_order': False, 'shape': (5, 2), }
                                                                ^
SyntaxError: invalid character in identifier

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/jeffri/Desktop/pytorch-superpoint/datasets/SyntheticDataset_gaussian.py", line 373, in __getitem__
    pnts = np.load(sample["points"])  # (y, x)
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/numpy/lib/npyio.py", line 440, in load
    pickle_kwargs=pickle_kwargs)
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/numpy/lib/format.py", line 717, in read_array
    shape, fortran_order, dtype = _read_array_header(fp, version)
  File "/home/jeffri/anaconda3/envs/py36-sp/lib/python3.6/site-packages/numpy/lib/format.py", line 586, in _read_array_header
    raise ValueError(msg.format(header, e))
ValueError: Cannot parse header: "{'descr': '<f8', 'fortran_order': False, 'shape': (5, 2), } \xa0 \n"
Exception: SyntaxError('invalid character in identifier', ('<unknown>', 1, 65, "{'descr': '<f8', 'fortran_order': False, 'shape': (5, 2), } \xa0 \n"))


Thanks

python package version (12/14)

Package Version


absl-py 0.8.1
appdirs 1.4.3
astor 0.8.1
attrs 19.3.0
backcall 0.1.0
black 19.10b0
bleach 3.1.0
cachetools 3.1.1
certifi 2019.11.28
chardet 3.0.4
Click 7.0
coloredlogs 10.0
cycler 0.10.0
decorator 4.4.1
defusedxml 0.6.0
entrypoints 0.3
gast 0.2.2
google-auth 1.8.2
google-auth-oauthlib 0.4.1
google-pasta 0.1.8
grpcio 1.25.0
h5py 2.10.0
humanfriendly 4.18
idna 2.8
imageio 2.6.1
imgaug 0.3.0
importlib-metadata 1.3.0
ipykernel 5.1.3
ipython 7.10.1
ipython-genutils 0.2.0
ipywidgets 7.5.1
jedi 0.15.1
Jinja2 2.10.3
joblib 0.14.1
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 5.3.4
jupyter-console 6.0.0
jupyter-core 4.6.1
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
kiwisolver 1.1.0
Markdown 3.1.1
MarkupSafe 1.1.1
matplotlib 3.1.2
mistune 0.8.4
more-itertools 8.0.2
nbconvert 5.6.1
nbformat 4.4.0
networkx 2.4
notebook 6.0.2
numpy 1.17.4
oauthlib 3.1.0
opencv-contrib-python 3.4.2.16
opencv-python 3.4.2.16
opencv-python-headless 4.1.2.30
opt-einsum 3.1.0
pandocfilters 1.4.2
parso 0.5.1
pathspec 0.6.0
pexpect 4.7.0
pickleshare 0.7.5
Pillow 6.2.1
pip 19.3.1
prometheus-client 0.7.1
prompt-toolkit 2.0.10
protobuf 3.11.1
ptyprocess 0.6.0
pyasn1 0.4.8
pyasn1-modules 0.2.7
Pygments 2.5.2
pyparsing 2.4.5
pyrsistent 0.15.6
python-dateutil 2.8.1
PyWavelets 1.1.1
PyYAML 5.2
pyzmq 18.1.1
qtconsole 4.6.0
regex 2019.12.9
requests 2.22.0
requests-oauthlib 1.3.0
rsa 4.0
scikit-image 0.16.2
scikit-learn 0.22
scipy 1.3.3
Send2Trash 1.5.0
setuptools 42.0.2.post20191203
Shapely 1.6.4.post2
six 1.13.0
tensorboard 1.14.0
tensorboardX 1.9
tensorflow 1.14.0
tensorflow-estimator 1.14.0
termcolor 1.1.0
terminado 0.8.3
testpath 0.4.4
toml 0.10.0
torch 1.3.1
torchgeometry 0.1.2
torchsummary 1.5.1
torchvision 0.4.2
tornado 6.0.3
tqdm 4.40.2
traitlets 4.3.3
typed-ast 1.4.0
urllib3 1.25.7
wcwidth 0.1.7
webencodings 0.5.1
Werkzeug 0.16.0
wheel 0.33.6
widgetsnbextension 3.5.1
wrapt 1.11.2
zipp 0.6.0
