
Comments (27)

DrFridolin commented on May 19, 2024

When I try to train the model myself, by omitting the resume argument, I get the following:

Epoch 0/500 (results/sema3d/trainval_best):
100%|█████████████████████████████████████████████████████████████████| 7/7 [00:06<00:00, 1.09it/s]
/home/local/SRI/e28454/repositories/SuperPointGraphs/learning/metrics.py:66: RuntimeWarning: invalid value encountered in true_divide
return sum(values) / class_seen
-> Train accuracy: 80.78873732074152, Loss: 0.655669653023194
Epoch 1/500 (results/sema3d/trainval_best):
100%|█████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00, 1.20it/s]
-> Train accuracy: 100.0, Loss: 5.94529438607034e-06
Epoch 2/500 (results/sema3d/trainval_best):
100%|█████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00, 1.27it/s]
-> Train accuracy: 100.0, Loss: 6.107642467968941e-08
Epoch 3/500 (results/sema3d/trainval_best):
100%|█████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00, 1.34it/s]
-> Train accuracy: 100.0, Loss: 0.0
Epoch 4/500 (results/sema3d/trainval_best):
100%|█████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00, 1.26it/s]
-> Train accuracy: 100.0, Loss: 0.0

The rest of the epochs are identical in that train accuracy shows 100.0 and loss shows 0.0 for each one.

loicland commented on May 19, 2024

Hi,

The test sets of Semantic3D are unannotated, as you need to submit on their website to get a test score.

Consequently the label field is all zeros and you can't learn from it. You should use the training set to learn your models, or our split into training set / validation set to validate your changes.

To write the prediction clouds you should use the visualize function with --output_type p. If the _pred.ply file in /clouds is grey, then we have a problem.

Does the reduced_test folder contain a predictions_testred.h5 file? If so, can you check its values?

DrFridolin commented on May 19, 2024

Hi Loic,

First off, thank you very much for your prompt responses.

I know that test sets are not annotated. I was just trying to produce some segmentation results on the reduced test set to look at it visually. I tried using the pre-trained model with the --resume setting, and I also tried training my own model from scratch. I assume I should be able to do these.

When I run learning/main.py and then partition/write_Semantic3d.py with the suggested parameters, I get .label files under $SEMA3D_DIR/labels that have all 0s in them and nothing else. I assume that's why the generated visualization is all grey.

When calling partition/visualize.py, I use --output_type ifprs as suggested.

There is a predictions_testred.h5 file under results, and it has the following values:

8948 4446 0d0a 1a0a 0000 0000 0008 0800
0400 1000 0000 0000 0000 0000 0000 0000
...
0100 0000 0baf 165b 0000 8000 0000 0000

It has all zeros after these lines.

Another thing that might be related:
I was initially getting an "incorrectly setup Cupy" error when I was running /learning/main.py. When I installed "cupy-cuda90" instead of just "cupy" with pip install, the error went away. Could this be somehow related? I have cuda 9.0 installed.

DrFridolin commented on May 19, 2024

I'm beginning to wonder whether it's the fact that I have CUDA 9.0 that is the problem. I installed PyTorch 0.3.1 following your suggestion, but now it's not clear to me if this version works with CUDA 9.0.

Any thoughts?

loicland commented on May 19, 2024

I meant to look at the values with an h5 reader, like h5py or hview :)
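For instance, a minimal h5py sketch (path taken from this thread's output directory; adjust as needed):

import h5py

# walk the file and print every prediction dataset with its shape and value range;
# min == max == 0 everywhere would mean every superpoint is assigned class 0
def show(name, obj):
    if isinstance(obj, h5py.Dataset):
        arr = obj[...]
        print(name, arr.shape, 'min:', arr.min(), 'max:', arr.max())

with h5py.File('results/sema3d/trainval_best/predictions_testred.h5', 'r') as f:
    f.visititems(show)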

I use CUDA 9, but I would be very surprised if CUDA 9 was the culprit here.

Does the partition step go well? Does the partition file look sensible? Can you run the inference in debug mode and check the values of embeddings and outputs in eval_final?

DrFridolin commented on May 19, 2024

I checked the contents of the predictions_testred.h5 file with hview and there is an array of all zeros for each point cloud in the reduced test set. This is consistent with the all-zero label values I get for these point clouds.

The partition step seems to be going well, and I can visualize the color-coded ply file with the computed partitions. Attached is a screenshot. I think the partitions look sensible.

[screenshot: color-coded visualization of the computed partitions]

When you say "run inference in debug mode", do you mean running /learning/main.py in debug mode with some sort of IDE? Should I check the values of the outputs in the line below, or something else?

acc_test, oacc_test, avg_iou_test, per_class_iou_test, predictions_test, avg_acc_test, confusion_matrix = eval_final()

Regarding the CUDA version: can we even use PyTorch 0.3.1 with CUDA 9.0? I was following the instructions here: https://pytorch.org/previous-versions/ but

conda install pytorch=0.3.1 cuda90 -c soumith

gives me the following error:

Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:

  • cuda90

So I actually ended up installing pytorch with:

conda install pytorch=0.3.1 cuda80 -c soumith

even though I have cuda 9.0 installed. Could this be related?

EDIT: I tried uninstalling pytorch and reinstalling with pip using the linux binary precompiled with Cuda 9.0 (cu90/torch-0.3.1-cp36-cp36m-linux_x86_64.whl). It did not seem to make a difference.
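As a quick sanity check on the install, the following sketch reports which build of PyTorch is actually in the environment (torch.version.cuda gives the CUDA version the binary was built against; very old releases may not expose it):

import torch

# which PyTorch build is installed, and can it reach the GPU?
print('torch version:', torch.__version__)
print('built for CUDA:', torch.version.cuda)
print('CUDA available:', torch.cuda.is_available())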

loicland commented on May 19, 2024

I can't test the different versions right now, but I would be surprised if that's the problem.

Ideally yes, you would run the code in an IDE in debug mode, put a breakpoint at line 256 of learning/main.py and run the inference.

By looking at clouds_data (containing the superpoints), embeddings (containing the pointnet embeddings) and outputs (containing the output of the graph convolutional network), we should hopefully be able to see what went wrong!
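If an IDE is not at hand, Python's built-in pdb can stand in for it; a minimal sketch, assuming you edit learning/main.py at the suggested line:

# add this single line at the suggested breakpoint, then run the usual training command
import pdb; pdb.set_trace()
# at the (Pdb) prompt, inspect the three variables, e.g.:
#   p clouds_data[0][:3]
#   p embeddings
#   p outputs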

DrFridolin commented on May 19, 2024

Hi Loic,

I've run the code through PyCharm and here are the contents of the related variables. Is this helpful?

EMBEDDINGS:
Variable containing:
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
-1.6205 -0.7200 -2.4746 ... 1.5480 0.5685 0.6032
... ⋱ ...
-0.8535 -0.5737 -0.7068 ... 0.5264 0.8831 -0.5381
-0.1578 0.2041 0.1181 ... -0.6903 0.8855 -0.1793
-0.3028 0.8053 -1.2218 ... -0.1014 0.8992 -0.6746
[torch.cuda.FloatTensor of size 2982x32 (GPU 0)]


OUTPUTS:
Variable containing:
10.9338 -6.4504 -3.9939 ... -3.4705 -5.9629 -6.4918
10.0552 -4.1477 -3.6375 ... -4.0481 -5.3293 -5.4333
17.3586 -13.9171 -9.1304 ... -5.9031 -8.8613 -18.1583
... ⋱ ...
13.8326 -12.5954 -7.4242 ... -4.6192 -6.9181 -10.7421
8.9287 -5.6563 -5.7933 ... -4.1726 -2.9312 -7.1534
12.2733 -4.8216 -5.4621 ... -5.0394 -5.5727 -9.9124
[torch.cuda.FloatTensor of size 2982x8 (GPU 0)]


CLOUDS_DATA:

<class 'tuple'>: (['test_reduced/MarketplaceFeldkirch_Station4.0', 'test_reduced/MarketplaceFeldkirch_Station4.1', 'test_reduced/MarketplaceFeldkirch_Station4.2', ...'test_reduced/MarketplaceFeldkirch_Station4.2981'],
,
-1
-1
0

0
0
0
[torch.LongTensor of size 2982]
,
( 0 ,.,.) =
0.8244 0.8301 -0.0437 ... -0.0760 -0.1554 -0.1624
-0.0435 -0.0260 -0.0529 ... 0.0307 0.0447 0.0277
0.0312 0.0142 -0.0152 ... -0.0341 0.0612 0.0099
... ⋱ ...
-0.4678 -0.4655 -0.4310 ... -0.4271 -0.4310 -0.4310
-0.4152 -0.4154 -0.3597 ... -0.3606 -0.3597 -0.3597
-0.4775 -0.4740 -0.4605 ... -0.4695 -0.4605 -0.4605

( 1 ,.,.) =
0.2877 0.2724 0.0813 ... 0.0047 0.0142 0.2749
-0.0531 -0.0540 -0.0350 ... 0.0132 -0.0119 -0.0540
0.0231 -0.0539 -0.0795 ... 0.0742 0.0997 0.1005
... ⋱ ...
0.0128 0.0128 0.0128 ... 0.0141 0.0141 0.0141
-0.3664 -0.3664 -0.3664 ... -0.3682 -0.3682 -0.3682
0.0884 0.0884 0.0884 ... 0.0739 0.0739 0.0739

( 2 ,.,.) =
0.1662 0.1621 -0.0206 ... -0.0028 0.0842 -0.0359
-0.1378 -0.1386 0.0988 ... -0.1117 -0.1245 0.0727
-0.0972 -0.1225 0.3112 ... 0.0660 -0.0637 0.4786
... ⋱ ...
0.0301 -0.0242 0.2678 ... -0.0550 0.1158 -0.0312
-0.3754 -0.3272 -0.3839 ... -0.1253 -0.3207 -0.2754
0.2070 0.1747 0.1062 ... 0.2305 0.0560 0.2122
...

(2069,.,.) =
-0.0154 0.0072 0.0411 ... 0.0545 0.2309 -0.2796
-0.1841 0.0310 0.3646 ... -0.2401 -0.2425 -0.1569
-0.0302 0.1054 0.0872 ... -0.0953 0.0811 -0.0572
... ⋱ ...
-0.1587 -0.4441 -0.1761 ... -0.3701 -0.3673 -0.4290
0.1355 0.2565 -0.3311 ... -0.0946 0.1279 0.1193
0.1114 0.1494 -0.3935 ... -0.1065 0.0384 -0.0548

(2070,.,.) =
-0.0535 -0.0514 -0.0535 ... 0.0195 -0.0535 0.0552
0.4992 0.4182 0.4838 ... -0.1564 0.4715 -0.4331
0.2729 -0.0486 -0.3652 ... -0.1482 0.1657 -0.2804
... ⋱ ...
-0.1747 0.1748 -0.0028 ... 0.3651 0.0824 0.1345
-0.2664 -0.4921 -0.2894 ... -0.4906 -0.4928 -0.2887
-0.1633 -0.0256 -0.0290 ... 0.1408 -0.1297 0.2845

(2071,.,.) =
-0.2927 0.1667 0.5254 ... -0.4704 0.0445 0.2231
-0.2298 0.0199 0.2476 ... -0.1821 0.0628 0.1033
-0.0015 -0.0039 0.0008 ... -0.0015 0.0017 -0.0015
... ⋱ ...
0.1277 0.1045 0.0820 ... 0.1887 0.3375 0.0762
-0.4949 -0.4859 -0.4956 ... -0.4953 -0.4876 -0.4904
-0.4917 -0.4847 -0.4917 ... -0.4854 -0.4713 -0.4787
[torch.FloatTensor of size 2072x11x128]
,
3.7060
2.4280
2.4140

0.8220
0.4690
6.6190
[torch.FloatTensor of size 2072]
)

loicland commented on May 19, 2024

Thank you for the info, it is useful. It seems like every line of outputs has its maximum value in its first column, i.e. roads, hence the grey (is that right?). I suspect the model is loading incorrectly.
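A quick way to confirm that from the same debug session; a sketch using the outputs variable shown above:

import numpy as np

# histogram of predicted classes over all superpoints;
# all mass in bin 0 would confirm the uniform first-class prediction
pred = outputs.data.cpu().numpy().argmax(axis=1)
print(np.bincount(pred, minlength=8))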

Can you please copy here the output of print(model) (which is called at the beginning of main)?
Are you able to train the model from scratch on the training set with:

CUDA_VISIBLE_DEVICES=0 python learning/main.py --dataset sema3d --SEMA3D_PATH $SEMA3D_DIR --db_test_name testred --db_train_name trainval \
--epochs 500 --lr_steps '[350, 400, 450]' --test_nth_epoch 100 --model_config 'gru_10,f_8' --ptn_nfeat_stn 11 \
--nworkers 2 --odir "results/sema3d/trainval_best"

Alternatively, I will be back from a conference on Monday, and I'll be able to check everything myself, which I can't do right now.

DrFridolin commented on May 19, 2024

I'm available to get you the things you need, so no need to wait for Monday as long as you are available to reply :)

When I try to train the model from scratch with that command, I get the following warning after the 1st epoch:

~/repositories/SuperPointGraphs/learning/metrics.py:66: RuntimeWarning: invalid value encountered in true_divide
return sum(values) / class_seen
-> Train accuracy: 80.17628911414721, Loss: 0.6746747928887739

After that, the next epochs output the following; train accuracy and loss remain the same for the rest of the epochs, and training finishes relatively quickly. The segmentation results are all grey in this case as well.

-> Train accuracy: 80.17628911414721, Loss: 0.6746747928887739
Epoch 1/500 (results/sema3d/trainval_best):
100%|█████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00, 1.34it/s]
-> Train accuracy: 100.0, Loss: 1.1066283734960183e-05
Epoch 2/500 (results/sema3d/trainval_best):
100%|█████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00, 1.32it/s]
-> Train accuracy: 100.0, Loss: 1.2826265205490017e-07
Epoch 3/500 (results/sema3d/trainval_best):
100%|█████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00, 1.36it/s]
-> Train accuracy: 100.0, Loss: 0.0

Below is the output of print(model). I think this output is the same whether I load the existing model or train one from scratch.

Module(
  (ecc): GraphNetwork(
    (0): RNNGraphConvModule(
      (_cell): GRUCellEx(32, 32)(ingate layernorm)
      (_fnet): Sequential(
        (0): Linear(in_features=13, out_features=32, bias=True)
        (1): ReLU(inplace)
        (2): Linear(in_features=32, out_features=128, bias=True)
        (3): ReLU(inplace)
        (4): Linear(in_features=128, out_features=64, bias=True)
        (5): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True)
        (6): ReLU(inplace)
        (7): Linear(in_features=64, out_features=32, bias=False)
      )
    )
    (1): Linear(in_features=352, out_features=8, bias=True)
  )
  (ptn): PointNet(
    (stn): STNkD(
      (convs): Sequential(
        (0): Conv1d(11, 64, kernel_size=(1,), stride=(1,))
        (1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU(inplace)
        (3): Conv1d(64, 64, kernel_size=(1,), stride=(1,))
        (4): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU(inplace)
        (6): Conv1d(64, 128, kernel_size=(1,), stride=(1,))
        (7): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True)
        (8): ReLU(inplace)
      )
      (fcs): Sequential(
        (0): Linear(in_features=128, out_features=128, bias=True)
        (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU(inplace)
        (3): Linear(in_features=128, out_features=64, bias=True)
        (4): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU(inplace)
      )
      (proj): Linear(in_features=64, out_features=4, bias=True)
    )
    (convs): Sequential(
      (0): Conv1d(11, 64, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True)
      (2): ReLU(inplace)
      (3): Conv1d(64, 64, kernel_size=(1,), stride=(1,))
      (4): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True)
      (5): ReLU(inplace)
      (6): Conv1d(64, 128, kernel_size=(1,), stride=(1,))
      (7): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True)
      (8): ReLU(inplace)
      (9): Conv1d(128, 128, kernel_size=(1,), stride=(1,))
      (10): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True)
      (11): ReLU(inplace)
      (12): Conv1d(128, 256, kernel_size=(1,), stride=(1,))
      (13): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True)
      (14): ReLU(inplace)
    )
    (fcs): Sequential(
      (0): Linear(in_features=257, out_features=256, bias=True)
      (1): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True)
      (2): ReLU(inplace)
      (3): Linear(in_features=256, out_features=64, bias=True)
      (4): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True)
      (5): ReLU(inplace)
      (6): Linear(in_features=64, out_features=32, bias=True)
    )
  )
)

loicland commented on May 19, 2024

Alright, so let's continue to remote debug then.

So something is wrong, but the model seems to load fine. On the other hand, the accuracy shouldn't go to 100% at all. Can you please check the following:

  • label_mode and label_vec at line 178 of main (these should contain the majority label and the breakdown of labels between classes in each superpoint, respectively)
  • check one of the h5 files in /features, and in particular the labels field (see the sketch after this list)
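For the second point, a minimal h5py sketch (the scene path is illustrative; point it at any file under your features folder):

import h5py

# print the shape and first rows of the labels field;
# an empty or all-zero array here would point at the partition step
with h5py.File('features/train/bildstein_station3.h5', 'r') as f:  # hypothetical path
    labels = f['labels'][...]
print('labels shape:', labels.shape)
print(labels[:5])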

DrFridolin commented on May 19, 2024

Hi,

I put a breakpoint on line 180 in main.py and the contents of label_mode and label_vec seem to be all zeros, and they do not change throughout the training process:

label_mode:
0
0
0

0
0
0
[torch.cuda.LongTensor of size 1568 (GPU 0)]

label_vec:
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
... ⋱ ...
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
[torch.cuda.FloatTensor of size 1568x8 (GPU 0)]

I checked MarketplaceFeldkirch_Station4.h5 under the /features/test_reduced/ folder and the labels field seems empty: dimension size is 0, and max dimension size is also 0.

In case these are relevant:

I've unzipped the point cloud files I downloaded from the Semantic3D website, so there are .txt files in the data folder for each point cloud, rather than the zipped files.

I've placed .labels files into the same folder for the training files. There are no .labels files for either test category.

DrFridolin commented on May 19, 2024

I just checked another feature file from the training set and it also has the labels field empty. All the other fields are full. This must be the culprit. I am guessing something is going wrong in the partition process, even though the resulting partitions look good. Looking forward to hearing your thoughts.

loicland commented on May 19, 2024

Ok so that shouldn't be!

So just to make sure: your /data folder contains 3 folders, /train, /test_reduced and /test_full. Each of them contains the .txt files obtained when uncompressing the zips. The .labels files should be in the /data/train folder.

To try to pinpoint where it goes wrong, can you check the value of has_labels and labels at line 133 of /partition/partition.py?

DrFridolin commented on May 19, 2024

I will let you know the values of has_labels and labels shortly, but just to make sure the data locations are right:

My /data folder contains those folders and each folder contains the corresponding .txt files. But the .labels files are in the /data/train/ folder, next to the .txt files they belong to. Is this wrong?

The instructions say: "The label files of the training files must be put in the same directory than the .txt files."

loicland commented on May 19, 2024

My bad, I typed too fast. What you did is correct.

DrFridolin commented on May 19, 2024

has_labels is set to False for data under test_reduced and test_full, but it is set to True for data under train.

I think this is to be expected, since test data does not have corresponding .labels files in the same folder, but training data does.

Consequently, the contents of labels are empty for all the test data. I am trying to see what labels contains for the training data; it's taking a while to run in the debugger.

DrFridolin commented on May 19, 2024

Here is a screenshot showing the first several rows of the labels variable for one of the training scenes.

The entire thing is too large and I thought it would be impractical to try to post the whole thing here.

[screenshot: first rows of the labels variable]

Here is what the print statement shows for the same labels variable:

[[ 64 0 0 ... 0 0 0]
[929 0 0 ... 0 0 0]
[248 0 0 ... 0 0 0]
...
[ 1 0 0 ... 0 0 0]
[ 1 0 0 ... 0 0 0]
[ 1 0 0 ... 0 0 0]]

Here's something else that is interesting: The features file for one of the training scenes (bildstein_station3) generated by my previous regular run of the partition code had an empty labels field, but the new one generated through the debugger does have a labels field of correct size!

My debugger runs with the sudo command, whereas my previous run did not. I am now testing to see if this is simply a permissions issue.

Just so that I know what I am supposed to see: the feature files for training scenes should contain a non-empty labels field, but the ones for the test scenes should have the labels field empty. Is this correct?

loicland commented on May 19, 2024

This is what it is supposed to look like, yes. The first column is the number of points in the superpoint, the next one the number of unlabelled points, and the remaining columns the number of points in each class (8 for Semantic3D).

The feature files should be as you describe, with labels for the training set and none (just [ ]) for the test sets.
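If the columns are as described, here is a quick consistency check; a sketch where the path is hypothetical and the layout assumption comes from the description above:

import h5py
import numpy as np

# column 0 (points in the superpoint) should equal column 1 (unlabelled points)
# plus the per-class counts in the remaining columns
with h5py.File('features/train/bildstein_station3.h5', 'r') as f:  # hypothetical path
    labels = f['labels'][...]
print(np.array_equal(labels[:, 0], labels[:, 1:].sum(axis=1)))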

To see what is going on with label writing, put a breakpoint at line 346 of /partition/provider.py.

DrFridolin commented on May 19, 2024

I will do that, though since my debugger seems to be able to write the labels fine, I'm fairly sure I won't be able to observe in the debugger whatever is going wrong otherwise.

I am also trying to rerun the partitioning code with sudo access to see if it makes a difference.

There is something I don't understand though: If I download the pretrained model and try to do segmentation on test_reduced with that file, are these label fields significant?

The way I see it, the test data will have an empty labels field anyway, and the training data will not be used as I am using a pre-trained model. Am I missing something?

loicland commented on May 19, 2024

Yep, you are correct that the labels of the training set shouldn't impact the inference with the trained model.

Hopefully investigating this label writing problem will help us understand what is going wrong.

DrFridolin commented on May 19, 2024

Hi Loic,

As I suspected, the label writing problem disappeared when I made sure the code had sudo access. I can view the label fields in the .h5 files for the training data now.
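For what it's worth, needing sudo usually points to directory ownership rather than the code itself. A minimal non-sudo check, assuming $SEMA3D_DIR is the dataset root (the subfolder names below are assumed from this thread and the repo layout):

import os

# verify the current user can write where the pipeline writes
root = os.environ.get('SEMA3D_DIR', '.')
for sub in ('features', 'superpoint_graphs', 'clouds'):  # assumed subfolders
    path = os.path.join(root, sub)
    print(path, 'exists:', os.path.isdir(path), 'writable:', os.access(path, os.W_OK))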

I am thinking that we no longer need to look into what's happening with labels at this point since they are working as intended - would you agree?

I am re-running the whole pipeline with proper sudo privileges to see if I can get some segmentation results. This will take a few hours since the partitioning has to be re-run. I will let you know what kinds of results I get as soon as I get them.

In the meantime, if you have any suggestions on what I can look at to make sure things are running smoothly, let me know. For instance, should I go back and check the contents of label_mode and label_vec?

Many thanks for your help.

DrFridolin commented on May 19, 2024

Hi Loic,

I just wanted to let you know that with proper sudo access, the .h5 files had the proper label fields and I was able to get prediction results without any issues.

I am actually not sure if the label fields had anything to do with this, as I was loading the pre-trained model and testing it on the reduced test set, which does not have labels to begin with. However, I think it's possible that other things were going wrong without sudo access, which are now probably working correctly.

I have also started re-training the model on my end and the loss does not go to 0 after a mere 3 iterations like it used to. I'm guessing that the training is also working as intended.

Thank you very much for your time and effort, and I'm sorry this turned out to be such a disappointing issue :) I will next try to run your code on my own data, and eventually train on my own data. As far as I can see, you've already included instructions on how I can do that, so I'm hoping those will go smoothly as well.

If you have no further comments, I will close this thread now. Many thanks.

loicland commented on May 19, 2024

Glad to hear it was fixed by a simple sudo!

Do not hesitate to ask for advice if you struggle to train your model. For starters, if you only have a small dataset I would advise you to use vv convolutions (vector-vector) and to decrease the size of the PointNets.

[deleted user] commented on May 19, 2024

Hi, I am facing the same issue. I am working in a Python virtual environment (and hence without sudo access); could you suggest some possible sources of error? Starting from scratch with sudo access is possible, but I need help pinpointing the exact source of the error, since that would be more insightful for getting it fixed.
I am trying to avoid restarting the process from scratch with sudo access, since the partition step took nearly 15 hours on my system. Any suggestions for debugging, or for speeding up the partition step, are welcome.
Thanks!

[deleted user] commented on May 19, 2024

This needs to be re-opened. I tried the entire process with sudo privileges at every step, but the empty-labels problem persists. It looks like the source of the error is somewhere else. @DrFridolin, did you change anything (however subtle) in the way you proceeded? It is highly possible that something else you did went unnoticed, and the error was wrongly attributed to sudo.

loicland commented on May 19, 2024

Hi,

So just to be clear, the labels when loading your training data are all zero?
Can you post the first few lines of the label array before and after pruning?
