
kenziyuliu / MS-G3D

[CVPR 2020 Oral] PyTorch implementation of "Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition"

Home Page: https://arxiv.org/abs/2003.14111

License: MIT License

Python 97.31% Shell 2.69%
action-recognition computer-vision deep-learning pretrained-models pytorch skeleton

ms-g3d's People

Contributors

alirezadizaji, kenziyuliu, pinto0309

ms-g3d's Issues

Cannot reproduce the results reported in the paper

Thanks for sharing the code.
I prepared the dataset as described in the README and trained on the NTU-60 cross-subject benchmark.
The scripts I used are:
dataset=nturgbd-cross-subject
type=joint
python main.py --config ./config/${dataset}/train_${type}.yaml --device 0 1 --work-dir train_res/${dataset}_${type}

dataset=nturgbd-cross-subject
type=bone
python main.py --config ./config/${dataset}/train_${type}.yaml --device 2 3 --work-dir train_res/${dataset}_${type}
But I can only get the following results, even with 4 GPUs and batch size 64.
my result: joint: 88.68% bone: 89.24% fusion: 90.61%
paper result: joint: 89.40% bone: 90.10% fusion: 91.50%

Am I missing something?

Problem in splitting action classes

Hi,
I want to split the NTU RGB+D 120 dataset into training and testing sets based on action classes: 100 action classes for training and 20 action classes for testing.

I am trying to edit the datagen120.py code, but the 2s-AGCN model shows an error when I train with 100 classes. The code works fine if I write 120 classes in the configuration file, even though I actually split the data into 100 classes, so I think I am not splitting the data correctly.

The error is at loss.backward():
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling 'cublasCreate(handle)'

Can you provide a function to split the data based on desired classes rather than on xsub or xview?
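
A minimal sketch of such a class-based split, assuming the generated data is an (N, C, T, V, M) .npy array paired with a (sample_names, labels) pickle as in the ST-GCN-style feeders; all paths here are placeholders. Note that the labels kept for each split must be remapped to a contiguous range starting at 0 that matches num_class in the config; an out-of-range class index often surfaces as an opaque CUDA/CUBLAS error like the one above.

import pickle
import numpy as np

# Placeholder paths to data produced by the NTU-120 generation script
data = np.load('./data/ntu120/xsub/train_data_joint.npy')
with open('./data/ntu120/xsub/train_label.pkl', 'rb') as f:
    sample_names, labels = pickle.load(f)
labels = np.array(labels)

train_mask = labels < 100    # first 100 classes for training
test_mask = ~train_mask      # last 20 classes for testing

train_data, train_labels = data[train_mask], labels[train_mask]
test_data, test_labels = data[test_mask], labels[test_mask] - 100  # remap to 0..19

np.save('./data/ntu120/split100/train_data_joint.npy', train_data)
with open('./data/ntu120/split100/train_label.pkl', 'wb') as f:
    names = [n for n, keep in zip(sample_names, train_mask) if keep]
    pickle.dump((names, train_labels.tolist()), f)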

Error when trying to reproduce results from the pretrained model -- MKL_THREADING_LAYER

Hi

I am trying to reproduce the results from the pretrained models using:

python3 main.py --config ./config/nturgbd-cross-subject/test_joint.yaml --work-dir pretrain_eval/ntu60/xsub/joint-fusion --weights pretrained_models/ntu60-xsub-joint-fusion.pt

python3 ensemble.py --dataset ntu/xsub --joint-dir pretrain_eval/ntu60/xsub/joint --bone-dir pretrain_eval/ntu60/xsub/bone

I have generated both the bone and joint data as instructed, but when I try to run these commands I get the following error:

Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.
Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.

Could you please help?
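
The error message itself names the workaround; setting the variable it suggests before the command is usually enough, e.g.:

MKL_SERVICE_FORCE_INTEL=1 python3 ensemble.py --dataset ntu/xsub --joint-dir pretrain_eval/ntu60/xsub/joint --bone-dir pretrain_eval/ntu60/xsub/bone

Alternatively, setting MKL_THREADING_LAYER=GNU before the command works around the libgomp conflict the message describes.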

Could you provide a demo?

Hi, your paper is amazing and I'm interested in your work. Could you provide a demo? Thanks a lot!

Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 131072.0

Hi,
I am training your model on the Kinetics dataset. I set '--amp_opt_level 2 --half' because otherwise I get a 'CUDA out of memory' error (my GPU is an RTX 2080 Ti and I only use one). However, 'Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 131072.0' appears every few steps. Have you run into this problem?
(screenshot attached)

Snapshots of actions

Congratulations on your great work! I have viewed your supplementary material and want to ask how you visualize the skeleton data. Could you please release that code? Thank you very much!

IndexError on ensemble.py

Hello,

I have trained the joint and bone models for NTU120 and now I want to ensemble their results, so I am running the following command:

python ensemble.py --dataset ntu120/xsub --joint-dir work_dir/ntu120/xsub/msg3d_joint/ --bone-dir work_dir/ntu120/xsub/msg3d_bone/

But I am having this error:

Traceback (most recent call last):
  File "ensemble.py", line 44, in <module>
    _, r11 = r1[i]
IndexError: list index out of range

I printed the lengths of those lists and found that r1 and r2 have a different length than label:

label: 101838
r1: 50919
r2: 50919

Can you help solve this issue?

PS: I am using CentOS Linux.
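
One observation: 101838 is exactly twice 50919 (the size of the NTU120 X-Sub evaluation set), which suggests the label file ensemble.py reads was flattened rather than stored as a (sample_names, labels) pair. A quick check under that assumption, with a placeholder path:

import pickle

with open('./data/ntu120/xsub/val_label.pkl', 'rb') as f:
    label = pickle.load(f)

# Expected: a (sample_names, labels) pair, each of length 50919.
# A single flat list of length 101838 means names and labels were
# concatenated rather than paired during data generation.
print(type(label), len(label))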

Running the code in VS Code terminates immediately without any error

Hi, when reproducing this code I run it with VS Code and Anaconda. The environment is fine, but when it enters the first training epoch the program just terminates, without any error message. How can I solve this?

Some problems with motion recognition

Hello, may I ask: when there are two people in the RGB video (such as when playing tennis), which person's action is predicted for the current video frames? Also, does this project perform motion recognition itself, or is it just a feature extractor?

Cannot reproduce the pretrained model accuracy for NTU120

Hi,

Thanks for the great work. I followed the given config files to run both the joint and the bone streams on the NTU120 dataset. However, I could not obtain accuracy as high as the provided pretrained models.

For XSub Joint, I got top-1 82.19 and top-5 95.77. However, the results for the pretrained model are: top-1 83.32 and top-5 96.06.
For XSub Bone, I got top-1 84.69 and top-5 96.76. However, the results for the pretrained model are: top-1 85.88 and top-5 97.04.
For XSet Joint, I got top-1 84.05 and top-5 95.97. However, the results for the pretrained model are: top-1 84.44 and top-5 95.89.
For XSet Bone, I got top-1 86.47 and top-5 96.89. However, the results for the pretrained model are: top-1 87.39 and top-5 97.36.

I consistently got worse performance. Can you please enlighten me on how to reach the same accuracy as the provided pretrained models? Thank you very much!

I can't reproduce the results of the paper

Hi,
I use your pretrained Kinetics model, but I can't reproduce the results of the paper. Did I do something wrong? (I think I generated the data correctly according to the README.md, though I changed some file paths in your code for convenience, such as the dataset path.)

(screenshots attached)

Function 'CudnnBatchNormBackward' returned nan values in its 0th output

Hi, thank you so much for sharing your work. I am trying to recreate the results using the NTU XSub data you provided, with half precision at amp level 1. At epoch 33 I got NaN out of a batchnorm layer; it originated from the 'out = tempconv(x)' call in ms_tcn.py. I had autograd anomaly detection on, and all config settings are kept as in your repo. Could you please suggest why this happened?

CUDA out of memory

Hi,
I'm trying to run MS-G3D with eval_pretrained.sh on a machine equipped with an NVIDIA 2080 11 GB GPU, but I have this issue:

RuntimeError: CUDA out of memory. Tried to allocate 2.58 GiB (GPU 0; 10.76 GiB total capacity; 3.54 GiB already allocated; 2.51 GiB free; 7.17 GiB reserved in total by PyTorch)

How is it possible that the net doesn't fit in 11 GB of GPU RAM?
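
One hedged workaround: the evaluation configs elsewhere in this tracker show test_batch_size: 32, and reducing it shrinks activation memory proportionally. Assuming main.py exposes the parameter as a command-line flag (otherwise set test_batch_size in the yaml):

python3 main.py --config ./config/nturgbd-cross-subject/test_joint.yaml --test-batch-size 8 --weights pretrained_models/ntu60-xsub-joint-fusion.pt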

Transfer Learning

Hello,

I am working on a dataset similar to NTU120 and I would like to pre-train MS-G3D on NTU120 and fine-tune it on my custom dataset, which has only 25 classes (but the same joint/bone structure).

Is there a way to do that by just editing the config files, or do I need to alter some code?
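
A sketch of one way to do this from the CLI alone, assuming the weights/ignore_weights options behave as in the 2s-AGCN-style runner this repo follows, and that the final classifier in model/msg3d.py is named fc (check the checkpoint's state dict if not). Set num_class: 25 under model_args in your config, then load the NTU120 checkpoint while skipping the classifier; paths are placeholders:

python main.py --config ./config/custom/train_joint.yaml --weights work_dir/ntu120/xsub/msg3d_joint/ntu120-pretrained.pt --ignore-weights fc.weight fc.bias

Since the joint/bone structure matches NTU120, graph, num_point, and num_person can stay unchanged.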

Kinetics normalization differs from ST-GCN; what's the reason?

I have reproduced your paper. The results are good. While reading the ST-GCN code, I found a difference.

In ST-GCN (centralization):

data_numpy[0:2] = data_numpy[0:2] - 0.5
data_numpy[0][data_numpy[2] == 0] = 0
data_numpy[1][data_numpy[2] == 0] = 0

In MS-G3D your code is (centralization):

data_numpy[0:2] = data_numpy[0:2] - 0.5
data_numpy[1:2] = -data_numpy[1:2]
data_numpy[0][data_numpy[2] == 0] = 0
data_numpy[1][data_numpy[2] == 0] = 0

Why do you turn data_numpy[1:2] into -data_numpy[1:2]? Is it helpful for the model? I will run experiments following ST-GCN later~
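
For reference, a hedged reading of the extra line, treating data_numpy as the usual (C, T, V, M) array of OpenPose outputs whose channels 0, 1, 2 hold x, y, confidence:

# x and y are normalized image coordinates; after the -0.5 shift they are
# centred on the image midpoint. Image y grows downward, so the extra line
data_numpy[1:2] = -data_numpy[1:2]
# flips the vertical axis to make "up" positive, matching the convention
# of 3D skeleton data; the last two lines just zero out undetected joints.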

How many epochs do we need to train in order to get the accuracy of the given pretrained model?

I am training the model from scratch on a new custom action dataset. The accuracy was increasing (peaking around 33.08) and the loss was decreasing up to epoch 70. After that the loss keeps decreasing but the accuracy decreases as well; after 100 epochs it is 31.29. Could you please tell me how many epochs we should train to get satisfactory accuracy and loss? Thanks in advance!
(screenshot attached)

Cannot reproduce the result on NTU120

Hi, thanks for your excellent work, but I can't reproduce the result on NTU120 X-Sub. The best accuracy I got is 82.22%. The command I ran is:

python main.py --config config/nturgbd120-cross-subject/train_joint.yaml --batch-size 64 --forward-batch-size 64 --device 0 1 2 3 4 --base-lr 0.1 --work-dir work_dir/ntu120/msg3d/cs --half

The apex installation is Python-only. Do you have any idea what happened? Looking forward to your reply.

Activity recognition using your model

Hi there,
Is it possible to detect activity using your model? If yes, could you please provide a script to run detection? What would the input data be (RGB video/pictures or skeleton data)?

Request to upload models to GoogleDrive

I am a student at CASIA; thank you for kindly sharing. Due to some well-known reasons, I cannot download the pretrained models from Dropbox. Could you please upload them to another platform (e.g. Google Drive) and share them? Thank you very much.

Function 'LogSoftmaxBackward' returned nan values in its 0th output

Hi, I am trying to train your model with the provided config for NTU-60 XSub with --half and --amp-opt-level 1, but after step 6 it gives a "Function 'LogSoftmaxBackward' returned nan values in its 0th output" error. I had autograd anomaly detection on and CUDA_LAUNCH_BLOCKING=1. Could you please tell me why this might happen? Also, you provided a pretrained model trained on un-normalized data; is there a reason for that, and what is your accuracy with normalized data? I tried both normalized and un-normalized data but got NaN at step 6. I guess if I turn off anomaly detection it won't complain for the time being, but I am concerned the loss may become NaN later. I checked my inputs; there are no NaNs there. Your suggestions would be a great help.

Recognizing activities using your library

Hi,

I'm trying to use your code (or some of its functions/classes) as a "black box" to classify a dataset extracted from an Intel RealSense camera (video + skeleton information + extra data obtained by post-processing with Unity and Matlab). At the moment it's not clear how I should encode my dataset and pass it to your scripts in order to get it labeled (using the pretrained models). I'm fluent in Python, but this is my first time using PyTorch. I've read your paper and README, but this information doesn't seem to be given (perhaps it is obvious to your readers, but I'm not very familiar with AI and action recognition).

BTW, I'm aware of issue #18, which seems very close to what I want to achieve, and I don't understand why you said it was not possible. Perhaps I'm missing something obvious. Can you please give me a hint or elaborate on why it's not possible with your code? Thanks in advance for any information you can provide.
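
In case it helps: a minimal sketch of encoding custom skeleton data for the pretrained NTU models, inferred from the FLOPs snippet elsewhere in this tracker (the Model arguments and the (N, C, T, V, M) input layout); treat it as an assumption, not the official recipe.

import numpy as np
import torch
from model.msg3d import Model  # the repo's model class

# MS-G3D consumes (N, C, T, V, M): batch, coordinate channels, frames,
# joints, persons. The NTU-60 pretrained models use C=3, T=300, V=25, M=2.
clip = np.zeros((1, 3, 300, 25, 2), dtype=np.float32)
# Fill clip[0, :, t, v, m] with the (x, y, z) of joint v at frame t for
# skeleton m, using the NTU RGB+D joint ordering; pad missing frames and
# persons with zeros.

model = Model(num_class=60, num_point=25, num_person=2,
              num_gcn_scales=13, num_g3d_scales=6,
              graph='graph.ntu_rgb_d.AdjMatrixGraph')
# Assumes the checkpoint stores a plain state dict, as --weights loading suggests
model.load_state_dict(torch.load('pretrained_models/ntu60-xsub-joint-fusion.pt'))
model.eval()
with torch.no_grad():
    logits = model(torch.from_numpy(clip))
print(int(logits.argmax(dim=1)))  # predicted NTU-60 class index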

Failure to execute main.py

Hi,

Great job!

Thank you for sharing the code. I encountered the following error while executing main.py:

"CUDA out of memory. Tried to allocate 1.29 GiB (GPU 0; 10.76 GiB total capacity; 8.00 GiB already allocated; 789.94 MiB free; 625.17 MiB cached)"

My GPU is an NVIDIA 2080 Ti and I have tried setting a smaller batch size, but it doesn't seem to work for me. Can you give me some suggestions to solve it?

Thank you in advance.
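
One knob worth trying, hedged: the command lines elsewhere in this tracker pass both --batch-size and --forward-batch-size, which suggests forward_batch_size controls how many samples go through the GPU per forward pass while batch_size stays the effective optimization batch (gradient accumulation). If so, something like this keeps the effective batch at 32 while fitting a smaller card:

python main.py --config ./config/nturgbd-cross-subject/train_joint.yaml --batch-size 32 --forward-batch-size 8 --device 0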

About the FLOPs of the model

Thanks for sharing the code. I have some questions about the FLOPs of the model.

I calculated the FLOPs of MS-G3D (msg3d.py, single model) and got around 24 GFLOPs. Is this the correct number?

I was confused about the FLOPs because of this paper: its authors claim the FLOPs of MS-G3D are around 8G, but I got 24G. I am not sure whether I am correct or they are.

BTW, I calculated the FLOPs using thop, like this:

import sys
sys.path.append('..')

import torch
from thop import profile
from model.msg3d import Model  # the repo's model class

model = Model(
    num_class=60,
    num_point=25,
    num_person=2,
    num_gcn_scales=13,
    num_g3d_scales=6,
    graph='graph.ntu_rgb_d.AdjMatrixGraph'
)

# One NTU-style input: batch 1, 3 channels, 300 frames, 25 joints, 2 persons
N, C, T, V, M = 1, 3, 300, 25, 2
x = torch.randn(N, C, T, V, M)
model.forward(x)
flops, params = profile(model, inputs=(x,))
print(flops)
print(params)
# equivalent to the repo's count_params helper
print('Model total # params:', sum(p.numel() for p in model.parameters()))

Change the number of people in the model

Hi, thanks for the great work!

I see that in your implementation you fixed the number of skeletons to 2, so how can I change it to be flexible with any number of people?
Thank you very much!
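
num_person appears to be a plain constructor argument (see the model_args in the config dumps and the FLOPs snippet above), so a sketch for single-person data, assuming the data is also regenerated with a matching M dimension:

from model.msg3d import Model  # the repo's model class

# M (the last input dimension) must equal num_person, so the feeder's .npy
# data has to be regenerated with the same number of skeletons.
model = Model(num_class=60, num_point=25, num_person=1,
              num_gcn_scales=13, num_g3d_scales=6,
              graph='graph.ntu_rgb_d.AdjMatrixGraph')
# Expected input shape: (N, C, T, V, M) = (batch, 3, 300, 25, 1)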

Failing to reproduce results on Kinetics 400

Hi, thank you for your code, very nice work!
I am trying to reproduce the results for the joint stream on Kinetics 400, but my results are a lot lower than those of the pretrained model. I've only trained for 32 epochs, but the test loss and accuracy have not improved for the last 15 epochs. I only have access to 2 GPUs, but otherwise the parameters should be the defaults you recommend for that.
Can you see if there is something wrong with my setup?

Here is the output from the log.txt:
[ Tue Nov 10 15:18:07 2020 ] Model total number of params: 3144328
[ Tue Nov 10 15:18:07 2020 ] *************************************
[ Tue Nov 10 15:18:07 2020 ] *** Using Half Precision Training ***
[ Tue Nov 10 15:18:07 2020 ] *************************************
[ Tue Nov 10 15:18:07 2020 ] 2 GPUs available, using DataParallel
[ Tue Nov 10 15:18:07 2020 ] Parameters:
{'amp_opt_level': 1,
'assume_yes': False,
'base_lr': 0.05,
'batch_size': 32,
'checkpoint': None,
'config': './config/kinetics-skeleton/train_joint.yaml',
'debug': False,
'device': [0, 1],
'eval_interval': 1,
'eval_start': 1,
'feeder': 'feeders.feeder.Feeder',
'forward_batch_size': 32,
'half': True,
'ignore_weights': [],
'log_interval': 100,
'model': 'model.msg3d.Model',
'model_args': {'graph': 'graph.kinetics.AdjMatrixGraph',
'num_class': 400,
'num_g3d_scales': 8,
'num_gcn_scales': 8,
'num_person': 2,
'num_point': 18},
'model_saved_name': '',
'nesterov': True,
'num_epoch': 65,
'num_worker': 48,
'optimizer': 'SGD',
'optimizer_states': None,
'phase': 'train',
'print_log': True,
'save_interval': 1,
'save_score': False,
'seed': 89,
'show_topk': [1, 5],
'start_epoch': 0,
'step': [45, 55],
'test_batch_size': 32,
'test_feeder_args': {'data_path': './data/kinetics/val_data_joint.npy',
'label_path': './data/kinetics/val_label.pkl'},
'train_feeder_args': {'data_path': './data/kinetics/train_data_joint.npy',
'debug': False,
'label_path': './data/kinetics/train_label.pkl'},
'weight_decay': 0.0005,
'weights': None,
'work_dir': 'work_dir/kinetics/msg3d-joint'}

[ Tue Nov 10 15:18:07 2020 ] Model total number of params: 3144328
[ Tue Nov 10 15:18:07 2020 ] Training epoch: 1, LR: 0.0500
[ Tue Nov 10 20:24:06 2020 ] Mean training loss: 4.9719 (BS 32: 4.9719).
[ Tue Nov 10 20:24:06 2020 ] Time consumption: [Data]00%, [Network]98%
[ Tue Nov 10 20:24:06 2020 ] Eval epoch: 1
[ Tue Nov 10 20:27:22 2020 ] Mean test loss of 619 batches: 4.918049363211784.
[ Tue Nov 10 20:27:22 2020 ] Top 1: 6.34%
[ Tue Nov 10 20:27:23 2020 ] Top 5: 19.42%
[ Tue Nov 10 20:27:23 2020 ] Training epoch: 2, LR: 0.0500
[ Wed Nov 11 01:33:01 2020 ] Mean training loss: 4.5430 (BS 32: 4.5430).
[ Wed Nov 11 01:33:01 2020 ] Time consumption: [Data]00%, [Network]98%
[ Wed Nov 11 01:33:01 2020 ] Eval epoch: 2
[ Wed Nov 11 01:36:19 2020 ] Mean test loss of 619 batches: 4.900795953146668.
[ Wed Nov 11 01:36:19 2020 ] Top 1: 8.28%
[ Wed Nov 11 01:36:19 2020 ] Top 5: 22.56%
[ Wed Nov 11 01:36:20 2020 ] Training epoch: 3, LR: 0.0500
[ Wed Nov 11 06:09:39 2020 ] Mean training loss: 4.3545 (BS 32: 4.3545).
[ Wed Nov 11 06:09:39 2020 ] Time consumption: [Data]00%, [Network]98%
[ Wed Nov 11 06:09:39 2020 ] Eval epoch: 3
[ Wed Nov 11 06:12:58 2020 ] Mean test loss of 619 batches: 4.609359575203817.
[ Wed Nov 11 06:12:58 2020 ] Top 1: 10.86%
[ Wed Nov 11 06:12:59 2020 ] Top 5: 27.58%
[ Wed Nov 11 06:12:59 2020 ] Training epoch: 4, LR: 0.0500
[ Wed Nov 11 11:54:41 2020 ] Mean training loss: 4.2222 (BS 32: 4.2222).
[ Wed Nov 11 11:54:41 2020 ] Time consumption: [Data]00%, [Network]99%
[ Wed Nov 11 11:54:41 2020 ] Eval epoch: 4
[ Wed Nov 11 11:59:37 2020 ] Mean test loss of 619 batches: 4.774576034607525.
[ Wed Nov 11 11:59:38 2020 ] Top 1: 11.37%
[ Wed Nov 11 11:59:38 2020 ] Top 5: 28.21%
[ Wed Nov 11 11:59:39 2020 ] Training epoch: 5, LR: 0.0500
[ Wed Nov 11 18:11:53 2020 ] Mean training loss: 4.1349 (BS 32: 4.1349).
[ Wed Nov 11 18:11:53 2020 ] Time consumption: [Data]00%, [Network]99%
[ Wed Nov 11 18:11:53 2020 ] Eval epoch: 5
[ Wed Nov 11 18:15:15 2020 ] Mean test loss of 619 batches: 4.5376818095347415.
[ Wed Nov 11 18:15:15 2020 ] Top 1: 12.43%
[ Wed Nov 11 18:15:15 2020 ] Top 5: 31.07%
[ Wed Nov 11 18:15:16 2020 ] Training epoch: 6, LR: 0.0500
[ Wed Nov 11 23:00:44 2020 ] Mean training loss: 4.0797 (BS 32: 4.0797).
[ Wed Nov 11 23:00:44 2020 ] Time consumption: [Data]00%, [Network]98%
[ Wed Nov 11 23:00:44 2020 ] Eval epoch: 6
[ Wed Nov 11 23:04:07 2020 ] Mean test loss of 619 batches: 4.315563132574177.
[ Wed Nov 11 23:04:08 2020 ] Top 1: 14.59%
[ Wed Nov 11 23:04:08 2020 ] Top 5: 33.09%
[ Wed Nov 11 23:04:09 2020 ] Training epoch: 7, LR: 0.0500
[ Thu Nov 12 03:37:52 2020 ] Mean training loss: 4.0471 (BS 32: 4.0471).
[ Thu Nov 12 03:37:52 2020 ] Time consumption: [Data]00%, [Network]98%
[ Thu Nov 12 03:37:52 2020 ] Eval epoch: 7
[ Thu Nov 12 03:41:16 2020 ] Mean test loss of 619 batches: 4.347155949218208.
[ Thu Nov 12 03:41:17 2020 ] Top 1: 14.47%
[ Thu Nov 12 03:41:17 2020 ] Top 5: 33.24%
[ Thu Nov 12 03:41:17 2020 ] Training epoch: 8, LR: 0.0500
[ Thu Nov 12 08:15:11 2020 ] Mean training loss: 4.0206 (BS 32: 4.0206).
[ Thu Nov 12 08:15:11 2020 ] Time consumption: [Data]00%, [Network]98%
[ Thu Nov 12 08:15:12 2020 ] Eval epoch: 8
[ Thu Nov 12 08:18:37 2020 ] Mean test loss of 619 batches: 4.342459396322247.
[ Thu Nov 12 08:18:38 2020 ] Top 1: 14.43%
[ Thu Nov 12 08:18:38 2020 ] Top 5: 33.43%
[ Thu Nov 12 08:18:38 2020 ] Training epoch: 9, LR: 0.0500
[ Thu Nov 12 14:47:29 2020 ] Mean training loss: 3.9975 (BS 32: 3.9975).
[ Thu Nov 12 14:47:29 2020 ] Time consumption: [Data]00%, [Network]99%
[ Thu Nov 12 14:47:29 2020 ] Eval epoch: 9
[ Thu Nov 12 14:50:56 2020 ] Mean test loss of 619 batches: 4.374219649057588.
[ Thu Nov 12 14:50:57 2020 ] Top 1: 14.58%
[ Thu Nov 12 14:50:57 2020 ] Top 5: 33.36%
[ Thu Nov 12 14:50:57 2020 ] Training epoch: 10, LR: 0.0500
[ Thu Nov 12 19:25:03 2020 ] Mean training loss: 3.9830 (BS 32: 3.9830).
[ Thu Nov 12 19:25:03 2020 ] Time consumption: [Data]00%, [Network]98%
[ Thu Nov 12 19:25:03 2020 ] Eval epoch: 10
[ Thu Nov 12 19:28:31 2020 ] Mean test loss of 619 batches: 4.325896201110618.
[ Thu Nov 12 19:28:31 2020 ] Top 1: 15.21%
[ Thu Nov 12 19:28:32 2020 ] Top 5: 34.09%
[ Thu Nov 12 19:28:32 2020 ] Training epoch: 11, LR: 0.0500
[ Fri Nov 13 00:02:47 2020 ] Mean training loss: 3.9669 (BS 32: 3.9669).
[ Fri Nov 13 00:02:47 2020 ] Time consumption: [Data]00%, [Network]98%
[ Fri Nov 13 00:02:47 2020 ] Eval epoch: 11
[ Fri Nov 13 00:06:16 2020 ] Mean test loss of 619 batches: 4.51993297028426.
[ Fri Nov 13 00:06:17 2020 ] Top 1: 15.21%
[ Fri Nov 13 00:06:17 2020 ] Top 5: 34.06%
[ Fri Nov 13 00:06:18 2020 ] Training epoch: 12, LR: 0.0500
[ Fri Nov 13 04:40:40 2020 ] Mean training loss: 3.9534 (BS 32: 3.9534).
[ Fri Nov 13 04:40:40 2020 ] Time consumption: [Data]00%, [Network]98%
[ Fri Nov 13 04:40:41 2020 ] Eval epoch: 12
[ Fri Nov 13 04:44:12 2020 ] Mean test loss of 619 batches: 4.18243257872315.
[ Fri Nov 13 04:44:12 2020 ] Top 1: 16.76%
[ Fri Nov 13 04:44:13 2020 ] Top 5: 35.87%
[ Fri Nov 13 04:44:13 2020 ] Training epoch: 13, LR: 0.0500
[ Fri Nov 13 09:18:32 2020 ] Mean training loss: 3.9444 (BS 32: 3.9444).
[ Fri Nov 13 09:18:32 2020 ] Time consumption: [Data]00%, [Network]98%
[ Fri Nov 13 09:18:32 2020 ] Eval epoch: 13
[ Fri Nov 13 09:22:04 2020 ] Mean test loss of 619 batches: 4.246960545972784.
[ Fri Nov 13 09:22:05 2020 ] Top 1: 16.79%
[ Fri Nov 13 09:22:05 2020 ] Top 5: 35.90%
[ Fri Nov 13 09:22:05 2020 ] Training epoch: 14, LR: 0.0500
[ Fri Nov 13 14:22:37 2020 ] Mean training loss: 3.9370 (BS 32: 3.9370).
[ Fri Nov 13 14:22:37 2020 ] Time consumption: [Data]00%, [Network]98%
[ Fri Nov 13 14:22:37 2020 ] Eval epoch: 14
[ Fri Nov 13 14:26:56 2020 ] Mean test loss of 619 batches: 4.39393074909204.
[ Fri Nov 13 14:26:57 2020 ] Top 1: 14.21%
[ Fri Nov 13 14:26:57 2020 ] Top 5: 32.71%
[ Fri Nov 13 14:26:58 2020 ] Training epoch: 15, LR: 0.0500
[ Fri Nov 13 20:57:22 2020 ] Mean training loss: 3.9268 (BS 32: 3.9268).
[ Fri Nov 13 20:57:22 2020 ] Time consumption: [Data]00%, [Network]99%
[ Fri Nov 13 20:57:22 2020 ] Eval epoch: 15
[ Fri Nov 13 21:00:54 2020 ] Mean test loss of 619 batches: 4.3702820344964985.
[ Fri Nov 13 21:00:55 2020 ] Top 1: 16.00%
[ Fri Nov 13 21:00:55 2020 ] Top 5: 35.35%
[ Fri Nov 13 21:00:56 2020 ] Training epoch: 16, LR: 0.0500
[ Sat Nov 14 01:36:31 2020 ] Mean training loss: 3.9230 (BS 32: 3.9230).
[ Sat Nov 14 01:36:31 2020 ] Time consumption: [Data]00%, [Network]98%
[ Sat Nov 14 01:36:31 2020 ] Eval epoch: 16
[ Sat Nov 14 01:40:06 2020 ] Mean test loss of 619 batches: 4.2670101137269105.
[ Sat Nov 14 01:40:06 2020 ] Top 1: 16.45%
[ Sat Nov 14 01:40:07 2020 ] Top 5: 36.25%
[ Sat Nov 14 01:40:07 2020 ] Training epoch: 17, LR: 0.0500
[ Sat Nov 14 08:58:03 2020 ] Mean training loss: 3.9162 (BS 32: 3.9162).
[ Sat Nov 14 08:58:03 2020 ] Time consumption: [Data]00%, [Network]99%
[ Sat Nov 14 08:58:03 2020 ] Eval epoch: 17
[ Sat Nov 14 09:04:05 2020 ] Mean test loss of 619 batches: 4.154163993040464.
[ Sat Nov 14 09:04:06 2020 ] Top 1: 17.26%
[ Sat Nov 14 09:04:06 2020 ] Top 5: 36.72%
[ Sat Nov 14 09:04:06 2020 ] Training epoch: 18, LR: 0.0500
[ Sat Nov 14 14:08:22 2020 ] Mean training loss: 3.9094 (BS 32: 3.9094).
[ Sat Nov 14 14:08:22 2020 ] Time consumption: [Data]00%, [Network]98%
[ Sat Nov 14 14:08:22 2020 ] Eval epoch: 18
[ Sat Nov 14 14:11:58 2020 ] Mean test loss of 619 batches: 4.4448124141415795.
[ Sat Nov 14 14:11:59 2020 ] Top 1: 14.89%
[ Sat Nov 14 14:11:59 2020 ] Top 5: 33.62%
[ Sat Nov 14 14:12:00 2020 ] Training epoch: 19, LR: 0.0500
[ Sat Nov 14 18:47:13 2020 ] Mean training loss: 3.9050 (BS 32: 3.9050).
[ Sat Nov 14 18:47:13 2020 ] Time consumption: [Data]00%, [Network]98%
[ Sat Nov 14 18:47:13 2020 ] Eval epoch: 19
[ Sat Nov 14 18:50:51 2020 ] Mean test loss of 619 batches: 4.1810635703831.
[ Sat Nov 14 18:50:52 2020 ] Top 1: 16.64%
[ Sat Nov 14 18:50:52 2020 ] Top 5: 35.91%
[ Sat Nov 14 18:50:52 2020 ] Training epoch: 20, LR: 0.0500
[ Sat Nov 14 23:26:05 2020 ] Mean training loss: 3.9015 (BS 32: 3.9015).
[ Sat Nov 14 23:26:05 2020 ] Time consumption: [Data]00%, [Network]98%
[ Sat Nov 14 23:26:05 2020 ] Eval epoch: 20
[ Sat Nov 14 23:29:44 2020 ] Mean test loss of 619 batches: 4.292042032389726.
[ Sat Nov 14 23:29:44 2020 ] Top 1: 16.31%
[ Sat Nov 14 23:29:45 2020 ] Top 5: 36.05%
[ Sat Nov 14 23:29:45 2020 ] Training epoch: 21, LR: 0.0500
[ Sun Nov 15 04:04:46 2020 ] Mean training loss: 3.9022 (BS 32: 3.9022).
[ Sun Nov 15 04:04:46 2020 ] Time consumption: [Data]00%, [Network]98%
[ Sun Nov 15 04:04:46 2020 ] Eval epoch: 21
[ Sun Nov 15 04:08:26 2020 ] Mean test loss of 619 batches: 4.401749669446313.
[ Sun Nov 15 04:08:26 2020 ] Top 1: 14.86%
[ Sun Nov 15 04:08:26 2020 ] Top 5: 34.54%
[ Sun Nov 15 04:08:27 2020 ] Training epoch: 22, LR: 0.0500
[ Sun Nov 15 08:43:29 2020 ] Mean training loss: 3.8973 (BS 32: 3.8973).
[ Sun Nov 15 08:43:29 2020 ] Time consumption: [Data]00%, [Network]98%
[ Sun Nov 15 08:43:29 2020 ] Eval epoch: 22
[ Sun Nov 15 08:47:09 2020 ] Mean test loss of 619 batches: 4.267861750283804.
[ Sun Nov 15 08:47:10 2020 ] Top 1: 16.54%
[ Sun Nov 15 08:47:10 2020 ] Top 5: 35.90%
[ Sun Nov 15 08:47:10 2020 ] Training epoch: 23, LR: 0.0500
[ Sun Nov 15 13:22:30 2020 ] Mean training loss: 3.8934 (BS 32: 3.8934).
[ Sun Nov 15 13:22:30 2020 ] Time consumption: [Data]00%, [Network]98%
[ Sun Nov 15 13:22:30 2020 ] Eval epoch: 23
[ Sun Nov 15 13:26:12 2020 ] Mean test loss of 619 batches: 4.327233063770226.
[ Sun Nov 15 13:26:13 2020 ] Top 1: 15.61%
[ Sun Nov 15 13:26:13 2020 ] Top 5: 34.82%
[ Sun Nov 15 13:26:13 2020 ] Training epoch: 24, LR: 0.0500
[ Sun Nov 15 18:32:26 2020 ] Mean training loss: 3.8921 (BS 32: 3.8921).
[ Sun Nov 15 18:32:26 2020 ] Time consumption: [Data]00%, [Network]98%
[ Sun Nov 15 18:32:27 2020 ] Eval epoch: 24
[ Sun Nov 15 18:36:08 2020 ] Mean test loss of 619 batches: 4.113329430196512.
[ Sun Nov 15 18:36:09 2020 ] Top 1: 17.63%
[ Sun Nov 15 18:36:09 2020 ] Top 5: 36.88%
[ Sun Nov 15 18:36:10 2020 ] Training epoch: 25, LR: 0.0500
[ Sun Nov 15 23:11:30 2020 ] Mean training loss: 3.8915 (BS 32: 3.8915).
[ Sun Nov 15 23:11:30 2020 ] Time consumption: [Data]00%, [Network]98%
[ Sun Nov 15 23:11:31 2020 ] Eval epoch: 25
[ Sun Nov 15 23:15:14 2020 ] Mean test loss of 619 batches: 4.205234567645293.
[ Sun Nov 15 23:15:15 2020 ] Top 1: 16.82%
[ Sun Nov 15 23:15:15 2020 ] Top 5: 36.28%
[ Sun Nov 15 23:15:16 2020 ] Training epoch: 26, LR: 0.0500
[ Mon Nov 16 03:50:32 2020 ] Mean training loss: 3.8870 (BS 32: 3.8870).
[ Mon Nov 16 03:50:32 2020 ] Time consumption: [Data]00%, [Network]98%
[ Mon Nov 16 03:50:32 2020 ] Eval epoch: 26
[ Mon Nov 16 03:54:16 2020 ] Mean test loss of 619 batches: 4.226733705338831.
[ Mon Nov 16 03:54:17 2020 ] Top 1: 16.58%
[ Mon Nov 16 03:54:17 2020 ] Top 5: 35.97%
[ Mon Nov 16 03:54:18 2020 ] Training epoch: 27, LR: 0.0500
[ Mon Nov 16 08:29:39 2020 ] Mean training loss: 3.8883 (BS 32: 3.8883).
[ Mon Nov 16 08:29:39 2020 ] Time consumption: [Data]00%, [Network]98%
[ Mon Nov 16 08:29:39 2020 ] Eval epoch: 27
[ Mon Nov 16 08:33:25 2020 ] Mean test loss of 619 batches: 4.199135843887083.
[ Mon Nov 16 08:33:26 2020 ] Top 1: 16.34%
[ Mon Nov 16 08:33:26 2020 ] Top 5: 35.95%
[ Mon Nov 16 08:33:27 2020 ] Training epoch: 28, LR: 0.0500
[ Mon Nov 16 13:24:34 2020 ] Mean training loss: 3.8853 (BS 32: 3.8853).
[ Mon Nov 16 13:24:34 2020 ] Time consumption: [Data]00%, [Network]98%
[ Mon Nov 16 13:24:34 2020 ] Eval epoch: 28
[ Mon Nov 16 13:31:11 2020 ] Mean test loss of 619 batches: 4.273007657493259.
[ Mon Nov 16 13:31:12 2020 ] Top 1: 16.53%
[ Mon Nov 16 13:31:12 2020 ] Top 5: 36.67%
[ Mon Nov 16 13:31:13 2020 ] Training epoch: 29, LR: 0.0500
[ Mon Nov 16 19:23:28 2020 ] Mean training loss: 3.8837 (BS 32: 3.8837).
[ Mon Nov 16 19:23:28 2020 ] Time consumption: [Data]00%, [Network]98%
[ Mon Nov 16 19:23:28 2020 ] Eval epoch: 29
[ Mon Nov 16 19:27:16 2020 ] Mean test loss of 619 batches: 4.165285659337083.
[ Mon Nov 16 19:27:16 2020 ] Top 1: 17.00%
[ Mon Nov 16 19:27:17 2020 ] Top 5: 36.51%
[ Mon Nov 16 19:27:17 2020 ] Training epoch: 30, LR: 0.0500
[ Tue Nov 17 00:02:35 2020 ] Mean training loss: 3.8809 (BS 32: 3.8809).
[ Tue Nov 17 00:02:35 2020 ] Time consumption: [Data]00%, [Network]98%
[ Tue Nov 17 00:02:35 2020 ] Eval epoch: 30
[ Tue Nov 17 00:06:24 2020 ] Mean test loss of 619 batches: 4.481445889095498.
[ Tue Nov 17 00:06:25 2020 ] Top 1: 15.19%
[ Tue Nov 17 00:06:25 2020 ] Top 5: 35.08%
[ Tue Nov 17 00:06:25 2020 ] Training epoch: 31, LR: 0.0500
[ Tue Nov 17 04:41:32 2020 ] Mean training loss: 3.8807 (BS 32: 3.8807).
[ Tue Nov 17 04:41:32 2020 ] Time consumption: [Data]00%, [Network]98%
[ Tue Nov 17 04:41:32 2020 ] Eval epoch: 31
[ Tue Nov 17 04:45:23 2020 ] Mean test loss of 619 batches: 4.213557656246549.
[ Tue Nov 17 04:45:24 2020 ] Top 1: 17.22%
[ Tue Nov 17 04:45:24 2020 ] Top 5: 36.56%
[ Tue Nov 17 04:45:24 2020 ] Training epoch: 32, LR: 0.0500
[ Tue Nov 17 09:20:29 2020 ] Mean training loss: 3.8820 (BS 32: 3.8820).
[ Tue Nov 17 09:20:29 2020 ] Time consumption: [Data]00%, [Network]98%
[ Tue Nov 17 09:20:29 2020 ] Eval epoch: 32
[ Tue Nov 17 09:24:21 2020 ] Mean test loss of 619 batches: 4.307116228852403.
[ Tue Nov 17 09:24:22 2020 ] Top 1: 16.39%
[ Tue Nov 17 09:24:22 2020 ] Top 5: 35.38%

How to combine the two data types (bone and joint) in training?

Hi authors, your article is really nice! I have a question: if the two types of data are concatenated, the initial C=6; how is that handled as input to the network? According to 2s-AGCN, the two kinds of data are fed through the network separately, then into the softmax classifier, and the output probabilities are summed. So do you feed the two outputs into the loss separately and sum them, or sum the network's (N, C) output tensors? How do you handle this?
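
For anyone landing here: the repo's ensemble.py does late fusion of separately trained streams rather than concatenating to C=6. A sketch in its style, assuming each stream saved an epoch1_test_score.pkl of {sample_name: score_vector} and the usual (names, labels) label pickle; paths are placeholders:

import pickle
import numpy as np

with open('./data/ntu/xsub/val_label.pkl', 'rb') as f:
    _, labels = pickle.load(f)
with open('work_dir/joint/epoch1_test_score.pkl', 'rb') as f:
    r1 = list(pickle.load(f).items())
with open('work_dir/bone/epoch1_test_score.pkl', 'rb') as f:
    r2 = list(pickle.load(f).items())

right = 0
for i, l in enumerate(labels):
    _, s1 = r1[i]
    _, s2 = r2[i]
    pred = int(np.argmax(s1 + s2))  # late fusion: sum the streams' class scores
    right += int(pred == int(l))
print('fused top-1:', right / len(labels))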

Softmax scores

Hi!

First, let me congratulate you on the work and a pleasant-to-use repo! I used your Kinetics-pretrained model for fine-tuning a 10-class classifier of new activities, and it works quite well. I would just like to clarify one thing: the output of your model (of the forward function) is the output of the last linear layer, right? So if I want class probabilities, I still need to apply nn.Softmax() to the output, right? And in the ensemble, you again add the outputs of the linear layer before applying any loss/softmax, correct?

I am just confused that when I apply Softmax to the output of my fine-tuned model, it tends to be overly confident (one probability near 1 and the rest near 0), so I wanted to clarify this so that I can search for problems in my training, if my assumptions about your model are right.

Thank you in advance for any response!
Best,
Petr
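
Assuming the forward pass does return unnormalized logits (which the ensemble script's raw score addition suggests), recovering probabilities is a single softmax, as a sketch:

import torch.nn.functional as F

logits = model(data)              # (N, num_class) outputs of the last linear layer
probs = F.softmax(logits, dim=1)  # class probabilities summing to 1

On the overconfidence: a softmax over cross-entropy-trained logits is often poorly calibrated, so near-one probabilities from a fine-tuned model are common and not by themselves a training bug; temperature scaling (dividing the logits by some T > 1 before the softmax) is a standard remedy.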

"kinetics_gendata.py" Error

When I executed this file as-is, the following error was printed:
"json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)"
I don't know if it's a problem with the file uploaded to Google Drive or with the A-Z ordering.

So I changed the order of the 'part' list in line 173 of the file, from
part = ['val', 'train']
to
part = ['train', 'val']

and that solved it.

Data type is NoneType

When I train with NTU120, I run into a problem where the data type is NoneType. I have tried many methods, but it still doesn't work.

Result on Kinetics Skeleton 400

Hi, thanks for your code. I'm trying to reproduce the results on Kinetics Skeleton 400 and I get ~33% with the joint stream alone. It seems hard to reach 38% even when ensembling with a bone stream. So I want to know the accuracy you got on Kinetics Skeleton 400 with the joint stream alone, and whether there is something wrong with my experiment. My config is shown below:

amp_opt_level: 1
assume_yes: false
base_lr: 0.1
batch_size: 64
checkpoint: null
config: config/kinetics-skeleton/train_joint.yaml
debug: false
device: [0, 1, 2, 3]
eval_interval: 1
eval_start: 1
feeder: feeders.feeder.Feeder
forward_batch_size: 64
half: false
ignore_weights: []
log_interval: 100
model: model.msg3d.Model
model_args: {graph: graph.kinetics.AdjMatrixGraph, num_class: 400, num_g3d_scales: 8,
num_gcn_scales: 8, num_person: 2, num_point: 18}
model_saved_name: ''
nesterov: true
num_epoch: 65
num_worker: 48
only_train_epoch: 5
only_train_part: true
optimizer: SGD
optimizer_states: null
phase: train
print_log: true
save_interval: 1
save_score: false
seed: 20
show_topk: [1, 5]
start_epoch: 0
step: [45, 55]
test_batch_size: 32
test_feeder_args: {data_path: ./data/kinetics/val_data_joint.npy, label_path: ./data/kinetics/val_label.pkl}
train_feeder_args: {data_path: ./data/kinetics/train_data_joint.npy, debug: false,
label_path: ./data/kinetics/train_label.pkl}
weight_decay: 0.0005
weights: null
work_dir: work_dir/kinetics/msg3d

Why not apply residual in MS-G3D or MS-GCN?

Hi Ziyu!
Recently I read your paper on skeleton-based action recognition. It is really solid work! However, when I tried to dive deeper into the model, I found it weird that there is no residual connection in either MS-G3D or MS-GCN.

I notice that there IS a residual path in MS-TCN, implemented by a 1x1 conv. However, after careful checking, there is no residual path in the other modules, which means: 1. the low-level skeleton data has to pass through three heavy STGC blocks to get the final result; 2. gradients may not flow back via residual links.

Also, in vanilla ST-GCN a residual link exists in every GCN-TCN block.

Nevertheless, the experimental results are not only stable but also satisfying. Could you share your thoughts on this model design? Thanks a lot :)

CUDA out of memory while evaluating pretrained

Hi,

first of all, thanks for sharing your project. I'm trying to replicate your steps but get stuck due to low GPU memory (I have about 4 GB of memory on my GPU; is that not enough?).

I followed the instructions but get an error at the bash eval_pretrained.sh step, apparently because torch tries to allocate too much memory at once (it asks for a chunk of >800 MB; see the log below). Is there any way to bypass this problem?

(nn) spegni@locanda:~/git/neural-networks/MS-G3D$ bash eval_pretrained.sh 
/home/spegni/git/neural-networks/MS-G3D/main.py:687: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  default_arg = yaml.load(f)
[ Wed May 12 16:34:08 2021 ] Model total number of params: 3194595
Cannot parse global_step from model weights filename
[ Wed May 12 16:34:08 2021 ] Loading weights from pretrained-models/ntu60-xsub-joint-fusion.pt
[ Wed May 12 16:34:08 2021 ] Model:   model.msg3d.Model
[ Wed May 12 16:34:08 2021 ] Weights: pretrained-models/ntu60-xsub-joint-fusion.pt
[ Wed May 12 16:34:08 2021 ] Eval epoch: 1
  0%|                                                   | 0/516 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/spegni/git/neural-networks/MS-G3D/main.py", line 702, in <module>
    main()
  File "/home/spegni/git/neural-networks/MS-G3D/main.py", line 698, in main
    processor.start()
  File "/home/spegni/git/neural-networks/MS-G3D/main.py", line 660, in start
    self.eval(
  File "/home/spegni/git/neural-networks/MS-G3D/main.py", line 580, in eval
    output = self.model(data)
  File "/home/spegni/.virtualenvs/nn/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/spegni/git/neural-networks/MS-G3D/model/msg3d.py", line 160, in forward
    x = F.relu(self.sgcn1(x) + self.gcn3d1(x), inplace=True)
  File "/home/spegni/.virtualenvs/nn/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/spegni/git/neural-networks/MS-G3D/model/msg3d.py", line 100, in forward
    out_sum += gcn3d(x)
  File "/home/spegni/.virtualenvs/nn/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/spegni/git/neural-networks/MS-G3D/model/msg3d.py", line 61, in forward
    x = self.gcn3d(x)
  File "/home/spegni/.virtualenvs/nn/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/spegni/.virtualenvs/nn/lib/python3.9/site-packages/torch/nn/modules/container.py", line 119, in forward
    input = module(input)
  File "/home/spegni/.virtualenvs/nn/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/spegni/git/neural-networks/MS-G3D/model/ms_gtcn.py", line 106, in forward
    out = self.mlp(agg)
  File "/home/spegni/.virtualenvs/nn/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/spegni/git/neural-networks/MS-G3D/model/mlp.py", line 23, in forward
    x = layer(x)
  File "/home/spegni/.virtualenvs/nn/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/spegni/.virtualenvs/nn/lib/python3.9/site-packages/torch/nn/modules/batchnorm.py", line 135, in forward
    return F.batch_norm(
  File "/home/spegni/.virtualenvs/nn/lib/python3.9/site-packages/torch/nn/functional.py", line 2149, in batch_norm
    return torch.batch_norm(
RuntimeError: CUDA out of memory. Tried to allocate 880.00 MiB (GPU 0; 3.82 GiB total capacity; 1.41 GiB already allocated; 434.81 MiB free; 1.82 GiB reserved in total by PyTorch)

NTU RGB+D 60 XSub
Traceback (most recent call last):
  File "/home/spegni/git/neural-networks/MS-G3D/ensemble.py", line 31, in <module>
    with open(os.path.join(arg.joint_dir, 'epoch1_test_score.pkl'), 'rb') as r1:
FileNotFoundError: [Errno 2] No such file or directory: 'pretrain_eval/ntu60/xsub/joint-fusion/epoch1_test_score.pkl'

Kinetics 400 dataset

Hello, I could not download the Kinetics 400 dataset from ST-GCN. Could you please send me a link? I badly need it for my research. My e-mail is [email protected].
I'll be very grateful if you could send it to me.
Thank you a lot.

Some questions about Table 2

Why is the total parameter count less than half of the baseline's after simply replacing the TCN with the proposed multi-scale TCN module? I would assume the proposed TCN module contains more parameters, since it has multiple branches.
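
A toy comparison (not the repo's exact module) showing why multi-branch temporal convolutions can cost fewer parameters: each branch first reduces channels with a 1x1 conv, so several narrow branches with small kernels replace one wide k-tap convolution over all channels.

import torch.nn as nn

def count(m):
    return sum(p.numel() for p in m.parameters())

c, k, branches = 96, 9, 4
plain_tcn = nn.Conv2d(c, c, kernel_size=(k, 1), padding=((k - 1) // 2, 0))
one_branch = nn.Sequential(
    nn.Conv2d(c, c // branches, kernel_size=1),                       # 1x1 channel reduction
    nn.Conv2d(c // branches, c // branches, (3, 1), padding=(1, 0)),  # narrow temporal conv
)

print('plain TCN params:   ', count(plain_tcn))              # 83040
print('multi-branch params:', branches * count(one_branch))  # 16320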

ST-private

MS-G3D/LICENSE, lines 140 to 158 at 88ef4ef:

Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part, for NonCommercial purposes only; and
b. produce, reproduce, and Share Adapted Material for
NonCommercial purposes only.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.

Is the format of data generated with `ntu_gendata.py` the same as described in `lshiwjx/2s-AGCN`?

Hi,
I generated the NTU RGB+D data using the script given in lshiwjx/2s-AGCN. I see that you referenced that work too, and on comparing the ntu_gendata.py files they are almost the same.
But when I run bash eval_pretrained.sh with that generated data I get:

/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.

So I just wanted to confirm: does your script for generating the NTU RGB+D data produce the same format as lshiwjx/2s-AGCN, or do I have to regenerate the data with your script? (It takes a lot of time, so I thought I'd confirm before going for it.)
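
That CUDA assertion typically means a label index fell outside [0, n_classes), which points at the label pickle rather than the .npy layout. A quick sanity check, assuming the (sample_names, labels) format; the path is a placeholder:

import pickle

with open('./data/ntu/xsub/val_label.pkl', 'rb') as f:
    _, labels = pickle.load(f)
print(min(labels), max(labels))  # must satisfy 0 <= label <= num_class - 1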
