
spatial-temporal-re-identification's Introduction

Spatial-Temporal Person Re-identification


Code for st-ReID (PyTorch). On Market-1501, we achieve Rank@1=98.1%, mAP=87.6% without re-ranking and Rank@1=98.0%, mAP=95.5% with re-ranking. On DukeMTMC-reID, we achieve Rank@1=94.4%, mAP=83.9% without re-ranking and Rank@1=94.5%, mAP=92.7% with re-ranking.

Updates and FAQ:

  • 2023.12.26: I no longer maintain this code base. The PyTorch version and some packages it depends on are now obsolete, but I don't think that is a big deal. I may be able to offer implementation suggestions if needed.
  • 2020.01.08: If you do not want to re-train a model, you can follow this link: #26 (comment)
  • 2019.12.26: a demo figure has been added. I am not sure whether it still works, because it was written a year ago. I will update this file in the future.
  • 2019.07.28: Models (+RE) (Google Drive link: https://drive.google.com/drive/folders/1FIreE0pUGiqLzppzz_f7gHw0kaXZb1kC)
  • 2019.07.11: Models (+RE) (Baidu Yun link: https://pan.baidu.com/s/1QMp22dVGJvBH45e4XPdeKw, password: dn7b) are released. Note that, for Market, the results differ slightly from the paper because we trained these models with PyTorch 0.4.1 (mAP is slightly higher than the paper, while Rank-1 is slightly lower). We may reproduce the paper results with PyTorch 0.3 later.
  • 2019.07.11: README.md fix: python3 prepare --Duke ---> python3 prepare.py --Duke
  • 2019.06.02: How do you add the spatial-temporal constraint to a conventional re-id model? Replace steps 2 and 3 below with your own visual feature representation (see the sketch after this list).
  • 2019.05.31: gen_st_model_market.py: added lines 68-69.
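
For anyone swapping in their own appearance model, here is a minimal sketch of that replacement. This is not the repo's code: the .mat key names ('query_f', 'query_label', 'query_cam', 'query_frames' and their gallery counterparts) are inferred from the evaluate_st.py call quoted in the issues below, and the feature extractor is a random stand-in for your own backbone, so verify the keys against test_st_market.py before relying on it.

    import numpy as np
    import scipy.io

    def extract_features(image_paths):
        # Stand-in for your own backbone: one L2-normalized vector per image.
        feats = np.random.randn(len(image_paths), 2048).astype(np.float32)
        return feats / np.linalg.norm(feats, axis=1, keepdims=True)

    # Dummy metadata so the sketch runs end to end; replace with your dataset's.
    query_paths = ['q0.jpg', 'q1.jpg']
    gallery_paths = ['g0.jpg', 'g1.jpg', 'g2.jpg']
    result = {
        'query_f': extract_features(query_paths),
        'query_label': np.array([1, 2]),
        'query_cam': np.array([1, 2]),
        'query_frames': np.array([451, 1020]),
        'gallery_f': extract_features(gallery_paths),
        'gallery_label': np.array([1, 2, 3]),
        'gallery_cam': np.array([3, 4, 5]),
        'gallery_frames': np.array([900, 1500, 60]),
    }
    # The "generate st model" step later adds the 'distribution' entry that
    # evaluate_st.py fuses with these features.
    scipy.io.savemat('pytorch_result.mat', result)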

1. ST-ReID

1.1 model

1.2 result

2. Prerequisites

  • PyTorch 0.3
  • Python 3.6
  • Numpy

3. Experiments

Market1501

  1. data preparation

    1. Change the dataset path.
    2. python3 prepare.py --Market
  2. train (appearance feature learning)
    python3 train_market.py --PCB --gpu_ids 2 --name ft_ResNet50_pcb_market_e --erasing_p 0.5 --train_all --data_dir "/home/huangpg/st-reid/dataset/market_rename/"

  3. test (appearance feature extraction)
    python3 test_st_market.py --PCB --gpu_ids 2 --name ft_ResNet50_pcb_market_e --test_dir "/home/huangpg/st-reid/dataset/market_rename/"

  4. generate st model (spatial-temporal distribution; a sketch of the idea follows these steps)
    python3 gen_st_model_market.py --name ft_ResNet50_pcb_market_e --data_dir "/home/huangpg/st-reid/dataset/market_rename/"

  5. evaluate (joint metric; you can use your own visual features or spatial-temporal streams)
    python3 evaluate_st.py --name ft_ResNet50_pcb_market_e

  6. re-rank
    6.1) python3 gen_rerank_all_scores_mat.py --name ft_ResNet50_pcb_market_e
    6.2) python3 evaluate_rerank_market.py --name ft_ResNet50_pcb_market_e
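
A rough sketch of what step 4 estimates (see gen_st_model_market.py for the real implementation; the bin width below is illustrative, while the 3000-bin depth matches the distribution shape reported in the issues further down): for every camera pair, histogram the frame gaps between sightings of the same identity on the training set, then normalize into a probability over time-gap bins.

    import numpy as np

    def spatial_temporal_histogram(records, num_cams=6, bin_width=100, num_bins=3000):
        # records: (person_id, cam_id, frame) triples from the training set.
        hist = np.zeros((num_cams, num_cams, num_bins))
        by_id = {}
        for pid, cam, frame in records:
            by_id.setdefault(pid, []).append((cam, frame))
        for sightings in by_id.values():
            for ci, fi in sightings:
                for cj, fj in sightings:
                    b = abs(fi - fj) // bin_width
                    if b < num_bins:
                        hist[ci - 1, cj - 1, b] += 1
        totals = hist.sum(axis=2, keepdims=True)
        return hist / np.maximum(totals, 1)   # p(time-gap bin | ci, cj)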

DukeMTMC-reID

  1. data preparation
    python3 prepare.py --Duke

  2. train (appearance feature learning)
    python3 train_duke.py --PCB --gpu_ids 2 --name ft_ResNet50_pcb_duke_e --erasing_p 0.5 --train_all --data_dir "/home/huangpg/st-reid/dataset/DukeMTMC_prepare/"

  3. test (appearance feature extraction)
    python3 test_st_duke.py --PCB --gpu_ids 2 --name ft_ResNet50_pcb_duke_e --test_dir "/home/huangpg/st-reid/dataset/DukeMTMC_prepare/"

  4. generate st model (spatial-temporal distribution)
    python3 gen_st_model_duke.py --name ft_ResNet50_pcb_duke_e --data_dir "/home/huangpg/st-reid/dataset/DukeMTMC_prepare/"

  5. evaluate (joint metric; you can use your own visual features or spatial-temporal streams)
    python3 evaluate_st.py --name ft_ResNet50_pcb_duke_e

  6. re-rank
    6.1) python3 gen_rerank_all_scores_mat.py --name ft_ResNet50_pcb_duke_e
    6.2) python3 evaluate_rerank_duke.py --name ft_ResNet50_pcb_duke_e

Citation

If you use this code, please kindly cite our paper.

@inproceedings{guangcong2019aaai,
  title={Spatial-Temporal Person Re-identification},
  author={Wang, Guangcong and Lai, Jianhuang and Huang, Peigen and Xie, Xiaohua},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  pages={8933--8940},
  year={2019}
}

Paper link: https://www.aaai.org/ojs/index.php/AAAI/article/view/4921 or https://arxiv.org/abs/1812.03282

Related Repos

Our code is mainly based on this repository.

spatial-temporal-re-identification's People

Contributors

wanggcong


spatial-temporal-re-identification's Issues

test_st_market.py:None (test_st_market.py)

test_st_market.py:32: in
opt = parser.parse_args()
C:\ProgramData\Anaconda3\lib\argparse.py:1752: in parse_args
self.error(msg % ' '.join(argv))
C:\ProgramData\Anaconda3\lib\argparse.py:2501: in error
self.exit(2, _('%(prog)s: error: %(message)s\n') % args)
C:\ProgramData\Anaconda3\lib\argparse.py:2488: in exit
_sys.exit(status)
E SystemExit: 2

I can't run step 3 on Windows 10.
Can you tell me the reason?

Prediction Code Sample

Hi, I wonder if you can share code that compares two images. We need to compare two images and decide whether the person in them is the same or not. How can we use your system for that?
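
There is no official snippet in the repo for this, but a minimal sketch of the usual recipe follows, assuming a generic torchvision backbone in place of the trained st-ReID model; the 0.7 threshold is an arbitrary placeholder to tune on your own data.

    import torch
    from torchvision import models, transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize((256, 128)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    backbone = models.resnet50(pretrained=True)   # stand-in for the trained re-id net
    backbone.fc = torch.nn.Identity()             # drop the classifier, keep features
    backbone.eval()

    def embed(path):
        x = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
        with torch.no_grad():
            f = backbone(x).squeeze(0)
        return f / f.norm()

    def same_person(path_a, path_b, threshold=0.7):
        # Cosine similarity of L2-normalized embeddings.
        return float(embed(path_a) @ embed(path_b)) > threshold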

Why is the validation loss lower than the training loss?

I haven't dug deep into the code, but I find that the validation accuracy is generally higher than the training accuracy, like this:
Epoch 23/59

train Loss: 0.3353 Acc: 0.8267
val Loss: 0.2442 Acc: 0.8602

Epoch 24/59

train Loss: 0.3229 Acc: 0.8391
val Loss: 0.2326 Acc: 0.8762

Epoch 25/59

train Loss: 0.3149 Acc: 0.8486
val Loss: 0.2430 Acc: 0.8495

Epoch 26/59

train Loss: 0.3033 Acc: 0.8621
val Loss: 0.1564 Acc: 0.9294

Epoch 27/59

train Loss: 0.2955 Acc: 0.8710
val Loss: 0.1795 Acc: 0.9321

Epoch 28/59

train Loss: 0.2854 Acc: 0.8745
val Loss: 0.1871 Acc: 0.9161

Epoch 29/59

train Loss: 0.2819 Acc: 0.8761
val Loss: 0.1720 Acc: 0.9201

Is it because the validation process takes the spatial-temporal distance into account while training doesn't?

basic information question

Q1. The model we train covers only visual features, and the st model is a pytorch_result2.mat file. There is no trained model that combines visual features and st.
Q2. The Market-1501 dataset has 6 cameras and DukeMTMC-reID has 8, so why is np.zeros((class_num, 8)) created at line 51 of gen_st_model_market.py?
market: 1->2 1->3 1->4 1->5 1->6
duke: 1->2 1->3 1->4 1->5 1->6 1->7 1->8
Q3. Is pytorch_result2.mat fixed per dataset? Is the visual model one-to-one with it? Suppose I need to use the PCB model trained on Market; can I only use the corresponding pytorch_result2.mat?
Thanks!

why is the frame number averaged in gen_st_model.py?

Is the averaged frame the time at which somebody is at the center point? When a person passes through one camera twice or more, the averaged frame will be wrong; I am confused. (Excuse my English.)
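
For readers with the same question, this is my reading of the step being asked about, as a sketch rather than the repo's exact code: gen_st_model keeps one representative timestamp per (person, camera) by averaging that person's frames under that camera, which indeed blurs the estimate when a person passes the same camera more than once.

    import numpy as np

    def mean_frame_per_person_camera(records):
        # records: (person_id, cam_id, frame) triples; returns one averaged
        # timestamp per (person, camera) pair.
        acc = {}
        for pid, cam, frame in records:
            acc.setdefault((pid, cam), []).append(frame)
        return {key: float(np.mean(frames)) for key, frames in acc.items()}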

Pre-trained weights

Hello,
Thank you for sharing the code for the paper. Could you please share the weights on Google Drive or any other host that doesn't require a Baidu account? I'm having problems downloading from outside China.

In train.py file

Hello
You have done excellent and impressive work. I am new to machine learning and ran into problems trying to run the code; I would be grateful for your help.
My specs: i5-8300H, GTX 1050 Ti 4 GB, 8 GB RAM, running this code on Windows 10.
I want to re-train a model using the Market-1501 dataset.
Error:
C:\Users\ATUL\Desktop\Python\Spatial-Temporal-Re-identification-master\model.py:14: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
init.kaiming_normal(m.weight.data, a=0, mode='fan_out')
C:\Users\ATUL\Desktop\Python\Spatial-Temporal-Re-identification-master\model.py:15: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
init.constant(m.bias.data, 0.0)
C:\Users\ATUL\Desktop\Python\Spatial-Temporal-Re-identification-master\model.py:17: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(m.weight.data, 1.0, 0.02)
C:\Users\ATUL\Desktop\Python\Spatial-Temporal-Re-identification-master\model.py:18: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
init.constant(m.bias.data, 0.0)
C:\Users\ATUL\Desktop\Python\Spatial-Temporal-Re-identification-master\model.py:23: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(m.weight.data, std=0.001)
C:\Users\ATUL\Desktop\Python\Spatial-Temporal-Re-identification-master\model.py:24: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
init.constant(m.bias.data, 0.0)
net output size:
torch.Size([8, 751])
0
[Resize(size=(288, 144), interpolation=PIL.Image.BICUBIC), RandomCrop(size=(256, 128), padding=None), RandomHorizontalFlip(p=0.5), ToTensor(), Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), <random_erasing.RandomErasing object at 0x000001CD467C78B0>]

Traceback (most recent call last):
File "", line 1, in
File "C:\Users\ATUL\Anaconda3\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\ATUL\Anaconda3\lib\multiprocessing\spawn.py", line 125, in _main
prepare(preparation_data)
File "C:\Users\ATUL\Anaconda3\lib\multiprocessing\spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\ATUL\Anaconda3\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "C:\Users\ATUL\Anaconda3\lib\runpy.py", line 265, in run_path
return _run_module_code(code, init_globals, run_name,
File "C:\Users\ATUL\Anaconda3\lib\runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "C:\Users\ATUL\Anaconda3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\ATUL\Desktop\Python\Spatial-Temporal-Re-identification-master\train_market.py", line 20, in
from model import ft_net, ft_net_dense, PCB
ModuleNotFoundError: No module named 'model'


A question I hope you can answer

According to Eq. (3), the estimate of p(y=1 | k, ci, cj) takes the bin of each observed sample as the mean of a Gaussian, but what the code does is
gaussian_vect = gaussian_func2(vect, 0, o)
i.e., everything is smoothed with a zero-mean Gaussian.
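
For context, the two formulations coincide: placing a Gaussian at each occupied bin k and evaluating it at bin t is the same as evaluating a zero-mean Gaussian at the offset t - k. A small illustration with names of my own choosing (only the zero-mean call mirrors the repo's gaussian_func2(vect, 0, o)):

    import numpy as np

    def gaussian(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def smooth_histogram(hist, sigma=50.0):
        # Parzen-style smoothing: a Gaussian centred on every bin, weighted by
        # that bin's count. Centring the kernel on bin k is equivalent to
        # evaluating a zero-mean Gaussian at the offsets (bins - k).
        bins = np.arange(len(hist))
        smoothed = np.zeros(len(hist), dtype=float)
        for k, count in enumerate(hist):
            if count > 0:
                smoothed += count * gaussian(bins, k, sigma)
        return smoothed / max(smoothed.sum(), 1e-12)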

Little confused about function spatial_temporal_distribution

I am researching your project, but I am confused about the function spatial_temporal_distribution in gen_st_model_xxx.py. Could you briefly explain it? What is its main purpose, and which part of the paper does it correspond to?

Laplace smoothing

The paper uses Laplace smoothing to estimate a prior probability in naive Bayes, but I can't see where Laplace smoothing is used in the code.
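
For readers unfamiliar with the term: add-k (Laplace) smoothing pads every count before normalizing, so empty bins keep a small nonzero probability. A one-function illustration (not taken from the repo):

    import numpy as np

    def laplace_smooth(counts, k=1.0):
        # (count + k) / (total + k * number_of_bins)
        counts = np.asarray(counts, dtype=float)
        return (counts + k) / (counts.sum() + k * counts.size)

    print(laplace_smooth([0, 3, 7]))  # [0.0769... 0.3077... 0.6154...]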

License Please!

Hello, I know this repo has been around for a while. I was hoping to use it in a project, but I don't see an MIT license anywhere. Could you please add an MIT license to the repo? It would encourage others to use your code too. Thank you!

Little confused

Hi, I am trying to use your code to perform another re-id task, and I am a little confused by this line. What is its purpose?

dict_cam_seq_max = {

In Market-1501, the file name has two annotations describing the camera info, like (person-id)_c(cam-id)s(recording sequence id?)_(timestamp)_(count).jpg.

I think the key means the seq_id, but I don't know the exact meaning of the value. Does the value mean some kind of video length, or the largest gap between two frames? Am I right?

Any Help will be appreciated!
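
For reference, a minimal parser for those file names, assuming the standard Market-1501 layout (pid)_c(cam)s(seq)_(frame)_(index).jpg; the comment's reading of dict_cam_seq_max is my inference, not an authoritative answer:

    import re

    # e.g. "0002_c1s1_000451_03.jpg": person 2, camera 1, sequence 1, frame 451.
    # The frame counter restarts with each recording sequence, which is
    # presumably why the repo keeps a per-(camera, sequence) maximum-frame
    # table such as dict_cam_seq_max: cumulative offsets turn per-sequence
    # frames into one global timeline per camera.
    PATTERN = re.compile(r'(\d+)_c(\d)s(\d)_(\d+)_(\d+)')

    def parse_name(fname):
        pid, cam, seq, frame, idx = map(int, PATTERN.match(fname).groups())
        return pid, cam, seq, frame

    print(parse_name('0002_c1s1_000451_03.jpg'))  # (2, 1, 1, 451)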

*** KeyError: 'query_f'

When I run "python3 evaluate_st.py --name ft_ResNet50_pcb_market_e ", I have some problems.
problem like this:
*** KeyError: 'query_f'
I load model is "model/modelft_ResNet50_pcb_market_e/pytorch_result2.mat", which from the link (baiduyun Link:https://pan.baidu.com/s/1QMp22dVGJvBH45e4XPdeKw password:dn7b).
I checked the code:
result = scipy.io.loadmat('model/'+name+'/'+'pytorch_result2.mat')
I found "result" has no key 'query_f'.
I don't know how to operate.

not able to reproduce your paper result

Hi,
I used your trained models and tested them, but I am getting a very large difference in results. The reproduced result is:
top1:0.697150 top5:0.740202 top10:0.755344 mAP:0.329223

Could you please explain the reason for such a large gap?

Train on multiple GPUs.

I already removed the GPU-pinning code from the training script, but it still runs on one GPU. I commented out these lines:
#torch.cuda.set_device(gpu_ids[0]) # if torch.cuda.is_available: # network.cuda(gpu_ids[0])
and added something like:
model = DataParallel(model, gpu_ids)
but it didn't work. Do you have any suggestions?
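
For comparison, a minimal multi-GPU sketch in stock PyTorch (the linear layer is a stand-in for the re-id network, not the repo's model): DataParallel must wrap the model before the optimizer is built, and it splits each batch across device_ids on the forward pass, so the batch size must exceed the GPU count.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(2048, 751))  # stand-in for the re-id net
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))
    model = model.cuda()

    x = torch.randn(32, 2048).cuda()  # the batch is scattered across the GPUs
    out = model(x)                    # outputs are gathered back on GPU 0
    print(out.shape)                  # torch.Size([32, 751])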

Can't evaluate: KeyError: 'query_f'

I am running python evaluate_st.py --name market1501, but I receive the following error:

Traceback (most recent call last):
  File "evaluate_st.py", line 149, in <module>
    query_feature = result['query_f']
KeyError: 'query_f'

When I print the result I get this (the full array printout is elided; the dict contains only the header, version, globals, and distribution entries, with no query feature keys):

{'header': b'MATLAB 5.0 MAT-file Platform: posix, Created on: Thu Jun 6 22:53:34 2019', 'version': '1.0', 'globals': [], 'distribution': array([[[0., 0., 0., ..., 0., 0., 0.], ...]])}

Laplace smoothing and joint metric

Hi,
Could you please explain in layman's terms how the joint metric is evaluated, and what the purpose of Laplace smoothing is? I read your paper but was not able to understand it clearly.
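
As best I can reconstruct from the paper (a reader's summary; check the paper's equations for the authoritative form), both streams are squashed through a logistic function before being multiplied, so that neither stream alone can veto a match, and the smoothing keeps rare time gaps from getting exactly zero probability:

    f(x; \lambda, \gamma) = \frac{1}{1 + \lambda e^{-\gamma x}}, \qquad
    p_{joint} = f(s; \lambda_0, \gamma_0) \cdot f(p_{st}; \lambda_1, \gamma_1)

where s is the visual similarity score and p_st is the smoothed spatial-temporal probability.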

Why use both "test" and "generate st model" mothed?

Sorry to bother you. It confuses me that the code of the two methods is the same; what would happen if I didn't apply one of them? In other words, I want to know the effect of each method. Thanks!

Visual stream features vs Spatial-Temporal stream

Hello. A pretty nice and well-coded repo; thanks for your contributions. I have a question about feature extraction. For the visual stream, you extract gallery and query features after training the model, but for the spatial-temporal stream you extract the distribution from the training-set data (camID + frames + labels). Shouldn't it come from the gallery/query set, since you later join those scores? Please explain, as I am a little confused. Many thanks.

download pretrained models

Hello,

I hope this message finds you well. I am working with your project and have encountered an issue with the download links. The files I expected to be in mat and pth format (specifically, pytorch_result2.mat for the st model and net_last.pth for the net weights) are instead in rpm or dbm format.

Additionally, I noticed that both files seem to be the same, and neither is in the expected mat or pth format. Could you please provide guidance on resolving this? There seems to be a discrepancy in the file formats or a potential error in the download links.

Any assistance or clarification you can provide would be greatly appreciated.

Thank you,
Elnaz

Replaced ResNet50 to VGG16 as backbone

I am working on spatial-temporal person re-id. I am constrained to use VGG16 as the backbone architecture and have added/removed some custom layers at the end of VGG16. I wanted to improve my Rank@1 and mAP on the Market-1501 dataset, so I tried my own visual stream features (VGG16) with the spatial-temporal constraint. I got some results and would like to discuss them with you; please see the screenshot I have shared. I evaluated the VGG16 model with and without the spatial-temporal constraint. Interestingly, with the spatial-temporal constraint the Rank@1 metric decreased by around 3% (see the values in the screenshot). Why do the values decrease when the spatial-temporal constraint is applied, and how can I improve them with the VGG16 architecture?
[screenshot: Screen Shot 2019-10-08 at 12 45 12]

PyTorch 0.3 is not compatible with the code provided.

Hey, thanks for your contribution. I can't reproduce your result due to a version incompatibility: you state that the code works on PyTorch 0.3, but it uses some features from PyTorch 0.4, which causes errors. For instance, train_market.py line 202, running_loss += loss.item(): .item() is not available in PyTorch 0.3. Could you please advise me on how to tackle this, or provide alternative code that works?

Thanks
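
A small compatibility shim is the usual workaround (my suggestion, not from the repo): .item() arrived in PyTorch 0.4, while on 0.3 the scalar lives in loss.data[0].

    def loss_value(loss):
        # Version-agnostic scalar extraction from a 0-dim loss tensor/Variable.
        try:
            return loss.item()      # PyTorch >= 0.4
        except AttributeError:
            return loss.data[0]     # PyTorch 0.3

    # in the training loop:
    # running_loss += loss_value(loss)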

Gaussian smoothing on DukeMTMC-reID

Hi. Why is Gaussian smoothing not applied to the spatial-temporal distribution in gen_st_model_duke.py, as it is for the Market dataset? Also, for the Market dataset, the distribution-smoothing code is commented out (lines 136-139 of gen_st_model_market.py).

can't reproduce your precision

I used both the scripts and the pre-trained weights from your repo, but the result I got is not as good as yours.
I got (Market-1501):
top1:0.944774 top5:0.983670 top10:0.990202 mAP:0.796412
alpha, smooth: 5 50
Why is there such a gap?

Question about the accuracy on duke

Hi, I downloaded the two pretrained models (net_last.pth and result2.mat) you attached in another issue, but I cannot reach the top accuracy reported in your paper. For example, the top1 I get is 94.3, and after re-ranking the top1 is 92.2, whereas the paper reports 94.4 and 94.5. I am wondering about the reason for this difference; would you mind giving me some advice? Thanks a lot.

train on custom dataset

  1. How do I train on a custom dataset?

  2. I have images from multiple cameras, but I don't have the camera information.
    Can I train without specifying camera IDs?

Thanks

Unseen data

Hello, thanks for sharing your work!

In the code, I see that you remove the classification layer in order to extract raw features from the gallery images. Does this mean the model could work for re-id on unseen data (data it wasn't trained on, filmed by different cameras in a different setting)? If so, do you think it could come close to the reported accuracy without being trained for that domain?

Why is st-ReID + RE lower than in the paper?

Environment: PyTorch 1.0 / Python 3.6 / one GPU.
I trained the st-ReID+RE model with the parameters your code provides (the original parameters).
st-ReID+RE:
mAP: 87%, top1: 96.7%
st-ReID+RE + re-rank:
mAP: 94.5%, top1: 96.8%

KeyError: 'module name can\'t contain "."'

Hi, I am new to person re-identification, and I found this model excellent.
But I am facing a problem: since KeyError: 'module name can't contain "."' happens on every run, I removed '.' from lines 73 to 78 in "C:\Users#####\anaconda3\envs\STReID\lib\site-packages\torchvision\models\densenet.py", but I still can't train the model. How can I sort out this problem?

Which data set has been used to train "net_last.pth" ?

Hi,
Thanks for sharing your work.
On which dataset was the pre-trained model net_last.pth trained?
Is it possible to share pre-trained models for other datasets such as DukeMTMC, CUHK, etc.?
Regards

Smoothing in the Joint Metric

Hi, thanks for the great work and sharing the code!

May I ask which parts of the code reflect the two observations mentioned in the paper, namely the Laplace smoothing and the logistic function used to filter the visual and temporal information? I have some difficulty locating those implementations. Thank you!

About training and validation data

I noticed that the default training command contains --train_all, and after preparing the Market-1501 dataset it seems that train_all is made up of both train and val. Does that mean the training data contains the validation data?

Hi, while evaluating I'm facing this error; would you please help me fix it?

File "evaluate_st.py", line 230, in
ap_tmp, CMC_tmp = evaluate(query_feature[i],query_label[i],query_cam[i],query_frames[i], gallery_feature,gallery_label,gallery_cam,gallery_frames,distribution)
File "evaluate_st.py", line 53, in evaluate
pr = distribution[gc[i]-1][qc-1][hist_]
IndexError: index 3028 is out of bounds for axis 0 with size 3000
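
The error says the time-gap bin index ran past the distribution's 3000 bins. A guard along these lines (my sketch, not the repo's fix) clamps the index before the lookup; whether clamping or scoring such gaps as near-zero probability is the right behaviour depends on how the histogram was built:

    def st_probability(distribution, gallery_cam, query_cam, bin_index, num_bins=3000):
        # Clamp out-of-range time-gap bins to the last histogram bin.
        bin_index = min(bin_index, num_bins - 1)
        return distribution[gallery_cam - 1][query_cam - 1][bin_index]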

train error

hi
During in the process of run train_market.py, i get a error as follows:

train_market.py", line 159, in train_model
outputs = model(inputs)
ValueError: expected 2D or 3D input (got 1D input)

what's the problem?
can you show me the market category?
Thank you.
