uzh-rpg / rpg_asynet

Code for the paper "Event-based Asynchronous Sparse Convolutional Networks" (ECCV, 2020).

License: GNU General Public License v3.0

rpg_asynet's Introduction

Event-based Asynchronous Sparse Convolutional Networks

This is the code for the paper Event-based Asynchronous Sparse Convolutional Networks (PDF) by Nico Messikommer*, Daniel Gehrig*, Antonio Loquercio, and Davide Scaramuzza.

If you use any of this code, please cite the following publication:

@InProceedings{Messikommer20eccv,
  author        = {Nico Messikommer and Daniel Gehrig and Antonio Loquercio and Davide Scaramuzza},
  title         = {Event-based Asynchronous Sparse Convolutional Networks},
  booktitle     = {European Conference on Computer Vision (ECCV)},
  url           = {http://rpg.ifi.uzh.ch/docs/ECCV20_Messikommer.pdf},
  year          = 2020
}

Installation

First set up an Anaconda environment:

conda create -n asynet python=3.7  
conda activate asynet

Then clone the repository and install the dependencies with pip:

git clone git@github.com:uzh-rpg/rpg_asynet.git
cd rpg_asynet/
pip install -r requirements.txt

In addition, sparseconvnet 0.2 needs to be installed from https://github.com/facebookresearch/SparseConvNet.

CPP Bindings

To build the cpp bindings for the event representation tool, you can follow the instructions below:

pip install event_representation_tool/

For the bindings for asynchronous sparse convolutions, we first need to clone the 3.4.0-rc version of Eigen into the include folder. In addition, pybind11 is required.

cd async_sparse_py/include
git clone https://gitlab.com/libeigen/eigen.git --branch 3.4.0-rc1
conda install -c conda-forge pybind11

Finally, the bindings can be installed:

pip install async_sparse_py/
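
As a quick sanity check that the extensions built correctly, you can try importing them from Python. This is a minimal sketch; the module names async_sparse and event_representations are assumptions inferred from the pip package names, not confirmed import paths:

import importlib

# Hypothetical smoke test: the module names are assumptions inferred
# from the pip package names (sparseconvnet, async-sparse,
# event-representations); adjust them if your build differs.
for name in ["sparseconvnet", "async_sparse", "event_representations"]:
    try:
        importlib.import_module(name)
        print(f"{name}: OK")
    except ImportError as e:
        print(f"{name}: not found ({e})")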

Sparse CNN Training

The training parameters can be adjusted in the config/settings.yaml file. The following training tasks and datasets are supported:

  • Classification on NCars and NCaltech101.
  • Object Detection on Prophesee Gen1 Automotive and NCaltech101.

To test the code, make a directory data/ in the root of the repository and download one of the datasets, e.g. N-Caltech101:

mkdir data
cd data
wget http://rpg.ifi.uzh.ch/datasets/gehrig_et_al_iccv19/N-Caltech101.zip
unzip N-Caltech101.zip
rm N-Caltech101.zip
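
To verify that the extraction worked, the following minimal Python sketch counts the class folders; the directory layout (one folder per class under data/N-Caltech101) is an assumption:

import os

# Assumption: N-Caltech101 extracts to data/N-Caltech101 with one
# subfolder per class; adjust the path if your layout differs.
root = "data/N-Caltech101"
classes = sorted(
    d for d in os.listdir(root) if os.path.isdir(os.path.join(root, d))
)
print(f"{len(classes)} class folders found")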

The dataset can be configured in the dataset/name tag in config/settings.yaml. Different model types can be chosen based on the task and whether or not sparse convolutions should be used.

  • Sparse VGG for Classification and Object Detection
  • Standard VGG for Classification and Object Detection

These can be configured in the model tag in config/settings.yaml.
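
If you prefer to switch these settings programmatically, a minimal sketch follows; the exact key layout of settings.yaml (a dataset/name tag and a model tag) is assumed from the description above, and the example values are taken from elsewhere on this page:

import yaml

# Sketch only: the nesting of config/settings.yaml is assumed from the
# dataset/name and model tags mentioned above; check the actual file.
with open("config/settings.yaml") as f:
    settings = yaml.safe_load(f)

settings["dataset"]["name"] = "NCaltech101"  # or NCars, Prophesee, ...
settings["model"] = "fb_sparse_vgg"          # e.g. fb_sparse_object_det

with open("config/settings.yaml", "w") as f:
    yaml.safe_dump(settings, f)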

The following command starts the training:

CUDA_VISIBLE_DEVICES=<GPU_ID>, python train.py --settings_file config/settings.yaml

By default, a folder with the current date and time is created in log/ containing the corresponding tensorboard files.

Unit Tests

To test the different asynchronous and sparse layers, multiple unit tests are implemented in unittests/:

  • Asynchronous Sparse Convolution Layer 2D: sparse_conv2D_test.py
  • Asynchronous Sparse Convolution Layer 2D CPP Implementation: sparse_conv2D_cpp_test.py
  • Asynchronous Sparse Max Pooling Layer: sparse_max_pooling_test.py
  • Asynchronous Sparse VGG: sparse_VGG_test.py.
    There are three paths in the script sparse_VGG_test.py, specified with 'PATH_TO_MODEL' and 'PATH_TO_DATA', which need to be replaced with the paths to the NCaltech classification dataset and to a sparse classification model trained on N-Caltech.

To run the unit tests, call:

python -m unittest discover -s unittests/ -p '*_test.py'

Evaluation

There is a script, evaluation/sliding_window_flops.py, for computing the number of FLOPs. The command to execute it is:

python -m evaluation.sliding_window_flops --setting config/settings.yaml \
    --save_dir <PATH_TO_DIR> --num_events 1 --num_samples 500 \
    --representation histogram --use_multiprocessing

The script outputs the number of FLOPs for each of the four processing modes (Asyn Sparse Conv, Asyn Conv, Sparse Conv, Standard Conv).
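
For intuition about why the sparse modes need far fewer FLOPs, here is a back-of-the-envelope model of a single convolution layer; this is an illustrative sketch, not the accounting implemented by the script:

# Illustrative FLOP model for a single k x k convolution layer.
# A dense layer computes every output pixel; a sparse layer only
# computes at active sites (pixels touched by events). All numbers
# below are hypothetical.

def dense_conv_flops(h, w, c_in, c_out, k=3):
    # one multiply-accumulate counted as 2 FLOPs
    return 2 * h * w * c_in * c_out * k * k

def sparse_conv_flops(n_active, c_in, c_out, k=3):
    # only active sites contribute
    return 2 * n_active * c_in * c_out * k * k

print(dense_conv_flops(180, 240, 2, 16))    # dense: every pixel
print(sparse_conv_flops(2000, 2, 16))       # sparse: active sites only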

rpg_asynet's People

Contributors

danielgehrig18, messikommernico

rpg_asynet's Issues

Error compiling async_sparse_py

Sorry for bothering you. I ran into a problem while compiling the async_sparse_py module during installation.

While compiling async_sparse_py using
pip install async_sparse_py/
it fails while processing conv2d.cpp.

The output log is as follows:
/home/manchanghai/dvs/rpg_asynet/async_sparse_py/src/conv2d.cpp:216:70: error: converting to ‘AsynSparseConvolution2D::ReturnType {aka std::tuple<Eigen::Matrix<int, -1, -1, 0, -1, -1>, Eigen::Matrix<float, -1, -1, 0, -1, -1>, Eigen::Matrix<AsynSparseConvolution2D::Site, -1, 1, 0, -1, 1> >}’ from initializer list would use explicit constructor ‘constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {Eigen::Matrix<int, -1, -1, 0, -1, -1>&, Eigen::Matrix<float, -1, -1, 0, -1, -1>&, Eigen::Ref<Eigen::Matrix<AsynSparseConvolution2D::Site, -1, 1, 0, -1, 1>, 0, Eigen::InnerStride<1> >&}; <template-parameter-2-2> = void; _Elements = {Eigen::Matrix<int, -1, -1, 0, -1, -1>, Eigen::Matrix<float, -1, -1, 0, -1, -1>, Eigen::Matrix<AsynSparseConvolution2D::Site, -1, 1, 0, -1, 1>}]’ return {new_update_location, output_feature_map, active_sites_map};

pybind11 is installed, and the Eigen git repo is cloned at include/eigen. The Eigen version used is 3.3.9.
The gcc version is 5.4.0, and the system is Ubuntu 16.04.

It seems to be caused by the type casting to constexpr or something, but I'm not very sure about that because I'm not familiar with C++ at all (sorry for that).

Please help me check why this error occurs; meanwhile, I will try compiling on a different system platform.

Thank you in advance.
Best regards.

Error while running Object Detection

Hi, when I run the object detection model I get an error that I'm not able to solve.
I am working in a conda environment that works well for the classification task; it was created following these steps:

  • conda install -c anaconda pytorch=1.4
  • conda install -c anaconda pillow
  • bash develop.sh (in the folder of SparseConv Net)
  • pip install -r requirements.txt (in the folder of rpg_asynet)
  • pip install event_representation_tool/
  • conda install -c conda-forge pybind11
  • conda install -c anaconda cmake
  • pip install async_sparse_py

Once my environment is created and in use, I obtain this:

Name Version Build Channel

_libgcc_mutex 0.1 main
_pytorch_select 0.2 gpu_0 anaconda
absl-py 0.9.0 pypi_0 pypi
async-sparse 0.0.1 pypi_0 pypi
backcall 0.1.0 pypi_0 pypi
blas 1.0 mkl anaconda
bzip2 1.0.8 h7b6447c_0 anaconda
ca-certificates 2020.10.14 0 anaconda
cachetools 4.0.0 pypi_0 pypi
certifi 2019.11.28 pypi_0 pypi
cffi 1.14.3 py37he30daa8_0 anaconda
chardet 3.0.4 pypi_0 pypi
cmake 3.18.2 ha30ef3c_0 anaconda
cudatoolkit 10.1.243 h6bb024c_0 anaconda
cudnn 7.6.5 cuda10.1_0 anaconda
cycler 0.10.0 pypi_0 pypi
data 0.4 pypi_0 pypi
decorator 4.4.1 pypi_0 pypi
event-representations 0.0.1 pypi_0 pypi
expat 2.2.10 he6710b0_2 anaconda
freetype 2.10.4 h5ab3b9f_0 anaconda
funcsigs 1.0.2 pypi_0 pypi
future 0.18.2 pypi_0 pypi
google-auth 1.11.0 pypi_0 pypi
google-auth-oauthlib 0.4.1 pypi_0 pypi
grpcio 1.26.0 pypi_0 pypi
idna 2.8 pypi_0 pypi
intel-openmp 2020.2 254 anaconda
ipython 7.13.0 pypi_0 pypi
ipython-genutils 0.2.0 pypi_0 pypi
jedi 0.16.0 pypi_0 pypi
jpeg 9b habf39ab_1 anaconda
kiwisolver 1.1.0 pypi_0 pypi
krb5 1.18.2 h173b8e3_0 anaconda
lcms2 2.11 h396b838_0 anaconda
ld_impl_linux-64 2.33.1 h53a641e_7
libcurl 7.71.1 h20c2e04_1 anaconda
libedit 3.1.20191231 h14c3975_1 anaconda
libffi 3.3 he6710b0_2
libgcc-ng 9.1.0 hdf63c60_0
libpng 1.6.37 hbc83047_0 anaconda
libssh2 1.9.0 h1ba5d50_1 anaconda
libstdcxx-ng 9.1.0 hdf63c60_0
libtiff 4.2.0 h85742a9_0
libuv 1.40.0 h7b6447c_0 anaconda
libwebp-base 1.1.0 h7b6447c_3 anaconda
line-profiler 3.0.2 pypi_0 pypi
lz4-c 1.9.2 heb0550a_3 anaconda
markdown 3.2 pypi_0 pypi
matplotlib 3.1.3 pypi_0 pypi
mkl 2019.4 243 anaconda
mkl-service 2.3.0 py37he904b0f_0 anaconda
mkl_fft 1.2.0 py37h23d657b_0 anaconda
mkl_random 1.1.0 py37hd6b4f25_0 anaconda
ncurses 6.2 he6710b0_1
ninja 1.10.1 py37hfd86e86_0 anaconda
numpy 1.18.1 pypi_0 pypi
oauthlib 3.1.0 pypi_0 pypi
olefile 0.46 py37_0 anaconda
opencv-python 4.2.0.32 pypi_0 pypi
openssl 1.1.1j h27cfd23_0
parso 0.6.2 pypi_0 pypi
pexpect 4.8.0 pypi_0 pypi
pickleshare 0.7.5 pypi_0 pypi
pillow 7.1.0 pypi_0 pypi
pip 21.0.1 py37h06a4308_0
prompt-toolkit 3.0.3 pypi_0 pypi
protobuf 3.11.3 pypi_0 pypi
ptyprocess 0.6.0 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pybind11 2.4.3 pypi_0 pypi
pybind11-global 2.6.1 pypi_0 pypi
pycparser 2.20 py_2 anaconda
pygments 2.5.2 pypi_0 pypi
pyparsing 2.4.6 pypi_0 pypi
python 3.7.10 hdb3f193_0
python-dateutil 2.8.1 pypi_0 pypi
python_abi 3.7 1_cp37m conda-forge
pytorch 1.4.0 cuda101py37h02f0884_0 anaconda
pytz 2019.3 pypi_0 pypi
pyyaml 5.3 pypi_0 pypi
readline 8.1 h27cfd23_0
requests 2.22.0 pypi_0 pypi
requests-oauthlib 1.3.0 pypi_0 pypi
rhash 1.4.0 h1ba5d50_0 anaconda
rsa 4.0 pypi_0 pypi
setuptools 52.0.0 py37h06a4308_0
shutilwhich 1.1.0 pypi_0 pypi
six 1.14.0 pypi_0 pypi
snakeviz 2.0.1 pypi_0 pypi
sparseconvnet 0.2 dev_0
sqlite 3.35.2 hdfb4753_0
tempdir 0.7.1 pypi_0 pypi
tensorboard 2.1.0 pypi_0 pypi
tk 8.6.10 hbc83047_0
torchvision 0.5.0 pypi_0 pypi
tornado 6.0.3 pypi_0 pypi
tqdm 4.42.1 pypi_0 pypi
traitlets 4.3.3 pypi_0 pypi
urllib3 1.25.8 pypi_0 pypi
wcwidth 0.1.8 pypi_0 pypi
werkzeug 1.0.0 pypi_0 pypi
wheel 0.36.2 pyhd3eb1b0_0
xz 5.2.5 h7b6447c_0
zlib 1.2.11 h7b6447c_3
zstd 1.4.5 h9ceee32_0 anaconda

Finally, when I run train.py with the NCaltech101_ObjectDetection dataset and the fb_sparse_object_det model, I obtain the following error:

op = torch._C._jit_get_operation(qualified_op_name)

RuntimeError: No such operator torchvision::nms

Questions about Prophesee dataset

Hi, I found that the Prophesee Gen1 Automotive dataset was updated recently, so its file structure may differ from the dataset used before.
Could anyone provide a download link that is compatible with the released code?
Many thanks!

Training times

Hi

I have begun experimenting with your framework, and as a starting point I am just trying out your predefined models on the NCaltech and Prophesee datasets. I am using an RTX 2060 GPU for training, but I have found that using the GPU over the CPU does not yield a significant speed boost. For fb_sparse_vgg trained on NCaltech, each epoch takes approximately 3 minutes, with similar times on the CPU. If training on the entire Prophesee dataset, reaching 1000 epochs would take many weeks. I am not too experienced with PyTorch and CUDA, so I just wanted to know whether this is normal behavior for your model.

For how long did you train your model, and how much time did an epoch take?

How to compute FLOPs

Dear authors,

Thanks for sharing this brilliant work. I am curious: how do you compute the FLOPs of this network?

This network contains sparse convolutions, so is there any difference from the normal FLOPs computation?

I mean, people often use https://github.com/sovrasov/flops-counter.pytorch or thop.profile to compute FLOPs.

Best,
Iris

How to use the model to detect?

Hello! Thank you for your excellent work!
I have obtained the trained fb_sparse_object_det model by following the tutorial.
But I don't know how to use it (model_step_149.pth) to run detection on data and get the visual bounding boxes.
Could you elaborate on the specific implementation details?
Thanks in advance!

Performances of SSC and Asynet are not identical?

Thanks for your work and for sharing it.

I notice that the performances of SSC and Asynet (labeled Ours) reported in the paper (Table 3, Table 4) are not the same. What is the reason, given that Asynet should output identical results?

Failure to run "pip install async_sparse_py/"

ERROR: Failed building wheel for async-sparse

subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'Release', '--', '-j']' returned non-zero exit status 2.
----------------------------------------
ERROR: Command errored out with exit status 1: /home/auto-lwj/anaconda3/envs/mmdet/bin/python3.7 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-gyb27m4s/setup.py'"'"'; file='"'"'/tmp/pip-req-build-gyb27m4s/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-wdbvum9a/install-record.txt --single-version-externally-managed --compile --install-headers /home/auto-lwj/anaconda3/envs/mmdet/include/python3.7m/async-sparse Check the logs for full command output.

An error when training with GPU

When I use GPU 1 to train fb_sparse_vgg, it raises an error:

File "/storage/rpg_asynet-master/training/trainer.py", line 470, in trainEpoch
   model_output = self.model([locations, features, histogram.shape[0]])
 File "/home/anaconda3/envs/asynet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
   result = self.forward(*input, **kwargs)
 File "/storage/rpg_asynet-master/models/facebook_sparse_vgg.py", line 42, in forward
   x = self.sparseModel(x)
 File "/home/anaconda3/envs/asynet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
   result = self.forward(*input, **kwargs)
 File "/home/anaconda3/envs/asynet/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
   input = module(input)
 File "/home/anaconda3/envs/asynet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
   result = self.forward(*input, **kwargs)
 File "/storage/rpg_asynet-master/SparseConvNet/sparseconvnet/batchNormalization.py", line 59, in forward
   self.leakiness)
 File "/storage/rpg_asynet-master/SparseConvNet/sparseconvnet/batchNormalization.py", line 111, in forward
   saveInvStd = running_mean.clone().resize_(ctx.nPlanes)
RuntimeError: CUDA error: an illegal memory access was encountered

But GPU 0 is fine, and training dense_vgg on GPU 1 is also fine.
I don't know where the problem is.

How to train SSC with multiple GPUs?

The released code can only use one GPU or the CPU. Can I train the sparse net using multiple GPUs?
Can I achieve multi-GPU training by changing the gpu_device parameter in the yaml file?
Or is there some easier way to fulfill my need?

Failed to run "pip install async_sparse_py/"

Dear author,

I met this error when I tried to install async_sparse_py/:

Processing ./async_sparse_py
DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.
pip 21.3 will remove support for this functionality. You can find discussion regarding this at pypa/pip#7555.
Building wheels for collected packages: async-sparse
Building wheel for async-sparse (setup.py) ... error
ERROR: Failed building wheel for async-sparse
Running setup.py clean for async-sparse
Failed to build async-sparse
Installing collected packages: async-sparse
Running setup.py install for async-sparse ... error
ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-9o5kx_9y/setup.py'"'"'; file='"'"'/tmp/pip-req-build-9o5kx_9y/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-ib5zgp17/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.7/async-sparse Check the logs for full command output.

May you know how to solve this problem?

My gcc version is 7.5.0. I have installed eigen 3.4 using:
git clone https://gitlab.com/libeigen/eigen.git --branch 3.4-rc1
successfully.

Thank you very much.

Can asynet be trained?

I notice that the choices for the model parameter in settings.yaml don't include asynet.

Does this mean we can only train the network using SSC and test using asynet?

Unable to make dataset for N-Cars

Hi,
I am trying to generate the N-Cars dataset using your method, but I am unable to find the "is_cars.txt" and "events.txt" files in the folder of the original dataset that I downloaded from the Prophesee website. Can you please help me figure out how you made these files?
Thanks

Where does the accuracy come from?

I trained on NCaltech101 for 1500 epochs and the accuracy is 0.72 on the validation dataset and 0.70 on the test dataset.
Which dataset are the results in the table obtained from?
I want to know how I can get the accuracy of 0.745 reported in Table 1.

Help with running time

Hi!

I am testing the running time of the asynchronous convolution.

My testing method is to use time.time() in Python to measure the elapsed time of asyn_model.forward in unittests/sparse_VGG_test.py.

It shows that asyn_model takes about 200-300 ms to process a single event on an i7-9700K, while fb_model takes only about 200 ms to process 1k events. This is strange, since async should be faster... maybe you know why?
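
(For reference, a self-contained version of the timing harness described above; best-of-N with time.perf_counter() reduces jitter. The model and inputs in the usage line are placeholders, not code from this repo.)

import time

def time_call(fn, *args, repeats=10):
    """Return the best-of-N wall-clock time of fn(*args), in milliseconds."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, (time.perf_counter() - t0) * 1e3)
    return best

# Hypothetical usage, with placeholder names:
# print(time_call(asyn_model.forward, event_input))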

How to preprocess DVSGesture

Hello. I have downloaded DVSGesture. However, I cannot directly run the training code on the raw dataset. Are there any preprocessing steps?

Histogram representation updating asynchronously?

Hi,

Great work by the way. Please correct me if I am wrong.

I am currently trying to understand the work with the N-Cars dataset.

  1. The histogram representation isn't using the timestamps from the events? (See the sketch below.)

Thank you.
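
(For reference, a minimal sketch of a two-channel event histogram; it bins only x, y, and polarity, so timestamps are indeed unused. The (x, y, t, p) event layout and sensor size are assumptions, not taken from this repo.)

import numpy as np

# Sketch: count events per pixel per polarity. Assumes events is an
# (N, 4) array with columns (x, y, t, p); the timestamp column is
# never read.
def event_histogram(events, height=180, width=240):
    hist = np.zeros((height, width, 2), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = (events[:, 3] > 0).astype(int)  # polarity -> channel index
    np.add.at(hist, (y, x, p), 1)
    return hist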

How can we make asynet trainable?

What should I do if I want to train asynet?
I wrote training code following sparse_VGG_test.py, but the following error is reported.

Traceback (most recent call last):
  File "train.py", line 45, in <module>
    main()
  File "train.py", line 28, in main
    trainer = AsynSparseVGGModel(settings)
  File "/storage/rpg_asynet-master/training/trainer.py", line 53, in __init__
    self.optimizer = optim.Adam(filter(lambda p: p.requires_grad, self.model.parameters()),
AttributeError: 'asynSparseVGG' object has no attribute 'parameters'

AsynSparseVGGModel is a VGG model built by asyn_sparse_vgg.py

Bug

When I was debugging your code, I found an error: 'all' is not a member of 'Eigen'. 'all' seems to be a nonexistent attribute. Could you please tell me how to solve this problem? I'm looking forward to your reply. My email address: [email protected]

How to test a whole dataset?

I notice that sparse_VGG_test.py only tests one image from NCaltech101. How do I test the accuracy on the whole dataset?

Docker

How about a Docker image for these configurations?
I have failed several times now.

Help with using asynet for object detection

Hi!

This is probably not an issue but more a cry for help. I have trained an object detection model (facebook_sparse_object_det) on the Prophesee dataset, and I am able to use it for detecting bounding boxes around cars and pedestrians using yolodetect. So far, so good.

The next thing I am working on is trying out your asynchronous implementation of this model. The issue is that I cannot for the life of me get the same results using the asynchronous model. It produces a LOT of incorrect bounding boxes. My first question is whether this layer list looks correct and whether it correctly corresponds to the predefined facebook_sparse_object_det model:

    layer_list =  [['C', self.nr_input_channels, 16], ['BNRelu'], ['C', 16, 16],   ['BNRelu'], ['MP'],
                   ['C', 16,   32], ['BNRelu'], ['C', 32, 32],   ['BNRelu'], ['MP'],
                   ['C', 32,   64], ['BNRelu'], ['C', 64, 64],   ['BNRelu'], ['MP'],
                   ['C', 64,  128], ['BNRelu'], ['C', 128, 128], ['BNRelu'], ['MP'],
                   ['C', 128, 256], ['BNRelu'], ['C', 256, 256], ['BNRelu'],
                   ['ClassicC', 256, 256, 3, 2], ['ClassicBNRelu'],
                   ['ClassicFC', 256*self.output_map, 1024],  ['ClassicFC', 1024, 576]]

self.output_map = 6*8 and self.nr_input_channels = 2.
I am able to setWeightsEqual without issue, but the results are simply not correct. When I run the unit test sparse_VGG_test.py using a facebook_sparse_vgg model trained on N-Caltech101, the asynchronous implementation yields correct results. So I imagine it has to be something with the layer list. If you have a script where you perform object detection using the asynchronous model, it would be awesome if I could have a look.

Thank you in advance.
Best regards
Kristoffer
