tensorlayer / hyperpose

Library for Fast and Flexible Human Pose Estimation

Home Page: https://hyperpose.readthedocs.io

Python 72.58% Shell 0.42% CMake 1.37% C++ 25.22% Dockerfile 0.42%
tensorlayer tensorflow openpose pose-estimation computer-vision distributed-training tensorrt mobilenet neural-networks

hyperpose's Introduction


Please check out TensorLayerX 🔥🔥🔥

TensorLayer is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides an extensive collection of customizable neural layers for building advanced AI models quickly; building on this, the community has open-sourced a large number of tutorials and applications. TensorLayer was awarded the 2017 Best Open Source Software award by the ACM Multimedia Society. This project can also be found at OpenI and Gitee.

News

  • 🔥 TensorLayerX is a unified deep learning and reinforcement learning framework for all hardware, backends and operating systems. The current version supports TensorFlow, PyTorch, MindSpore, PaddlePaddle, OneFlow and Jittor as backends, allowing users to run the same code on different hardware such as Nvidia GPUs and Huawei Ascend.
  • TensorLayer is now in OpenI
  • Reinforcement Learning Zoo: Low-level APIs for professional usage, High-level APIs for simple usage, and a corresponding Springer textbook
  • Sipeed Maxi-EMC: Run TensorLayer models on the low-cost AI chip (e.g., K210) (Alpha Version)

Design Features

TensorLayer is a new deep learning library designed with simplicity, flexibility and high-performance in mind.

  • Simplicity : TensorLayer has a high-level layer/model abstraction that is effortless to learn. You can learn how deep learning can benefit your AI tasks in minutes through the extensive examples.
  • Flexibility : TensorLayer APIs are transparent and flexible, inspired by the emerging PyTorch library. Compared to the Keras abstraction, TensorLayer makes it much easier to build and train complex AI models.
  • Zero-cost Abstraction : Though simple to use, TensorLayer does not require you to make any compromise in the performance of TensorFlow (Check the following benchmark section for more details).

TensorLayer stands at a unique spot in the TensorFlow wrappers. Other wrappers like Keras and TFLearn hide many powerful features of TensorFlow and provide little support for writing custom AI models. Inspired by PyTorch, TensorLayer APIs are simple, flexible and Pythonic, making it easy to learn while being flexible enough to cope with complex AI tasks. TensorLayer has a fast-growing community. It has been used by researchers and engineers all over the world, including those from Peking University, Imperial College London, UC Berkeley, Carnegie Mellon University, Stanford University, and companies like Google, Microsoft, Alibaba, Tencent, Xiaomi, and Bloomberg.
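As a small illustration of this layer/model abstraction, here is a minimal sketch (assuming the TensorLayer 2.x API; not an example from this README) of defining a tiny network and running a forward pass eagerly:

    import numpy as np
    import tensorflow as tf
    import tensorlayer as tl

    # define a small MLP with the high-level layer/model abstraction
    ni = tl.layers.Input([None, 784])
    nn = tl.layers.Dense(n_units=256, act=tf.nn.relu)(ni)
    nn = tl.layers.Dense(n_units=10)(nn)
    net = tl.models.Model(inputs=ni, outputs=nn)

    # run a forward pass eagerly on random data
    net.eval()
    out = net(np.random.rand(4, 784).astype(np.float32))
    print(out.shape)  # (4, 10)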

Multilingual Documents

TensorLayer has extensive documentation for both beginners and professionals. The documentation is available in both English and Chinese.

English Documentation Chinese Documentation Chinese Book

If you want to try the experimental features on the master branch, you can find the latest documentation here.

Extensive Examples

You can find a large collection of examples that use TensorLayer here and in the following space:

Getting Started

TensorLayer 2.0 relies on TensorFlow, numpy, and others. To use GPUs, CUDA and cuDNN are required.

Install TensorFlow:

pip3 install tensorflow-gpu==2.0.0-rc1 # TensorFlow GPU (version 2.0 RC1)
pip3 install tensorflow # CPU version

Install the stable release of TensorLayer:

pip3 install tensorlayer

Install the unstable development version of TensorLayer:

pip3 install git+https://github.com/tensorlayer/tensorlayer.git

If you want to install the additional dependencies, you can also run

pip3 install --upgrade tensorlayer[all]              # all additional dependencies
pip3 install --upgrade tensorlayer[extra]            # only the `extra` dependencies
pip3 install --upgrade tensorlayer[contrib_loggers]  # only the `contrib_loggers` dependencies

If you are a TensorFlow 1.X user, you can use TensorLayer 1.11.0:

# For last stable version of TensorLayer 1.X
pip3 install --upgrade tensorlayer==1.11.0
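After installing, a quick sanity check (a sketch, not part of the official instructions) is to import both packages and confirm that a GPU is visible:

    import tensorflow as tf
    import tensorlayer as tl

    print("TensorFlow:", tf.__version__)
    print("TensorLayer:", tl.__version__)
    print("GPU available:", tf.test.is_gpu_available())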

Performance Benchmark

The following table shows the training speeds of VGG16 using TensorLayer and native TensorFlow on a TITAN Xp.

Mode       Lib              Data Format   Max GPU Memory Usage (MB)  Max CPU Memory Usage (MB)  Avg CPU Memory Usage (MB)  Runtime (sec)
AutoGraph  TensorFlow 2.0   channel last  11833                      2161                       2136                       74
           TensorLayer 2.0  channel last  11833                      2187                       2169                       76
Graph      Keras            channel last  8677                       2580                       2576                       101
Eager      TensorFlow 2.0   channel last  8723                       2052                       2024                       97
           TensorLayer 2.0  channel last  8723                       2010                       2007                       95

Getting Involved

Please read the Contributor Guideline before submitting your PRs.

We suggest users report bugs using GitHub issues. Users can also discuss how to use TensorLayer in the following Slack channel.



Citing TensorLayer

If you find TensorLayer useful for your project, please cite the following papers:

@article{tensorlayer2017,
    author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
    journal = {ACM Multimedia},
    title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
    url     = {http://tensorlayer.org},
    year    = {2017}
}

@inproceedings{tensorlayer2021,
  title={Tensorlayer 3.0: A Deep Learning Library Compatible With Multiple Backends},
  author={Lai, Cheng and Han, Jiarong and Dong, Hao},
  booktitle={2021 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
  pages={1--3},
  year={2021},
  organization={IEEE}
}

hyperpose's People

Contributors

boldjoel, dependabot[bot], ganler, gyx-one, jingqingz, jovialio, korejan, lgarithm, luomai, mandeman, neolithera, qq160816, sukhodolin, sunghopark2010, syoyo, wagamamaz, zsdonghao


hyperpose's Issues

Missing train_mode Config Params?

Hi.

Are some params missing from config.py? I cannot train because the train_mode param is missing.

Thanks for your reply!

Traceback (most recent call last):
File "train.py", line 197, in
if config.TRAIN.train_mode == 'placeholder':
AttributeError: 'EasyDict' object has no attribute 'train_mode'

Training very CPU intensive

First off, awesome library, some great work here!

Just a question/comment: I'm currently training the hao28_experimental model, and I've found that training is very CPU intensive, to the point that my GPUs sit idle much of the time while all of my CPU cores are under constant heavy load.
Is this due to data augmentation? Would you recommend disabling some augmentation steps to remove this CPU bottleneck?

Tensorflow Python loading error

tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 0 in both shapes must be equal, but are 7 and 3. Shapes are [7,7,128,128] and [3,3,128,128]. for 'Assig
n' (op: 'Assign') with input shapes: [7,7,128,128], [3,3,128,128].
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "example-inference-1.py", line 64, in <module>
    measure(main)
  File "/home/vidursatija/openpose-plus/openpose_plus/inference/common.py", line 198, in measure
    result = f()
  File "example-inference-1.py", line 60, in main
    inference(args.base_model, args.path_to_npz, args.data_format, image_files, args.plot)
  File "example-inference-1.py", line 25, in inference
    'create TfPoseEstimator')
  File "/home/vidursatija/openpose-plus/openpose_plus/inference/common.py", line 198, in measure
    result = f()
  File "example-inference-1.py", line 24, in <lambda>
    e = measure(lambda: TfPoseEstimator(path_to_npz, model_func, target_size=(width, height), data_format=data_format),
  File "/home/vidursatija/openpose-plus/openpose_plus/inference/estimator.py", line 25, in __init__
    self._warm_up(graph_path)
  File "/home/vidursatija/openpose-plus/openpose_plus/inference/estimator.py", line 30, in _warm_up
    tl.files.load_and_assign_npz_dict(graph_path, self.persistent_sess)
  File "/usr/local/lib/python3.5/dist-packages/tensorlayer/files/utils.py", line 1781, in load_and_assign_npz_dict
    ops.append(varlist[0].assign(params[key]))
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variables.py", line 645, in assign
    return state_ops.assign(self._variable, value, use_locking=use_locking)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/state_ops.py", line 216, in assign
    validate_shape=validate_shape)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 60, in assign
    use_locking=use_locking, name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 454, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3155, in create_op
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1731, in __init__
    control_input_ops)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1579, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimension 0 in both shapes must be equal, but are 7 and 3. Shapes are [7,7,128,128] and [3,3,128,128]. for 'Assign' (op: 'Assign') with input shapes: [7,7,128,128]
, [3,3,128,128].

I'm getting the above error while running the following command:

python3 example-inference-1.py --path-to-npz hao28-pose600000.npz --images pic3.jpg --base-model vgg

The model was downloaded from tensorlayer's pretrained-models repo.

Transfer Learning

I would like to retrain the model on different data with fewer points (head, upper body, lower body, leg) and on different kinds of pictures. I want to make it as fast as possible (planning to use the MobileNet backbone).
I was planning to drop the last layer of the model and create a new one with 4 outputs, and then retrain this layer and freeze the training on the others.

However, I have no idea how I should use my labeled data and train on it.
Can someone help?

PS: Will this program support real time detection anytime soon?
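For what it's worth, a minimal TF1/TensorLayer-style sketch of the freezing plan described above (the scope name 'model/new_head' is hypothetical; lr_v, total_loss and global_step are the variables used in train.py) would train only the new output layer and leave the pretrained weights untouched:

    import tensorflow as tf
    import tensorlayer as tl

    # collect only the variables of the newly added output head (hypothetical scope name)
    head_vars = tl.layers.get_variables_with_name('model/new_head', train_only=True, printable=True)

    opt = tf.train.MomentumOptimizer(lr_v, 0.9)
    # minimizing w.r.t. var_list only updates the head; all other layers stay frozen
    train_op = opt.minimize(total_loss, var_list=head_vars, global_step=global_step)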

example-inference-1 not running. Pretrained model error

While running
python3 examples/example-inference-1.py --path-to-npz models/pretrained-models/models/openpose-plus/hao28-pose600000.npz --images data/mscoco2017/val2017/ --limit 1

The following error is encountered:
File "/usr/local/lib/python3.5/dist-packages/numpy/lib/npyio.py", line 440, in load
return pickle.load(fid, **pickle_kwargs)
_pickle.UnpicklingError: invalid load key, 'v'.

run example-inference-2.py with frozen model

Thank you for your great work.
I froze the model with the script freeze-graph.sh. But when I ran example-inference-2.py with the frozen model, I got the error:
KeyError: "The name 'import/upsample_size' refers to an Operation not in the graph."
When I changed parameter_names to

parameter_names = [
            'image',
            'outputs/conf',
            'outputs/paf',
        ]

I got this error:

File "./examples/example-inference-2.py", line 60, in <lambda>
    humans, heatMap, pafMap = measure(lambda: e.inference(image), 'inference')
  File "./openpose_plus/inference/estimator.py", line 42, in inference
    humans, heatmap_up, pafmap_up = self.post_processor(heatmap[0], pafmap[0])
AttributeError: 'TfPoseestimatorLoader' object has no attribute 'post_processor'
--------------------------------------------------------------------------------
tot (s)      count        mean (ms)    name
--------------------------------------------------------------------------------
1.355879     1            1355.879307  create TfPoseestimatorLoader

Train with Own custom dataset

Thank you for your hard work. I have the following dataset structure that I would like to train on:

Data folder:
Images:
--Image1.jpg
--image1.xml
--Image2.jpg
--Image2.xml

Should I convert my dataset to COCO format (1. Object Detection, 2. Keypoint Detection, 3. Image Captioning) and then train? If yes, is there any script that helps convert my dataset to these three formats?

Thank you for your help
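If you do go the COCO route, this is a sketch of the minimal keypoint-annotation structure (field names follow the public COCO keypoints format; the file names and values below are placeholders, not a repository script):

    coco_keypoints = {
        "images": [
            {"id": 1, "file_name": "Image1.jpg", "width": 640, "height": 480},
        ],
        "annotations": [
            {
                "id": 1,
                "image_id": 1,
                "category_id": 1,
                # 17 keypoints flattened as [x1, y1, v1, x2, y2, v2, ...], v = 0/1/2 visibility
                "keypoints": [0, 0, 0] * 17,
                "num_keypoints": 0,
                "bbox": [100, 50, 200, 300],  # [x, y, width, height]
                "area": 60000,
                "iscrowd": 0,
            },
        ],
        "categories": [
            {"id": 1, "name": "person",
             "keypoints": ["nose", "left_eye", "right_eye"],  # ...17 joint names in total
             "skeleton": []},
        ],
    }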

Saving the trained models in pb format

Hello,

First of all, thank you for your hard work. I've been able to train and save some networks in the npz format, but I would like to export the models to .pb so they are easier to deploy. I've been working on freezing the graph and saving it to a pb file through tf.graph_util.convert_variables_to_constants and tf.graph_util.remove_training_nodes, but I'm unsure what the names of the output nodes are for the vggtiny model.

By the way, do you plan to provide an easy way to save the models in pb using tensorlayer?
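For reference, a minimal TF1-style sketch of the freezing step (the node names in the usage comment are placeholders taken from the vgg model's layer log elsewhere on this page; the vggtiny names will differ):

    import tensorflow as tf

    def freeze_to_pb(sess, output_node_names, pb_path):
        # convert variables to constants and strip training-only ops
        graph_def = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph.as_graph_def(), output_node_names)
        graph_def = tf.graph_util.remove_training_nodes(graph_def)
        with tf.gfile.GFile(pb_path, 'wb') as f:
            f.write(graph_def.SerializeToString())

    # e.g. freeze_to_pb(sess, ['model/stage6/branch1/conf', 'model/stage6/branch2/pafs'], 'pose.pb')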

Problem while running live-camera.sh in build_with_cmake

Building CXX object CMakeFiles/helpers.dir/examples/vis.cpp.o
[ 44%] Building CXX object CMakeFiles/helpers.dir/examples/input.cpp.o
In file included from /home/firelark/group1/openpose-plus/examples/vis.cpp:1:0:
/home/firelark/group1/openpose-plus/examples/vis.h:3:30: fatal error: opencv2/opencv.hpp: No such file or directory
compilation terminated.
make[3]: Leaving directory '/home/firelark/group1/openpose-plus/cmake-build/Linux'
CMakeFiles/helpers.dir/build.make:86: recipe for target 'CMakeFiles/helpers.dir/examples/vis.cpp.o' failed
make[3]: *** [CMakeFiles/helpers.dir/examples/vis.cpp.o] Error 1
make[3]: *** Waiting for unfinished jobs....
[ 52%] Built target libstdtracer
make[3]: Entering directory '/home/firelark/group1/openpose-plus/cmake-build/Linux'
Scanning dependencies of target tracer
make[3]: Leaving directory '/home/firelark/group1/openpose-plus/cmake-build/Linux'
make[3]: Entering directory '/home/firelark/group1/openpose-plus/cmake-build/Linux'
/home/firelark/group1/openpose-plus/examples/input.cpp:4:30: fatal error: opencv2/opencv.hpp: No such file or directory
compilation terminated.
CMakeFiles/helpers.dir/build.make:62: recipe for target 'CMakeFiles/helpers.dir/examples/input.cpp.o' failed
make[3]: *** [CMakeFiles/helpers.dir/examples/input.cpp.o] Error 1
make[3]: Leaving directory '/home/firelark/group1/openpose-plus/cmake-build/Linux'
CMakeFiles/Makefile2:334: recipe for target 'CMakeFiles/helpers.dir/all' failed
make[2]: *** [CMakeFiles/helpers.dir/all] Error 2
make[2]: *** Waiting for unfinished jobs....
[ 55%] Linking CXX static library libtracer.a
make[3]: Leaving directory '/home/firelark/group1/openpose-plus/cmake-build/Linux'
[ 55%] Built target tracer
make[2]: Leaving directory '/home/firelark/group1/openpose-plus/cmake-build/Linux'
Makefile:160: recipe for target 'all' failed
make[1]: *** [all] Error 2
make[1]: Leaving directory '/home/firelark/group1/openpose-plus/cmake-build/Linux'
Makefile:12: recipe for target 'build_with_cmake' failed
make: *** [build_with_cmake] Error 2


Keypoint location is very weird

In vis.cpp's draw_function, I print the part_idx and its point. For COCO_val2014_000000000459.jpg, I found that the predicted points 0, 2, 3, 5, 6, 7 are correct, while 14, 15, 16, 17 are not right according to the COCO API.

[TODO] Speed up OpenPose inferencing

  • TensorRT
  • MobileNet
  • Batch inferencing
  • Streaming pipeline
  • Fast post-processing
    • max pooling with cudnn
    • resize with npp
  • Compiler optimization
    • build with -ffast-math
    • build with openmp enabled

Error in running python train.py

greetings,
Hello, this is a great openpose implementation in tensorflow. I am still new to tensorlayer, and I got this error when running 'python train.py'.
File "train.py", line 445, in
dataset = tf.data.Dataset().from_generator(generator, output_types=(tf.string, tf.string))
TypeError: Can't instantiate abstract class Dataset with abstract methods _as_variant_tensor, output_classes, output_shapes, output_types
Do you have any idea about this error?
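For reference, from_generator is a classmethod of tf.data.Dataset, so a likely fix (a sketch against recent TF 1.x, not a confirmed patch) is to call it on the class instead of instantiating Dataset:

    import tensorflow as tf

    def generator():
        # stand-in for the generator defined in train.py
        yield (b'image_path', b'annotation_json')

    # tf.data.Dataset is abstract and must not be instantiated with ()
    dataset = tf.data.Dataset.from_generator(generator, output_types=(tf.string, tf.string))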

tx2 testing

Hi sir!
I have successfully compiled the project on a TX2. The test image is 384x256 and contains two people, and the script I run is run-uff-cpp.sh. I have disabled the drawing and saving functions, but the speed is only 4 FPS, which does not reach the 13 FPS mentioned in the project introduction. Is there any problem with my setup? The results and the pictures are in the attachment.
The function that disables the drawing is shown below; I changed true to false:
""
bool draw_humans = true;
if (draw_humans) {
    cv::Mat resized_image(cv::Size(width, height), CV_8UC(3), p.hwc_ptr);
    handle(resized_image, humans);
}

""

[Tips] If NaN happen

    1. Use tf.contrib.layers.xavier_initializer()
    2. Use gradient clipping

Change the following

        opt = tf.train.MomentumOptimizer(lr_v, 0.9)
        train_op = opt.minimize(total_loss, global_step=global_step)

To:

        tvars = tl.layers.get_variables_with_name('model', True, True)
        grads, _ = tf.clip_by_global_norm(tf.gradients(total_loss, tvars), 100)
        optimizer = tf.train.MomentumOptimizer(lr_v, 0.9)
        train_op = optimizer.apply_gradients(zip(grads, tvars), global_step=global_step)

Implement PAF process in C++

Description

Given the network output tensors S'(h', w', J) and L'(h', w', C, 2), generate the list of humans.
Where

  • (h', w') is the size of feature map, (8h', 8w') = (h, w) if the base model has 3 2x2 pooling layers
  • J is the number of joint types (key points); in the COCO case, J = 19, with 18 channels for actual parts and 1 for the background
  • C is the number of connection types (limbs); in the COCO case, C = 19, with 17 actual limbs and 2 virtual limbs
  • S'[:, :, j] is the j-th confidence map
  • L'[:, :, c, :] is the c-th part affinity field

Test data:

network-output.npz.gz

Algorithm:

(following https://github.com/ildoonet/tf-pose-estimation/blob/master/tf_pose/pafprocess/pafprocess.cpp)

(S', L') -> (S, L) -> (S, P, L) -> [[Connection]] -> [Human]

  • upsample S', L' to original size S, L
  • determine the peaks P of S (a NumPy sketch of this step follows the list)
    • apply Gauss filter to S to get S1
    • apply 3x3 max pooling of stride 1 to S to get S2, pad to the same shape of S1
    • compare S1 and S2 to get P
  • generate a list of Connections from P for each connection type (limb) using NMS
  • generate a list of Humans from Connections using NMS
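A minimal NumPy/SciPy sketch of the peak-detection step (an illustration of the algorithm above, not the repository's C++ implementation; sigma and threshold are arbitrary here):

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def find_peaks(conf_map, sigma=3.0, threshold=0.1):
        """conf_map: one (H, W) confidence map S[:, :, j]."""
        s1 = gaussian_filter(conf_map, sigma=sigma)        # smoothed map S1
        s2 = maximum_filter(s1, size=3, mode='constant')   # 3x3 max pooling with stride 1
        peaks = (s1 == s2) & (s1 > threshold)              # local maxima above threshold
        ys, xs = np.nonzero(peaks)
        return list(zip(xs, ys))                           # (x, y) peak coordinates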

About the uff-runner.py and pretrained npz file

I used the two pretrained .npz files from the official pretrained-models repo. The errors are:
for 600000.npz

ValueError: Dimension 0 in both shapes must be equal, but are 7 and 3. Shapes are [7,7,128,128] and [3,3,128,128]. for 'Assign' (op: 'Assign') with input shapes: [7,7,128,128], [3,3,128,128].

for 345000.npz

ValueError: Dimension 2 in both shapes must be equal, but are 512 and 384. Shapes are [3,3,512,512] and [3,3,384,384]. for 'Assign_2' (op: 'Assign') with input shapes: [3,3,512,512], [3,3,384,384].

When I directly use the pretrained uff file, I get this:

[TensorRT] ERROR: Tensor image cannot be both input and output
[TensorRT] INFO: UFFParser: parsing MarkOutput_1
[TensorRT] INFO: UFFParser: parsing MarkOutput_2

How can I use the pretrained npz and uff files?

run video

Could you please tell me what command to use to run a video through this library on Linux?

convert

Converting to UFF graph
Warning: No conversion function registered for layer: Relu6 yet.
Converting as custom op Relu6 model/stage6/branch2/c6/batchnorm2/Relu6
name: "model/stage6/branch2/c6/batchnorm2/Relu6"
op: "Relu6"
input: "model/stage6/branch2/c6/batchnorm2/batchnorm/Add_1"
attr {
key: "T"
value {
type: DT_FLOAT
}
}

Different logging with official openpose

In the logging of official openpose, the loss is the sum of all losses of all branches:

I0911 18:24:34.281644 18177 solver.cpp:228] Iteration 599975, loss = 382.516
I0911 18:24:34.281685 18177 solver.cpp:244]     Train net output #0: loss_stage1_L1 = 98.4054 (* 1 = 98.4054 loss)
I0911 18:24:34.281697 18177 solver.cpp:244]     Train net output #1: loss_stage1_L2 = 34.9568 (* 1 = 34.9568 loss)
I0911 18:24:34.281704 18177 solver.cpp:244]     Train net output #2: loss_stage5_L1 = 92.2677 (* 1 = 92.2677 loss)
I0911 18:24:34.281713 18177 solver.cpp:244]     Train net output #3: loss_stage5_L2 = 33.2778 (* 1 = 33.2778 loss)
I0911 18:24:34.281721 18177 solver.cpp:244]     Train net output #4: loss_stage6_L1 = 91.2305 (* 1 = 91.2305 loss)
I0911 18:24:34.281729 18177 solver.cpp:244]     Train net output #5: loss_stage6_L2 = 32.3784 (* 1 = 32.3784 loss)
I0911 18:24:34.281736 18177 sgd_solver.cpp:106] Iteration 599975, lr = 4.91855e-07
I0911 18:24:35.087828 18177 solver.cpp:228] Iteration 599980, loss = 231.808
I0911 18:24:35.087862 18177 solver.cpp:244]     Train net output #0: loss_stage1_L1 = 60.8041 (* 1 = 60.8041 loss)
I0911 18:24:35.087868 18177 solver.cpp:244]     Train net output #1: loss_stage1_L2 = 21.7501 (* 1 = 21.7501 loss)
I0911 18:24:35.087873 18177 solver.cpp:244]     Train net output #2: loss_stage5_L1 = 55.8138 (* 1 = 55.8138 loss)
I0911 18:24:35.087877 18177 solver.cpp:244]     Train net output #3: loss_stage5_L2 = 19.9663 (* 1 = 19.9663 loss)
I0911 18:24:35.087884 18177 solver.cpp:244]     Train net output #4: loss_stage6_L1 = 54.2236 (* 1 = 54.2236 loss)
I0911 18:24:35.087889 18177 solver.cpp:244]     Train net output #5: loss_stage6_L2 = 19.2508 (* 1 = 19.2508 loss)
I0911 18:24:35.087894 18177 sgd_solver.cpp:106] Iteration 599980, lr = 4.91855e-07
I0911 18:24:35.923229 18177 solver.cpp:228] Iteration 599985, loss = 335.47
I0911 18:24:35.923282 18177 solver.cpp:244]     Train net output #0: loss_stage1_L1 = 88.7421 (* 1 = 88.7421 loss)
I0911 18:24:35.923290 18177 solver.cpp:244]     Train net output #1: loss_stage1_L2 = 27.7794 (* 1 = 27.7794 loss)
I0911 18:24:35.923295 18177 solver.cpp:244]     Train net output #2: loss_stage5_L1 = 85.2865 (* 1 = 85.2865 loss)
I0911 18:24:35.923301 18177 solver.cpp:244]     Train net output #3: loss_stage5_L2 = 25.1282 (* 1 = 25.1282 loss)
I0911 18:24:35.923306 18177 solver.cpp:244]     Train net output #4: loss_stage6_L1 = 83.6175 (* 1 = 83.6175 loss)
I0911 18:24:35.923311 18177 solver.cpp:244]     Train net output #5: loss_stage6_L2 = 24.917 (* 1 = 24.917 loss)
I0911 18:24:35.923316 18177 sgd_solver.cpp:106] Iteration 599985, lr = 4.91855e-07
I0911 18:24:36.637693 18177 solver.cpp:228] Iteration 599990, loss = 296.236
I0911 18:24:36.637727 18177 solver.cpp:244]     Train net output #0: loss_stage1_L1 = 70.544 (* 1 = 70.544 loss)
I0911 18:24:36.637734 18177 solver.cpp:244]     Train net output #1: loss_stage1_L2 = 32.5959 (* 1 = 32.5959 loss)
I0911 18:24:36.637739 18177 solver.cpp:244]     Train net output #2: loss_stage5_L1 = 66.6619 (* 1 = 66.6619 loss)
I0911 18:24:36.637743 18177 solver.cpp:244]     Train net output #3: loss_stage5_L2 = 31.0486 (* 1 = 31.0486 loss)
I0911 18:24:36.637749 18177 solver.cpp:244]     Train net output #4: loss_stage6_L1 = 65.2246 (* 1 = 65.2246 loss)
I0911 18:24:36.637753 18177 solver.cpp:244]     Train net output #5: loss_stage6_L2 = 30.1618 (* 1 = 30.1618 loss)
I0911 18:24:36.637758 18177 sgd_solver.cpp:106] Iteration 599990, lr = 4.91855e-07
I0911 18:24:37.321897 18177 solver.cpp:228] Iteration 599995, loss = 343.375
I0911 18:24:37.321946 18177 solver.cpp:244]     Train net output #0: loss_stage1_L1 = 89.8674 (* 1 = 89.8674 loss)
I0911 18:24:37.321954 18177 solver.cpp:244]     Train net output #1: loss_stage1_L2 = 29.9036 (* 1 = 29.9036 loss)
I0911 18:24:37.321992 18177 solver.cpp:244]     Train net output #2: loss_stage5_L1 = 85.3504 (* 1 = 85.3504 loss)
I0911 18:24:37.322000 18177 solver.cpp:244]     Train net output #3: loss_stage5_L2 = 27.5036 (* 1 = 27.5036 loss)
I0911 18:24:37.322008 18177 solver.cpp:244]     Train net output #4: loss_stage6_L1 = 83.8315 (* 1 = 83.8315 loss)
I0911 18:24:37.322016 18177 solver.cpp:244]     Train net output #5: loss_stage6_L2 = 26.9185 (* 1 = 26.9185 loss)
I0911 18:24:37.322024 18177 sgd_solver.cpp:106] Iteration 599995, lr = 4.91855e-07

But in our version, the total loss is not the sum of all stage losses. Which one is correct?

Total Loss at iteration 348546 / 600000 is: 326.2651062011719 Learning rate 4.435560e-06 weight_norm 2.053057e+00 Took: 0.7935836315155029s
Network# 0 For Branch 1 Loss: 80.05341
Network# 1 For Branch 2 Loss: 136.94666
Network# 2 For Branch 1 Loss: 79.45871
Network# 3 For Branch 2 Loss: 136.63705
Network# 4 For Branch 1 Loss: 79.07562
Network# 5 For Branch 2 Loss: 136.25262
Total Loss at iteration 348547 / 600000 is: 222.1693572998047 Learning rate 4.435560e-06 weight_norm 2.053057e+00 Took: 0.35128092765808105s
Network# 0 For Branch 1 Loss: 57.39211
Network# 1 For Branch 2 Loss: 92.5421
Network# 2 For Branch 1 Loss: 53.668625
Network# 3 For Branch 2 Loss: 91.577156
Network# 4 For Branch 1 Loss: 54.71547
Network# 5 For Branch 2 Loss: 90.337135
Total Loss at iteration 348548 / 600000 is: 228.1260223388672 Learning rate 4.435560e-06 weight_norm 2.053057e+00 Took: 0.743701696395874s
Network# 0 For Branch 1 Loss: 58.997124
Network# 1 For Branch 2 Loss: 93.80566
Network# 2 For Branch 1 Loss: 57.302673
Network# 3 For Branch 2 Loss: 93.5231
Network# 4 For Branch 1 Loss: 56.936867
Network# 5 For Branch 2 Loss: 91.580536

Hyperparameters for the mobilenet model training

Hello everyone,

I've been able to successfully train both vgg-based models: vggtiny and vgg. Regretfully, after trying with mobilenet I haven't been able to get the total loss below the 300 mark. I did some testing using the AdamOptimizer from tensorflow, but it didn't seem to have a big impact on the learning. Does anyone have tips on how to train the mobilenet version?

Cheers!

the loss is becoming inf?

The loss of stage one (stage1/branch1, stage1/branch2) becomes inf after models_mobilenet is trained for a few steps in parallel on two GPUs, while the other stages' losses stay normal, so the total loss is inf (Python 3.6, TensorFlow 1.9.0, TensorLayer). Training is normal when using the CPU. I spent some time trying to find the bug but failed, so I am asking for help. Thank you.

[TL] ERROR: file models/pose.npz doesn't exist.

Hi,

Thanks for the awesome project!

I cloned the project and typed commands:

pip3 install -r requirements.txt
pip3 install pycocotools
python3 train.py

I tried to run last command for couple of times, but the same error seems to reoccur (see below).

I'm running this on Linux:
Linux version 4.15.0-38-generic (buildd@lcy01-amd64-023) (gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #41-Ubuntu SMP Wed Oct 10 10:59:38 UTC 2018

~/openpose-plus$ python3 train.py
[TL] annotations exists
[TL] validating images exists
[TL] training images exists
[TL] testing images exists
[x] Get pose data from data/mscoco2017/train2017
loading annotations into memory...
Done (t=11.38s)
creating index...
index created!
Overall get 56599 valid pose images from data/mscoco2017/train2017 and data/mscoco2017/annotations/person_keypoints_train2017.json
data/mscoco2017/train2017 has 56599 images
[x] Get pose data from data/mscoco2017/val2017
loading annotations into memory...
Done (t=0.24s)
creating index...
index created!
Overall get 2346 valid pose images from data/mscoco2017/val2017 and data/mscoco2017/annotations/person_keypoints_val2017.json
data/mscoco2017/val2017 has 2346 images
[TL] InputLayer model/input: (?, 368, 368, 3)
[TL] Conv2d model/conv1_1: n_filter: 64 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/conv1_2: n_filter: 64 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] MaxPool2d model/pool1: filter_size: (2, 2) strides: (2, 2) padding: SAME
[TL] Conv2d model/conv2_1: n_filter: 128 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/conv2_2: n_filter: 128 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] MaxPool2d model/pool2: filter_size: (2, 2) strides: (2, 2) padding: SAME
[TL] Conv2d model/conv3_1: n_filter: 256 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/conv3_2: n_filter: 256 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/conv3_3: n_filter: 256 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/conv3_4: n_filter: 256 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] MaxPool2d model/pool3: filter_size: (2, 2) strides: (2, 2) padding: SAME
[TL] Conv2d model/conv4_1: n_filter: 512 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/conv4_2: n_filter: 512 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/conv4_3: n_filter: 256 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/conv4_4: n_filter: 128 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage1/branch1/c1: n_filter: 128 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage1/branch1/c2: n_filter: 128 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage1/branch1/c3: n_filter: 128 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage1/branch1/c4: n_filter: 512 filter_size: (1, 1) strides: (1, 1) pad: VALID act: relu
[TL] Conv2d model/stage1/branch1/confs: n_filter: 19 filter_size: (1, 1) strides: (1, 1) pad: VALID act: No Activation
[TL] Conv2d model/stage1/branch2/c1: n_filter: 128 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage1/branch2/c2: n_filter: 128 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage1/branch2/c3: n_filter: 128 filter_size: (3, 3) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage1/branch2/c4: n_filter: 512 filter_size: (1, 1) strides: (1, 1) pad: VALID act: relu
[TL] Conv2d model/stage1/branch2/pafs: n_filter: 38 filter_size: (1, 1) strides: (1, 1) pad: VALID act: No Activation
[TL] ConcatLayer model/stage2/concat: axis: -1
[TL] Conv2d model/stage2/branch1/c1: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage2/branch1/c2: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage2/branch1/c3: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage2/branch1/c4: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage2/branch1/c5: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage2/branch1/c6: n_filter: 128 filter_size: (1, 1) strides: (1, 1) pad: VALID act: relu
[TL] Conv2d model/stage2/branch1/conf: n_filter: 19 filter_size: (1, 1) strides: (1, 1) pad: VALID act: No Activation
[TL] Conv2d model/stage2/branch2/c1: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage2/branch2/c2: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage2/branch2/c3: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage2/branch2/c4: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage2/branch2/c5: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage2/branch2/c6: n_filter: 128 filter_size: (1, 1) strides: (1, 1) pad: VALID act: relu
[TL] Conv2d model/stage2/branch2/pafs: n_filter: 38 filter_size: (1, 1) strides: (1, 1) pad: VALID act: No Activation
[TL] ConcatLayer model/stage3/concat: axis: -1
[TL] Conv2d model/stage3/branch1/c1: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage3/branch1/c2: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage3/branch1/c3: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage3/branch1/c4: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage3/branch1/c5: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage3/branch1/c6: n_filter: 128 filter_size: (1, 1) strides: (1, 1) pad: VALID act: relu
[TL] Conv2d model/stage3/branch1/conf: n_filter: 19 filter_size: (1, 1) strides: (1, 1) pad: VALID act: No Activation
[TL] Conv2d model/stage3/branch2/c1: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage3/branch2/c2: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage3/branch2/c3: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage3/branch2/c4: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage3/branch2/c5: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage3/branch2/c6: n_filter: 128 filter_size: (1, 1) strides: (1, 1) pad: VALID act: relu
[TL] Conv2d model/stage3/branch2/pafs: n_filter: 38 filter_size: (1, 1) strides: (1, 1) pad: VALID act: No Activation
[TL] ConcatLayer model/stage4/concat: axis: -1
[TL] Conv2d model/stage4/branch1/c1: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage4/branch1/c2: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage4/branch1/c3: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage4/branch1/c4: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage4/branch1/c5: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage4/branch1/c6: n_filter: 128 filter_size: (1, 1) strides: (1, 1) pad: VALID act: relu
[TL] Conv2d model/stage4/branch1/conf: n_filter: 19 filter_size: (1, 1) strides: (1, 1) pad: VALID act: No Activation
[TL] Conv2d model/stage4/branch2/c1: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage4/branch2/c2: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage4/branch2/c3: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage4/branch2/c4: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage4/branch2/c5: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage4/branch2/c6: n_filter: 128 filter_size: (1, 1) strides: (1, 1) pad: VALID act: relu
[TL] Conv2d model/stage4/branch2/pafs: n_filter: 38 filter_size: (1, 1) strides: (1, 1) pad: VALID act: No Activation
[TL] ConcatLayer model/stage5/concat: axis: -1
[TL] Conv2d model/stage5/branch1/c1: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage5/branch1/c2: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage5/branch1/c3: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage5/branch1/c4: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage5/branch1/c5: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage5/branch1/c6: n_filter: 128 filter_size: (1, 1) strides: (1, 1) pad: VALID act: relu
[TL] Conv2d model/stage5/branch1/conf: n_filter: 19 filter_size: (1, 1) strides: (1, 1) pad: VALID act: No Activation
[TL] Conv2d model/stage5/branch2/c1: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage5/branch2/c2: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage5/branch2/c3: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage5/branch2/c4: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage5/branch2/c5: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage5/branch2/c6: n_filter: 128 filter_size: (1, 1) strides: (1, 1) pad: VALID act: relu
[TL] Conv2d model/stage5/branch2/pafs: n_filter: 38 filter_size: (1, 1) strides: (1, 1) pad: VALID act: No Activation
[TL] ConcatLayer model/stage6/concat: axis: -1
[TL] Conv2d model/stage6/branch1/c1: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage6/branch1/c2: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage6/branch1/c3: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage6/branch1/c4: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage6/branch1/c5: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage6/branch1/c6: n_filter: 128 filter_size: (1, 1) strides: (1, 1) pad: VALID act: relu
[TL] Conv2d model/stage6/branch1/conf: n_filter: 19 filter_size: (1, 1) strides: (1, 1) pad: VALID act: No Activation
[TL] Conv2d model/stage6/branch2/c1: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage6/branch2/c2: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage6/branch2/c3: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage6/branch2/c4: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage6/branch2/c5: n_filter: 128 filter_size: (7, 7) strides: (1, 1) pad: SAME act: relu
[TL] Conv2d model/stage6/branch2/c6: n_filter: 128 filter_size: (1, 1) strides: (1, 1) pad: VALID act: relu
[TL] Conv2d model/stage6/branch2/pafs: n_filter: 38 filter_size: (1, 1) strides: (1, 1) pad: VALID act: No Activation
[TL] [*] geting variables with kernel
[TL] got 0: model/conv1_1/kernel:0 (3, 3, 3, 64)
[TL] got 1: model/conv1_2/kernel:0 (3, 3, 64, 64)
[TL] got 2: model/conv2_1/kernel:0 (3, 3, 64, 128)
[TL] got 3: model/conv2_2/kernel:0 (3, 3, 128, 128)
[TL] got 4: model/conv3_1/kernel:0 (3, 3, 128, 256)
[TL] got 5: model/conv3_2/kernel:0 (3, 3, 256, 256)
[TL] got 6: model/conv3_3/kernel:0 (3, 3, 256, 256)
[TL] got 7: model/conv3_4/kernel:0 (3, 3, 256, 256)
[TL] got 8: model/conv4_1/kernel:0 (3, 3, 256, 512)
[TL] got 9: model/conv4_2/kernel:0 (3, 3, 512, 512)
[TL] got 10: model/conv4_3/kernel:0 (3, 3, 512, 256)
[TL] got 11: model/conv4_4/kernel:0 (3, 3, 256, 128)
[TL] got 12: model/stage1/branch1/c1/kernel:0 (3, 3, 128, 128)
[TL] got 13: model/stage1/branch1/c2/kernel:0 (3, 3, 128, 128)
[TL] got 14: model/stage1/branch1/c3/kernel:0 (3, 3, 128, 128)
[TL] got 15: model/stage1/branch1/c4/kernel:0 (1, 1, 128, 512)
[TL] got 16: model/stage1/branch1/confs/kernel:0 (1, 1, 512, 19)
[TL] got 17: model/stage1/branch2/c1/kernel:0 (3, 3, 128, 128)
[TL] got 18: model/stage1/branch2/c2/kernel:0 (3, 3, 128, 128)
[TL] got 19: model/stage1/branch2/c3/kernel:0 (3, 3, 128, 128)
[TL] got 20: model/stage1/branch2/c4/kernel:0 (1, 1, 128, 512)
[TL] got 21: model/stage1/branch2/pafs/kernel:0 (1, 1, 512, 38)
[TL] got 22: model/stage2/branch1/c1/kernel:0 (7, 7, 185, 128)
[TL] got 23: model/stage2/branch1/c2/kernel:0 (7, 7, 128, 128)
[TL] got 24: model/stage2/branch1/c3/kernel:0 (7, 7, 128, 128)
[TL] got 25: model/stage2/branch1/c4/kernel:0 (7, 7, 128, 128)
[TL] got 26: model/stage2/branch1/c5/kernel:0 (7, 7, 128, 128)
[TL] got 27: model/stage2/branch1/c6/kernel:0 (1, 1, 128, 128)
[TL] got 28: model/stage2/branch1/conf/kernel:0 (1, 1, 128, 19)
[TL] got 29: model/stage2/branch2/c1/kernel:0 (7, 7, 185, 128)
[TL] got 30: model/stage2/branch2/c2/kernel:0 (7, 7, 128, 128)
[TL] got 31: model/stage2/branch2/c3/kernel:0 (7, 7, 128, 128)
[TL] got 32: model/stage2/branch2/c4/kernel:0 (7, 7, 128, 128)
[TL] got 33: model/stage2/branch2/c5/kernel:0 (7, 7, 128, 128)
[TL] got 34: model/stage2/branch2/c6/kernel:0 (1, 1, 128, 128)
[TL] got 35: model/stage2/branch2/pafs/kernel:0 (1, 1, 128, 38)
[TL] got 36: model/stage3/branch1/c1/kernel:0 (7, 7, 185, 128)
[TL] got 37: model/stage3/branch1/c2/kernel:0 (7, 7, 128, 128)
[TL] got 38: model/stage3/branch1/c3/kernel:0 (7, 7, 128, 128)
[TL] got 39: model/stage3/branch1/c4/kernel:0 (7, 7, 128, 128)
[TL] got 40: model/stage3/branch1/c5/kernel:0 (7, 7, 128, 128)
[TL] got 41: model/stage3/branch1/c6/kernel:0 (1, 1, 128, 128)
[TL] got 42: model/stage3/branch1/conf/kernel:0 (1, 1, 128, 19)
[TL] got 43: model/stage3/branch2/c1/kernel:0 (7, 7, 185, 128)
[TL] got 44: model/stage3/branch2/c2/kernel:0 (7, 7, 128, 128)
[TL] got 45: model/stage3/branch2/c3/kernel:0 (7, 7, 128, 128)
[TL] got 46: model/stage3/branch2/c4/kernel:0 (7, 7, 128, 128)
[TL] got 47: model/stage3/branch2/c5/kernel:0 (7, 7, 128, 128)
[TL] got 48: model/stage3/branch2/c6/kernel:0 (1, 1, 128, 128)
[TL] got 49: model/stage3/branch2/pafs/kernel:0 (1, 1, 128, 38)
[TL] got 50: model/stage4/branch1/c1/kernel:0 (7, 7, 185, 128)
[TL] got 51: model/stage4/branch1/c2/kernel:0 (7, 7, 128, 128)
[TL] got 52: model/stage4/branch1/c3/kernel:0 (7, 7, 128, 128)
[TL] got 53: model/stage4/branch1/c4/kernel:0 (7, 7, 128, 128)
[TL] got 54: model/stage4/branch1/c5/kernel:0 (7, 7, 128, 128)
[TL] got 55: model/stage4/branch1/c6/kernel:0 (1, 1, 128, 128)
[TL] got 56: model/stage4/branch1/conf/kernel:0 (1, 1, 128, 19)
[TL] got 57: model/stage4/branch2/c1/kernel:0 (7, 7, 185, 128)
[TL] got 58: model/stage4/branch2/c2/kernel:0 (7, 7, 128, 128)
[TL] got 59: model/stage4/branch2/c3/kernel:0 (7, 7, 128, 128)
[TL] got 60: model/stage4/branch2/c4/kernel:0 (7, 7, 128, 128)
[TL] got 61: model/stage4/branch2/c5/kernel:0 (7, 7, 128, 128)
[TL] got 62: model/stage4/branch2/c6/kernel:0 (1, 1, 128, 128)
[TL] got 63: model/stage4/branch2/pafs/kernel:0 (1, 1, 128, 38)
[TL] got 64: model/stage5/branch1/c1/kernel:0 (7, 7, 185, 128)
[TL] got 65: model/stage5/branch1/c2/kernel:0 (7, 7, 128, 128)
[TL] got 66: model/stage5/branch1/c3/kernel:0 (7, 7, 128, 128)
[TL] got 67: model/stage5/branch1/c4/kernel:0 (7, 7, 128, 128)
[TL] got 68: model/stage5/branch1/c5/kernel:0 (7, 7, 128, 128)
[TL] got 69: model/stage5/branch1/c6/kernel:0 (1, 1, 128, 128)
[TL] got 70: model/stage5/branch1/conf/kernel:0 (1, 1, 128, 19)
[TL] got 71: model/stage5/branch2/c1/kernel:0 (7, 7, 185, 128)
[TL] got 72: model/stage5/branch2/c2/kernel:0 (7, 7, 128, 128)
[TL] got 73: model/stage5/branch2/c3/kernel:0 (7, 7, 128, 128)
[TL] got 74: model/stage5/branch2/c4/kernel:0 (7, 7, 128, 128)
[TL] got 75: model/stage5/branch2/c5/kernel:0 (7, 7, 128, 128)
[TL] got 76: model/stage5/branch2/c6/kernel:0 (1, 1, 128, 128)
[TL] got 77: model/stage5/branch2/pafs/kernel:0 (1, 1, 128, 38)
[TL] got 78: model/stage6/branch1/c1/kernel:0 (7, 7, 185, 128)
[TL] got 79: model/stage6/branch1/c2/kernel:0 (7, 7, 128, 128)
[TL] got 80: model/stage6/branch1/c3/kernel:0 (7, 7, 128, 128)
[TL] got 81: model/stage6/branch1/c4/kernel:0 (7, 7, 128, 128)
[TL] got 82: model/stage6/branch1/c5/kernel:0 (7, 7, 128, 128)
[TL] got 83: model/stage6/branch1/c6/kernel:0 (1, 1, 128, 128)
[TL] got 84: model/stage6/branch1/conf/kernel:0 (1, 1, 128, 19)
[TL] got 85: model/stage6/branch2/c1/kernel:0 (7, 7, 185, 128)
[TL] got 86: model/stage6/branch2/c2/kernel:0 (7, 7, 128, 128)
[TL] got 87: model/stage6/branch2/c3/kernel:0 (7, 7, 128, 128)
[TL] got 88: model/stage6/branch2/c4/kernel:0 (7, 7, 128, 128)
[TL] got 89: model/stage6/branch2/c5/kernel:0 (7, 7, 128, 128)
[TL] got 90: model/stage6/branch2/c6/kernel:0 (1, 1, 128, 128)
[TL] got 91: model/stage6/branch2/pafs/kernel:0 (1, 1, 128, 38)
Start - n_step: 600000 batch_size: 8 lr_init: 4e-05 lr_decay_every_step: 136106
[TL] ERROR: file models/pose.npz doesn't exist.
2018-11-06 15:19:34.013385: W tensorflow/core/framework/op_kernel.cc:1261] Invalid argument: TypeError: affine_transform_cv2() got an unexpected keyword argument 'border_mode'
Traceback (most recent call last):

File "/home/jaakko/.local/lib/python3.6/site-packages/tensorflow/python/ops/script_ops.py", line 206, in call
ret = func(*args)

File "train.py", line 154, in _data_aug_fn
mask_miss = tl.prepro.affine_transform_cv2(mask_miss, transform_matrix, border_mode='replicate')

TypeError: affine_transform_cv2() got an unexpected keyword argument 'border_mode'

(The same warning and affine_transform_cv2 TypeError traceback is repeated for each of the remaining data-loading workers.)
Traceback (most recent call last):
File "/home/jaakko/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/jaakko/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/jaakko/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: TypeError: affine_transform_cv2() got an unexpected keyword argument 'border_mode'
Traceback (most recent call last):

File "/home/jaakko/.local/lib/python3.6/site-packages/tensorflow/python/ops/script_ops.py", line 206, in call
ret = func(*args)

File "train.py", line 154, in _data_aug_fn
mask_miss = tl.prepro.affine_transform_cv2(mask_miss, transform_matrix, border_mode='replicate')

TypeError: affine_transform_cv2() got an unexpected keyword argument 'border_mode'

 [[{{node PyFunc}} = PyFunc[Tin=[DT_FLOAT, DT_STRING], Tout=[DT_FLOAT, DT_FLOAT, DT_FLOAT], token="pyfunc_3"](convert_image, arg1)]]
 [[{{node IteratorGetNext}} = IteratorGetNext[output_shapes=[[?,368,368,3], [?,46,46,57], [?,46,46,1]], output_types=[DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](OneShotIterator)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 471, in
single_train(dataset)
File "train.py", line 255, in single_train
sess.run([train_op, total_loss, stage_losses, l2_loss, last_conf, last_paf])
File "/home/jaakko/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/jaakko/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/jaakko/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/jaakko/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: TypeError: affine_transform_cv2() got an unexpected keyword argument 'border_mode'
Traceback (most recent call last):

File "/home/jaakko/.local/lib/python3.6/site-packages/tensorflow/python/ops/script_ops.py", line 206, in call
ret = func(*args)

File "train.py", line 154, in _data_aug_fn
mask_miss = tl.prepro.affine_transform_cv2(mask_miss, transform_matrix, border_mode='replicate')

TypeError: affine_transform_cv2() got an unexpected keyword argument 'border_mode'

 [[{{node PyFunc}} = PyFunc[Tin=[DT_FLOAT, DT_STRING], Tout=[DT_FLOAT, DT_FLOAT, DT_FLOAT], token="pyfunc_3"](convert_image, arg1)]]
 [[node IteratorGetNext (defined at train.py:212)  = IteratorGetNext[output_shapes=[[?,368,368,3], [?,46,46,57], [?,46,46,1]], output_types=[DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](OneShotIterator)]]

3D pose estimation training method

Hi!
We are currently working on human body detection (similar to the picture below) from a top-down (head-top) view.

However, we found that openpose does not work well for this kind of human detection; there are a lot of missed detections and misdetections.
Sometimes a foot is detected as a hand, and sometimes a hand is detected as a foot, which I think is because of the dataset used to train its model and the model definition itself.

We know that the openpose body key points used during training are as follows:
{0-'nose', 1-'neck', 2-'Rsho', 3-'Relb', 4-'Rwri',5-'Lsho', 6-'Lelb', 7-'Lwri', 8-'Rhip', 9-'Rkne', 10-'Rank', 11-'Lhip', 12-'Lkne', 13-'Lank', 14-'Leye', 15-'Reye', 16-'Lear', 17-'Rear', 18-'pt19'}
My idea for the overhead view is to focus only on the upper body, i.e. keep the labels
{2-'Rsho', 3-'Relb', 4-'Rwri', 5-'Lsho', 6-'Lelb', 7-'Lwri', 18-'pt19'}
and add a "head" point.
Is this feasible? If so, does the training code need to be modified, and how?
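As a rough sketch of that proposal (purely hypothetical; the channel counts of the final conf/paf layers and the limb definitions would also have to change to match), the reduced keypoint layout could look like:

    # hypothetical reduced keypoint layout for the top-down view
    REDUCED_PARTS = {
        0: 'head',   # new point replacing nose/eyes/ears
        1: 'Rsho', 2: 'Relb', 3: 'Rwri',
        4: 'Lsho', 5: 'Lelb', 6: 'Lwri',
        7: 'background',  # corresponds to pt19 / the background channel
    }
    N_CONF_CHANNELS = len(REDUCED_PARTS)  # replaces the original 19 confidence channels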

No trained Model

I saw the High-performance Inference using TensorRT part and I want to run a test on a TX2 too, but I can't find the model. Would you please share it? Thanks.

Unable to build TensorRT library

After cloning this repository and running make pack I get the following error:

CMake Warning at cmake/examples.cmake:2 (FIND_PACKAGE):
  By not providing "Findgflags.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "gflags", but
  CMake did not find one.

  Could not find a package configuration file provided by "gflags" with any
  of the following names:

    gflagsConfig.cmake
    gflags-config.cmake

  Add the installation prefix of "gflags" to CMAKE_PREFIX_PATH or set
  "gflags_DIR" to a directory containing one of the above files.  If "gflags"
  provides a separate development package or SDK, be sure it has been
  installed.
Call Stack (most recent call first):
  CMakeLists.txt:18 (INCLUDE)


-- Configuring done
CMake Error at cmake/3rdparty.cmake:20 (ADD_LIBRARY):
  No SOURCES given to target: tracer
Call Stack (most recent call first):
  CMakeLists.txt:17 (INCLUDE)


CMake Error at cmake/3rdparty.cmake:38 (ADD_LIBRARY):
  No SOURCES given to target: stdtensor
Call Stack (most recent call first):
  CMakeLists.txt:17 (INCLUDE)

How can I fix this issue?

[TODO] Speed Up Training

test an image

Hi,
how can I use the .npz file to test an image, i.e. predict the pose and mark it on the image?
Thanks

Training for different image sizes

Procedure for retraining on a different image size:

  1. Changed the train_config image size.
  2. No change in any other file.

This gives the following error:
2018-11-22 06:22:58.201844: W tensorflow/core/framework/op_kernel.cc:1261] Invalid argument: ValueError: cannot reshape array of size 240672 into shape (368,656,1)
Traceback (most recent call last):

File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/script_ops.py", line 147, in call
ret = func(*args)

File "train.py", line 172, in _data_aug_fn
img_mask = mask_miss.reshape(hin, win, 1)

ValueError: cannot reshape array of size 240672 into shape (368,656,1)

Simplify python dependency

In order to export a model defined in TL to the uff format, we need the following pip packages:

  • tensorlayer: installed via pip
  • uff: installed via dpkg from a local deb package provided by nvidia

However, uff only supports Python 3.5, while the default python3 on Ubuntu 18 is Python 3.6.
If we install TL via python3.5 -m pip install tensorlayer --user, some required packages will be missing and must be installed via apt instead of pip, i.e. tk.
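For reference, a minimal sketch of the export step itself (assuming NVIDIA's uff Python package; the output node names here are the ones used with the frozen model in the example-inference-2 issue above and may differ for other models):

    import uff

    # convert a frozen TensorFlow graph (.pb) to UFF for TensorRT
    uff.from_tensorflow_frozen_model(
        "pose.pb",
        output_nodes=["outputs/conf", "outputs/paf"],
        output_filename="pose.uff",
    )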

build cpp error

Scanning dependencies of target see-pose
make[4]: Leaving directory '/home/wangweilin/wwl/openpose/tensorlayer/openpose/cpp/examples/app/cmake-build/Linux'
make[4]: Entering directory '/home/wangweilin/wwl/openpose/tensorlayer/openpose/cpp/examples/app/cmake-build/Linux'
[ 33%] Building CXX object CMakeFiles/see-pose.dir/src/input.cc.o
[ 66%] Building CXX object CMakeFiles/see-pose.dir/src/main.cc.o
In file included from /home/wangweilin/wwl/openpose/tensorlayer/openpose/cpp/examples/app/src/main.cc:9:0:
/home/wangweilin/wwl/openpose/tensorlayer/openpose/cpp/examples/app/src/timer.h:7:7: error: using typedef-name ‘timer_t’ after ‘class’
class timer_t
^
In file included from /usr/include/time.h:47:0,
from /usr/include/pthread.h:24,
from /usr/include/x86_64-linux-gnu/c++/5/bits/gthr-default.h:35,
from /usr/include/x86_64-linux-gnu/c++/5/bits/gthr.h:148,
from /usr/include/c++/5/ext/atomicity.h:35,
from /usr/include/c++/5/memory:73,
from /home/wangweilin/wwl/openpose/tensorlayer/openpose/cpp/examples/app/src/main.cc:1:
/usr/include/x86_64-linux-gnu/bits/types/timer_t.h:7:19: note: ‘timer_t’ has a previous declaration here
typedef __timer_t timer_t;
^
/home/wangweilin/wwl/openpose/tensorlayer/openpose/cpp/examples/app/src/main.cc: In function ‘void pose_example(const std::vector<std::__cxx11::basic_string >&)’:
/home/wangweilin/wwl/openpose/tensorlayer/openpose/cpp/examples/app/src/main.cc:36:41: error: invalid conversion from ‘const void*’ to ‘timer_t {aka void*}’ [-fpermissive]
timer_t _("create_pose_detector");
^
/home/wangweilin/wwl/openpose/tensorlayer/openpose/cpp/examples/app/src/main.cc:44:46: error: invalid conversion from ‘const void*’ to ‘timer_t {aka void*}’ [-fpermissive]
timer_t _("get_detection_tensors");
^
CMakeFiles/see-pose.dir/build.make:86: recipe for target 'CMakeFiles/see-pose.dir/src/main.cc.o' failed
make[4]: *** [CMakeFiles/see-pose.dir/src/main.cc.o] Error 1
make[4]: *** Waiting for unfinished jobs....
make[4]: Leaving directory '/home/wangweilin/wwl/openpose/tensorlayer/openpose/cpp/examples/app/cmake-build/Linux'
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/see-pose.dir/all' failed
make[3]: *** [CMakeFiles/see-pose.dir/all] Error 2
make[3]: Leaving directory '/home/wangweilin/wwl/openpose/tensorlayer/openpose/cpp/examples/app/cmake-build/Linux'
Makefile:83: recipe for target 'all' failed
make[2]: *** [all] Error 2
make[2]: Leaving directory '/home/wangweilin/wwl/openpose/tensorlayer/openpose/cpp/examples/app/cmake-build/Linux'
Makefile:21: recipe for target 'see_pose' failed
make[1]: *** [see_pose] Error 2
make[1]: Leaving directory '/home/wangweilin/wwl/openpose/tensorlayer/openpose/cpp/examples/app'
Makefile:30: recipe for target 'opencv_example' failed
make: *** [opencv_example] Error 2

live-camera.sh make failure

I have trouble building the live-camera example via live-camera.sh.

Environment: Ubuntu 16.04, Python 3.5 (not Anaconda3), CUDA 9.0, NVIDIA driver 396.54.
All required packages are installed, and OpenCV 3.4.1 was built from source:

Python 3.5.2 (default, Nov 12 2018, 13:43:14) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'3.4.1'
>>> import gflags
>>> import tensorrt
>>> tensorrt.__version__
'3.0.4'

Error Message:

[ 91%] Linking CXX executable example-batch-detector
[ 91%] Linking CXX executable example-stream-detector
[ 91%] Linking CXX executable example-live-camera
/usr/bin/ld: warning: libopencv_imgproc.so.3.4, needed by //usr/local/lib/libopencv_imgcodecs.so, may conflict with libopencv_imgproc.so.2.4
/usr/bin/ld: libhelpers.a(vis.cpp.o): undefined reference to symbol '_ZN2cv4lineERKNS_17_InputOutputArrayENS_6Point_IiEES4_RKNS_7Scalar_IdEEiii'
/usr/local/lib/libopencv_imgproc.so.3.4: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
CMakeFiles/example-batch-detector.dir/build.make:100: recipe for target 'example-batch-detector' failed
make[3]: *** [example-batch-detector] Error 1
make[3]: Leaving directory '/home/d201/Documents/python_project/openpose-plus/cmake-build/Linux'
CMakeFiles/Makefile2:409: recipe for target 'CMakeFiles/example-batch-detector.dir/all' failed
make[2]: *** [CMakeFiles/example-batch-detector.dir/all] Error 2
make[2]: *** Waiting for unfinished jobs....
/usr/bin/ld: warning: libopencv_imgproc.so.3.4, needed by //usr/local/lib/libopencv_imgcodecs.so, may conflict with libopencv_imgproc.so.2.4
/usr/bin/ld: libhelpers.a(vis.cpp.o): undefined reference to symbol '_ZN2cv4lineERKNS_17_InputOutputArrayENS_6Point_IiEES4_RKNS_7Scalar_IdEEiii'
/usr/local/lib/libopencv_imgproc.so.3.4: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
CMakeFiles/example-stream-detector.dir/build.make:100: recipe for target 'example-stream-detector' failed
make[3]: *** [example-stream-detector] Error 1
make[3]: Leaving directory '/home/d201/Documents/python_project/openpose-plus/cmake-build/Linux'
CMakeFiles/Makefile2:293: recipe for target 'CMakeFiles/example-stream-detector.dir/all' failed
make[2]: *** [CMakeFiles/example-stream-detector.dir/all] Error 2
/usr/bin/ld: warning: libopencv_imgproc.so.3.4, needed by //usr/local/lib/libopencv_videoio.so, may conflict with libopencv_imgproc.so.2.4
/usr/bin/ld: libhelpers.a(vis.cpp.o): undefined reference to symbol '_ZN2cv4lineERKNS_17_InputOutputArrayENS_6Point_IiEES4_RKNS_7Scalar_IdEEiii'
/usr/local/lib/libopencv_imgproc.so.3.4: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
CMakeFiles/example-live-camera.dir/build.make:100: recipe for target 'example-live-camera' failed
make[3]: *** [example-live-camera] Error 1
make[3]: Leaving directory '/home/d201/Documents/python_project/openpose-plus/cmake-build/Linux'
CMakeFiles/Makefile2:70: recipe for target 'CMakeFiles/example-live-camera.dir/all' failed
make[2]: *** [CMakeFiles/example-live-camera.dir/all] Error 2
make[2]: Leaving directory '/home/d201/Documents/python_project/openpose-plus/cmake-build/Linux'
Makefile:160: recipe for target 'all' failed
make[1]: *** [all] Error 2
make[1]: Leaving directory '/home/d201/Documents/python_project/openpose-plus/cmake-build/Linux'
Makefile:12: recipe for target 'build_with_cmake' failed
make: *** [build_with_cmake] Error 2

From my investigation, this is likely caused by the OpenCV libraries not being passed to g++ at link time, and it could be fixed by adding link rules to the Makefile. HOWEVER, the generated Makefile under cmake-build/Linux/ should not be edited by hand.

Please let me know any other plausible way to solve this problem. Many thanks!
