
hzzone / pytorch-openpose


pytorch implementation of openpose including Hand and Body Pose Estimation.

Languages: Jupyter Notebook 95.92%, Python 4.08%
Topics: openpose, pose-estimation, pytorch

pytorch-openpose's People

Contributors

fabian-hertwig-mw, gngdb, hzzone, rizkywellyanto


pytorch-openpose's Issues

Error like this when running demo_camera.py

File "demo_camera.py", line 15, in
print(f"Torch device: {torch.cuda.get_device_name()}")
TypeError: get_device_name() missing 1 required positional argument: 'device'
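
A hedged workaround (not an official patch): on the PyTorch version in use here, torch.cuda.get_device_name() appears to require an explicit device argument, so pass the current device:

# Assumed replacement for line 15 of demo_camera.py; adjust to your setup
import torch

if torch.cuda.is_available():
    print(f"Torch device: {torch.cuda.get_device_name(torch.cuda.current_device())}")
else:
    print("Torch device: CPU")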

time consuming

Hello, I'm wondering whether this code can run in real time when detecting both body and hand keypoints. Processing a single image took around 9 s on a GPU when I tried demo.py, which is really time-consuming.

Can't open camera by index error when trying to use Kinect

Thank you guys for providing the PyTorch model.

I am trying to get it to work with a Kinect using libfreenect2 (OpenKinect) on my desktop, but I get the error below.

Do you know of a way to resolve it, or
how to change the settings to connect to the Kinect?

(pytorch-openpose) User@computer:~/software/pytorch-openpose$ python demo_camera.py
Torch device: GeForce RTX 2080 Ti
[ WARN:0] global /io/opencv/modules/videoio/src/cap_v4l.cpp (887) open VIDEOIO(V4L2:/dev/video0): can't open camera by index
Traceback (most recent call last):
File "demo_camera.py", line 22, in
candidate, subset = body_estimation(oriImg)
File "/home/femifapo/software/pytorch-openpose/src/body.py", line 31, in call
multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search]
File "/home/femifapo/software/pytorch-openpose/src/body.py", line 31, in
multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search]
AttributeError: 'NoneType' object has no attribute 'shape'
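
A hedged sketch of a guard before calling body_estimation: libfreenect2 does not necessarily expose the Kinect as a /dev/video device, so cv2.VideoCapture(0) can fail, and then cap.read() returns None, which is what triggers the AttributeError above. The device index below is an assumption; try other indices or feed frames from the libfreenect2 listener directly.

# Minimal check before running the pose models on a frame
import cv2

cap = cv2.VideoCapture(0)          # try 1, 2, ... if the Kinect is not /dev/video0
if not cap.isOpened():
    raise RuntimeError("Could not open the camera; check the device index / V4L2 driver")
ret, oriImg = cap.read()
if not ret or oriImg is None:
    raise RuntimeError("Camera opened but no frame was returned")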

Does the repo support hand detection?

The description seems to suggest that the hand pose detection is conditioned on the body pose detection. However, in the demo image, it shows a standalone detected hand. I guess my question is: given an image of only a hand, can the model detect it? Because the original OpenPose does not.

How can I get a hand keypoint detector without body

Hello, I want a hand keypoint detector without the body. Is the paper "Hand Keypoint Detection in Single Images Using Multiview Bootstrapping" not open source? I want to learn how it is trained and look at the network model; the openpose project seems to provide only the code to detect the body.
Can you give me some guidance? Thank you very much.

runtime error for demo.jpg -- RuntimeError: Given groups=1, weight of size [256, 512, 3, 3], expected input[1, 3, 46, 32] to have 512 channels, but got 3 channels instead

Any idea what's happening?

SED-ML-0148% python demo1.py
Traceback (most recent call last):
File "demo1.py", line 17, in
candidate, subset = body_estimation(oriImg)
File "python/body.py", line 46, in call
Mconv7_stage6_L1, Mconv7_stage6_L2 = self.model(data)
File "/Users/ehartz01/Desktop/SignPhonologizer/nicey/lib/python2.7/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "python/model.py", line 108, in forward
out1 = self.model0(x)
File "/Users/ehartz01/Desktop/SignPhonologizer/nicey/lib/python2.7/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/Users/ehartz01/Desktop/SignPhonologizer/nicey/lib/python2.7/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/Users/ehartz01/Desktop/SignPhonologizer/nicey/lib/python2.7/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/Users/ehartz01/Desktop/SignPhonologizer/nicey/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 320, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [256, 512, 3, 3], expected input[1, 3, 46, 32] to have 512 channels, but got 3 channels instead

Is this available for commercial work?

Hi, thanks for such nice work!

I was looking at this project and noticed that you didn't add any license. I want to know whether it can be used for commercial projects or not.

I looked at the official OpenPose and found that they don't offer a commercial license for free.

Since your project is a derivative of their work, it's owned by CMU based on the OpenPose license. Could you look at this issue from the OpenPose repo for more detail?

Also, we can't use their pretrained model, which follows the same license as their code; this is the issue I'm referring to. Since you converted their Caffe model to a PyTorch model, it is a derivative of their work, and as per their license, your work becomes CMU's if used commercially and CMU's OpenPose license applies by default.

So this or any other implementation of OpenPose can't be used commercially!

What do you think about this?

ZeroDivisionError

Hello. Thanks for your excellent work. I can run the demo successfully, but when I ran it on another image I hit a ZeroDivisionError:

Traceback (most recent call last):
File "/home/hkuit155/Documents/bottom_up/pytorch-openpose/pose_track_demo.py",
line 18, in a
candidate, subset = body_estimation(oriImg)
File "/home/hkuit155/Documents/bottom_up/pytorch-openpose/src/body.py",
line 131, in call
0.5 * oriImg.shape[0] / norm - 1, 0) ZeroDivisionError: float division by zero

Multiple peoples body part IDs

If multiple bodies are detected, the subset grows, which is expected, but the ID for each body part changes. For example, the left eye has ID 14, but with three skeletons the IDs are not multiples of 14; they are placed next to each other in the subset and become 14, 15, 16. The right eye then no longer corresponds to its expected ID. Is there a way around this, so that each ID corresponds to the same skeleton point even when multiple people are detected?
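
For reference, a hedged sketch of how these arrays are commonly indexed (assuming the usual layout: candidate rows are (x, y, score, id) and the first 18 subset columns are indices into candidate, with -1 meaning the part was not found):

import numpy as np

def person_keypoints(candidate, subset, person_idx):
    """Return an (18, 3) array of (x, y, score) for one person; NaN where the part is missing."""
    kps = np.full((18, 3), np.nan)
    for part_id in range(18):
        idx = int(subset[person_idx][part_id])   # index into candidate, or -1
        if idx != -1:
            kps[part_id] = candidate[idx][:3]
    return kps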

numpy.AxisError: axis 1 is out of bounds for array of dimension 1

Traceback (most recent call last):
File "/home/moziaijason/pycode/detection/pytorch-openpose-master/src/hand.py", line 89, in
canvas = util.draw_handpose(oriImg, peaks, True)
File "/home/moziaijason/pycode/detection/pytorch-openpose-master/src/util.py", line 94, in draw_handpose
if np.sum(np.all(peaks[e], axis=1)==0)==0:
File "<array_function internals>", line 6, in all
File "/home/moziaijason/.local/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2398, in all
return _wrapreduction(a, np.logical_and, 'all', axis, None, out, keepdims=keepdims)
File "/home/moziaijason/.local/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 90, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
numpy.AxisError: axis 1 is out of bounds for array of dimension 1

util.py

for peaks in all_hand_peaks:
    for ie, e in enumerate(edges):
        if np.sum(np.all(peaks[e], axis=1) == 0) == 0:
            print(peaks[e])
            x1, y1 = peaks[e[0]]
            x2, y2 = peaks[e[1]]
            ax.plot([x1, x2], [y1, y2], color=matplotlib.colors.hsv_to_rgb([ie / float(len(edges)), 1.0, 1.0]))

When I run hand.py, the error above appears.
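
If I read util.draw_handpose correctly (an assumption based on the loop above), it expects a list of 2-D peak arrays, one per hand, so passing a single (21, 2) peaks array makes peaks[e] one-dimensional and triggers the AxisError. A sketch of the call I would expect to work:

# peaks: the (21, 2) array returned by hand_estimation for one hand (assumed shape)
canvas = util.draw_handpose(oriImg, [peaks], True)   # wrap in a list: one entry per hand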

Face Keypoints

Are you planning on adding functionality for face keypoints as well? Or how might I go about adding this in?

ZeroDivisionError: float division by zero in body.py

Thanks for your great work!

When I run demo.py with test_image changed to 'images/090932040.jpg', there is an error:

body.py:123: RuntimeWarning: invalid value encountered in true_divide
  vec = np.divide(vec, norm)
Traceback (most recent call last):
  File "demo.py", line 16, in <module>
    candidate, subset = body_estimation(oriImg)
  File "src\body.py", line 135, in __call__
    0.5 * oriImg.shape[0] / norm - 1, 0)
ZeroDivisionError: float division by zero

[image: 090932040.jpg]

This is the image I used. Could you help me with the error?
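
A workaround several users apply (an assumption on my part, not an upstream fix) is to clamp the limb-vector norm before the divisions in src/body.py, so images where two candidate joints coincide no longer crash:

# Hypothetical patch inside src/body.py; math and numpy (np) are already imported there
norm = math.sqrt(vec[0] * vec[0] + vec[1] * vec[1])
norm = max(norm, 1e-6)      # avoid dividing by zero when the two joints coincide
vec = np.divide(vec, norm)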

TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

First of all: thanks for your amazing work! I would like to use OpenPose, but can't run Caffe in my current setup/environment. This repo is a life-saver.

Now to the point: I got the following error when running the demo.py:
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

I fixed this by changing the line in python/hand.py:
output = self.model(data).numpy()

to:
output = self.model(data).cpu().numpy()

As numpy can't work on tensors that are still on the GPU. I'm not sure whether this needs to be changed in the repo, or whether this is the best solution for the error. Thanks.

My environment specs are the following:
OS: CentOS Linux release 7.5.1804
CUDA version: 8.0
cuDNN version: 7.0.5
Torch version: 0.4.1
Python version: Python 3.6.6
GPUs: 4x GeForce RTX 2070

The full error stack:

Traceback (most recent call last):
  File "demo.py", line 31, in <module>
    peaks = hand_estimation(oriImg[y:y+w, x:x+w, :])
  File "python/hand.py", line 46, in __call__
    output = self.model(data).numpy()
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Why not recognize gestures

Why are the hands/gestures not recognized?

hands_list = util.handDetect(candidate, subset, oriImg)
print(hands_list)

Result:
[]
[]
[]
How can this be solved?
Must there be a complete human body (including the hand) in the image for the hand to be detected?
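
As far as I understand the pipeline (an assumption from reading demo.py, not an authoritative statement), util.handDetect derives the hand regions from body keypoints such as the wrist and elbow, so an empty list usually means body estimation did not find usable arm keypoints for that person. A sketch of the expected call order:

# Sketch of the usual call order; body_estimation / hand_estimation come from src.body / src.hand
candidate, subset = body_estimation(oriImg)               # body keypoints must be found first
hands_list = util.handDetect(candidate, subset, oriImg)   # [] when no wrist/elbow pair qualifies
for x, y, w, is_left in hands_list:                       # tuple layout assumed from the demo
    peaks = hand_estimation(oriImg[y:y + w, x:x + w, :])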

Was the converted model pre-trained on MPII or COCO?

I'm working on something that requires an OpenPose model pre-trained on MPII. Is the converted model in this repo the MPII or the COCO model?

If it's not, do you have any recommendation for getting the MPII model up and running? I tried using caffemodel2pytorch on the trained OpenPose model from this repo, but have been running into errors/issues.

body pose inconsistency

There is an inconsistency between your semantics and OpenPose's semantics in the body pose estimation model. Could you please provide the semantics of each point?

Another inconsistency: if a point is absent from the predictions, OpenPose returns (0, 0); you don't.

Very slow inference with GPU

First of all, great work! Second of all, I have the following question:
I am running the demo script to do inference on the image provided in the repo, demo.png. While I successfully get the pose estimation result for demo.png, the inference takes about 0.7 s on a GPU.
Is that expected, or am I missing something here?
Thanks

ZeroDivisionError: float division by zero in body.py

RuntimeWarning: invalid value encountered in true_divide
vec = np.divide(vec, norm)
Traceback (most recent call last):
File "demo.py", line 16, in
candidate, subset = body_estimation(oriImg)
File "src\body.py", line 135, in call
0.5 * oriImg.shape[0] / norm - 1, 0)
ZeroDivisionError: float division by zero

format of output video

Thanks for such great work.
I have a question (maybe off-topic): I'm trying to save the output video in mp4 format, but I'm getting the error
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x56294b5339a0]
moov atom not found pipe:: Invalid data found when processing input

The mov format works but mp4 does not. Kindly suggest a solution.
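
For what it's worth, a hedged sketch of a writer setup that usually produces a playable .mp4 with OpenCV (codec, fps and frame size below are assumptions; adjust to your pipeline). The moov atom is only written when the file is finalized, so releasing the writer matters:

import cv2

fourcc = cv2.VideoWriter_fourcc(*'mp4v')                   # codec that matches the .mp4 container
writer = cv2.VideoWriter('output.mp4', fourcc, 30.0, (1280, 720))
# write each processed frame; its size must match the (width, height) above
# writer.write(frame)
writer.release()                                           # finalizes the file (writes the moov atom)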

How to use cuda to accelerate model inference ?

Hi, I see from the task manager when running demo_camera.py that CUDA usage is only about 10%, while another repo reaches 85%, so I think inference could be sped up by making more use of CUDA. But how? Can you give me some inspiration?

report bug : Error saving video

hello,
Thank you very much for sharing the code.
When I save the video-processing results there is a bug: the program (demo_camera.py) ends but the video is not saved, and the output file is only 6 KB or 8 KB or so. At first I thought it was a video-encoding format error, but it's not.
Finally, I found that the problem is that the frame size is changed after the hand-model calculation: the input size goes from 720 × 1280 to 400 × 711, which makes cv2.VideoWriter() fail to save the video.
Please verify and fix this bug.
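
Until that is fixed in the repo, a hedged workaround is to resize the annotated frame back to the size the writer was created with before writing it (1280 × 720 is taken from the report above; adjust to your capture):

import cv2

def write_frame(writer, canvas, frame_size=(1280, 720)):
    """Resize the annotated frame back to the writer's (width, height) before writing."""
    if (canvas.shape[1], canvas.shape[0]) != frame_size:
        canvas = cv2.resize(canvas, frame_size)
    writer.write(canvas)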

Error in hand tracking on video

Hey, I am running into a very weird issue: when I use the model to estimate poses from a video stream, it works on the first frame but fails after that with this error:

label_img, label_numbers = label(binary, return_num=True, connectivity = binary.ndim)
TypeError: 'str' object is not callable

Any guidance you can provide on the most probable cause of the error would be very helpful. Thanks!
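
One possible cause (an assumption; the calling code isn't shown) is that a variable named label shadows skimage.measure.label after the first frame, for example if a string is later assigned to that name. Importing the function under an alias sidesteps the problem:

import numpy as np
from skimage.measure import label as sk_label    # alias so nothing in the loop can shadow it

binary = np.zeros((8, 8), dtype=bool)            # placeholder mask standing in for the real one
binary[2:5, 2:5] = True
label_img, label_numbers = sk_label(binary, return_num=True, connectivity=binary.ndim)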

How to save results as Json files?

Hello!
I have used this repo; it's REALLY cool!
Now I want to get the JSON files that OpenPose outputs. Could you please help me if you know how to do this?
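
There is no built-in JSON writer in this repo as far as I know, but a minimal sketch along the lines of OpenPose's pose_keypoints_2d layout could look like this (the field names and the candidate/subset layout are assumptions on my part):

import json

def save_keypoints_json(candidate, subset, path):
    """Write body keypoints as OpenPose-style JSON: flat [x, y, score] triplets per person."""
    people = []
    for person in subset:
        keypoints = []
        for idx in person[:18].astype(int):       # first 18 columns index into candidate; -1 = missing
            if idx == -1:
                keypoints += [0.0, 0.0, 0.0]      # OpenPose writes zeros for absent points
            else:
                x, y, score = candidate[idx][:3]
                keypoints += [float(x), float(y), float(score)]
        people.append({"pose_keypoints_2d": keypoints})
    with open(path, "w") as f:
        json.dump({"version": 1.3, "people": people}, f)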

convert openpose caffemodel to pytorch model

Hello,
convert tool: https://github.com/vadimkantorov/caffemodel2pytorch
caffemodel (from https://github.com/CMU-Perceptual-Computing-Lab/openpose_train/tree/master/experimental_models):
http://posefs1.perception.cs.cmu.edu/OpenPose/models/pose/1_25BBkg/body_25b/pose_iter_XXXXXX.caffemodel

When I convert the caffemodel, the following happens:
Skipping layer [prelu4_2, PReLU, PReLU]: not found in caffemodel2pytorch.modules dict
Skipping layer [prelu4_3_CPM, PReLU, PReLU]: not found in caffemodel2pytorch.modules dict
Skipping layer [prelu4_4_CPM, PReLU, PReLU]: not found in caffemodel2pytorch.modules dict
Skipping layer [Mprelu1_stage0_L2_0, PReLU, PReLU]: not found in caffemodel2pytorch.modules dict
Skipping layer [Mconv1_stage0_L2_concat, Concat, Concat]: not found in caffemodel2pytorch.modules dict
[... the same "not found in caffemodel2pytorch.modules dict" message repeats for every PReLU and Concat layer up through stage 3 ...]

Can you help me? Thanks!

Could you tell me the model used evaluation result on COCO2017?

This code is very nice; thank you for your contribution. I used your demo to test on the COCO 2017 dataset and got the following results:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.291
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.505
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.282
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.143
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.498
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.344
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.530
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.348
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.155
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.604
These results seem quite low. Could you tell me the mAP you measured, or share some test details? Thanks a lot.

run error for body.py

Hi, when I run body.py I get the following error:
body.py:121: RuntimeWarning: invalid value encountered in true_divide
vec = np.divide(vec, norm)
Traceback (most recent call last):
File "body.py", line 217, in
candidate, subset = body_estimation(oriImg)
File "body.py", line 134, in call
0.5 * oriImg.shape[0] / norm - 1, 0)
ZeroDivisionError: float division by zero
I use Python 3.6 and PyTorch 0.4.

Also, when I use Python 2.7 and PyTorch 0.4.0:
Traceback (most recent call last):
File "body.py", line 218, in
candidate, subset = body_estimation(oriImg)
File "body.py", line 49, in call
Mconv7_stage6_L1, Mconv7_stage6_L2 = self.model(data)
File "/root/anaconda3/envs/caffe/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/root/dance/AIdance/pytorch-openpose/python/model.py", line 591, in forward
out1 = self.model0(x)
File "/root/anaconda3/envs/caffe/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/root/anaconda3/envs/caffe/lib/python2.7/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/root/anaconda3/envs/caffe/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/root/anaconda3/envs/caffe/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight[256, 512, 3, 3], so expected input[1, 3, 46, 50] to have 512 channels, but got 3 channels instead

Do you have any advice? Thanks!

The environment

Now that you use anaconda to manage packages, what is the point of using pip to install them?

Installing the packages with pip like this puts them into the system Python, not into the anaconda virtual env.

Left hand vs. Right hand

Hi, maybe you could help me with something or have found a solution already.

First of all, I'm using a different approach for actually detecting the hands and hand-regions in the image (that works faster than the OpenPose one), and I'm also using Keras+TF, not Pytorch, but that is irrelevant.

I've noticed that the hand-keypoint detector works much better (i.e. gives much more accurate keypoints) for one hand (the right one, I think), but works far worse for the other one. Example below:

[image]

I assume you're supposed to know beforehand (pardon the pun) which hand you're processing, and then flip the image horizontally when necessary? Or is there some other trick to it?
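
For what it's worth, a hedged sketch of the flip-and-unflip handling I would expect (assuming the hand model was trained on right hands, that peaks are (x, y) pairs in crop coordinates, and that 0 marks a missing point):

import cv2
import numpy as np

def estimate_hand(hand_estimation, crop, is_left):
    """Run the hand model; mirror left-hand crops so they look like right hands."""
    if not is_left:
        return hand_estimation(crop)
    peaks = hand_estimation(cv2.flip(crop, 1))                           # flip horizontally
    w = crop.shape[1]
    peaks[:, 0] = np.where(peaks[:, 0] == 0, 0, w - 1 - peaks[:, 0])     # un-mirror x, keep zeros
    return peaks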

Is the performance too bad?

Is the performance too bad? It takes 1-2 seconds to predict an image on a GTX 1070 8 GB. Is there a plan to optimize performance and increase prediction speed?
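
One knob some users turn to trade accuracy for speed (an assumption based on my reading of src/body.py, where inference runs once per entry in the scale list) is to search fewer scales:

# Hypothetical tweak inside src/body.py (variable name taken from memory; verify locally)
scale_search = [0.5]      # fewer scales = fewer forward passes per image, at some accuracy cost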
