
Apply ML to the skeletons from OpenPose; 9 actions; multiple people. (WARNING: I'm sorry that this is only good for a course demo, not for real-world applications !!! Those are very difficult !!!)

License: MIT License

Python 100.00%
action-recognition openpose machine-learning time-series

realtime-action-recognition's Introduction

Multi-person Real-time Action Recognition Based on Human Skeleton

Highlights: 9 actions; multiple people (<=5); real-time, multi-frame-based recognition algorithm.

Updates: On 2019-10-26, I refactored the code, added more comments, and put all settings into the config/config.yaml file, including: classes of actions, the input and output of each file, OpenPose settings, etc.

Project: This is my final project for EECS-433 Pattern Recognition at Northwestern University in March 2019. A simpler version that two teammates and I worked on is here.

Warning: Since I used 10 fps video and a 0.5 s window for training, you must also limit your video to about 10 fps (7~12 fps) if you want to test my pretrained model on your own video or web camera.

Contents:

1. Algorithm
2. Install Dependency (OpenPose)
3. Program structure
4. How to run: Inference
5. Training data
6. How to run: Training
7. Result and Performance

1. Algorithm

I collected videos of 9 types of actions: ['stand', 'walk', 'run', 'jump', 'sit', 'squat', 'kick', 'punch', 'wave']. The total video length is about 20 minutes, containing about 10000 frames recorded at 10 frames per second.

The workflow of the algorithm is:

  • Get the joints' positions by OpenPose.
  • Track each person. The Euclidean distance between the joints of two skeletons is used to match skeletons across frames (a minimal matching sketch is given after this list). See class Tracker in lib_tracker.py
  • Fill in a person's missing joints using those joints' relative positions in the previous frame. See class FeatureGenerator in lib_feature_proc.py. The following steps are also implemented there.
  • Add noise to the (x, y) joint positions to augment the data.
  • Use a window size of 0.5 s (5 frames) to extract features.
  • Extract features of (1) body velocity, (2) normalized joint positions, and (3) joint velocities.
  • Apply PCA to reduce the feature dimension to 80. Classify with a 3-layer DNN of size 50x50x50 (other classifiers can be swapped in with a one-line change). See class ClassifierOfflineTrain in lib_classifier.py
  • Mean-filter the prediction scores over 2 frames. Add the label above the person if the score is larger than 0.8. See class ClassifierOnlineTest in lib_classifier.py
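
To make the tracking step concrete, below is a minimal sketch (not the repo's actual Tracker class) of matching skeletons between two frames by the Euclidean distance between corresponding joints; the names and the max_dist threshold are illustrative assumptions:

# Minimal sketch of skeleton matching by mean joint-to-joint Euclidean distance.
import numpy as np

def match_skeletons(prev_skeletons, curr_skeletons, max_dist=0.3):
    """Greedily assign each current skeleton to the closest previous one.
    Each skeleton is an array of shape (num_joints, 2) holding (x, y)."""
    matches = {}          # index in curr_skeletons -> index in prev_skeletons
    used_prev = set()
    for i, curr in enumerate(curr_skeletons):
        candidates = []
        for j, prev in enumerate(prev_skeletons):
            if j in used_prev:
                continue
            dist = np.mean(np.linalg.norm(curr - prev, axis=1))  # mean joint distance
            candidates.append((dist, j))
        if candidates:
            dist, j = min(candidates)
            if dist < max_dist:  # only keep matches that are close enough
                matches[i] = j
                used_prev.add(j)
    return matches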

For more details about how the features are extracted, please see my report.
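
As a rough illustration of the classifier described above (not the repo's ClassifierOfflineTrain, which also does its own feature preprocessing), a PCA-to-80-dimensions step followed by a 50x50x50 MLP can be sketched with scikit-learn:

# Hedged sketch: PCA down to 80 dimensions, then a 3-layer 50x50x50 MLP.
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def build_classifier(n_components=80):
    return make_pipeline(
        PCA(n_components=n_components),
        MLPClassifier(hidden_layer_sizes=(50, 50, 50), max_iter=500),
    )

# clf = build_classifier()
# clf.fit(features_X, labels_Y)              # offline training
# scores = clf.predict_proba(new_features)   # scores for the online mean filter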

2. Install Dependency (OpenPose)

We need Python >= 3.6.

2.1. Download tf-pose-estimation

This project uses an OpenPose program developed by ildoonet. The source project has been deleted, so I have forked it here: tf-pose-estimation.

Please download it:

export MyRoot=$PWD
cd src/githubs  
git clone https://github.com/felixchenfy/ildoonet-tf-pose-estimation
mv ildoonet-tf-pose-estimation tf-pose-estimation

2.2. Download pretrained models

The mobilenet_thin models are already included in the project. No need to download. See folder:

src/githubs/tf-pose-estimation/models/graph$ ls
cmu  mobilenet_thin  mobilenet_v2_large  mobilenet_v2_small

If you want to use the original OpenPose model which is named "cmu" here, you need to download it:

cd $MyRoot/src/githubs/tf-pose-estimation/models/graph/cmu  
bash download.sh  

2.3. Install libraries

Basically, you have to follow the tutorial of the tf-pose-estimation project. If you've already set up the environment for that project, then it's almost the same environment needed to run my project.

Please follow its tutorial here. I've copied the commands I ran below:

conda create -n tf tensorflow-gpu
conda activate tf

cd $MyRoot/src/githubs/tf-pose-estimation
pip3 install -r requirements.txt
pip3 install jupyter tqdm

# Install tensorflow.
# You may need to take a few tries and select the version that is compatible with your cuDNN. If the version mismatches, you might get this error: "Error : Failed to get convolution algorithm."
pip3 install tensorflow-gpu==1.13.1

# Compile c++ library as described [here](https://github.com/felixchenfy/ildoonet-tf-pose-estimation#install-1):
sudo apt install swig
pip3 install "git+https://github.com/philferriere/cocoapi.git#egg=pycocotools&subdirectory=PythonAPI"
cd $MyRoot/src/githubs/tf-pose-estimation/tf_pose/pafprocess
swig -python -c++ pafprocess.i && python3 setup.py build_ext --inplace

Then install a few small libraries used by this project:

cd $MyRoot
pip3 install -r requirements.txt

2.4. Verify installation

Make sure you can successfully run its demo examples:

cd $MyRoot/src/githubs/tf-pose-estimation
python run.py --model=mobilenet_thin --resize=432x368 --image=./images/p1.jpg

If you encounter an error, try searching Google or tf-pose-estimation's issues. The problem is probably due to that project's dependencies.

3. Program structure

Diagram

Troubleshooting:

  • How to change features?

    In utils/lib_feature_proc.py, in the class FeatureGenerator, change the function def add_cur_skeleton!

    The function reads in a raw skeleton and outputs the feature generated from this raw skeleton as well as previous skeletons. The feature will then be saved to features_X.csv by the script s3_preprocess_features.py for the next training step.

  • How to include joints of the head?

    You need to change the aforementioned add_cur_skeleton function.

    I suggest writing a new function to extract the head features and then appending them to the returned variable (feature) of add_cur_skeleton.

    Please read def retrain_only_body_joints in utils/lib_feature_proc.py if you want to add the head joints.

  • How to change the classifier to RNN?

    There are two major changes to do:

    First, change the aforementioned add_cur_skeleton. Instead of manually extracting time-series features as the current script does, you may simply stack the input skeleton with previous skeletons and then output it (a minimal sketch of this stacking is given after this list).

    Second, change the def __init__ and def predict function of class ClassifierOfflineTrain in utils/lib_classifier.py to add an RNN model.
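
As mentioned in the list above, here is a minimal, hypothetical sketch of the first change: an add_cur_skeleton that simply stacks the last window_size skeletons so an RNN can consume the raw time series (the class name and shapes are illustrative, not the repo's API):

from collections import deque
import numpy as np

class StackedSkeletonFeature(object):
    """Hypothetical replacement for FeatureGenerator.add_cur_skeleton."""

    def __init__(self, window_size=5):
        self._window_size = window_size
        self._buffer = deque(maxlen=window_size)

    def add_cur_skeleton(self, skeleton):
        """skeleton: 1D array of joint coordinates of the current frame.
        Returns (is_ready, feature); feature has shape (window_size, num_values)
        once enough frames have been accumulated."""
        self._buffer.append(np.asarray(skeleton, dtype=float))
        if len(self._buffer) < self._window_size:
            return False, None
        return True, np.stack(self._buffer, axis=0)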

Main scripts

The 5 main scripts are under src/. They are named in the order of execution:

src/s1_get_skeletons_from_training_imgs.py    
src/s2_put_skeleton_txts_to_a_single_txt.py
src/s3_preprocess_features.py
src/s4_train.py 
src/s5_test.py

The input and output of these files as well as some parameters are defined in the configuration file config/config.yaml. I paste part of it below just to provide an intuition:

classes: ['stand', 'walk', 'run', 'jump', 'sit', 'squat', 'kick', 'punch', 'wave']

image_filename_format: "{:05d}.jpg"
skeleton_filename_format: "{:05d}.txt"

features:
  window_size: 5 # Number of adjacent frames for extracting features. 

s1_get_skeletons_from_training_imgs.py:
  openpose:
    model: cmu # cmu or mobilenet_thin. "cmu" is more accurate but slower.
    img_size: 656x368 #  656x368, or 432x368, 336x288. Bigger is more accurate.
  input:
    images_description_txt: data/source_images3/valid_images.txt
    images_folder: data/source_images3/
  output:
    images_info_txt: data_proc/raw_skeletons/images_info.txt # This file is not used.
    detected_skeletons_folder: &skels_folder data_proc/raw_skeletons/skeleton_res/
    viz_imgs_folders: data_proc/raw_skeletons/image_viz/

s2_put_skeleton_txts_to_a_single_txt.py:
  input:
    # A folder of skeleton txts. Each txt corresponds to one image.
    detected_skeletons_folder: *skels_folder
  output:
    # One txt containing all valid skeletons.
    all_skeletons_txt: &skels_txt data_proc/raw_skeletons/skeletons_info.txt

s3_preprocess_features.py:
  input: 
    all_skeletons_txt: *skels_txt
  output:
    processed_features: &features_x data_proc/features_X.csv
    processed_features_labels: &features_y data_proc/features_Y.csv

s4_train.py:
  input:
    processed_features: *features_x
    processed_features_labels: *features_y
  output:
    model_path: model/trained_classifier.pickle
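
The &name / *name pairs in the excerpt are standard YAML anchors and aliases, so the scripts share the same paths automatically. A minimal sketch of loading this file with PyYAML, assuming the layout shown above:

import yaml

with open("config/config.yaml") as f:
    cfg = yaml.safe_load(f)   # anchors (&) and aliases (*) are resolved here

classes = cfg["classes"]
s1_out = cfg["s1_get_skeletons_from_training_imgs.py"]["output"]["detected_skeletons_folder"]
s2_in = cfg["s2_put_skeleton_txts_to_a_single_txt.py"]["input"]["detected_skeletons_folder"]
assert s1_out == s2_in   # both resolve to data_proc/raw_skeletons/skeleton_res/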

For how to run the main scripts, please see Section 4. How to run: Inference and Section 6. How to run: Training.

4. How to run: Inference

Introduction

The script src/s5_test.py is for doing real-time action recognition.

The classes are set in config/config.yaml by the key classes.

The supported inputs include a video file, a folder of images, and a web camera, selected by the command-line arguments --data_type and --data_path.

The trained model is set by --model_path, e.g., model/trained_classifier.pickle.

The output is set by --output_folder, e.g.: output/.

The test data (a video and a folder of images) are already included under the data_test/ folder.

An example result of the input video "exercise.avi" is:

output/exercise/
├── skeletons
│   ├── 00000.txt
│   ├── 00001.txt
│   └── ...
└── video.avi

Also, the result will be displayed by cv2.imshow().

Example commands are given below:

Test on video file

python src/s5_test.py \
    --model_path model/trained_classifier.pickle \
    --data_type video \
    --data_path data_test/exercise.avi \
    --output_folder output

Test on a folder of images

python src/s5_test.py \
    --model_path model/trained_classifier.pickle \
    --data_type folder \
    --data_path data_test/apple/ \
    --output_folder output

Test on web camera

python src/s5_test.py \
    --model_path model/trained_classifier.pickle \
    --data_type webcam \
    --data_path 0 \
    --output_folder output

5. Training data

Download my data

Follow the instructions in data/download_link.md to download the data. Or, you can create your own. The data and labelling format are described below.

Data format

Each data subfolder (e.g. data/source_images3/jump_03-02-12-34-01-795/) contains images named as 00001.jpg, 00002.jpg, etc.
The naming format of each image is defined in config/config.yaml by the line: image_filename_format: "{:05d}.jpg".
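
This is a standard Python format string; for example, frame index 1 maps to "00001.jpg":

>>> "{:05d}.jpg".format(1)
'00001.jpg'
>>> "{:05d}.txt".format(123)
'00123.txt'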

The images to be used as training data and their label are configured by this txt file: data/source_images3/valid_images.txt.
A snapshot of this txt file is shown below:

jump_03-02-12-34-01-795
52 59
72 79

kick_03-02-12-36-05-185
54 62

In each paragraph,
the 1st line is the data folder name, which should start with "${class_name}_". The 2nd and following lines specify the starting index and ending index of the video clips that correspond to that class.

Let's take the 1st paragraph of the above snapshot as an example: jump is the class, and the frames 52~59 & 72~79 of the video are used for training.
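
For illustration, a minimal hypothetical parser for this format could look like the following (this helper is not part of the repo):

# Hypothetical helper: parse valid_images.txt into (class, folder, start, end) clips.
def parse_valid_images(path):
    clips = []
    folder = None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                folder = None                      # a blank line ends the paragraph
            elif folder is None:
                folder = line                      # e.g. "jump_03-02-12-34-01-795"
            else:
                start, end = map(int, line.split())
                class_name = folder.split("_")[0]  # text before the first "_"
                clips.append((class_name, folder, start, end))
    return clips

# parse_valid_images("data/source_images3/valid_images.txt")
# -> [('jump', 'jump_03-02-12-34-01-795', 52, 59), ('jump', 'jump_03-02-12-34-01-795', 72, 79), ...]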

Classes

The classes are set in config/config.yaml under the key classes. No matter how many classes you put in the training data (set by the folder names), only the ones that match the classes in config.yaml are used for training and inference.

6. How to run: Training

First, you may read the sections above on program structure and training data to know the training data format and the input and output of each script.

Then, follow these steps to do the training:

  • Collect your own data and label it, or use my data. Here is a tool to record images from a web camera.
  • If you are using your own data, change the values of classes, images_description_txt, and images_folder inside config/config.yaml.
  • Depending on your needs, you may change other parameters in config/config.yaml.
  • Finally, run the following scripts one by one:
    python src/s1_get_skeletons_from_training_imgs.py
    python src/s2_put_skeleton_txts_to_a_single_txt.py 
    python src/s3_preprocess_features.py
    python src/s4_train.py 

By default, the intermediate data are saved to data_proc/, and the model is saved to model/trained_classifier.pickle.
After training is done, you can run the inference script src/s5_test.py as described in Section 4. How to run: Inference.

7. Result and Performance

Unfortunately, this project only works well on me, because I only used videos of myself for training.

The performance is bad for (1) people who have a different body shape and (2) people who are far from the camera. How to improve? I guess the first thing to do is to collect a larger training set from different people, and then improve the data augmentation and feature selection.

Besides, my simple tracking algorithm only works for a small number of people (maybe 5).

Due to the not-so-good performance of the action recognition, I guess you can only use this project for a course demo, not for any commercial applications ... T.T

realtime-action-recognition's People

Contributors

felixchenfy


realtime-action-recognition's Issues

train and track 5 joints in head & lstm approach

Hi, awesome work! I want to train new data including head joints as well. Could you let me know what changes need to be made in the code?
A couple more questions: does this also track the sequence of frames with body movements/actions in terms of speed? Have you thought about using an RNN/LSTM? Thanks a lot :)

What are peaks, heatMat_up, pafMat_up?

peaks, heatMat_up, pafMat_up = self.persistent_sess.run(
            [self.tensor_peaks, self.tensor_heatMat_up, self.tensor_pafMat_up], feed_dict={
                self.tensor_image: [img], self.upsample_size: upsample_size
            })

Could somebody explain what actually are peaks, heatMat_up and pafMat_up?

Is this work supported by a paper?

Thanks for sharing this work. For research purposes, is this work supported by a paper? If it is, please provide it.


ValueError: Input contains NaN, infinity or a value too large for dtype('float64').

Hi, when I try to train (s4_train.py) with my own training data (including yours) I get this error:

Traceback (most recent call last):
  File "src/s4_train.py", line 112, in <module>
    main()
  File "src/s4_train.py", line 99, in main
    model.train(tr_X, tr_Y)
  File "/home/suleyman/Desktop/COMPUTER VISION VIDEO CLASSIFICATION/Action-Recognition/src/../utils/lib_classifier.py", line 84, in train
    self.pca.fit(X)
  File "/home/suleyman/anaconda3/envs/comp_vision/lib/python3.7/site-packages/sklearn/decomposition/pca.py", line 341, in fit
    self._fit(X)
  File "/home/suleyman/anaconda3/envs/comp_vision/lib/python3.7/site-packages/sklearn/decomposition/pca.py", line 382, in _fit
    copy=self.copy)
  File "/home/suleyman/anaconda3/envs/comp_vision/lib/python3.7/site-packages/sklearn/utils/validation.py", line 542, in check_array
    allow_nan=force_all_finite == 'allow-nan')
  File "/home/suleyman/anaconda3/envs/comp_vision/lib/python3.7/site-packages/sklearn/utils/validation.py", line 56, in _assert_all_finite
    raise ValueError(msg_err.format(type_err, X.dtype))
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').

Could you please help me with it? Thank you very much

Why can't it recognize all the people?

[screenshot]

According to the picture, people P5 and P6 seem to be recognized clearly, but why are their actions not classified? (I've trained the class "UseComputer".)

Also, I find that it doesn't perform well when there are more than 4 people in the same frame. How can I modify that?


Keypoints to check for New training ?

Hi, thanks for this huge work!!

My intent is to test your solution on a Jetson Nano (inference only).
Your pre-trained model runs at 3.4 FPS with real-time webcam video input (224x224, mobilenet_v2) on my device = great.

I've prepared my own training set of images based on video, but the video rate is 25 FPS (not 10 FPS).
As you used a 5-frame sliding window, what are the key points to check in the code:
- For training?
- For inference on the Nano?

Thanks

No such file or directory: 'skeleton_data/skeletons5_info.txt'

When running Train.ipynb

the cell: X0, Y0, clip_indices, classes = load_my_data('skeleton_data/skeletons5_info.txt')

it has an error
"FileNotFoundErrorTraceback (most recent call last)
in ()
----> 1 X0, Y0, clip_indices, classes = load_my_data('skeleton_data/skeletons5_info.txt')

in load_my_data(filepath)
23
24 def load_my_data(filepath):
---> 25 with open(filepath, 'r') as f:
26 dataset = simplejson.load(f)
27

FileNotFoundError: [Errno 2] No such file or directory: 'skeleton_data/skeletons5_info.txt'
"
Where can I get the "skeletons5_info.txt"?
I have downloaded "source_images3" but didn't see "skeletons5_info.txt" in it.


some issues about data

Hello, thank you for sharing. I have looked at your data and have some questions. You process the video data frame by frame, but there are some frames that are not part of the action; I don't understand how you deal with that. Also, can I replace the collected pictures with data that has a time-series relationship? Thank you @felixchenfy

ValueError: n_components=100 must be between 0 and min(n_samples, n_features)=16 with svd_solver='full'

I am trying to train a new pose. When I run s4_train.py, I get the error message below:
ValueError: n_components=100 must be between 0 and min(n_samples, n_features)=16 with svd_solver='full'

About the amount of data:
After train-test split: Size of training data X: (16, 314) Number of training samples: 16 Number of testing samples: 8

Where can I change the number of n_components? Or is there another way to solve it? Please reply.


Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.

Hello,
I have installed tensorflow-gpu version 2.1.0, CUDA version 10.1, and cuDNN v7.6.4. I try to run a Keras program (python3 examples/mnist_cnn.py) and receive the error titled above. I've read that this is a compatibility issue, but I can't seem to find the right combination of TensorFlow, CUDA, and cuDNN to make this example work. I would appreciate any kind of help.

Certain actions are sometimes recognized as walk

Hello, I just retrained using your existing dataset with only sit, stand, and walk. When I do real-time recognition, standing is recognized as walking even though it's obvious the person is not moving anywhere; sitting is also sometimes recognized as walking. I don't know why; it's clearly visible from the joints that the person is sitting, so why is it still recognized as walking? Can you give me some advice to improve this?

Train more than 9 actions

Hello and thank you for sharing your work with us! I've successfully managed to run the training with your dataset and the webcam demo, but I've encountered an error after training my own dataset with 19 different actions when running run_detector.py via webcam:

Realtime-Action-Recognition/src/mylib/action_classifier.py", line 85, in smooth_scores
score_sums += score
ValueError: operands could not be broadcast together with shapes (9,) (19,) (9,)

I've updated the action_labels in run_detector.py, but couldn't find any other declaration code for the number of action classes. I guess this might be a matrix dimension related problem. Could you help me with that?

ValueError: non-broadcastable output operand with shape (1,) doesn't match the broadcast shape (2,)

I'm trying to train a new pose. I resized my video to 640x480, but it shows the error message below when I run s5_test.py:

ValueError: non-broadcastable output operand with shape (1,) doesn't match the broadcast shape (2,)

[screenshot]

Does anyone else have this problem? What does this mean? How can I solve it?


Number of input neurons in MLPClassifier

Firstly, thanks for updating the repo.
I'm a beginner in machine learning. My question is: what is the number of input neurons in the MLPClassifier? To my knowledge, the input of an MLP is 'm' features; what is the feature here, the 13 (x, y) joint coordinates plus temporal features over 5 frames?

Question: google.protobuf.message.DecodeError: Unexpected end-group tag.

Thanks for sharing such interesting work.
tf-pose-estimation definitely works for me.
But when I get to the training step on the dataset and run "python src/s1_get_skeletons_from_training_imgs.py", it reports the error above.
Is the version of TensorFlow wrong? (By the way, my version is 1.12.0 due to the limitation of my GPU.)
I have looked for a lot of answers but did not solve it; could you please help?
Thank you a lot!

train error:s4_train

the error is the following:
File "src/s4_train.py", line 129, in
main()
File "src/s4_train.py", line 120, in main
evaluate_model(model, CLASSES, tr_X, tr_Y, te_X, te_Y)
File "src/s4_train.py", line 90, in evaluate_model
te_Y, te_Y_predict, classes, normalize=False, size=(12, 8))
File "/home/shao/tf-pose_train/src/../utils/lib_plot.py", line 46, in plot_confusion_matrix
ax.figure.colorbar(im, ax=ax)
NameError: name 'im' is not defined

More flexible improvement #Rnn #Flexible code

Hi! Awesome work updating the repo... But it would help if you could point out where and how to include all the head joints (since you removed them) and how to change the model architecture so I could experiment with an RNN. Thanks a lot.

(Appreciate if you keep this open so we can work on it)

No module named 'utils.lib_images_io'

$ python src/s5_test.py --t webcam
Traceback (most recent call last):
File "src/s5_test.py", line 54, in
import utils.lib_images_io as lib_images_io
ModuleNotFoundError: No module named 'utils.lib_images_io'

Please help me !

export MyRoot=$PWD

Hi, I am a beginner in programming. Can I know how to run this line of code?

The training image link could not be opened.

Hello, the download link of the training images you provided cannot be opened. Could you please provide another download link? I would like to see how your training images are labelled. Can training only use pictures? Can I use video directly, or do I have to split the video into frames?

how to run the real-time demo from your web camera?

Hey Chen!

First of all, thanks for your contribution to this project. I have the following questions:

  1. In the testing phase, how do I call the camera?
  2. For the stage of testing on a webcam, is there any code I can refer to?

Looking forward to your reply!
Thanks again!

yours,
light

model deploy

Please, how can I deploy this classification model on a mobile phone? Can you give me some advice? Thank you!

about openpose model

How can I choose the OpenPose model during training?
The default is mobilenet_thin.
I want to compare accuracy using different models.

Limitation to only one class?

Hi! Is it possible to use only one class? For example, I have "normal" situations, and I want to detect only one type of event (fall detection).
Best regards

Training Video with 2 person

Hi, thanks for your work!
My problem is that I have a video with only 2 people doing different actions.
How does valid_images.txt work here?
If I set the frame range and label it as the class "swinging", will it capture the two people's actions separately and classify both as "swinging"?
And by the way, where can I change the prediction confidence threshold?
Thanks a lot!!

Training Error

Hello,
I'm training this model according to your instructions.
However, when I run python3 s1_get_skeletons_from_training_imgs.py, no output files are written to the path src/../data_proc/raw_skeletons/skeleton_res/.
In fact, the data_proc folder is not even created.

What is the problem?

Multi-GPU

Hi,
Thank you for your interesting work.

Any tips on running the TfPoseEstimator inference on multiple GPUs?

Video format

I tested the webcam successfully, and then I wanted to test on a video. However, I get the log message
"[2019-12-18 14:52:29,595] [TfPoseEstimator] [debug] information + original shape = 1200x720"
and then the program ends at with open(model_path, 'rb') as f: with FileNotFoundError: [Errno 2] No such file or directory: 'model/trained_classifier.pickle'.
My video resolution is 1200x720. Does your code have any requirements for video resolution?

Problem when running run.py Issue

I keep getting this issue whenever I run this script:

lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: libnvinfer.so.5: cannot open shared object file: No such file or directory

Program running speed

At first, the program ran at about 10-15 fps. However, I don't know what I did, and now it runs at 5-11 fps (I just reinstalled numpy). My GPU is an RTX 2080 Ti and I use cmu at 656x368.

Tensorflow issue

Hi,
Has anyone ever faced this issue while trying to run? Kindly let me know if anyone knows how to fix this. Thanks a lot!

[screenshot]

google.protobuf.message.DecodeError: Error parsing message

When running the program s5_test.py, an error is reported:
[2019-12-08 22:12:45,904] [TfPoseEstimator] [INFO] loading graph from /home/ps/anaconda3/envs/hagongda_py3/lib/python3.7/site-packages/tf_pose-0.1.1-py3.7-linux-x86_64.egg/tf_pose_data/graph/cmu/graph_opt.pb (default size=432x368)
Traceback (most recent call last):
  File "s5_test.py", line 315, in <module>
    skeleton_detector = SkeletonDetector(OPENPOSE_MODEL, OPENPOSE_IMG_SIZE)
  File "/home/ps/NewDisk1/HIT_workspace/lxc/actionRecognition/Realtime-Action-Recognition_v2/Realtime-Action-Recognition-master/src/../utils/lib_openpose.py", line 83, in __init__
    tf_config=self._config)
  File "/home/ps/anaconda3/envs/hagongda_py3/lib/python3.7/site-packages/tf_pose-0.1.1-py3.7-linux-x86_64.egg/tf_pose/estimator.py", line 313, in __init__
    graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message

S5_test data path absolute path?

hi!
I'd like to use an absolute path when running s5_test.py.
The --data_path argument doesn't work with an absolute path; where can I change that?
Thanks a lot
