hongsukchoi / pose2mesh_release

Official Pytorch implementation of "Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose", ECCV 2020

License: MIT License

Languages: Python 99.91%, Shell 0.09%
Topics: single-view, graph-convolutional-network, 3d-mesh, 2d-human-pose, 3d-human-pose, 3d-human-mesh, rgb-image, eccv2020, eccv

pose2mesh_release's Introduction

Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose

(Figure: pose-to-mesh qualitative results)

News

  • Update 21.04.27: Updated the PoseFix code and the AMASS dataloader. Lowered PA-MPJPE and MPVPE on 3DPW!
  • Update 21.04.09: Updated the 3DPW evaluation code. Added temporal smoothing code and PA-MPVPE calculation code. They are commented out for faster evaluation, but you can uncomment them in the evaluate function of ${ROOT}/data/PW3D/dataset.py.
  • Update 21.04.09: Added a demo on multiple people, and made the rendered mesh overlay the input image.
  • Update 20.11.016: Increased accuracy on 3DPW using DarkPose 2D pose outputs.

Introduction

This repository is the official PyTorch implementation of Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose (ECCV 2020). Below is the overall pipeline of Pose2Mesh. (Figure: overall pipeline)

Install Guidelines

  • We recommend using an Anaconda virtual environment. Install Python >= 3.7.2 and PyTorch >= 1.2 according to your GPU driver, then run sh requirements.sh.

Quick Demo

  • Download the pre-trained Pose2Mesh according to this.
  • Prepare the SMPL and MANO layers according to this.
  • Prepare a pose input, for instance, as input.npy. input.npy should contain the coordinates of 2D human joints, which follow the topology of the joint sets defined here. The joint orders can be found in each ${ROOT}/data/*/dataset.py. A minimal sketch for creating such a file is shown below.
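
A minimal sketch, assuming a 17-joint set (e.g. a Human3.6M- or COCO-style set); the number of rows and the row order must match the joint set you pass via --joint_set:

    import numpy as np

    # Sketch: save 2D joints as an (N, 2) array of pixel coordinates.
    # N and the row order must match the chosen joint set
    # (see the joint orders in ${ROOT}/data/*/dataset.py).
    num_joints = 17                                   # example value only
    joints_2d = np.zeros((num_joints, 2), dtype=np.float32)
    joints_2d[0] = [512.0, 240.0]                     # (x, y) of joint 0, and so on

    np.save('input.npy', joints_2d)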

Demo on a Single Person

  • Run python demo/run.py --gpu 0 --input_pose demo/h36m_joint_input.npy --joint_set human36.
  • You can replace demo/h36m_joint_input.npy and human36 with your input numpy file and one of {human36,coco,smpl,mano}.
  • Add --input_img {img_path} to the command if you want the rendered mesh overlaid on the input image.
  • The outputs demo_pose2d.png, demo_mesh.png, and demo_mesh_.obj will be saved in ${ROOT}/demo/result/.

Demo on Multiple People

  • Download the demo inputs from here and place them under ${ROOT}/demo/.
  • Run python demo/run.py --gpu 0.
  • Outputs on a sampled image from the CrowdPose dataset will be saved in ${ROOT}/demo/result/.
  • You can change an input image and some details in lines 264~278 of ${ROOT}/demo/run.py.

Results

Here I report the performance of Pose2Mesh.

💪 Update: We increased the performance on 3DPW using GT meshes obtained from NeuralAnnot on COCO and AMASS. The annotations from NeuralAnnot are yet to be released.
💪 Update: The performance on 3DPW has increased by using DarkPose 2D detection, which improves on HRNet.

(Table: benchmark results)

Below are the results when the input is ground-truth 2D human poses. For the Human3.6M benchmark, Pose2Mesh is trained on Human3.6M. For the 3DPW benchmark, Pose2Mesh is trained on Human3.6M and COCO.

            MPJPE      PA-MPJPE
Human3.6M   51.28 mm   35.61 mm
3DPW        63.10 mm   35.37 mm

We provide qualitative results on SURREAL to show that Pose2Mesh can recover 3D shape to some degree. Please refer to the paper for more discussion.

(Figure: SURREAL qualitative results)

Directory

Root

${ROOT} is organized as described below.

${ROOT} 
|-- data
|-- demo
|-- lib
|-- experiment
|-- main
|-- manopth
|-- smplpytorch
  • data contains data loading code and soft links to the image and annotation directories.
  • demo contains demo code.
  • lib contains kernel code for Pose2Mesh.
  • main contains high-level code for training or testing the network.
  • experiment contains the outputs of the system, which include train logs, trained model weights, and visualized outputs.

Data

The data directory structure should follow the below hierarchy.

${ROOT}  
|-- data  
|   |-- Human36M  
|   |   |-- images  
|   |   |-- annotations   
|   |   |-- J_regressor_h36m_correct.npy
|   |   |-- absnet_output_on_testset.json 
|   |-- MuCo  
|   |   |-- data  
|   |   |   |-- augmented_set  
|   |   |   |-- unaugmented_set  
|   |   |   |-- MuCo-3DHP.json
|   |   |   |-- smpl_param.json
|   |-- COCO  
|   |   |-- images  
|   |   |   |-- train2017  
|   |   |   |-- val2017  
|   |   |-- annotations  
|   |   |-- J_regressor_coco.npy
|   |   |-- hrnet_output_on_valset.json
|   |-- PW3D 
|   |   |-- data
|   |   |   |-- 3DPW_latest_train.json
|   |   |   |-- 3DPW_latest_validation.json
|   |   |   |-- darkpose_3dpw_testset_output.json
|   |   |   |-- darkpose_3dpw_validationset_output.json
|   |   |-- imageFiles
|   |-- AMASS
|   |   |-- data
|   |   |   |-- cmu
|   |-- SURREAL
|   |   |-- data
|   |   |   |-- train.json
|   |   |   |-- val.json
|   |   |   |-- hrnet_output_on_testset.json
|   |   |   |-- simple_output_on_testset.json
|   |   |-- images
|   |   |   |-- train
|   |   |   |-- test
|   |   |   |-- val
|   |-- FreiHAND
|   |   |-- data
|   |   |   |-- training
|   |   |   |-- evaluation
|   |   |   |-- freihand_train_coco.json
|   |   |   |-- freihand_train_data.json
|   |   |   |-- freihand_eval_coco.json
|   |   |   |-- freihand_eval_data.json
|   |   |   |-- hrnet_output_on_testset.json
|   |   |   |-- simple_output_on_testset.json

If you hit a 'download limit' problem when trying to download the datasets from the Google Drive links, please try this trick.

  • Go to the shared folder that contains the files you want to copy to your drive.
  • Select all the files you want to copy.
  • In the upper right corner, click on the three vertical dots and select “make a copy”.
  • The files are then copied to your personal Google Drive account, and you can download them from there.

Pytorch SMPL and MANO layer

  • For the SMPL layer, I used smplpytorch. The repo is already included in ${ROOT}/smplpytorch. Download basicModel_f_lbs_10_207_0_v1.0.0.pkl, basicModel_m_lbs_10_207_0_v1.0.0.pkl, and basicModel_neutral_lbs_10_207_0_v1.0.0.pkl from here (female & male) and here (neutral), and place them in ${ROOT}/smplpytorch/smplpytorch/native/models.
  • For the MANO layer, I used manopth. The repo is already included in ${ROOT}/manopth. Download MANO_RIGHT.pkl from here and place it in ${ROOT}/manopth/mano/models. A quick sanity check for the SMPL layer is sketched below.
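
The snippet below is only a sketch based on the upstream smplpytorch demo, not the exact call Pose2Mesh makes internally; the model_root path assumes the layout above and that you run it from ${ROOT}:

    import torch
    from smplpytorch.pytorch.smpl_layer import SMPL_Layer

    # Sketch: load the neutral SMPL layer from the models directory prepared above.
    smpl_layer = SMPL_Layer(
        center_idx=0,
        gender='neutral',
        model_root='smplpytorch/smplpytorch/native/models')

    # Zero pose/shape parameters, just to check that the layer runs and
    # produces the 6890 SMPL vertices and the SMPL joints.
    pose_params = torch.zeros(1, 72)
    shape_params = torch.zeros(1, 10)
    verts, joints = smpl_layer(pose_params, th_betas=shape_params)
    print(verts.shape, joints.shape)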

Experiment

The experiment directory will be created as below.

${ROOT}  
|-- experiment  
|   |-- exp_*  
|   |   |-- checkpoint  
|   |   |-- graph 
|   |   |-- vis 
  • experiment contains train/test results of Pose2Mesh on various benchmark datasets. We recommend creating the folder as a soft link to a directory with large storage capacity.

  • exp_* is created for each train/test command. The wildcard refers to the time when the train/test started. The default timezone is UTC+9, but you can set it to your local time.

  • checkpoint contains the model checkpoints for each epoch.

  • graph contains visualized train logs of error and loss.

  • vis contains *.obj files of meshes and images with 2D human poses or human meshes.

Pretrained model weights

Download pretrained model weights from here to a corresponding directory.

${ROOT}  
|-- experiment  
|   |-- posenet_human36J_train_human36 
|   |-- posenet_cocoJ_train_human36_coco_muco
|   |-- posenet_smplJ_train_surreal
|   |-- posenet_manoJ_train_freihand
|   |-- pose2mesh_human36J_train_human36
|   |-- pose2mesh_cocoJ_train_human36_coco_muco
|   |-- pose2mesh_smplJ_train_surreal
|   |-- pose2mesh_manoJ_train_freihand
|   |-- posenet_human36J_gt_train_human36
|   |-- posenet_cocoJ_gt_train_human36_coco
|   |-- pose2mesh_human36J_gt_train_human36
|   |-- pose2mesh_cocoJ_gt_train_human36_coco

Running Pose2Mesh

(Figure: joint set topology)

Start

  • Pose2Mesh uses the Human3.6M, COCO, SMPL, and MANO joint sets for the Human3.6M, 3DPW, SURREAL, and FreiHAND benchmarks, respectively. For the COCO joint set, we manually add 'Pelvis' and 'Neck' joints by taking the middle point of 'L_Hip' and 'R_Hip', and of 'L_Shoulder' and 'R_Shoulder', respectively (see the sketch after this list).
  • In lib/core/config.py, you can change settings of the system, including the train/test datasets to use, a pre-defined joint set, a pre-trained PoseNet, a learning schedule, GT usage, and so on.
  • Note that the first dataset in DATASET.{train/test}_list should call the build_coarse_graphs function for the graph convolution setting. Refer to the last line of the __init__ function in ${ROOT}/data/Human36M/dataset.py.
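
For illustration, the two extra COCO joints could be computed as below. This is only a sketch: the joint indices assume the standard 17-joint COCO order (5/6 = L/R shoulder, 11/12 = L/R hip), so verify the actual order and the position of the appended joints against ${ROOT}/data/COCO/dataset.py:

    import numpy as np

    # coco_joints: (17, 2) array of 2D joints in the standard COCO order (assumed).
    coco_joints = np.zeros((17, 2), dtype=np.float32)

    pelvis = 0.5 * (coco_joints[11] + coco_joints[12])  # midpoint of L_Hip and R_Hip
    neck = 0.5 * (coco_joints[5] + coco_joints[6])      # midpoint of L_Shoulder and R_Shoulder

    # Append the two extra joints to form the extended COCO joint set.
    extended = np.concatenate([coco_joints, pelvis[None], neck[None]], axis=0)
    print(extended.shape)  # (19, 2)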

Train

Select a config file in ${ROOT}/asset/yaml/ and train. You can change the train set and the pretrained PoseNet with your own *.yml file.

1. Pre-train PoseNet

To train from scratch, you should pre-train PoseNet first.

Run

python main/train.py --gpu 0,1,2,3 --cfg ./asset/yaml/posenet_{input joint set}_train_{dataset list}.yml

2. Train Pose2Mesh

Copy best.pth.tar from ${ROOT}/experiment/exp_*/checkpoint/ to ${ROOT}/experiment/posenet_{input joint set}_train_{dataset list}/, or download the pretrained weights following this.

Run

python main/train.py --gpu 0,1,2,3 --cfg ./asset/yaml/pose2mesh_{input joint set}_train_{dataset list}.yml

Test

Select a config file in ${ROOT}/asset/yaml/ and test. You can change the pretrained model weights. To save sampled outputs to .obj files, set the TEST.vis value to True in the config file.

Run

python main/test.py --gpu 0,1,2,3 --cfg ./asset/yaml/{model name}_{input joint set}_test_{dataset name}.yml

Reference

@InProceedings{Choi_2020_ECCV_Pose2Mesh,  
author = {Choi, Hongsuk and Moon, Gyeongsik and Lee, Kyoung Mu},  
title = {Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose},  
booktitle = {European Conference on Computer Vision (ECCV)},  
year = {2020}  
}  

Related Projects

I2L-MeshNet_RELEASE
3DCrowdNet_RELEASE
TCMR_RELEASE
Hand4Whole_RELEASE
HandOccNet
NeuralAnnot_RELEASE

pose2mesh_release's People

Contributors

hongsukchoi, mjpvz


pose2mesh_release's Issues

Joints order of the parsed data

Hello Choi,

Thanks for the wonderful contribution. I was wondering what the joint order of the parsed data (17, 2) is. You mentioned in another issue that you used Gyeongsik Moon's work for parsing the data, but I'm not sure which project exactly.

Thanks in advance for your response.

Handling missing keypoints

Hi! And thank you for the awesome work!
How would you recommend dealing with missing keypoints? Should the corresponding rows in input.npy be filled with NaNs or 0s?

Surreal preprocessing

Hi @hongsukchoi, thanks for the great work!

I was wondering how you obtained the global orientation (pose[:3]) for your SURREAL parsed data. The pose in your parsed file is different from the one in the original XXX_info.mat files provided by SURREAL, and I was unable to derive the same values from the raw data, i.e., using the camera extrinsics and intrinsics provided by SURREAL.

How to change mesh gender?

I am running demo/run.py with a human36 joint set. I am trying to create a mesh for a specific gender.

In SMPL.get_layer, which is invoked when creating mesh_model in demo/run.py, three different layers are created, one for each gender. However, it then appears that the neutral layer, rather than these gendered layers, is used for the faces and for the SMPL joint regressor. How can I output a female/male mesh?

How may I get MANO parameters from pose2mesh model?

In the demo, it outputs the mesh and the rendered joints on the 2D image for MANO. Is there a method to output betas & poses instead of the mesh? I have labeled hand joints, but I need to extract the MANO parameters (betas, poses).

pkl file

Hi, the demo can generate a .obj file, but is it possible to also generate a .pkl file in the SMPL format?

Question regarding augmentation

Hi,

I noticed something peculiar in the augmentation part, so I'd be grateful if you could clarify this for me. Basically, in your dataset class (for example in Human36M/dataset.py), when you do rotation augmentation for the image (variable rot in degrees in your code), you're not doing the same to your mesh data (mesh_cam on line 352). Instead, only the 2D and 3D joints (joint_img and smpl_joint_cam) are augmented as follows:

"joint_img, trans = j2d_processing(joint_img.copy(), (cfg.MODEL.input_shape[1], cfg.MODEL.input_shape[0]), bbox, rot, 0, None)" on line 368
"joint_cam = j3d_processing(joint_cam, rot, flip, self.flip_pairs)" on line 373

Shouldn't the augmentation also be applied to mesh_cam?

Thanks

Body measurements ?

Hello, thank you for your amazing implementation
How can we obtain body measurements like waist, neck-to-hand, chest, etc.?
Thank you

About environment

Hello, thank you for your amazing implementation.
Was this Pose2Mesh_RELEASE repo developed on Windows 10?

Training and using of PoseNet

Hi @hongsukchoi , superb work and thanks for sharing!
But I have a few questions about the training of PoseNet.

[1] According to Fig. 9 of the supplementary material, H3.6M and COCO have different definitions of joint sets, so I am wondering how to combine these two datasets (as shown in Table 9) to train PoseNet.

[2] Besides, when employing an off-the-shelf 2D pose detector, how do you make sure that the input of PoseNet is consistent with the output of the 2D pose detector?

Best

The visualizations seem a bit odd

Hello, I ran demo/run.py and got these outputs:
(screenshots attached)

The right is the input; I used other pose estimation models to get the 2D joint locations.
The left is the output of Pose2Mesh.
The checkpoint is "pose2mesh_cocoJ_gt_train_human36_coco".
Could you tell me where the mistake is?
Looking forward to your reply.

As a Beginner

Hi @hongsukchoi, you have done amazing work, and of course thanks for sharing it with us.
I am very interested in this domain, but I have no idea where to start.
My system runs Windows 10 (it seems like yours is Linux) with an NVIDIA GeForce GTX 1050 Ti and a 1 TB HDD. I have Anaconda, PyCharm, and VS2015. Downloading the datasets takes a lot of time and sometimes gets aborted in the middle. Can you advise me on how I should proceed?

image annotation tool

Hello, can you tell me which annotation tool you used for annotating images, and what I have to run afterwards to create the .npy file? Could you also give a step-by-step tutorial?

Thanks in advance

Obtaining intermediate meshes

Hi. Thanks for the great work! I had a rather elementary doubt. I wanted to obtain intermediate meshes obtained by mesh coarsening as you have visualized in your paper. The coarsen function takes just the input adjacency matrix as input and outputs the adjacency matrices corresponding to several coarsening levels. The input adjacency matrix is simply a matrix composed of 0s and 1s which indicate node connectivities. However, the other adjacency matrices look different i.e. are not composed of 0s and 1s, and hence I am having trouble understanding them. Could you please help me with how I should go about obtaining the intermediate meshes from the adjacency matrices corresponding to different coarsening levels?

Thanks in advance

Different joints order for different dataset

Hi,
I wondered how you map different joint orders in the case of multiple datasets. For example, when training on Human3.6M and COCO, I set the target joint set to the COCO format in the configuration. How do you map the Human3.6M joint order to the COCO joint order?
Thanks in advance for your response

MPJPE is 0.0000

I trained PoseNet with 'posenet_manoJ_train_freihand.yml'; why is the MPJPE loss 0.0000?

Detector Question

Which detector should I use that has eyes, nose, and pelvis like the PoseNet in the introduction picture?

Can I use the pelvis coordinates as the center of both hips?

Ground truth mesh image correspondence

Hi,

Thank you for the great work. I had a small question about the data. For 3DPW and COCO, which have images with multiple people in them (even after cropping using the bounding box coordinates provided in your annotations), how did you decide which person to fit the mesh to? Attaching a cropped example from the 3DPW dataset for reference. I'd be grateful if you could let me know, as I want to find the correspondence between each person and the ground-truth mesh for every image in the COCO/3DPW datasets.

Thanks in advance :)
(attached image: full3)

The influence of input errors of different 2D joints

Hi @hongsukchoi
Thank you for sharing your great work! I'm a beginner in this domain, and I have some questions.
If I want to study the influence of input errors of individual 2D joints, which part of the code should I change? For example, suppose only the L_knee has errors in the input 2D joints.

Wrong overall mesh volume for different persons, with a certain bias

Hi,

I am trying to use your amazing work to estimate a person's volume from the fitted SMPL mesh. I was able to transform the mesh into my camera coordinate system. I calculate the volume of each body region (head, hands, arms, legs, etc.) and sum them up, using the world coordinates from the transformed mesh. A render onto the image matches the person's silhouette.

But I found that different persons, with clearly different overall meshes, all seem to have a similar volume of +- 0.05 m^3.

It is crucial for my work to have a relatively good estimate of a person's volume, but it seems that using the SMPL model won't be a good way to do this.

Could you give me some ideas? Is my conversion from the SMPL mesh to my camera coordinate system correct? I feel it could be a scaling issue. I hope you can help me out.

BR

Question about the multiple datasets loader.

Hi, thanks for sharing your code.
I have a question about the multiple-dataset loading part of your code.

    class MultipleDatasets(Dataset):
       ...
        def __getitem__(self, index):
            if self.make_same_len:
                db_idx = random.randint(0,self.db_num-1) # uniform sampling
                data_idx = index % self.max_db_data_num
                if data_idx >= len(self.dbs[db_idx]) * (self.max_db_data_num // len(self.dbs[db_idx])): # last batch: random sampling
                    data_idx = random.randint(0,len(self.dbs[db_idx])-1)
                else: # before last batch: use modular
                    data_idx = data_idx % len(self.dbs[db_idx])
        ...
            return self.dbs[db_idx][data_idx]
if data_idx >= len(self.dbs[db_idx]) * (self.max_db_data_num // len(self.dbs[db_idx])): # last batch: random sampling
    data_idx = random.randint(0,len(self.dbs[db_idx])-1)
else: # before last batch: use modular
    data_idx = data_idx % len(self.dbs[db_idx])

I cannot understand these 4 lines when we get an item from this class.
To my understanding, data_idx should be in [0, self.max_db_data_num * self.db_num - 1] if self.make_same_len is true.
But the length of self.dbs[db_idx] can be smaller than data_idx; in this case, you could simply take the modulo as a valid data_idx.

So why do we need a condition like this?

if data_idx >= len(self.dbs[db_idx]) * (self.max_db_data_num // len(self.dbs[db_idx]))

And why should data_idx be drawn by random sampling in this case?

It may be a trivial question, but I want to know your purpose! Thank you :)
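
To make the question concrete, here is a small numeric trace of the quoted branch logic with purely hypothetical sizes (this is not code from the repo):

    import random

    # Hypothetical sizes: the largest dataset has 10 samples, the sampled one has 3.
    max_db_data_num = 10
    db_len = 3

    # Indices below this bound take the modulo branch; the remainder hits the
    # "last batch: random sampling" branch in the snippet above.
    full_cycles = db_len * (max_db_data_num // db_len)   # 3 * 3 = 9

    for data_idx in range(max_db_data_num):
        if data_idx >= full_cycles:
            mapped = random.randint(0, db_len - 1)   # data_idx 9: random sampling
            branch = 'random'
        else:
            mapped = data_idx % db_len               # data_idx 0..8: modulo
            branch = 'modulo'
        print(data_idx, branch, mapped)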

Compatibility with Openpose 25 annotations?

Hi! I'd like to apply Pose2Mesh to my own data, which has only OpenPose 25-joint annotations (which seem to be the 17 COCO joints + neck + pelvis + 6 extra joints on the feet, as defined here).
Since Pose2Mesh uses your customized COCO joints (the original 17 + neck + pelvis) for 3DPW training, can I directly apply that pretrained model to my task?
If so, should I simply use the neck and pelvis joints that are contained in my OpenPose-25 annotations, or ignore them and recompute them using this line of code?

Thanks in advance; I just want to make sure I'm using your model in a strictly correct way (great work by the way!).

About Mano parameter

Why can the trans parameter of MANO be replaced by the T of the camera extrinsics?

visualize the coarse mesh model

Hello, thanks for your excellent work!

I started the coarsening process from the SMPL template with 6890 vertices and could get graphs of nine different resolutions, but I have no idea how to get the corresponding vertex coordinates to visualize the coarse mesh models.

I hope for your suggestions. Thanks a lot!

The problem of camera parameters

Hello, thank you for your excellent work!
I am a beginner in 3D reconstruction, and I have some doubts from reading your code:

import torch
import torch.nn as nn


class OptimzeCamLayer(nn.Module):
    def __init__(self):
        super(OptimzeCamLayer, self).__init__()

        self.img_res = 500 / 2
        self.cam_param = nn.Parameter(torch.rand((1,3)))

    def forward(self, pose3d):
        output = pose3d[:, :, :2] + self.cam_param[None, :, 1:]
        output = output * self.cam_param[None, :, :1] * self.img_res + self.img_res
        return output


def get_model():
    model = OptimzeCamLayer()

    return model

Why are the camera parameters initialized randomly, and how can they work correctly?
I hope to get your answer, thank you.

Problem in FreiHand data-loader

When I tried to train PoseNet on the FreiHAND dataset, I got the following errors:

File "/home/nankaingy/data/code/pose-shape/Pose2Mesh_RELEASE/main/../data/FreiHAND/dataset.py", line 202, in getitem
mano_mesh_cam, mano_joint_cam = self.get_mano_coord(mano_param, cam_param)
File "/home/nankaingy/data/code/pose-shape/Pose2Mesh_RELEASE/main/../data/FreiHAND/dataset.py", line 172, in get_mano_coord
R, t = np.array(cam_param['R'], dtype=np.float32).reshape(3, 3), np.array(cam_param['t'], dtype=np.float32).reshape(3)
KeyError: 'R'

Since the argument cam_param is loaded from 'freihand_train_data.json', the keys of the cam_param dict in the JSON file only contain 'focal' and 'princpt', without 'R' or 't', as shown below:

freihand_$SPLIT_data.json
|-- db_idx: {
        'cam_param': {
            'focal': focal lengths in x- and y-axis (intrinsic, pixel unit),
            'princpt': principal point coordinates in x- and y-axis (intrinsic, pixel unit)
        },
        'mano_param': {
            'pose': 48-dimensional MANO pose vector (theta),
            'shape': 10-dimensional MANO shape vector (beta)
        },
        'joint_3d': 21x3 joint coordinates (MANO joint set, meter unit) from the mesh,
        'scale': groundtruth scale provided by FreiHAND
    }
How can I fix this?

Visualize Mesh

Hi, Thanks for your amazing work.

I have trouble using your render.py code to visualize the ground-truth mesh of the Human3.6M dataset. I can get the vertices and faces from your dataloader, but when I try to render them, I can't find the camera parameters your function expects. I tried using the rotation, translation, focal length, and principal point from the original dataset, but the rendered result is blank.

Could you explain the camera parameters in render.py? Are they related to the camera parameters in the dataset?

Thank you very much

3d keypoints

Hi, I replaced the 3D keypoints with the ones I got from VideoPose3D, and the generated mesh looks really weird. Is it because of some normalization that you applied to the 3D keypoints? Thanks.

unzip pretrained weights

Hi, thank you for putting up and sharing this excellent work. A silly question: I was trying to download the pretrained weights (as final_pth.tar), but I had trouble even decompressing them. I am using a Windows 10 PC and have tried 7z and the tar command in Colab and Jupyter notebooks, but none of them worked (I could decompress other tar files from other sources), so I was wondering what tools I should use to decompress your weight files? Thank you

Problem in demo

Hello,
in demo/run.py the path to the weights is set as model_chk_path = './experiment/exp_07-07_23:02:27.03/checkpoint'.

I get these errors:
Traceback (most recent call last):
File "/home/alexe1ka/my_experiments/Pose2Mesh_RELEASE/demo/../lib/funcs_utils.py", line 134, in load_checkpoint
checkpoint = torch.load(checkpoint_dir, map_location='cuda')
File "/home/alexe1ka/.pyenv/versions/my_env/lib/python3.7/site-packages/torch/serialization.py", line 581, in load
with _open_file_like(f, 'rb') as opened_file:
File "/home/alexe1ka/.pyenv/versions/my_env/lib/python3.7/site-packages/torch/serialization.py", line 230, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/home/alexe1ka/.pyenv/versions/my_env/lib/python3.7/site-packages/torch/serialization.py", line 211, in init
super(_open_file, self).init(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: './experiment/exp_07-07_23:02:27.03/checkpoint/checkpoint0.pth.tar'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/alexe1ka/my_experiments/Pose2Mesh_RELEASE/demo/run.py", line 130, in
model, joint_regressor, joint_num, skeleton, graph_L, graph_perm_reverse = get_joint_setting(mesh_model, joint_category=joint_set)
File "/home/alexe1ka/my_experiments/Pose2Mesh_RELEASE/demo/run.py", line 96, in get_joint_setting
checkpoint = load_checkpoint(load_dir=model_chk_path)
File "/home/alexe1ka/my_experiments/Pose2Mesh_RELEASE/demo/../lib/funcs_utils.py", line 137, in load_checkpoint
raise ValueError("No checkpoint exists!\n", e)
ValueError: ('No checkpoint exists!\n', FileNotFoundError(2, 'No such file or directory'))

Process finished with exit code 1

Where can I get this file?

Confused about the performance of Pose2mesh on Human3.6M

The performance of Pose2Mesh on Human3.6M:

                                 MPJPE    PA-MPJPE
Trained on Human3.6M             64.9     48.0
Trained on Human3.6M and COCO    67.9     49.9
Best result                      64.9     46.3

As mentioned in the paper, using more datasets to train Pose2Mesh decreases the performance on Human3.6M. I wonder whether the best result on Human3.6M is supposed to come from training on the Human3.6M dataset only? In that case, the best result should match the one trained with Human3.6M. Or should it be trained with Human3.6M + COCO + MuCo? Would you please show me the training settings for Human3.6M?

How can I get the *.npy file for the demo

Hello, I much appreciate your amazing work!

I have one question: how can I generate the *.npy file that the demo needs to produce the mesh *.obj file?

What's more, for example, I have 3D poses represented in a three-dimensional Cartesian coordinate system (xyz axes), like the image below. How can I convert the original 3D pose to the *.npy file?

(image attached)

Broken link to models

A link in the readme file is broken:

Download basicModel_f_lbs_10_207_0_v1.0.0.pkl, basicModel_m_lbs_10_207_0_v1.0.0.pkl, and basicModel_neutral_lbs_10_207_0_v1.0.0.pkl from here (female & male).

The "here" link is broken, and now only the neutral model is available.

about your other code 3DCrowdNet

Hello, I recently read your other paper, 3DCrowdNet. I was greatly inspired and wanted to ask when the code will be open-sourced. Thank you, and I look forward to your reply.

Visualization result on 3DPW dataset

Following #32, I'm trying to visualize the ground truth and the test result on 3DPW using your render_mesh function. It seems even the ground-truth mesh cannot fit the image well. I wonder if this is normal, since the ground truth is obtained using SMPLify-X. Or should I follow the pipeline in demo/run.py and let the project_net learn the camera params?

        # get camera parameters
        project_net = models.project_net.get_model(crop_size=virtual_crop_size).cuda()
        joint_input = coco_joint_img
        out = optimize_cam_param(project_net, joint_input, crop_size=virtual_crop_size)

        # vis mesh
        color = colorsys.hsv_to_rgb(np.random.rand(), 0.5, 1.0)
        orig_img = render(out, orig_height, orig_width, orig_img, mesh_model.face, color)#s[idx])
        cv2.imwrite(output_path + f'{img_name[:-4]}_mesh_{idx}.png', orig_img)
def render_mesh(img, mesh, face, cam_param):
    # mesh
    mesh = trimesh.Trimesh(mesh, face)
    rot = trimesh.transformations.rotation_matrix(
        np.radians(180), [1, 0, 0])
    mesh.apply_transform(rot)
    material = pyrender.MetallicRoughnessMaterial(metallicFactor=0.0, alphaMode='OPAQUE', baseColorFactor=(1.0, 1.0, 0.9, 1.0))
    mesh = pyrender.Mesh.from_trimesh(mesh, material=material, smooth=False)
    scene = pyrender.Scene(ambient_light=(0.3, 0.3, 0.3))
    scene.add(mesh, 'mesh')
    
    focal, princpt = cam_param['focal'], cam_param['princpt']
    camera = pyrender.IntrinsicsCamera(fx=focal[0], fy=focal[1], cx=princpt[0], cy=princpt[1])
    scene.add(camera)
 
    # renderer
    renderer = pyrender.OffscreenRenderer(viewport_width=img.shape[1], viewport_height=img.shape[0], point_size=1.0)
   
    # light
    light = pyrender.DirectionalLight(color=[1.0, 1.0, 1.0], intensity=0.8)
    light_pose = np.eye(4)
    light_pose[:3, 3] = np.array([0, -1, 1])
    scene.add(light, pose=light_pose)
    light_pose[:3, 3] = np.array([0, 1, 1])
    scene.add(light, pose=light_pose)
    light_pose[:3, 3] = np.array([1, 1, 2])
    scene.add(light, pose=light_pose)

    # render
    rgb, depth = renderer.render(scene, flags=pyrender.RenderFlags.RGBA)
    rgb = rgb[:,:,:3].astype(np.float32)
    valid_mask = (depth > 0)[:,:,None]

    # save to image
    img = rgb * valid_mask + img * (1-valid_mask)
    return img

image_00001: ground-truth result (attached)

image_00001: test result (attached)

Test on pretrained manoJ_train_freihand.yaml

Hi,

I used your pretrained model and tested on the FreiHAND data as below:
python main/test.py --gpu 0,1,2,3 --cfg ./asset/yaml/pretrained manoJ_test_freihand.yaml

The average error output is 4700.
Is that expected?

MoVi dataset

In the original paper, it is mentioned that the MoVi dataset is used to train PoseNet. However, no relevant parameters regarding training with the MoVi dataset are found in asset/yaml/posenet_{input joint set}_train_{dataset list}.yml. I wonder how the MoVi dataset comes into play here. Would you clarify?

coco.npy file location

I am interested in your repo :)

Is there a J_regressor_coco.npy file uploaded?

I can't find the file.

Colab demo ?

Hello, thank you for your amazing implementation.
Can you provide a Google Colab notebook showing the demo of this repo? I'm sure it would help everyone and increase the popularity of your repo as well; it would be much appreciated.
Thank you

test issue about PW3D

Hi,
I trained Pose2Mesh on Human3.6M using "pose2mesh_human36J_train_human36.yml". How can I test this model on PW3D?
I'm looking forward to hearing from you. Many thanks!
