
MANO layer for PyTorch, generating hand meshes as a differentiable layer

License: GNU General Public License v3.0


manopth's Introduction

Manopth

MANO layer for PyTorch (tested with v0.4 and v1.x)

ManoLayer is a differentiable PyTorch layer that deterministically maps from pose and shape parameters to hand joints and vertices. It can be integrated into any architecture as a differentiable layer to predict hand meshes.


ManoLayer takes batched hand pose and shape vectors and outputs corresponding hand joints and vertices.

The code is mostly a port of the original chumpy-based MANO model to PyTorch. It therefore builds directly upon the work of Javier Romero, Dimitrios Tzionas and Michael J. Black.

This layer was developed for and used in the CVPR 2019 paper Learning joint reconstruction of hands and manipulated objects. See the project page and the demo and training code.

It reuses part of the great code from the PyTorch layer for the SMPL body model by Zhang Xiong (MandyMo) to compute the rotation utilities!

It also includes in mano/webuser partial content of files from the original MANO code (posemapper.py, serialization.py, lbs.py, verts.py, smpl_handpca_wrapper_HAND_only.py).

If you find this code useful for your research, consider citing:

  • the original MANO publication:
@article{MANO:SIGGRAPHASIA:2017,
  title = {Embodied Hands: Modeling and Capturing Hands and Bodies Together},
  author = {Romero, Javier and Tzionas, Dimitrios and Black, Michael J.},
  journal = {ACM Transactions on Graphics, (Proc. SIGGRAPH Asia)},
  publisher = {ACM},
  month = nov,
  year = {2017},
  url = {http://doi.acm.org/10.1145/3130800.3130883},
  month_numeric = {11}
}
  • the publication this PyTorch port was developed for:
@INPROCEEDINGS{hasson19_obman,
  title     = {Learning joint reconstruction of hands and manipulated objects},
  author    = {Hasson, Yana and Varol, G{\"u}l and Tzionas, Dimitris and Kalevatykh, Igor and Black, Michael J. and Laptev, Ivan and Schmid, Cordelia},
  booktitle = {CVPR},
  year      = {2019}
}

The training code associated with this paper, compatible with manopth, can be found here. The release includes a model trained on a variety of hand datasets.

Installation

Get code and dependencies

  • git clone https://github.com/hassony2/manopth
  • cd manopth
  • Install the dependencies listed in environment.yml
    • In an existing conda environment, conda env update -f environment.yml
    • In a new environment, conda env create -f environment.yml, which will create a conda environment named manopth

Download MANO pickle data-structures

  • Go to MANO website
  • Create an account by clicking Sign Up and provide your information
  • Download Models and Code (the downloaded file should have the format mano_v*_*.zip). Note that all code and data from this download falls under the MANO license.
  • Unzip and copy the models folder into the manopth/mano folder
  • Your folder structure should look like this:
manopth/
  mano/
    models/
      MANO_LEFT.pkl
      MANO_RIGHT.pkl
      ...
  manopth/
    __init__.py
    ...

To check that everything is set up correctly, run python examples/manopth_mindemo.py, which should generate and display a random hand using the MANO layer.

Install manopth package

To be able to import and use ManoLayer in another project, go to your manopth folder and run pip install .

Then, from any other project:

cd /path/to/other/project

You can now use from manopth import ManoLayer in this other project!

Usage

Minimal usage script

See examples/manopth_mindemo.py

Simple forward pass with random pose and shape parameters through MANO layer

import torch
from manopth.manolayer import ManoLayer
from manopth import demo

batch_size = 10
# Select number of principal components for pose space
ncomps = 6

# Initialize MANO layer
mano_layer = ManoLayer(mano_root='mano/models', use_pca=True, ncomps=ncomps)

# Generate random shape parameters
random_shape = torch.rand(batch_size, 10)
# Generate random pose parameters, including 3 values for global axis-angle rotation
random_pose = torch.rand(batch_size, ncomps + 3)

# Forward pass through MANO layer
hand_verts, hand_joints = mano_layer(random_pose, random_shape)
demo.display_hand({'verts': hand_verts, 'joints': hand_joints}, mano_faces=mano_layer.th_faces)

Result:

random hand

Demo

For more options, a forward and backward pass, and a loop for quick profiling, see examples/manopth_demo.py.

You can run it locally with:

python examples/manopth_demo.py


manopth's Issues

any method to visualize two hands?

I find that your examples and the official MANO example only visualize one hand. Do you know of any method to visualize two hands at the same time?

how to get hand faces

I use MANO to regress a hand, but I want to use an SDF to compute some losses. Can somebody tell me how to get the hand faces?

Segmentation fault (core dumped)

When I run manopth_mindemo.py, I get "Segmentation fault (core dumped)" on Ubuntu, but it works fine on my macOS. How can I solve this problem?

Efficient way to get ManoLayer's jacobian matrix

Hello, thanks for sharing your code. I have an issue computing ManoLayer's Jacobian matrix in my project.
I use the official PyTorch API to get ManoLayer's Jacobian, but it takes too much time, about 0.2 s per sample on one Nvidia RTX 3090.
Here is my code:

from torch.autograd.functional import jacobian
j = jacobian(mano_layer.forward, (theta, beta))

By the way, I only compute the joints' Jacobian; adding the vertices' Jacobian costs even more time.
Is there a more efficient way to get the Jacobian?
How could I get an analytical expression for ManoLayer's differential?
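One thing worth trying, sketched here on a toy function standing in for mano_layer.forward (the toy shapes are illustrative assumptions, not ManoLayer's real signature): torch.autograd.functional.jacobian accepts vectorize=True, which batches the backward passes instead of looping over outputs and is often much faster for functions with many outputs, such as a mesh layer.

```python
import torch
from torch.autograd.functional import jacobian

# Toy stand-in for mano_layer.forward: maps (pose, shape) batches to outputs.
# The shapes and the function body are illustrative only.
def toy_layer(theta, beta):
    return torch.sin(theta).unsqueeze(-1) * beta.unsqueeze(1)

theta = torch.rand(4, 9)   # batch of 4 "pose" vectors
beta = torch.rand(4, 10)   # batch of 4 "shape" vectors

# vectorize=True computes the rows of the Jacobian in a batched fashion
# rather than one backward call per output element.
j_theta, j_beta = jacobian(toy_layer, (theta, beta), vectorize=True)
print(j_theta.shape)  # torch.Size([4, 9, 10, 4, 9])
```

The output Jacobian keeps the full output shape followed by the full input shape, so for a real mesh layer it can still be large; restricting the function to return joints only (as the poster does) keeps it manageable.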

When will the code be published?

Hi,
I have read the paper and found it interesting. May I ask when the code and dataset will be published? I would like to give them a try :D

UV coordinates

Is there any uv mapping for the hand models that you could provide, in order to facilitate texturing?

Joints prediction is outside the mesh

Hello,

I have a problem when I use this MANO implementation: after some epochs, some predicted joints fall outside the mesh. How is this possible?
I attach a picture to give an idea of the problem.

How can I fit my hand image to manopth model

Hello, @hassony2 .

Thanks for sharing.

I have tried the demo.

If I have my own hand image, how can I fit it into the model?

In other words, it looks like the pose vector and shape vector are essential inputs; how can I get these two tensors from my own data (e.g., RGB images)?

Thanks & regards!

error

Hi @hassony2 , @gulvarol

I received this error when I started training on the stereohands dataset (traineval.py). What should I do?
Thanks

Traceback (most recent call last):
  File "/home/samira/projects/hand-pose/arxive/obman-cvpr2019/obman_train/traineval.py", line 418, in <module>
    main(args)
  File "/home/samira/projects/hand-pose/arxive/obman-cvpr2019/obman_train/traineval.py", line 298, in main
    fig=fig,
  File "/home/samira/projects/hand-pose/arxive/obman-cvpr2019/obman_train/mano_train/netscripts/epochpass3d.py", line 81, in epoch_pass
    sample, return_features=inspect_weights
  File "/home/samira/projects/hand-pose/arxive/obman-cvpr2019/env/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/samira/projects/hand-pose/arxive/obman-cvpr2019/env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/samira/projects/hand-pose/arxive/obman-cvpr2019/obman_train/mano_train/networks/handnet.py", line 273, in forward
    use_stereoshape=False,
  File "/home/samira/projects/hand-pose/arxive/obman-cvpr2019/env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/samira/projects/hand-pose/arxive/obman-cvpr2019/obman_train/mano_train/networks/branches/manobranch.py", line 181, in forward
    root_palm=root_palm,
  File "/home/samira/projects/hand-pose/arxive/obman-cvpr2019/env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/samira/projects/hand-pose/arxive/obman-cvpr2019/env/lib/python3.5/site-packages/manopth/manolayer.py", line 141, in forward
    self.th_hands_mean + th_full_hand_pose
RuntimeError: The size of tensor a (45) must match the size of tensor b (3) at non-singleton dimension 3

How do you create the hand MANO ground truth and is there are code?

Hello,

Thank you for the work.
Is there code for creating the ground-truth MANO hand annotations [pose, shape parameters]?
I saw some articles that use SGD to fit them iteratively; I wonder whether there is an implementation of that.
I know you are busy, but it would be really helpful if you could help with this.
Thank you.

What is the range of input for manopth?

Hi.
I'm trying to randomly sample hand poses with the MANO model.
However, I'm confused about the input pose vector.
What is an appropriate range for the pose vector?
If anyone could point me to a reference about this, it would be appreciated.

Can we drive the mesh using FK?

Thank you so much for your great work!
Can we drive the mesh using FK?

I have data of 21 joints (xyz), from which we can calculate the rotation of each joint and apply it to the MANO template mesh using FK. But the result is weird. Can anyone help?

get rotation matrix

Hi~ @hassony2 :) Thank you for your excellent project !

I have a question about manopth.
Because I want to use FK in Unity, I need the local rotation matrix of each joint of the MANO hand.
How can I get this information?
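As a hedged sketch of the underlying math (not an API that manopth itself exposes): MANO's per-joint pose parameters are axis-angle 3-vectors, and each one can be converted to a local rotation matrix with the Rodrigues formula.

```python
import numpy as np

def rodrigues(axis_angle):
    """Convert a 3-vector axis-angle rotation to a 3x3 rotation matrix."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-8:
        return np.eye(3)                         # zero rotation -> identity
    k = axis_angle / theta                       # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])           # cross-product (skew) matrix
    # Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# A 90-degree rotation about the z axis maps the x axis to the y axis.
R = rodrigues(np.array([0.0, 0.0, np.pi / 2]))
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # → [0. 1. 0.]
```

Note that these are rotations relative to each joint's parent in the kinematic chain, so composing them along the chain (as an FK engine like Unity does) should reproduce the posed hand.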

running on GPU

Hello,
I found that the tensors in ManoLayer are not moved to "cuda",
so I guess it cannot run on a GPU.
Is there an implementation that can run on a GPU?
Thanks a lot!
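Since ManoLayer is a torch.nn.Module, moving it with .to(device) should carry its internal tensors along, provided they are registered as buffers; the device-agnostic pattern is sketched below on a toy module standing in for ManoLayer (the module body and shapes are assumptions for illustration).

```python
import torch

# Device-agnostic pattern: pick CUDA when available, else fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Toy module standing in for ManoLayer; tensors only follow .to(device)
# when they are registered via register_buffer (or are Parameters).
class ToyLayer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer('th_template', torch.zeros(778, 3))

    def forward(self, pose):
        # Broadcast each (3,) pose offset over the 778-vertex template.
        return pose.unsqueeze(1) + self.th_template

layer = ToyLayer().to(device)
pose = torch.rand(10, 3, device=device)
verts = layer(pose)           # stays on the same device as its inputs
```

The same pattern applies to the real layer: move both the layer and the pose/shape tensors to the target device before the forward pass.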

No module named 'chumpy'

I found that chumpy is a Python 2 package for automatic differentiation. Is there any hope for Python 3 support?

Questions for initial pose

Hello!

I am quite new to the MANO hand model, and I wonder why the zero-pose input (45 zeros) gives a hand with bent fingers. In my opinion, the T-pose (where the pose vector is all zeros) should result in a flat hand. Is there any trick or modification to the MANO model in the manopth layer, or does the original MANO model have bent fingers in its "T-pose"?

Gradient issue with SVD in batch_rotprojs

Thanks for your re-implementation of MANO layer.

I'm trying to differentiate through the MANO layer in rotation-matrix mode for both the root joint and the other joints. I have a question regarding the backward pass through the SVD in the batch_rotprojs function. As we are dealing with rotation matrices, the singular values will always be 1. However, as mentioned in https://pytorch.org/docs/stable/generated/torch.svd.html, the gradient is only finite when the input has neither zero nor repeated singular values, which definitely contradicts our case here. So I'm wondering whether there is a workaround for this, or whether I have to stick with axis angles, which have no such problem.

P.S. I'm considering differentiating through rotation matrices instead of axis angles since, according to some resources, e.g. https://arxiv.org/pdf/2003.09572.pdf, representations based on trigonometric functions tend to be harder to train because they are non-injective. As far as I know, converting axis angles involves trigonometry, so I decided to skip that step if possible. Indeed, I have also tried training to regress axis angles, which did not seem to converge.

Any suggestion would be appreciated! Thanks in advance.
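One common workaround, hedged as a sketch rather than something manopth provides: the continuous 6D rotation representation from the paper linked above (Zhou et al.) avoids both the SVD projection and trigonometry. Two stacked 3-vectors are orthonormalized by Gram-Schmidt into a valid rotation matrix, and every step is smoothly differentiable.

```python
import numpy as np

def rot6d_to_matrix(x):
    """Map a 6D vector (two stacked 3-vectors) to a 3x3 rotation matrix
    via Gram-Schmidt, following Zhou et al., 'On the Continuity of
    Rotation Representations in Neural Networks'."""
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)        # first column: normalize a1
    a2 = a2 - np.dot(b1, a2) * b1       # remove a2's component along b1
    b2 = a2 / np.linalg.norm(a2)        # second column: orthonormal to b1
    b3 = np.cross(b1, b2)               # third column completes the frame
    return np.stack([b1, b2, b3], axis=1)

R = rot6d_to_matrix(np.array([1.0, 2.0, 0.0, 0.0, 1.0, 1.0]))
print(np.allclose(R.T @ R, np.eye(3)))     # → True: orthonormal
print(np.isclose(np.linalg.det(R), 1.0))   # → True: proper rotation
```

Because the output is a rotation matrix by construction, no SVD projection (and hence no gradient issue at repeated singular values) is needed; the same computation ports directly to PyTorch tensors for end-to-end training.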

Orientation of the model

Hi, @hassony2 , thanks for sharing the repo.

How can I change the orientation of the hand around the origin without changing the translation?
Every time I change the first three parameters (the rotation vector) of random_pose in examples/manopth_mindemo.py,
the hand not only rotates around the origin but also picks up some translation.
Is the transformation matrix coupled with the hand pose in some way?

Example: examples/manopth_mindemo.py

1.
random_pose = torch.rand(batch_size, ncomps + 3) * 0
random_pose[0,0] = 0
random_pose[0,1] = 0
random_pose[0,2] = 0
Screenshot from 2021-12-17 17-00-06

2.
random_pose = torch.rand(batch_size, ncomps + 3) * 0
random_pose[0,0] = 0.5
random_pose[0,1] = 0.5
random_pose[0,2] = 0.5
Screenshot from 2021-12-17 17-00-48

Thank you!
Best
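A hedged explanation and sketch (assuming the usual convention where the global rotation is applied about the model origin rather than the wrist): because the hand is not centered at the origin, rotating it about the origin also displaces it. One common post-processing fix is to re-center the rotation about the root joint after the forward pass, illustrated here with plain NumPy on dummy vertices.

```python
import numpy as np

def rotate_about_root(verts, root, R):
    """Rotate vertices about a chosen root point instead of the origin,
    so the root itself stays fixed (no net translation)."""
    return (verts - root) @ R.T + root

# Dummy "hand": 5 points; take the first point as the root joint.
rng = np.random.default_rng(0)
verts = rng.random((5, 3))
root = verts[0].copy()

# A 90-degree rotation about the z axis.
R = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

rotated = rotate_about_root(verts, root, R)
print(np.allclose(rotated[0], root))  # → True: the root does not move
```

Applying the same subtract-rotate-add with the predicted root joint of the MANO output should let you change orientation without the apparent translation.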

Left MANO hand model

Thank you for sharing the code.

When using the left MANO model, I find that the fingers bend in the direction opposite the palm when the rotation value is positive.

I know that the fingers should bend toward the palm when the rotation value is positive.

Is this normal?

Thank you.
(The image below is the output left-hand mesh when the rotation value is positive.)
