otaheri / grab

GRAB: A Dataset of Whole-Body Human Grasping of Objects

Home Page: https://grab.is.tue.mpg.de

License: Other

Language: Python (100%)
Topics: human-object-interaction, human-motion, grasping, motion-tracking, contact, hand-object-interaction, body-object-contact, grab-dataset, grab, grasp

grab's Introduction

GRAB: A Dataset of Whole-Body Human Grasping of Objects (ECCV 2020)


[Paper Page] [ArXiv Paper]

GRAB is a dataset of full-body human motions interacting with and grasping 3D objects. It contains accurate finger and facial motions, as well as the contact between the objects and the body. It includes 5 male and 5 female participants and 4 different motion intents.

Eat - Banana | Talk - Phone | Drink - Mug | See - Binoculars

The GRAB dataset also contains binary contact maps between the body and objects. With our interacting meshes, one could integrate these contact maps over time to create "contact heatmaps", or even compute fine-grained contact annotations, as shown below:

Contact Heatmaps | Contact Annotation
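
As a concrete illustration of the idea above, here is a minimal sketch (not a repo utility) of how per-frame binary contact maps could be averaged over time into a heatmap; the array names and shapes are assumptions, not the dataset's exact layout:

    import numpy as np

    # Hypothetical example data: per-frame binary contact over the object's vertices,
    # shape (n_frames, n_object_vertices); True where the body touches that vertex in that frame.
    binary_contact = np.random.rand(120, 1024) > 0.95   # placeholder, stands in for real GRAB contact maps

    # "Integrating" over time: the fraction of frames in which each vertex is in contact.
    contact_heatmap = binary_contact.mean(axis=0)        # (n_object_vertices,)
    print(contact_heatmap.min(), contact_heatmap.max())

The heatmap can then be used to color the object mesh, e.g. hot vertices indicate frequent contact.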

Check out the YouTube videos below for more details.

Long Video | Short Video

Table of Contents

  • Description
  • Requirements
  • Installation
  • Getting started
  • Contents of each sequence
  • Examples
  • Citation
  • License
  • Acknowledgments
  • Contact

Description

This repository contains:

  • Code to preprocess and prepare the GRAB data
  • Tools to extract 3D vertices and meshes of the body, hands, and object
  • Tools to visualize and render GRAB sequences

Requirements

This package's dependencies are listed in requirements.txt and are installed during the Installation step below.

Installation

To install the repo, please follow these steps:

  • Clone this repository and install the requirements:
    git clone https://github.com/otaheri/GRAB
    cd GRAB
    pip install -r requirements.txt
    

Getting started

In order to use the GRAB dataset, please carefully follow the steps below, in this exact order:

  • Download the GRAB dataset (ZIP files) from our website. Please do NOT unzip the files yet.

  • Please make sure to follow the instructions on the website to get access to object_meshes.zip before continuing to the next steps.

  • Put all the downloaded ZIP files for GRAB in a folder.

  • Run the following command to extract the ZIP files.

    python grab/unzip_grab.py   --grab-path $PATH_TO_FOLDER_WITH_ZIP_FILES \
                                --extract-path $PATH_TO_EXTRACT_GRAB_DATASET_TO
  • The extracted data should have the following structure:

    GRAB
    ├── grab
    │   ├── s1
    │   ├── ...
    │   └── s10
    ├── tools
    │   ├── object_meshes
    │   ├── object_settings
    │   ├── subject_meshes
    │   ├── subject_settings
    │   └── smplx_correspondence
    └── mocap (optional)
  • Follow the instructions on the SMPL-X website to download SMPL-X and MANO models.
  • Check the Examples below to process, visualize, and render the data.

Contents of each sequence

Each sequence name has the form object_action_*.npz, i.e. it encodes the object used and the action performed. The parent folder of each sequence indicates the subject ID for that sequence. For example, "grab/s4/mug_pass_1.npz" means that subject "s4" passes the mug. The data in each sequence is stored as a dictionary with several keys containing the corresponding information. Below we explain each of them separately.

  • gender: The gender of the subject performing the action (male or female).

  • sbj_id: The ID of the subject (s1 to s10).

  • obj_name: Name of the object used in the sequence.

  • motion_intent: The action which is performed by the subject on the object.

  • framerate: 120 fps - fixed for all sequences.

  • n_frames: The total number of frames for the sequence.

  • body: We represent the body using the SMPL-X model. SMPL-X is a realistic statistical body model that is parameterized with pose, shape, and translation parameters. Using these parameters we can get the moving 3D meshes for the body. For more details please check the website or paper. This body key contains the required data for the SMPL-X model, which are as follows:

    • params: The translation and joint rotations (in axis-angle representation) of the body, which are required by the SMPL-X model.

    • vtemp: Gives the relative path for the file that contains the personalized shape for each subject. We pass the v-template as input to the SMPL-X model so that we get a very accurate and realistic shape and motion.

  • lhand & rhand: In addition to the full-body motion, GRAB provides the data to get the motion of each hand individually. Similarly to the body, we use our MANO hand model to do this. MANO provides 3D meshes for the right and left hand given their pose and shape parameters. For more details please check our website and paper. The parameters for the model are saved in the following structure:

    • params: The global translation and joint rotation (in axis-angle representation) for the hand.

    • vtemp: The relative path to the personalized shape of each hand.

  • object: Each object is modeled as a rigid mesh which is rotated and translated.

    • params: The rotations and translations of the object over the whole sequence.

    • object_mesh: The relative path to the object mesh.

  • table: Each object is supported by a table at the beginning and end of each sequence. The table is modeled as a flat surface mesh that can be rotated and translated like any other object. The height of the table may differ across sequences. The table key contains the params (similar to the object params above) and the relative path to the table_mesh.

  • contact: We measure the contact between the object and the body based on proximity and other criteria, as described in our paper. For each motion frame, each object vertex is assigned a number that ranges from 0 to 55: 0 means there is no contact, while any other number identifies the specific body/hand part that the vertex is in contact with. You can find the mapping between the contact number and each body joint using the table here; the sketch right after this list shows one way to use it.
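
For illustration, here is a minimal loading sketch for one sequence, covering the keys described above. It assumes the repo's parse_npz helper lives in tools/utils (as used in the issues further down this page) and that right-hand part IDs are the values above 40 (also noted in the issues); verify both against the repo and the mapping table:

    import numpy as np
    from tools.utils import parse_npz   # assumed location of the repo's parse_npz helper

    seq = parse_npz('GRAB/grab/s4/mug_pass_1.npz')

    print(seq.gender, seq.sbj_id, seq.obj_name, seq.motion_intent)
    print('frames:', seq.n_frames, 'fps:', seq.framerate)

    # Per-frame contact labels on the object vertices: 0 = no contact,
    # 1..55 = index of the body part in contact (see the mapping table above).
    obj_contact = np.asarray(seq.contact.object)   # assumed shape: (n_frames, n_object_vertices)

    # Example: frames in which any right-hand part touches the object.
    # Assumption: right-hand parts are the IDs above 40 -- verify against the table.
    rh_frames = np.any(obj_contact > 40, axis=1)
    print('frames with right-hand contact:', int(rh_frames.sum()))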

Examples

  • Processing the data

    After installing the GRAB package and downloading the data and the models from the SMPL-X website, you should be able to run grab_preprocessing.py:

    python grab/grab_preprocessing.py --grab-path $GRAB_DATASET_PATH \
                                      --model-path $SMPLX_MODEL_FOLDER \
                                      --out-path $PATH_TO_SAVE_DATA
  • Get 3D vertices (or meshes) for GRAB

    In order to extract and save the vertices of the body, hands, and objects in the dataset, you can run save_grab_vertices.py:

    python grab/save_grab_vertices.py --grab-path $GRAB_DATASET_PATH \
                                      --model-path $SMPLX_MODEL_FOLDER
  • Visualizing and rendering 3D interactive meshes

    To visualize and interact with GRAB 3D meshes, run examples/visualize_grab.py:

    python examples/visualize_grab.py --grab-path $GRAB_DATASET_PATH \
                                      --model-path $SMPLX_MODEL_FOLDER

    To render the meshes and save images in a folder, please run examples/render_grab.py:

    python examples/render_grab.py --grab-path $GRAB_DATASET_PATH \
                                   --model-path $SMPLX_MODEL_FOLDER \
                                   --render-path $PATH_TO_SAVE_RENDERINGS

Citation

@inproceedings{GRAB:2020,
  title = {{GRAB}: A Dataset of Whole-Body Human Grasping of Objects},
  author = {Taheri, Omid and Ghorbani, Nima and Black, Michael J. and Tzionas, Dimitrios},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2020},
  url = {https://grab.is.tue.mpg.de}
}

We kindly ask you to cite Brahmbhatt et al. (ContactDB website), whose object meshes are used for our GRAB dataset, as also described in our license.

License

Software Copyright License for non-commercial scientific research purposes. Please carefully read the LICENSE file for the terms and conditions and any accompanying documentation before you download and/or use the GRAB data, model and software (the "Data & Software"), including 3D meshes (body and objects), images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding GitHub repository), you acknowledge that you have read and agreed to the LICENSE terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this LICENSE.

Acknowledgments

Special thanks to Mason Landry for his invaluable help with this project.

We thank:

  • Senya Polikovsky, Markus Hoschle (MH) and Mason Landry (ML) for the MoCap facility.
  • ML, Felipe Mattioni, David Hieber, and Alex Valis for MoCap cleaning.
  • ML and Tsvetelina Alexiadis for trial coordination, and MH and Felix Grimminger for 3D printing.
  • ML and Valerie Callaghan for voice recordings, Joachim Tesch for renderings.
  • Jonathan Williams for the website design, and Benjamin Pellkofer for the IT and web support.
  • Sai Kumar Dwivedi and Nikos Athanasiou for proofreading.

Contact

The code of this repository was implemented by Omid Taheri.

For questions, please contact [email protected].

For commercial licensing (and all related questions for business applications), please contact [email protected].


grab's People

Contributors

dimtzionas, michaeljblack, otaheri


grab's Issues

Incorrect Hand-Object Contact

Hi,

Thank you for sharing the great work! When visualizing the data, I obtain incorrect hand-object contacts, as in the examples shown below:

The hand and object meshes are obtained from the MANO hand data (pose, shape, global rotation, translation) and the object data (object template, object global rotation, translation). I wonder if there are errors in my visualization process, or whether the data itself can include these errors?

Yufei

per vertex displacement error

Hello @otaheri

I am trying to extract the vertices of the body. As described, I run save_grab_vertices.py:

python grab/save_grab_vertices.py --grab-path $GRAB_DATASET_PATH \
                                  --model-path $SMPLX_MODEL_FOLDER

I am getting an error related to the vertex displacement in the function blend_shapes

RuntimeError: size of dimension does not match previous size, operand 1, dim 2

How can I get it to work? I didn't modify anything in the code; I just gave the model path and the data path. Thanks.

How to get the position of a joint in the world coordinate system

Hello,
is there a way to get the positions of the 15 joints of the MANO hand in the world coordinate system from the data stored under rhand in the dataset?
In addition, what does the 24-dimensional vector of hand_pose under rhand represent?
Thank you so much!!!

Correspondence between MANO and SMPL hand_pose

Hi,

I noticed that the right_hand_pose PCA provided in the body parameters is different from the hand_pose PCA provided in the rhand parameters in the GRAB data. So when I plug the rhand pose (either fullpose or PCA) as-is into SMPL-X, the hand looks different. I was wondering -- given a MANO pose parameter (fullpose or PCA), is there a way to convert it into right_hand_pose for SMPL-X?

Here is an example: Left - right_hand_pose from body param ; Right - hand_pose from rhand param

Thanks!

How to get watertight meshes

Hello, first, thanks for your great work!

It seems that the meshes I get from your method are not watertight.
I just do os_mesh = o_mesh + s_mesh and then save os_mesh as a .obj file. o_mesh is from line 95 of examples/visualize_grab.py and s_mesh is from line 98 of examples/visualize_grab.py.

So is there any watertight version? Or could you please show me how to get watertight .obj files?

Thank you :)

details about global_orient

Thank you for your great work!!!!
I want to ask about the global_orient parameter in rhand. Does it describe the rotation of the root node of the middle finger of the palm in a fixed world coordinate system?

Issue running examples due to unzip directory structure

Hello,

I have been facing issues running the preprocessing and save vertices examples. I've downloaded the dataset for all subjects, unzipped them with the provided unzipping tool, and have also downloaded the SMPLX models.

In both examples, the call to sbj_vtemp = self.load_sbj_verts(sbj_id, seq_data) throws the following error:

File "grab/grab_preprocessing.py", line 148, in data_preprocessing sbj_vtemp = self.load_sbj_verts(sbj_id, seq_data) File "grab/grab_preprocessing.py", line 307, in load_sbj_verts sbj_vtemp = np.array(Mesh(filename=mesh_path).vertices) File "./tools/meshviewer.py", line 47, in __init__ mesh = trimesh.load(filename, process = process) File "/home/tshankar/Research/Code/Robo_Env1/lib/python3.6/site-packages/trimesh/exchange/load.py", line 113, in load resolver=resolver) File "/home/tshankar/Research/Code/Robo_Env1/lib/python3.6/site-packages/trimesh/exchange/load.py", line 623, in parse_file_args raise ValueError('string is not a file: {}'.format(file_obj)) ValueError: string is not a file: ../Data/Datasets/TestGrab/grab/../tools/subject_meshes/male/s1.ply

I believe the error is because the code assumes the unzipping script creates a /tools/ directory in the path the GRAB dataset is extracted to. However, despite having run the code as instructed, my unzipped GRAB dataset directory does not contain a /tools/ directory. Since the example code tries to access ../tools/subject_meshes from there, it correspondingly fails.

How can I fix this issue?

Extracting files with the script was not working

Hello Omid,
I followed these steps to extract the ZIP files, but the result was an empty folder, and when I tried to print the all_zips variable, it was an empty array.

The GRAB dataset is split into separate files per subject. Please do NOT unzip manually! Please take the following steps:

Download all ZIP files in the same folder.
Run our script as explained here to unzip the ZIP files and extract the content in the folder hierarchy that our code expects.

Question on processing data for GrabNet

Hello,
Thank you for the great work! I have been studying this dataset and the associated GrabNet for hand-object interaction, and I tried to replicate the data preparation step for GrabNet, as described in the original paper.

According to the paper, the following rules were applied:
(i) The right hand should be in contact.
(ii) The left hand should not have any contact.
(iii) The object’s vertical position should be at least 5 mm different from its initial one (i.e. it should be lifted from the resting table).
(iv) The right thumb and at least one more finger should be in contact.
(v) A finger is considered a contacting finger when it is in contact with at least 50 object vertices.

While I can check whether the left and/or right hand is in contact with the object, according to the MANO vertices,
I wonder how I can check rule (iv): whether the "thumb" and "at least one more finger" are in contact.
Is there a MANO-vertices segmentation map somewhere? If so, would you mind sharing the annotation map information?

I would also like to know how to exactly calculate the object's vertical position.

Thanks in advance,

Extraction of the right hand from GRAB

Hello!

For my project, I would like to use only a single hand (the right hand) with GRAB. I plugged in MANO as the model type instead of SMPL-X; however, I'm facing two problems:

  1. Inaccuracies
    Although the same GRAB data is used, MANO is visibly less accurate for the hand-object interaction than SMPL-X.

Full body:
full body

Right hand:
right hand

Everything seems to be in place from my code level, so I'm assuming the model itself makes a difference. I also get a warning which is probably the reason: WARNING: You are using a MANO model, with only 10 shape coefficients.

Is there a way to get the exact same hand movement with MANO as with the hand in SMPL-X?

  2. Contact forces
    I would like to extract the contact forces only for the right hand. They are saved in two dictionaries:
    • seq_data['contact']['object']
    • seq_data['contact']['body']

For the object dictionary, I can extract values for the right hand according to the table that shows mapping between the contact number and each body joint (values greater than 40).

However, in the case of the body dictionary, there's an error with dimensions: dimension is 778 but corresponding boolean dimension is 10475. Specifically, MANO expects size 778, while the list has 10475 values, as originally for SMPL-X. I assume that I should take only the 778 values from this list that are relevant for the right hand, but this is not clear.

Is there a simple way to extract the contact forces only for the hand?

Thank you,
Bartek

Potential naming conflict with smplx

Issue

Upon installing the required dependencies and smplx, I am met with the following error when running visualize_grab.py (occurs regardless of arguments passed in).

$ python examples/visualize_grab.py
Traceback (most recent call last):
  File "examples/visualize_grab.py", line 25, in <module>
    from tools.objectmodel import ObjectModel
  File "/home/msalvato/miniconda3/envs/myenv2/lib/python3.8/site-packages/tools/__init__.py", line 18, in <module>
    import clean_ch
ModuleNotFoundError: No module named 'clean_ch'

Configurations

Linux (WSL 2) in a conda env (python 3.8.3)
Linux (WSL 2) in a pip env (python 3.8.3 I think?)
Windows in a conda env (python 3.8.3)

Min repro

I checked a minimal repro in WSL 2 in a new conda env with Python 3.8.3:

  1. 'pip install -r requirements.txt' in GRAB
  2. 'pip install smplx[all]' and 'python setup.py install' for smplx.
  3. python examples/visualize_grab.py

Solution

I renamed the "tools" module in GRAB to "tools_grab" (and changed all references). I believe the issue is that smplx creates a package named "tools" which also exists as a name in GRAB.

Relation between right-hand global_orientation and wrist joint from full-body pose

Hi,

I am trying to see if I can get the right hand wrist global_orientation from full body pose. My understanding is that the 21 full body joints correspond to these specific joints. The last joint is the right wrist.

However, the value for data['body']['body_pose'][-1, :] does not appear to be the same as data['rhand']['global_orient']. Should these two be the same / related? If so, is there a way for me to get right hand global orientation (wrist pose) from the full-body pose?
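
A hedged sketch of one way to compose the wrist's global orientation from the body pose, assuming (as in the SMPL family of models) that body_pose stores per-joint rotations relative to each joint's parent and that joint 21 is the right wrist; the key nesting and the frame index below are placeholders:

import numpy as np
from scipy.spatial.transform import Rotation as R

# Assumed pelvis-to-right-wrist chain of SMPL-X joint indices (the pelvis itself is
# handled by global_orient): spine1 -> spine2 -> spine3 -> right_collar ->
# right_shoulder -> right_elbow -> right_wrist.
chain = [3, 6, 9, 14, 17, 19, 21]

frame = 0                                                  # placeholder frame index
body_params = data['body']['params']                       # assumed key nesting; adjust if needed
global_orient = np.asarray(body_params['global_orient'])[frame]          # (3,) pelvis axis-angle
body_pose = np.asarray(body_params['body_pose'])[frame].reshape(-1, 3)   # (21, 3) local axis-angle

# Global wrist rotation = product of the local rotations along the chain.
R_wrist = R.from_rotvec(global_orient)
for j in chain:
    R_wrist = R_wrist * R.from_rotvec(body_pose[j - 1])   # row j-1 of body_pose belongs to joint j

print(R_wrist.as_rotvec())   # axis-angle global orientation of the right wrist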

Thanks!

About body data and object data

  1. Regarding the body data, what's the difference between the following entries in body_data: "body_pose", "fullpose", and "joints" (obtained from the code in the attached figure)? What is the meaning of the sizes of these data, e.g. 63 for 'body_pose', 127 for 'joints', and 165 for 'fullpose'?
    How can I get a sequence of joint data, both angles and coordinates? Which key in the data dict is the right one?

  2. As for the object data, I want to get a sequence of object vertices. Is it right to first get the sampled vertices with original coordinates (object_data['verts'], with size (1024, 3)) for the first frame, and then rotate and translate them based on object_data['global_orient'] and object_data['transl']?
    How do I calculate this? Is there an existing Python function that I can use directly with these params as input? (See the sketch after this list.)
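
A minimal sketch (not an official repo function) of the per-frame rigid transform described in point 2, assuming global_orient is a per-frame axis-angle rotation and transl a per-frame translation; the repo's ObjectModel (tools/objectmodel.py) presumably does the equivalent internally:

import numpy as np
from scipy.spatial.transform import Rotation as R

verts = np.asarray(object_data['verts'])                        # (1024, 3) sampled template vertices
rot = R.from_rotvec(np.asarray(object_data['global_orient']))   # per-frame rotations from axis-angle
transl = np.asarray(object_data['transl'])                      # (n_frames, 3) per-frame translations

# Rigid transform per frame f: v'_f = R_f @ v + t_f
verts_seq = np.einsum('fij,vj->fvi', rot.as_matrix(), verts) + transl[:, None, :]
print(verts_seq.shape)   # (n_frames, 1024, 3)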

Outliers in the table pose

Hello! Thanks for the great dataset!

I've noticed that some sequences have outliers in the table position. The rendered videos confirm their presence (see, for example, "grab\s7\pyramidsmall_pass_1", shortly after the person grabs the object). To get the full list of sequences with outliers, you can use the following snippet; I get 242 sequences when it finishes.

import glob
import numpy as np
from tools.utils import parse_npz  # parse_npz helper, assumed to live in tools/utils

tol = 1e-3
for file in glob.glob(PATH_TO_GRAB + r"\*\*.npz"):
    data = parse_npz(file)
    parms = data.table.params
    # flag sequences whose table orientation jumps between consecutive frames
    if np.max(np.linalg.norm(np.diff(parms.global_orient, axis=0), axis=1)) > tol: print(file)

Are there any plans to clean these sequences, or a recommended way to clean the data?

Thanks!

Problem when unzipping the GRAB dataset

Hi, thanks for your excellent work! I downloaded all the ZIP files from the website. I unzipped the files by running the script you offered: python grab/unzip_grab.py --grab-path $PATH_TO_FOLDER_WITH_ZIP_FILES --extract-path $PATH_TO_EXTRACT_GRAB_DATASET_TO.

But I couldn't obtain the "tools" folder. I checked the steps and don't know where the problem is.

mesh measurement scale?

Hi Omid,

As I didn't find an explanation for this yet, I would like to confirm that the values in the mesh files provided by GRAB are measured in meters. For example,

from psbody.mesh import Mesh
mesh = Mesh(filename='GRAB/tools/object_meshes/contact_meshes/camera.ply')
# mesh.v[:,0].max() = 0.057756002992391586
# mesh.v[:,0].min() = -0.057756002992391586

This means that the extent of the camera mesh along the first axis is ~57.8 * 2 ≈ 115 mm. Is this correct?

examples not working and path problems

Hi everybody!
I'm trying to make the code work but I'm making mistakes (probably while organizing files).
I've downloaded all the required files and unzipped them (I think correctly), but when I try to run the examples I can't see anything and they return errors.

I've organized my files as you can see in the first image:
GRAB_gen
and the subfolder 'grab' has these items:
grab_under
The zipped_obj_subj folder contains all the objects and 6 subjects downloaded from the GRAB page.

As you can see from this screenshot, this is what I type to see an example of movement:
erorr

I think my problem is about how I downloaded, collected, and unzipped the SMPL-X components.
I've read all the documentation and information on the Git repo multiple times, but I can't find a way out.

Please help! :(

How to Include 'jaw' Joint and Eye Joints in a Kinematic Chain?

Hello,
I am currently working with a kinematic chain represented by lists of joint indices in Python, similar to the following example:

t2m_kinematic_chain = [
    [0, 2, 5, 8, 11],      # pelvis -> right_hip -> right_knee -> right_ankle -> right_foot
    [0, 1, 4, 7, 10],      # pelvis -> left_hip -> left_knee -> left_ankle -> left_foot
    [0, 3, 6, 9, 12, 15],  # pelvis -> spine1 -> spine2 -> spine3 -> neck -> head
    [9, 14, 17, 19, 21],   # spine3 -> right_collar -> right_shoulder -> right_elbow -> right_wrist
    [9, 13, 16, 18, 20],   # spine3 -> left_collar -> left_shoulder -> left_elbow -> left_wrist
]

Now, I want to include the 'jaw' joint (index 22) and the eye joints (indices 23 and 24) in this kinematic chain. I'm unsure about the connection between the 'jaw' joint and the 'head' joint (index 15). How should I modify the kinematic chain to include these joints properly? Should I connect the 'jaw' joint directly to the 'head' joint, or consider another approach?
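
A hedged sketch of one possible extension, assuming the standard SMPL-X kinematic tree in which the jaw (22) and both eye joints (23, 24) are each direct children of the head joint (15):

# Assumption: in SMPL-X, jaw (22), left eye (23), and right eye (24) all attach to head (15).
face_chains = [
    [15, 22],   # head -> jaw
    [15, 23],   # head -> left_eye
    [15, 24],   # head -> right_eye
]
t2m_kinematic_chain_with_face = t2m_kinematic_chain + face_chains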
