
otaheri / grabnet

225 stars · 13 watchers · 29 forks · 66.13 MB

GrabNet: A Generative model to generate realistic 3D hands grasping unseen objects (ECCV2020)

Home Page: https://grab.is.tue.mpg.de

License: Other

Languages: Python 99.60%, Shell 0.40%
Topics: grasping, grasps, hand-object-interaction, object-manipulation, hand-grasping, generative-model, generating-grasp, mano

grabnet's People

Contributors

dimtzionas · michaeljblack · nghorbani · otaheri


grabnet's Issues

Typo in GrabNet data setup

old:
python grabnet/data/unzip_data.py --data-path $PATH_TO_FOLDER_WITH_ZIP_FILES --ectract-path $PATH_TO_EXTRACT_DATASET_TO

new:
python grabnet/data/unzip_data.py --data-path $PATH_TO_FOLDER_WITH_ZIP_FILES --extract-path $PATH_TO_EXTRACT_DATASET_TO

Obtaining the Object Meshes / Preprocessing of GRAB data for Grabnet

Hello, Omid!

Q1) I saw in the GRAB paper (specifically the part about GrabNet) how you pre-processed the GRAB dataset for GrabNet with several filtering steps (e.g. discarding periods of data when the left hand is touching the object). Is it possible for me to find that code as well?

Thank you!

Generate two-hand grasps? Or full-body grasps?

Hi, great work!
I was wondering if it is possible to generate two-hand grasps? I mean render scenes where both hands are always present: maybe one of the two hands is grasping the object, maybe both hands are grasping the same object, or maybe each hand is grasping a different object?
What about rendering the whole body instead of only the hands?

How to avoid collisions?

(the scale is not correct and I cannot fix it)

for this cell:

# imports added for completeness; name_to_rgb is assumed to come from psbody
import time

import numpy as np
import torch
from IPython.display import display, clear_output
from psbody.mesh.colors import name_to_rgb

meshes = torch.load('data/grabnet_data/meshes.pt')

N = len(meshes)

for i in range(N):

    obj = meshes[i][0]
    hand = meshes[i][1]

    hand.set_vertex_colors(vc=[245, 191, 177])

    def_color = np.array([[102, 102, 102, 255]])
    if hasattr(obj.visual, 'vertex_colors') and (obj.visual.vertex_colors == np.repeat(def_color, obj.vertices.shape[0], axis=0)).all():
        obj.set_vertex_colors(vc=name_to_rgb['yellow'])
    
    scene = obj.scene()
    scene.add_geometry(hand)
    a = scene.show()
    display(a)
    time.sleep(10)
    clear_output(wait=True)

When I run it for a surfboard that is scaled down, there is still some collision. How can I avoid this collision, since it is not realistic?
[screenshot]

As you can see, the surfboard has no holes itself, so I'm not sure how one of the fingers has penetrated it.
[screenshot]
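For reference, a minimal sketch of one way to measure such interpenetration, assuming trimesh-style watertight meshes (this is not part of GrabNet; the function name is illustrative):

import numpy as np
import trimesh

def penetrating_vertices(obj_mesh: trimesh.Trimesh, hand_verts: np.ndarray) -> np.ndarray:
    # contains() ray-tests each point against a watertight mesh
    inside = obj_mesh.contains(hand_verts)
    return hand_verts[inside]  # hand vertices that ended up inside the object

Thin shapes like a surfboard make such tests sensitive to mesh quality, which may be part of why penetrations slip past RefineNet here.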

Difference of rhand_betas and rhand(v-template).ply

Hi @otaheri , thanks for the great work!
I'm using GrabNet and the GRAB dataset. I'm confused about the relationship between rhand_betas and the rhand v-templates. Are they mutually exclusive? I also find that the v-template is not equal to mano_layer.forward(betas); is this normal?
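A quick way to check this numerically, sketched under the assumption that the mano package from this repo is installed (the model and .ply paths are placeholders, and the betas value stands in for the subject's rhand_betas). If the v-template is a registration that lives outside MANO's low-dimensional shape space (my assumption), exact equality would not be expected:

import numpy as np
import torch
import trimesh
import mano

rh_model = mano.load(model_path='mano_v1_2/models/MANO_RIGHT.pkl',
                     is_rhand=True, flat_hand_mean=True)
betas = torch.zeros(1, 10)  # replace with the subject's rhand_betas
verts_from_betas = rh_model(betas=betas).vertices[0]  # zero pose, zero transl

subj_template = trimesh.load('rhand_vtemplate.ply', process=False)  # placeholder path
subj_verts = torch.tensor(np.asarray(subj_template.vertices), dtype=torch.float32)
# a small residual is plausible: 10 betas can only approximate the template
print((verts_from_betas - subj_verts).abs().max())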

What should 'obj_info.npy' in the 'data' folder be?

Context: I'm trying the second example to generate grasps for test data and compare to ground truth.

The GrabNet Dataset has two zip files for Information Tables, Objects and Subjects, and I downloaded the former, which is named 'grab__objects_info.zip' (by the way, why isn't it 'data__objects_info.zip'?). When unzipped, a 'grab' folder appears containing 'objects_info.npz', with many xxx.npy files inside. The thing is, I cannot figure out which of them should be renamed to 'obj_info.npy' and moved into the 'data' folder. I also don't understand why we need to specify an arbitrary 'obj_info.npy' when our main goal is to see how the model works on a given test dataset.
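If it helps, a small sketch for inspecting what the archive actually contains before renaming anything (the path is the one from the question):

import numpy as np

data = np.load('grab/objects_info.npz', allow_pickle=True)
print(list(data.keys()))  # each key corresponds to one inner .npy file
print({k: getattr(data[k], 'shape', None) for k in data.keys()})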

Difference / correspondence between GRAB & GrabNet Datasets

Hi, I had some questions regarding the differences between the data for GRAB & GrabNet. I understand that some additional pre-processing was performed on GRAB data to obtain the data for GrabNet. However, the change in variable names and their usage is slightly confusing to me.

Q1. Right-hand parameters

For the right hand sequence data in GRAB, there are 4 parameters with the following dimensions:

  • 'global_orient' (3,)
  • 'hand_pose' (24,)
  • 'transl' (3,)
  • 'fullpose' (45,)

For GrabNet, on the other hand, there are 3 main parameters with the following dimensions:

  • 'global_orient_rhand_rotmat' (1, 3, 3)
  • 'fpose_rhand_rotmat' (15, 3, 3)
  • 'trans_rhand' (3,)

Could you please tell me how these two sets of parameters are related and what exactly they mean? I see that in GRAB/grab/grab_preprocessing.py you are using the first set of parameters to pass to the right-hand model to obtain right hand vertices. Can right hand vertices be obtained from the second set of parameters as well?
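For what it's worth, the shapes suggest the GrabNet parameters are the same rotations in matrix form: 'fullpose' (45,) is 15 joints × 3 axis-angle values, and 'global_orient' (3,) is one axis-angle rotation. A hedged sketch of the conversion, using pytorch3d as one possible converter (not necessarily what the repo itself uses):

import torch
from pytorch3d.transforms import axis_angle_to_matrix

fullpose = torch.zeros(45)       # GRAB 'fullpose' for one frame
global_orient = torch.zeros(3)   # GRAB 'global_orient' for one frame

fpose_rotmat = axis_angle_to_matrix(fullpose.view(15, 3))       # (15, 3, 3), cf. 'fpose_rhand_rotmat'
global_rotmat = axis_angle_to_matrix(global_orient.view(1, 3))  # (1, 3, 3), cf. 'global_orient_rhand_rotmat'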

Q2. Object parameters

Similarly, for object parameters:

  • For GRAB, we have: 'transl' (3,) & 'global_orient' (3,).
  • For GrabNet: 'trans_obj' (3,) & 'root_orient_obj_rotmat' (3, 3)

Could you please explain the difference between global_orient and a rotation matrix root_orient_obj_rotmat? Also, the translation parameters don't appear to be aligned. For instance, for GrabNet, all trans_obj values appear to be zeros, which is not the case in GRAB for transl.

Q3. Folder naming.

I noticed a discrepancy for "lift" in the GRAB dataset, which I believe is referred to as "pick_all" in GrabNet. Just wanted to confirm with you whether these correspond to each other.

Thanks!

psbody cannot import 'plyutils'

Hi, thanks for your wonderful research work!

I am working on a research project and I'm highly interested in adapting your model into our work. I followed the installation instructions step by step, but when I tried to run grab_new_objects.py to generate grasps for unseen objects, it consistently gave the error that 'plyutils' cannot be imported. I'm wondering if you can give me a hand with fixing this. Thanks very much!

[screenshot]

FileNotFoundError: [Errno 2] No such file or directory: 'data/grabnet_data/meshes.pt' in Google Colab demo

Hi Omid,

Thanks a lot for the amazing research work.

So I tried this for my own wine_bottle.obj file, which has ~800 faces. All the cells before this one ran correctly.

How should I fix this one?

# run to see all grasps in one view

meshes = torch.load('data/grabnet_data/meshes.pt')
N = len(meshes)
col = int(5*np.sqrt(N/15))

all_meshes = []
bounds = meshes[0][0].bounds[:,:2]

dx = np.linalg.norm(bounds,axis=1).sum()
dx = .15

for i in range(N):

    obj = meshes[i][0]
    hand = meshes[i][1]

    a = i%col
    b = i//col

    offset = np.array([a*dx, b*dx, 0])*2.

    obj.vertices[:] += offset
    hand.vertices[:] += offset


    hand.set_vertex_colors(vc=[245, 191, 177])

    def_color = np.array([[102, 102, 102, 255]])
    if hasattr(obj.visual, 'vertex_colors') and (obj.visual.vertex_colors == np.repeat(def_color, obj.vertices.shape[0], axis=0)).all():
        obj.set_vertex_colors(vc=name_to_rgb['yellow'])

    all_meshes.append(obj)
    all_meshes.append(hand)

all_meshes = Mesh.concatenate_meshes(all_meshes)
all_meshes.show()

Error is:

---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-20-90bd36258bb9> in <module>()
      1 # run to see all grasps in one view
      2 
----> 3 meshes = torch.load('data/grabnet_data/meshes.pt')
      4 N = len(meshes)
      5 col = int(5*np.sqrt(N/15))

2 frames
/usr/local/lib/python3.7/dist-packages/torch/serialization.py in __init__(self, name, mode)
    209 class _open_file(_opener):
    210     def __init__(self, name, mode):
--> 211         super(_open_file, self).__init__(open(name, mode))
    212 
    213     def __exit__(self, *args):

FileNotFoundError: [Errno 2] No such file or directory: 'data/grabnet_data/meshes.pt'

[screenshots]

NaN gradient at the beginning of the training

Hi Otaheri,

Thank you for the great dataset and the code. I have tried to train the model using the provided code and the dataset and have encountered the following error in the first epoch:

2020-11-27 16:14:05,346 - root - INFO - Dataset Train, Vald, Test size respectively: 0.32 M, 31.03 K, 65.32 K
2020-11-27 16:14:08,496 - root - INFO - Total Trainable Parameters for CoarseNet is 14.04 M.
2020-11-27 16:14:08,497 - root - INFO - Total Trainable Parameters for RefineNet is 3.26 M.
2020-11-27 16:14:08,510 - root - INFO - Started Training at 2020-11-27_16:14:08 for 500 epochs
2020-11-27 16:14:08,511 - root - INFO - --- starting Epoch # 001
2020-11-27 16:14:12,249 - root - INFO - [V00]_TR00_E000 - It 00000 - CoarseNet - train: [T:1.58e+01] - [loss_kl = 2.58e-02 | loss_edge = 1.62e-01 | loss_mesh_rec = 2.18e+00 | loss_dist_h = 7.86e+00 | loss_dist_o = 5.62e+00]
[W python_anomaly_mode.cpp:60] Warning: Error detected in DivBackward0. Traceback of forward call that caused the error:
File "/home/workspace/GrabNet/train.py", line 106, in
grabnet_trainer.fit()
File "/home/workspace/GrabNet/grabnet/train/trainer.py", line 412, in fit
train_loss_dict_cnet, train_loss_dict_rnet = self.train()
File "/home/workspace/GrabNet/grabnet/train/trainer.py", line 216, in train
drec_cnet = self.coarse_net(**dorig)
File "/home/anaconda3/envs/grabnet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/workspace/GrabNet/grabnet/models/models.py", line 133, in forward
hand_parms = self.decode(z_s, bps_object)
File "/home/workspace/GrabNet/grabnet/models/models.py", line 117, in decode
results = parms_decode(pose, trans)
File "/home/workspace/GrabNet/grabnet/models/models.py", line 208, in parms_decode
pose_full = CRot2rotmat(pose)
File "/home/workspace/GrabNet/grabnet/tools/utils.py", line 87, in CRot2rotmat
b2 = F.normalize(reshaped_input[:, :, 1] - dot_prod * b1, dim=-1)
File "/home/anaconda3/envs/grabnet/lib/python3.8/site-packages/torch/nn/functional.py", line 3752, in normalize
return input / denom
(function print_stack)
Traceback (most recent call last):
File "/home/workspace/GrabNet/train.py", line 106, in
grabnet_trainer.fit()
File "/home/workspace/GrabNet/grabnet/train/trainer.py", line 412, in fit
train_loss_dict_cnet, train_loss_dict_rnet = self.train()
File "/home/workspace/GrabNet/grabnet/train/trainer.py", line 221, in train
loss_total_cnet.backward()
File "/home/anaconda3/envs/grabnet/lib/python3.8/site-packages/torch/tensor.py", line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/anaconda3/envs/grabnet/lib/python3.8/site-packages/torch/autograd/init.py", line 125, in backward
Variable._execution_engine.run_backward(
RuntimeError: Function 'DivBackward0' returned nan values in its 0th output.

Process finished with exit code 1

Could you shed some light on how to fix the above issue?

thanks

Regards,
Juil
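One plausible culprit, judging from the traceback, is the F.normalize call in CRot2rotmat dividing by a near-zero vector norm. A sketch of a guard (an assumption, not the repo's official fix): raising eps above the default 1e-12 keeps the gradient of the division bounded.

import torch
import torch.nn.functional as F

def safe_normalize(v, dim=-1, eps=1e-8):
    # F.normalize clamps the denominator at eps, so a near-zero-length
    # row cannot drive the division gradient toward NaN/inf
    return F.normalize(v, dim=dim, eps=eps)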

How to get the bps.npz

Hi,

Very nice work! I wanted to ask how you created the bps.npz file that we download? Also, you have used the rhand_weight.npy file. Did you get it by pre-processing the GRAB dataset?
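Not an authoritative answer, but a sketch of how such a basis could be generated with bps_torch; the point count, radius, attribute name, and .npz layout here are assumptions, not the repo's exact settings:

import numpy as np
from bps_torch.bps import bps_torch

bps = bps_torch(bps_type='random_uniform', n_bps_points=4096,
                radius=0.15, n_dims=3)
# save the sampled basis points so they can be reloaded as a custom basis
np.savez('bps.npz', basis=bps.bps.cpu().numpy())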

Questions for training codes

Hi, Omid. Thanks for sharing your work. I have two questions from reading the code.

  1. During training, why does self.rhm_train NOT require grad?
    with torch.no_grad():
  2. What is the difference between fpose_rhand_rotmat_f and fpose_rhand_rotmat in the training data?

Hoping for your reply. Thanks.

Please enter either pointcloud or meshes to compute bps

Hi,
While running grab_new_objects.py, I encounter the following issue:

python grabnet/tests/grab_new_objects.py --obj-path contact_meshes/camera.ply --rhm-path mano_v1_2/

2022-10-19 16:56:41,969 - root - INFO - Restored CoarseNet model from grabnet/models/coarsenet.pt
2022-10-19 16:56:41,975 - root - INFO - Restored RefineNet model from grabnet/models/refinenet.pt
2022-10-19 16:56:41,984 - root - INFO - #################
Colors Guide:
Gray ---> GrabNet generated grasp

Traceback (most recent call last):
File "/home/aa/Documents/GitHub/GrabNet/grabnet/tests/grab_new_objects.py", line 235, in
grab_new_objs(grabnet,obj_path, rot=True, n_samples=10)
File "/home/aa/Documents/GitHub/GrabNet/grabnet/tests/grab_new_objects.py", line 129, in grab_new_objs
bps_object = bps.encode(verts_obj, feature_type='dists')['dists']
File "/home/aa/miniconda3/envs/Pytorch3D/lib/python3.9/site-packages/bps_torch/bps.py", line 163, in encode
raise ('Please enter either pointcloud or meshes to compute bps!')
TypeError: exceptions must derive from BaseException

Can somebody please help me to solve this one?

Thanks
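A hedged guess at a workaround: bps_torch's encode appears to branch on receiving a torch.Tensor point cloud, and anything else falls through to that (malformed) raise statement. Converting the vertices first may help:

import torch

# verts_obj and bps come from grab_new_objects.py; shown as placeholders here
verts_obj = torch.as_tensor(verts_obj, dtype=torch.float32)
bps_object = bps.encode(verts_obj, feature_type='dists')['dists']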

Changing the scale from 1. to 0.3 didn't make any difference in the size of the object in the hand

So I changed the scale from 1.0, which worked great for the camera object, to 0.3 (meaning 30 cm?) for the wine bottle. With either value, unfortunately, the wine bottle stays the exact same, inappropriately small size in the hand. Could you please advise how to fix this?

Also, is there a way I could find the exact scale for the objects? Is there a 3D model repo that has such info?

Thank you.
[screenshot]
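One workaround sketch, assuming the pipeline works in meters: scale the mesh file itself before passing it to the script (trimesh shown as one option, not the repo's own loader):

import trimesh

obj = trimesh.load('wine_bottle.obj')
obj.apply_scale(0.3)  # multiply all vertex coordinates by 0.3
obj.export('wine_bottle_scaled.obj')

If the script's scale argument is consumed elsewhere in the pipeline, pre-scaling the file sidesteps the question entirely.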

Inquiry on Generating Contact Maps for Hand-Object Interactions

Hi @otaheri,

Firstly, I'd like to commend the exceptional work by Michael Black's group, specifically this one; I'm genuinely impressed by its utility and innovation.

I'm currently focused on generating contact maps for scenarios where a hand interacts with an object. I've successfully executed grab_new_objects.py in this regard. Could you please guide me on how to obtain these contact maps? Additionally, it would be helpful to know the format of these maps.

Looking forward to your guidance.

Thank you!
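In the meantime, a minimal sketch of one common formulation, a thresholded nearest-neighbor distance map (an assumption, not GrabNet's official contact annotation):

import numpy as np
from scipy.spatial import cKDTree

def contact_map(obj_verts, hand_verts, thresh=0.005):
    # per object vertex: distance (in meters) to the nearest hand vertex
    dists, _ = cKDTree(hand_verts).query(obj_verts)
    return (dists < thresh).astype(np.float32)  # 1.0 where the object is touched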

Unrealistic ground truth grasps

When I run the command python grabnet/tests/test.py ..., I find that the ground-truth grasps (in blue) are unrealistic. Is this normal behaviour?

Subject 10, and object is a mug.

[screenshot]

Permission Issue: Not able to access data in the Colab example

Hello, I'm using the Colab example from this project's README.

On line:
!source scripts/prepare_data.sh
I get the error:

/usr/local/lib/python3.10/dist-packages/gdown/parse_url.py:44: UserWarning: You specified a Google Drive link that is not the correct link to download a file. You might want to try `--fuzzy` option or the following url: https://drive.google.com/uc?id=1L_We51j5FwIo5PeLvAK_9I3ZuFU_IWAq
  warnings.warn(
Downloading...
From: https://drive.google.com/file/d/1L_We51j5FwIo5PeLvAK_9I3ZuFU_IWAq/view?usp=sharing
To: /content/GrabNet/data/view?usp=sharing
84.4kB [00:00, 545MB/s]
unzip:  cannot find or open grabnet_data.zip, grabnet_data.zip.zip or grabnet_data.zip.ZIP.
rm: cannot remove 'grabnet_data.zip': No such file or directory
mv: cannot stat 'data/grabnet_data/refinenet.pt': No such file or directory
mv: cannot stat 'data/grabnet_data/coarsenet.pt': No such file or directory

When I go to the link directly, I'm able to see the files but not download them. Please help :)
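The warning itself hints at the fix: gdown received a share link rather than a file id. A sketch of the workaround in Python (the output filename is what the script later unzips; an untested assumption):

import gdown

url = 'https://drive.google.com/file/d/1L_We51j5FwIo5PeLvAK_9I3ZuFU_IWAq/view?usp=sharing'
gdown.download(url=url, output='grabnet_data.zip', fuzzy=True)  # fuzzy=True extracts the file id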

Annotation mismatch in the GrabNet dataset

Hi Taheri,
Thanks for the great work you have done.

Currently we are working on integrating the GrabNet dataset into another project, and we have found an annotation mismatch in the dataset.

Here I will describe how the mismatch occurs.

From lines 68-73 in your code (GrabNet/grabnet/tests/test.py), we think the proper way to load an object from your GRAB/data/object_meshes/contact_meshes is:

  1. Rotate the original mesh by frame_data['root_orient_obj_rotmat']
  2. Translate the rotated mesh by frame_data['trans_obj']

But after we followed these steps, the transformed object mesh has an apparent rotation and translation mismatch from the annotation frame_data['verts_object'].

I provide an example to reproduce the mismatch, using the GrabNet train split, object flashlight, frame id 17221:

### Acquire the frame_data ...

rot_mat = frame_data["root_orient_obj_rotmat"].numpy().reshape(3, 3)
transl = frame_data["trans_obj"].numpy()
obj_verts_downsampled = frame_data["verts_object"].numpy()  # (2048, 3)

# obj_mesh is a TriMesh object loaded directly from your GRAB/data/object_meshes/contact_meshes
obj_verts_transformed = (rot_mat @ obj_mesh.vertices.T).T + transl

Then we use Open3D to visualize:

  • obj_verts_downsampled (in PointCloud format)
  • obj_verts_transformed (in TriangleMesh format)

We provide a screenshot of the example:
[screenshot]

Thanks again for your excellent work!
And we really hope you can help us figure this out.

Lixin

'Select' object has no attribute 'astype'

Hello,

In case this helps anyone else: when running this, I got the following error.

> python grabnet/tests/grab_new_objects.py --obj-path models/toothpaste.ply --rhm-path .

2020-10-13 01:13:51,159 - root - INFO - Restored CoarseNet model from grabnet/models/coarsenet.pt
2020-10-13 01:13:51,167 - root - INFO - Restored RefineNet model from grabnet/models/refinenet.pt
Traceback (most recent call last):
  File "grabnet/tests/grab_new_objects.py", line 230, in <module>
    grab_new_objs(grabnet,obj_path, rot=True, n_samples=10)
  File "grabnet/tests/grab_new_objects.py", line 100, in grab_new_objs
    flat_hand_mean=True).to(grabnet.device)
  File "/home/patrick/miniconda3/envs/grab/lib/python3.7/site-packages/mano/model.py", line 74, in load
    return MANO(model_path, is_rhand, **kwargs)
  File "/home/patrick/miniconda3/envs/grab/lib/python3.7/site-packages/mano/model.py", line 220, in __init__
    self.register_buffer('shapedirs', to_tensor(to_np(shapedirs), dtype=dtype))
  File "/home/patrick/miniconda3/envs/grab/lib/python3.7/site-packages/mano/utils.py", line 44, in to_np
    return array.astype(dtype)
AttributeError: 'Select' object has no attribute 'astype'

Modifying the custom MANO package by inserting the following snippet at line 40 of lib/python3.7/site-packages/mano/utils.py works for me. It converts the chumpy array to numpy:

    if 'chumpy.reordering.Select' in str(type(array)):
        array = array.r
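For context, the patched helper might end up looking roughly like this (a sketch; the surrounding lines of mano/utils.py are paraphrased, and only the Select unwrap is the actual change):

import numpy as np

def to_np(array, dtype=np.float32):
    # chumpy expression objects expose the evaluated numpy array as .r
    if 'chumpy.reordering.Select' in str(type(array)):
        array = array.r
    return array.astype(dtype)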

Dataset Question: Where in the data is "tools -> object_meshes"?

Hello! I'm trying to download the GrabNet dataset to attempt to re-train the model.

I downloaded and unzipped the data, but I don't see the object_meshes folder under tools. Subject_meshes are visible in the image below. Should I use the object meshes from the GRAB dataset?

Thank you!

[screenshots]
