
pku-epic / unidexgrasp

Official code for "UniDexGrasp: Universal Robotic Dexterous Grasping via Learning Diverse Proposal Generation and Goal-Conditioned Policy" (CVPR 2023)

Python 99.91% Shell 0.09%

unidexgrasp's People

Contributors

lhrrhl0419, lym29, mzhmxzh, wkwan7, xyz-99


unidexgrasp's Issues

Will you finish the dex_generation part?

Hi, thanks for your contribution.

I have tested the code of dexgrasp_generation and found that, due to missing data, the whole training pipeline cannot be run.
Will you release the data preparation steps in the future?

Questions about the code in dex_dataset.py.

Hi,
I'm confused about some code in "dexgrasp_generation/datasets/dex_dataset.py" and I hope to get some explanations.
First, in line 125, this appears to be a 1x4 vector. What does it correspond to?
plane = recorded_data["plane"]
Second, in lines 152-157, pcs_table and pose_matrices appear to be 100x400x3 and 100x4x4 tensors. What does the 100 mean?
obj_pc_path = pjoin(self.root_path, "DFCData", "mesh", category, instance_no, "pcs_table.npy")
pose_path = pjoin(self.root_path, "DFCData", "mesh", category, instance_no, "poses.npy")
pcs_table = torch.tensor(np.load(obj_pc_path, allow_pickle=True), dtype=torch.float)
pose_matrices = torch.tensor(np.load(pose_path, allow_pickle=True), dtype=torch.float)
Third, in line 132, this seems to use the distance between two vectors to retrieve an index from the object's pose, in order to select an object pose for grasping. What is the reason for doing this?
index = (torch.tensor(plane[:3], dtype=torch.float) - pose_matrices[:, 2, :3]).norm(dim=1).argmin()
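For anyone else reading this issue, here is a toy numpy sketch of what that line appears to compute (the shapes are taken from the question; reading plane as a plane equation [nx, ny, nz, d] is my assumption, not confirmed by the authors):

```python
import numpy as np

# Hypothetical data with the shapes from the question: plane is a 1x4
# vector (assumed to be a plane equation n.x + d = 0, i.e. [nx, ny, nz, d]),
# and pose_matrices is 100x4x4 (one recorded pose per stable placement).
rng = np.random.default_rng(0)
plane = np.array([0.0, 0.0, 1.0, -0.1])          # assumed: table normal + offset
pose_matrices = rng.standard_normal((100, 4, 4))  # toy stand-in for poses.npy

# Line 132 compares plane[:3] (the plane normal) with pose_matrices[:, 2, :3]
# (the third row of each pose's rotation block) and picks the pose whose row
# is closest to the normal in Euclidean distance.
diff = plane[:3] - pose_matrices[:, 2, :3]
index = np.argmin(np.linalg.norm(diff, axis=1))
print(index)
```

One plausible reading is that this selects, among the 100 stored object poses, the one whose orientation best agrees with the recorded table plane for this grasp; a confirmation from the authors would still be helpful.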
Finally, there are 100 object poses and about 200 grasp poses in the dataset; I am confused about the code that establishes the correspondence between them.
Best wishes.

Joint training in grasp generation

Hi there,

I hope this message finds you well. I have been training the goal grasp generation stage but am uncertain about some details of the process, specifically the joint training of ipdf and glow with ContactNet.

Firstly, I would like to confirm whether my understanding is correct: am I supposed to first train these three networks independently and then fine-tune the glow network using the pretrained ipdf and ContactNet? I became a bit confused because, in the provided config file, the checkpoint selection for ipdf and ContactNet is set to the initial one.

Furthermore, it would be very helpful if you could share the number of epochs you used for pretraining and joint training.

Thank you so much for presenting such a great work 👍

Provided checkpoint model is inconsistent with the latest code

Hi, when I try to run the eval code using the checkpoints you provided, some parameters have mismatched sizes. I tried to figure it out by myself, but none of my attempts worked. Could you please provide a little help? Thanks a lot!

Here is the output for ipdf model:

RuntimeError: Error(s) in loading state_dict for IPDFModel:
size mismatch for net.implicit_model.feat_mlp.weight: copying a param with shape torch.Size([256, 64]) from checkpoint,
the shape in current model is torch.Size([256, 128]).
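While waiting for matching checkpoints, one hedged workaround (not the authors' fix) is to load only the checkpoint entries whose shapes match the current model and leave the rest at their initial values. A minimal sketch with plain dicts and toy arrays, mirroring the reported mismatch:

```python
import numpy as np

def filter_matching(model_state, ckpt_state):
    """Keep checkpoint params whose name and shape match the model."""
    kept, skipped = {}, []
    for name, param in ckpt_state.items():
        if name in model_state and model_state[name].shape == param.shape:
            kept[name] = param
        else:
            skipped.append(name)
    return kept, skipped

# Toy example mirroring the reported error: the model expects (256, 128)
# but the checkpoint stores (256, 64) for the same layer.
model_state = {"net.implicit_model.feat_mlp.weight": np.zeros((256, 128))}
ckpt_state = {"net.implicit_model.feat_mlp.weight": np.zeros((256, 64))}
kept, skipped = filter_matching(model_state, ckpt_state)
print(skipped)  # the mismatched layer is skipped rather than crashing
```

With PyTorch one would then call model.load_state_dict(kept, strict=False); note the skipped layers stay at their initialization, so evaluation results may differ noticeably from the paper's.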

Dataset issues

While running dexgrasp_generation, an error occurred during GraspIPDF training: FileNotFoundError: [Errno 2] No such file or directory: 'data/DFCData/poses/core/browl-530c2d8f55f9e9e8ce364511e87f52b0'; FileNotFoundError: [Errno 2] No such file or directory: 'data/DFCData/meshes/core/bowl-d1addad5931dd337713f2e93cbeac35d'. However, there are no poses or meshes folders in the DFCData directory of the downloaded dataset. May I ask if there is an error in the code, or if the dataset name is incorrect?

Distilled policy student model checkpoint

Hi, the work is truly inspiring! I have a question: Is the uploaded example model for the policy net the teacher model? If so, could you kindly share the distilled student model checkpoint for quick inference? Thank you so much.

"It seems that I'm missing some utility files related to 'data'."

Hello, thank you very much for open-sourcing this excellent framework and providing a large-scale dataset. While studying it, I encountered an error: ImportError: cannot import name 'get_one_shape' from 'algo.pn_utils.maniskill_learn.utils.data'. I have implemented some of the functions myself, but I am unsure how to implement others, and it seems that I might be missing some utility files. To ensure the training process is not affected, would it be possible to provide the missing files? Thank you.

I have a question about training time

Hello, I have a question about training time. I'm using an RTX 3080 with 10 GB of VRAM, and it takes 3 to 6 hours to complete one epoch. Is this expected? Thank you for your response.


CSDF ---identifier "CHECK_EQ" is undefined

Hello, I encountered an error when installing CSDF: identifier "CHECK_EQ" is undefined.
I tried installing with both pip install -e . and python setup.py install, but both failed. Can you help me solve this problem, or tell me the environment required for compilation? Thank you.

Choose "random" or "grid" for function generate_queries in ipdf_network.py

Thanks for this amazing repo! I have a small question about the code in relation to the paper. The generate_queries function in dexgrasp_generation/network/models/graspipdf/ipdf_network.py has two modes, "random" and "grid". I think the paper says GraspIPDF generates rotations in "grid" mode, but the code uses "random" mode. Is that because "random" mode has better performance?

Also, for the "random" mode, the rotation matrices are generated directly by pytorch3d.transforms.random_rotations. Will these rotations be uniformly distributed over SO(3)?

Thanks a lot!
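On the uniformity question: normalizing a 4D Gaussian sample to a unit quaternion is the standard way to draw rotations uniformly over SO(3), and I believe this is what such libraries do internally, though I cannot confirm pytorch3d's exact implementation here. A quick numpy-only sanity check of that construction:

```python
import numpy as np

# Sample unit quaternions uniformly on S^3 by normalizing 4D Gaussians;
# the induced rotations are uniform over SO(3).
rng = np.random.default_rng(0)
q = rng.standard_normal((10000, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)

# Third column of the rotation matrix for quaternion (w, x, y, z),
# i.e. where the z-axis is sent; for a uniform distribution these
# directions cover the sphere evenly, so their mean is near zero.
w, x, y, z = q.T
R_col_z = np.stack(
    [2 * (x * z + w * y), 2 * (y * z - w * x), 1 - 2 * (x * x + y * y)],
    axis=1,
)
print(np.abs(R_col_z.mean(axis=0)).max())  # close to 0 for a uniform sample
```

If you want to test the library's sampler itself, the same check can be run on the third column of the matrices returned by pytorch3d.transforms.random_rotations.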

Bad allocate and PyTorch.cannot allocate memory

Hello, when I run eval.py in the dexgrasp_generation part of your code, I get errors such as std::bad_alloc and RuntimeError: false INTERNAL ASSERT FAILED at "..\aten\src\ATen\MapAllocator.cpp":135, please report a bug to PyTorch. cannot allocate memory (12). Overall, these all seem to be out-of-memory problems. Have you encountered similar issues when running the code? Is there memory the code fails to release, or does our server simply not have enough memory? Please let me know if you have any relevant information.

Pre-trained Checkpoints

Hi, thank you for sharing such a great piece of work!
May I ask where I can download the pre-trained checkpoints?

Possible Bug in pointcloud observation.

PointCloud in observation is:

self.obs_buf[:, point_cloud_start:point_cloud_start + self.num_pc_flatten].copy_(points_fps.reshape(self.num_envs, self.num_pc_flatten))
mask_hand = others["mask_hand"]
mask_object = others["mask_object"]
self.obs_buf[:, point_cloud_start+self.num_pc_flatten:point_cloud_start+self.num_pc_flatten+self.num_pc_downsample].copy_(mask_hand)
self.obs_buf[:, point_cloud_start+self.num_pc_flatten+self.num_pc_downsample:point_cloud_start+self.num_pc_flatten+2*self.num_pc_downsample].copy_(mask_object)

The code above gives obs as [.....| point_cloud_xyz_rgb(1024*6) | mask_hand(1024) | mask_object(1024) |...], for example,
[....|(x1,y1,z1,r1,g1,b1),(x2,y2,z2,r2,g2,b2)....|(hm1,hm2,....)|(om1,om2....)|.....]
Yet in the student policy:

pc = observations[:, 300:].reshape(-1, 1024, 8)

This is only correct if the point cloud is 8-dimensional per point, which requires the obs to be [.....| point_cloud_xyz_hm_om(1024*8) |...],
in which
point_cloud_xyz_hm_om is torch.cat([pointcloud_xyz, hand_mask.float().unsqueeze(-1), object_mask.float().unsqueeze(-1)], dim=-1), for example [....|(x1,y1,z1,r1,g1,b1,hm1,om1),(x2,y2,z2,r2,g2,b2,hm2,om2)|....]
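To make the suspected mismatch concrete, here is a toy numpy sketch (sizes shrunk, names hypothetical) showing that a flat [points | mask_hand | mask_object] buffer is not the same memory layout as per-point 8-dim features, so a plain reshape to (..., 8) mixes coordinates with mask values:

```python
import numpy as np

# Toy sizes: 2 envs, 4 points, 6 features (xyz+rgb) instead of 1024x6.
n_env, n_pts = 2, 4
pts = np.arange(n_env * n_pts * 6, dtype=float).reshape(n_env, n_pts, 6)
hm = np.zeros((n_env, n_pts))   # hand mask
om = np.ones((n_env, n_pts))    # object mask

# Block layout written by the env code: [flat_xyzrgb | hm | om].
obs_seg = np.concatenate([pts.reshape(n_env, -1), hm, om], axis=1)

# Naive reshape assumes per-point interleaving (x,y,z,r,g,b,hm,om):
naive = obs_seg.reshape(n_env, n_pts, 8)

# Correct decode for the block layout: split the segments first, then
# concatenate per point.
xyzrgb = obs_seg[:, : n_pts * 6].reshape(n_env, n_pts, 6)
masks = obs_seg[:, n_pts * 6:].reshape(n_env, 2, n_pts).transpose(0, 2, 1)
decoded = np.concatenate([xyzrgb, masks], axis=2)

print(np.array_equal(naive, decoded))  # False: the two layouts differ
```

So unless the obs buffer is actually filled per point as (x,y,z,r,g,b,hm,om), the reshape(-1, 1024, 8) in the student policy would read mask values as coordinates; it would be good to hear from the authors which layout is intended.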

Question about object data.

Hi, I have two questions, could you help me to understand them?

  1. What's the difference between assets/meshdatav3_pc_feat.zip and assets/meshdatav3_pc_fps.zip?
  2. Why is there no "apple" or "banana" in cfg/train_set.yaml, cfg/test_set_seen_cat.yaml, or cfg/test_set_unseen_cat.yaml?
