
pytorch_6dof-graspnet's Issues

Cannot download the pre-trained model

Hello, your work is brilliant. I encountered an error when trying to download the pre-trained model: Google Drive says the file has been moved to your trash. I would appreciate it if you could restore the file.

Training error

I want to train my own weights following your instructions (including using the ShapeNet data and using Manifold to convert the meshes). The environment is nvidia/cuda:11.3.1-cudnn8-devel-ubuntu20.04, and the torch version is the same as yours.
I use this command to start the training process:
python3 train.py --arch {vae,gan,evaluator} --dataset_root_folder ./dataset/unified_grasp_data
However, I encountered the error shown below.
[screenshot of the first error]
After researching the error, I changed the num_workers parameter to 0, but then ran into another problem, also shown below.
[screenshot of the second error]
Have you ever encountered similar version or dataset issues?
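Not a definitive fix, but a sketch of fully disabling multi-process data loading, under the assumption that the repo's --num_threads flag (used elsewhere in these issues) is what feeds the DataLoader's num_workers:

    python3 train.py --arch vae --dataset_root_folder ./dataset/unified_grasp_data --num_threads 0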

No module named 'vtk'

Hi,

Thank you very much for sharing your implementation!

I followed the instructions in the README file to install the dependencies, but I got the following error when pip was trying to build mayavi:

ModuleNotFoundError: No module named 'vtk'

Background Info

  1. I am using Pop!_OS (an Ubuntu-based Linux distro).

  2. I am trying to install inside a conda environment created with the following command:

conda create -n 6dofgraspnet python=3.6

  3. I have cloned the PointNet PyTorch repo into another folder parallel to the root folder of this repo.

I am wondering if there is anything I can do to resolve this issue.
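A minimal workaround sketch, assuming the failure is simply that mayavi's build requires VTK to be present beforehand (both package names are the ones on PyPI):

    pip install vtk
    pip install mayavi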

Midpoint computation maybe not correct?

Hi,

The midpoint is computed on line 672 of utils.py, using the first two elements of the transformed control points (grasp_cps).
According to get_control_point_tensor (line 282), the first two control points are always identical, so averaging them just reproduces that point.
Should line 672 perhaps average the first and last control points instead, i.e.

mid = (grasp_cps[:, 0, :] + grasp_cps[:, -1, :]) / 2.0 ?
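A small numpy sketch of the suspected degeneracy, using hypothetical control-point values (the real ones come from get_control_point_tensor):

    import numpy as np

    # Hypothetical control points for one grasp; the first two rows are
    # identical, as the issue says get_control_point_tensor returns.
    grasp_cps = np.array([[[0.00, 0.0, 0.00],
                           [0.00, 0.0, 0.00],
                           [0.05, 0.0, 0.06],
                           [-0.05, 0.0, 0.06]]])

    mid_current = (grasp_cps[:, 0, :] + grasp_cps[:, 1, :]) / 2.0    # just the first point again
    mid_proposed = (grasp_cps[:, 0, :] + grasp_cps[:, -1, :]) / 2.0  # first and last points

    print(mid_current)   # [[0. 0. 0.]]
    print(mid_proposed)  # [[-0.025  0.     0.03 ]]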

Unclear instructions on downloading Shapenet data for training

I'm trying to download the ShapeNet dataset for training:

python3 train.py  --arch {vae,gan,evaluator}  --dataset_root_folder $DATASET_ROOT_FOLDER

The README on how to obtain the dataset is unclear to me. Is there a pragmatic way to populate the meshes folder? I don't see how the IDs in shapenet_ids.txt can be mapped to the ShapeNet dataset. The README only says:

meshes folder: has the folder for all the meshes used. Except cylinder and box, the rest of the folders are empty and need to be populated by the downloaded meshes from ShapeNet.
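A sketch of one pragmatic way to populate the folders, under two assumptions of mine: that shapenet_ids.txt lists one ShapeNet model ID per line, and that the downloaded data follows the ShapeNetCore.v2 layout (<synset>/<model_id>/models/model_normalized.obj):

    import shutil
    from pathlib import Path

    SHAPENET_ROOT = Path("/data/ShapeNetCore.v2")             # adjust to your download
    MESHES_DIR = Path("./dataset/unified_grasp_data/meshes")  # adjust to your dataset root
    IDS_FILE = Path("shapenet_ids.txt")

    MESHES_DIR.mkdir(parents=True, exist_ok=True)
    for model_id in IDS_FILE.read_text().split():
        # Search every synset, since the ids file may not record which
        # synset a model belongs to.
        for obj in SHAPENET_ROOT.glob(f"*/{model_id}/models/model_normalized.obj"):
            shutil.copy(obj, MESHES_DIR / f"{model_id}.obj")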

When I try to train, I get an error

File "/aiLab/zzq/pytorch_6dof-graspnet-master/data/base_dataset.py", line 109, in change_object_and_render
    cad_path, cad_scale, in_camera_pose, thread_id)
File "/aiLab/zzq/pytorch_6dof-graspnet-master/renderer/online_object_renderer.py", line 116, in change_and_render
    color, depth, pc, transferred_pose = self.render(pose)
File "/aiLab/zzq/pytorch_6dof-graspnet-master/renderer/online_object_renderer.py", line 122, in render
    self.renderer = pyrender.OffscreenRenderer(400, 400)
File "/home/ai/anaconda3/envs/zzq_shapenet/lib/python3.6/site-packages/pyrender/offscreen.py", line 31, in __init__
    self._create()
File "/home/ai/anaconda3/envs/zzq_shapenet/lib/python3.6/site-packages/pyrender/offscreen.py", line 135, in _create
    self._platform.init_context()
File "/home/ai/anaconda3/envs/zzq_shapenet/lib/python3.6/site-packages/pyrender/platforms/egl.py", line 177, in init_context
    assert eglInitialize(self._egl_display, major, minor)
File "/home/ai/anaconda3/envs/zzq_shapenet/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 409, in __call__
    return self( *args, **named )
File "/home/ai/anaconda3/envs/zzq_shapenet/lib/python3.6/site-packages/OpenGL/error.py", line 232, in glCheckError
    baseOperation = baseOperation,
OpenGL.error.GLError: GLError(
    err = 12289,
    baseOperation = eglInitialize,
    cArguments = (
        <OpenGL._opaque.EGLDisplay_pointer object at 0x7fa59b0897b8>,
        c_long(0),
        c_long(0),
    ),
    result = 0
)
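Not a repo-specific fix, but EGL error 12289 is EGL_NOT_INITIALIZED, which typically means pyrender cannot bring up headless GPU rendering. A sketch of two common workarounds, relying on pyrender's standard PYOPENGL_PLATFORM environment variable:

    # Option 1: software rendering via OSMesa (requires OSMesa to be installed)
    PYOPENGL_PLATFORM=osmesa python3 train.py --arch vae --dataset_root_folder ./dataset/unified_grasp_data

    # Option 2: retry EGL after making sure the NVIDIA EGL libraries are installed and visible
    PYOPENGL_PLATFORM=egl python3 train.py --arch vae --dataset_root_folder ./dataset/unified_grasp_data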

Bad results

Hello,
Thank you very much for sharing this great work.
I have trained a VAE model following the instructions, and it seems to work well on the NPY dataset you provided:
[screenshot: grasps on the provided NPY data]
But if I use a RealSense D435i to capture the point cloud, the results are not ideal:
[screenshot: grasps on a RealSense D435i point cloud]
[screenshot: grasps on a RealSense D435i point cloud]
Obviously I have made some mistakes, but I don't know where. Where do you think I am most likely to have gone wrong? I hope to get your help; thank you very much.

no module named 'pointnet2-ops'

I got this error when trying to run train.py:
No module named 'pointnet2-ops'
I realized that this module is linked to PointNet++, but I still do not know how to install it after browsing the PointNet++ repo.
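Assuming the missing module is the pointnet2_ops package that ships inside the erikwijmans/Pointnet2_PyTorch repo, a sketch of installing it directly from that repo's subdirectory:

    pip install "git+https://github.com/erikwijmans/Pointnet2_PyTorch.git#subdirectory=pointnet2_ops_lib"

Alternatively, from a local clone of that repo: pip install pointnet2_ops_lib/.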

How to run the demo?

Can anyone here give me correct and clear steps to run the demo files? I need this urgently.
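For reference, the demo invocations that appear in the "Weird VAE results" issue further down, assuming the pretrained checkpoints have been downloaded into checkpoints/:

    python -m demo.main --grasp_sampler_folder checkpoints/vae_pretrained/
    python -m demo.main --generate_dense_grasps --num_grasp_samples 20 --grasp_sampler_folder checkpoints/vae_pretrained/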

Memory Leak

Hi Jens, thanks for the amazing pytorch code!
After running the training script, I noticed that system memory keeps increasing. I just want to check whether you saw the same issue while training your model, and whether you have ideas about what could be causing the memory leak.

[plot: system memory usage growing over training time]
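Not specific to this repo, but one common cause of steadily growing host memory in PyTorch training loops is accumulating loss tensors that still reference the autograd graph. A minimal self-contained sketch of the pattern and the fix:

    import torch

    model = torch.nn.Linear(8, 1)
    losses = []
    for step in range(100):
        x = torch.randn(32, 8)
        loss = model(x).pow(2).mean()
        loss.backward()
        # Leaky: losses.append(loss) keeps every step's graph alive.
        losses.append(loss.detach().item())  # store a plain float instead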

Question about get_inlier_grasp_indices

Hi Jens,

I found that you use get_inlier_grasp_indices to filter out grasps that are far away from the center of the object. I didn't find a similar function in the original TF version of the code. Is this something you added in this repo?

Also, you use pc_mean as the object center to compute the distance between the grasps and the object (https://github.com/jsll/pytorch_6dof-graspnet/blob/master/grasp_estimator.py#L62). However, the grasps are computed on the normalized point cloud, which means the center of the "normalized" object is (0, 0, 0). So I feel you should pass (0, 0, 0) instead of pc_mean when calling get_inlier_grasp_indices. Can you verify whether this is a bug? Thanks.
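An illustrative stand-in for the filtering logic (not the repo's actual function signature), showing the proposed change of measuring distances from the origin of the normalized frame:

    import torch

    def inlier_grasp_indices(grasp_translations, center, threshold):
        # Keep grasps whose translation lies within `threshold` of `center`.
        dists = torch.norm(grasp_translations - center, dim=1)
        return torch.nonzero(dists < threshold).squeeze(1)

    grasps = torch.randn(100, 3) * 0.2  # hypothetical grasp translations
    # Grasps live in the normalized frame, so the object center there is
    # the origin, not pc_mean.
    inliers = inlier_grasp_indices(grasps, torch.zeros(3), threshold=0.3)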

How can I use this package in ROS?

Hello!

I'm new to programming. I want to use this package in ROS, but I don't know how to modify it for that. Can anyone help me with it?

Thanks

Question about removing jittery depth in the smoothed object pc

Hello everyone,

I still have a question about how to compute the smoothed object point cloud.
My current understanding is that 10 depth frames (shape 480x640) are captured, flattened/reshaped, and stacked into an array of shape (307200, 10).
The next step is to determine the jittering pixels.
My guess is that we compute the mean and standard deviation row-wise and identify jittering pixels as follows (a sketch appears at the end of this issue):

  1. Define a threshold value, e.g. 0.001.
  2. Every pixel whose standard deviation (all values in meters) exceeds this threshold is marked as a jittery pixel and excluded from further computations.

Since the shape of your smoothed object pc is (307200 - jittered pixels, 3), my next guess is that the remaining pixels are back-projected to obtain the three-dimensional point cloud.

Is this correct? If not, can you please give me some advice?

PS:
I displayed both point clouds (using the Python library open3d):

  1. pc (the result of back-projecting the depth frame)
  2. smoothed_object_pc

The difference is obvious: smoothed_object_pc does not contain the table (or ground plane). Can the process of identifying jittering pixels described above remove this table (or ground plane), and if so, how?
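A minimal numpy sketch of the procedure described above, with an assumed threshold value and placeholder pinhole intrinsics for the back-projection:

    import numpy as np

    H, W, N = 480, 640, 10
    depths = np.random.rand(N, H, W).astype(np.float32)  # stand-in for 10 captured frames

    stack = depths.reshape(N, -1).T  # (307200, 10): one row per pixel
    std = stack.std(axis=1)          # per-pixel jitter across the 10 frames
    keep = std < 0.001               # threshold in meters (assumed value)
    depth = stack.mean(axis=1)       # smoothed depth per pixel

    # Back-project the kept pixels with placeholder intrinsics (fx, fy, cx, cy).
    fx = fy = 600.0
    cx, cy = W / 2.0, H / 2.0
    v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    u, v, z = u.ravel()[keep], v.ravel()[keep], depth[keep]
    smoothed_object_pc = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
    print(smoothed_object_pc.shape)  # (307200 - jittered pixels, 3)

On the PS: a fixed std threshold alone would most likely not remove a static, flat table, since a stable plane is not jittery; the table probably disappears in a separate segmentation or plane-removal step.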

Training plots

Hi, do you still have the training plots? How long does training take in terms of wall-clock time, and on what GPU hardware?

Implicit Maximum Likelihood vs "Classic" VAE loss

Hey Jens,

thank you for the really great work! I was wondering if you could share some experience from training the VAE in the "GAN formulation" using implicit maximum likelihood estimation (IMLE) versus using the original VAE loss. Did you have issues with mode collapse, and did IMLE solve these for you? Did you experience any drawbacks from IMLE?

Training on evaluator

Hi @jsll,
Thanks for the wonderful work. I have been trying to use train.py with continue_train set to true. For this I set the options the same as in pretrained_evaluator/opt.yml:

python train.py --arch evaluator --continue_train True --dataset_root_folder /home/tasbolat/GRASP/unified_grasp_data/ --batch_size 350 --niter 1000 --niter_decay 10000 --num_grasps_per_object 70 --num_objects_per_batch 5 --num_threads 3

However, it is not training as expected. The test accuracy immediately jumps to around 67% (from the initial 78%), and the loss rises quickly. Can you advise which parameter I am passing wrong? Thanks

Weird VAE results

Hi @jsll, I found that the VAE results are pretty weird compared with the GAN results. Here are some examples:

python -m demo.main --generate_dense_grasps --num_grasp_samples 20 --grasp_sampler_folder checkpoints/vae_pretrained/

[screenshots: dense grasp samples from the pretrained VAE]

python -m demo.main --grasp_sampler_folder checkpoints/vae_pretrained/

[screenshots: grasp samples from the pretrained VAE]
Any ideas?
