
anygrasp_sdk's Introduction

AnyGrasp SDK

AnyGrasp SDK for grasp detection & tracking.

[arXiv] [project] [dataset] [graspnetAPI]

Update

  • May 7, 2024 Added new features and flags to the AnyGrasp detector:
    • Dense Predictions (default: False)
      • Set dense_grasp=True to enable extremely dense output. This is helpful for some corner cases and for prompt-based grasping.
      • Warning: this mode is designed for special scenarios; it leads to higher GPU memory usage, slower inference, and lower grasp quality. You can crop the point clouds with your own segmentation masks or 3D bounding boxes to improve performance.
    • Filtering by Objectness Mask (default: True)
      • Set apply_object_mask=False to disable the default grasp filtering by objectness masks. This will lead to predictions on backgrounds.
    • Collision Detection (default: True)
      • Set collision_detection=False to disable the default collision detection step.
    • These flags allow more flexible development, but we highly recommend using the default settings in common scenarios. See grasp_detection/demo.py for examples, and the sketch after this list.
  • October 8, 2023 Fixed a bug in the grasp detection inference code that could cause some grasp widths to exceed the constrained range.
  • July 20, 2023 Fixed a bug in the grasp detection inference code that could cause no predictions when only one or two objects are present.
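For orientation, a minimal sketch of how these flags might be passed. The flag names come from the update notes above; the constructor and the exact call signature are assumptions patterned on grasp_detection/demo.py and should be verified against the shipped demo.

# Hedged sketch: verify the call signature against grasp_detection/demo.py.
from gsnet import AnyGrasp

anygrasp = AnyGrasp(cfgs)   # cfgs: the config namespace used in demo.py
anygrasp.load_net()

gg, cloud = anygrasp.get_grasp(
    points, colors, lims,
    apply_object_mask=True,    # default: filter grasps by objectness masks
    dense_grasp=False,         # default: skip the dense (slower) mode
    collision_detection=True,  # default: run the collision detection step
)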

Video

[Video] AnyGrasp cleaning fragments of a broken pot

[Video] AnyGrasp catching a swimming robot fish

Requirements

  • Python 3.6/3.7/3.8/3.9
  • PyTorch 1.7.1 with CUDA 11.0
  • MinkowskiEngine v0.5.4

Installation

  1. Follow the MinkowskiEngine instructions to install Anaconda, cudatoolkit, PyTorch and MinkowskiEngine. Note that you need to run export MAX_JOBS=2; before pip install if you are on a laptop, due to this issue. If PyTorch reports a compatibility issue during program execution, you can re-install PyTorch via pip instead of Anaconda.

  2. Install the other requirements via pip.

    pip install -r requirements.txt

  3. Install the pointnet2 module.

    cd pointnet2
    python setup.py install

License Registration

Due to IP issues, we can currently only release the SDK library file of AnyGrasp in a licensed manner. Please get the feature ID of your machine and fill in the form to apply for a license. See license_registration/README.md for details. If you are interested in the code implementation, you can refer to our baseline version of the network, or a third-party implementation of our GSNet.

We usually reply within two working days. If you do not receive a reply within two days, please check your spam folder.

Demo Code

Now you can run your code that uses AnyGrasp SDK. See grasp_detection and grasp_tracking for details.
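For orientation, a condensed sketch of the detection flow: get_grasp and the "No grasp detected after masking" message appear in the SDK, while nms, sort_by_score, and the per-grasp attributes are graspnetAPI GraspGroup/Grasp utilities. Treat this as a sketch, not the literal demo.

# Assumes an initialized detector (see the sketch in the Update section).
# points, colors: (N, 3) float32 arrays in the camera frame; lims: workspace limits.
gg, cloud = anygrasp.get_grasp(points, colors, lims)

if len(gg) == 0:
    print('No grasp detected after masking')
else:
    gg = gg.nms().sort_by_score()          # graspnetAPI GraspGroup helpers
    best = gg[0]
    print(best.score, best.translation, best.rotation_matrix)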

Citation

Please cite these papers in your publications if they help your research:

@article{fang2023anygrasp,
  title={AnyGrasp: Robust and Efficient Grasp Perception in Spatial and Temporal Domains},
  author={Fang, Hao-Shu and Wang, Chenxi and Fang, Hongjie and Gou, Minghao and Liu, Jirong and Yan, Hengxu and Liu, Wenhai and Xie, Yichen and Lu, Cewu},
  journal={IEEE Transactions on Robotics (T-RO)},
  year={2023}
}

@inproceedings{fang2020graspnet,
  title={Graspnet-1billion: A large-scale benchmark for general object grasping},
  author={Fang, Hao-Shu and Wang, Chenxi and Gou, Minghao and Lu, Cewu},
  booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
  pages={11444--11453},
  year={2020}
}

@inproceedings{wang2021graspness,
  title={Graspness discovery in clutters for fast and accurate grasp detection},
  author={Wang, Chenxi and Fang, Hao-Shu and Gou, Minghao and Fang, Hongjie and Gao, Jin and Lu, Cewu},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={15964--15973},
  year={2021}
}

anygrasp_sdk's People

Contributors

chenxi-wang, fang-haoshu


anygrasp_sdk's Issues

How can I get the .zip after applying the license

Hello,
I am trying to install the SDK in order to test the detection module with a UR arm and a RealSense D435.
I have filled out the form for license registration via [email protected].
But I have not received a .zip file containing the license. Is it delivered through the form, or in the email sent to me after I finished the form?
It seems to be needed for both demos.
Thanks

IndexError of SDK

While trying to run the demo file, the following error occurs:

IndexError: tuple index out of range

However, I followed your README while building the environment, and it succeeded in the past.

I wonder whether the error is caused by my environment or by the SDK?

Difference between AnyGraspTracker update() and AnyGrasp get_grasp()

Hi!

  1. I do not fully understand in which cases we should use one of these functions rather than the other. Is the tracker's update() only meant to recover the positions of previous grasps? (That doesn't seem to be the case, since it is used in grasp_tracking and creates grasps.) Which one do you use when experimenting with the robot arm: do you call get_grasp() and then only update() once the robot starts to reach the grasp?

  2. Is there any documentation on what arguments can be passed? (For example: passing workspace limits "lims" to tracker.update().)

License not received

Hello, I applied for a license via [email protected] but have not received a reply. Is there any other information I need to provide? Also, is the checkpoint (ckpt) download link sent together with the license, or is it obtained some other way?

depth image settings

Hello,
I want to run the detection demo on my own images. For example,
3_Color

3_Depth

The camera is a RealSense D435, and the images are saved directly from Realsense Viewer.
However, it fails with an error: ValueError: operands could not be broadcast together with shapes (720,1280) (720,1280,3).
I think the problem is with how the depth image is saved.
How should I save the images correctly?
Thanks in advance for your help!
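Editor's note: a likely cause (an assumption, not confirmed by the authors) is that Realsense Viewer exports a colorized 3-channel depth PNG, while the demo expects a single-channel 16-bit depth image. A sketch of saving raw depth with standard pyrealsense2 and Pillow calls:

import numpy as np
import pyrealsense2 as rs
from PIL import Image

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
pipeline.start(config)

frames = pipeline.wait_for_frames()
# uint16 array in depth units (1 mm by default on D400 cameras)
depth = np.asanyarray(frames.get_depth_frame().get_data())
Image.fromarray(depth).save('depth.png')  # single-channel 16-bit PNG, shape (720, 1280)
pipeline.stop()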

Can small-object grasp_label and collision_label be open-sourced?

Thanks for your excellent work!
When I test with this SDK, I can grasp small objects, like a marker pen less than 1 cm thick. But when I test with https://github.com/rhett-chen/graspness_implementation trained on https://graspnet.net/datasets.html, I couldn't generate grasps for small objects less than 3 cm in height.
I guess the reason could be that the smaller grasp-depth labels used by anygrasp_sdk are missing there, and generating these labels for the whole dataset would be very time-consuming. Could you please open-source the small-object grasp_label and collision_label?
Any other tips would be greatly appreciated!

Installation problem

Hello,
I am trying to install the SDK in order to test the detection module with a RealSense D435i.

I have a small problem during the second part of the installation, related to pointnet2. When trying to execute the install command I get this error:

python setup.py install
Traceback (most recent call last):
  File "setup.py", line 7, in <module>
    from torch.utils.cpp_extension import BuildExtension, CUDAExtension
ImportError: No module named torch.utils.cpp_extension

The strange thing is that PyTorch is installed, and if I try to reinstall it with pip I get this message:

pip install torch
Requirement already satisfied: torch in /home/ubuntu/.local/lib/python3.8/site-packages (1.12.1+cu102)
Requirement already satisfied: typing-extensions in /home/ubuntu/.local/lib/python3.8/site-packages (from torch) (4.5.0)

I even tried a forced reinstall, but that doesn't change anything.

I guess I can't use the SDK properly without completing this step, so if anyone has a solution it would be a great help.

Thanks

OpenSSL 1.1.1 not support

Hi everyone, thanks for your amazing work.
While installing, I found that OpenSSL 1.1.1 (libssl1.1) is no longer available in the package repositories:

(py3-mink) yu@yu-G470:~/se3grasp/anygrasp_sdk/grasp_detection$ sudo apt-get install openssl libssl1.1
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package libssl1.1
E: Couldn't find any package by glob 'libssl1.1'
E: Couldn't find any package by regex 'libssl1.1'

So I was wondering how you fixed this?

Thanks,
Yu

AttributeError: module 'numpy' has no attribute 'float'.

`np.float` was a deprecated alias for the builtin `float`. To avoid this error in existing code, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here. The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:     https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

This error occurs for either of these import statements:

from gsnet import AnyGrasp
from graspnetAPI import GraspGroup

It would be great if you could fix this, so that we are more flexible w.r.t. the numpy versions we can use.
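Editor's note: until the SDK binaries drop np.float, a common workaround (an assumption, not an official fix) is to restore the alias before the offending imports:

import numpy as np

# Shim sketch: re-create the alias removed in NumPy 1.24 so modules that
# still reference np.float keep working.
if not hasattr(np, 'float'):
    np.float = float

from gsnet import AnyGrasp
from graspnetAPI import GraspGroup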

how to set `lims` in grasp_detection demo.py

I'm a little confused about the lims attribute:

# set workspace
xmin, xmax = -0.19, 0.12
ymin, ymax = 0.02, 0.15
zmin, zmax = 0.0, 1.0
lims = [xmin, xmax, ymin, ymax, zmin, zmax]
# get point cloud
xmap, ymap = np.arange(depths.shape[1]), np.arange(depths.shape[0])
xmap, ymap = np.meshgrid(xmap, ymap)
points_z = depths / scale
points_x = (xmap - cx) / fx * points_z
points_y = (ymap - cy) / fy * points_z
# remove outlier
mask = (points_z > 0) & (points_z < 1)
points = np.stack([points_x, points_y, points_z], axis=-1)
points = points[mask].astype(np.float32)
colors = colors[mask].astype(np.float32)
print(points.min(axis=0), points.max(axis=0))
# get prediction
gg, cloud = anygrasp.get_grasp(points, colors, lims)

It says lims is the workspace limit, but it is passed into anygrasp.get_grasp along with a point cloud that is in the camera frame. So how should lims be set: is it in the camera frame or the world frame?
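Editor's note: since the demo computes the point cloud in the camera frame and passes it together with lims, the limits are presumably camera-frame coordinates. A sketch of how such a workspace crop is typically applied (an assumption about the SDK's internal behavior, not its confirmed logic):

import numpy as np

xmin, xmax, ymin, ymax, zmin, zmax = lims

# points is (N, 3) in the camera frame, so the limits are interpreted there too
in_workspace = (
    (points[:, 0] > xmin) & (points[:, 0] < xmax) &
    (points[:, 1] > ymin) & (points[:, 1] < ymax) &
    (points[:, 2] > zmin) & (points[:, 2] < zmax)
)
points_ws = points[in_workspace]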

The 'sklearn' PyPI package is deprecated

Hello, I would like to run it on WSL2. When I ran pip install -r requirements.txt, it reported: The 'sklearn' PyPI package is deprecated, use 'scikit-learn' rather than 'sklearn' for pip commands.
The whole log is:
anygrasp_sdk$ pip install -r requirements.txt
Requirement already satisfied: numpy in /home/cqsubuntu/.conda/envs/anygrasp_env/lib/python3.8/site-packages (from -r requirements.txt (line 1)) (1.24.2)
Requirement already satisfied: Pillow in /home/cqsubuntu/.conda/envs/anygrasp_env/lib/python3.8/site-packages (from -r requirements.txt (line 2)) (9.5.0)
Requirement already satisfied: scipy in /home/cqsubuntu/.conda/envs/anygrasp_env/lib/python3.8/site-packages (from -r requirements.txt (line 3)) (1.10.1)
Collecting tqdm
Using cached tqdm-4.65.0-py3-none-any.whl (77 kB)
Collecting graspnetAPI
Using cached graspnetAPI-1.2.10.tar.gz (82 kB)
Preparing metadata (setup.py) ... done
Collecting open3d
Using cached open3d-0.17.0-cp38-cp38-manylinux_2_27_x86_64.whl (420.5 MB)
Collecting transforms3d==0.3.1
Using cached transforms3d-0.3.1.tar.gz (62 kB)
Preparing metadata (setup.py) ... done
Collecting trimesh
Using cached trimesh-3.21.5-py3-none-any.whl (680 kB)
Collecting opencv-python
Using cached opencv_python-4.7.0.72-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (61.8 MB)
Collecting matplotlib
Using cached matplotlib-3.7.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (9.2 MB)
Collecting pywavefront
Using cached PyWavefront-1.3.3-py3-none-any.whl (28 kB)
Collecting scikit-image
Using cached scikit_image-0.20.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.4 MB)
Collecting autolab_core
Using cached autolab_core-1.1.1-py3-none-any.whl (116 kB)
Collecting autolab-perception
Using cached autolab_perception-1.0.0-py3-none-any.whl (33 kB)
Collecting cvxopt
Using cached cvxopt-1.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.7 MB)
Collecting dill
Using cached dill-0.3.6-py3-none-any.whl (110 kB)
Collecting h5py
Using cached h5py-3.8.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)
Collecting sklearn
Using cached sklearn-0.0.post4.tar.gz (3.6 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [18 lines of output]
The 'sklearn' PyPI package is deprecated, use 'scikit-learn'
rather than 'sklearn' for pip commands.

  Here is how to fix this error in the main use cases:
  - use 'pip install scikit-learn' rather than 'pip install sklearn'
  - replace 'sklearn' by 'scikit-learn' in your pip requirements files
    (requirements.txt, setup.py, setup.cfg, Pipfile, etc ...)
  - if the 'sklearn' package is used by one of your dependencies,
    it would be great if you take some time to track which package uses
    'sklearn' instead of 'scikit-learn' and report it to their issue tracker
  - as a last resort, set the environment variable
    SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True to avoid this error

  More information is available at
  https://github.com/scikit-learn/sklearn-pypi-package

  If the previous advice does not cover your use case, feel free to report it at
  https://github.com/scikit-learn/sklearn-pypi-package/issues/new
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

So what should I do?

visualization fails

WARNING:root:Failed to import ros dependencies in rigid_transforms.py 
WARNING:root:autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable

For both demos (grasp_tracking and grasp_detection) I get these two warnings in the terminal. After that, I only get the prediction printed in the terminal window, and the visualization part is always skipped. I guess the check before visualization (cfgs.debug) fails because of the problems indicated by the warnings.

I tried installing autolab_core via pip, but that didn't help. Any help would be appreciated!
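Editor's note: if the built-in debug visualization is skipped, rendering manually with Open3D may help. A sketch assuming gg and cloud come from get_grasp; to_open3d_geometry_list is a graspnetAPI GraspGroup helper:

import numpy as np
import open3d as o3d

# cloud is the Open3D point cloud returned by get_grasp; if it is None,
# rebuild it from the raw points and colors arrays.
if cloud is None:
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(points.astype(np.float64))
    cloud.colors = o3d.utility.Vector3dVector(colors.astype(np.float64))

grippers = gg.to_open3d_geometry_list()  # one gripper mesh per predicted grasp
o3d.visualization.draw_geometries([cloud, *grippers])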

No grasp detected after masking

Hello,

I have successfully installed all the requirements and tested the grasp detection demo with the example data, and it works.
Now I would like to test it with my own data, so I replaced the png files in example_data with my own two images (color and depth) captured by a RealSense D435i (see below).

color
depth

When I try to run the demo again with my own images I get this error:
No grasp detected after masking ValueError: too many values to unpack (expected 2)

My first guess is that it comes from the workspace parameters and the camera intrinsics, but after trying a few things the result is still the same.

  • I configured the workspace limits using realsense-viewer, noting the x and y coordinates indicated by the cursor. I did the same for z by placing an object high enough to set an upper limit and taking the workspace ground as the lower limit. (Screenshot of the scene in realsense-viewer with the coordinates at the cursor.)

Screenshot from 2023-07-18 18-59-34

  • I don't know whether it's the depth camera's or the RGB camera's intrinsics that I should use in the code.
  • Is there anything else I have to do to use this demo on my own data?

I can provide more context if required, but I am using a nearly unmodified version of demo.py for grasp_detection.

Thanks in advance for your help !

How to get more grasps in grasp_detection?

Hi there, thanks for your work!

I am wondering how to get more grasp proposals. I want to obtain a grasp that is close to a given 3D point, but AnyGrasp often cannot output enough grasps that satisfy the distance constraint. Are there any thresholds I can adjust to get more grasps?

Thanks!

./license_checker -f does not work

Screenshot from 2024-03-18 14-52-56

I have installed requirements.txt and tried fetching a new feature ID. This error persists across several systems I have tried, all running Ubuntu 22.04.

Error: libcrypto.so.1.1: cannot open shared object file: No such file or directory

about ros dependency

When I run demo.sh, I encounter this issue:

WARNING:root:Failed to import ros dependencies in rigid_transforms.py
WARNING:root:autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable

and my Open3D window is not responding. How can I fix this issue?

AnyGrasp on WSL2

Hello, I have to use AnyGrasp under Windows for some reason. When I get the feature ID of my working machine, I notice that the ID I get is different every day. Is there a solution to this problem?

cloud is None

Hi! When I run the detection demo with the GraspNet dataset,
gg, cloud = anygrasp.get_grasp(points, colors, lims)
returns cloud as None, although gg has values. What could cause this?

No Response for the License Application

Hello, I filled in the form to apply for the license a week ago but still haven't received anything. I'm not sure if I missed something; could you please check it for me? My name is Xinchao Song. Thank you.

Missing model weights when running demo.sh

Hello, I tried to run the grasp_detection demo but got this error:
FileNotFoundError: [Errno 2] No such file or directory: 'log/checkpoint_detection.tar'

I had missed the note in README.md that says "Put model weights under log/."

But I cannot find the model weights anywhere in the repository.
Do I need to train on the dataset myself, or did I miss them in another directory?

Thanks for your help

translation data seems to be wrong

Hi! While the model generates satisfying grasps in the Open3D visualization, the translation term in the grasp data I get always seems to be wrong.

For instance, the model generated [0.03345555 0.16258606 0.88210291] as the translation of the highest-scoring grasp. However, as I measured with a ruler, the grasp point should actually be at about [-0.255, -0.074, 0.685], which is definitely not measurement error.

However, the rotation matrix it generates is quite accurate, which suggests that my image input works.

(Screenshots attached.)

Meanwhile, in Open3D the grasp seems to be adequately located, which really confuses me.

Do you have any idea on this? Any help would be appreciated!
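Editor's note: one common explanation (an assumption about this setup, not a confirmed diagnosis) is that get_grasp returns poses in the camera frame, while a ruler measurement is naturally taken in the robot or world frame; the two only agree after applying the camera extrinsics. A sketch using a hypothetical 4x4 camera-to-base matrix T_base_cam from hand-eye calibration:

import numpy as np

# T_base_cam: hypothetical camera-to-robot-base extrinsics (4x4 homogeneous matrix)
g = gg[0]
t_base = T_base_cam[:3, :3] @ g.translation + T_base_cam[:3, 3]
R_base = T_base_cam[:3, :3] @ g.rotation_matrix
print('grasp position in the base frame:', t_base)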

realsense test problem

I ran into 3 problems while testing anygrasp_sdk with a RealSense D435i. I hope you can help me with these questions. Thanks!

  1. Point clouds from the RealSense D435i seem poor, for example:
    (images attached)
    Have you done any calibration for your RealSense?
  2. After the depth image is aligned to the RGB image, more grasps can be obtained by using the intrinsics of the original depth camera to compute the point cloud than by using the intrinsics of the RGB camera, especially when no grasp can be detected with the RGB camera intrinsics. Why does this happen?
  3. I found that 1280x720 images have a better chance of detecting grasps than 640x480 images. Is this normal?
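Editor's note on question 2: after aligning depth to color, the point cloud should be computed with the color camera's intrinsics. A sketch of doing both with standard pyrealsense2 calls (whether this resolves the observed discrepancy is an open question):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

align = rs.align(rs.stream.color)  # reproject depth frames into the color frame
frames = align.process(pipeline.wait_for_frames())
depth_frame = frames.get_depth_frame()

# After alignment, the depth image shares the color camera's intrinsics,
# so these fx, fy, cx, cy are what the demo's back-projection should use.
intr = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
fx, fy, cx, cy = intr.fx, intr.fy, intr.ppx, intr.ppy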

RuntimeError: CUDA out of memory.

RuntimeError: CUDA out of memory. Tried to allocate 528.00 MiB (GPU 0; 1.95 GiB total capacity; 895.74 MiB already allocated; 385.38 MiB free; 990.00 MiB reserved in total by PyTorch)

I tried running sh demo.sh in grasp_detection with an NVIDIA Quadro P600 (2 GB), and CUDA runs out of memory.

Is there any alternative, or what is the ideal GPU for this (minimum requirement)?

license_registration not working

Hi authors, thank you for your amazing work!
(Screenshot attached.)

I found that license_registration is not working; do you have any idea why?

Thank you very much!

Segmentation fault (core dumped)

System specifications:
CUDA 11.0
Ubuntu 20.04
Python 3.8.17
GPU: NVIDIA RTX A4000 (16 GB)
PyTorch 1.7.1
gcc/g++ 9.4

ERROR:
On running sh demo.sh for both grasp_detection and grasp_tracking, the following error appears:
Segmentation fault (core dumped)

After testing, we found that in grasp_detection the error occurs at line 55 of demo.py:
gg, cloud = anygrasp.get_grasp(points, colors, lims)

This indicates that the problem is in the anygrasp binaries.

AttributeError: module 'numpy' has no attribute 'float'

Hello, after installation I get a numpy error, but I can't find any traceback for it.

license passed: True, state: FvrLicenseState.PASSED
AttributeError: module 'numpy' has no attribute 'float'.
`np.float` was a deprecated alias for the builtin `float`. To avoid this error in existing code, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
    https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

I suspect this error comes from this line in grasp_detection/demo.py:

from gsnet import AnyGrasp

The currently installed numpy version is 1.24.4, but when I downgrade it to 1.19.5, I get another error:

RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xd
AttributeError: FvrLicenseWrapper

The above exception was the direct cause of the following exception:

ImportError: initialization failed

Can you provide any tips? Thank you!

Incorrect setup.py for pointnet2

I believe pointnet2/setup.py has an error on line 20: it should be packages, not pacakges.

Could the repo be updated, please? I would like to use this in a Docker container, and without the fix I cannot cleanly install the dependencies from my Dockerfile.

Thanks

mismatched version of GLIBC

ImportError: /lib/x86_64-linux-gnu/libm.so.6: version GLIBC_2.29 not found (required by /home/xeon199/anygrasp_sdk-main/grasp_detection/gsnet.so)

OS : Ubuntu 18.04

about three additional labels

Thank you for your contributions. While reading the paper, I noticed that three additional labels were added to the training data. Am I right that these three new annotations have not been made publicly available yet?

cuda out of memory when executing `anygrasp.get_grasp(points, colors, lims)`

I ran your grasp detection code to get grasp poses for my own experiment.

I use a GPU with 40 GB of memory and profiled the GPU memory usage line by line with a set_trace function. I was confused to see that before executing anygrasp.get_grasp(points, colors, lims) the program's memory usage is under 10 GB, but while executing it, so much memory is allocated that the program raises an out-of-memory exception. The points and colors both have shape (1003434, 3). It would be helpful to know why the program requests more than 30 GB of memory.
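Editor's note: a mitigation sketch (an assumption, not an official recommendation): with roughly a million points, voxel-downsampling the cloud before calling get_grasp usually cuts memory use sharply:

import numpy as np
import open3d as o3d

# Downsample the (1003434, 3) cloud before handing it to the SDK.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points.astype(np.float64))
pcd.colors = o3d.utility.Vector3dVector(colors.astype(np.float64))
pcd = pcd.voxel_down_sample(voxel_size=0.005)  # 5 mm voxels; tune as needed

points_ds = np.asarray(pcd.points, dtype=np.float32)
colors_ds = np.asarray(pcd.colors, dtype=np.float32)
gg, cloud = anygrasp.get_grasp(points_ds, colors_ds, lims)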

How to get gripper position for the robot grasping?

Thanks for your wonderful work on robot manipulation! I am new to this but want to apply AnyGrasp to a household robot. Could you tell me where in the code I can get the gripper pose ([x, y, z, pitch, yaw, roll])? Thanks for your kind reply! 🌹
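Editor's note: the per-grasp pose lives in the graspnetAPI Grasp attributes translation and rotation_matrix; converting to Euler angles is then one scipy call. A sketch assuming an xyz convention (match whatever convention your robot expects, and remember the pose is in the camera frame until you apply extrinsics):

from scipy.spatial.transform import Rotation

g = gg[0]  # e.g. the best grasp after gg.nms().sort_by_score()
x, y, z = g.translation                 # position in meters, camera frame
roll, pitch, yaw = Rotation.from_matrix(g.rotation_matrix).as_euler('xyz')  # radians
print([x, y, z, pitch, yaw, roll])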
