
collaborative-pushing-grasping's Introduction

Collaborative Pushing and Grasping of Tightly Stacked Objects via Deep Reinforcement Learning

Abstract—Directly grasping tightly stacked objects may cause collisions and result in failures, degrading the functionality of robotic arms. Inspired by the observation that first pushing objects to a state of mutual separation and then grasping them individually can effectively increase the success rate, we devise a novel deep Q-learning framework to achieve collaborative pushing and grasping. Specifically, an efficient non-maximum suppression policy (PolicyNMS) is proposed to dynamically evaluate pushing and grasping actions by enforcing a suppression constraint on unreasonable actions. Moreover, a novel data-driven pushing reward network called PR-Net is designed to effectively assess the degree of separation or aggregation between objects. To benchmark the proposed method, we establish a common household items dataset (CHID) in both simulation and real scenarios. Although trained on simulation data only, experimental results validate that our method generalizes well to real scenarios, achieving a 97% grasp success rate at fast speed for object separation in the real-world environment.

PDF | Video Results

Installation

This implementation requires the following dependencies (tested on Ubuntu 16.04 LTS):

  • Python 2.7, the default Python version on Ubuntu 16.04.

  • ROS Kinetic. You can quickly install ROS and Gazebo by following the ROS wiki installation guide (if a dependency package is missing, you can install it by running the following):

    sudo apt-get install ros-kinetic-(Name)
    sudo apt-get install ros-kinetic-(the part of Name)*   # the trailing * lets apt match every package whose name starts with that prefix
  • NumPy, SciPy, OpenCV-Python, Matplotlib. You can quickly install/update these dependencies by running the following:

    pip install numpy scipy opencv-python matplotlib
  • Torch == 1.0.0 and Torchvision == 0.2.1

    pip install torch==1.0.0 torchvision==0.2.1
  • CUDA and cuDNN. You need to install the GPU driver, CUDA, and cuDNN. This code has been tested with CUDA 9.0 and cuDNN 7.1.4 on two 1080 Ti GPUs (11 GB each).
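
Before training, it can help to confirm that the GPU stack matches the tested versions above. A minimal check (a sketch; the expected versions are just the ones this repo was tested with):

    from __future__ import print_function
    import torch
    import torchvision
    import cv2

    print("torch:", torch.__version__)              # expected 1.0.0
    print("torchvision:", torchvision.__version__)  # expected 0.2.1
    print("opencv:", cv2.__version__)
    print("CUDA available:", torch.cuda.is_available())
    print("GPU count:", torch.cuda.device_count())  # tested on two 1080 Ti GPUs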

Train or Test the Algorithm in Simulation (Gazebo)

  1. Download this repository and compile the ROS workspace.

    git clone https://github.com/nizhihao/Collaborative-Pushing-Grasping.git
    mv /home/user/Collaborative-Pushing-Grasping/myur_ws /home/user   # replace /home/user with your own home directory
    cd /home/user/myur_ws
    catkin_make -j1
    echo "source /home/user/myur_ws/devel/setup.bash" >> ~/.bashrc
    source ~/.bashrc

    An overview of the ROS packages:

    dh_hand_driver                  # DH gripper driver
    drl_push_grasp                  # the RL package (all the algorithm code lives here)
    gazebo-pkgs                     # Gazebo grasp plugin
    robotiq_85_gripper              # Robotiq 85 gripper package
    universal_robot-kinetic-devel   # UR robotic arm package
    ur_modern_driver                # UR robotic arm driver for the real world
    ur_robotiq                      # URDF, object meshes, initial env, MoveIt config package
  2. To train in Gazebo, set is_sim=True, is_testing=False in main.py and run the commands below.

    To test in Gazebo, set is_sim=True, is_testing=True in main.py instead. (The two flags combine as sketched after the package overview below.)

    Tips:
    1. This repository only provides the training and testing of the collaborative pushing-grasping method; the separate training processes for the pushing policy and the grasping policy are not included here.
    2. Open a new terminal for each command line.
    
    roslaunch ur_robotiq_gazebo ur_robotiq_gazebo.launch   # run the Gazebo and MoveIt node
    rosrun drl_push_grasp main.py   # run the agent

    An overview of the drl_push_grasp (RL) package:

    logger.py        # saves the logs
    main.py          # main entry point
    multi_model.py   # network structure
    trainer.py       # agent
    ur_robotiq.py    # environment
    utils.py         # utility functions
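
    For reference, the is_sim / is_testing flags set in main.py combine into four modes. A minimal sketch (only the two flag names come from main.py; the rest is illustrative):

    # Illustrative only -- main.py's actual control flow is more involved.
    is_sim = True       # True: act in the Gazebo simulation; False: drive the real UR10
    is_testing = False  # False: train the policy; True: evaluate a trained policy

    if is_sim and not is_testing:
        mode = 'train in Gazebo'
    elif is_sim and is_testing:
        mode = 'test in Gazebo'
    elif not is_sim and is_testing:
        mode = 'test on the real robot'
    else:
        mode = 'train on the real robot (not described in this README)'
    print('running mode: ' + mode)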
  3. When training or testing finishes, run the following to plot the performance curves.

    cd /home/user/myur_ws/src/drl_push_grasp/scripts/
    # compare only the pushing or grasping policy between two sessions
    python plot_ablation_push.py '../logs/YOUR-SESSION-DIRECTORY-NAME-HERE-01' '../logs/YOUR-SESSION-DIRECTORY-NAME-HERE-02'
    python plot_ablation_grasp.py '../logs/YOUR-SESSION-DIRECTORY-NAME-HERE-01' '../logs/YOUR-SESSION-DIRECTORY-NAME-HERE-02'
    # To plot the performance of pushing-grasping policy over training time
    python plot.py '../logs/YOUR-SESSION-DIRECTORY-NAME-HERE'
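
    In essence, these scripts read the per-attempt outcomes logged during a session and plot a moving-average success rate. A minimal sketch of that idea (the log file name below is hypothetical; check the scripts for the actual layout under ../logs/):

    import sys
    import numpy as np
    import matplotlib.pyplot as plt

    log_dir = sys.argv[1]  # e.g. '../logs/YOUR-SESSION-DIRECTORY-NAME-HERE'
    # Hypothetical log layout: one 0/1 grasp outcome per line.
    outcomes = np.loadtxt(log_dir + '/transitions/grasp-success.log.txt')

    window = 200  # moving-average window, in attempts
    rate = np.convolve(outcomes, np.ones(window) / window, mode='valid')

    plt.plot(rate)
    plt.xlabel('grasp attempt')
    plt.ylabel('success rate (moving average over %d attempts)' % window)
    plt.show()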
    

Running on a Real Robot (UR10)

The same code in this repository can be used to test on a real UR10 robot arm (controlled with ROS).

Setting Up Camera System

The latest version of our system uses RGB-D data captured from an Intel® RealSense™ D435 Camera. We provide a lightweight C++ executable that streams data in real-time using librealsense SDK 2.0 via TCP. This enables you to connect the camera to an external computer and fetch RGB-D data remotely over the network while training. This can come in handy for many real robot setups. Of course, doing so is not required -- the entire system can also be run on the same computer.

Installation Instructions:

  1. Download and install librealsense SDK 2.0

  2. Navigate to drl_push_grasp/scripts/realsense and compile realsense.cpp:

    cd /home/user/myur_ws/src/drl_push_grasp/scripts/realsense
    cmake .
    make
  3. Connect your RealSense camera with a USB 3.0 compliant cable (important: the RealSense D400 series uses a USB-C cable, but it still needs to be USB 3.0 compliant to stream RGB-D data).

  4. To start the TCP server and RGB-D streaming, run the following:

    ./realsense
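
Once the server is streaming, a client can fetch frames over TCP. The sketch below is hypothetical: the port number, resolution, and byte layout are assumptions for illustration, so check realsense.cpp for the protocol this executable actually speaks.

import socket
import numpy as np

HOST, PORT = '127.0.0.1', 50000  # assumed: server on the same machine, port 50000
H, W = 480, 640                  # assumed D435 stream resolution

def recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise RuntimeError('connection closed early')
        buf += chunk
    return buf

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((HOST, PORT))
depth = np.frombuffer(recv_exact(sock, H * W * 2), dtype=np.uint16).reshape(H, W)
color = np.frombuffer(recv_exact(sock, H * W * 3), dtype=np.uint8).reshape(H, W, 3)
sock.close()
print('depth %s, color %s' % (depth.shape, color.shape))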

Calibrating Camera Extrinsics

We provide a simple calibration script to estimate camera extrinsics with respect to robot base coordinates. To do so, the script moves the robot gripper over a set of predefined 3D locations as the camera detects the center of a moving 4x4 checkerboard pattern taped onto the gripper. The checkerboard can be of any size (the larger, the better).
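
Under the hood, this kind of calibration reduces to fitting a rigid transform between corresponding 3D points. A minimal sketch of that step (not this repo's calibrate_myrobot.py): given checkerboard centers observed in camera coordinates and the matching gripper positions in robot base coordinates, solve for R and t with the SVD-based Kabsch method.

import numpy as np

def fit_rigid_transform(cam_pts, base_pts):
    """Fit R (3x3), t (3,) such that base_pts ~= cam_pts.dot(R.T) + t."""
    cam_c, base_c = cam_pts.mean(axis=0), base_pts.mean(axis=0)
    H = (cam_pts - cam_c).T.dot(base_pts - base_c)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T.dot(U.T)
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[2, :] *= -1
        R = Vt.T.dot(U.T)
    t = base_c - R.dot(cam_c)
    return R, t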

Run the following to move the robot and calibrate:

cd /home/user/myur_ws/src/drl_push_grasp/scripts/realsense && ./realsense  # run the realsense to obtain the camera data
roslaunch ur_modern_driver ur10_bringup_joint_limited.launch robot_ip:=192.168.1.186 # run the ur10 arm
roslaunch ur10_moveit_config ur10_moveit_planning_execution.launch  # run the MoveIt node
roslaunch dh_hand_driver dh_hand_controller.launch  # run the dh gripper
python calibrate_myrobot.py  # calibrate

Testing

If you want to test in the real world, set is_sim=False, is_testing=True in main.py and run the following:

cd /home/user/myur_ws/src/drl_push_grasp/scripts/realsense && ./realsense  # run the realsense to obtain the camera data
roslaunch ur_modern_driver ur10_bringup_joint_limited.launch robot_ip:=192.168.1.186 # run the ur10 arm
roslaunch ur10_moveit_config ur10_moveit_planning_execution.launch  # run the MoveIt node
roslaunch dh_hand_driver dh_hand_controller.launch  # run the dh gripper
rosrun drl_push_grasp main.py  # run the agent

Citing

If you find this code useful in your work, please consider citing:

@article{yang2021collaborative,
  title={Collaborative pushing and grasping of tightly stacked objects via deep reinforcement learning},
  author={Yang, Yuxiang and Ni, Zhihao and Gao, Mingyu and Zhang, Jing and Tao, Dacheng},
  journal={IEEE/CAA Journal of Automatica Sinica},
  volume={9},
  number={1},
  pages={135--145},
  year={2021},
  publisher={IEEE}
}

Contact

If you have any questions or find any bugs, please let me know: ZhiHao Ni ([email protected]).


collaborative-pushing-grasping's Issues

About the heightmap

I saw the heightmap conversion in your code, but when I actually run it the heightmap values are always 0. Could you tell me what might be causing this?

Gazebo goes black after the program starts

Dear Mr. Ni:
Hello! Sorry to bother you.
When I tried to run your code a problem occurred, and after much searching I could not find a solution, so I would like to ask whether you have encountered and solved a similar problem.
Thank you!

Problem description:
First I launch Gazebo with roslaunch ur_robotiq_gazebo ur_robotiq_gazebo.launch, and everything works normally.
Then I start the program with rosrun drl_push_grasp main.py, at which point the Gazebo window goes black.
Shortly afterwards Gazebo closes by itself, RViz keeps running, and the program stops with an error.


data set

Hi, can you share this dataset?
datasets_path = '/home/user/new_drl_datasets(more_like_real)_before20201019/'

Could you share your trained model?

Dear Ni,

I greatly appreciate your work. I want to reproduce your paper, but my computer runs Ubuntu 20.04 (ROS Noetic), so I can't train the model. Could you share your pre-trained model so that I can use it in my VMware?

Hoping to hear from you!
Xia Xiaowu

resource not found: pr2_description

Hi, thank you for your code.
However, when I try to follow the instructions and reimplement the whole project, an error occurs. It seems that some files are missing. Could you please help?
When I execute roslaunch ur_robotiq_gazebo ur_robotiq_gazebo.launch, I get

'resource not found: pr2_description'
when processing file: /home/ljh0929/myur_ws/src/ur_robotiq/ur_robotiq_description/urdf/ur_robotiq.urdf.xacro
Invalid <param> tag: Cannot load command parameter [robot_description]: command [/opt/ros/kinetic/lib/xacro/xacro --inorder '/home/ljh0929/myur_ws/src/ur_robotiq/ur_robotiq_description/urdf/ur_robotiq.urdf.xacro'] returned with code [2].

Param xml is
The traceback for the exception was written to the log file.

OSError: [Errno 2] No such file or directory

Hi, thanks a lot for the code.
I am getting this error message when running main.py:
OSError: [Errno 2] No such file or directory: '/home/turtlebot/new_drl_datasets(more_like_real)_before20201019/'
Could you tell me whether this is caused by the missing dataset, and where I can get it? Looking forward to your reply.

missing files?

Hi, thank you for your code.
However, when I try to follow the instructions and reimplement the whole project, an error occurs. It seems that some files are missing. Could you please help?

fatal error: dh_hand_driver/ActuateHandAction.h: No such file or directory
2 | #include <dh_hand_driver/ActuateHandAction.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [dh_hand_driver/CMakeFiles/hand_controller_client.dir/build.make:63: dh_hand_driver/CMakeFiles/hand_controller_client.dir/src/test_client.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:3117: dh_hand_driver/CMakeFiles/hand_controller_client.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
Invoking "make -j1" failed
