
graspmixer_demo

6D pose estimation and grasping experiments using KMR iiwa

Description

The goal of this project is to perform 6D pose estimation of T-LESS objects and carry out grasping experiments. A Microsoft Azure Kinect DK RGB-D camera, mounted on the KMR iiwa's link 7 (end effector), is used for pose estimation. Given the estimated 6D pose of the object, and after a chain of coordinate transformations, the robot is commanded to grasp the object.
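As a rough illustration of that chain of transformations (a sketch only, not this repository's code; all frame names and numbers below are hypothetical placeholders), the object pose estimated in the camera frame can be composed with the hand-eye calibration result and the current end-effector pose to obtain a grasp target in the robot base frame:

# Sketch of the transform chain with placeholder values.
import numpy as np
from scipy.spatial.transform import Rotation as R

def to_homogeneous(translation, quaternion_xyzw):
    # Build a 4x4 homogeneous transform from a translation and an (x, y, z, w) quaternion.
    T = np.eye(4)
    T[:3, :3] = R.from_quat(quaternion_xyzw).as_matrix()
    T[:3, 3] = translation
    return T

T_base_ee = to_homogeneous([0.5, 0.0, 0.6], [0.0, 1.0, 0.0, 0.0])   # end effector in base frame (from TF)
T_ee_cam = to_homogeneous([0.05, 0.0, 0.08], [0.0, 0.0, 0.0, 1.0])  # camera in end-effector frame (hand-eye result)
T_cam_obj = to_homogeneous([0.0, 0.1, 0.4], [0.0, 0.0, 0.0, 1.0])   # object in camera frame (pose estimation output)

T_base_obj = T_base_ee @ T_ee_cam @ T_cam_obj  # object pose in the robot base frame
print(T_base_obj)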

Current progress: the robot can grasp the object and move to an instructed pose successfully.
Next steps (in progress): implement a grasp planner to find an efficient grasp pose.

Process flow

The pose estimation pipeline is shown in the following flow chart.

Visuals

(Demo GIF: the robot grasping the object.)

Code setup

๐Ÿ“ /graspmixer
โ”œโ”€โ”€ ๐Ÿ“„ CMakeLists.txt
โ”œโ”€โ”€ ๐Ÿ“„ LICENSE
โ”œโ”€โ”€ ๐Ÿ“„ README.md
โ”œโ”€โ”€ ๐Ÿ“„ large_obj_06.stl
โ”œโ”€โ”€ ๐Ÿ“ launch
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ kmriiwa_bringup_graspmixer.launch
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ kmriiwa_bringup_graspmixer_cal.launch
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ move_above_part.launch
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ segmentation_testing.launch
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ segmentation_testing_2.launch
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ wrist_camera_graspmixer.launch
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ wrist_camera_graspmixer_cal.launch
โ”‚   โ””โ”€โ”€ ๐Ÿ“„ wristcam_demo.launch
โ”œโ”€โ”€ ๐Ÿ“ output_models
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ large_obj_01.stl
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ large_obj_04.stl
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ large_obj_06.stl
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ large_obj_11.stl
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ large_obj_14.stl
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ large_obj_19.stl
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ large_obj_25.stl
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ small_obj_01.stl
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ small_obj_04.stl
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ small_obj_06.stl
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ small_obj_11.stl
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ small_obj_14.stl
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ small_obj_19.stl
โ”‚   โ””โ”€โ”€ ๐Ÿ“„ small_obj_25.stl
โ”œโ”€โ”€ ๐Ÿ“„ package.xml
โ”œโ”€โ”€ ๐Ÿ“„ pose_estimation_ros.py
โ”œโ”€โ”€ ๐Ÿ“ scripts
โ”‚   โ”œโ”€โ”€ ๐Ÿ“ __pycache__
โ”‚   โ”‚   โ””โ”€โ”€ ๐Ÿ“„ pose_estimation_globalRegistration_basic_class.cpython-38.pyc
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ move_above_part.py
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ point_cloud_ex.py
โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ pose_estimation.py
โ”‚   โ”œโ”€โ”€ ๐Ÿ“ remove these
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ camera_info_modifier.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ capture_img.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ cv2_test.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ dummy.txt
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ extractHSVrange.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ grey_box.png
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ load_bag.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ pose_est_pc_topic.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ pose_estimation_globalRegistration.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ pose_estimation_globalRegistration_basic.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ pose_estimation_globalRegistration_basic_SDF.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ pose_estimation_ros.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ pose_estimation_server.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ topic_2_image.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ view_images.py
โ”‚   โ”‚   โ”œโ”€โ”€ ๐Ÿ“„ view_pc.py
โ”‚   โ”‚   โ””โ”€โ”€ ๐Ÿ“„ view_pc_class.py
โ”‚   โ””โ”€โ”€ ๐Ÿ“„ usbcam_2_kinect_cal_math.py
โ”œโ”€โ”€ ๐Ÿ“ src
โ”‚   โ””โ”€โ”€ ๐Ÿ“„ manual_calibration_pub.py
โ””โ”€โ”€ ๐Ÿ“ temp
    โ””โ”€โ”€ ๐Ÿ“„ mesh.stl

launch/ contains the launch files needed to spin up the relevant nodes and topics.

output_models/ contains CAD models of the T-LESS objects (large and small).

scripts/ contains the project's code; the most important scripts are pose_estimation_globalRegistration_basic_class.py and move_above_part.py.

src/ contains manual_calibration_pub.py, which is used to facilitate the hand-eye calibration procedure.
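The pose estimation scripts register these CAD models against the observed depth data with Open3D (global registration). As a hedged sketch of a typical Open3D preparation step (not the project's exact code; the file name is one of the models above and the parameter values are placeholders):

# Sketch: load a T-LESS CAD model and prepare it for global registration with Open3D.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("output_models/large_obj_06.stl")
model_pcd = mesh.sample_points_uniformly(number_of_points=5000)

voxel_size = 0.005  # metres; placeholder value
model_down = model_pcd.voxel_down_sample(voxel_size)
model_down.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel_size, max_nn=30))
model_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
    model_down,
    o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel_size, max_nn=100))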

Initial setup

Hardware setup

Turn the robot on and wait for the pendant to boot. Once the robot is fully booted, select LBR_iiwa_7_R800 in the Robot drop-down.

Make sure you have a ROS core (roscore) running before you proceed with the next step.

Next, start the ROS driver from the pendant. From the Applications tab, select ROSKmriiwaController. Once it is selected, press enable, then press and hold the play button on the pendant until you see the message 'all nodes connected to the ROS master' (about 10 seconds).

Software setup

Make sure you have the necessary dependencies installed locally.

Dependencies

Python
OpenCV
Open3D
NumPy
SciPy
ROS Noetic
tf_conversions
cv_bridge
KMR iiwa :)

Prior to running the entire pipeline, i.e., pose estimation and grasping, please perform a hand-eye calibration. Follow this link for more info.

Performing Hand-eye calibration

We are doing eye-in-hand calibration.

Open a new terminal and start the ROS core:

roscore -p 30001

Open another terminal, and execute the following:

roslaunch graspmixer_demo kmriiwa_bringup_graspmixer_cal.launch

Make sure there is no static_transform_publisher in wrist_camera_graspmixer_cal.launch. This is important because we intend to estimate the transformation of the camera w.r.t. the end effector, so no hard-coded camera transform should be published during calibration.
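For context, a static_transform_publisher simply broadcasts one fixed transform on /tf_static; if a hard-coded camera transform were already being published, the calibration would be biased by it. A minimal Python sketch of such a publisher (placeholder frame names and values, not the repository's code):

# Sketch only: broadcast a fixed camera-to-end-effector transform.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node("static_camera_tf_example")
broadcaster = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = "kmriiwa_link_7"        # parent frame (assumed)
t.child_frame_id = "usb_cam_kmriiwa_wrist"  # camera frame (assumed)
t.transform.translation.x = 0.05
t.transform.translation.y = 0.0
t.transform.translation.z = 0.08
t.transform.rotation.w = 1.0  # identity rotation as a placeholder

broadcaster.sendTransform(t)
rospy.spin()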

Make sure to select the following in the hand-eye calibration GUI:

In the Target tab:
Image Topic: usb_cam_kmriiwa_link
CameraInfo Topic: usb_cam_kmriiwa_link_info

In the Context tab:
Sensor frame: usb_cam_kmriiwa_link
Object frame: handeye_target
End-effector frame: kmriiwa_tcp
Robot base frame: kmriiwa_link_0

In the Calibrate tab:
Choose kmriiwa_manipulator as the Planning Group.

Take around 12-15 samples and save the pose file to a local directory. Copy the resulting static_transform_publisher information (quaternion) and paste it into wrist_camera_graspmixer.launch.

You can verify the hand-eye calibration by visualizing the frame usb_cam_kmriiwa_wrist in RViz's TF tree. The frame should originate exactly where the RGB lens of the camera resides.
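You can also sanity-check the transform programmatically; a minimal sketch, assuming the frame names used above:

# Sketch: look up the calibrated camera frame relative to the robot base.
import rospy
import tf2_ros

rospy.init_node("check_handeye_tf")
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

tf_msg = tf_buffer.lookup_transform("kmriiwa_link_0", "usb_cam_kmriiwa_wrist",
                                    rospy.Time(0), rospy.Duration(5.0))
print(tf_msg.transform.translation, tf_msg.transform.rotation)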

Assuming a good and reliable hand-eye calibration, open a new terminal and change the directory to graspmixer_demo/launch

roscd graspmixer_demo/launch

Execute the following after changing the directory

roslaunch graspmixer_demo kmriiwa_bringup_graspmixer.launch 

kmriiwa_bringup_graspmixer.launch launches RViz, which is used to visualize the robot model, the object in the depth cloud, and the TF frames.

Running the end-to-end process

In a new terminal, execute the following:

roscd graspmixer_demo/launch

roslaunch graspmixer_demo move_above_part.launch 

The move_above_part.launch file is responsible for running the pose estimation pipeline and the robot control.

Some notes to consider

When you launch any launch file (except move_above_part.launch), you must enable the pendant and press and hold play for 5-10 seconds (monitor the terminals).

When launching move_above_part.launch, press and hold play on the pendant before launching the script, and keep holding it until execution finishes or until control reaches embed(). You can let go while working in the embed() shell, but if you command any robot motion from code, you need to press and hold play on the pendant until that execution finishes.
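For reference, embed() presumably refers to dropping into an interactive IPython shell partway through the script; the pattern looks roughly like this (schematic only, not the actual move_above_part.py):

# Schematic: run the motion commands, then drop into an interactive shell.
from IPython import embed

def run_demo():
    # pose estimation and robot motion happen here (hold play on the pendant)
    pass

if __name__ == "__main__":
    run_demo()
    embed()  # interactive shell; play can be released unless you command motion from it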

Authors and acknowledgment

Pannaga Sudarshan, Tyler Toner

License

See the LICENSE file in the repository root.

Project status

Implementing a grasp planner to determine an efficient grasp pose.
