multi-contact-grasping

This project implements a simulated grasp-and-lift process in V-REP using the Barrett Hand, with an interface through a Python remote API. The goal of this project is to collect information on where to place the individual contacts of a gripper on an object. The emphasis on constraining the grasp to specific contacts is to promote future work in learning fine manipulation skills.

This is an extension of a previous project; it largely simplifies the collection process and introduces domain randomization into the image collection scheme.

Sample Data

Sample data can be viewed here; it makes use of the first 100 Procedurally Generated Random Objects from Google. For each object, all sub-collision objects were merged into a single entity, rescaled to fit the Barrett Hand, and saved in the meshes/ folder. Ten grasps were attempted for each object, and images were saved in the .png format.
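
As a rough illustration of that preprocessing (a sketch only, not the repository's actual script; the file names and target size here are hypothetical), the merge and rescale steps can be done with trimesh:

import trimesh

# Load the sub-collision meshes of one object and merge them into a single entity.
parts = [trimesh.load(p) for p in ['part0.obj', 'part1.obj']]  # hypothetical files
mesh = trimesh.util.concatenate(parts)

# Rescale so the longest bounding-box side fits the hand; 0.1 m is an assumed target.
mesh.apply_scale(0.1 / mesh.extents.max())
mesh.export('meshes/object0.obj')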

Installation

  • Download and install Python 2.7
  • Download and install V-REP (http://www.coppeliarobotics.com/downloads.html)
  • (optional) Mesh files in either .obj or .stl format; a sample mesh is provided in ./data/meshes.
  • (optional) An X server, if running V-REP in headless mode (i.e. no GUI).

Install the trimesh library:

pip install trimesh==2.29.10
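
To sanity-check the install, try loading the bundled sample mesh (a minimal check; the file name cube.stl is taken from an issue report below and may differ in your checkout):

import trimesh

mesh = trimesh.load('./data/meshes/cube.stl')  # sample mesh name assumed
print(mesh.is_watertight, mesh.extents)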

Add the remote API interfaces by copying the following files to lib/:

  • vrep.py and vrepConst.py from path/to/vrep/V-REP_PRO_EDU/programming/remoteApiBindings/python/python/
  • the remoteApi shared library (remoteApi.dll, remoteApi.so, or remoteApi.dylib, depending on your platform) from: path/to/vrep/V-REP_PRO_EDU/programming/remoteApiBindings/lib/lib/64Bit/
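
Once the files are in lib/, a minimal connection test looks like the following (assuming V-REP is running and listening on the default remote API port 19997):

import sys
sys.path.append('lib')  # so vrep.py, vrepConst.py, and the remoteApi library are found
import vrep

client_id = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
if client_id == -1:
    raise RuntimeError('Could not connect to V-REP; is the simulator running?')
print('Connected with client id %d' % client_id)
vrep.simxFinish(client_id)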

Collect Grasping Experience

Open scenes/grasp_scene.ttt in V-REP. You should see a table, a few walls, and a Barrett Hand.

Start the scene by clicking the play (triangle) button at the top of the screen, then run the main collection script, src/collect_grasps.py:

cd src
python collect_grasps.py

This will look in the folder data/meshes and run the first mesh it finds. You should see the mesh being imported into the scene and falling onto the table; after a short delay, the gripper should begin attempting to grasp it. When the gripper closes, it checks whether all fingertips of the Barrett Hand are in contact with the object; if so, it attempts to lift the object to a position above the table. Successful grasps (where the object remains in the gripper's palm) are recorded and saved in an .hdf5 dataset for that specific object in the output/collected folder.

For both pre- and post-grasp, the following information is recorded:

Simulation Property   What's Recorded
Reference frames      palm, world, workspace, object
Object properties     mass, center of mass, inertia
Joint angles          all joints of the Barrett Hand
Contacts              positions, forces, and normals
Images                RGB, depth, and a binary object mask
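
The exact dataset layout is not reproduced here, but a quick way to explore a collected file is with h5py (the file name below is assumed):

from __future__ import print_function
import h5py

with h5py.File('output/collected/object0.hdf5', 'r') as f:
    # Print every group and dataset in the file, with shapes and dtypes.
    f.visititems(lambda name, obj: print(name, obj))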

Supplementing the Dataset with Images

Once data collection has finished, we can supplement the dataset with images by indexing the .hdf5 files in the output/collected/ directory:

cd src
python collect_images.py

We first remove potential duplicate grasps from the dataset (based on wrist pose, then contact positions and normals), and remove any extreme outliers. Feel free to modify this file to suit your needs. Next, the simulation opens up or connects to a running V-REP scene and begins collecting images using data from output/grasping.hdf5. This script uses the state of the simulator at the time of the grasp (i.e. the object pose, gripper pose, joint angles, etc.) and restores those parameters before taking an image.
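
The duplicate-removal idea can be sketched as follows (a simplification of what the script does; the array layout is assumed for illustration):

import numpy as np

def unique_rows(data, decimals=4):
    # Treat rows that agree to within rounding tolerance as duplicates,
    # and keep the index of the first occurrence of each.
    _, keep = np.unique(np.round(data, decimals), axis=0, return_index=True)
    return np.sort(keep)

# e.g. with poses as an (N, 7) array of wrist poses: grasps = grasps[unique_rows(poses)]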

Image randomization is done according to the following properties:

Render Property     What's Affected
Lighting            number of lights, relative position
Colour              random RGB for object and workspace
Colour components   ambient/diffuse, specular, emission, auxiliary
Texture mapping     plane, sphere, cylinder, cube
Texture pose        position, orientation
Camera pose         (resolution, field of view, near/far planes also supported)

Data from this phase is stored in the output/processed/ folder, where the *.hdf5 file stores the grasp metadata and the images are stored in the corresponding directory. RGB images of the object and of the object + gripper are stored in the .jpg format, while the depth image is stored as a 16-bit floating-point numpy array.
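
For illustration, the depth storage corresponds to something like the following (the image shape and file name are arbitrary):

import numpy as np

depth = np.zeros((480, 640), dtype=np.float16)  # a 16-bit float depth image
np.save('output/processed/depth_00000.npy', depth)
assert np.load('output/processed/depth_00000.npy').dtype == np.float16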

Sample images of the pre-grasp:

A few things to note:

This project is still under construction, so things may still be changing. Additionally, a few points to consider:

  1. Complex meshes are difficult to simulate properly: pure / convex meshes are preferred. There is an option to approximate complex objects with a convex hull (see the trimesh sketch after this list), but note that this will change the physical shape of the object, and in some cases may yield weird-looking grasps (i.e. not touching the mesh surface, but touching the hull's surface).
  2. The object is static during the pre-grasp, and dynamically simulated during the lift: this avoids potentially moving the object before the fingers come into contact with it.
  3. The same object pose is used for each grasp attempt: this avoids instances where an object may accidentally fall off the table, but can be removed as a constraint in the main script.
  4. A grasp is successful if the object is in the gripper's palm at the height of the lift: a proximity sensor attached to the palm records whether it detects an object in the nearby vicinity. A threshold is also specified on the number of contacts between the gripper and the object, which helps limit inconsistencies in the simulation dynamics.
  5. Images are captured from a separate script after simulations have finished: to avoid introducing extra complexity into the collection script, it is recommended to collect images after experiments have finished running. However, you can still collect images while running if you wish. See src/collect_images.py for an example of how it's done.
  6. You do not need to post-process the collected data: post-processing only ensures there are no duplicate grasps in the dataset. You can still collect images by modifying the script to open the dataset you collected for an object.
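
Regarding point 1, a convex-hull approximation can be produced with trimesh (a sketch; the input file name is hypothetical):

import trimesh

mesh = trimesh.load('data/meshes/complex_object.obj')
hull = mesh.convex_hull  # a watertight, convex trimesh.Trimesh
hull.export('data/meshes/complex_object_hull.obj')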

Issues

ERROR Loading object

Hello, I was running the code but encountered a problem. I was doing exactly as instructed and I wonder what caused the problem.

Here's the running result:
mesh_path: E:\vrep_sim\data\meshes\cube.stl
Traceback (most recent call last):
  File "E:/vrep_sim/src/collect_grasps.py", line 269, in <module>
    collect_grasps(mesh_path, sim)
  File "E:/vrep_sim/src/collect_grasps.py", line 168, in collect_grasps
    sim.load_object(mesh_path, com, mass, inertia.flatten())
  File "E:\vrep_sim\src\simulator.py", line 297, in load_object
    raise Exception('Error loading object! Return code ', r)
Exception: ('Error loading object! Return code ', (8, [], [], [], bytearray(b'')))

[/remoteApiCommandServer@childScript:error] 705: in sim.setInt32Signal: one of the function's argument type is not correct.
stack traceback:
[C]: in function 'simSetIntegerSignal'
E:/vrep_sim/scenes/remoteApiCommandServer.lua:705: in main chunk
[C]: in field 'require'
...rogram Files/CoppeliaRobotics/CoppeliaSimEdu/lua/sim.lua:27: in function 'require'
[string "/remoteApiCommandServer@childScript"]:1: in function 'sim_code_function_to_run'
"Error: ",[long string]

And line 705 is "simSetIntegerSignal('h_gripper_config_buffer', h_gripper_config_buffer)".

Could anyone give a clue? Many thanks!

Problems about Running the Scripts

My system is Ubuntu 16.04 and my V-REP version is 3.5 EDU. First, I open "grasp_scene.ttt", but when I run the simulation, the Barrett Hand disappears. Then I run the script "collect_grasps.py", but an error occurs. Here is the traceback:
Traceback (most recent call last):
  File "", line 1, in <module>
    runfile('/media/xiaoqing/Project/Git Grasp/multi-contact-grasping/src/collect_grasps.py', wdir='/media/xiaoqing/Project/Git Grasp/multi-contact-grasping/src')
  File "/home/xiaoqing/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile
    execfile(filename, namespace)
  File "/home/xiaoqing/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
    builtins.execfile(filename, *where)
  File "/media/xiaoqing/Project/Git Grasp/multi-contact-grasping/src/collect_grasps.py", line 274, in <module>
    collect_grasps(mesh_path, sim)
  File "/media/xiaoqing/Project/Git Grasp/multi-contact-grasping/src/collect_grasps.py", line 165, in collect_grasps
    gripper_offset=candidate_offset)
  File "/media/xiaoqing/Project/Git Grasp/multi-contact-grasping/src/collect_grasps.py", line 107, in generate_candidates
    h = (p - triangles)**2
ValueError: operands could not be broadcast together with shapes (95,3) (12,3)

Could you help me about this problem? Many thanks.

Questions about the use of simxCallScriptFunction

Hi, I have been learning V-REP recently for grasp simulation and find your code very helpful. Now I am trying to make some modifications to your code, like replacing the Barrett Hand with another gripper and loading objects I prepared in .ttm format.

However, I find the use of simxCallScriptFunction in simulator.py hard to understand. For example, in the load_object function:

r = vrep.simxCallScriptFunction(
    self.clientID, 'remoteApiCommandServer', vrep.sim_scripttype_childscript,
    'loadObject', in_ints, in_floats, [object_path], bytearray(),
    vrep.simx_opmode_blocking)

I understand that this is a remote call to a V-REP script, and you have specified the script type as "vrep.sim_scripttype_childscript", which should be a child script attached to an object in the scene (in this case the dummy remoteApiCommandServer). But in the child script attached to remoteApiCommandServer, there is only one line:
require('remoteApiCommandServer')

I guess this Python function eventually makes a call to the regular API function simImportMesh, so would you please shed some light on how this works?
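
For context, the general calling pattern is sketched below. The require('remoteApiCommandServer') line pulls in scenes/remoteApiCommandServer.lua, where functions such as loadObject are defined and become remotely callable (the arguments below are illustrative, not the repository's exact code):

import vrep

client_id = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
ret, out_ints, out_floats, out_strings, out_buffer = vrep.simxCallScriptFunction(
    client_id, 'remoteApiCommandServer',         # object the child script is attached to
    vrep.sim_scripttype_childscript, 'loadObject',
    [], [], ['/path/to/mesh.obj'], bytearray(),  # int, float, string, and buffer args
    vrep.simx_opmode_blocking)
vrep.simxFinish(client_id)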
