pr2 / pr2_pbd

This project is forked from mayacakmak/pr2_pbd


Programming by demonstration for the PR2

Python 69.36% Shell 0.10% CMake 4.85% C++ 25.69%

pr2_pbd's People

Contributors

ahendrix, jstnhuang, mayacakmak, mbforbes, saineti, thedash, trainman419


pr2_pbd's Issues

Poses should be independent of torso height

If you teach the robot to do a wave while at the lowest height, and then raise the torso, the execution might fail because some poses are not reachable. Poses not associated with any landmarks should be defined relative to torso_lift_link instead of base_link.
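
A minimal sketch of that change, assuming a running tf tree and a tf.TransformListener supplied by the caller (the function name and frame handling are illustrative, not the actual pr2_pbd code):

    import rospy
    import tf
    from geometry_msgs.msg import PoseStamped

    def to_torso_frame(listener, ee_pose_in_base):
        # Wrap the raw pose so tf knows which frame it currently lives in.
        stamped = PoseStamped()
        stamped.header.frame_id = 'base_link'
        stamped.header.stamp = rospy.Time(0)  # use the latest transform
        stamped.pose = ee_pose_in_base
        # Re-express the pose in torso_lift_link before saving it; a pose
        # stored this way moves with the torso instead of the base.
        listener.waitForTransform('torso_lift_link', 'base_link',
                                  rospy.Time(0), rospy.Duration(1.0))
        return listener.transformPose('torso_lift_link', stamped)

Poses stored this way track the torso, so a wave taught at the lowest height would stay reachable after the torso is raised.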

World.py: object race condition

The following traceback occurred during execution:

Traceback (most recent call last):
  File "/home/djbutler/rosbuild_ws/pr2_pbd/pr2_pbd_interaction/nodes/interaction.py", line 26, in <module>
    interaction.update()
  File "/home/djbutler/rosbuild_ws/pr2_pbd/pr2_pbd_interaction/src/Interaction.py", line 514, in update
    states = self._get_arm_states()
  File "/home/djbutler/rosbuild_ws/pr2_pbd/pr2_pbd_interaction/src/Interaction.py", line 378, in _get_arm_states
    abs_ee_poses[arm_index])
  File "/home/djbutler/rosbuild_ws/pr2_pbd/pr2_pbd_interaction/src/World.py", line 616, in get_nearest_object
    dist = World.pose_distance(World.objects[i].object.pose,
IndexError: list index out of range

This appears to be caused by code in World.py where a list's length is checked, but during the loop thereafter the list turns out to be shorter than advertised. This smells like a race condition to me.
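
A minimal sketch of one way to close the race, assuming the list is mutated by a callback on another thread; the locking scheme and the stub distance function are illustrative, not the actual World.py code:

    import math
    import threading

    class World(object):
        _lock = threading.Lock()
        objects = []  # mutated by perception callbacks on another thread

        @staticmethod
        def pose_distance(pose1, pose2):
            # Euclidean distance between positions (stub for illustration).
            return math.sqrt((pose1.position.x - pose2.position.x) ** 2 +
                             (pose1.position.y - pose2.position.y) ** 2 +
                             (pose1.position.z - pose2.position.z) ** 2)

        @staticmethod
        def get_nearest_object(ee_pose):
            with World._lock:
                snapshot = list(World.objects)  # copy under the lock
            # Iterate over the snapshot: a callback that shrinks the live
            # list mid-loop can no longer cause an IndexError here.
            nearest, best_dist = None, float('inf')
            for obj in snapshot:
                dist = World.pose_distance(obj.object.pose, ee_pose)
                if dist < best_dist:
                    nearest, best_dist = obj, dist
            return nearest

For this to hold, any code that mutates World.objects would need to take the same lock.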

Speech commands recognized but not working

Hey guys, I am probably just doing something wrong, but the robot is not doing anything. The commands are recognized through the microphone, but the robot just doesn't respond.

Partial: TEST-MICROPHONE
[INFO] [WallTime: 1413403489.399051] test-microphone
[INFO] [WallTime: 1413403489.399578] Received command:test-microphone
Partial: RELAX-RIGHT-ARM
[INFO] [WallTime: 1413403517.144450] relax-right-arm
[INFO] [WallTime: 1413403517.144824] Received command:relax-right-arm
[INFO] [WallTime: 1413403520.426516] relax-right-arm
[INFO] [WallTime: 1413403520.426937] Received command:relax-right-arm
Partial: OPEN-LEFT-HAND
[INFO] [WallTime: 1413403535.139681] open-left-hand
[INFO] [WallTime: 1413403535.140071] Received command:open-left-hand
Partial: OPEN-RIGHT-HAND
[INFO] [WallTime: 1413403538.072160] open-right-hand
[INFO] [WallTime: 1413403538.072599] Received command:open-right-hand
Partial: SAVE
Partial: ACTION
Partial: EXECUTE-ACTION
[INFO] [WallTime: 1413403680.654847] execute-action
[INFO] [WallTime: 1413403680.655228] Received command:execute-action

Am I supposed to do something else for this to work?

Landmark registration algorithm is wrong

When there are two similar objects in the learned action, sometimes they end up being registered to the same object during execution. This can be avoided by implementing the registration algorithm as described in the RSS paper.
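
As an illustration of the one-to-one constraint (not the RSS paper's algorithm, which is not reproduced here), a global assignment over a distance matrix prevents two similar learned landmarks from claiming the same observed object:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def register_landmarks(learned, observed, distance):
        # Build the pairwise cost matrix between learned landmarks and
        # observed objects; `distance` is whatever similarity metric the
        # system already uses (illustrative here).
        cost = np.array([[distance(l, o) for o in observed] for l in learned])
        # Hungarian method: minimizes total cost while using each
        # observed object at most once.
        rows, cols = linear_sum_assignment(cost)
        return dict(zip(rows, cols))  # learned index -> observed index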

Put all messages into their own package

E.g., pr2_pbd_msgs

This is probably good practice overall. A specific problem right now is that you can't write nosetests with imports like from pr2_pbd_interaction.msg import ArmState: while importing pr2_pbd_interaction, the test runner can't find the msg module because it isn't aware of the catkin devel directory for some reason. This is remedied if all such imports look like from pr2_pbd_msgs.msg import ArmState. You can still write normal unittests instead, but then you can't trigger them automatically with catkin run_tests.
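
To illustrate the difference (a hypothetical nosetest; pr2_pbd_msgs is the proposed package, so the working import assumes it has been created):

    # Fails under nosetests today: importing pr2_pbd_interaction pulls in
    # the whole package, and the generated msg module is not found because
    # the test runner is not aware of the catkin devel directory.
    # from pr2_pbd_interaction.msg import ArmState

    # Works once messages live in their own pure-message package:
    from pr2_pbd_msgs.msg import ArmState

    def test_arm_state_default():
        # A trivial nosetest that only needs the message definition.
        assert ArmState() is not None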

Stop publishing TFs for each object

TF never removes frames from its internal list, even after they have stopped being published for a while. MoveIt! uses this list to try to transform every single frame into the planning frame at high frequency. If we stop publishing the frames for old objects, MoveIt! will fail to transform those frames and will spew error messages at high frequency, which wastes CPU and makes the logs hard to follow.

Instead, after detecting the objects, we should store their transforms internally and compute all the transforms ourselves. In general, TF should never be used for any frame that is not permanent, especially when used with MoveIt!
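
A minimal sketch of that internal bookkeeping, with illustrative names (the actual storage would live wherever object detection happens):

    import numpy as np
    from tf import transformations

    class ObjectFrames(object):
        def __init__(self):
            self._frames = {}  # object name -> 4x4 transform in base frame

        def add_object(self, name, position, quaternion):
            # Build a homogeneous transform from the detected pose.
            mat = transformations.quaternion_matrix(quaternion)
            mat[0:3, 3] = position
            self._frames[name] = mat

        def to_base(self, name, point_in_object):
            # Transform a 3D point from the object's frame to the base
            # frame ourselves, without ever publishing a TF frame.
            mat = self._frames[name]
            p = np.dot(mat, np.array(list(point_in_object) + [1.0]))
            return p[:3]

        def remove_object(self, name):
            # Safe to delete: MoveIt! never saw this frame, so nothing
            # will try to transform it afterwards.
            self._frames.pop(name, None)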

Adjusting poses in rviz is really slow

Whenever I want to adjust a pose in rviz (because no IK solution was found for it), the process is really slow. I'll drag the interactive marker in some direction; the marker then snaps back, and the gripper marker slowly jumps a tiny bit (maybe 0.5 cm?) at a time in the direction I moved it, on the order of a few seconds per jump and maybe a minute just to drag the marker a few centimeters.

Only check for IK when needed

The system will check for IK whenever it creates a gripper mesh, coloring it appropriately. This means it checks for IK solutions at unnecessary times:

  • When saving a pose (because poses are demonstrated kinesthetically, there must be an IK solution)
  • When loading a previously recorded action

It's only necessary to solve IK:

  • When a pose is edited in rviz
  • Prior to executing an action (because the torso height might change)
  • When executing and a pose is relative to a landmark

The unnecessary checks slow down execution quite a bit: it takes O(# actions) seconds to load an action.
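
A sketch of gating the check on where a pose came from (the provenance enum, the color convention, and the solve_ik hook are all illustrative):

    KINESTHETIC, EDITED_IN_RVIZ, RELATIVE_TO_LANDMARK = range(3)

    def needs_ik_check(pose_source, about_to_execute):
        # Kinesthetic demonstrations are reachable by construction, so
        # skip IK unless the pose was edited, is landmark-relative, or we
        # are about to execute (the torso height may have changed).
        if about_to_execute:
            return True
        return pose_source in (EDITED_IN_RVIZ, RELATIVE_TO_LANDMARK)

    def gripper_color(pose, pose_source, solve_ik, about_to_execute=False):
        if not needs_ik_check(pose_source, about_to_execute):
            return 'green'  # assume reachable; no IK call on save or load
        return 'green' if solve_ik(pose) else 'red'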

Implement or remove face detection

If you set a gaze goal to FOLLOW_FACE (GazeGoal.msg), social_gaze.py raises an error, because the initialization of the corresponding action client (self.faceClient) is commented out.

We need one of the following:

  1. to fully implement this
  2. to remove it
  3. a comment as to why it is currently disabled (at the least)
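
Option 3 could be as small as a guard plus a warning that explains the situation. A sketch, assuming the FOLLOW_FACE constant from GazeGoal.msg and the faceClient attribute mentioned above; everything else is illustrative:

    import rospy
    from pr2_pbd_interaction.msg import GazeGoal  # current package layout

    class SocialGaze(object):
        def __init__(self):
            # Face detection is not implemented, so the action client from
            # the report stays disabled; record that explicitly.
            self.faceClient = None  # initialization commented out upstream

        def set_gaze_goal(self, goal):
            if goal == GazeGoal.FOLLOW_FACE and self.faceClient is None:
                rospy.logwarn('FOLLOW_FACE is disabled: faceClient is '
                              'never initialized because face detection '
                              'is not implemented.')
                return False
            # ... handle the remaining gaze goals ...
            return True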

Add freeze/release head commands

With upcoming perception changes, the robot should be able to search for landmarks in places other than a tabletop.

We should figure out a good way to control the head for demonstrations. One possible way is to use social gaze to have the robot head follow the grippers, but have "freeze head" and "release head" commands.
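
One possible shape for this, as a sketch (the class, command strings, and update hook are all illustrative):

    class HeadControl(object):
        def __init__(self):
            self.frozen = False

        def handle_command(self, command):
            if command == 'freeze-head':
                self.frozen = True    # hold the current head position
            elif command == 'release-head':
                self.frozen = False   # resume following the grippers

        def update(self, gripper_position, look_at):
            # Called from the social gaze loop: track the gripper only
            # while the head is not frozen.
            if not self.frozen:
                look_at(gripper_position)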
