unity-technologies / robotics-object-pose-estimation

A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.

License: Apache License 2.0

Dockerfile 2.22% Python 58.35% C# 31.37% CMake 7.79% Shell 0.28%
unity pose-estimation ros urdf robotics autonomy manipulation computer-vision model-training synthetic-data

robotics-object-pose-estimation's Introduction

Object Pose Estimation Demo


This tutorial will go through the steps necessary to perform pose estimation with a UR3 robotic arm in Unity. You’ll gain experience integrating ROS with Unity, importing URDF models, collecting labeled training data, and training and deploying a deep learning model. By the end of this tutorial, you will be able to perform pick-and-place with a robot arm in Unity, using computer vision to perceive the object the robot picks up.

Want to skip the tutorial and run the full demo? Check out our Quick Demo.

Want to skip the tutorial and focus on collecting training data for the deep learning model? Check out our Quick Data-Collection Demo.

Note: This project has been developed with Python 3 and ROS Noetic.

Table of Contents

  • Part 1: Create Unity Scene With Imported URDF
  • Part 2: Set Up the Scene for Data Collection
  • Part 3: Data Collection and Model Training
  • Part 4: Pick-and-Place

Part 1: Create Unity Scene With Imported URDF

This part includes downloading and installing the Unity Editor, setting up a basic Unity scene, and importing a robot. We will import the UR3 robot arm using the URDF Importer package.


Part 2: Set Up the Scene for Data Collection

This part focuses on setting up the scene for data collection using the Unity Computer Vision Perception Package. You will learn how to use Perception Package Randomizers to randomize aspects of the scene in order to create variety in the training data.

If you would like to learn more about Randomizers, and apply domain randomization to this scene more thoroughly, check out our further exercises for the reader here.
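
To make this concrete, here is a minimal, hypothetical sketch of a Randomizer in the style of the Perception package API. The class, tag, and parameter names are illustrative rather than the tutorial's own, and the exact namespaces and tag-query return type vary between Perception package versions:

    using System;
    using UnityEngine;
    using UnityEngine.Perception.Randomization.Parameters;
    using UnityEngine.Perception.Randomization.Randomizers;
    using UnityEngine.Perception.Randomization.Samplers;

    // Gives every tagged object a random Y rotation at the start of each iteration.
    [Serializable]
    [AddRandomizerMenu("Perception/My Y Rotation Randomizer")]
    public class MyYRotationRandomizer : Randomizer
    {
        // Uniformly samples an angle in [0, 360) degrees.
        public FloatParameter rotation = new FloatParameter
        {
            value = new UniformSampler(0f, 360f)
        };

        protected override void OnIterationStart()
        {
            // Affect only objects carrying the matching RandomizerTag component.
            foreach (var tag in tagManager.Query<MyYRotationRandomizerTag>())
            {
                tag.transform.rotation = Quaternion.Euler(0f, rotation.Sample(), 0f);
            }
        }
    }

    // Attach this tag to each GameObject the Randomizer should affect.
    public class MyYRotationRandomizerTag : RandomizerTag { }

The pattern is always the same pair: a Randomizer that samples parameters each iteration, and a RandomizerTag that marks which scene objects it applies to.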


Part 3: Data Collection and Model Training

This part includes running data collection with the Perception Package, and using that data to train a deep learning model. The training step can take some time. If you'd like, you can skip that step by using our pre-trained model.

To measure the success of grasping in simulation using our pre-trained model for pose estimation, we ran 100 trials and got the following results:

                      Success   Failures   Percent Success
Without occlusion        82          5                94
With occlusion            7          6                54
All                      89         11                89

Note: Data for the above experiment was collected in Unity 2020.2.1f1.


Part 4: Pick-and-Place

This part includes the preparation and setup necessary to run a pick-and-place task using MoveIt. Here, the cube pose is predicted by the trained deep learning model. Steps covered include (a Unity-side sketch of the service call follows the list):

  • Creating and invoking a motion planning service in ROS
  • Sending captured RGB images from our scene to the ROS Pose Estimation node for inference
  • Using a Python script to run inference on our trained deep learning model
  • Moving Unity Articulation Bodies based on a calculated trajectory
  • Controlling a gripping tool to successfully grasp and drop an object
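
As a rough illustration of the first two steps, the following sketch shows the general shape of a Unity-side service call with the ROS-TCP-Connector. This is a hedged sketch rather than the tutorial's exact code: the service name, message types, and response fields below are assumptions, and the API shown matches more recent ROS-TCP-Connector releases:

    using RosMessageTypes.Ur3Moveit;               // assumed generated message namespace
    using Unity.Robotics.ROSTCPConnector;
    using UnityEngine;

    public class TrajectoryPlannerSketch : MonoBehaviour
    {
        const string k_ServiceName = "ur3_moveit"; // assumed ROS service name

        ROSConnection m_Ros;

        void Start()
        {
            m_Ros = ROSConnection.GetOrCreateInstance();
            // Tell the endpoint this service's request/response message types.
            m_Ros.RegisterRosService<MoverServiceRequest, MoverServiceResponse>(k_ServiceName);
        }

        public void RequestTrajectory(MoverServiceRequest request)
        {
            // The request carries the current joint state plus the pick and place
            // poses; the pick pose comes from the pose estimation model.
            m_Ros.SendServiceMessage<MoverServiceResponse>(k_ServiceName, request, OnTrajectoryResponse);
        }

        void OnTrajectoryResponse(MoverServiceResponse response)
        {
            if (response.trajectories != null && response.trajectories.Length > 0)
            {
                Debug.Log("Trajectory received from MoveIt; executing...");
                // Step through each trajectory's points, writing joint targets to
                // the arm's ArticulationBody drives, then actuate the gripper.
            }
            else
            {
                Debug.LogError("No trajectory returned from MoveIt.");
            }
        }
    }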

Support

For questions or discussions about Unity Robotics package installations or how to best set up and integrate your robotics projects, please create a new thread on the Unity Robotics forum and make sure to include as much detail as possible.

For feature requests, bugs, or other issues, please file a GitHub issue using the provided templates and the Robotics team will investigate as soon as possible.

For any other questions or feedback, connect directly with the Robotics team at [email protected].

More from Unity Robotics

Visit the Robotics Hub for more tutorials, tools, and information on robotics simulation in Unity!

License

Apache License 2.0

robotics-object-pose-estimation's People

Contributors

at669, jonathanhunity, jonathanleban, mkamalza, peifeng-unity, repodb[bot], sarahwolf32, sleal-unity


robotics-object-pose-estimation's Issues

Not compatible with latest Perception SDK build 0.8.0-preview.3

While importing the scene, PoseEstimationScenario.cs has errors:
1. Assets/TutorialAssets/Scripts/PoseEstimationScenario.cs(27,26): error CS0507: 'PoseEstimationScenario.isIterationComplete': cannot change access modifiers when overriding 'protected' inherited member 'ScenarioBase.isIterationComplete'
2. Assets/TutorialAssets/Scripts/PoseEstimationScenario.cs(28,26): error CS0507: 'PoseEstimationScenario.isScenarioComplete': cannot change access modifiers when overriding 'protected' inherited member 'ScenarioBase.isScenarioComplete'
3. Assets/TutorialAssets/Scripts/PoseEstimationScenario.cs(10,14): error CS0534: 'PoseEstimationScenario' does not implement inherited abstract member 'ScenarioBase.isScenarioReadyToStart.get'

I modified the file to fix these errors and generated data, but noticed that metrics.json files are not being created; only captures.json files are. When training starts, it errors out partway through while looking for metrics data. I had to revert to the 0.7.0-preview.2 build to regenerate data for training.

Can this be updated to be compatible with the Perception 0.8.0-preview.3 (or latest) build? It is suspected that the previous builds had a bug that produced erroneous bounding box data, which led to poor training results for pose estimation.
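
For reference, the shape of the modification described above is roughly the following. This is a hedged sketch based only on the compiler errors quoted; the member bodies are placeholders and the exact signatures may differ in your Perception version:

    // In PoseEstimationScenario.cs: keep the inherited 'protected' access on the
    // overridden members (fixes CS0507) and implement the abstract member added
    // in Perception 0.8.0-preview.3 (fixes CS0534).
    public class PoseEstimationScenario : ScenarioBase
    {
        public bool shouldIterate;

        protected override bool isIterationComplete => shouldIterate;
        protected override bool isScenarioComplete => false;
        protected override bool isScenarioReadyToStart => true;

        // ... remainder of the scenario unchanged ...
    }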

Unity compilation issues

Describe the bug
Unity cannot compile the demo repo because there are errors.


When using Unity 2023, this is the error:
ForwardRenderer is missing RendererFeatures
This could be due to missing scripts or compile error.
UnityEngine.Rendering.Universal.ScriptableRendererData:OnValidate () (at ./Library/PackageCache/[email protected]/Runtime/ScriptableRendererData.cs:86)

When opening the project with Unity 2020.2, as the instructions say, two errors show up with no descriptions.

To Reproduce
Use Unity to open the demo project on an Ubuntu machine and try to run it.

Expected behavior
The project should compile and the demo should run.

Screenshots
[screenshots attached]

Environment (please complete the following information, where applicable):

  • Unity Version: Unity 2020.2.6f1
  • Unity machine OS + version: Ubuntu 23.04
  • ROS machine OS + version: Ubuntu 23.04
  • ROS–Unity communication: Docker

Error in Mover.py

When I follow along with the Object Pose tutorial, I get an error at the step where I click the pose estimation button (in Unity).

[screenshot attached]

The error above keeps occurring; I have also tried resetting the arm beforehand.

Environment:

  • Unity Version: Unity 2020.2.4f1
  • Unity machine OS + version: Windows 10
  • ROS machine OS + version: Ubuntu 20.04 LTS ROS Noetic
  • ROS–Unity communication: native

Unable to install URDF Importer Package

Hello. I'm working on Ubuntu 20 with UnityHub 2.4.2 and Unity Editor 2020.2.7f1. When I try to install the URDF Importer Package: https://github.com/Unity-Technologies/URDF-Importer.git?path=/com.unity.robotics.urdf-importer#v0.2.0, I get the following errors:

  • Script 'Joint' has the same name as built-in Unity component. AddComponent and GetComponent will not work with this script.
  • Library/PackageCache/com.unity.robotics.urdf-importer@f663d6f959/Runtime/Controller/JointControl.cs(15,12): error CS0246: The type or namespace name 'ArticulationBody' could not be found (are you missing a using directive or an assembly reference?)
  • Library/PackageCache/com.unity.robotics.urdf-importer@f663d6f959/Runtime/Controller/FKRobot.cs(12,21): error CS0246: The type or namespace name 'ArticulationBody' could not be found (are you missing a using directive or an assembly reference?)
  • Library/PackageCache/com.unity.robotics.urdf-importer@f663d6f959/Runtime/UrdfComponents/UrdfJoints/UrdfJointPrismatic.cs(23,17): error CS0246: The type or namespace name 'ArticulationDrive' could not be found (are you missing a using directive or an assembly reference?)
  • Library/PackageCache/com.unity.robotics.urdf-importer@f663d6f959/Runtime/Controller/Controller.cs(13,17): error CS0246: The type or namespace name 'ArticulationBody' could not be found (are you missing a using directive or an assembly reference?)

A lot of pick-up errors

Hi,

In my build, the robot almost never succeeds in picking up the cube. Even though I get the shell message "You can start planning", I've noticed three errors in the Docker workspace:

  1. [controller_spawner-3]
  2. [ERROR] [1650563249.826889700]: Could not find the planner configuration 'None' on the param server
  3. [ERROR] [1650563266.917313200]: Action client not connected: /follow_joint_trajectory

Are any of these possibly related?

Thank you very much for your time.

ROS failed when I changed the camera rotation

Describe the bug
[screenshot attached]

To Reproduce
Steps to reproduce the behavior:
Just change the camera rotation as in the screenshot below; the default value is 20.
[screenshot attached]

Additional context

I don't know why it works fine with the default camera, but when I change its rotation, it fails.

Cube not rotating

Hello, thank you for the very beneficial tutorial, I'm currently going through it.
In Part 2, I've followed the tutorial up to step 10 without errors. In step 10, the cube is overlaid with a green bounding box; however, it is not rotating. Any idea what the problem could be? I'm using Unity 2020.2.0f1.

The following is a screenshot of my editor.
[screenshot attached]

If I continue to step 11, the same thing happens: the cube moves to a position and then stops moving, as in the photo attached:

[screenshot attached]

Reporting a vulnerability

Hello!

I hope you are doing well!

We are a security research team. Our tool automatically detected a vulnerability in this repository. We want to disclose it responsibly. GitHub has a feature called Private vulnerability reporting, which enables security researchers to disclose a vulnerability privately. Unfortunately, it is not enabled for this repository.

Can you enable it, so that we can report it?

Thanks in advance!

PS: you can read about how to enable private vulnerability reporting here: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository

Resume Training

How can I resume training after it was stopped partway through?

The Cube label for data collection is misplaced in a weird way

Describe the bug

The Cube label is misplaced in a weird way.

To Reproduce

Steps to reproduce the behavior:

Just run the demo project with the Perception camera turned on (I was trying to collect images for model training).

Screenshots

[screenshot attached]

Environment:

  • Unity Version: Unity 2020.2.6f1 (as suggested)
  • Unity machine OS + version: MacOS 12.1
  • ROS machine OS + version: As suggested
  • ROS–Unity communication: Docker
  • Package branches or versions: As suggested

Errors when training the deep learning code

INFO | 2021-11-28 19:26:19 | pose_estimation.pose_estimation_estimator | MainThread | intermediate mean orientation loss after mini batch 689 in epoch 1 is: 2.1472411858257803
INFO | 2021-11-28 19:26:19 | pose_estimation.pose_estimation_estimator | MainThread | intermediate mean translation loss after mini batch 690 in epoch 1 is: 0.049511323997453935
INFO | 2021-11-28 19:26:19 | pose_estimation.pose_estimation_estimator | MainThread | intermediate mean orientation loss after mini batch 690 in epoch 1 is: 2.1470717681108926
Traceback (most recent call last):
  File "/home/e509/miniconda3/envs/pose_estimation/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/e509/miniconda3/envs/pose_estimation/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/e509/unity/Robotics-Object-Pose-Estimation/Model/pose_estimation/cli.py", line 143, in <module>
    main()
  File "/home/e509/unity/Robotics-Object-Pose-Estimation/Model/pose_estimation/cli.py", line 133, in main
    estimator.train()
  File "/home/e509/unity/Robotics-Object-Pose-Estimation/Model/pose_estimation/pose_estimation_estimator.py", line 79, in train
    train_model(self)
  File "/home/e509/unity/Robotics-Object-Pose-Estimation/Model/pose_estimation/train.py", line 53, in train_model
    train_loop(
  File "/home/e509/unity/Robotics-Object-Pose-Estimation/Model/pose_estimation/train.py", line 85, in train_loop
    _train_one_epoch(
  File "/home/e509/unity/Robotics-Object-Pose-Estimation/Model/pose_estimation/train.py", line 128, in _train_one_epoch
    metric_translation, metric_orientation = evaluation_over_batch(
  File "/home/e509/unity/Robotics-Object-Pose-Estimation/Model/pose_estimation/evaluate.py", line 124, in evaluation_over_batch
    for index, (images, target_translation_list, target_orientation_list) in enumerate(
  File "/home/e509/miniconda3/envs/pose_estimation/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/e509/miniconda3/envs/pose_estimation/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/e509/miniconda3/envs/pose_estimation/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
    data.append(next(self.dataset_iter))
  File "/home/e509/unity/Robotics-Object-Pose-Estimation/Model/pose_estimation/single_cube_dataset.py", line 254, in __next__
    return self.next()
  File "/home/e509/unity/Robotics-Object-Pose-Estimation/Model/pose_estimation/single_cube_dataset.py", line 249, in next
    data = json.load(file)
  File "/home/e509/miniconda3/envs/pose_estimation/lib/python3.8/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/home/e509/miniconda3/envs/pose_estimation/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/home/e509/miniconda3/envs/pose_estimation/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/e509/miniconda3/envs/pose_estimation/lib/python3.8/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Error: arm/arm: Unable to sample any valid states for goal tree

Hello there,
I am trying to build the robotics-object-pose-estimation project on my local machine, but after I run the ROS server and click on the pose estimation button in Unity, it returns the error "Error: arm/arm: Unable to sample any valid states for goal tree". Any help?
Thanks

Console logs / stack traces

[ERROR] [1663850579.670529300]: arm/arm: Unable to sample any valid states for goal tree

Screenshots

[screenshot attached]

Environment (please complete the following information, where applicable):

  • Unity Version: [e.g. Unity 2021.3.9f1]
  • Unity machine OS + version: [e.g. Windows 11]
  • ROS machine OS + version: [e.g. Ubuntu 18.04, ROS Noetic]
  • ROS–Unity communication: [e.g. Docker]
  • Package branches or versions: [e.g. [email protected]]

Multiple cube detection

Hello,

First of all thank you for sharing this useful environment.

I am trying to use this environment to detect objects in a box of objects. The problem is that I don't know how to run inference on an image containing multiple objects.

Would it be possible to give an image to the model and get multiple object position predictions as output?

Thank you in advance.

ROS_controller issue

I am launching the pose_est.launch file in ROS and noticing that the project fails out of the box:

The spawner for controller_spawner fails with additional errors.

nrose@LAPTOP-1KFPQ7FH:/mnt/c/Users/rosen/retry/poseEstimation/Robotics-Object-Pose-Estimation$ docker run -it --rm -p 1000:1000 -p 5005:5005 poseestimation:ur3 /bin/bash
root@c9c0df698037:/catkin_ws# source devel/setup.bash 
root@c9c0df698037:/catkin_ws# roslaunch ur3_moveit pose_est.launch 
... logging to /root/.ros/log/d54e6e76-a375-11eb-9f9c-0242ac110002/roslaunch-c9c0df698037-81.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://c9c0df698037:43691/

SUMMARY
========

PARAMETERS
 * /ROS_IP: 172.17.0.2
 * /controller_list: [{'name': '', 'ac...
 * /joint_state_publisher/publish_frequency: 50.0
 * /move_group/allow_trajectory_execution: True
 * /move_group/arm/default_planner_config: None
 * /move_group/arm/planner_configs: ['SBL', 'EST', 'L...
 * /move_group/capabilities: 
 * /move_group/controller_list: [{'name': '', 'ac...
 * /move_group/disable_capabilities: 
 * /move_group/hand/default_planner_config: 
 * /move_group/hand/planner_configs: ['SBL', 'EST', 'L...
 * /move_group/jiggle_fraction: 0.05
 * /move_group/max_range: 5.0
 * /move_group/max_safe_path_cost: 1
 * /move_group/moveit_controller_manager: moveit_simple_con...
 * /move_group/moveit_manage_controllers: True
 * /move_group/octomap_resolution: 0.025
 * /move_group/planner_configs/BFMT/balanced: 0
 * /move_group/planner_configs/BFMT/cache_cc: 1
 * /move_group/planner_configs/BFMT/extended_fmt: 1
 * /move_group/planner_configs/BFMT/heuristics: 1
 * /move_group/planner_configs/BFMT/nearest_k: 1
 * /move_group/planner_configs/BFMT/num_samples: 1000
 * /move_group/planner_configs/BFMT/optimality: 1
 * /move_group/planner_configs/BFMT/radius_multiplier: 1.0
 * /move_group/planner_configs/BFMT/type: geometric::BFMT
 * /move_group/planner_configs/BKPIECE/border_fraction: 0.9
 * /move_group/planner_configs/BKPIECE/failed_expansion_score_factor: 0.5
 * /move_group/planner_configs/BKPIECE/min_valid_path_fraction: 0.5
 * /move_group/planner_configs/BKPIECE/range: 0.0
 * /move_group/planner_configs/BKPIECE/type: geometric::BKPIECE
 * /move_group/planner_configs/BiEST/range: 0.0
 * /move_group/planner_configs/BiEST/type: geometric::BiEST
 * /move_group/planner_configs/BiTRRT/cost_threshold: 1e300
 * /move_group/planner_configs/BiTRRT/frountier_node_ratio: 0.1
 * /move_group/planner_configs/BiTRRT/frountier_threshold: 0.0
 * /move_group/planner_configs/BiTRRT/init_temperature: 100
 * /move_group/planner_configs/BiTRRT/range: 0.0
 * /move_group/planner_configs/BiTRRT/temp_change_factor: 0.1
 * /move_group/planner_configs/BiTRRT/type: geometric::BiTRRT
 * /move_group/planner_configs/EST/goal_bias: 0.05
 * /move_group/planner_configs/EST/range: 0.0
 * /move_group/planner_configs/EST/type: geometric::EST
 * /move_group/planner_configs/FMT/cache_cc: 1
 * /move_group/planner_configs/FMT/extended_fmt: 1
 * /move_group/planner_configs/FMT/heuristics: 0
 * /move_group/planner_configs/FMT/nearest_k: 1
 * /move_group/planner_configs/FMT/num_samples: 1000
 * /move_group/planner_configs/FMT/radius_multiplier: 1.1
 * /move_group/planner_configs/FMT/type: geometric::FMT
 * /move_group/planner_configs/KPIECE/border_fraction: 0.9
 * /move_group/planner_configs/KPIECE/failed_expansion_score_factor: 0.5
 * /move_group/planner_configs/KPIECE/goal_bias: 0.05
 * /move_group/planner_configs/KPIECE/min_valid_path_fraction: 0.5
 * /move_group/planner_configs/KPIECE/range: 0.0
 * /move_group/planner_configs/KPIECE/type: geometric::KPIECE
 * /move_group/planner_configs/LBKPIECE/border_fraction: 0.9
 * /move_group/planner_configs/LBKPIECE/min_valid_path_fraction: 0.5
 * /move_group/planner_configs/LBKPIECE/range: 0.0
 * /move_group/planner_configs/LBKPIECE/type: geometric::LBKPIECE
 * /move_group/planner_configs/LBTRRT/epsilon: 0.4
 * /move_group/planner_configs/LBTRRT/goal_bias: 0.05
 * /move_group/planner_configs/LBTRRT/range: 0.0
 * /move_group/planner_configs/LBTRRT/type: geometric::LBTRRT
 * /move_group/planner_configs/LazyPRM/range: 0.0
 * /move_group/planner_configs/LazyPRM/type: geometric::LazyPRM
 * /move_group/planner_configs/LazyPRMstar/type: geometric::LazyPR...
 * /move_group/planner_configs/PDST/type: geometric::PDST
 * /move_group/planner_configs/PRM/max_nearest_neighbors: 10
 * /move_group/planner_configs/PRM/type: geometric::PRM
 * /move_group/planner_configs/PRMstar/type: geometric::PRMstar
 * /move_group/planner_configs/ProjEST/goal_bias: 0.05
 * /move_group/planner_configs/ProjEST/range: 0.0
 * /move_group/planner_configs/ProjEST/type: geometric::ProjEST
 * /move_group/planner_configs/RRT/goal_bias: 0.05
 * /move_group/planner_configs/RRT/range: 0.0
 * /move_group/planner_configs/RRT/type: geometric::RRT
 * /move_group/planner_configs/RRTConnect/range: 0.0
 * /move_group/planner_configs/RRTConnect/type: geometric::RRTCon...
 * /move_group/planner_configs/RRTstar/delay_collision_checking: 1
 * /move_group/planner_configs/RRTstar/goal_bias: 0.05
 * /move_group/planner_configs/RRTstar/range: 0.0
 * /move_group/planner_configs/RRTstar/type: geometric::RRTstar
 * /move_group/planner_configs/SBL/range: 0.0
 * /move_group/planner_configs/SBL/type: geometric::SBL
 * /move_group/planner_configs/SPARS/dense_delta_fraction: 0.001
 * /move_group/planner_configs/SPARS/max_failures: 1000
 * /move_group/planner_configs/SPARS/sparse_delta_fraction: 0.25
 * /move_group/planner_configs/SPARS/stretch_factor: 3.0
 * /move_group/planner_configs/SPARS/type: geometric::SPARS
 * /move_group/planner_configs/SPARStwo/dense_delta_fraction: 0.001
 * /move_group/planner_configs/SPARStwo/max_failures: 5000
 * /move_group/planner_configs/SPARStwo/sparse_delta_fraction: 0.25
 * /move_group/planner_configs/SPARStwo/stretch_factor: 3.0
 * /move_group/planner_configs/SPARStwo/type: geometric::SPARStwo
 * /move_group/planner_configs/STRIDE/degree: 16
 * /move_group/planner_configs/STRIDE/estimated_dimension: 0.0
 * /move_group/planner_configs/STRIDE/goal_bias: 0.05
 * /move_group/planner_configs/STRIDE/max_degree: 18
 * /move_group/planner_configs/STRIDE/max_pts_per_leaf: 6
 * /move_group/planner_configs/STRIDE/min_degree: 12
 * /move_group/planner_configs/STRIDE/min_valid_path_fraction: 0.2
 * /move_group/planner_configs/STRIDE/range: 0.0
 * /move_group/planner_configs/STRIDE/type: geometric::STRIDE
 * /move_group/planner_configs/STRIDE/use_projected_distance: 0
 * /move_group/planner_configs/TRRT/frountierNodeRatio: 0.1
 * /move_group/planner_configs/TRRT/frountier_threshold: 0.0
 * /move_group/planner_configs/TRRT/goal_bias: 0.05
 * /move_group/planner_configs/TRRT/init_temperature: 10e-6
 * /move_group/planner_configs/TRRT/k_constant: 0.0
 * /move_group/planner_configs/TRRT/max_states_failed: 10
 * /move_group/planner_configs/TRRT/min_temperature: 10e-10
 * /move_group/planner_configs/TRRT/range: 0.0
 * /move_group/planner_configs/TRRT/temp_change_factor: 2.0
 * /move_group/planner_configs/TRRT/type: geometric::TRRT
 * /move_group/planning_plugin: ompl_interface/OM...
 * /move_group/planning_scene_monitor/publish_geometry_updates: True
 * /move_group/planning_scene_monitor/publish_planning_scene: True
 * /move_group/planning_scene_monitor/publish_state_updates: True
 * /move_group/planning_scene_monitor/publish_transforms_updates: True
 * /move_group/request_adapters: default_planner_r...
 * /move_group/sensors: [{'filtered_cloud...
 * /move_group/start_state_max_bounds_error: 0.1
 * /move_group/trajectory_execution/allowed_execution_duration_scaling: 1.2
 * /move_group/trajectory_execution/allowed_goal_duration_margin: 0.5
 * /move_group/trajectory_execution/allowed_start_tolerance: 0.01
 * /robot_description: <?xml version="1....
 * /robot_description_kinematics/arm/kinematics_solver: kdl_kinematics_pl...
 * /robot_description_kinematics/arm/kinematics_solver_search_resolution: 0.005
 * /robot_description_kinematics/arm/kinematics_solver_timeout: 0.005
 * /robot_description_planning/joint_limits/elbow_joint/has_acceleration_limits: False
 * /robot_description_planning/joint_limits/elbow_joint/has_velocity_limits: True
 * /robot_description_planning/joint_limits/elbow_joint/max_acceleration: 0
 * /robot_description_planning/joint_limits/elbow_joint/max_velocity: 3.15
 * /robot_description_planning/joint_limits/finger_joint/has_acceleration_limits: False
 * /robot_description_planning/joint_limits/finger_joint/has_velocity_limits: True
 * /robot_description_planning/joint_limits/finger_joint/max_acceleration: 0
 * /robot_description_planning/joint_limits/finger_joint/max_velocity: 2
 * /robot_description_planning/joint_limits/left_inner_finger_joint/has_acceleration_limits: False
 * /robot_description_planning/joint_limits/left_inner_finger_joint/has_velocity_limits: True
 * /robot_description_planning/joint_limits/left_inner_finger_joint/max_acceleration: 0
 * /robot_description_planning/joint_limits/left_inner_finger_joint/max_velocity: 2
 * /robot_description_planning/joint_limits/left_inner_knuckle_joint/has_acceleration_limits: False
 * /robot_description_planning/joint_limits/left_inner_knuckle_joint/has_velocity_limits: True
 * /robot_description_planning/joint_limits/left_inner_knuckle_joint/max_acceleration: 0
 * /robot_description_planning/joint_limits/left_inner_knuckle_joint/max_velocity: 2
 * /robot_description_planning/joint_limits/right_inner_finger_joint/has_acceleration_limits: False
 * /robot_description_planning/joint_limits/right_inner_finger_joint/has_velocity_limits: True
 * /robot_description_planning/joint_limits/right_inner_finger_joint/max_acceleration: 0
 * /robot_description_planning/joint_limits/right_inner_finger_joint/max_velocity: 2
 * /robot_description_planning/joint_limits/right_inner_knuckle_joint/has_acceleration_limits: False
 * /robot_description_planning/joint_limits/right_inner_knuckle_joint/has_velocity_limits: True
 * /robot_description_planning/joint_limits/right_inner_knuckle_joint/max_acceleration: 0
 * /robot_description_planning/joint_limits/right_inner_knuckle_joint/max_velocity: 2
 * /robot_description_planning/joint_limits/right_outer_knuckle_joint/has_acceleration_limits: False
 * /robot_description_planning/joint_limits/right_outer_knuckle_joint/has_velocity_limits: True
 * /robot_description_planning/joint_limits/right_outer_knuckle_joint/max_acceleration: 0
 * /robot_description_planning/joint_limits/right_outer_knuckle_joint/max_velocity: 2
 * /robot_description_planning/joint_limits/shoulder_lift_joint/has_acceleration_limits: False
 * /robot_description_planning/joint_limits/shoulder_lift_joint/has_velocity_limits: True
 * /robot_description_planning/joint_limits/shoulder_lift_joint/max_acceleration: 0
 * /robot_description_planning/joint_limits/shoulder_lift_joint/max_velocity: 2.16
 * /robot_description_planning/joint_limits/shoulder_pan_joint/has_acceleration_limits: False
 * /robot_description_planning/joint_limits/shoulder_pan_joint/has_velocity_limits: True
 * /robot_description_planning/joint_limits/shoulder_pan_joint/max_acceleration: 0
 * /robot_description_planning/joint_limits/shoulder_pan_joint/max_velocity: 2.16
 * /robot_description_planning/joint_limits/wrist_1_joint/has_acceleration_limits: False
 * /robot_description_planning/joint_limits/wrist_1_joint/has_velocity_limits: True
 * /robot_description_planning/joint_limits/wrist_1_joint/max_acceleration: 0
 * /robot_description_planning/joint_limits/wrist_1_joint/max_velocity: 3.2
 * /robot_description_planning/joint_limits/wrist_2_joint/has_acceleration_limits: False
 * /robot_description_planning/joint_limits/wrist_2_joint/has_velocity_limits: True
 * /robot_description_planning/joint_limits/wrist_2_joint/max_acceleration: 0
 * /robot_description_planning/joint_limits/wrist_2_joint/max_velocity: 3.2
 * /robot_description_planning/joint_limits/wrist_3_joint/has_acceleration_limits: False
 * /robot_description_planning/joint_limits/wrist_3_joint/has_velocity_limits: True
 * /robot_description_planning/joint_limits/wrist_3_joint/max_acceleration: 0
 * /robot_description_planning/joint_limits/wrist_3_joint/max_velocity: 3.2
 * /robot_description_semantic: <?xml version="1....
 * /rosdistro: noetic
 * /rosversion: 1.15.9
 * /source_list: ['/arm_controller...

NODES
  /
    arm_controller_spawner (controller_manager/controller_manager)
    controller_spawner (controller_manager/spawner)
    joint_state_publisher (joint_state_publisher/joint_state_publisher)
    move_group (moveit_ros_move_group/move_group)
    mover (ur3_moveit/mover.py)
    pose_estimation (ur3_moveit/pose_estimation_script.py)
    ros_control_controller_manager (controller_manager/controller_manager)
    server_endpoint (ur3_moveit/server_endpoint.py)

auto-starting new master
process[master]: started with pid [89]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to d54e6e76-a375-11eb-9f9c-0242ac110002
process[rosout-1]: started with pid [99]
started core service [/rosout]
process[joint_state_publisher-2]: started with pid [102]
process[controller_spawner-3]: started with pid [107]
process[arm_controller_spawner-4]: started with pid [108]
process[ros_control_controller_manager-5]: started with pid [109]
process[move_group-6]: started with pid [110]
process[server_endpoint-7]: started with pid [111]
process[pose_estimation-8]: started with pid [112]
process[mover-9]: started with pid [113]
[ INFO] [1619101206.606254500]: Loading robot model 'ur3_with_gripper'...
INFO: cannot create a symlink to latest log directory: [Errno 2] No such file or directory: '/root/.ros/log/latest'
usage: spawner [-h] [--stopped] [--wait-for topic] [--namespace ns] [--timeout T] [--no-timeout] [--shutdown-timeout SHUTDOWN_TIMEOUT]
               controller [controller ...]
spawner: error: the following arguments are required: controller
[controller_spawner-3] process has died [pid 107, exit code 2, cmd /opt/ros/noetic/lib/controller_manager/spawner __name:=controller_spawner __log:=/root/.ros/log/d54e6e76-a375-11eb-9f9c-0242ac110002/controller_spawner-3.log].
log file: /root/.ros/log/d54e6e76-a375-11eb-9f9c-0242ac110002/controller_spawner-3*.log
[ INFO] [1619101207.100121900]: Publishing maintained planning scene on 'monitored_planning_scene'
[ INFO] [1619101207.106714800]: MoveGroup debug mode is OFF
Starting planning scene monitors...
[ INFO] [1619101207.106886400]: Starting planning scene monitor
[ INFO] [1619101207.112188000]: Listening to '/planning_scene'
[ INFO] [1619101207.112463500]: Starting world geometry update monitor for collision objects, attached objects, octomap updates.
[ INFO] [1619101207.117832500]: Listening to '/collision_object'
[ INFO] [1619101207.123391500]: Listening to '/planning_scene_world' for planning scene world geometry
[ INFO] [1619101207.582600100]: Listening to '/head_mount_kinect/depth_registered/points' using message filter with target frame 'world '
[ INFO] [1619101207.593604900]: Listening to '/attached_collision_object' for attached collision objects
Planning scene monitors started.
[ERROR] [1619101207.674301200]: Could not find the planner configuration 'None' on the param server
[ INFO] [1619101207.743439700]: Using planning interface 'OMPL'
[ INFO] [1619101207.750457200]: Param 'default_workspace_bounds' was not set. Using default value: 10
[ INFO] [1619101207.751574400]: Param 'start_state_max_bounds_error' was set to 0.1
[ INFO] [1619101207.752753000]: Param 'start_state_max_dt' was not set. Using default value: 0.5
[ INFO] [1619101207.754203400]: Param 'start_state_max_dt' was not set. Using default value: 0.5
[ INFO] [1619101207.755600200]: Param 'jiggle_fraction' was set to 0.05
[ INFO] [1619101207.757050500]: Param 'max_sampling_attempts' was not set. Using default value: 100
[ INFO] [1619101207.757251700]: Using planning request adapter 'Add Time Parameterization'
[ INFO] [1619101207.757371200]: Using planning request adapter 'Fix Workspace Bounds'
[ INFO] [1619101207.757478800]: Using planning request adapter 'Fix Start State Bounds'
[ INFO] [1619101207.757579900]: Using planning request adapter 'Fix Start State In Collision'
[ INFO] [1619101207.757646300]: Using planning request adapter 'Fix Start State Path Constraints'
[INFO] [1619101208.740942]: Starting server on 172.17.0.2:10000
Ready to estimate pose!
Ready to plan
[ WARN] [1619101212.798241100]: Waiting for /follow_joint_trajectory to come up
[ WARN] [1619101218.799140500]: Waiting for /follow_joint_trajectory to come up
[ERROR] [1619101224.799778900]: Action client not connected: /follow_joint_trajectory
[ INFO] [1619101224.835921200]: Returned 0 controllers in list
[ INFO] [1619101224.861825400]: Trajectory execution is managing controllers
Loading 'move_group/ApplyPlanningSceneService'...
Loading 'move_group/ClearOctomapService'...
Loading 'move_group/MoveGroupCartesianPathService'...
Loading 'move_group/MoveGroupExecuteTrajectoryAction'...
Loading 'move_group/MoveGroupGetPlanningSceneService'...
Loading 'move_group/MoveGroupKinematicsService'...
Loading 'move_group/MoveGroupMoveAction'...
Loading 'move_group/MoveGroupPickPlaceAction'...
Loading 'move_group/MoveGroupPlanService'...
Loading 'move_group/MoveGroupQueryPlannersService'...
Loading 'move_group/MoveGroupStateValidationService'...
[ INFO] [1619101224.996874800]: 

********************************************************
* MoveGroup using: 
*     - ApplyPlanningSceneService
*     - ClearOctomapService
*     - CartesianPathService
*     - ExecuteTrajectoryAction
*     - GetPlanningSceneService
*     - KinematicsService
*     - MoveAction
*     - PickPlaceAction
*     - MotionPlanService
*     - QueryPlannersService
*     - StateValidationService
********************************************************

[ INFO] [1619101224.997049900]: MoveGroup context using planning plugin ompl_interface/OMPLPlanner
[ INFO] [1619101224.997128700]: MoveGroup context initialization complete

You can start planning now!

Problems in step 2 of the tutorial - Add and Set Up Randomizers

Hi there,
I was trying to follow the steps of the tutorial and came across an impasse. More specifically, in Part 2, when I search for the C# scripts to add them to the "Simulation Scenario" GameObject, the search bar reports them as "not found" at this stage:

[screenshot attached]

I tried different versions of the Unity Editor without success. Starting from the "Domain Randomization" topic, some C# scripts are not recognized as components on a given GameObject.
Could you steer me in the right direction?
Thank you in advance.

tcp endpoint 0.7 update

For anyone trying to use the generated message classes: they will not work with the current TCP endpoint package (version 0.7 and above).
Here's a quick fix for the error

The type or namespace name 'Pose' does not exist in the namespace 'RosMessageTypes.Geometry' (are you missing an assembly reference?)

or any similar namespace-name errors:

  • Just add Msg to the end of the message's class name, as in the sketch below.
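
In context, the rename looks like this (a sketch; the rgb_image field is an illustrative extra, not copied from the repo):

    // Before (ROS-TCP-Connector 0.7 and above cannot find these types):
    public Geometry.Pose pick_pose;
    public Sensor.Image rgb_image;

    // After: append "Msg" to each generated message class name.
    public Geometry.PoseMsg pick_pose;
    public Sensor.ImageMsg rgb_image;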

Broken gripper with TGS physics solver

Hi, I have been testing the project and found that the gripper "breaks" when using the improved Temporal Gauss-Seidel (TGS) physics solver. Here is a repro of the problem, where I open the pre-made project and change the solver from Projected Gauss-Seidel to TGS:

[video attached]

Can I run the training on Google Colab?

Since I don't have a machine powerful enough to train the VGG-16 CNN, I would like to run Part 3 of the tutorial on Google's GPUs. Is it possible to run it in that environment? If so, could you explain how?

Thanks :)

Latest ROS-TCP-Connector package version 0.6.0 throws error

I wanted to upgrade the ROS TCP Connector package from v0.2.0light to the newest version, v0.6.0, but now I keep getting this error:

Unity.Robotics.ROSTCPConnector.RosTopicState.SendServiceRequest (Unity.Robotics.ROSTCPConnector.MessageGeneration.Message requestMessage, System.Int32 serviceId) (at Library/PackageCache/com.unity.robotics.ros-tcp-connector@7f055108f4/Runtime/TcpConnector/RosTopicState.cs:234)
Unity.Robotics.ROSTCPConnector.ROSConnection.SendServiceMessage[RESPONSE] (System.String rosServiceName, Unity.Robotics.ROSTCPConnector.MessageGeneration.Message serviceRequest) (at Library/PackageCache/com.unity.robotics.ros-tcp-connector@7f055108f4/Runtime/TcpConnector/ROSConnection.cs:310)
Unity.Robotics.ROSTCPConnector.ROSConnection.SendServiceMessage[RESPONSE] (System.String rosServiceName, Unity.Robotics.ROSTCPConnector.MessageGeneration.Message serviceRequest, System.Action`1[T] callback) (at Library/PackageCache/com.unity.robotics.ros-tcp-connector@7f055108f4/Runtime/TcpConnector/ROSConnection.cs:283)
System.Runtime.CompilerServices.AsyncMethodBuilderCore+<>c.<ThrowAsync>b__7_0 (System.Object state) (at <31c0f51ac5a24a22ba784db24f4ba023>:0)
UnityEngine.UnitySynchronizationContext+WorkRequest.Invoke () (at <ad50157ee00e45cdb3c8bd67012f8804>:0)

How can I control grasp width?

Hello, I changed the target object from the cube to another 3D model from the LineMOD dataset: https://bop.felk.cvut.cz/datasets/.
I also changed the model used to estimate the target pose, and it correctly estimates the target position.
However, the robot knuckles can't grasp my target object (the grasp width is always wider than the object). To solve this problem, I've tried to find a grasp-width parameter, but I couldn't find one anywhere (YAML files, mover.py, ...).

Can you suggest any idea to solve this problem?

Thank you.

Is this project compatible with Ubuntu 18.04?

Hello there,

I am trying to install this under Ubuntu 18.04. In the READMEs I can't find any OS requirement, so I just went ahead and did it. catkin_make works as expected, but when I try to run roslaunch ur3_moveit pose_est.launch I get Python 2.7 related errors, probably because, AFAIK, Melodic is Python 2 based and Noetic is Python 3 based. I think there are ways of enabling Python 3 in Melodic. I will try that and post an update here.

Environment

  • Unity 2020.2.7f1
  • Ubuntu 18.04
  • ROS Melodic

System.Net.SocketException: Address already in use

Hello Team,

I'm getting the System.Net.SocketException: Address already in use error from the Unity console.

The troubleshooting workaround of leaving the Override Unity IP Address blank and changing the ROS IP Address to the IP of the Docker container didn't fix the error.

Docker IP Configuration,

root@d283c453367f:/catkin_ws# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.3  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:ac:11:00:03  txqueuelen 0  (Ethernet)
        RX packets 179  bytes 24664 (24.6 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 61  bytes 4008 (4.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 53259  bytes 14479754 (14.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53259  bytes 14479754 (14.4 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[screenshot of the Unity IP configuration attached]

Regards,
Jegathesan S

Could NOT find ros_tcp_endpoint

In Pick-and-Place with Object Pose Estimation: Quick Demo, "Set Up the ROS Side", Step 2, I run
docker build -t unity-robotics:pose-estimation -f docker/Dockerfile .
and it shows the error below. What should I do? Thanks!

E:\UnityProjects\2020\Robotics-Object-Pose-Estimation>docker build -t unity-robotics:pose-estimation -f docker/Dockerfile .
[+] Building 14.2s (17/18)
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 1.41kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ros:noetic-ros-base 5.1s
=> [internal] load build context 2.0s
=> => transferring context: 110.51MB 1.9s
=> [ 1/14] FROM docker.io/library/ros:noetic-ros-base@sha256:68085c6624824d5ad276450d21377d34dccdc75785707f244a9 0.0s
=> CACHED [ 2/14] RUN sudo apt-get update && sudo apt-get install -y vim iputils-ping net-tools python3-pip ros- 0.0s
=> CACHED [ 3/14] RUN sudo -H pip3 --no-cache-dir install rospkg numpy jsonpickle scipy easydict torch==1.7.1+cu 0.0s
=> CACHED [ 4/14] WORKDIR /catkin_ws 0.0s
=> CACHED [ 5/14] COPY ./ROS/src/moveit_msgs /catkin_ws/src/moveit_msgs 0.0s
=> CACHED [ 6/14] COPY ./ROS/src/robotiq /catkin_ws/src/robotiq 0.0s
=> CACHED [ 7/14] COPY ./ROS/src/ros_tcp_endpoint /catkin_ws/src/ros_tcp_endpoint 0.0s
=> CACHED [ 8/14] COPY ./ROS/src/universal_robot /catkin_ws/src/universal_robot 0.0s
=> [ 9/14] COPY ./ROS/src/ur3_moveit /catkin_ws/src/ur3_moveit 1.1s
=> [10/14] COPY ./docker/set-up-workspace /setup.sh 0.1s
=> [11/14] COPY docker/tutorial / 0.1s
=> [12/14] RUN /bin/bash -c "find /catkin_ws -type f -print0 | xargs -0 dos2unix" 1.0s
=> ERROR [13/14] RUN dos2unix /tutorial && dos2unix /setup.sh && chmod +x /setup.sh && /setup.sh && rm /setup.sh 4.8s

[13/14] RUN dos2unix /tutorial && dos2unix /setup.sh && chmod +x /setup.sh && /setup.sh && rm /setup.sh:
#17 0.402 dos2unix: converting file /tutorial to Unix format...
#17 0.406 dos2unix: converting file /setup.sh to Unix format...
#17 1.304 -- The C compiler identification is GNU 9.3.0
#17 1.548 -- The CXX compiler identification is GNU 9.3.0
#17 1.567 -- Check for working C compiler: /usr/bin/cc
#17 1.694 -- Check for working C compiler: /usr/bin/cc -- works
#17 1.696 -- Detecting C compiler ABI info
#17 1.779 -- Detecting C compiler ABI info - done
#17 1.799 -- Detecting C compile features
#17 1.800 -- Detecting C compile features - done
#17 1.806 -- Check for working CXX compiler: /usr/bin/c++
#17 1.895 -- Check for working CXX compiler: /usr/bin/c++ -- works
#17 1.897 -- Detecting CXX compiler ABI info
#17 1.987 -- Detecting CXX compiler ABI info - done
#17 2.007 -- Detecting CXX compile features
#17 2.008 -- Detecting CXX compile features - done
#17 2.376 -- Using CATKIN_DEVEL_PREFIX: /catkin_ws/devel
#17 2.377 -- Using CMAKE_PREFIX_PATH: /opt/ros/noetic
#17 2.377 -- This workspace overlays: /opt/ros/noetic
#17 2.408 -- Found PythonInterp: /usr/bin/python3 (found suitable version "3.8.5", minimum required is "3")
#17 2.409 -- Using PYTHON_EXECUTABLE: /usr/bin/python3
#17 2.409 -- Using Debian Python package layout
#17 2.447 -- Found PY_em: /usr/lib/python3/dist-packages/em.py
#17 2.447 -- Using empy: /usr/lib/python3/dist-packages/em.py
#17 2.585 -- Using CATKIN_ENABLE_TESTING: ON
#17 2.585 -- Call enable_testing()
#17 2.588 -- Using CATKIN_TEST_RESULTS_DIR: /catkin_ws/build/test_results
#17 3.003 -- Forcing gtest/gmock from source, though one was otherwise available.
#17 3.003 -- Found gtest sources under '/usr/src/googletest': gtests will be built
#17 3.003 -- Found gmock sources under '/usr/src/googletest': gmock will be built
#17 3.033 -- Found PythonInterp: /usr/bin/python3 (found version "3.8.5")
#17 3.036 -- Found Threads: TRUE
#17 3.052 -- Using Python nosetests: /usr/bin/nosetests3
#17 3.119 -- catkin 0.8.9
#17 3.119 -- BUILD_SHARED_LIBS is on
#17 3.289 -- BUILD_SHARED_LIBS is on
#17 3.289 -- Using CATKIN_WHITELIST_PACKAGES: moveit_msgs;ros_tcp_endpoint;ur3_moveit;robotiq_2f_140_gripper_visualization;ur_description;ur_gazebo
#17 4.211 -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#17 4.211 -- ~~ traversing 1 packages in topological order:
#17 4.211 -- ~~ - ur3_moveit
#17 4.211 -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#17 4.212 -- +++ processing catkin package: 'ur3_moveit'
#17 4.212 -- ==> add_subdirectory(ur3_moveit)
#17 4.771 -- Could NOT find ros_tcp_endpoint (missing: ros_tcp_endpoint_DIR)
#17 4.771 -- Could not find the required component 'ros_tcp_endpoint'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found.
#17 4.771 CMake Error at /opt/ros/noetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
#17 4.771 Could not find a package configuration file provided by "ros_tcp_endpoint"
#17 4.771 with any of the following names:
#17 4.771
#17 4.771 ros_tcp_endpointConfig.cmake
#17 4.771 ros_tcp_endpoint-config.cmake
#17 4.771
#17 4.771 Add the installation prefix of "ros_tcp_endpoint" to CMAKE_PREFIX_PATH or
#17 4.771 set "ros_tcp_endpoint_DIR" to a directory containing one of the above
#17 4.771 files. If "ros_tcp_endpoint" provides a separate development package or
#17 4.771 SDK, be sure it has been installed.
#17 4.771 Call Stack (most recent call first):
#17 4.771 ur3_moveit/CMakeLists.txt:13 (find_package)
#17 4.771
#17 4.772
#17 4.775 -- Configuring incomplete, errors occurred!
#17 4.775 See also "/catkin_ws/build/CMakeFiles/CMakeOutput.log".
#17 4.775 See also "/catkin_ws/build/CMakeFiles/CMakeError.log".
#17 4.782 Base path: /catkin_ws
#17 4.782 Source space: /catkin_ws/src
#17 4.782 Build space: /catkin_ws/build
#17 4.782 Devel space: /catkin_ws/devel
#17 4.782 Install space: /catkin_ws/install
#17 4.782 Creating symlink "/catkin_ws/src/CMakeLists.txt" pointing to "/opt/ros/noetic/share/catkin/cmake/toplevel.cmake"
#17 4.782 ####
#17 4.782 #### Running command: "cmake /catkin_ws/src -DCATKIN_WHITELIST_PACKAGES=moveit_msgs;ros_tcp_endpoint;ur3_moveit;robotiq_2f_140_gripper_visualization;ur_description;ur_gazebo -DCATKIN_DEVEL_PREFIX=/catkin_ws/devel -DCMAKE_INSTALL_PREFIX=/catkin_ws/install -G Unix Makefiles" in "/catkin_ws/build"
#17 4.782 ####
#17 4.782 Invoking "cmake" failed


executor failed running [/bin/sh -c dos2unix /tutorial && dos2unix /setup.sh && chmod +x /setup.sh && /setup.sh && rm /setup.sh]: exit code: 1

Pose Estimation not working correctly

Describe the bug

The pose estimation is not executed correctly. I get an error regarding model weights and input not being on the same device.
When I change the device line to

    device = torch.device("cpu")

it works fine.

To Reproduce

I used the demo Unity project, so I did not go through everything in the four READMEs.

Console logs / stack traces

[ERROR] [1640807467.034139]: Error processing request: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
['Traceback (most recent call last):\n', '  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/impl/tcpros_service.py", line 633, in _handle_request\n    response = convert_return_to_response(self.handler(request), self.response_class)\n', '  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/scripts/pose_estimation_script.py", line 96, in pose_estimation_main\n    est_position, est_rotation = _run_model(image_path)\n', '  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/scripts/pose_estimation_script.py", line 52, in _run_model\n    output = run_model_main(image_path, MODEL_PATH)\n', '  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/src/ur3_moveit/setup_and_run_model.py", line 138, in run_model_main\n    output_translation, output_orientation = model(torch.stack(image).reshape(-1, 3, 224, 224))\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl\n    result = self.forward(*input, **kwargs)\n', '  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/src/ur3_moveit/setup_and_run_model.py", line 54, in forward\n    x = self.model_backbone(x)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl\n    result = self.forward(*input, **kwargs)\n', '  File "/usr/local/lib/python3.8/dist-packages/torchvision/models/vgg.py", line 43, in forward\n    x = self.features(x)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl\n    result = self.forward(*input, **kwargs)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/container.py", line 117, in forward\n    input = module(input)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl\n    result = self.forward(*input, **kwargs)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 423, in forward\n    return self._conv_forward(input, self.weight)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 419, in _conv_forward\n    return F.conv2d(input, weight, self.bias, self.stride,\n', 'RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same\n']

Expected behavior

A working pose estimation.

Environment (please complete the following information, where applicable):

  • Unity Version: Unity 2020.2.7f1 (the demo project was built with 2020.2.6f1, an older version)
  • Unity machine OS + version: Ubuntu 20.04
  • ROS machine OS + version: Ubuntu 20.04, ROS Noetic
  • ROS–Unity communication: I installed the ROS environment as described in Part 0
  • Package branches or versions: Version 0.8.0-preview.3 - March 24, 2021

Problems when building the Docker image

I am getting this error when building the Docker image, both on Windows and Ubuntu. I am attaching a screenshot of the error.
I have followed all the steps.

[screenshot attached]

Any suggestions on how to solve this issue?

More complex 3D models

Hello, I would like to know whether this model can work well on complex shapes and models. Would I need to change the data generation process for training, for example capturing images from different camera positions?

And thank you for the project and extensive tutorial!
