
multiworld's People

Contributors

anair13, cdevin, dwiel, hartikainen, shikharbahl, snasiriany, stevenlin1111, vikashplus, vitchyr


multiworld's Issues

Why init 'puck_low/high' with 'hand_low/high'?

In this file, lines 51 to 54 read:

        if puck_low is None:
            puck_low = self.hand_low[:2]
        if puck_high is None:
            puck_high = self.hand_high[:2]

Should we instead init puck_low/high with the default parameters from lines 15 to 16?

            puck_low=(-.4, .2),
            puck_high=(.4, 1),
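
In other words, something like this (a hypothetical sketch of the suggested change; the numpy arrays are my assumption about the expected types):

    import numpy as np

    # Fall back to the puck-specific defaults from the signature
    # instead of reusing the hand bounds.
    if puck_low is None:
        puck_low = np.array([-.4, .2])
    if puck_high is None:
        puck_high = np.array([.4, 1])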

About the observation space for the Sawyer robot

Hi there,

I want to ask: are there any tricks to setting keys like proprio_observation, proprio_desired_goal, and proprio_achieved_goal? Also, are these keys redundant for training?
Because when I only keep:

    state_observation=flat_obs,
    state_desired_goal=self._state_goal,

some experiments based on this environment fail to converge with Soft Actor-Critic. Could you help explain this a little bit?
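
For reference, my understanding is that the full dict returned by _get_obs looks roughly like this (a sketch from memory; the proprio slices in particular are my assumption):

    obs = dict(
        observation=flat_obs,
        desired_goal=self._state_goal,
        achieved_goal=flat_obs,
        state_observation=flat_obs,
        state_desired_goal=self._state_goal,
        state_achieved_goal=flat_obs,
        # proprio_* keys: end-effector portion only (assumed slice)
        proprio_observation=flat_obs[:3],
        proprio_desired_goal=self._state_goal[:3],
        proprio_achieved_goal=flat_obs[:3],
    )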

Thanks

gym.error.UnregisteredEnv: No registered env with id: SawyerPushNIPS-v0

Hi,

Maybe my question is simple! I am trying to test the RIG method. First I created a conda environment, and then I cloned the multiworld repository into the conda packages. When I run the Python program, it says that SawyerPushNIPS-v0 is not registered. I checked the code, and I think there is no connection between gym and multiworld.
How can I solve this issue?
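
For what it's worth, here is how I expected registration to work (a sketch; I'm assuming the register_all_envs() entry point shown in the multiworld README):

    import gym
    import multiworld

    multiworld.register_all_envs()  # hooks the multiworld envs into gym's registry
    env = gym.make('SawyerPushNIPS-v0')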

Thank you very much

SawyerXYZ environment

Hi there,
I am trying to use the SawyerXYZ environment with the SawyerPushAndReach task. I found that when I load a mesh as the puck, the robot can push the object without "real contact" with the puck (and when I press 'c' on the keyboard, the contact points are shown in the air, not on the object).
Also, the gripper sometimes performs abnormal actions.
Could you help solve this problem? Thanks

Camera

The cameras in cameras.py seem to only change the viewer viewpoint. What if I want to change the camera viewpoint of the images produced at line 177 of image_env.py?

        image_obs = self._wrapped_env.get_image(
            width=self.imsize,
            height=self.imsize,
        )

wrapped_env.get_image calls self.render(..., camera=None). It doesn't seem to use the camera loaded into the viewer from cameras.py.
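
For context, what I was hoping for is something like the following (a sketch; I'm assuming ImageEnv's init_camera keyword and the camera helpers in cameras.py, as used in the RIG examples):

    from multiworld.core.image_env import ImageEnv
    from multiworld.envs.mujoco.cameras import sawyer_init_camera_zoomed_in

    env = ImageEnv(
        wrapped_env,
        imsize=48,
        init_camera=sawyer_init_camera_zoomed_in,  # applied to the render camera
    )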

Sawyer Pick and Place: Fixing Goal Sampling

Hello,

I've been working with this repo for the past week and have noticed (as mentioned in the documentation) that for sampled goals where the puck is in the hand in sawyer_pick_and_place.py, an uncorrected hand-object configuration is randomly sampled and evaluated, and the final corrected hand-object config is stored in self.presampled_goals.

As you all mentioned in set_to_goal, the hand-object config stored in self.presampled_goals can often fail (the object often does not end up in the hand at the end of the grasp). I made a simple heuristic for checking whether an attempted grasp was successful (seeing if the object is at least slightly above the table), but I want to store the uncorrected hand-object configuration for successful grasps, rather than the corrected one.
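
Concretely, my check looks roughly like this (a sketch; the table height and margin are placeholder values, not taken from the repo):

    TABLE_Z = 0.02   # assumed table-top height in meters
    MARGIN = 0.01    # assumed clearance threshold

    def grasp_succeeded(obj_pos):
        # The grasp counts as successful if the object ended up
        # at least slightly above the table surface.
        return obj_pos[2] > TABLE_Z + MARGIN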

My question is: will storing the "wrong", uncorrected hand-object configuration returned from generate_uncorrected_env_goals() in self.presampled_goals interfere with any other parts of the project? While the compute_rewards method in sawyer_pick_and_place.py uses the sampled goals directly, the SkewFit algorithm should just be using latent-distance rewards. Does the VAE use the goal positions in any way? I figure the compute_rewards method in sawyer_pick_and_place.py is only being used for logging anyway.

I'm just trying to run SkewFit, by the way; I'm not interested in any of the other RL algorithms that might use the goal positions. Thank you ahead of time!

Unable to control using the keyboard

Hi! I was trying to use the keyboard control script to control the agent, but pressing the movement keys (mentioned in the file) seems to have no effect. What am I doing wrong?

Thanks!

sawyer_push_and_reach_mocap_goal_hidden.xml does not exist

I ran the code from rlkit, but it seems that something is missing from the environment.

    Traceback (most recent call last):
      File "rig/pusher/rig.py", line 86, in <module>
        use_gpu=True,  # Turn on if you have a GPU
      File "/home/ubuntu/rlkit/rlkit/launchers/launcher_util.py", line 166, in run_experiment_here
        return experiment_function(variant)
      File "/home/ubuntu/rlkit/rlkit/launchers/rig_experiments.py", line 40, in grill_her_td3_full_experiment
        train_vae_and_update_variant(variant)
      File "/home/ubuntu/rlkit/rlkit/launchers/rig_experiments.py", line 86, in train_vae_and_update_variant
        return_data=True,
      File "/home/ubuntu/rlkit/rlkit/launchers/rig_experiments.py", line 248, in train_vae
        variant['generate_vae_dataset_kwargs']
      File "/home/ubuntu/rlkit/rlkit/launchers/rig_experiments.py", line 166, in generate_vae_dataset
        env = gym.make(env_id)
      File "/home/ubuntu/anaconda3/envs/rlkit/lib/python3.5/site-packages/gym/envs/registration.py", line 167, in make
        return registry.make(id)
      File "/home/ubuntu/anaconda3/envs/rlkit/lib/python3.5/site-packages/gym/envs/registration.py", line 119, in make
        env = spec.make()
      File "/home/ubuntu/anaconda3/envs/rlkit/lib/python3.5/site-packages/gym/envs/registration.py", line 86, in make
        env = cls(**self._kwargs)
      File "/home/ubuntu/anaconda3/envs/rlkit/lib/python3.5/site-packages/multiworld-0.0.0-py3.5.egg/multiworld/envs/mujoco/sawyer_xyz/sawyer_push_nips.py", line 478, in __init__
        **kwargs
      File "/home/ubuntu/anaconda3/envs/rlkit/lib/python3.5/site-packages/multiworld-0.0.0-py3.5.egg/multiworld/envs/mujoco/sawyer_xyz/sawyer_push_nips.py", line 59, in __init__
        MujocoEnv.__init__(self, self.model_name, frame_skip=frame_skip)
      File "/home/ubuntu/anaconda3/envs/rlkit/lib/python3.5/site-packages/multiworld-0.0.0-py3.5.egg/multiworld/envs/mujoco/mujoco_env.py", line 28, in __init__
        raise IOError("File %s does not exist" % fullpath)
    OSError: File /home/ubuntu/anaconda3/envs/rlkit/lib/python3.5/site-packages/multiworld-0.0.0-py3.5.egg/multiworld/envs/assets/sawyer_xyz/sawyer_push_and_reach_mocap_goal_hidden.xml does not exist
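
A quick way to check whether the installed egg actually ships this asset (a sketch using the path from the traceback):

    import os
    import multiworld.envs

    # Path relative to the installed multiworld.envs package,
    # taken from the traceback above.
    asset = os.path.join(
        os.path.dirname(multiworld.envs.__file__),
        'assets', 'sawyer_xyz',
        'sawyer_push_and_reach_mocap_goal_hidden.xml',
    )
    print(asset, os.path.exists(asset))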

Reset flow in real world Sawyer

Hi there,

I am using sawyer_pushing under the envs/realworld/sawyer folder, wrapped by ImageEnv. When I reset, ImageEnv performs the following:

    def reset(self):
        obs = self.wrapped_env.reset()
        if self.num_goals_presampled > 0:
            goal = self.sample_goal()
            self._img_goal = goal['image_desired_goal']
            self.wrapped_env.set_goal(goal)
            for key in goal:
                obs[key] = goal[key]
        elif self.non_presampled_goal_img_is_garbage:
            # This is used mainly for debugging or pre-sampling goals.
            self._img_goal = self._get_flat_img()
        else:
            env_state = self.wrapped_env.get_env_state()
            self.wrapped_env.set_to_goal(self.wrapped_env.get_goal())
            self._img_goal = self._get_flat_img()
            self.wrapped_env.set_env_state(env_state)
        return self._update_obs(obs)

In the if-else, this falls into the third case (L144-L147). In this situation, I understand that you do the following:

Step 1: back up the current state (L144),
Step 2: sample and move to the goal state (L145),
Step 3: get the image observation of the goal,
Step 4: return to the original location (L147).

At steps 2 and 3, in sawyer_control, I saw that it requires the user to move the object to a goal location to capture the image (link below). But at step 4, I didn't see anything that requires the user to return the object to its reset location. In the simulator, everything is fine.

https://github.com/mdalal2020/sawyer_control/blob/3df2c9e246240388bb2bb5f51ecbfd558a1ef74c/src/sawyer_control/envs/sawyer_pushing.py#L33-L39
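
A symmetric step-4 prompt would look something like this (purely hypothetical; this code is not in either repo):

    # Hypothetical counterpart to the goal-placement prompt: ask the
    # user to undo the goal placement before the episode starts.
    input("Move the object back to its reset position, then press Enter")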

Is anything missing here?

I am not sure whether I should post this issue here or in sawyer_control; if it is not suitable here, I will move it there.

Thanks in advance!

Where can I find `railrl`

I was wondering where I can install the railrl package referenced in the multiworld/multiworld/envs/pygame/walls.py file:

    from railrl.torch import pytorch_util as ptu

SawyerDoor environments mocap mismatch

The mocap for the hand doesn't line up with the actual position of the hand. Thus, if we do something like

    env = gym.make(some_sawyer_env_id)
    obs_dict = env.reset()
    print(obs_dict['state_achieved_goal'])
    obs_dict, rew, done, info = env.step(0 * env.action_space.sample())
    print(obs_dict['state_achieved_goal'])

the second obs_dict should be the same as the first, since we didn't take any action. However, since actions set the mocap position relative to its initial position, and since that initial position does not line up with what is displayed in the first obs_dict, the resulting position after "doing nothing" is significantly different from where it started.
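
A workaround I would expect to help (a sketch only; the 'mocap' and 'hand' names are my assumptions about the XML, and set_mocap_pos is the mujoco-py helper):

    # Hypothetical fix: sync the mocap body to the actual end-effector
    # pose right after reset, so relative actions start from the
    # position reported in the first obs_dict.
    hand_pos = env.sim.data.get_body_xpos('hand').copy()  # assumed body name
    env.sim.data.set_mocap_pos('mocap', hand_pos)         # assumed mocap name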

How to get the unflattened image observation?

Is there an easy way to get the unflattened image observation (from MuJoCo, for example)? Also, I can't seem to get the same image as the one that shows up in render when I use the internal get_image function:

    image_obs = self._wrapped_env.get_image(
        width=self.imsize,
        height=self.imsize,
    )

Is there a way to get the exact same image that MuJoCo renders, except as the image observation itself?
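
For the first part, unflattening should just be a reshape (a sketch; this assumes the env was built without the grayscale, normalize, and transpose options, so the flat vector is a row-major imsize x imsize x 3 array; check _get_flat_img in image_env.py for the exact transform):

    import numpy as np

    def unflatten(flat_obs, imsize):
        # Invert ImageEnv's flatten() under the assumptions above.
        return np.asarray(flat_obs).reshape(imsize, imsize, 3)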

Running img_env

I tried to run "SawyerPushNIPS-v0", but I got the following error:

    ERROR: GLEW initalization error: Missing GL version

I installed mujoco-py on my computer and it works well. Any way to fix it?
