aicrowd / real_robots
Gym environments for Robots that learn to interact with the environment autonomously

Home Page: https://www.aicrowd.com/challenges/neurips-2019-robot-open-ended-autonomous-learning

License: MIT License

Makefile 2.44% Python 97.56%

real_robots's People

Contributors

ayushshivani, davidemontella, emilio-cartoni, francesco-mannella, skbly7, spmohanty

real_robots's Issues

Write a base class for the Policy function

We should write a base class for the Policy function that all participants have to inherit from. That way, we also have a consistent reference for participants on what their policy functions should provide.
The class should be well documented (so that the docs render decently on Sphinx).
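A minimal sketch of what such a base class could look like. The class and method names here are illustrative assumptions, not the package's actual API; the real interface is whatever the evaluator ends up calling.

```python
from abc import ABC, abstractmethod


class BasePolicy(ABC):
    """Hypothetical base class for participant policies.

    Subclasses must implement step(); the hooks are optional.
    """

    @abstractmethod
    def step(self, observation, reward, done):
        """Return the next action given the latest observation."""

    def start_intrinsic_phase(self):
        """Optional hook, called once before the intrinsic phase."""

    def start_extrinsic_trial(self, trial_number):
        """Optional hook, called before each extrinsic trial."""
```

Making step() abstract means participants get an immediate TypeError if they forget to implement it, instead of a confusing failure deep inside the evaluator.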

Fix flake8 errors

We need to fix all flake8 errors ASAP.

git clone git@github.com:AIcrowd/real_robots.git
cd real_robots
pip install -e .
pip install -r requirements_dev.txt

flake8 .

should report many errors along the lines of:

./setup.py:15:1: E302 expected 2 blank lines, found 1
./setup.py:41:80: E501 line too long (82 > 79 characters)
./real_robots/__init__.py:12:1: F401 '.evaluate.evaluate' imported but unused
./real_robots/__init__.py:15:5: E128 continuation line under-indented for visual indent
./real_robots/__init__.py:16:1: E124 closing bracket does not match visual indentation
./real_robots/__init__.py:19:5: E128 continuation line under-indented for visual indent
./real_robots/__init__.py:20:1: E124 closing bracket does not match visual indentation
./real_robots/__init__.py:30:1: E302 expected 2 blank lines, found 1
...
...
and so on.

All of them have to be dealt with individually to ensure we have a clean and maintainable code base.

Installation doesn't work:

  • real-robots version:
  • Python version:
  • Operating System:

Description

After installation, importing real_robots fails.

What I Did

pip3 install --user -U real_robots
ipython
In [1]: import real_robots

ImportError                               Traceback (most recent call last)
<ipython-input-1> in <module>
----> 1 import real_robots

~/.local/lib/python3.5/site-packages/real_robots/__init__.py in <module>
     18 )
     19
---> 20 from real_robots.envs import env as real_robot_env
     21 from real_robots.evaluate import evaluate
     22

ImportError: No module named 'real_robots.envs'

This does not happen if I launch ipython from inside the real_robots directory (that I git cloned), since the real_robots.envs path then resolves successfully.
However, since the package is installed via PyPI, it shouldn't be necessary to also have a cloned copy and to be inside it when importing the package.
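A common cause of "No module named 'real_robots.envs'" after a pip install is a setup.py whose packages= list omits the sub-packages, so they never make it into the wheel. A hedged sketch (assuming that is the cause here) of discovering them automatically with setuptools:

```python
from setuptools import find_packages

# Hypothetical fix sketch: discover real_robots and all of its
# sub-packages (e.g. real_robots.envs) instead of listing them by hand,
# then pass the result to setup(packages=...).
discovered = find_packages(include=["real_robots", "real_robots.*"])
print(discovered)  # run from the repo root, this should list the sub-packages
```

If data files such as goals.npy.npz also need to ship with the package, they would additionally need package_data / MANIFEST.in entries.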

[Benchmarking] Good estimation of Timelimits & Resources for participants

Here are our current estimates for the time limits and resources that will be made available to participants, at least in the extrinsic phase (Round-1).

    - intrinsic phase : 1e7 timesteps
    - extrinsic phase : 2e3 timesteps
    - extrinsic trials : 350
    - Resources (Round-1) :
        - time : 6 hours
        - cpu : 8
        - memory : 30 GB
        - gpu : 1 K80

We need to do some benchmarking to ensure that these numbers correlate well with each other.
@emilio-cartoni : Did you do some benchmarking internally?

I think we can include a CLI script in the library which can benchmark any controller, so that participants get a good sense of the performance of their controllers.
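The core of such a benchmark could be a simple timing loop. This is a standalone sketch, not the proposed CLI: a real version would drive the Gym environment, but the measurement pattern is the same. The controller interface (a step() method) is an assumption.

```python
import time


def benchmark(controller, n_steps=1000):
    """Time n_steps calls to controller.step() and report throughput.

    Sketch only: in the real script the loop body would also call
    env.step(action) so environment cost is included in the measurement.
    """
    start = time.perf_counter()
    for i in range(n_steps):
        controller.step(i)  # stand-in for: action = controller.step(obs, ...)
    elapsed = time.perf_counter() - start
    return {
        "steps": n_steps,
        "seconds": elapsed,
        "steps_per_second": n_steps / elapsed,
    }
```

With steps-per-second in hand, participants can extrapolate whether 1e7 intrinsic timesteps fit in the allotted wall-clock budget.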

range value of 1 node

Hi,

Could you tell me the range of values of one element in observation["joint_positions"]?

Thanks
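The authoritative ranges live in the Gym space objects: assuming the observation space is a Dict of Boxes, env.observation_space['joint_positions'].low and .high give the per-element bounds. A small sketch of using such bounds, with placeholder values rather than the environment's real ones:

```python
import numpy as np


def normalise(values, low, high):
    """Map values from [low, high] to [-1, 1] element-wise.

    low/high should come from the actual Gym space, e.g.
    env.observation_space['joint_positions'].low / .high
    (assuming a Dict observation space); the bounds used in the
    test below are placeholders, not the environment's real limits.
    """
    values = np.asarray(values, dtype=np.float64)
    return 2.0 * (values - low) / (high - low) - 1.0
```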

Debug mode

The evaluator should support a debug mode, so that participants can make submissions with a smaller number of intrinsic/extrinsic timesteps for easier integration testing with the debugger.
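Until the evaluator grows a built-in debug flag, a thin wrapper can shrink the budgets locally. This is a sketch: evaluate_fn stands in for real_robots.evaluate, and the parameter names mirror the ones used elsewhere in this issue tracker; the specific debug budgets are arbitrary.

```python
def debug_evaluate(evaluate_fn, policy, **overrides):
    """Run an evaluation with tiny budgets for quick integration tests.

    evaluate_fn is expected to have the real_robots.evaluate signature;
    any keyword in overrides replaces the debug default.
    """
    params = dict(
        intrinsic_timesteps=1000,   # instead of 1e7
        extrinsic_timesteps=100,    # instead of 2e3
        extrinsic_trials=2,         # instead of 350
        visualize=False,
    )
    params.update(overrides)
    return evaluate_fn(policy, **params)
```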

Environment assumes goals.npy.npz but no such file is provided.

In the REALRobotEnv class, the __init__ assumes that goals_dataset_path will be goals_dataset.npy.npz.
However, if the user does:

import gym, real_robots
env = gym.make('REALRobot-v0')
env.set_goal()

This will fail, as no such goals_dataset.npy.npz is provided.
Indeed, in the README we say that users can download such a file from https://aicrowd-production.s3.eu-central-1.amazonaws.com/misc/REAL-Robots/goals.npy.npz

On the other hand, we do have the file on the repository (since we also use it to ensure everything works).

Is this the behavior we want?
Or do we want to:

  1. change the environment so that it loads the file from the package (not sure how);
  2. change the environment so that it does not load anything. However, it would then not be able to run the extrinsic phase, so this is more or less pointless;
  3. keep it the way it is, and maybe add something to the FAQ.

Answering myself: I guess option 3 is good :)
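For the record, option 1 is not hard either. A hedged sketch: resolve the goals file relative to the package's own location instead of the current working directory (inside the package, __file__ would be e.g. real_robots/envs/env.py; the function name is hypothetical).

```python
import os


def default_goals_path(filename="goals.npy.npz"):
    """Return an absolute path to a goals file shipped next to this module.

    Sketch only: assumes the goals file is packaged alongside the module;
    that also requires a package_data entry in setup.py.
    """
    package_dir = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(package_dir, filename)
```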

render('human') and visualization=True render shadows and background both in the GUI and in the robot camera

If you use env.render('human') or pass visualization=True to the evaluate function, a GUI opens showing the environment. The GUI renders the environment with shadows and a bluish background.
As a side effect, using the GUI also makes the environment render the shadows and the background on the robot camera.
This means that robots trained on images with the GUI on might perform differently when the GUI is off, since the images will differ.
Shadows can be disabled from the GUI with the s shortcut, or programmatically via pybullet.configureDebugVisualizer(pybullet.COV_ENABLE_SHADOWS, 0); however, the background will still be bluish on the robot camera (without the GUI it is white).

Bug: set_goals_dataset_path(self, path) makes the phase extrinsic

If you run the local evaluation with extrinsic_timesteps less than intrinsic_timesteps, you will see that the intrinsic phase stops exactly after the extrinsic_timesteps.

i.e. when using:

    result = real_robots.evaluate(
        RandomPolicy,
        intrinsic_timesteps=5000,
        extrinsic_timesteps=100,
        extrinsic_trials=12,
        visualize=False,
        goals_dataset_path="./goals.npy.npz",
    )

the intrinsic phase will stop after 100 timesteps.

This is actually due to set_goals_dataset_path.
The line

    self.goal_idx = 0

should be removed there, otherwise the environment thinks the extrinsic phase is running
(goal_idx must be -1 while the intrinsic phase is running).

Can't do a pull request at the moment, I will leave the issue here meanwhile.
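A sketch of the corrected setter, isolated here as a mixin so it runs standalone; the class name is made up, and the real method may do more than record the path:

```python
class GoalPathMixin:
    """Hypothetical extract of the environment's goal-path handling."""

    goal_idx = -1  # -1 means the intrinsic phase is running

    def set_goals_dataset_path(self, path):
        self.goals_dataset_path = path
        # NOTE: intentionally NOT touching self.goal_idx here.
        # Setting it to 0 makes the environment believe the extrinsic
        # phase has started, which truncates the intrinsic phase.
```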

FAQ section

We have to start a dedicated FAQ section which lists common gotchas.
(Like the one we have about the visualization affecting the observations).

@emilio-cartoni : Can you start a FAQ.md and link it to the README ?
We can populate it with the example you mentioned above.

Goal mismatch in extrinsic phase

I noticed that the first obs['goal'] the controller receives in each extrinsic trial is actually the one from the previous trial. I think this is because the goal is set after the observation has been created by resetting the environment.

I think a simple fix would be something like the following:

    def run_extrinsic_trial(self, trial_number):
        observation = self.env.reset()
        reward = 0
        done = False
        self.env.set_goal()

        observation['goal'] = self.env.goal.retina


Make intrinsic phase optional

In Round-1 of the competition, we will only run the extrinsic phase. So we should let participants skip the intrinsic phase in the evaluation by passing 0 or False for intrinsic_timesteps.
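The skip logic could be as small as this sketch (the helper name is made up); treating 0, False, and None uniformly avoids surprising anyone:

```python
def intrinsic_budget(intrinsic_timesteps):
    """Return the number of intrinsic timesteps to run; falsy means skip.

    Sketch of the proposed behaviour: 0, False and None all disable the
    intrinsic phase entirely.
    """
    if not intrinsic_timesteps:
        return 0
    return int(intrinsic_timesteps)
```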

Tests !!

We don't have a single test at the moment!
While the test framework is set up and configured, we would need a bit of a hand in writing the tests!

Phase step_num unable to customize

  • real-robots version: 0.1.21
  • Python version: 3.8.8
  • Operating System: RedHat

Description

I used the starter kit AIcrowd offered and tried to customize the intrinsic phase to 1e6 steps instead of 15e6 (the default value).
The progress bar works well, but the environment does not turn "done" to True after 1e6 steps, and it continues to run until 15e6 steps have finished.

What I Did

A fix suggestion

Lines 32–34 of real_robots/envs/env.py assign default values to the globals "intrinsic_timesteps", "extrinsic_timesteps", and "extrinsic_trials", which cannot be changed via the parameters passed to "real_robots.evaluate".
These three lines should probably be moved inside "__init__" so that they can be modified when building the environment.
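A sketch of the suggested fix: take the phase budgets as constructor parameters (defaults as in this issue) instead of module-level constants, so the values passed to real_robots.evaluate can actually reach the environment. The class name is a stand-in, not the real REALRobotEnv.

```python
class EnvConfigSketch:
    """Illustrative stand-in for the environment's budget handling."""

    def __init__(self,
                 intrinsic_timesteps=15e6,
                 extrinsic_timesteps=2e3,
                 extrinsic_trials=350):
        # Instance attributes instead of module globals: each environment
        # instance can now be built with its own budgets.
        self.intrinsic_timesteps = int(intrinsic_timesteps)
        self.extrinsic_timesteps = int(extrinsic_timesteps)
        self.extrinsic_trials = int(extrinsic_trials)
```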

Depth image

Having depth image data was requested by participants.
Given that nowadays depth cameras are quite common, it makes sense to add this sensor to the robot as well.
We should check how much it would affect simulator performance to calculate and return depth data when rendering the camera.

Detailed scores for goals

In the current evaluation function, there is no way for the user to know which goals succeeded and which goals failed.
The individual score for a single goal is neither displayed nor returned.
Can the evaluation function also return the scores variable (maybe renamed detailed_scores), or would that break something?
(I am thinking about people testing locally, not suggesting to display this detailed score online).
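A sketch of the shape this could take (names like detailed_scores are the ones proposed above; the aggregation as a plain mean is an assumption about how the total is computed):

```python
def build_result(scores):
    """Bundle per-goal scores with the aggregate score.

    scores: mapping of goal id -> score for that goal.
    """
    total = sum(scores.values()) / len(scores) if scores else 0.0
    return {
        "score_total": total,
        "detailed_scores": dict(scores),  # per-goal breakdown for local use
    }
```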

Video generation

  • Round-1 (extrinsic phase) : we will generate the videos from 5 randomly selected trials from the extrinsic phase.
  • Round-2 (intrinsic phase) : yet to be decided.
    • could be a timelapse
    • could be snippets of the activity from various phases of training
    • could be a realtime stream ? (this would be really cool !!)

Add scripts for generating a set of random goals file

We should include a script that can generate an arbitrary set of goals as a goals file (which participants could potentially use for local evaluation, etc.).

@emilio-cartoni : If you can write a script, and maybe just dump it in the wiki, then I could make it available as a binary embedded in the library. Feel free to include as many configurable variables as you wish.
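A starting-point sketch for such a script. The actual goal format is defined by the environment and is not documented here, so the array layout below (target positions for a few objects) is purely an assumption; only the .npz packaging mirrors the goals.npy.npz file mentioned above.

```python
import numpy as np


def generate_goals_file(path, n_goals=50, n_objects=3, seed=0):
    """Write a random goals file to path (npz format).

    Hypothetical layout: one (n_objects, 3) array of target positions per
    goal; the real environment's goal structure may differ substantially.
    """
    rng = np.random.default_rng(seed)
    goals = rng.uniform(low=-0.5, high=0.5, size=(n_goals, n_objects, 3))
    np.savez(path, goals=goals)
    return goals.shape
```

Configurable knobs (number of goals, objects, position ranges, seed) map naturally onto CLI flags if this is later exposed as a console script.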

set joint position of robot

Hi,

I need to set the robot's joint positions to generate data, but when I use env.step(action) it is too slow to get there; I need about 40 iterations to reach the target position.

Thanks

AttributeError: 'BodyPart' object has no attribute 'get_position'

What I Did

I tried running real-robots-demo at the terminal, as well as the Python code in the README, but I seem to get the same error. See below. I tested the import directly, and it seems to execute fine. Any thoughts? Thanks!

Output

/Users/christopherkeown$ real-robots-demo
pybullet build time: Jul 15 2018 17:08:59

(REAL ROBOTS ASCII-art banner)

  1. Testing setup without visualisation :

/Users/christopherkeown/anaconda3/lib/python3.6/site-packages/gym/envs/registration.py:14: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately.
  result = entry_point.load(False)
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
/Users/christopherkeown/anaconda3/lib/python3.6/site-packages/pybullet_envs/data/kuka_gripper_description/urdf/kuka_gripper.urdf
Traceback (most recent call last):
  File "/Users/christopherkeown/anaconda3/bin/real-robots-demo", line 11, in <module>
    sys.exit(demo())
  File "/Users/christopherkeown/anaconda3/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/Users/christopherkeown/anaconda3/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/Users/christopherkeown/anaconda3/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/christopherkeown/anaconda3/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/Users/christopherkeown/anaconda3/lib/python3.6/site-packages/real_robots/cli.py", line 59, in demo
    run_episode(env, pi)
  File "/Users/christopherkeown/anaconda3/lib/python3.6/site-packages/real_robots/cli.py", line 28, in run_episode
    observation = env.reset()
  File "/Users/christopherkeown/anaconda3/lib/python3.6/site-packages/real_robots/envs/env.py", line 164, in reset
    return self.get_observation()
  File "/Users/christopherkeown/anaconda3/lib/python3.6/site-packages/real_robots/envs/env.py", line 210, in get_observation
    retina = self.get_retina()
  File "/Users/christopherkeown/anaconda3/lib/python3.6/site-packages/real_robots/envs/env.py", line 193, in get_retina
    self.robot.object_bodies["table"]
AttributeError: 'BodyPart' object has no attribute 'get_position'

Documentation on the problem statement and env

We need to migrate the documentation on the problem statement, observations, actions, etc. from the old repository.
It was quite well written, but we need to be mindful not to make the README.md overly complicated. The trick is to keep the README as simple as we can, with only the bare essentials, and to link out to separate docs for more details.

Some of the docs were not well formatted; that's something we have to take care of here.
