0xangelo / raylab

Reinforcement learning algorithms in RLlib

License: MIT License

Languages: Python 99.42%, Makefile 0.58%
Topics: reinforcement-learning, rllib, deep-learning, model-based-rl, machine-learning, streamlit, bokeh, pytorch, generative-models, normalizing-flows

raylab's Introduction

raylab


Reinforcement learning algorithms in RLlib and PyTorch.

Installation

pip install raylab

Quickstart

Raylab provides agents and environments to be used with a normal RLlib/Tune setup. You can pass an agent's name (from the Algorithms section) to raylab info list to list its top-level configurations:

raylab info list SoftAC
learning_starts: 0
    Hold this number of timesteps before first training operation.
policy: {}
    Sub-configurations for the policy class.
wandb: {}
    Configs for integration with Weights & Biases.

    Accepts arbitrary keyword arguments to pass to `wandb.init`.
    The defaults for `wandb.init` are:
    * name: `_name` property of the trainer.
    * config: full `config` attribute of the trainer
    * config_exclude_keys: `wandb` and `callbacks` configs
    * reinit: True

    Don't forget to:
      * install `wandb` via pip
      * login to W&B with the appropriate API key for your
        team/project.
      * set the `wandb/project` name in the config dict

    Check out the Quickstart for more information:
    `https://docs.wandb.com/quickstart`
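
For instance, the W&B options above can be set under the `wandb` key of an agent's config dict (the project name below is just a placeholder):

config = {
    "env": "CartPoleSwingUp-v0",
    # Keyword arguments forwarded to `wandb.init`
    "wandb": {"project": "raylab-demo"},
}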

You can add the --rllib flag to get descriptions of all the options common to RLlib agents (or Trainers), for example:
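
raylab info list SoftAC --rllib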

Launching experiments can be done via the command line using raylab experiment, passing a file path with an agent's configuration through the --config flag. The following command uses the cartpole example configuration file to launch an experiment with the vanilla Policy Gradient agent from the RLlib library.

raylab experiment PG --name PG -s training_iteration 10 --config examples/PG/cartpole_defaults.py

You can also launch an experiment from a Python script normally using Ray and Tune. The following shows how you may use Raylab to perform an experiment comparing different types of exploration for the NAF agent.

import ray
from ray import tune
import raylab

def main():
    # Register raylab's custom agents and environments with Ray/Tune
    raylab.register_all_agents()
    raylab.register_all_environments()
    ray.init()
    tune.run(
        "NAF",
        local_dir="data/NAF",
        stop={"timesteps_total": 100000},
        config={
            "env": "CartPoleSwingUp-v0",
            # Grid search over the two exploration strategies being compared
            "exploration_config": {
                "type": tune.grid_search([
                    "raylab.utils.exploration.GaussianNoise",
                    "raylab.utils.exploration.ParameterNoise"
                ])
            }
        },
        num_samples=10,
    )

if __name__ == "__main__":
    main()

You can then visualize the results using raylab dashboard, passing the local_dir used in the experiment. The dashboard lets you quickly filter and group results.

raylab dashboard data/NAF/

[Screenshot: experiment dashboard]

You can find the best checkpoint according to a metric (episode_reward_mean by default) using raylab find-best.

raylab find-best data/NAF/

Finally, you can pass a checkpoint to raylab rollout to see the returns collected by the agent and render it if the environment supports a visual render() method. For example, you can use the output of the find-best command to see the best agent in action.

raylab rollout $(raylab find-best data/NAF/) --agent NAF

Algorithms

Paper | Agent Name
Actor Critic using Kronecker-factored Trust Region | ACKTR
Trust Region Policy Optimization | TRPO
Normalized Advantage Function | NAF
Stochastic Value Gradients | SVG(inf)/SVG(1)/SoftSVG
Soft Actor-Critic | SoftAC
Streamlined Off-Policy (DDPG) | SOP
Model-Based Policy Optimization | MBPO
Model-based Action-Gradient-Estimator | MAGE
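
Any of these agent names can be passed to the commands shown above, for example:

raylab info list MBPO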

Command-line interface

For a high-level description of the available utilities, run raylab --help:

Usage: raylab [OPTIONS] COMMAND [ARGS]...

  RayLab: Reinforcement learning algorithms in RLlib.

Options:
  --help  Show this message and exit.

Commands:
  dashboard    Launch the experiment dashboard to monitor training progress.
  episodes     Launch the episode dashboard to monitor state and action...
  experiment   Launch a Tune experiment from a config file.
  find-best    Find the best experiment checkpoint as measured by a metric.
  info         View information about an agent's config parameters.
  rollout      Wrap `rllib rollout` with customized options.
  test-module  Launch dashboard to test generative models from a checkpoint.

Packages

The project is structured as follows:

raylab
|-- agents            # Trainer and Policy classes
|-- cli               # Command line utilities
|-- envs              # Gym environment registry and utilities
|-- logger            # Tune loggers
|-- policy            # Extensions and customizations of RLlib's policy API
|   |-- losses        # RL loss functions
|   |-- modules       # PyTorch neural network modules for TorchPolicy
|-- pytorch           # PyTorch extensions
|-- utils             # Miscellaneous utilities

raylab's People

Contributors

0xangelo, dependabot-preview[bot], dependabot[bot], hackmd-deploy, pyup-bot, thiagopbueno


raylab's Issues

Scale tril by desired average action stddev

Currently we use a very ad-hoc procedure for scaling the quadratic component of NAF when used for exploration:
https://github.com/angelolovatto/raylab/blob/9820275b17ee085e1955a6d845c0bdf61333f8da/raylab/algorithms/naf/naf_policy.py#L150-L155

A possibly better alternative would be to scale it based on the desired average action stddev. Something like:

scale_tril * (1.0 / average_stddev(scale_tril)) * desired_avg_stddev

That way we could contain the magnitude of the MultivariateNormal precision matrix while still allowing some freedom for diversifying the action's covariances.
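
A minimal sketch of that scaling (assuming scale_tril is the Cholesky factor of the action covariance, and taking the average stddev to be the mean of the per-action marginal stddevs, i.e. the row norms of the factor):

import torch

def rescale_by_avg_stddev(scale_tril, desired_avg_stddev):
    # Sketch only, not raylab's implementation: with covariance = L @ L.T,
    # action i's stddev is the norm of row i of L, so dividing by the mean
    # of those norms and multiplying by the target fixes the average stddev.
    marginal_stddev = scale_tril.norm(dim=-1)
    avg_stddev = marginal_stddev.mean(dim=-1, keepdim=True).unsqueeze(-1)
    return scale_tril * (desired_avg_stddev / avg_stddev)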

Depfu Error: No dependency files found

Hello,

We've tried to activate or update your repository on Depfu and couldn't find any supported dependency files. If we were to guess, we would say that this is not actually a project Depfu supports and has probably been activated by error.

Monorepos

Please note that Depfu currently only searches for your dependency files in the root folder. We do support monorepos and non-root files, but don't auto-detect them. If that's the case with this repo, please send us a quick email with the folder you want Depfu to work on and we'll set it up right away!

How to deactivate the project

  • Go to the Settings page of either your own account or the organization you've used
  • Go to "Installed Integrations"
  • Click the "Configure" button on the Depfu integration
  • Remove this repo (angelolovatto/raylab) from the list of accessible repos.

Please note that using the "All Repositories" setting doesn't make a lot of sense with Depfu.

If you think that this is a mistake

Please let us know by sending an email to [email protected].


This is an automated issue by Depfu. You're getting it because someone configured Depfu to automatically update dependencies on this project.

Treat batch and sample shape dimensions in Reservoir's `transition_fn`

https://github.com/angelolovatto/raylab/blob/77b6a4ccea1e507cac2fc67bfb657192c44e26af/raylab/envs/reservoir.py#L83-L89

Our current implementation of Reservoir only samples one Gamma random variable per reservoir in the config. However, we should sample batches of rain variables when inputting batches of states into the transition_fn. It does not crash currently because of broadcasting rules, since we add the rain samples to the batch of states, but that shares random variables across batches of transitions, which is not ideal.

Furthermore, we're calculating the log-likelihood as if state and batch dimensions were uncorrelated, yielding a B x N matrix logp, where B is the batch size and N is the state dimension. We should use PyTorch's Independent distribution to squeeze logp across the state dimension, which isn't hard to do. An example can be found in raylab.distributions.DiagMultivariateNormal. I'll get right into it.
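
A minimal sketch of that fix (shapes here are hypothetical: B = 4 transitions, N = 8 reservoirs), using torch.distributions.Independent to treat the state dimension as an event dimension:

import torch
from torch.distributions import Gamma, Independent

# Batched Gamma over rain with batch_shape (4, 8); wrapping it in Independent
# reinterprets the last batch dimension as the event (state) dimension.
rain_dist = Independent(Gamma(torch.ones(4, 8), torch.ones(4, 8)), 1)
rain = rain_dist.sample()        # one rain sample per (transition, reservoir): shape (4, 8)
logp = rain_dist.log_prob(rain)  # summed over the state dimension: shape (4,)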

A last concern is about the nature of the log-likelihood itself. Even though rain is the only random variable involved in the transition, it is not clear that the transition's logp is the logp of the rain variable only. Although, since all subsequent modifications to the rain variable are additive, I assume that the change of variables theorem would give us the same pdf as the original rain variable anyway. @thiagopbueno, thoughts?

Add documentation to HVAC and Reservoir environments

The title is self-explanatory. The specific classes are the ones below. It would be nice for users to understand these problems without diving too deeply into the code.

https://github.com/angelolovatto/raylab/blob/b73aa99c1dd4061bb9ded02c1de4800158c12c7a/raylab/envs/hvac.py#L33

https://github.com/angelolovatto/raylab/blob/b73aa99c1dd4061bb9ded02c1de4800158c12c7a/raylab/envs/reservoir.py#L32

@thiagopbueno, perhaps you have some description of these tasks lying around?

Running raylab experiment crashes

  • raylab version: 0.14.14
  • Python version: 3.7.9
  • Operating System: Ubuntu 18.04.5 LTS

When running raylab experiment "PG" --config examples/PG/cartpole_defaults.py I get the following error:

Traceback (most recent call last):
  File "/home/palenicek/miniconda3/envs/raylab_test/bin/raylab", line 8, in <module>
    sys.exit(raylab())
  File "/home/palenicek/miniconda3/envs/raylab_test/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/palenicek/miniconda3/envs/raylab_test/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/palenicek/miniconda3/envs/raylab_test/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/palenicek/miniconda3/envs/raylab_test/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/palenicek/miniconda3/envs/raylab_test/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/palenicek/miniconda3/envs/raylab_test/lib/python3.7/site-packages/raylab/cli/utils.py", line 164, in wrapped
    return func(*args, tune_kwargs=tune_kwargs, **kwargs)
  File "/home/palenicek/miniconda3/envs/raylab_test/lib/python3.7/site-packages/raylab/cli/utils.py", line 19, in wrapped
    return func(*args, **kwargs)
  File "/home/palenicek/miniconda3/envs/raylab_test/lib/python3.7/site-packages/click/decorators.py", line 21, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/home/palenicek/miniconda3/envs/raylab_test/lib/python3.7/site-packages/raylab/cli/utils.py", line 51, in wrapped
    process_tune_kwargs(ctx, **tune_kwargs)
  File "/home/palenicek/miniconda3/envs/raylab_test/lib/python3.7/site-packages/raylab/cli/utils.py", line 172, in process_tune_kwargs
    delete_if_necessary(ctx, osp.join(local_dir, name))
  File "/home/palenicek/miniconda3/envs/raylab_test/lib/python3.7/posixpath.py", line 94, in join
    genericpath._check_arg_types('join', a, *p)
  File "/home/palenicek/miniconda3/envs/raylab_test/lib/python3.7/genericpath.py", line 153, in _check_arg_types
    (funcname, s.__class__.__name__)) from None
TypeError: join() argument must be str or bytes, not 'NoneType'

I tried installing raylab both using pip and from source, using both master and the last stable release, version 0.10.0. I always get this error.

I am not sure what the problem is. From the documentation, I am also not 100% sure how raylab should be used. I tried running the CLI commands as well as the example files directly, all of which gave me the above error.

Any ideas on how to fix this issue?
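
Judging from the traceback, osp.join receives None for the experiment name. As a rough workaround (not verified against 0.14.14), supplying --name as in the Quickstart may avoid the crash:

raylab experiment PG --name PG --config examples/PG/cartpole_defaults.py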
