Comments (6)
gym.make() is supposed to return a gym.Env, not a VecEnv. In your case, the best is probably to fork the RL Zoo to adapt it to your needs (you can still install it as an editable package so you don't have to integrate it into your codebase). We have something similar for a tentative PR with envpool integration: #355
from rl-baselines3-zoo.
Sounds good, thanks for letting me know. I'll fork and make the changes.
In case anyone else has the same question, I'll be updating the code here:
https://github.com/mcgill-robotics/Humanoid-rl-baselines3-zoo
from rl-baselines3-zoo.
Looking at the source code, it seems like it could be done by adding an if/else in
rl-baselines3-zoo/rl_zoo3/exp_manager.py
Line 622 in aa38145
env = make_env(**env_kwargs)
? More specifically, replace lines 622-632 with:
if self._hyperparams.get("env_is_vectorized", False):
    env = make_env(num_envs=n_envs, **env_kwargs)
else:
    env = make_vec_env(
        make_env,
        n_envs=n_envs,
        seed=self.seed,
        env_kwargs=env_kwargs,
        monitor_dir=log_dir,
        wrapper_class=self.env_wrapper,
        vec_env_cls=self.vec_env_class,  # type: ignore[arg-type]
        vec_env_kwargs=self.vec_env_kwargs,
        monitor_kwargs=self.monitor_kwargs,
    )
Curious if this might break things down the line, and/or if there is an already built solution I'm missing? (I'd rather not have to integrate the entire rl_zoo3 repo in my project for cleanliness' sake)
from rl-baselines3-zoo.
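For illustration, the branch proposed above can be sketched without any RL Zoo or gym dependency. Everything here is a hypothetical stand-in: DummyEnv for a plain gym.Env, DummyVecEnv for a natively vectorized env (like envpool), create_env for the exp_manager logic, and the "env_is_vectorized" key is the custom hyperparameter suggested in the comment, not an existing RL Zoo option:

```python
from typing import Any, Callable, Dict


class DummyEnv:
    """Stand-in for a single gym.Env returned by a factory."""

    def __init__(self, **kwargs: Any) -> None:
        self.kwargs = kwargs


class DummyVecEnv:
    """Stand-in for a natively vectorized env exposing num_envs."""

    def __init__(self, num_envs: int, **kwargs: Any) -> None:
        self.num_envs = num_envs
        self.kwargs = kwargs


def create_env(
    make_env: Callable[..., Any],
    n_envs: int,
    env_kwargs: Dict[str, Any],
    hyperparams: Dict[str, Any],
) -> Any:
    # Mirror of the proposed if/else: pass num_envs straight to the factory
    # when the hyperparameters flag the env as already vectorized; otherwise
    # build n_envs single envs (a plain list stands in for make_vec_env here).
    if hyperparams.get("env_is_vectorized", False):
        return make_env(num_envs=n_envs, **env_kwargs)
    return [make_env(**env_kwargs) for _ in range(n_envs)]
```

In the real exp_manager, the else branch would of course still go through make_vec_env with the monitor, wrapper, and seed arguments shown in the snippet above; this sketch only isolates the dispatch decision.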
Maybe. Could you do a PR to update the docs?
from rl-baselines3-zoo.
Sure, however I'm new to the repo, so I'm not sure of the standards or where to do this. What exactly should I update, and with what information? Should I add something along the lines of "If your custom environment subclasses the Stable Baselines3 VecEnv class, you will have to update the source code (see issue [....])" in https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/docs/guide/custom_env.rst?
from rl-baselines3-zoo.
What exactly should I update and with what information?
https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/docs/guide/custom_env.rst
Yes, this file, with an explanation/link (link to this issue) on what to do when you have a VectorEnv that is not a gym.Env. Something like https://stable-baselines3.readthedocs.io/en/master/guide/examples.html#sb3-and-procgenenv
from rl-baselines3-zoo.
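One possible shape for that addition to custom_env.rst, as a reStructuredText sketch. The section title and wording are assumptions, not the final doc text:

```rst
Vectorized custom environments
------------------------------

``gym.make()`` is expected to return a single ``gym.Env``, not a ``VecEnv``.
If your custom environment is natively vectorized (for instance, it
subclasses Stable Baselines3's ``VecEnv``), the RL Zoo cannot create it
directly. In that case, fork the repo and adapt the environment creation in
``rl_zoo3/exp_manager.py`` (see the linked issue), or wrap the environment
as shown in the SB3 documentation for ``ProcgenEnv``:
https://stable-baselines3.readthedocs.io/en/master/guide/examples.html#sb3-and-procgenenv
```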