Comments (8)

benblack769 commented on June 2, 2024

I tried this with A2C (same code, just with a2c) and got the following error:

Traceback (most recent call last):
  File "independent_atari.py", line 7, in <module>
    experiment.train(frames=2e6)
  File "/home/ben/class_projs/autonomous-learning-library/all/experiments/single_env_experiment.py", line 46, in train
    self._run_training_episode()
  File "/home/ben/class_projs/autonomous-learning-library/all/experiments/single_env_experiment.py", line 73, in _run_training_episode
    action = self._agent.act(state)
  File "/home/ben/class_projs/autonomous-learning-library/all/bodies/_body.py", line 24, in act
    return self.process_action(self.agent.act(self.process_state(state)))
  File "/home/ben/class_projs/autonomous-learning-library/all/bodies/_body.py", line 24, in act
    return self.process_action(self.agent.act(self.process_state(state)))
  File "/home/ben/class_projs/autonomous-learning-library/all/bodies/_body.py", line 24, in act
    return self.process_action(self.agent.act(self.process_state(state)))
  [Previous line repeated 1 more time]
  File "/home/ben/class_projs/autonomous-learning-library/all/agents/a2c.py", line 61, in act
    self._train(states)
  File "/home/ben/class_projs/autonomous-learning-library/all/agents/a2c.py", line 69, in _train
    states, actions, advantages = self._buffer.advantages(next_states)
  File "/home/ben/class_projs/autonomous-learning-library/all/memory/advantage.py", line 38, in advantages
    rewards, lengths = self._compute_returns()
  File "/home/ben/class_projs/autonomous-learning-library/all/memory/advantage.py", line 52, in _compute_returns
    device=self._rewards[0].device
AttributeError: 'float' object has no attribute 'device'

cpnota commented on June 2, 2024

The PPO implementation is a ParallelAgent/ParallelPreset, so it is not compatible with SingleEnvExperiment. Try using a ParallelEnvExperiment and setting ppo.hyperparameters(n_envs=1).
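A minimal sketch of that setup, assuming the preset-builder API (device/hyperparameters/env/build) and AtariEnvironment from the library's docs; the game, device, and frame count here are placeholders:

    from all.environments import AtariEnvironment
    from all.experiments import ParallelEnvExperiment
    from all.presets.atari import ppo

    # Build the PPO preset with a single parallel environment, as suggested above.
    env = AtariEnvironment("Breakout", device="cuda")
    preset = ppo.device("cuda").hyperparameters(n_envs=1).env(env).build()

    # ParallelEnvExperiment accepts parallel presets/agents such as PPO and A2C.
    experiment = ParallelEnvExperiment(preset, env)
    experiment.train(frames=2e6)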

cpnota commented on June 2, 2024

I don't think this is a bug, but it would probably be useful for the experiment types to enforce the agent type and throw a helpful error message instead of throwing random runtime errors, so I'm classifying this as "style."
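Something along these lines is what I have in mind; this is a rough sketch only, and the base-class import paths and the exact place the check would live (e.g. the experiment constructors) are assumptions:

    from all.agents import Agent, ParallelAgent  # assumed base-class locations

    def _validate_agent(agent):
        # Illustrative guard: fail fast with a readable message instead of
        # letting an unrelated error surface deep inside the agent.
        if isinstance(agent, ParallelAgent):
            raise TypeError(
                "SingleEnvExperiment received a parallel agent; "
                "use ParallelEnvExperiment (optionally with n_envs=1) instead."
            )
        if not isinstance(agent, Agent):
            raise TypeError(f"Expected an Agent, got {type(agent).__name__}")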

cpnota commented on June 2, 2024

Merged #241 to develop for now. It should allow n_envs=1 to work.

benblack769 commented on June 2, 2024

Ah, I see. Yes, it is very hard to get that from the error message.

Earlier, when we were trying to use ALL for our primary work with PettingZoo, the #1 issue we had with the library, and the reason we ultimately turned away from it, was that the error messages were just too difficult to understand. Every little mistake we made took an hour to track down.

Not sure what can be done about that, but explicit type checking would be a good start. For the policies/approximations, shape checking would also be super helpful. I got a ton of weird error messages when I was trying to build custom neural networks and used incorrect shapes for the input and output layers.
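For example, a one-time shape check against the environment's spaces would have surfaced those mistakes immediately. A rough sketch, assuming a Gym-style environment with a discrete action space; the helper name and where it would be called from are hypothetical:

    import torch

    def check_model_shapes(model, env):
        # Hypothetical helper: push a dummy observation through a custom
        # network and verify the input/output shapes match the environment.
        obs_shape = env.observation_space.shape
        n_actions = env.action_space.n
        dummy = torch.zeros((1, *obs_shape))
        try:
            out = model(dummy)
        except RuntimeError as err:
            raise ValueError(f"Model rejected observations of shape {obs_shape}: {err}")
        if out.shape[-1] != n_actions:
            raise ValueError(
                f"Model outputs {out.shape[-1]} values but the action space has {n_actions} actions"
            )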

jkterry1 commented on June 2, 2024

benblack769 commented on June 2, 2024

For more context on this particular issue: the problem came up when someone wanted to train one agent with PPO and another with DQN. This is a very unusual use case that is probably not a good idea, but it brought up the fact that PPO isn't really supported at all for multiagent use.

I made a small Preset wrapper and an Agent wrapper to handle this issue.
https://gist.github.com/weepingwillowben/400b42d54b6e57034da1e5293166aa80
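The idea is roughly the following (an illustration of the shape of the wrapper, not the gist's actual code; the ParallelPreset import path and the agent()/test_agent() signatures are assumptions):

    from all.presets import ParallelPreset  # assumed location of the base class

    class SingleEnvAgentPreset:
        # Illustrative wrapper: expose a ParallelPreset (e.g. PPO) through the
        # single-agent preset interface by building it with n_envs=1 and
        # delegating. A matching Agent wrapper (not shown) batches the single
        # environment's state on the way in and unbatches the action on the way out.
        def __init__(self, parallel_preset: ParallelPreset):
            self._preset = parallel_preset

        def agent(self, writer=None, train_steps=float("inf")):
            return self._preset.agent(writer=writer, train_steps=train_steps)

        def test_agent(self):
            return self._preset.test_agent()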

Not sure if this should be officially supported or not.

cpnota commented on June 2, 2024

I think this is fine for single agent now. #288 will handle the multiagent case.
