openai / multiagent-particle-envs

Code for a multi-agent particle environment used in the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"

Home Page: https://arxiv.org/pdf/1706.02275.pdf

License: MIT License


multiagent-particle-envs's Introduction

Status: Archive (code is provided as-is, no updates expected)

Maintained Fork

The maintained version of these environments, which includes numerous fixes, comprehensive documentation, support for installation via pip, and support for current versions of Python, is available in PettingZoo (https://github.com/Farama-Foundation/PettingZoo, https://pettingzoo.farama.org/environments/mpe/).

Multi-Agent Particle Environment

A simple multi-agent particle world with a continuous observation and discrete action space, along with some basic simulated physics. Used in the paper Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments.

Getting started:

  • To install, cd into the root directory and type pip install -e .

  • To interactively view the 'moving to landmark' scenario (see others in ./scenarios/): bin/interactive.py --scenario simple.py

  • Known dependencies: Python (3.5.4), OpenAI gym (0.10.5), numpy (1.14.5), pyglet (1.5.27)

  • To use the environments, look at the code for importing them in make_env.py; a minimal usage sketch follows below.
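
For example, a minimal usage sketch (assumption: make_env(scenario_name) returns a Gym-like object whose observation_space, action_space, and step() results are per-agent lists):

from make_env import make_env

env = make_env('simple_spread')   # scenario name without the .py suffix
obs_n = env.reset()               # list with one observation per agent
for _ in range(25):
    # one action per agent; each entry must match that agent's action space
    act_n = [space.sample() for space in env.action_space]
    obs_n, reward_n, done_n, info_n = env.step(act_n)
env.render()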

Code structure

  • make_env.py: contains code for importing a multiagent environment as an OpenAI Gym-like object.

  • ./multiagent/environment.py: contains code for environment simulation (interaction physics, _step() function, etc.)

  • ./multiagent/core.py: contains classes for various objects (Entities, Landmarks, Agents, etc.) that are used throughout the code.

  • ./multiagent/rendering.py: used for displaying agent behaviors on the screen.

  • ./multiagent/policy.py: contains code for interactive policy based on keyboard input.

  • ./multiagent/scenario.py: contains base scenario object that is extended for all scenarios.

  • ./multiagent/scenarios/: folder where various scenarios/environments are stored. Scenario code consists of several functions:

    1. make_world(): creates all of the entities that inhabit the world (landmarks, agents, etc.) and assigns their capabilities (whether they can communicate, or move, or both). Called once at the beginning of each training session.
    2. reset_world(): resets the world by assigning properties (position, color, etc.) to all entities in the world. Called before every episode (including after make_world(), before the first episode).
    3. reward(): defines the reward function for a given agent
    4. observation(): defines the observation space of a given agent
    5. (optional) benchmark_data(): provides diagnostic data for policies trained on the environment (e.g. evaluation metrics)

Creating new environments

You can create new scenarios by implementing the first 4 functions above (make_world(), reset_world(), reward(), and observation()); a skeleton is sketched below.
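
A hedged skeleton, modeled on simple.py (the class names World, Agent, Landmark and the state fields are taken from multiagent/core.py; treat this as a sketch rather than a drop-in scenario):

import numpy as np
from multiagent.core import World, Agent, Landmark
from multiagent.scenario import BaseScenario

class Scenario(BaseScenario):
    def make_world(self):
        world = World()
        world.agents = [Agent() for _ in range(2)]
        for i, agent in enumerate(world.agents):
            agent.name = 'agent %d' % i
            agent.collide = False
            agent.silent = True               # no communication in this toy scenario
        world.landmarks = [Landmark()]
        world.landmarks[0].name = 'landmark 0'
        world.landmarks[0].collide = False
        world.landmarks[0].movable = False
        self.reset_world(world)               # set initial conditions
        return world

    def reset_world(self, world):
        for agent in world.agents:
            agent.state.p_pos = np.random.uniform(-1, +1, world.dim_p)
            agent.state.p_vel = np.zeros(world.dim_p)
            agent.state.c = np.zeros(world.dim_c)
        for landmark in world.landmarks:
            landmark.state.p_pos = np.random.uniform(-1, +1, world.dim_p)
            landmark.state.p_vel = np.zeros(world.dim_p)

    def reward(self, agent, world):
        # negative squared distance to the single landmark
        return -np.sum(np.square(agent.state.p_pos - world.landmarks[0].state.p_pos))

    def observation(self, agent, world):
        # own velocity plus landmark positions relative to the agent
        entity_pos = [lm.state.p_pos - agent.state.p_pos for lm in world.landmarks]
        return np.concatenate([agent.state.p_vel] + entity_pos)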

List of environments

Each entry lists the env name in code (and the name used in the paper), whether the scenario involves communication, whether it is competitive, and notes.

simple.py. Communication: N. Competitive: N. Single agent sees landmark position and is rewarded based on how close it gets to the landmark. Not a multiagent environment; used for debugging policies.

simple_adversary.py (Physical deception). Communication: N. Competitive: Y. 1 adversary (red), N good agents (green), N landmarks (usually N=2). All agents observe the positions of landmarks and other agents. One landmark is the 'target landmark' (colored green). Good agents are rewarded based on how close one of them is to the target landmark, but negatively rewarded if the adversary is close to the target landmark. The adversary is rewarded based on how close it is to the target, but it doesn't know which landmark is the target landmark. So good agents have to learn to 'split up' and cover all landmarks to deceive the adversary.

simple_crypto.py (Covert communication). Communication: Y. Competitive: Y. Two good agents (Alice and Bob), one adversary (Eve). Alice must send a private message to Bob over a public channel. Alice and Bob are rewarded based on how well Bob reconstructs the message, but negatively rewarded if Eve can reconstruct the message. Alice and Bob have a private key (randomly generated at the beginning of each episode), which they must learn to use to encrypt the message.

simple_push.py (Keep-away). Communication: N. Competitive: Y. 1 agent, 1 adversary, 1 landmark. The agent is rewarded based on its distance to the landmark. The adversary is rewarded if it is close to the landmark and if the agent is far from the landmark. So the adversary learns to push the agent away from the landmark.

simple_reference.py. Communication: Y. Competitive: N. 2 agents, 3 landmarks of different colors. Each agent wants to get to its target landmark, which is known only by the other agent. The reward is collective, so agents have to learn to communicate the other agent's goal and navigate to their own landmark. This is the same as the simple_speaker_listener scenario, except that both agents are simultaneous speakers and listeners.

simple_speaker_listener.py (Cooperative communication). Communication: Y. Competitive: N. Same as simple_reference, except one agent is the 'speaker' (gray) that does not move (it observes the goal of the other agent), and the other agent is the listener (it cannot speak, but must navigate to the correct landmark).

simple_spread.py (Cooperative navigation). Communication: N. Competitive: N. N agents, N landmarks. Agents are rewarded based on how far any agent is from each landmark, and penalized if they collide with other agents. So agents have to learn to cover all the landmarks while avoiding collisions.

simple_tag.py (Predator-prey). Communication: N. Competitive: Y. Predator-prey environment. Good agents (green) are faster and want to avoid being hit by adversaries (red). Adversaries are slower and want to hit good agents. Obstacles (large black circles) block the way.

simple_world_comm.py. Communication: Y. Competitive: Y. Environment seen in the video accompanying the paper. Same as simple_tag, except (1) there is food (small blue balls) that the good agents are rewarded for being near; (2) there are 'forests' that hide the agents inside them from being seen from outside; (3) there is a 'leader adversary' that can see the agents at all times and can communicate with the other adversaries to help coordinate the chase.

Paper citation

If you used this environment for your experiments or found it helpful, consider citing the following papers:

Environments in this repo:

@article{lowe2017multi,
  title={Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments},
  author={Lowe, Ryan and Wu, Yi and Tamar, Aviv and Harb, Jean and Abbeel, Pieter and Mordatch, Igor},
  journal={Neural Information Processing Systems (NIPS)},
  year={2017}
}

Original particle world environment:

@article{mordatch2017emergence,
  title={Emergence of Grounded Compositional Language in Multi-Agent Populations},
  author={Mordatch, Igor and Abbeel, Pieter},
  journal={arXiv preprint arXiv:1703.04908},
  year={2017}
}

multiagent-particle-envs's People

Contributors: asrvsn, christopherhesse, jkterry1, mordatch, pzhokhov, ryan-lowe, rzilleruelo, willdudley

multiagent-particle-envs's Issues

reward of simple_reference

The reward for the simple_reference scenario is defined as follows:

def reward(self, agent, world):
    if agent.goal_a is None or agent.goal_b is None:
        return 0.0
    dist2 = np.sum(np.square(agent.goal_a.state.p_pos - agent.goal_b.state.p_pos))
    return -dist2

So every agent is rewarded based on the performance of the other agent, but shouldn't the reward be the sum of both agents' rewards? For example, something like:

def reward(self, agent, world):
    reward = 0.0
    for a in world.agents:  # sum over both agents; avoid shadowing the argument
        if a.goal_a is None or a.goal_b is None:
            continue
        reward -= np.sum(np.square(a.goal_a.state.p_pos - a.goal_b.state.p_pos))
    return reward

In the first case, the agents have no direct incentive to move towards the goal, because each is only rewarded if the other agent is close to its goal; in the second case, every agent is incentivised both to communicate the goal to the other agent and to move toward its own goal.
So wouldn't a shared reward be more appropriate, or is this a deliberate challenge that algorithms have to overcome?

5 undefined names in Python code can raise NameError at runtime

flake8 testing of https://github.com/openai/multiagent-particle-envs on Python 2.7.14

$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

./make_env.py:41:97: F821 undefined name 'arglist'
        env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation, arglist, scenario.benchmark_data)
                                                                                                ^

./make_env.py:43:97: F821 undefined name 'arglist'
        env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation, arglist)
                                                                                                ^

./multiagent/policy.py:40:44: F821 undefined name 'envo'
        return np.concatenate([u. np.zeros(envo.world.dim_c)])
                                           ^

./multiagent/scenarios/simple_speaker_listener.py:60:16: F821 undefined name 'reward'
        return reward(agent, reward)
               ^

./multiagent/scenarios/simple_speaker_listener.py:60:30: F821 undefined name 'reward'
        return reward(agent, reward)
                             ^

2     F821 undefined name 'arglist'
1     F821 undefined name 'envo'
2     F821 undefined name 'reward'
5

Compatibility with openai/baselines

Are these environments compatible with the OpenAI baselines implementations?

At first sight, it looks like the agents in openai/baselines don't support environments whose observations are a list.

For example, the code below raises this exception:

~/tmp/baselines/baselines/deepq/deepq.py in learn(env, network, seed, lr, total_timesteps, buffer_size, exploration_fraction, exploration_final_eps, train_freq, batch_size, print_freq, checkpoint_freq, checkpoint_path, learning_starts, gamma, target_network_update_freq, prioritized_replay, prioritized_replay_alpha, prioritized_replay_beta0, prioritized_replay_beta_iters, prioritized_replay_eps, param_noise, callback, load_path, **network_kwargs)
    202         make_obs_ph=make_obs_ph,
    203         q_func=q_func,
--> 204         num_actions=env.action_space.n,
    205         optimizer=tf.train.AdamOptimizer(learning_rate=lr),
    206         gamma=gamma,

AttributeError: 'list' object has no attribute 'n'

Code that instantiates a baselines agent with a multiagent environment:

from baselines.common.vec_env.subproc_vec_env import SubprocVecEnv
from baselines.run import get_learn_function

from multiagent.environment import MultiAgentEnv
import multiagent.scenarios as scenarios

common_kwargs = dict(total_timesteps=30000, network="mlp", gamma=1.0, seed=0)

learn_kwargs = {
    'a2c' : dict(nsteps=32, value_network='copy', lr=0.05),
    'acktr': dict(nsteps=32, value_network='copy'),
    'deepq': dict(total_timesteps=20000),
    'ppo2': dict(value_network='copy'),
    'trpo_mpi': {}
}
alg = "deepq"

kwargs = common_kwargs.copy()
kwargs.update(learn_kwargs[alg])
learn_fn = lambda e: get_learn_function(alg)(env=e, **kwargs)

def env_fn():
    scenario = scenarios.load("simple_tag.py").Scenario()
    world = scenario.make_world()
    env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation, scenario.benchmark_data)
    return env

env = SubprocVecEnv([env_fn])
model = learn_fn(env)

Fixed frame for entire environment

Hi, I was wondering whether there's a feature to have a single, fixed frame rendering the entire environment (as shown in your demo video in the blog post), rather than individual windows rendering each agent's view in interactive mode.

Thanks!
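
Not an official answer, but the MultiAgentEnv constructor quoted in other issues on this page takes a shared_viewer flag, and it appears that shared_viewer=True (seemingly the default) renders the whole world in one fixed window, while interactive.py passes shared_viewer=False to get per-agent views. A sketch:

from multiagent.environment import MultiAgentEnv
import multiagent.scenarios as scenarios

scenario = scenarios.load('simple_tag.py').Scenario()
world = scenario.make_world()
# shared_viewer=True: a single window showing the entire environment
env = MultiAgentEnv(world, scenario.reset_world, scenario.reward,
                    scenario.observation, shared_viewer=True)
env.reset()
env.render()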

Cannot run interactive.py

Hi.

I just cloned the repository and tried to run interactive.py, but got the following traceback:

WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Traceback (most recent call last):
  File "bin/interactive.py", line 21, in <module>
    env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation, info_callback=None, shared_viewer=False)
  File "/home/name/code/multiagent-particle-envs/bin/../multiagent/environment.py", line 68, in __init__
    self.observation_space.append(spaces.Box(low=-np.inf, high=+np.inf, shape=(obs_dim),))
  File "/usr/local/lib/python3.5/dist-packages/gym/spaces/box.py", line 33, in __init__
    Space.__init__(self, shape, dtype)
  File "/usr/local/lib/python3.5/dist-packages/gym/core.py", line 161, in __init__
    self.shape = None if shape is None else tuple(shape)
TypeError: 'int' object is not iterable

I have found that I can overcome this particular error if I change (in environment.py, line 68)

self.observation_space.append(spaces.Box(low=-np.inf, high=+np.inf, shape=(obs_dim),))

to

self.observation_space.append(spaces.Box(low=-np.inf, high=+np.inf, shape=(obs_dim,)))

However, then I start getting a whole bunch of new errors.

Any clue as to what I am doing wrong?

cannot run interactive.py

Traceback (most recent call last):
  File "/home/shy/桌面/multiagent-particle-envs-master/bin/interactive.py", line 6, in <module>
    from multiagent.environment import MultiAgentEnv
  File "/home/shy/桌面/multiagent-particle-envs-master/bin/../multiagent/environment.py", line 5, in <module>
    from multiagent.multi_discrete import MultiDiscrete
  File "/home/shy/桌面/multiagent-particle-envs-master/bin/../multiagent/multi_discrete.py", line 7, in <module>
    from gym.spaces import prng
ImportError: cannot import name 'prng'

Can't run interactive.py

agent 1 to agent 0: _   agent 2 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   
Traceback (most recent call last):
  File "./interactive.py", line 34, in <module>
    obs_n, reward_n, done_n, _ = env.step(act_n)
  File "/mnt//Programming/python/multiagent-particle-envs/bin/../multiagent/environment.py", line 94, in step
    reward_n.append(self._get_reward(agent))
  File "/mnt//Programming/python/multiagent-particle-envs/bin/../multiagent/environment.py", line 141, in _get_reward
    return self.reward_callback(agent, self.world)
  File "/mnt//Programming/python/multiagent-particle-envs/bin/../multiagent/scenarios/simple_crypto.py", line 96, in reward
    return self.adversary_reward(agent, world) if agent.adversary else self.agent_reward(agent, world)
  File "/mnt//Programming/python/multiagent-particle-envs/bin/../multiagent/scenarios/simple_crypto.py", line 120, in adversary_reward
    if not (agent.state.c == np.zeros(world.dim_c)).all():
AttributeError: 'bool' object has no attribute 'all'
$ pip list | grep "gym|numpy"

gym                      0.10.5
numpy                    1.14.5
numpy-stl                2.8.0

$ python -V

Python 3.6.0

(a.state.c == np.zeros(world.dim_c)).all()

Can anyone tell me what this means?

Thanks
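
Not certain, but in numpy this pattern degrades to a plain Python bool (which has no .all() attribute) when the two operands cannot be broadcast against each other, for example when agent.state.c has a different length than world.dim_c. A small demonstration:

import numpy as np

a = np.zeros(4)
print(type(a == np.zeros(4)))   # <class 'numpy.ndarray'>: .all() works

b = np.zeros(3)                 # shape mismatch: comparison returns a scalar
print(type(b == np.zeros(4)))   # <class 'bool'>: calling .all() raises AttributeError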

Internal color setting error in simple_crypto

Hello, I am getting the following error when trying to display a trained policy in simple_crypto.

The command I am running:
$ python experiments/train.py --scenario=simple_crypto --display

The trace:

Traceback (most recent call last):
  File "experiments/train.py", line 202, in <module>
    train(arglist)
  File "experiments/train.py", line 153, in train
    env.render()
  File "/home/ryan/openai/multiagent-particle-envs/multiagent/environment.py", line 233, in render
    geom.set_color(*entity.color, alpha=0.5)
TypeError: set_color() got multiple values for argument 'alpha'

World.dim_c parameter meaning

I'm trying to figure out what the World.dim_c parameter does. Does anyone know?

It is defined in core.py here:

# multi-agent world
class World(object):
    def __init__(self):
        # list of agents and entities (can change at execution-time!)
        self.agents = []
        self.landmarks = []
        # communication channel dimensionality
        self.dim_c = 0
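
As far as I can tell, dim_c is the length of each agent's communication vector: scenarios with communication (the ones marked 'Y' in the table above) set it in make_world(), and a non-silent agent's utterance agent.state.c is then a vector of that length (all zeros while the agent is silent). For example:

from multiagent.core import World

world = World()
world.dim_c = 3   # each agent's communication state c becomes a length-3 vector;
                  # with the default dim_c = 0 there is no communication channel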

Undefined name: 'reward' in simple_speaker_listener.py

reward is an undefined name in this context. Should it be self.reward instead?

flake8 testing of https://github.com/openai/multiagent-particle-envs on Python 3.6.3

$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

./multiagent/scenarios/simple_speaker_listener.py:60:16: F821 undefined name 'reward'
        return reward(agent, reward)
               ^
./multiagent/scenarios/simple_speaker_listener.py:60:30: F821 undefined name 'reward'
        return reward(agent, reward)
                             ^
2     F821 undefined name 'reward'
2

problem with gym version >= 0.10

Hi, the problem seems to be related to the version of gym used with multiagent-particle-envs.
Every time I use env methods like reset or render, I get this error:

    states = env.reset()
  File "/home/yakotaki/Downloads/gym-master/gym/core.py", line 71, in reset
    raise NotImplementedError
NotImplementedError

After downgrading gym to any version < 0.10, the error disappears.

AttributeError: 'NoneType' object has no attribute 'flip'

Traceback (most recent call last):
  File "D:/My_Research/MAS_code/Python/multiagent-particle-envs-master/bin/interactive.py", line 36, in <module>
    env.render()
  File "D:\My_Research\MAS_code\Python\multiagent-particle-envs-master\bin\..\multiagent\environment.py", line 261, in render
    results.append(self.viewers[i].render(return_rgb_array = mode=='rgb_array'))
  File "D:\My_Research\MAS_code\Python\multiagent-particle-envs-master\bin\..\multiagent\rendering.py", line 110, in render
    self.window.flip()
  File "C:\Python27\lib\site-packages\pyglet\window\win32\__init__.py", line 323, in flip
    self.context.flip()
AttributeError: 'NoneType' object has no attribute 'flip'

Process finished with exit code 1

Did anyone fix this issue?

simulate

Is it possible to run this project using Spyder? Otherwise, what would be an easy tool to use? I have just started learning about this; please explain how to run it. I have imported all the dependencies.

partial window

Hi,
I ran some scenarios like simple_tag, and I can only see a partial window (sometimes the agents run out of the window, so I can only see part of the whole 'world'). Any help would be appreciated!

value return in get_done function is wrong

The value returned by the _get_done function of the MultiAgentEnv class in environment.py should be 'self.done_callback(agent, self.world)'.
Correct me if I am wrong.
Thanks

Is done() function necessary? When to finish episode?

Should the done function return True only when the agent is out of range?

def done(self, agent, world):
    x = abs(agent.state.p_pos[0])
    y = abs(agent.state.p_pos[1])
    if (x > 1.0 or y > 1.0):
        return True
    return False

Or should I also add a condition to terminate the episode when the goal is reached?

Thanks and sorry, I'm newbie here.

Fix: ImportError: cannot import name 'prng'

The problem with prng is that the gym package has been updated and the prng module has been removed.

You can fix this in multiagent-particle-envs/multiagent/multi_discrete.py:

import gym
# from gym.spaces import prng   # prng has been removed from gym

and replace the line that used prng with:

np_random = np.random.RandomState()
random_array = np_random.rand(self.num_discrete_space)

Closing rendered environment

Is anyone else having trouble closing the environment after it has been rendered? After I run the following code, the window just hangs there.

env.render()
** stuff
env.close()

simple_world_comm.py observation function returns incomplete information

Hi,
I find that in the simple_world_comm environment, the observation function does not return the information you have defined, such as food_pos, prey_forest, and prey_forest_lead, and the comm variable is overwritten by comm = [world.agents[0].state.c], so comm does not contain the communication of all the other agents.
I wonder why, and how the observation function should be defined.

Error when display simple_crypto

Python (3.5.4)
OpenAI gym (0.10.5)
tensorflow (1.8.0)
numpy (1.14.5)

maddpg code: https://github.com/openai/maddpg

I got an error when I ran the maddpg code in the simple_crypto scenario and displayed it.
PS: the 8 other environments work.

========================================
Step 1 : train and save model
$ python train.py --scenario simple_crypto

2020-01-22 17:46:58.755855: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Starting iterations...
steps: 24975, episodes: 1000, mean episode reward: -22.24953384943575, time: 26.487
steps: 49975, episodes: 2000, mean episode reward: -17.158019479170676, time: 43.373
steps: 74975, episodes: 3000, mean episode reward: -14.317337732475657, time: 43.159
steps: 99975, episodes: 4000, mean episode reward: -7.1827372802086815, time: 45.176

========================================
Step 2 : restore model and display
$ python train.py --scenario simple_crypto --display

2020-01-22 17:50:47.412262: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Loading previous state...
Starting iterations...
agent 1 to agent 0: B agent 2 to agent 0: C agent 0 to agent 1: A agent 2 to agent 1: C agent 0 to agent 2: A agent 1 to agent 2: B
Traceback (most recent call last):
  File "train.py", line 211, in <module>
    train(arglist)
  File "train.py", line 170, in train
    env.render()
  File "/Users/likejiao/Documents/LKJDocument/Code/github/openai/multiagent-particle-envs/multiagent/environment.py", line 234, in render
    geom.set_color(*entity.color, alpha=0.5)
TypeError: set_color() got multiple values for argument 'alpha'

========================================
Is there something wrong with the simple_crypto scenario?

Creating the boundary of the environment

We are trying to add boundaries to the environment, but when we check the entity code in core.py, there is only a size property; from our understanding, we can only create circular objects. Is there any way to create other shapes?

TypeError: render() got an unexpected keyword argument 'close'

Hi, when I tried to run interactive.py, I came across some errors:

Traceback (most recent call last):
  File "interactive.py", line 21, in <module>
    env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation, info_callback=None, shared_viewer=False)
  File "/home/shao/projects-py2/rl/multiagent-particle-envs/bin/../multiagent/environment.py", line 68, in __init__
    self.observation_space.append(spaces.Box(low=-np.inf, high=+np.inf, shape=(obs_dim,), dtype=np.float32))
TypeError: __init__() got an unexpected keyword argument 'dtype'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/home/shao/projects-py2/gym/gym/utils/closer.py", line 67, in close
    closeable.close()
  File "/home/shao/projects-py2/gym/gym/core.py", line 164, in close
    self.render(close=True)
TypeError: render() got an unexpected keyword argument 'close'

I tried reinstalling gym, but that also failed.
I would appreciate it if anyone could help me.

Build network for every agent?

Hi, I have a question about the paper. If I have 5 agents in the environment, I need to build 10 networks (5 actors and 5 critics) based on the algorithm from the paper, right?

Simple tag scenario: rewards added multiple times for adversary agents

In environment.py

    def step(self, action_n):
        ... 
        reward_n = []
        ...  
        for agent in self.agents:
            ... 
            reward_n.append(self._get_reward(agent))
            ... 
        # all agents get total reward in cooperative case
        reward = np.sum(reward_n)
        if self.shared_reward:  # simple tag is not shared reward
            reward_n = [reward] * self.n

        return  ...  reward_n,    ... 

For each agent, it calculates the corresponding reward. In simple_tag.py:

    def adversary_reward(self, agent, world):
        # Adversaries are rewarded for collisions with agents
        rew = 0
        shape = False
        agents = self.good_agents(world)
        adversaries = self.adversaries(world)
        if shape:  # reward can optionally be shaped (decreased reward for increased distance from agents)
            for adv in adversaries:
                rew -= 0.1 * min([np.sqrt(np.sum(np.square(a.state.p_pos - adv.state.p_pos))) for a in agents])
        if agent.collide:
            for ag in agents:
                for adv in adversaries:
                    if self.is_collision(ag, adv):
                        rew += 10
        return rew

agent.collide is always True. When the agent is an adversary, the code iterates over all adversary agents, so the same collision reward is added multiple times for every adversary. A minimal fix sketch follows below.
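
A minimal fix sketch under that reading (only count collisions that involve this particular adversary; the optional shaping branch is omitted for brevity):

def adversary_reward(self, agent, world):
    # reward this adversary only for its own collisions with good agents
    rew = 0
    agents = self.good_agents(world)
    if agent.collide:
        for ag in agents:
            if self.is_collision(ag, agent):
                rew += 10
    return rew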

Example training code?

Hi,

Is there an example somewhere that shows how to train using the baselines implementations of DDPG or another algorithm? Could you give a few pointers on what I'd have to do to try the baselines algorithm implementations on your environments?

I was also wondering which environment you originally trained in?

Thanks in advance.

More documentation about the environments

Hi, I am using your environments to test multi-agent algorithms.

Can I get some more information about the observations, actions, and rewards of the environments you provide? I have been using the simple_push environment.

Action Discrete(5) and reward in "simple_tag" env

  1. The action for each agent is Discrete(5). However, it is actually a Box(5) within (-1, 1).
    The code here

    agent.action.u[0] += action[0][1] - action[0][2]
    agent.action.u[1] += action[0][3] - action[0][4]

    is used to get p_force and then p_vel. So what does action[0][0] do?

  2. The reward of the adversary agents for each step is based on is_collision, which turns out to be the same reward for every adversary agent, even if we consider the penalty in the case shape = True.
    How is this different from self.shared_reward = True in environment.py?

I don't mean to complain, I just wonder how it works.
I'd appreciate it if you guys could answer.
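
Regarding question 1, a hedged reading of the five slots, consistent with the force computation quoted above (and with how the maintained PettingZoo fork documents this action space): index 0 is a no-op, and the remaining two pairs are opposing forces along each axis.

# assumed layout of action[0]: [no-op, +x, -x, +y, -y]
agent.action.u[0] += action[0][1] - action[0][2]   # net force along x
agent.action.u[1] += action[0][3] - action[0][4]   # net force along y
# under this reading, action[0][0] never contributes to the physical force; it
# only matters when the five values are treated as one discrete "do nothing" choice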

Errors

When I use the command bin/interactive.py --scenario simple.py, I get an error:

 /Users/salse/Dropbox/openai/multiagent-particle-envs/multiagent/environment.py(149)_set_action()
    148             import ipdb; ipdb.set_trace()
--> 149             action = [action]
    150 

The action seems undefined.

Also, could you give an example of training? I don't even know what to feed into env.step(); a single integer or a list of integers doesn't seem to work.

Error with MultiDiscrete spaces

Hi everybody,
I ran into weird behaviour in the environment.py file at line 56.

# total action space
if len(total_action_space) > 1:
    # all action spaces are discrete, so simplify to MultiDiscrete action space
    if all([isinstance(act_space, spaces.Discrete) for act_space in total_action_space]):
         act_space = spaces.MultiDiscrete([[0,act_space.n-1] for act_space in total_action_space])

creates an error in the MultiDiscrete space: it says that nvec expects a 1-dimensional vector, but it receives a 2-dimensional vector.
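
A possible fix sketch, assuming a recent gym where MultiDiscrete takes a 1-D vector of cardinalities (nvec) rather than a list of [low, high] ranges:

# each Discrete(n) subspace contributes a single cardinality entry
act_space = spaces.MultiDiscrete([act_space.n for act_space in total_action_space])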

'Done' is always False for all environments

Is there any particular reason why the _step(self, action_n) function in environment.py just returns False in done_n for all agents, irrespective of what is happening in the environment?
This makes it very difficult to train MADDPG: initially the agents just keep going out of the environment (the render window), and while I believe the environment should return done=True in this case, allowing the training to reset and start over, it just keeps going, which makes it difficult to learn IMO.

Another question is why the environments are not bounded. One workaround for the done issue is sketched below.
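
One workaround, assuming MultiAgentEnv accepts a done_callback (as the _get_done issue above suggests): define a done() function in your scenario and pass it in, for example terminating once an agent leaves the [-1, 1] square.

def done(self, agent, world):
    # end the episode once the agent wanders off the visible world
    x, y = abs(agent.state.p_pos[0]), abs(agent.state.p_pos[1])
    return x > 1.0 or y > 1.0

# when building the environment:
env = MultiAgentEnv(world, scenario.reset_world, scenario.reward,
                    scenario.observation, done_callback=scenario.done)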

bug in is_collision function

I am referring to the following function:

def is_collision(self, agent1, agent2):
    delta_pos = agent1.state.p_pos - agent2.state.p_pos
    dist = np.sqrt(np.sum(np.square(delta_pos)))
    dist_min = agent1.size + agent2.size
    return True if dist < dist_min else False

As I understand it, this code is supposed to detect when an agent collides with another agent. However, it currently returns false positives when an agent "collides with itself", because the calling function periodically passes the same agent entity in for both agent1 and agent2 within its loop.

In my fork, I have included the following lines of code to fix the bug:

if agent1 == agent2:
    return False

So the code now reads:

def is_collision(self, agent1, agent2):
    if agent1 == agent2:
        return False
    delta_pos = agent1.state.p_pos - agent2.state.p_pos
    dist = np.sqrt(np.sum(np.square(delta_pos)))
    dist_min = agent1.size + agent2.size
    return True if dist < dist_min else False

Please let me know if I have misunderstood something. Thanks :)

error installing with gym 0.10.5

Hi,

I wanted to use these environments in combination with some other code that expects a newer version of gym. It seems that one of your latest updates adapted the codebase to work with gym 0.10.x, but in practice, when running
pip install -e .
I get an error message complaining about the gym version; it seems that the "multiagent" package still asks for gym==0.7.
pip._vendor.pkg_resources.ContextualVersionConflict: (gym 0.10.5 (/usr/local/lib/python3.5/dist-packages), Requirement.parse('gym==0.7'), {'multiagent'})

How can I overcome this?

Global state

I see the env.step() function returns the observation of all the agents. Is there any way to get global state information?
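
Not built in, as far as I can tell. A common workaround is to concatenate the per-agent observations, or to read entity positions directly from env.world (the World object holds all agents and landmarks), as a hand-rolled global state. A sketch:

import numpy as np

obs_n = env.reset()                   # list of per-agent observations
global_state = np.concatenate(obs_n)  # naive global state

# or use ground-truth entity positions straight from the world object
positions = np.concatenate([e.state.p_pos for e in env.world.entities])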

[simple_world_comm.py] Just hang and not rendering as animation

Hello Developers,

My Linux Ubuntu 18.04 x64 environment:

uname -a
Linux WorkStation 5.0.0-32-generic #34~18.04.2-Ubuntu SMP Thu Oct 10 10:36:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

python3 --version
Python 3.6.8

pip3 list (output abridged; packages relevant to this environment):

Package                       Version
----------------------------- -------------------
gym                           0.10.5
multiagent                    0.0.1
numpy                         1.14.5
pyglet                        1.2.0
PyOpenGL                      3.1.0

I executed the python script:

~/machine_learning/multiagent-particle-envs/multiagent/scenarios$ python3 ../../bin/interactive.py --scenario simple_world_comm.py

It ran and generated a few windows with circles in them. Soon after that, it hung: the circles stopped moving around or interacting with one another.

Some of the resulting log:

First attempt:

~/machine_learning/multiagent-particle-envs/multiagent/scenarios$ python3 ../../bin/interactive.py --scenario simple_world_comm.py

agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _
(this identical line repeats every step)
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _   
Traceback (most recent call last):
  File "../../bin/interactive.py", line 36, in <module>
    env.render()
  File "/home/dragon/machine_learning/multiagent-particle-envs/bin/../multiagent/environment.py", line 261, in render
    results.append(self.viewers[i].render(return_rgb_array = mode=='rgb_array'))
  File "/home/dragon/machine_learning/multiagent-particle-envs/bin/../multiagent/rendering.py", line 104, in render
    self.window.dispatch_events()
  File "/home/dragon/.local/lib/python3.6/site-packages/pyglet/window/xlib/__init__.py", line 856, in dispatch_events
    0x1ffffff, byref(e)):
ctypes.ArgumentError: argument 2: <class 'TypeError'>: wrong type

Second attempt:

~/machine_learning/multiagent-particle-envs/multiagent/scenarios$ python3 ../../bin/interactive.py --scenario simple_world_comm.py

agent 1 to agent 0: _   agent 2 to agent 0: _   agent 3 to agent 0: _   agent 4 to agent 0: _   agent 5 to agent 0: _   agent 0 to agent 1: _   agent 2 to agent 1: _   agent 3 to agent 1: _   agent 4 to agent 1: _   agent 5 to agent 1: _   agent 0 to agent 2: _   agent 1 to agent 2: _   agent 3 to agent 2: _   agent 4 to agent 2: _   agent 5 to agent 2: _   agent 0 to agent 3: _   agent 1 to agent 3: _   agent 2 to agent 3: _   agent 4 to agent 3: _   agent 5 to agent 3: _   agent 0 to agent 4: _   agent 1 to agent 4: _   agent 2 to agent 4: _   agent 3 to agent 4: _   agent 5 to agent 4: _   agent 0 to agent 5: _   agent 1 to agent 5: _   agent 2 to agent 5: _   agent 3 to agent 5: _   agent 4 to agent 5: _
... (the same line is printed once per step; identical repetitions omitted) ...
^CTraceback (most recent call last):
  File "../../bin/interactive.py", line 36, in <module>
    env.render()
  File "/home/dragon/machine_learning/multiagent-particle-envs/bin/../multiagent/environment.py", line 261, in render
    results.append(self.viewers[i].render(return_rgb_array = mode=='rgb_array'))
  File "/home/dragon/machine_learning/multiagent-particle-envs/bin/../multiagent/rendering.py", line 124, in render
    self.window.flip()
  File "/home/dragon/.local/lib/python3.6/site-packages/pyglet/window/xlib/__init__.py", line 500, in flip
    self.context.flip()
  File "/home/dragon/.local/lib/python3.6/site-packages/pyglet/gl/xlib.py", line 360, in flip
    self._wait_vsync()
  File "/home/dragon/.local/lib/python3.6/site-packages/pyglet/gl/xlib.py", line 245, in _wait_vsync
    2, (count.value + 1) % 2, byref(count))
KeyboardInterrupt

I hope the developer or someone from the community is able to resolve this, or figure out what is wrong in the issue described above.

Thank you

Reward Setting for simple_tag.py

Hello,

Thank you, OpenAI, for these amazing contributions; the papers and code have helped my research a lot.
While checking MADDPG running on multiagent-particle-envs, I noticed that the rewards for the adversaries in simple_tag.py are always identical. As far as I can tell, this is a bug (though I am not sure it is unintended) in the reward function adversary_reward() in simple_tag.py (called from reward() <- _get_reward() of MultiAgentEnv in environment.py <- step() <- env.step(), for example).
The bug is that each adversary's reward also counts collisions involving every other adversary, so all adversaries receive the same reward at every step, as if they shared one reward. Please check the code below to see whether it needs fixing.

The function in question in simple_tag.py is:

def adversary_reward(self, agent, world):
    # Adversaries are rewarded for collisions with agents
    rew = 0
    shape = False
    agents = self.good_agents(world)
    adversaries = self.adversaries(world)
    if shape:  # reward can optionally be shaped (decreased reward for increased distance from agents)
        for adv in adversaries:
            rew -= 0.1 * min([np.sqrt(np.sum(np.square(a.state.p_pos - adv.state.p_pos))) for a in agents])
    if agent.collide:
        for ag in agents:
            for adv in adversaries:
                if self.is_collision(ag, adv):
                    rew += 10
    return rew

Then the last part:

    if agent.collide:
        for ag in agents:
            for adv in adversaries:
                if self.is_collision(ag, adv):
                    rew += 10
    return rew

should be (note that the collision check must use agent, the adversary whose reward is being computed, rather than an inner loop over all adversaries):

    if agent.collide:
        for ag in agents:
            if self.is_collision(ag, agent):
                rew += 10
    return rew

With this fix, each adversary in simple_tag.py gets its own individual reward under MADDPG; a consolidated version of the function is sketched below.
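
For reference, here is a consolidated sketch of adversary_reward() with the proposed fix folded in. The optional shaping block is left exactly as in the repository; this is simply the reporter's patch applied to the original code, not an official change.

def adversary_reward(self, agent, world):
    # Adversaries are rewarded for collisions with good agents
    rew = 0
    shape = False
    agents = self.good_agents(world)
    adversaries = self.adversaries(world)
    if shape:  # reward can optionally be shaped (decreased reward for increased distance from agents)
        for adv in adversaries:
            rew -= 0.1 * min([np.sqrt(np.sum(np.square(a.state.p_pos - adv.state.p_pos))) for a in agents])
    if agent.collide:
        for ag in agents:
            # count only collisions involving this adversary, not all adversaries
            if self.is_collision(ag, agent):
                rew += 10
    return rew
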
Thanks.

Partial observation function?

I wonder whether the framework supports partial observation for agents; for example, each agent can only see within 1 mile around itself. A sketch of one way to do this follows.
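
The stock scenarios are fully observable, but since each scenario defines its own observation() function, a limited field of view can be added there. Below is a minimal, hypothetical sketch for a custom scenario: the obs_radius value and the zero-masking convention are assumptions for illustration, not part of the repository.

import numpy as np

def observation(self, agent, world):
    # Hypothetical partially observable variant of a scenario's observation():
    # entities farther than obs_radius are masked with zeros instead of
    # reporting their true relative positions.
    obs_radius = 1.0  # assumed visibility radius, in world units

    def masked_delta(entity):
        delta = entity.state.p_pos - agent.state.p_pos
        if np.sqrt(np.sum(np.square(delta))) <= obs_radius:
            return delta
        return np.zeros(world.dim_p)  # out of range: observe nothing

    entity_pos = [masked_delta(lm) for lm in world.landmarks]
    other_pos = [masked_delta(a) for a in world.agents if a is not agent]
    return np.concatenate([agent.state.p_vel] + entity_pos + other_pos)
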

couldn't run interactive.py

Traceback (most recent call last):
  File "C:/Users/gajam/multiagent-particle-envs/bin/interactive.py", line 36, in <module>
    env.render()
  File "C:\Users\gajam\multiagent-particle-envs\bin\..\multiagent\environment.py", line 261, in render
    results.append(self.viewers[i].render(return_rgb_array = mode=='rgb_array'))
  File "C:\Users\gajam\multiagent-particle-envs\bin\..\multiagent\rendering.py", line 110, in render
    self.window.flip()
  File "C:\Users\gajam\Anaconda3\lib\site-packages\pyglet\window\win32\__init__.py", line 309, in flip
    self.context.flip()
AttributeError: 'NoneType' object has no attribute 'flip'

I changed the gym version, but I still have this problem. Can anybody tell me what I should change?

Image observation

Is there a way to receive the entire rendered screen as the observation for each agent, without actually drawing anything on the physical screen? A sketch using the existing rgb_array render mode follows.
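
Partly, yes: as the tracebacks above show, environment.py's render() already accepts mode='rgb_array' (results.append(self.viewers[i].render(return_rgb_array = mode=='rgb_array'))), so it can return pixel arrays. A minimal sketch is below; note that the stock renderer still opens a pyglet window, so fully off-screen capture usually requires a virtual display (e.g. running under xvfb), which is an assumption about your setup rather than something the repo provides.

from make_env import make_env

env = make_env('simple')
env.reset()
# Returns a list of RGB arrays, one per viewer
frames = env.render(mode='rgb_array')
print(len(frames), frames[0].shape)  # e.g. 1, (height, width, 3)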
