kinds-of-intelligence-cfi / animal-ai

Animal-AI supports interdisciplinary research to help better understand human, animal, and artificial cognition.

Home Page: https://sites.google.com/csah.cam.ac.uk/animalai/

License: Apache License 2.0

artificial-intelligence animal animal-behaviour deep-reinforcement-learning machine-learning unity ml-agents game-development

animal-ai's People

Contributors

aidan-curtis, alhasacademy96, benaslater, chaubeyniha, heredone, johnburden, kozzy97, mdcrosby, shenweizhou, thanksphil, wschella


animal-ai's Issues

New release(s) cannot render old config file

The following configuration file, found here (it is very large, so I haven't included it directly in this issue), no longer works with any of the version 3.1.x builds. It was built with v3.0.1 and we used it in a previous study. The objects start erupting upwards and destroying my carefully crafted design 😢 . I am running the Windows builds and interacting through examples/play.py.

Unable to use VecFrameStack

StableBaselines allows you to stack multiple environments into one single vectorized environment. However, it seems that Animal-AI does not allow multiple environments to be open at once. This limits the types of agents one can write from scratch.
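For context, the core idea behind frame stacking is simple: keep the last `n` observations and present them together as one observation. The sketch below shows that idea for a single environment in plain Python (the real `VecFrameStack` additionally stacks along the channel axis across vectorized environments); all names here are illustrative, not AnimalAI or StableBaselines API.

```python
from collections import deque

class FrameStacker:
    """Minimal sketch of frame stacking for one environment."""

    def __init__(self, n_frames):
        # deque with maxlen drops the oldest frame automatically
        self.frames = deque(maxlen=n_frames)

    def reset(self, first_obs):
        # Fill the buffer with copies of the first observation.
        for _ in range(self.frames.maxlen):
            self.frames.append(first_obs)
        return list(self.frames)

    def step(self, obs):
        # Append the newest frame; the oldest is evicted.
        self.frames.append(obs)
        return list(self.frames)

stacker = FrameStacker(n_frames=4)
print(stacker.reset("obs0"))  # ['obs0', 'obs0', 'obs0', 'obs0']
print(stacker.step("obs1"))   # ['obs0', 'obs0', 'obs0', 'obs1']
```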

Close and Restart AAI for multiple yaml files

The following error arises if you want one script to sequentially test agents on various yaml files.

mlagents_envs.exception.UnityWorkerInUseException: Couldn't start socket communication because worker number 0 is still in use. You may need to manually close a previously opened environment or use a different worker number.
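As the error message suggests, two things usually help: close each environment before opening the next, and give each run a fresh `worker_id`. The sketch below shows that pattern; `FakeEnv` is a stand-in used only to make the example self-contained, and the real `AnimalAIEnvironment` accepts similar arguments via ml-agents (check the signature of the version you have).

```python
class FakeEnv:
    """Stand-in for AnimalAIEnvironment that mimics the worker check."""
    open_workers = set()

    def __init__(self, config, worker_id=0):
        if worker_id in FakeEnv.open_workers:
            raise RuntimeError(f"worker number {worker_id} is still in use")
        self.worker_id = worker_id
        self.config = config
        FakeEnv.open_workers.add(worker_id)

    def close(self):
        FakeEnv.open_workers.discard(self.worker_id)

def run_all(configs):
    """Test sequentially on several config files, one environment at a time."""
    results = []
    for i, config in enumerate(configs):
        env = FakeEnv(config, worker_id=i)  # fresh worker id per run
        try:
            results.append(config)          # ... train/evaluate here ...
        finally:
            env.close()                     # always release the worker
    return results

print(run_all(["a.yml", "b.yml"]))  # ['a.yml', 'b.yml']
```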

Edge of Screen Gradient Colour Overlay

When the user collects a yellow ball/reward, the edges of the screen show a green overlay. This could also be enabled when training an agent, as the overlay has no effect on training (the agent does not see the edges of the screen flash green).

Toggle on/off? - to be discussed.

Configurable Episode-Ending Notification for Human Testing via YAML

When we are using Animal-AI for human testing, at the moment there is no indication of whether the user passed or failed an episode. In the web-GL version here, a green 'PASS' plus a happy cat emoji appears upon success, and a red 'FAIL' plus a crying cat emoji appears upon failure.

Is there a way to add an option to have these present or absent when calling AAIEnvironment?

Getting the environment for Mac

I was wondering whether there is something wrong with the environment zip for Mac. When I try to download it the link does not work. When I navigate to the website the link points to and download it, I get an app file that my Mac cannot open nor unzip (running Ventura 13.0).

Remove .egg from version control.

As far as I know, this should not be committed to version control. It might also be interfering with normal collection of sources. As far as I know, we don't do anything special, and everything should be possible to specify in setup.py.

Yaml Helper/Visualiser Script

The motivation for this tool was to speed up the process of writing configuration files. Before this tool, if two items were overlapping in a configuration and the Animal-AI engine was launched with said configuration, the user would not get a summary of the overlap/clash. They would then have to nudge the positions of the problematic items repeatedly until the items fit perfectly when running the engine, which would take a lot of time.

The original aims of this tool are to:

  • Give the user a summary of the items that are overlapping, and how the overlap can be overcome
  • Give the user a lightweight 2d top-view visualisation of their configuration arena (for quick debugging, as opposed to loading up Unity which can be time-consuming, especially if done iteratively)
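The first aim can be sketched without loading the engine at all: treat each item as an axis-aligned box on the arena floor (x/z plane), given its centre position and size, and report clashing pairs. This is a hypothetical illustration of the check, not the tool's actual code.

```python
def boxes_overlap(pos_a, size_a, pos_b, size_b):
    """True if two centre/size rectangles overlap on the x/z plane."""
    for axis in ("x", "z"):
        half = (size_a[axis] + size_b[axis]) / 2
        if abs(pos_a[axis] - pos_b[axis]) >= half:
            return False  # separated along this axis -> no overlap
    return True

def find_clashes(items):
    """items: list of (name, position, size) tuples -> clashing name pairs."""
    clashes = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            name_a, pos_a, size_a = items[i]
            name_b, pos_b, size_b = items[j]
            if boxes_overlap(pos_a, size_a, pos_b, size_b):
                clashes.append((name_a, name_b))
    return clashes

items = [
    ("Wall",     {"x": 20, "z": 20}, {"x": 10, "z": 2}),
    ("GoodGoal", {"x": 22, "z": 20}, {"x": 2,  "z": 2}),
    ("BadGoal",  {"x": 5,  "z": 5},  {"x": 2,  "z": 2}),
]
print(find_clashes(items))  # [('Wall', 'GoodGoal')]
```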

Add Multiple Cameras for Observation During Training

While trying to fix the bug of arenas not cycling properly during training, I realized that it would be very beneficial to be able to view the arena from all three camera perspectives, as overlays. The agent would not be affected by this change during training, but it would really help with debugging and potentially enable more insightful research: currently the camera is fixed at the agent's perspective, which makes it harder to test certain purposes and situations.

This setting could also be toggled on/off by the user via the yaml file or even the command line, such as:

import attr

@attr.s
class AAIOptions:
    """The options used by the AnimalAI environment."""
    arenaConfig: str = attr.ib()    # path to a valid arena config YAML file
    useCamera: bool = attr.ib()     # if true, camera observations are returned
    resolution: int = attr.ib()     # the (square) resolution of camera observations (if useCamera is true)
    grayscale: bool = attr.ib()     # whether camera observations are grayscale or RGB
    useRayCasts: bool = attr.ib()   # if true, raycast observations are returned
    raysPerSide: int = attr.ib()    # number of rays on each side of the central ray (see observations doc)
    rayMaxDegrees: int = attr.ib()  # degrees between the central ray and the furthest ray in each direction
    enableMultiObs: bool = attr.ib(default=False)  # if true, display all cameras on the UI during training

Open to discussion
[Mid-to-high priority]

Incompatible with RLLib

The Animal-AI environment is incompatible with RLlib because the gym wrapper returns observations as a tuple, whereas RLlib needs them to be an object.
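One possible workaround (a sketch only, not a tested fix) is a thin wrapper that repackages the tuple observation as a dict, which is closer to what RLlib expects from structured observation spaces. `TupleObsEnv` below is a stand-in for the AnimalAI gym wrapper, and the key names are invented for illustration.

```python
class TupleObsEnv:
    """Stand-in for an env whose reset() returns a tuple observation."""
    def reset(self):
        return ([0.0] * 4, [1.0] * 2)  # e.g. (camera, raycasts)

class DictObsWrapper:
    """Repackage tuple observations as a dict keyed by sensor name."""
    def __init__(self, env, keys=("camera", "raycasts")):
        self.env = env
        self.keys = keys

    def reset(self):
        obs = self.env.reset()
        return dict(zip(self.keys, obs))

env = DictObsWrapper(TupleObsEnv())
print(env.reset())  # {'camera': [0.0, 0.0, 0.0, 0.0], 'raycasts': [1.0, 1.0]}
```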

Update mlagents to 2.3.0-exp FROM 2.1.0-exp

For the best compatibility between the Unity side and mlagents, both need to be aligned per the mlagents guidelines: https://github.com/Unity-Technologies/ml-agents/blob/main/docs/Installation.md

TL;DR: the mlagents documentation states that the minimum supported Unity editor version is 2021.x. This conflicts with our current Unity editor version (2020.x). Therefore, to get the best support and experience when using mlagents, both the Unity editor and the mlagents package should be upgraded to at least those minimum versions.

Environment download needs to be renamed.

It took me a bit to figure this out, but (at least for the Linux version) you have to rename the environment executable and environment data folder to "AnimalAI.x86_64" and "AnimalAI_Data" in order for play.py to recognize them. Additionally, when you extract the folder, you still have to move the files inside the sub-directory to its parent folder. The instructions fail to mention this, making it difficult to run an example.

Update Documentation [High Level]

We should update the current documentation to factor in the latest changes to AAI at a high level. The low-level documentation is to be completely reworked from the ground up to offer the most detailed and up-to-date information on AAI (potentially during the merge of the two repos into a single, combined central AAI repo).

[High Priority]

Configurable Camera Positions and Use of 'n' button for human play via YAML

In the Python API in "player mode", it would be great to have an option to call the AAIEnvironment class with certain features disabled. At the moment, you can press 'c' to change the camera view, and press 'n' and/or 'r' to skip to the next episode or reset. When using this for human testing, we would like to be able to disable these features, selecting only one camera view and disabling the skips. At the moment, we are getting round this by using a different build with those features disabled inside the C# scripts, but being able to do this through the API would be great.

Unity Environment Exception / API Incompatibility

Hi

I got the following error with a clean install of the latest version. It happens at the end of the episode. I think I heard something about this before, but just wanted to check if this is expected.

Traceback (most recent call last):
  File "/media/Data/PhD/Code/dreamerv3-animalai/aaitest.py", line 48, in <module>
    load_config_and_play(configuration_file=config)
  File "/media/Data/PhD/Code/dreamerv3-animalai/aaitest.py", line 19, in load_config_and_play
    environment = AnimalAIEnvironment(
  File "/media/Data/PhD/Code/dreamerv3-animalai/.env/lib/python3.10/site-packages/animalai/envs/environment.py", line 56, in __init__
    super().__init__(
  File "/media/Data/PhD/Code/dreamerv3-animalai/.env/lib/python3.10/site-packages/mlagents_envs/environment.py", line 151, in __init__
    raise UnityEnvironmentException(
mlagents_envs.exception.UnityEnvironmentException: The communication API version is not compatible between Unity and python. Python API: 0.15.0, Unity API: 1.5.0.
 Please go to https://github.com/Unity-Technologies/ml-agents/releases/tag/latest_release to download the latest version of ML-Agents.

Versions

animalai=2.0.0
animalai-environment=3.0.2

Configurable Agent/Wall/Floor/Skybox Skins

-------[Moved from unity-ai-repo]

It would be fantastic for the V3.0 Paper and Release to be able to configure the walls, floor, and roof of the arena to be different colours and/or textures. At the moment, you have to create walls around the edges and on the floor and roof in order to do this, which is fiddly and takes up space in the arena.

I'm imagining that the config would look something like this:

!ArenaConfig
arenas:
  0: !Arena
    pass_mark: 10
    t: 100
    wall_texture:
    - "picket-fence"
    floor_texture:
    - "white-noise"
    roof_texture:
    - "forest-scene"

Alternatively, you could pass RGB vectors to these objects and they would be solid blocks of that colour.
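For the solid-colour alternative, the config could reuse the existing `!RGB` tag; the `wall_color`/`floor_color` keys below are hypothetical, following the same imagined scheme as the texture example above:

```yaml
!ArenaConfig
arenas:
  0: !Arena
    pass_mark: 10
    t: 100
    wall_color: !RGB {r: 153, g: 153, b: 153}
    floor_color: !RGB {r: 255, g: 255, b: 255}
```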

This would be very useful for building simple tests for simple agents (the picket fences are quite complicated). Having a range of skins (e.g., white noise, forest-scene, city-scape, etc.) could also be handy for building a narrative for human testing (e.g., "you are about to play a game where you will be an animal in a forest hunting for tasty apples...")

Multiple Arenas in a single config

There is a problem with the experimental build when there are multiple arenas in a single file. I have tested the following config:

!ArenaConfig
arenas:
  0: !Arena
    pass_mark: 1
    t: 500
    items:
    - !Item
      name: Agent
      positions:
      - !Vector3 {x: 20, y: 0, z: 20}
      rotations: [0]
    - !Item
      name: GoodGoal
      positions:
      - !Vector3 {x: 20, y: 0, z: 24}
      sizes:
      - !Vector3 { x: 2, y: 2, z: 2 }
  1: !Arena
    pass_mark: 1
    t: 500
    items:
    - !Item
      name: Agent
      positions:
      - !Vector3 {x: 20, y: 0, z: 20}
      rotations: [0]
    - !Item
      name: GoodGoalMulti
      positions:
      - !Vector3 {x: 20, y: 0, z: 24}
      sizes:
      - !Vector3 { x: 2, y: 2, z: 2 }
  2: !Arena
    pass_mark: 1
    t: 500
    items:
    - !Item
      name: Agent
      positions:
      - !Vector3 {x: 20, y: 0, z: 20}
      rotations: [0]
    - !Item
      name: BadGoal
      positions:
      - !Vector3 {x: 20, y: 0, z: 24}
      sizes:
      - !Vector3 { x: 2, y: 2, z: 2 }

I get only the GoodGoalMulti and BadGoal instances, not the GoodGoal instance when I run play.py with v3.1.2.exp. It works with the current stable build v3.1.1.

Trigger Data "Zones"

Have the arena split into zones where each zone can be used to harvest data specific to that area in the arena.

Work in progress feature.

Update PyPI Package

It seems the PyPI package is severely outdated and needs a major update to accommodate the changes in AAI package/API dependencies. This will also fix the issue where the package has incompatible dependencies (I have encountered this myself, and it seems related to AAI's sub-dependencies), and prevent new users from running into these problems entirely.


[High Priority]

Arena with Randomize

I was trying to run Animal-AI on a yaml file which has three versions of an arena; the goal was to have it shuffle through the arenas randomly during training. I therefore used "-1: !Arena" to instantiate all three arenas. This did not work as expected and results in training only on the last arena.

!ArenaConfig
arenas:
  -1: !Arena
    pass_mark: 0
    t: 250
    items:
    - !Item
      name: Agent
    - !Item
      name: GoodGoal
    - !Item
      name: Wall
      positions:
      - !Vector3 {x: 0.1, y: 0, z: 20}
      - !Vector3 {x: 39.9, y: 0, z: 20}
      - !Vector3 {x: 20, y: 0, z: 0.1}
      - !Vector3 {x: 20, y: 0, z: 39.9}
      rotations: [0, 0, 0, 0]
      sizes:
      - !Vector3 {x: 0.2, y: 10, z: 40}
      - !Vector3 {x: 0.2, y: 10, z: 40}
      - !Vector3 {x: 39.6, y: 10, z: 0.2}
      - !Vector3 {x: 39.6, y: 10, z: 0.2}
      colors: #list
      - !RGB {r: 0, g: 0, b: 255}
      - !RGB {r: 0, g: 0, b: 255}
      - !RGB {r: 0, g: 0, b: 255}
      - !RGB {r: 0, g: 0, b: 255}
  -1: !Arena
    pass_mark: 0
    t: 250
    items:
    - !Item
      name: Agent
    - !Item
      name: GoodGoal
    - !Item
      name: Wall
      positions:
      - !Vector3 {x: 0.1, y: 0, z: 20}
      - !Vector3 {x: 39.9, y: 0, z: 20}
      - !Vector3 {x: 20, y: 0, z: 0.1}
      - !Vector3 {x: 20, y: 0, z: 39.9}
      rotations: [0, 0, 0, 0]
      sizes:
      - !Vector3 {x: 0.2, y: 10, z: 40}
      - !Vector3 {x: 0.2, y: 10, z: 40}
      - !Vector3 {x: 39.6, y: 10, z: 0.2}
      - !Vector3 {x: 39.6, y: 10, z: 0.2}
      colors: #list
      - !RGB {r: 255, g: 0, b: 0}
      - !RGB {r: 255, g: 0, b: 0}
      - !RGB {r: 255, g: 0, b: 0}
      - !RGB {r: 255, g: 0, b: 0}
  -1: !Arena
    pass_mark: 0
    t: 250
    items:
    - !Item
      name: Agent
    - !Item
      name: GoodGoal
    - !Item
      name: Wall
      positions:
      - !Vector3 {x: 0.1, y: 0, z: 20}
      - !Vector3 {x: 39.9, y: 0, z: 20}
      - !Vector3 {x: 20, y: 0, z: 0.1}
      - !Vector3 {x: 20, y: 0, z: 39.9}
      rotations: [0, 0, 0, 0]
      sizes:
      - !Vector3 {x: 0.2, y: 10, z: 40}
      - !Vector3 {x: 0.2, y: 10, z: 40}
      - !Vector3 {x: 39.6, y: 10, z: 0.2}
      - !Vector3 {x: 39.6, y: 10, z: 0.2}
      colors: #list
      - !RGB {r: 255, g: 255, b: 255}
      - !RGB {r: 255, g: 255, b: 255}
      - !RGB {r: 255, g: 255, b: 255}
      - !RGB {r: 255, g: 255, b: 255}
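A likely explanation for only the last arena loading: a YAML mapping cannot contain duplicate keys, and most loaders silently keep the last value for a repeated key. Python dict literals behave the same way, which makes the effect easy to demonstrate:

```python
# Repeating the same key in a mapping keeps only the last value.
# This would explain why only the final `-1: !Arena` block survives.
arenas = {
    -1: "blue-walled arena",
    -1: "red-walled arena",
    -1: "white-walled arena",
}
print(arenas)       # {-1: 'white-walled arena'}
print(len(arenas))  # 1
```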

Simple Main Menu

A simple main menu system could be implemented for human testing. The menu could present the controls and settings of the platform so that the player can easily and quickly understand the core mechanics of the game (ideal for researchers outside our research cohort). Most games have a main menu (mine included :)), so it would be a nice feature to implement in future versions of the platform.

Sample menu items:

Play
Settings - optional, for future versions of the game. Can include a graphics setting.
Controls - a quick and helpful way of teaching the player the controls.
Objectives - what the purpose and objective of the game is.

Depending on the complexity of the code, this should be relatively straightforward to implement.

Priority - long-term.

Configurable Arena Size

-------[Moved from unity-ai-repo]

For the version paper & release, it would be good to be able to configure the size of the arena. The default can stay 40x40 as it is now, but perhaps it would be good to be able to go from, say, 4x4 up to 120x120 or so (maybe infinitely sized arenas for v4.0+?). It'll make spawning coordinates trickier for the user, as they'll have to bear in mind what size of arena they're working with, but otherwise it has many use cases. For example, when training agents it can be handy to start with a small arena, so that everything is close together and the agent is more likely to hit a reward by chance.

I'm imagining a config like this:

!ArenaConfig 
arenas:
  0: !Arena
    pass_mark: 0
    t: 100
    arena_dims: [40,40]
    items:
    - !Item
      name: Agent
      positions:
      - !Vector3 {x: -1, y: 0, z: -1}
    - !Item
      name: GoodGoalMulti
      sizes:
      - !Vector3 {x: 0.5, y: 0.5, z: 0.5}
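To illustrate the coordinate bookkeeping this would impose on users, here is a hypothetical helper (not part of any AAI API) that rescales an x/z position authored for the default 40x40 arena onto a custom `arena_dims`, keeping relative placement:

```python
DEFAULT_DIMS = (40, 40)  # current fixed arena size

def rescale_position(pos, arena_dims, ref_dims=DEFAULT_DIMS):
    """pos: {'x': ..., 'y': ..., 'z': ...} authored against ref_dims."""
    return {
        "x": pos["x"] * arena_dims[0] / ref_dims[0],
        "y": pos["y"],  # height is unaffected by floor size
        "z": pos["z"] * arena_dims[1] / ref_dims[1],
    }

print(rescale_position({"x": 20, "y": 0, "z": 10}, arena_dims=(120, 120)))
# {'x': 60.0, 'y': 0, 'z': 30.0}
```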

Failing to run examples/gymwrapper.py

I would like to test whether I can use the environment as a gym environment with an RL agent. I am running it on a server and would like the environment to have no graphics. For this I am running the script examples/gymwrapper.py, having added no_graphics=True when calling AnimalAIEnvironment(). However, I must be doing something wrong, because the script gets stuck and then times out with this message:

[UnityMemory] Configuration Parameters - Can be set up in boot.config "memorysetup-bucket-allocator-granularity=16" "memorysetup-bucket-allocator-bucket-count=8" "memorysetup-bucket-allocator-block-size=4194304" "memorysetup-bucket-allocator-block-count=1" "memorysetup-main-allocator-block-size=16777216" "memorysetup-thread-allocator-block-size=16777216" "memorysetup-gfx-main-allocator-block-size=16777216" "memorysetup-gfx-thread-allocator-block-size=16777216" "memorysetup-cache-allocator-block-size=4194304" "memorysetup-typetree-allocator-block-size=2097152" "memorysetup-profiler-bucket-allocator-granularity=16" "memorysetup-profiler-bucket-allocator-bucket-count=8" "memorysetup-profiler-bucket-allocator-block-size=4194304" "memorysetup-profiler-bucket-allocator-block-count=1" "memorysetup-profiler-allocator-block-size=16777216" "memorysetup-profiler-editor-allocator-block-size=1048576" "memorysetup-temp-allocator-size-main=4194304" "memorysetup-job-temp-allocator-block-size=2097152" "memorysetup-job-temp-allocator-block-size-background=1048576" "memorysetup-job-temp-allocator-reduction-small-platforms=262144" "memorysetup-temp-allocator-size-background-worker=32768" "memorysetup-temp-allocator-size-job-worker=262144" "memorysetup-temp-allocator-size-preload-manager=262144" "memorysetup-temp-allocator-size-nav-mesh-worker=65536" "memorysetup-temp-allocator-size-audio-worker=65536" "memorysetup-temp-allocator-size-cloud-worker=32768" "memorysetup-temp-allocator-size-gfx=262144"
INFO:mlagents_envs:Environment timed out shutting down. Killing...

Traceback (most recent call last):
  File "/home/eleni/eleni/workspace/playground/GrowAIenvs/animal-ai/animalai/../examples/gymwrapper.py", line 62, in <module>
    train_agent_single_config(configuration_file=configuration_file)
  File "/home/eleni/eleni/workspace/playground/GrowAIenvs/animal-ai/animalai/../examples/gymwrapper.py", line 15, in train_agent_single_config
    aai_env = AnimalAIEnvironment(
  File "/home/eleni/eleni/workspace/playground/GrowAIenvs/animal-ai/animalai/./animalai/envs/environment.py", line 145, in __init__
    super().__init__(
  File "/home/eleni/anaconda3/envs/animalAIp10/lib/python3.10/site-packages/mlagents_envs/environment.py", line 142, in __init__
    aca_output = self.send_academy_parameters(rl_init_parameters_in)
  File "/home/eleni/anaconda3/envs/animalAIp10/lib/python3.10/site-packages/mlagents_envs/environment.py", line 549, in send_academy_parameters
    return self.communicator.initialize(inputs)
  File "/home/eleni/anaconda3/envs/animalAIp10/lib/python3.10/site-packages/mlagents_envs/rpc_communicator.py", line 98, in initialize
    self.poll_for_timeout()
  File "/home/eleni/anaconda3/envs/animalAIp10/lib/python3.10/site-packages/mlagents_envs/rpc_communicator.py", line 90, in poll_for_timeout
    raise UnityTimeOutException(
mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that:
  The environment does not need user interaction to launch
  The Agents are linked to the appropriate Brains
  The environment and the Python interface have compatible versions.

Update GIFs in Readme

The GIFs need updating with the new environment, and it might be worth adding new examples for a fresher look.

AnimalAI Multi Arena Episodes

AAI’s episodic structure is currently tied to arenas: agents spawn into an arena and interact with it until it completes (e.g. because the goal is reached, timeout, etc.). Then the agent is reset and the next arena is loaded.

Animals do not interact with their environment in an episodic way:

  • Learning happens continuously
  • Animals remember events from arbitrarily far in the past and use this knowledge to shape their behaviour
  • Context switches are not well defined and instead happen gradually or imperceptibly

Tests in Cognitive Science can make use of these features. For example an episodic memory task may involve allowing an agent to learn a route through a maze to a goal, and then presenting it with the same maze but with a path blocked (Sara and Seraphina are working on tests of this type in babies and AI, respectively).

In AAI currently the only way an agent can learn about the structure of the arena and use that knowledge elsewhere is through training on that arena. This is a problem because training can destroy previous capabilities (catastrophic forgetting), and so we couldn’t analyse a “generalist agent’s” ability to learn the structure of new mazes in this way.

A more valid way to explore this would be to have an agent learn about the structure of the maze in context (i.e. during an episode) and then have that episode continue when the maze is switched to a new configuration.

To allow this, users should be able to specify that multiple arenas can be grouped together into one episode.

Agent Interaction for Reward

The agent would need to interact with its environment in order to manipulate reward spawning. Like the reward 'centres' in the environment, where a reward is spawned at specified timed intervals, a similar scenario would be to spawn a reward every time the agent triggers a condition (such as touching a button or standing close to a trigger field).

Priority: High
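The trigger mechanic described above can be sketched in a few lines. Everything here (the circular trigger field, the once-per-entry rule) is an illustrative assumption, not engine behaviour:

```python
def in_trigger(agent_pos, trigger_centre, radius):
    """Is the agent inside a circular trigger field on the x/z plane?"""
    dx = agent_pos["x"] - trigger_centre["x"]
    dz = agent_pos["z"] - trigger_centre["z"]
    return dx * dx + dz * dz <= radius * radius

def count_reward_spawns(agent_path, trigger_centre, radius):
    """Count spawns as the agent moves: one spawn per entry into the field."""
    spawned = 0
    inside = False
    for pos in agent_path:
        now_inside = in_trigger(pos, trigger_centre, radius)
        if now_inside and not inside:
            spawned += 1  # spawn once per entry, not once per frame inside
        inside = now_inside
    return spawned

path = [{"x": 0, "z": 0}, {"x": 5, "z": 5}, {"x": 5.5, "z": 5},
        {"x": 0, "z": 0}, {"x": 5, "z": 5}]
print(count_reward_spawns(path, {"x": 5, "z": 5}, radius=1))  # 2
```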

Sign Poster Boards Not Rendering

Hi,

I was trying to build some experiments with the SignPosterBoard object. The following configuration does not seem to render. I have tried with v3.1.1 and v3.1.2.exp. The only build that works is v3.0.1.

The configuration is this:

!ArenaConfig
arenas:
  0: !Arena
    pass_mark: 0
    t: 250
    items:
    - !Item
      name: Agent 
      positions: 
      - !Vector3 {x: 20, y: 0, z: 10} 
      rotations: [0]
      skins:
      - "panda"
    - !Item 
      name: SignPosterboard
      positions:
      - !Vector3 {x: 10, y: 0, z: 32}
      rotations: [270]

I've tried with a range of positions in all 3 dimensions for the signposterboard, with no luck.

Taking action in env

In the AnimalAI environment, when you want to take a step, the action has to be passed as action.item(). This may be an issue with AnimalAI, or it could be due to the way StableBaselines stores the model's predictions.

while not done:
    action, _states = model.predict(obs)
    obs, rewards, done, info = env.step(action.item())
    env.render()
