
necto's Introduction

Necto

Nexto Teamplay

What is this?

This is Necto, our community machine learning Rocket League bot. It has learned to play 1's, 2's, and 3's thanks to RLGym. Our end goal is to make a version that can take down pros!

So far, we've made 3 versions:
V1: Necto - Around Diamond level.
V2: Nexto - Approximately Grand Champion 1 level in 1v1, 2v2 and 3v3 (top 0.12%, 0.95%, 0.46% of the playerbase respectively)
V3: Tecko - Canceled due to lack of improvement.
V4: N/A - Moved to new project.


How does it work?

These bots are trained with Deep Reinforcement Learning, a type of Machine Learning. We have several games playing at super speed behind the scenes while the data is collected and learned from. We ingest these games using a custom-built distributed learning system.
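A minimal sketch of that idea (not the project's actual rocket_learn/Redis setup referenced in the issues below; every name here is illustrative): worker processes simulate games as fast as they can and push finished rollouts into a shared queue, while a single learner consumes them to update the policy.

import multiprocessing as mp
import random

# Illustrative worker/learner split; names and the queue-based transport are
# placeholders, not the project's actual rocket_learn/Redis implementation.

def worker(rollout_queue):
    """Simulate games as fast as possible and ship finished rollouts to the learner."""
    while True:
        rollout = [("obs", "action", random.random()) for _ in range(100)]  # fake episode data
        rollout_queue.put(rollout)

def learner(rollout_queue):
    """Consume rollouts from every worker and (conceptually) update the policy."""
    while True:
        rollout = rollout_queue.get()
        # ... compute gradients from the rollout and update the policy weights ...

if __name__ == "__main__":
    queue = mp.Queue(maxsize=64)
    workers = [mp.Process(target=worker, args=(queue,), daemon=True) for _ in range(4)]
    for w in workers:
        w.start()
    learner(queue)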

We define rewards that the bot tries to achieve. Over time, behavior that leads to more reward gets reinforced, which leads to better Rocket League play.
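As a rough illustration only (these weights and terms are made up, not Necto's actual reward function), a per-step reward might combine a sparse goal bonus with dense shaping terms such as ball touches and velocity toward the ball:

import numpy as np

# Made-up weights and terms for illustration; NOT Necto's actual reward function.
GOAL_WEIGHT = 10.0   # sparse: large bonus when the bot's team scores
TOUCH_WEIGHT = 0.5   # dense: small bonus for touching the ball
VEL_WEIGHT = 0.1     # dense: reward for driving toward the ball

def step_reward(scored, touched_ball, car_vel, car_pos, ball_pos):
    """Combine sparse and dense terms into one scalar reward per step."""
    reward = 0.0
    if scored:
        reward += GOAL_WEIGHT
    if touched_ball:
        reward += TOUCH_WEIGHT
    # Shaping term: projection of car velocity onto the direction of the ball,
    # scaled by the car's top speed (2300 uu/s) so it stays roughly in [-1, 1].
    to_ball = np.asarray(ball_pos, dtype=float) - np.asarray(car_pos, dtype=float)
    dist = np.linalg.norm(to_ball)
    if dist > 0:
        reward += VEL_WEIGHT * float(np.dot(car_vel, to_ball / dist)) / 2300.0
    return reward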

Can I play against it?

Yup! Download the RLBot pack; Nexto and Necto are among the bots you can play against. Make sure FPS is set to 120 and VSync is turned off.

Can I watch it learn?

We are not currently training it, but Necto/Nexto/Tecko were shown on our Twitch stream, which now streams other bots training.

Graphs are also available for our fellow nerds.

Could it learn by looking at Pro/SSL replays?

Yes! Inspired by Video PreTraining, we can now use replay files to learn from human gameplay, letting the bot see years of play before setting a wheel on the field. Occasionally, we showcase its progress on our Twitch stream. In the future, we plan to follow up this "jumpstarted" training with live reinforcement learning.

Here's a repository containing the code and an explanation.
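As a rough sketch of the idea rather than the actual code in that repository, learning from replays amounts to behavioral cloning: parse replays into (observation, action) pairs and train the policy with a supervised loss. The tensors, sizes, and network below are placeholders.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: in practice the (observation, action) pairs would come from
# parsed replay files; the sizes (107-dim obs, 90 discrete actions) are made up.
obs = torch.randn(10000, 107)
actions = torch.randint(0, 90, (10000,))

policy = nn.Sequential(nn.Linear(107, 256), nn.ReLU(), nn.Linear(256, 90))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

loader = DataLoader(TensorDataset(obs, actions), batch_size=256, shuffle=True)
for epoch in range(10):
    for batch_obs, batch_act in loader:
        logits = policy(batch_obs)
        loss = loss_fn(logits, batch_act)  # imitate the action the human chose
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()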

Could it learn by playing against me?

In theory it could; in practice, however, the rate at which we can collect data by pitting bots against each other at high speed would be very hard to match with humans (we'd need hundreds of people playing at once). The humans would also ideally be of roughly equal skill to the bot at all points.

Can I donate my compute to help it learn faster?

We're currently not accepting compute donations but thanks for your interest!

What is Nexto+?

Nexto+ is a secret post-training upgrade to Nexto that increases its already impressive skill. It is not available for play but may make appearances in future RLBot tournaments.

What is Toxic Nexto?

Toxic Nexto is a version of Nexto at the same skill level that provides the authentic Rocket League experience of harsh words and bad vibes. It's equally mean to its opponents and its teammates.

It can be played against in the RLBot pack.

necto's People

Contributors

danieldowns, lucas-emery, pizzalord22, rolv-arild


necto's Issues

ModuleNotFoundError: No module named 'multi_stage_discrete_policy'

File "C:\Users\example\AppData\Local\necto2\venv\lib\site-packages\rocket_learn\rollout_generator\redis\utils.py", line 60, in _unserialize_model
agent = pickle.loads(buf)
ModuleNotFoundError: No module named 'multi_stage_discrete_policy'
Press any key to continue . . .

Errors while running with RLBot (could not broadcast input array...)

  • Installed RLBot today [ 2 / 16 / 2022 ]
  • Installed the default python version they recommended [ 3.7.9 ]
  • Installed required packages for Necto through the RLBotGUI
  • (noticed I needed node) Installed node from nodejs.org
  • Tried adding the bot to a 2v1 match; bots don't properly run... there are errors in the RLBot debug window

Traceback: (screenshot attached to the issue)

The bot doesn't play the game

I installed the bot and everything seems alright: RL runs and a private match starts. The issue is that Nexto/Necto doesn't play; it only drives straight and ignores the ball. Something must be wrong with the training, it just seems dumb. Does anyone know what is going on? Thanks!!

Pre-normalization question

Why does the demo timer assume 120 fps?

The observation builder has a comment in front of the demo timer (Pre-normalized, 120 fps for 10 seconds). However, if the game speed is set to 100, the rlgym documentation vaguely states that the game will run at 240 fps, so why are we assuming 120 fps here?

Observation builder file
self.boost_timers -= self.tick_skip / 1200 # Pre-normalized, 120 fps for 10 seconds

RLGym reference
:param game_speed: The speed the physics will run at, leave it at 100 unless your game can't run at over 240fps
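For reference, the arithmetic behind that comment, as a sketch (assuming a physics rate of 120 ticks per second and a 10-second timer; the tick_skip value is hypothetical):

TICKS_PER_SECOND = 120                            # the rate the code comment assumes
TIMER_SECONDS = 10                                # length of the timer being normalized
TOTAL_TICKS = TICKS_PER_SECOND * TIMER_SECONDS    # = 1200, the divisor in the observation builder line
tick_skip = 8                                     # hypothetical ticks elapsed per agent step
per_step_decay = tick_skip / TOTAL_TICKS          # fraction of the normalized timer used per step
steps_to_zero = TOTAL_TICKS / tick_skip           # = 150 steps, i.e. 10 in-game seconds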

Unsolvable error

Hello, I have the RLBot zip file and I cannot use Necto. Every time I launch it, it says:
ERROR: Could not find a version that satisfies the requirement torch==1.9.1+cpu (from versions: none)
ERROR: No matching distribution found for torch==1.9.1+cpu
Encountered exception: No module named 'rlbot'

I do not know how to solve this, as I have tried many ways to fix it (such as downloading torch manually, placing the RLBot file inside the Necto file, etc.).

Any help is appreciated.

[IDEA/Question] How much thought went into emulating RL with a barebones model?

I don't want to sound presumptuous because I'm far from an expert, but wouldn't a physics emulation without graphics allow simultaneous training of, say, 10,000 games at once instead of around 50?

The AI wouldn't work right away, but with a decent emulation wouldn't the bot have a huge head start after a minor adaptation phase?

Would this be of interest if I decide to pursue this further?

Having difficulties with the training

I've been trying to train a bot using the training folder in this repository, but I'm not sure if I'm the only one having issues when trying to train it with Redis and wandb from Docker.
I also tried the rocket_learn examples and used "mklink /d", but I'm having difficulties setting it up.

Could you tell me the steps required to train a bot with the starter files?

First installation from branch human_addition

I installed from the human_addition branch, and after running runDistributedWorker I get an error from rocket_learn:

Traceback (most recent call last):
File "worker.py", line 14, in
from rocket_learn.rollout_generator.redis_rollout_generator import RedisRolloutWorker, _unserialize
ModuleNotFoundError: No module named 'rocket_learn.rollout_generator.redis_rollout_generator'

What version of which rocket_learn branch needs to be installed for this to work? After that, there are a couple more errors (tuple-related ones from rocket_learn, etc.) that crash when a match starts.

I installed rocket_learn with this line:
python -m pip install -U git+https://github.com/Rolv-Arild/rocket-learn.git

password and IP for Necto training

To start computing:

  • clone the repo
  • start cmd as admin
  • create a symlink from repo\training\training to repo\training (mklink /D path\to\repo\training\training path\to\repo\training)
  • close cmd
  • open repo\training in Windows Explorer or a non-admin cmd
  • run runDistributedWorker.bat
  • enter your name + server IP + server password
  • when asked for the IP and password, DM Soren and he'll fill you in.

Can I get the password and IP, or a copy of the Redis database, @DanielDowns?
