deepsaber (forked from oxai/deepsaber)

A deep learning approach to generating Beat Saber levels

License: GNU General Public License v3.0



Google Colab (lets you generate levels in the browser!): https://colab.research.google.com/drive/1a-wN-f7xXjpaqq3tKpR29rtkc-kabo3b#scrollTo=YvhabvameUo4

Join our Discord here! https://discord.gg/T6djf8N

Welcome to the readme for DeepSaber, an automatic generator of Beat Saber levels. There is a lot of stuff here, the fruit of a lot of work by the team at OxAI Labs. Contact me at guillermo.valle at oxai.org, or on Twitter (@guillefix), with any questions/suggestions!

Google Doc: https://docs.google.com/document/d/1UDSphLiWsrbdr4jliFq8kzrJlUVKpF2asaL65GnnfoM/edit

TLDR generation

Requirements/Dependencies

From PyPI, using pip: numpy, librosa, and pytorch (plus mpi4py if you will run the data-processing/training steps; see the dependency lists further below).

From your favorite system package manager:

  • sox (e.g. sudo apt-get install sox)
  • ffmpeg
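A minimal install sketch, assuming a Debian/Ubuntu system and the package names from the dependency lists in this readme (pick the pytorch build matching your CUDA version):

# system tools
sudo apt-get install sox ffmpeg

# python packages (the pip name for pytorch is "torch")
pip install numpy librosa torch mpi4py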

Recommended hardware:

  • Nvidia GPU with CUDA (unfortunately, stage 2 is too slow on CPU, although it should work in theory after removing the "cuda" options in ./script_generate.sh below)

(Do this the first time you generate) Download the pre-trained weights from https://mega.nz/#!tJBxTC5C!nXspSCKfJ6PYJjdKkFVzIviYEhr0BSg8zXINBqC5rpA, and extract the contents (two folders with four files in total) into the folder scripts/training/.

Then, to generate a level, simply run (on Linux):

cd scripts/generation

./script_generate.sh [path to song]

Or on Windows:

.\script_generate.ps1 [path to song]

where you should substitute [path to song] with the path to the song you want to generate the level from. The song should be in WAV format (sorry), and the filename should not contain spaces :P. Generation should take about 3 minutes for a 3-minute song, but it grows (roughly quadratically, I think) with the length, and it will depend on how good your GPU is (mine is a GTX 1070).

This will generate a zip with the Beat Saber level, which should appear in scripts/generation/generated. You should be able to put it in the custom levels folder of the current version of Beat Saber (as of the end of 2019).
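Current Beat Saber versions expect each custom level extracted into its own folder under the game's CustomLevels directory; a rough sketch for a Steam install (the install path and level name here are assumptions):

# extract the generated level into Beat Saber's custom levels folder
unzip scripts/generation/generated/my_song.zip -d "/path/to/Steam/steamapps/common/Beat Saber/Beat Saber_Data/CustomLevels/my_song"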

On Windows, you'll have to convert the song's .wav to an .ogg file (and subsequently to song.egg) manually. You can use the ffmpeg invocation from the error messages of the script with a Windows build of ffmpeg, or use Audacity or HandBrake or whatever GUI you like.
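A typical conversion looks something like this (the codec and quality flags are an assumption; a .egg file is just a Vorbis-encoded .ogg renamed):

# encode the WAV as Vorbis OGG, then rename to the .egg extension the level expects
ffmpeg -i song.wav -c:a libvorbis -q:a 6 song.ogg
mv song.ogg song.egg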

I also recommend reading about the "open_in_browser" option, described in the next section: it is quite a nice feature for visualizing the generated level quickly, and it is easy to set up if you have Dropbox.

Pro tip: If the generated level doesn't look good (this is deep learning, it's hard to give guarantees :P), try changing these lines in ./script_generate.sh

cpt2=2150000
#cpt2=1200000
#cpt2=1450000

to

#cpt2=2150000
#cpt2=1200000
cpt2=1450000

See below for an explanation.

Further generation options

[TODO] make this more user friendly.

If you open the script scripts/generation/script_generate.sh in your editor, you can see other options. You can change exp1 and exp2, as well as the corresponding cpt1 and cpt2. These correspond to "experiments" and "checkpoints", and determine where the pre-trained network weights come from. The checkpoints are found in folders inside scripts/training, and cpt1/cpt2 specify which of the saved iterations to use. If you train your own models, you can change these to generate using your trained models. You can also change them to explore the different pre-trained versions available at https://mega.nz/#!VEo3XAxb!7juvH_R_6IjG1Iv_sVn1yGFqFY3sQVuFyvlbbdDPyk4 (for example, DeepSaber 1 used the latest checkpoint in "block_placement_new_nohumreg" for stage 1 and the latest in "block_selection_new" for stage 2). The one you downloaded above is the latest (DeepSaber 2, trained on a more curated dataset), so it should typically work best (but there is always some stochasticity and subjectivity).
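For orientation, the relevant block of script_generate.sh looks roughly like this (the experiment names and the cpt1 value are illustrative, pieced together from the examples in this readme):

# which experiment folders under scripts/training/ to load weights from
exp1=block_placement_new_nohumreg   # stage 1: when to place notes
exp2=block_selection_new            # stage 2: which notes to place
# which saved training iteration of each experiment to use
cpt1=1200000   # illustrative; use a checkpoint that exists in the exp1 folder
cpt2=2150000   # the pro tip above suggests 1450000 as an alternative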

You can also change the variable type from deepsaber to ddc to use DDC for stage 1 (which decides at which times to put notes), while still using DeepSaber for stage 2 (which decides which notes to put at each instant where stage 1 decided to put something). This requires setting up DDC first. If you do, just pass the generated StepMania file as a third command-line argument, and it should work the same.

There is also an "open in browser" option (activated by uncommenting the line #--open_in_browser inside the deepsaber if-block), which is very useful for testing, as it gives you a link to a level visualizer in the browser. To set it up, you just need to set up the script scripts/generation/dropbox_uploader.sh. This is very easy: just run the script, and it will guide you through linking it to your Dropbox account (you need one).

A useful parameter to change is also the peak threshold (--peak_threshold). It is currently set at about 0.33, but you can experiment with it: setting it higher outputs fewer notes, and setting it lower outputs more.

If you dig deeper, you can also disable the option --use_beam_search, but the outputs are then usually quite random. You can also try setting the --temperature parameter lower to make them less so, but beam search is typically better.

Digging even deeper, there is a very hidden option :P inside scripts/generation/generate_stage2.py: on line 59 there is opt["beam_size"] = 17. You can change this number if you want. Making it larger means generation will take longer but will typically be of higher quality (it's as if the model thinks harder about it); making it smaller has the opposite effect, which can be worth trying if you want fast generation for some reason.

You could also change opt["n_best"] = 1 to something greater than 1, along with some other code changes, to get outputs the model thought "less likely" and explore what the model can generate [contact me for more details].

Example of whole pipeline

Requirements/Dependencies

  • numpy
  • librosa
  • pytorch
  • mpi4py (only for the training/data-processing steps)

This is a quick run through the whole pipeline, from getting data, to training, to generating:

Run all of this in the root folder of the repo.

Get example data

wget -O DataSample.tar.gz https://www.dropbox.com/s/2i75ebqmm5yd15c/DataSample.tar.gz?dl=1

[You can also download the whole dataset here: https://mega.nz/#!sABVnYYJ!ZWImW0OSCD_w8Huazxs3Vr0p_2jCqmR44IB9DCKWxac]

tar xzvf DataSample.tar.gz

mv scripts/misc/bash_scripts/extract_zips.sh DataSample/

cd DataSample; ./extract_zips.sh; cd ..

rm DataSample/*.zip

mkdir -p data/extracted_data; mv DataSample/* data/extracted_data

Get reduced state list

wget -O data/statespace/sorted_states.pkl https://www.dropbox.com/s/ygffzawbipvady8/sorted_states.pkl?dl=1

Data augmentation (optional)

scripts/data_processing/augment_data.sh

extract features

Dependencies: librosa, mpi4py (and mpi itself). TODO: make mpi an optional dependency.

You can replace "Expert,ExpertPlus" with any comma-separated list of difficulties (no spaces) to train on levels of those difficulties; a variant is shown after the two commands below.

mpiexec -n $(nproc) python3 scripts/feature_extraction/process_songs.py data/extracted_data Expert,ExpertPlus --feature_name multi_mel --feature_size 80

mpiexec -n $(nproc) python3 scripts/feature_extraction/process_songs.py data/extracted_data Expert,ExpertPlus --feature_name mel --feature_size 100
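For example, to also train on Hard levels (assuming the difficulty names match Beat Saber's standard ones):

mpiexec -n $(nproc) python3 scripts/feature_extraction/process_songs.py data/extracted_data Hard,Expert,ExpertPlus --feature_name mel --feature_size 100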

pregenerate level tensors (new fix that makes stage 1 training much faster)

This command needs to be run once for each difficulty level we want to train on; here, Expert and ExpertPlus:

mpiexec -n $(nproc) python3 scripts/feature_extraction/process_songs_tensors.py data/extracted_data Expert --replace_existing --feature_name multi_mel --feature_size 80

mpiexec -n $(nproc) python3 scripts/feature_extraction/process_songs_tensors.py data/extracted_data ExpertPlus --replace_existing --feature_name multi_mel --feature_size 80

training

Dependencies: pytorch

Train Stage 1. Either of two options:

  • (wavenet option): scripts/training/debug_script_block_placement.sh
  • (ddc option): scripts/training/debug_script_ddc_block_placement.sh

Train Stage 2: scripts/training/debug_script_block_selection.sh

generation (using the model trained as above)

To generate with the models trained as above, you need to edit scripts/generation/script_generate.sh: change the variable exp1 to the experiment name from which to take the trained weights (following the example above, that is test_block_placement, or test_ddc_block_placement if you used DDC); change the variable exp2 to test_block_selection; change cpt1 to the latest block placement iteration; and change cpt2 to the latest block selection iteration. The latest iterations can be found by looking in the folders in scripts/training/ named after the different experiments, for files of the form iter_[checkpoint]_net_.pth.
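For example, a quick way to list the saved iterations (experiment folder names taken from the training example above):

ls scripts/training/test_block_placement/iter_*.pth
ls scripts/training/test_block_selection/iter_*.pth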

Using the DDC option or the "open in browser" option requires more setting up (especially the former). But the above should generate a zip file with the level.

  • The "open in browser" option is very useful for visualizing the level. You just need to set up the script scripts/generation/dropbox_uploader.sh. This is very easy, just run the script, and it will guide you with how to link it to your dropbox account (you need one.)

  • The DDC option requires setting up DDC (https://github.com/chrisdonahue/ddc), which now includes a Docker component and requires its own series of steps. But hopefully the newly trained model will supersede this.

Getting the data

[TODO] Here we describe the scripts that scrape BeatSaver and BeastSaber to get the training data.

download data

scripts/data_retrieval/download_data.py
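Presumably invoked directly with Python (this invocation is an assumption; check the script for its options):

python3 scripts/data_retrieval/download_data.py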

obtain the most common states to use for the reduced state representation

scripts/data_processing/state_space_functions.py

train

prepare and preprocess data

data augmentation

scripts/data_processing/augment_data.sh

data preprocessing

scripts/feature_extraction/process_songs.py [] (see the full invocations in the pipeline example above)

training

scripts/training/script_block_placement.sh

See more in the readme at scripts/training/README.md.

