jonike / musical-neural-net

This project forked from mcleavey/musical-neural-net

Repository size: 1.43 GB

Train an LSTM to generate piano or violin/piano music.

Home Page: http://www.christinemcleavey.com/human-or-ai

Languages: Python 95.44%, Jupyter Notebook 4.17%, Shell 0.39%

musical-neural-net's Introduction

Clara: A Neural Net Music Generator

Take the AI vs Human Quiz.

Train an AWD-LSTM to generate piano or violin/piano music
Project overview is here.
Detailed paper is here.

Requirements:

Note: From inside the musical-neural-net home directory, run:
ln -s ./replace/this/with/your/path/to/fastai/library fastai 

to create a symbolic link to the fastai library. Alternatively, this blog clearly describes how to get an AWS machine up and running with fastai ready to go.

You will also likely need to install fluidsynth, mpg321, and twolame, for example:
sudo apt install fluidsynth mpg321 twolame

Basic:

Run the Jupyter notebook BasicIntro.ipynb, or follow the individual instructions here. To create generations with a pretrained notewise model using only the default settings, run:
python make_test_train.py --example
python generate.py -model notewise_generator -output notewise_generation_samples

The output samples will be in data/output/notewise_generation_samples; open Playlist.ipynb to listen to them. I recommend the free program MuseScore for translating the MIDI files into sheet music.

Note: you must first install the requirements listed above.

Data:

If you use your own midi files, they should go in data/composers/midi/piano_solo or data/composers/midi/chamber (the project expects one folder of midi files per composer, e.g. data/composers/midi/piano_solo/bach/example_piece.mid).
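The expected layout can be created like this (composer folder names are your choice; "bach" is the repo's own example, and "beethoven" here is just an illustration):

```shell
# Create per-composer folders for your own corpus; the scripts expect
# one folder of midi files per composer under piano_solo or chamber.
mkdir -p data/composers/midi/piano_solo/bach
mkdir -p data/composers/midi/chamber/beethoven
# Then copy your .mid files into the matching composer folder, e.g.:
# cp path/to/example_piece.mid data/composers/midi/piano_solo/bach/
```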

Run:

python midi-to-encoding.py

to translate midi files to text files in the various notewise and chordwise options.
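As a rough illustration of the difference between the two encodings (the token names and pitch range below are assumptions for illustration, not the repo's exact vocabulary): notewise emits one token per note-on/note-off event plus explicit waits, while chordwise emits one token per time step describing every pitch sounding at that step.

```python
# Illustrative sketch only -- not the repo's actual encoder.

# Notewise: one token per note event, with explicit wait tokens.
notewise = "p60 p64 wait4 endp60 endp64 wait2 p62 wait4 endp62"

# Chordwise: one token per time step; each token is a 0/1 string over a
# pitch range (a toy 12-pitch range here), 1 = pitch sounding at that step.
def chord_token(pitches, low=60, size=12):
    """Encode the set of sounding MIDI pitches as a 0/1 string."""
    return "".join("1" if low + i in pitches else "0" for i in range(size))

chordwise = " ".join([
    chord_token({60, 64}),  # C and E sounding
    chord_token({60, 64}),
    chord_token({62}),      # D sounding
    chord_token(set()),     # rest
])
print(chordwise)
# 100010000000 100010000000 001000000000 000000000000
```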

My dataset is available here (you can download any or all):
  • notewise files go in data/composers/notewise
  • chordwise files go in data/composers/chordwise

Run tar -zxvf thisfilename.tar.gz to expand each archive.

Training and Generation:

  • make_test_train.py - create the training and testing datasets (adjust notewise/chordwise, optionally create only a small sample size)
  • train.py - train an AWD-LSTM (adjust model parameters, dropout, and training regime)
  • generate.py - generate new samples (adjust generation size)
Each script has default settings which should be reasonable, but use --help to see the different options and parameters which can be modified.

The data files linked above are quite large and will take a long time to train on. If you want to experiment with different training networks, I highly recommend first running make_test_train.py with --sample .1 (10% of the data), so that you have a much smaller dataset to play with and can iterate faster.

Playlist.ipynb is a simple Jupyter Notebook which creates a nicely formatted playlist for listening to all the generations.

Music Critic:

  • make_critic_data.py - create the training and test datasets (requires a trained generation model to create the fake data)
  • critic.py - trains a classifier to predict if a sample is human-composed or LSTM-composed

Composer Classifier:

  • make_composer_data.py - create the training and test datasets (all from human-composed pieces)
  • composer_classifier.py - trains a classifier to predict which human composed the piece

Pretrained Models:

Sample pretrained models are included in this repository. They were trained using the default settings (all composers; notewise with a sample frequency of 12, chordwise with a sample frequency of 4).
  • notewise_generator
  • chordwise_generator
  • chamber_generator (uses notewise encoding)
  • notewise_critic
  • notewise_composer_classifier

For example, use:

python generate.py -model notewise_generator -output notewise_generation_samples --random_freq 0.8 --trunc 3

to generate musical samples.
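Flags like --trunc control how the sampler picks each next token. As a hedged sketch of the general technique (the repo's exact semantics for --trunc and --random_freq may differ), truncated "top-k" sampling restricts the draw to the k most probable tokens:

```python
import random

# Sketch of truncated (top-k) sampling, the general technique behind a
# flag like --trunc; not necessarily the repo's exact implementation.
def sample_truncated(probs, trunc, rng=random.Random(0)):
    """Sample an index from probs, restricted to the trunc largest entries."""
    # indices of the `trunc` most probable tokens
    top = sorted(range(len(probs)), key=lambda i: probs[i])[-trunc:]
    # draw proportionally to probability within that restricted set
    total = sum(probs[i] for i in top)
    r = rng.random() * total
    acc = 0.0
    for i in top:
        acc += probs[i]
        if r <= acc:
            return i
    return top[-1]

probs = [0.05, 0.4, 0.1, 0.3, 0.15]
# with trunc=3, the result is always one of the three most probable
# tokens: indices 1, 3, or 4
print(sample_truncated(probs, trunc=3))
```

Smaller trunc values make generations more conservative; larger values admit less likely notes and sound more adventurous.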

musical-neural-net's People

Contributors: mcleavey
