- Main: https://aiva.ai/
- Release Discord: https://discord.com/channels/595651381860368384/596767903215255572
- Tutorial playlist @ YouTube: https://www.youtube.com/watch?v=SR-UWkSTmAQ&list=PLv7BOfa4CxsHp4uDdsmZgpdclrwkdMpOe
- Check Terms and Conditions and create an account
- Generate 3 versions of a composition on the web interface
- Open the best one in the Editor and refine it
- Use an influence, then repeat the last steps above for another composition
- Identify the Model / Data / Code structure
- Show and tell: including influence and ethics
- Download your work as MIDI (we'll use it from 1.3 onwards)
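Once downloaded, you can also inspect the MIDI file programmatically. A minimal sketch using the `pretty_midi` library (not part of the AIVA workflow itself; the filename is a placeholder):

```python
# Quick look inside the downloaded AIVA MIDI file.
# Assumes `pip install pretty_midi`; the filename is a placeholder.
import pretty_midi

midi = pretty_midi.PrettyMIDI("aiva_composition.mid")
print(f"Duration: {midi.get_end_time():.1f} s")
print(f"Tempo estimate: {midi.estimate_tempo():.0f} BPM")
for instrument in midi.instruments:
    name = pretty_midi.program_to_instrument_name(instrument.program)
    print(f"{name}: {len(instrument.notes)} notes")
```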
- Main: https://playground.tensorflow.org/
- Tinker with the default example: what are features, learning rate, and activation functions?
- Challenge: Can you create a neural network that uses only the first 2 features as input and linear as the activation function? (See the Keras sketch after this list.)
- Exit: Look around TensorFlow @ https://www.tensorflow.org/ (the main tool behind Magenta)
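For reference, the Playground challenge restated in Keras: restricting the input to the first 2 features and using a linear activation reduces the network to plain linear regression. A sketch, not an official Playground export (the 0.03 learning rate matches the Playground default):

```python
# The Playground challenge in Keras: 2 input features,
# one output unit, linear activation -- i.e. linear regression.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),                 # only x1 and x2 as features
    tf.keras.layers.Dense(1, activation="linear"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.03),
              loss="mse")
model.summary()
```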
- Explore https://magenta.tensorflow.org/get-started
- Find the NSynth Explorer among the Web Applications: https://magenta.tensorflow.org/demos/web/ and try it out
- Main: https://magenta.tensorflow.org/nsynth
- Check https://magenta.tensorflow.org/nsynth-instrument. If you have Ableton Live or Max/MSP, download the plugin and experiment with the grid; if not, try the other web apps.
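If you'd rather poke at the underlying data than the plugin, the NSynth dataset is also published through TensorFlow Datasets. A sketch; the `nsynth/gansynth_subset` config name is an assumption, and the download is large:

```python
# Peek at a few NSynth examples via TensorFlow Datasets.
# Note: the full dataset is large; the config name is an assumption.
import tensorflow_datasets as tfds

ds = tfds.load("nsynth/gansynth_subset", split="train")
for example in ds.take(3):
    audio = example["audio"]  # 4 s of audio at 16 kHz
    print("pitch:", int(example["pitch"]), "samples:", audio.shape[0])
```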
- Download Magenta Studio https://magenta.tensorflow.org/studio (the plugin version if you have Ableton Live, otherwise the standalone)
- Make sure to experiment with all 5 tools: Continue, Groove, Generate, Drumify, and Interpolate. Apply the Magenta models to your MIDI file from (1.1). (See the note_seq sketch after this list.)
- Show and tell at 13:30.
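Under the hood, Magenta's tools operate on NoteSequence protos rather than raw MIDI. A minimal round-trip sketch with the `note_seq` library (filenames are placeholders):

```python
# Convert the MIDI file from (1.1) to Magenta's NoteSequence format
# and back -- the representation the Magenta tools work on.
# Assumes `pip install note-seq`; filenames are placeholders.
import note_seq

sequence = note_seq.midi_file_to_note_sequence("aiva_composition.mid")
print(f"{len(sequence.notes)} notes, {sequence.total_time:.1f} s")

note_seq.sequence_proto_to_midi_file(sequence, "roundtrip.mid")
```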
- Main: https://interactiveaudiolab.github.io/project/audacity.html
- Download the special build of Audacity (Mac only?)
- Try the usage example "Upmixing and Remixing with Source Separation" with your favorite audio file
- Try adding vocals to your AIVA composition. You'll have to match pitch, tempo, harmony, and other attributes with effects.
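To match those attributes by numbers rather than purely by ear, you can estimate the tempo and tuning of both files first. A sketch with `librosa` (filenames are placeholders):

```python
# Estimate tempo and tuning offset for two files so the vocal
# stem can be matched to the AIVA composition.
# Assumes `pip install librosa`; filenames are placeholders.
import librosa

for path in ["aiva_composition.wav", "vocals.wav"]:
    y, sr = librosa.load(path)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    tuning = librosa.estimate_tuning(y=y, sr=sr)
    print(f"{path}: ~{float(tempo):.0f} BPM, "
          f"tuning offset {tuning:+.2f} semitones")
```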
- Magenta's DDSP: https://magenta.tensorflow.org/ddsp-vst
- Pretrained DDSP models from the authors: https://drive.google.com/drive/folders/1o00rBOLPNEZWURCimK_QQWpvR8iWVeK5
- DDSP + TikTok-like morphing: https://mawf.io/
- Neutone: https://neutone.space/
- Google Music LM: https://google-research.github.io/seanet/musiclm/examples/
- Meta AudioCraft: https://audiocraft.metademolab.com/
- Brand-new web UI for music LLMs: https://sonauto.app/ (requires Google login)
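AudioCraft can also be scripted directly. A minimal text-to-music sketch with its MusicGen model (model size and prompt are just examples; assumes `pip install audiocraft` and, ideally, a GPU):

```python
# Text-to-music with AudioCraft's MusicGen (small model as an example).
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # seconds of audio to generate
wav = model.generate(["warm lo-fi beat with soft piano"])
audio_write("musicgen_demo", wav[0].cpu(), model.sample_rate,
            strategy="loudness")
```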
Integrating Neutone (https://neutone.space/) into your workflow, create a short musical piece. Be sure to use Demucs for source separation and several generative models together. Be mindful of resources (sampling rate, buffer size, real-time factor, latency, etc.): in most cases you'll be able to run at most three instances, even on a high-end computer. A Demucs sketch follows below.
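Demucs can also be run outside the DAW to pre-separate stems before they enter the Neutone chain. A sketch shelling out to the Demucs CLI (the input filename is a placeholder):

```python
# Pre-separate a track into vocal / non-vocal stems with the Demucs CLI
# before loading the stems into your DAW + Neutone chain.
# Assumes `pip install demucs`; the input filename is a placeholder.
import subprocess

subprocess.run(
    ["demucs", "--two-stems=vocals", "my_track.mp3"],
    check=True,
)
# Stems land in ./separated/<model>/my_track/{vocals,no_vocals}.wav
```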
Critical listening and discussion. The final pieces will be linked here.
- Harmonai, Dance Diffusion: https://www.youtube.com/watch?v=KmB8z2CYjZY
- Harmonai: https://www.harmonai.org/
- Dadabots keynote: https://www.youtube.com/watch?v=70PjXAOmQIs
- Moises Horta (hexorcismos): https://moiseshorta.audio/
- Nice exhibition by AI ethics researcher Mirabelle Jones, open until Aug 27: https://facebook.com/events/s/overs%C3%A6ttelse-af-traumer-transl/2253077421542112/