SynthAX: A Fast Modular Synthesizer in JAX ⚡️


Accelerating audio synthesis far beyond realtime speeds has a significant role to play in advancing intelligent audio production techniques. SynthAX is a fast virtual modular synthesizer written in JAX. At its peak, SynthAX generates audio over 90,000 times faster than realtime, significantly faster than the state of the art in accelerated sound synthesis, by leveraging massive vectorization and high-throughput accelerators. You can get started with the accompanying Colab notebook.

Basic synthax API Usage

import jax
from synthax.config import SynthConfig
from synthax.synth import ParametricSynth

# Instantiate config
config = SynthConfig(
    batch_size=16,
    sample_rate=44100,
    buffer_size_seconds=4.0
)

# Instantiate synthesizer
synth = ParametricSynth(
    config=config,
    sine=1,
    square_saw=1,
    fm_sine=1,
    fm_square_saw=0
)

# Initialize and run
key = jax.random.PRNGKey(42)
params = synth.init(key)
audio = jax.jit(synth.apply)(params)
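The snippet above renders an entire batch of sounds (batch_size=16) in a single jitted call. The same vectorization idea can be sketched in plain JAX, independent of synthax internals: a toy oscillator is vectorized over a batch of parameters with jax.vmap and compiled with jax.jit. This is an illustrative sketch, not the synthax implementation.

```python
import jax
import jax.numpy as jnp

SAMPLE_RATE = 8000  # toy rate for illustration

def render_sine(freq):
    # Render one second of a sine wave at the given frequency.
    t = jnp.arange(SAMPLE_RATE) / SAMPLE_RATE
    return jnp.sin(2 * jnp.pi * freq * t)

# vmap vectorizes the oscillator over a batch of frequencies;
# jit compiles the whole batched computation once.
batch_render = jax.jit(jax.vmap(render_sine))

freqs = jnp.array([110.0, 220.0, 440.0])
audio = batch_render(freqs)
print(audio.shape)  # (3, 8000): one row of samples per frequency
```

Because the batch dimension is handled by the compiler rather than a Python loop, adding more voices costs little extra wall-clock time on an accelerator.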

Installation

The latest synthax release can be installed directly from PyPI:

pip install synthax

If you want to get the most recent commit, please install directly from the repository:

pip install git+https://github.com/PapayaResearch/synthax.git@main

To use JAX on your accelerators, see the JAX documentation for installation details.
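After installing, you can confirm which backend JAX actually picked up. This quick check uses only standard JAX calls (jax.default_backend and jax.devices); the printed values will of course depend on your machine.

```python
import jax

# Report the active backend ("cpu", "gpu", or "tpu") and visible devices.
print(jax.default_backend())
print(jax.devices())
```

If this prints "cpu" on a machine with a GPU, the accelerator-enabled jaxlib wheel is likely not installed.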

Acknowledgements & Citing

If you use synthax in your research, please cite the following:

@conference{cherep2023synthax,
    title = {SynthAX: A Fast Modular Synthesizer in JAX},
    author = {Cherep, Manuel and Singh, Nikhil},
    booktitle = {Audio Engineering Society Convention 155},
    month = {May},
    year = {2023},
    url = {http://www.aes.org/e-lib/browse.cfm?elib=22261}
}

This project is based on torchsynth. We acknowledge financial support by Fulbright Spain.


synthax's Issues

Ensuring gradient flow from `flax` neural network output to `synthax` synth parameters

Hello, synthax team!

I'm working on a project where I use synthax to generate audio based on parameters predicted by a neural network (NN) using flax. The NN predicts a set of parameters that should be passed into the synth using jax.jit(synth.apply)(params) to create audio.

However, I've encountered a challenge regarding the gradient flow from the network predictions to the synthax synthesizer parameters. The goal is to backpropagate through the synthesizer's parameter updates to optimize the NN's predictions. To achieve this, I'm attempting to maintain the gradient information from the NN's output as it's transformed and applied to the synthesizer.

My primary concern is ensuring that the transformation process from the output of the NN to the FrozenDict maintains the gradient information so that the NN can be effectively trained based on the audio output.

Could you provide guidance or best practices on how to ensure that the gradient flow is preserved during this transformation process? Specifically, I'm looking for advice on how to structure this transformation so that it is compatible with JAX's automatic differentiation, ensuring that gradients can propagate back through the parameter transformation step to the NN. I see that the structure of flax networks and the synths in synthax are similar, so I think I am missing something.

Any insights or recommendations would be greatly appreciated.

Thank you!
Jofre
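The general JAX pattern behind this question can be sketched without synthax or flax: keep the NN-output-to-synth-parameter mapping inside the function being differentiated, so jax.grad traces through it end to end. Below, a toy "synth" renders a sine from a parameter pytree, and a stand-in for the network output is mapped to that pytree inside the loss. This is an illustrative sketch of the technique, not the synthax API; the names (render, loss_fn, raw_freq) are hypothetical.

```python
import jax
import jax.numpy as jnp

SAMPLE_RATE = 16000

def render(synth_params):
    # Toy differentiable synth: one second of a sine at params["freq"].
    t = jnp.arange(SAMPLE_RATE) / SAMPLE_RATE
    return jnp.sin(2 * jnp.pi * synth_params["freq"] * t)

def loss_fn(nn_output, target):
    # Map the (hypothetical) network output to synth parameters *inside*
    # the differentiated function; softplus keeps the frequency positive.
    synth_params = {"freq": 100.0 * jax.nn.softplus(nn_output["raw_freq"])}
    audio = render(synth_params)
    return jnp.mean((audio - target) ** 2)

nn_output = {"raw_freq": jnp.array(1.5)}
target = render({"freq": 220.0})
grads = jax.grad(loss_fn)(nn_output, target)
# grads["raw_freq"] is a finite scalar: the gradient reached the NN side.
```

The key point is that no step detaches the computation: as long as the transformation from network output to parameter pytree uses jnp operations (no Python-side casts, no stop_gradient), gradients flow through a FrozenDict the same way they flow through any other pytree.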
