Comments (10)

flavioschneider avatar flavioschneider commented on June 18, 2024

I'm not an expert on this, and I think it would be hard to tell without running experiments. If I had to guess (based on how similar models are scaled up for image generation), I would increase the number of resnet blocks to num_blocks: [2, 2, 2, 3, 3, 3] or num_blocks: [2, 2, 2, 4, 4, 4], depending on how large you want to go. You could also play with the multipliers to increase the number of channels, e.g. multipliers: [1, 2, 4, 4, 4, 8, 8].
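
For reference, a rough sketch of what that scaled-up configuration could look like. Only num_blocks, multipliers, and the attention settings come from this thread; the other keys (channels, factors) are illustrative placeholders, not values from this conversation:

```python
# Hypothetical scaled-up UNet configuration based on the values discussed above.
# Only num_blocks, multipliers, attention_heads, and attention_features reflect
# this thread; channels and factors are assumed for illustration.
unet_config = dict(
    channels=128,                        # base channel count (assumed)
    multipliers=[1, 2, 4, 4, 4, 8, 8],   # wider deeper stages, as suggested
    factors=[4, 4, 4, 2, 2, 2],          # per-stage downsampling (assumed)
    num_blocks=[2, 2, 2, 3, 3, 3],       # more resnet blocks per stage
    attention_heads=8,                   # unchanged
    attention_features=64,               # unchanged (see the discussion below)
)
```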

zaptrem avatar zaptrem commented on June 18, 2024

Thanks! I'll try those settings. So you'd leave the attention features/heads/etc the same?

flavioschneider avatar flavioschneider commented on June 18, 2024

Heads I would definitely leave the same; you could increase attention_features to 128, so you'd have a total of 128*8 = 1024 hidden features, which matches Imagen if I'm not mistaken. They use twice the number of attention hidden features as channels (since if you have 128 channels with a multiplier of 4, that's 512 channels).
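
Spelling out the arithmetic behind that suggestion (all values are the ones quoted in this exchange):

```python
# Attention width implied by the suggested settings.
attention_heads = 8
attention_features = 128                                 # proposed increase
attention_hidden = attention_heads * attention_features  # = 1024

base_channels = 128
multiplier = 4
stage_channels = base_channels * multiplier              # = 512

# 1024 == 2 * 512: twice the number of attention hidden features as channels,
# matching the Imagen-style rule of thumb mentioned above.
```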

Btw let me know if you get good results and which setting you end up using :)

zaptrem avatar zaptrem commented on June 18, 2024

I stuck with the numbers you described in your first comment and left attention_features untouched. You can follow the results here: https://wandb.ai/zaptrem/diffusion-pop-3?workspace=user-zaptrem

I also ran into a couple of issues with my PC overheating (it was really hot this weekend!) and doubled the dataset size halfway through, which explains the weird loss curves. Additionally, this run doesn't include your more recent context_channels commit. Do you think it's worth restarting with the increased attention and context channels, or seeing this one through further?

Also, does this library use the VAE encoding trick Stable Diffusion uses to increase efficiency?

flavioschneider avatar flavioschneider commented on June 18, 2024

Thanks for sharing! The context_channels commit is for some experiments I'm doing with conditioning, so it's not necessary for unconditional generation. It's hard to tell what's worth trying; I would wait for this experiment to finish and maybe run another where you only change the attention size, to compare which is more influential.

I tried the VAE approach to increase efficiency, but it's very hard to train a good VAE: there's no good loss function for audio, and it's also hard to make diffusion work on top of it. I would leave that out for now if you don't want to do lots of experiments :)

zaptrem avatar zaptrem commented on June 18, 2024

it's very hard to train a good VAE

Could one just reuse the pretrained VQ-VAEs from OpenAI's Jukebox? Or is that type of model not useful for the kind of efficiency improvement Stable Diffusion gets?

flavioschneider avatar flavioschneider commented on June 18, 2024

I'm not sure that would work, since in order to add noise to the encoded input, it needs to be in the range [-1, 1] with a mean of 0. Maybe if properly regularized.
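
As a minimal sketch of the kind of rescaling that would be needed (assuming continuous latents; the fixed 0.18215 factor is what Stable Diffusion uses for its own VAE and would have to be re-estimated from any other encoder's latent statistics):

```python
import torch

# Rescale encoder latents towards zero mean / roughly unit variance before
# adding diffusion noise, and undo the scaling before decoding.
# The scale factor below is Stable Diffusion's value, shown only as an example.
LATENT_SCALE = 0.18215

def normalize_latents(z: torch.Tensor, scale: float = LATENT_SCALE) -> torch.Tensor:
    return z * scale

def denormalize_latents(z: torch.Tensor, scale: float = LATENT_SCALE) -> torch.Tensor:
    return z / scale
```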

zaptrem avatar zaptrem commented on June 18, 2024

That makes sense. Is there any reason a pyramid of diffusers, à la Jukebox's transformer priors, couldn't do a similar job? That was my plan once I got something resembling acceptable results out of this level. Also, is there a rule of thumb for when to end training? Or do people just wait until changes are no longer audible/visible?

zaptrem avatar zaptrem commented on June 18, 2024

I switched to the larger attention features version and am getting slightly more encouraging results: https://wandb.ai/zaptrem/diffusion-pop-4?workspace=user-zaptrem

I think I should keep scaling.

Is the learning rate falloff determined by number of epochs, or steps?

flavioschneider avatar flavioschneider commented on June 18, 2024

That makes sense. Is there any reason a pyramid of diffusers, à la Jukebox's transformer priors, couldn't do a similar job?

When you say a pyramid of diffusers do you mean like: a first diffusion model predicting a source at 12kHz, then a second upsampling that to 24kHz, and a third to 48kHz?
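
Concretely, such a cascade could look something like the sketch below, with three hypothetical diffusion models exposing a .sample(...) method (not this library's actual API), each upsampler conditioned on the lower-rate audio resampled to its target rate:

```python
import torchaudio.functional as F

def generate_cascade(base_model, upsampler_24k, upsampler_48k, length_12k):
    # Stage 1: unconditional generation at the lowest sample rate (12 kHz).
    audio_12k = base_model.sample(length=length_12k)
    # Stage 2: refine at 24 kHz, conditioned on the upsampled 12 kHz audio.
    cond_24k = F.resample(audio_12k, orig_freq=12_000, new_freq=24_000)
    audio_24k = upsampler_24k.sample(conditioning=cond_24k)
    # Stage 3: refine at 48 kHz, conditioned on the upsampled 24 kHz audio.
    cond_48k = F.resample(audio_24k, orig_freq=24_000, new_freq=48_000)
    audio_48k = upsampler_48k.sample(conditioning=cond_48k)
    return audio_48k
```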

Also, is there a rule of thumb for when to end training? Or do people just wait until changes are no longer audible/visible?

There isn't. I've noticed that sometimes, even if the loss seems to converge, the quality continues to improve a bit after that. It's hard to find a rule that always applies, since there's no good metric for audio quality.

I switched to the larger attention features version and am getting slightly more encouraging results: https://wandb.ai/zaptrem/diffusion-pop-4?workspace=user-zaptrem

That's very interesting! (For some reason, the provided link seems to be dead)

Is the learning rate falloff determined by number of epochs, or steps?

I didn't add any LR scheduler, but I think other people use InverseLR, CosineAnnealingLR, or LambdaLR scheduling. Also, ideally you would keep a second copy of the model with EMA weights and sample from that, so that it's more stable; see for example the trainer in imagen-pytorch. It's something I might add to the trainer in the future.
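
A minimal sketch of those two suggestions in plain PyTorch (not this library's trainer): a cosine LR schedule stepped per optimizer step, plus an EMA copy of the model used for sampling. Here model, dataloader, and total_steps are placeholders, and the assumption that calling the model returns the diffusion loss may not match this library exactly.

```python
import copy
import torch

# Placeholders: `model` is the diffusion model being trained, `dataloader`
# yields audio batches, `total_steps` is the planned number of optimizer steps.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps)

# EMA copy of the model, used only for sampling.
ema_model = copy.deepcopy(model).eval().requires_grad_(False)
ema_decay = 0.995

for step, batch in enumerate(dataloader):
    loss = model(batch)          # assumed to return the diffusion loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()             # stepped per optimizer step, not per epoch

    # EMA update: ema = decay * ema + (1 - decay) * online
    with torch.no_grad():
        for ema_p, p in zip(ema_model.parameters(), model.parameters()):
            ema_p.lerp_(p, 1.0 - ema_decay)
```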

Btw, I'm going to move this issue into the general discussion :)
