audio-westlakeu / audiossl

A library built for easier audio self-supervised training and downstream task evaluation

License: Other

Python 94.44% Shell 5.56%
audio-classification audio-datasets self-supervised-learning audio-pretraining audio-self-supervised-learning audio-representation audioset nsynth speech-commands urbansound8k

audiossl's Introduction

Audio Self-Supervised Learning

Audiossl was developed as we implemented our own audio self-supervised learning methods (see Methods below). The library provides the general modules involved in audio pretraining, such as dataset loading and data transformation.

Methods


Two officially implemented audio self-supervised methods are included:

  1. ATST: Audio Representation Learning with Teacher-Student Transformer (published at INTERSPEECH 2022)

    See audiossl/methods/atst

  2. Self-supervised Audio Teacher-Student Transformer for Both Clip-level and Frame-level Tasks (Accepted by TASLP)

    See audiossl/methods/atstframe

Install


  1. Install PyTorch (version 2.1.1 or higher)
conda create -n your_env_name python=3.10.13
conda activate your_env_name
conda install cudatoolkit==11.8 -c nvidia
pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu118

  2. Install audiossl
    git clone https://github.com/Audio-WestlakeU/audiossl
    cd audiossl
    pip install .
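
As a quick sanity check (assuming the steps above completed without errors), the package should now be importable and able to list its bundled datasets:

    # Quick post-install check: importing audiossl and listing the bundled
    # datasets (described in the Datasets section below) should just work.
    from audiossl import datasets
    print(datasets.list_all_datasets())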

Datasets


One of the difficult parts of doing research on audio self-supervised learning is that you need to evaluate pretrained models on diverse downstream datasets. Audiossl implements a unified dataset interface to make evaluation easier, and it is also easy to implement a new dataset, as sketched below.
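
To illustrate, the sketch below is a hypothetical example of what a new dataset could look like: a creator function that builds a torch Dataset for a given split, plus metadata in the shape returned by datasets.list_all_datasets() (creator, multi_label, num_folds, num_labels). The names used here are placeholders, not the actual audiossl registration API. The built-in datasets are described next.

    # Hypothetical sketch of a new dataset; create_mydataset and the info
    # dict are placeholders following the structure printed by
    # datasets.list_all_datasets(), not the actual audiossl registration API.
    import os
    import torchaudio
    from torch.utils.data import Dataset

    class MyDataset(Dataset):
        def __init__(self, path, split="train", transform=None, target_transform=None):
            self.root = os.path.join(path, split)
            self.files = sorted(f for f in os.listdir(self.root) if f.endswith(".wav"))
            self.transform = transform
            self.target_transform = target_transform

        def __len__(self):
            return len(self.files)

        def __getitem__(self, idx):
            wav, sr = torchaudio.load(os.path.join(self.root, self.files[idx]))
            label = 0  # in practice, derive the label from file names or metadata
            if self.transform is not None:
                wav = self.transform(wav)
            if self.target_transform is not None:
                label = self.target_transform(label)
            return wav, label

    def create_mydataset(path, split="train", transform=None, target_transform=None):
        return MyDataset(path, split, transform, target_transform)

    # Metadata in the same shape as the entries printed by list_all_datasets()
    mydataset_info = {"creator": create_mydataset,
                      "multi_label": False,
                      "num_folds": 1,
                      "num_labels": 10}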

  1. List available datasets

    from audiossl import datasets
    print(datasets.list_all_datasets())
    """ output:
    voxceleb1:
    { 'creator': <function create_voxceleb1 at 0x7fbe285d0f80>,
    'multi_label': False,
    'num_folds': 1,
    'num_labels': 1251}
    us8k:
    { 'creator': <function create_us8k at 0x7fbe285d6170>,
    'multi_label': False,
    'num_folds': 10,
    'num_labels': 10}
    nsynth:
    { 'creator': <function create_nsynth at 0x7fbe285d60e0>,
    'multi_label': False,
    'num_folds': 1,
    'num_labels': 11}
    spcv2:
    { 'creator': <function create_spcv2 at 0x7fbe285d64d0>,
    'multi_label': False,
    'num_folds': 1,
    'num_labels': 35}
    audioset_b:
    { 'creator': <function create_spcv2 at 0x7fbe285d6560>,
    'multi_label': True,
    'num_folds': 1,
    'num_labels': 527}
    audioset:
    { 'creator': <function create_spcv2 at 0x7fbe285d65f0>,
    'multi_label': True,
    'num_folds': 1,
    'num_labels': 527}
    """
  2. Use a dataset

    • Data preparation

      See audiossl/methods/atst/docs/data_prep.md

    • Get a dataset

      from audiossl import datasets
      dsinfo = datasets.get_dataset("nsynth")
      ds = dsinfo.creat_fn(PATH_DATASET, split="train", transform=None, target_transform=None)
  3. Transformations

    See audiossl.transforms.

  4. An easy-to-use lightning data module

    from audiossl.lightning.datamodules import DownstreamDataModule
    data_module = DownstreamDataModule(data_path,
                                       dataset_name,
                                       batch_size=512,
                                       transforms=[train_transform,
                                                   valid_transform,
                                                   test_transform],
                                       target_transforms=[train_target_transform,
                                                          None,
                                                          None])
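
Assuming DownstreamDataModule follows the standard PyTorch Lightning LightningDataModule interface (which its import path suggests), the resulting data_module can be handed to a Trainer or used to build dataloaders directly. The snippet below is a minimal sketch; the exact batch contents depend on the transforms configured above.

    # Minimal sketch, assuming the usual LightningDataModule interface
    # (prepare_data/setup/train_dataloader); batch contents depend on the
    # configured transforms and target_transforms.
    data_module.prepare_data()
    data_module.setup("fit")
    train_loader = data_module.train_dataloader()
    batch = next(iter(train_loader))
    print(type(batch), len(batch))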

audiossl's People

Contributors

lmaxwell, saoyear

audiossl's Issues

Extracting embeddings.

Hello,

Could you please provide me information on how to extract embeddings from the model?

I am working with VoxCeleb2 audio data. I am training the model from scratch and would like to extract embeddings for further processing.

Please do let me know.

Regards,
Sreeni...

Figure background can be black in a dark mode browser

Hello,
Thank you for sharing your code. I just wanted to let you know that your figures might not be displayed correctly in readers' browsers.

It looks like this in dark mode (see the attached screenshot).

I guess filling the background with white would solve the issue.

Share the training log

Is there any way to share the training logs of the trained models? I would like to see how the loss changed during training and compare it with my training on a custom dataset.

'assl' package not found

Hi, thanks for the great work!
When I ran audioset.py to convert AudioSet to LMDB format, I got "ModuleNotFoundError: No module named 'assl'".
It seems the 'audiossl' repository doesn't have a module named 'assl'.
How can I solve this?

questions about atst-frame evaluated on audioset_strong_eval

Hi, I notice that your work is the first to conduct evaluation on the audioset_eval_strong dataset, which is really amazing. I am planning to evaluate my method on this set too, so there are some details I would like to ask about:

  1. Does 'w/o var-pen' mean that the parameter 'alpha_st' in the psds function equals 0?
  2. While computing the psds2 score on this set, I find that the process gets stuck due to high CPU demand; have you encountered the same problem?
  3. Since no weak or unlabeled data is available, is the model trained solely with a frame-level binary cross-entropy loss on the 100,000+ training clips?

I am looking forward to your reply, many thanks.
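
For reference on the first question: in the commonly used psds_eval package, the PSDS score takes two penalty weights, alpha_ct for cross-triggers and alpha_st for the standard deviation of per-class performance, so removing the variance penalty would correspond to alpha_st=0. The sketch below is only illustrative; the gt_df, meta_df and det_df dataframes (ground truth, audio durations and system detections) are assumed to be prepared beforehand, and the thresholds are example values.

    # Illustrative sketch with the psds_eval package; gt_df, meta_df and
    # det_df are pandas DataFrames assumed to be prepared beforehand, and
    # the thresholds here are example values only.
    from psds_eval import PSDSEval

    psds_eval = PSDSEval(dtc_threshold=0.7, gtc_threshold=0.7,
                         cttc_threshold=0.3,
                         ground_truth=gt_df, metadata=meta_df)
    psds_eval.add_operating_point(det_df)
    # alpha_st weights the penalty on cross-class variability; alpha_st=0
    # disables that penalty.
    psds = psds_eval.psds(alpha_ct=0.0, alpha_st=0.0, max_efpr=100)
    print(psds.value)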

ATST: Could you share finetuning details, please?

Dear authors,

First, congratulations on the great results you have achieved.
These are wonderful numbers!
(BTW, I think I should clarify that I am the author of BYOL-A. Thank you for making progress!)

I'm trying to replicate your results, especially finetuning of VoxCeleb1, but I found that some of the details are missing.
Could you show us the exact parameters for finetuning, please?

Missing files are:

  • audiossl/methods/atst/shell/downtream/finetune/eval_env.sh
  • & eval_func.sh

And if possible, could you show me the parameters specifically for reproducing the VC1 finetuning?
I was able to start training as follows, but with guessed parameters. I would need the right ones...

cd audiossl/methods/atst
python downstream/train_finetune.py --n_last_blocks 12 \
--pretrained_ckpt_path ./base.ckpt \
--save_path exp \
--learning_rate 5e-2 \
--max_epochs 50 \
--warmup_epochs 5 \
--dataset_name voxceleb1 \
--data_path /lab/data/voxceleb1 \
--batch_size 64

BTW, I'm sure your paper would have been accepted by Interspeech. I hope you could also update the arxiv comment to clarify.

Thank you.
