nikvaessen / w2v2-speaker

Research code for the paper "Fine-tuning wav2vec2 for speaker recognition" found at https://arxiv.org/abs/2109.15053

License: MIT License
w2v2-speaker's Introduction

Fine-tuning wav2vec2 for speaker recognition

This is the code used to run the experiments in https://arxiv.org/abs/2109.15053 and https://ieeexplore.ieee.org/document/9746952/. Detailed logs of each training run can be found here:

The work can be cited with the following bibtex entry:

@INPROCEEDINGS{vaessen2022w2v2speaker,
  author={Vaessen, Nik and Van Leeuwen, David A.},
  booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, 
  title={Fine-Tuning Wav2Vec2 for Speaker Recognition}, 
  year={2022},
  volume={},
  number={},
  pages={7967-7971},
  doi={10.1109/ICASSP43922.2022.9746952}}

Installing dependencies

If poetry is not installed, see https://python-poetry.org/docs/. We also expect at least Python 3.8 on the system; if this is not the case, look into https://github.com/pyenv/pyenv, an easy tool for installing a specific Python version on your system.
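As a sketch (assuming a Linux shell; the installer URL is the one documented by poetry):

$ python3 --version   # verify this prints 3.8 or newer
$ curl -sSL https://install.python-poetry.org | python3 -   # install poetry for the current user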

The Python dependencies can be installed (in a project-specific virtual environment) as follows:

$ poetry shell  # enter project-specific virtual environment

From now on, every command that should be run inside the virtual environment is shown with the prompt (wav2vec-speaker-identification-<ID>-py<VERSION>) $, which we shorten to (xxx) $.

Then install all required python packages:

(xxx) $ pip install -U pip
(xxx) $ poetry update # install dependencies 

Because PyTorch currently serves its packages on PyPI incorrectly, we need to use pip to install the specific PyTorch version we need.

(xxx) $ pip install -r requirements/requirements_cuda101.txt # if CUDA 10.1
(xxx) $ pip install -r requirements/requirements_cuda110.txt # if CUDA 11.0

Make sure to modify/create a requirements file for your operating system and CUDA version.
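As an illustration of what such a file boils down to: it pins CUDA-specific wheels against PyTorch's stable wheel index. The exact versions below are assumptions for a CUDA 11.1 machine and should be matched to the ones pinned in pyproject.toml:

(xxx) $ pip install torch==1.8.1+cu111 torchaudio==0.8.1 \
        -f https://download.pytorch.org/whl/torch_stable.html   # illustrative versions only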

Finally, install the local package in the virtual environment by running

(xxx) $ poetry install

Setting up the environment

Copy the example environment variables:

$ cp .env.example .env 

You can then fill in .env accordingly.
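The exact keys are defined by .env.example, but based on the environment variables referenced elsewhere in this README and in the experiment configs (DATA_FOLDER, TEMP_FOLDER, LOG_FOLDER, NUM_GPUS, USE_COMET_ML), a filled-in .env plausibly looks like the sketch below; treat all values as placeholders for your own setup.

DATA_FOLDER=/home/you/data        # where datasets and archives live
TEMP_FOLDER=/tmp/w2v2-speaker     # scratch space for extraction
LOG_FOLDER=/home/you/logs         # training logs and checkpoints
NUM_GPUS=1                        # number of GPUs to train with
USE_COMET_ML=False                # set to True only if you configure comet.ml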

Downloading and using voxceleb1 and 2

I've experienced that the download links for voxceleb1/2 can be unstable. I recommend manually downloading the dataset from the google drive link displayed on https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html.

You should end up with 4 zip files, which should be placed in $DATA_FOLDER/voxceleb_archives (a quick sanity check is shown after the list):

  1. vox1_dev_wav.zip
  2. vox1_test_wav.zip
  3. vox2_dev_aac.zip
  4. vox2_test_aac.zip
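A quick sanity check that all four archives ended up in the right place (a sketch; it assumes .env defines DATA_FOLDER as described above):

$ source .env
$ ls -lh $DATA_FOLDER/voxceleb_archives/vox1_dev_wav.zip \
         $DATA_FOLDER/voxceleb_archives/vox1_test_wav.zip \
         $DATA_FOLDER/voxceleb_archives/vox2_dev_aac.zip \
         $DATA_FOLDER/voxceleb_archives/vox2_test_aac.zip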

You should also download the meta files of voxceleb. You can use preparation_scripts/download_voxceleb_meta.sh to download them to the expected location $DATA_FOLDER/voxceleb_meta.

Converting voxceleb2 data from .m4a to .wav

This requires ffmpeg to be installed on the machine. Check with ffmpeg -version. Assuming the voxceleb2 data is placed at $DATA_FOLDER/voxceleb_archives/vox2_dev_aac.zip and $DATA_FOLDER/voxceleb_archives/vox2_test_aac.zip, run the following commands, starting from the root project directory.

source .env

PDIR=$PWD # folder where this README is located
D=$DATA_FOLDER # location of data - should be set in .env file 
WORKERS=$(nproc --all) # number of CPUs available 

# extract voxceleb 2 data
cd $D
mkdir -p convert_tmp/train convert_tmp/test

unzip voxceleb_archives/vox2_dev_aac.zip -d convert_tmp/train
unzip voxceleb_archives/vox2_test_aac.zip -d convert_tmp/test

# run the conversion script
cd $PDIR
poetry run python preparation_scripts/voxceleb2_convert_to_wav.py $D/convert_tmp --num_workers $WORKERS

# rezip the converted data
cd $D/convert_tmp/train
zip $D/voxceleb_archives/vox2_dev_wav.zip wav -r

cd $D/convert_tmp/test
zip $D/voxceleb_archives/vox2_test_wav.zip wav -r

# delete the unzipped .m4a files
cd $D
rm -r convert_tmp

Note that this process can take a few hours on a fast machine and a day or more on a single (slow) CPU. Make sure to save the vox2_dev_wav.zip and vox2_test_wav.zip files somewhere secure, so you don't have to redo this process. :)
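For example, one way to check the new archives and copy them to a backup location (a sketch; the destination host and path are hypothetical):

# verify zip integrity before removing convert_tmp
$ unzip -t $DATA_FOLDER/voxceleb_archives/vox2_dev_wav.zip > /dev/null && echo "dev zip OK"
$ unzip -t $DATA_FOLDER/voxceleb_archives/vox2_test_wav.zip > /dev/null && echo "test zip OK"

# copy the rezipped archives somewhere safe (hypothetical destination)
$ rsync -av $DATA_FOLDER/voxceleb_archives/vox2_dev_wav.zip \
        $DATA_FOLDER/voxceleb_archives/vox2_test_wav.zip \
        user@backup-host:/backups/voxceleb/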

Downloading pre-trained models

You can run ./preparation_scripts/download_pretrained_models.sh to download the pre-trained wav2vec2 models to the required $DATA_FOLDER/pretrained_models directory.

Running the experiments

Below we show the commands for training each specified network; they should reproduce the results in the paper. Note that we used a SLURM GPU cluster, so each command includes hydra/launcher=slurm. If you want to reproduce the results locally, remove these launcher arguments, as shown in the sketch below.
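For example, the first grid-search command below would be run locally as (the same command with the hydra/launcher=slurm, hydra.launcher.exclude and hydra.launcher.array_parallelism overrides removed):

python run.py -m +experiment=speaker_wav2vec2_ce \
data.dataloader.train_batch_size=66 \
optim.algo.lr=1e-5,5e-5,9e-5,1e-4,2e-4,5e-4,1e-3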

wav2vec2-sv-ce

auto_lr_find

python run.py +experiment=speaker_wav2vec2_ce \
tune_model=True data/module=voxceleb1 \
trainer.auto_lr_find=auto_lr_find tune_iterations=5000

After 5k iterations, the suggested learning rate is visually around 1e-4.

grid search

grid = 1e-5, 5e-5, 9e-5, 1e-4, 2e-4, 5e-4, 1e-3

python run.py -m +experiment=speaker_wav2vec2_ce \
data.dataloader.train_batch_size=66 \
optim.algo.lr=1e-5,5e-5,9e-5,1e-4,2e-4,5e-4,1e-3 \
hydra/launcher=slurm hydra.launcher.exclude=cn104 hydra.launcher.array_parallelism=7

best performance n=3

python run.py -m +experiment=speaker_wav2vec2_ce \
data.dataloader.train_batch_size=66 optim.algo.lr=9e-5 \
seed=26160,79927,90537 \
hydra/launcher=slurm hydra.launcher.exclude=cn104 hydra.launcher.array_parallelism=3

best pooling n=3

python run.py -m +experiment=speaker_wav2vec2_ce \
data.dataloader.train_batch_size=66 optim.algo.lr=9e-5 \
seed=168621,597558,440108 \
network.stat_pooling_type=mean,mean+std,attentive,quantile,first,first+cls,last,middle,random,max \
hydra/launcher=slurm hydra.launcher.exclude=cn104 hydra.launcher.array_parallelism=4

wav2vec2-sv-aam

aam with m=0.2 and s=30
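For reference, m and s are the margin and scale of the additive angular margin (AAM) softmax, written here in its standard ArcFace form (the textbook formula, not code from this repo); $\theta_{y_i}$ is the angle between the $i$-th embedding and its target class weight:

$$\mathcal{L}_{\mathrm{AAM}} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{\,s \cos(\theta_{y_i} + m)}}{e^{\,s \cos(\theta_{y_i} + m)} + \sum_{j \neq y_i} e^{\,s \cos \theta_j}}$$

with $m = 0.2$ and $s = 30$ in the commands below.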

auto_lr_find

python run.py +experiment=speaker_wav2vec2_ce \
tune_model=True data/module=voxceleb1 \
trainer.auto_lr_find=auto_lr_find tune_iterations=5000 \
optim/loss=aam_softmax

grid search

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 \
optim.algo.lr=1e-5,5e-5,9e-5,1e-4,2e-4,5e-4,1e-3 \
hydra/launcher=slurm hydra.launcher.exclude=cn104 hydra.launcher.array_parallelism=7

same grid

best performance n=3

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 optim.algo.lr=0.00005 \
seed=29587,14352,70814 \
hydra/launcher=slurm hydra.launcher.exclude=cn104 hydra.launcher.array_parallelism=3

best pooling n=3

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 optim.algo.lr=0.00005 \
seed=392401,39265,62634  \
network.stat_pooling_type=mean,mean+std,attentive,quantile,first,first+cls,last,middle,random,max \
hydra/launcher=slurm hydra.launcher.exclude=cn104 hydra.launcher.array_parallelism=4

wav2vec2-sv-bce

auto_lr_find

python run.py +experiment=speaker_wav2vec2_pairs \
tune_model=True data/module=voxceleb1_pairs \
trainer.auto_lr_find=auto_lr_find tune_iterations=5000

grid search

5e-6,7e-6,9e-6,1e-5,2e-5,3e-5,4e-5,1e-4

python run.py -m +experiment=speaker_wav2vec2_pairs \
optim.algo.lr=5e-6,7e-6,9e-6,1e-5,2e-5,3e-5,4e-5,1e-4 \
data.dataloader.train_batch_size=32 \
hydra/launcher=slurm hydra.launcher.exclude=cn104 hydra.launcher.array_parallelism=8

best performance n=4

python run.py -m +experiment=speaker_wav2vec2_pairs \
optim.algo.lr=0.00003 data.dataloader.train_batch_size=32 \
seed=154233,979426,971817,931201 \
hydra/launcher=slurm hydra.launcher.exclude=cn104 hydra.launcher.array_parallelism=4 

xvector

auto_lr_find

python run.py +experiment=speaker_xvector \
tune_model=True data/module=voxceleb1 \
trainer.auto_lr_find=auto_lr_find tune_iterations=5000

grid search

1e-5,6e-5,1e-4,2e-4,3e-4,4e-4,8e-4,1e-3

python run.py -m +experiment=speaker_xvector \
optim.algo.lr=1e-5,6e-5,1e-4,2e-4,3e-4,4e-4,8e-4,1e-3 \
data.dataloader.train_batch_size=66 \
hydra/launcher=slurm hydra.launcher.exclude=cn105 hydra.launcher.array_parallelism=8

best performance n=3

python run.py -m +experiment=speaker_xvector \
optim.algo.lr=0.0004 trainer.max_steps=100_000 \
data.dataloader.train_batch_size=66 \
seed=82713,479728,979292 \
hydra/launcher=slurm hydra.launcher.exclude=cn105 hydra.launcher.array_parallelism=6

ecapa-tdnn

auto_lr_find

python run.py +experiment=speaker_ecapa_tdnn \
tune_model=True data/module=voxceleb1 \
trainer.auto_lr_find=auto_lr_find tune_iterations=5000

grid search

5e-6,1e-5,5e-4,1e-4,5e-3,7e-4,9e-4,1e-3

python run.py -m +experiment=speaker_ecapa_tdnn \
optim.algo.lr=5e-6,1e-5,5e-4,1e-4,5e-3,7e-4,9e-4,1e-3 \
data.dataloader.train_batch_size=66 \
hydra/launcher=slurm hydra.launcher.exclude=cn105 hydra.launcher.array_parallelism=8

best performance n=3

python run.py -m +experiment=speaker_ecapa_tdnn \
optim.algo.lr=0.001 trainer.max_steps=100_000 \
data.dataloader.train_batch_size=66 \
seed=494671,196126,492116 \
hydra/launcher=slurm hydra.launcher.exclude=cn105 hydra.launcher.array_parallelism=6

Ablation

baseline

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 optim.algo.lr=0.00005 \
seed=392401,39265,62634 network.stat_pooling_type=first+cls \
hydra/launcher=slurm hydra.launcher.array_parallelism=3

unfrozen feature extractor

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 optim.algo.lr=0.00005 \
seed=914305,386390,865459 network.stat_pooling_type=first+cls \
network.completely_freeze_feature_extractor=False tag=no_freeze \
hydra/launcher=slurm hydra.launcher.array_parallelism=3 hydra.launcher.exclude=cn104

no pre-trained weights

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 optim.algo.lr=0.00005 \
seed=517646,414321,137524 network.stat_pooling_type=first+cls \
network.completely_freeze_feature_extractor=False network.reset_weights=True tag=no_pretrain \
hydra/launcher=slurm hydra.launcher.array_parallelism=3 hydra.launcher.exclude=cn104

no layerdrop

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 optim.algo.lr=0.00005 \
seed=15249,728106,821754 network.stat_pooling_type=first+cls \
network.layerdrop=0.0 tag=no_layer \
hydra/launcher=slurm hydra.launcher.array_parallelism=3

no dropout

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 optim.algo.lr=0.00005 \
seed=627687,883727,154405 network.stat_pooling_type=first+cls \
network.layerdrop=0.0 network.attention_dropout=0 \
network.feat_proj_dropout=0 network.hidden_dropout=0 tag=no_drop \
hydra/launcher=slurm hydra.launcher.array_parallelism=3 

no time masking

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 optim.algo.lr=0.00005 \
seed=602400,553540,419322 network.stat_pooling_type=first+cls \
network.layerdrop=0.0 network.attention_dropout=0 network.feat_proj_dropout=0 \
network.hidden_dropout=0 network.mask_time_prob=0 tag=no_mask \
hydra/launcher=slurm hydra.launcher.array_parallelism=3 

batch size 32

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=32 trainer.max_steps=200_000 \
optim.algo.lr=0.00005 network.stat_pooling_type=first+cls \
tag=bs_32 seed=308966,753370,519822 \
hydra/launcher=slurm hydra.launcher.array_parallelism=3 

batch size 128

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=128 trainer.max_steps=50_000 \
optim.algo.lr=0.00005 seed=54375,585956,637400 \
network.stat_pooling_type=first+cls tag=bs_128 \
hydra/launcher=slurm hydra.launcher.array_parallelism=3 hydra.launcher.exclude=cn104

constant lr=3e-6

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 optim.algo.lr=3e-6 \
seed=549686,190215,637679 network.stat_pooling_type=first+cls \
optim/schedule=constant tag=lr_low \
hydra/launcher=slurm hydra.launcher.array_parallelism=3 

constant lr=5e-5

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 optim.algo.lr=0.00005 \
seed=419703,980724,124995 network.stat_pooling_type=first+cls \
optim/schedule=constant tag=lr_same \
hydra/launcher=slurm hydra.launcher.array_parallelism=3  

tri_stage

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 optim.algo.lr=0.00005 \
seed=856797,952324,89841 network.stat_pooling_type=first+cls \
optim/schedule=tri_stage tag=lr_3stage \
optim.schedule.scheduler.lr_lambda.initial_lr=1e-7 optim.schedule.scheduler.lr_lambda.final_lr=1e-7 \
hydra/launcher=slurm hydra.launcher.array_parallelism=3

exp decay

python run.py -m +experiment=speaker_wav2vec2_aam \
data.dataloader.train_batch_size=66 optim.algo.lr=0.00005 seed=962764,682423,707761 \
network.stat_pooling_type=first+cls optim/schedule=exp_decay tag=lr_exp_decay \
optim.schedule.scheduler.lr_lambda.final_lr=1e-7 \
hydra/launcher=slurm hydra.launcher.array_parallelism=3  

w2v2-speaker's People

Contributors: davidavdav, nikvaessen
w2v2-speaker's Issues

Can I use the code on Windows, or does it only run on Linux?

Hello, I find that this project is used on Linux, and when I try to use it on Windows, it throws the problem shown below.
[screenshot]
UserWarning: torchaudio C++ extension is not available.
warnings.warn('torchaudio C++ extension is not available.')
Error executing job with overrides: ['+experiment=speaker_wav2vec2_ce', 'tune_model=True', 'data/module=voxceleb1', 'trainer.auto_lr_find=auto_lr_find', 'tune_iterations=5000', 'optim/loss=aam_softmax']

So I want to ask: can I use this project on Windows?

A small error in the readme

Hello author, in the readme's description of downloading the meta data, shouldn't the text be changed to use download_voxceleb_meta.sh, i.e.:
You should also download the meta files of voxceleb. You can use preparation_scripts/download_voxceleb_meta.sh to download them to the expected location $DATA_FOLDER/voxceleb_meta

Max retries exceeded with url

When I run the code, I get the following error:
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3-proxy.huggingface.tech', port=443): Max retries exceeded with url: /lfs.huggingface.co/facebook/wav2vec2-base/3249fe98bfc62fcbc26067f724716a6ec49d12c4728a2af1df659013905dff21?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20230104%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230104T080039Z&X-Amz-Expires=259200&X-Amz-Signature=6160a058dde038990c80ce830b5774c87121b778dfa5f4bf22c337be64383514&X-Amz-SignedHeaders=host&response-content-disposition=attachment%3B%20filename%3D%22pytorch_model.bin%22&x-id=GetObject (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7faae2481d60>, 'Connection to s3-proxy.huggingface.tech timed out. (connect timeout=10.0)'))
I wonder if you have any suggestions about that; thank you so much. It almost drove me insane. :(

strange error isinstance() arg 2 must be a type or tuple of types

MKL_NUM_THREADS=1 CUDA_VISIBLE_DEVICES=1 python run.py +experiment=speaker_ecapa_tdnn tune_model=True data/module=dogbark trainer.auto_lr_find=auto_lr_find tune_iterations=5000
/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/pytorch_lightning/core/decorators.py:65: LightningDeprecationWarning: The @auto_move_data decorator is deprecated in v1.3 and will be removed in v1.5. Please use trainer.predict instead for inference. The decorator was applied to forward
rank_zero_deprecation(
/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/pytorch_lightning/core/decorators.py:65: LightningDeprecationWarning: The @auto_move_data decorator is deprecated in v1.3 and will be removed in v1.5. Please use trainer.predict instead for inference. The decorator was applied to forward
rank_zero_deprecation(
data_folder: ${oc.env:DATA_FOLDER}
temp_folder: ${oc.env:TEMP_FOLDER}
log_folder: ${oc.env:LOG_FOLDER}
seed: 42133724
tune_model: true
tune_iterations: 5000
verify_model: false
fit_model: true
eval_model: true
load_network_from_checkpoint: null
use_cometml: ${oc.decode:${oc.env:USE_COMET_ML}}
gpus: ${oc.decode:${oc.env:NUM_GPUS}}
project_name: ecapa-tdnn
experiment_name: ${random_uuid:}
tag: ${now:%Y-%m-%d}
callbacks:
  to_add:
  - lr_monitor
  - ram_monitor
  - checkpoint
  lr_monitor:
    _target_: pytorch_lightning.callbacks.LearningRateMonitor
  ram_monitor:
    _target_: src.callbacks.memory_monitor.RamMemoryMonitor
    frequency: 100
  checkpoint:
    _target_: pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint
    monitor: val_eer
    save_top_k: 1
    mode: min
    filename: '{epoch}.{step}.{val_eer:.4f}.best'
    save_last: true
    every_n_val_epochs: 1
    last_checkpoint_pattern: '{epoch}.{step}.{val_eer:.4f}.last'
data:
  module:
    _target_: src.data.modules.speaker.voxceleb.VoxCelebDataModuleConfig
    use_voxceleb1_dev: true
    use_voxceleb1_test: true
    use_voxceleb2_dev: false
    use_voxceleb2_test: false
    all_voxceleb1_is_test_set: false
    has_train: true
    has_val: false
    has_test: true
    test_split_file_path: ${data_folder}/veri_test.txt
    shards_folder: ${data_folder}/dog_barking_shards
    extraction_folder: ${temp_folder}/dog_barking
    split_mode: equal
    train_val_ratio: 0.97
    num_val_speakers: -1
    eer_validation_pairs: 10000
    sequential_same_speaker_samples: 1
    min_unique_speakers_per_shard: 500
    discard_partial_shards: true
    voxceleb1_train_zip_path: ${data_folder}/voxceleb_archives/vox1_dev_wav.zip
    voxceleb1_test_zip_path: ${data_folder}/voxceleb_archives/vox1_test_wav.zip
    voxceleb2_train_zip_path: ${data_folder}/voxceleb_archives/vox2_dev_wav.zip
    voxceleb2_test_zip_path: ${data_folder}/voxceleb_archives/vox2_test_wav.zip
    train_collate_fn: pad_right
    val_collate_fn: default
    test_collate_fn: default
    add_batch_debug_info: false
    limit_samples: -1
    batch_processing_mode: categorical
  pipeline:
    train_pipeline:
    - selector_train
    - filterbank
    - normalizer
    val_pipeline:
    - selector_val
    - filterbank
    - normalizer
    test_pipeline:
    - filterbank
    - normalizer
    selector_train:
      _target_: src.data.preprocess.random_chunks.AudioChunkSelector
      selection_strategy: random
      desired_chunk_length_sec: 3
    selector_val:
      _target_: src.data.preprocess.random_chunks.AudioChunkSelector
      selection_strategy: start
      desired_chunk_length_sec: 3
    filterbank:
      _target_: src.data.preprocess.audio_features.FilterBank
      n_mels: 40
    normalizer:
      _target_: src.data.preprocess.input_normalisation.InputNormalizer2D
      normalize_over_channels: true
  shards:
    _target_: src.data.common.WebDataSetShardConfig
    samples_per_shard: 500
    use_gzip_compression: true
    shuffle_shards: true
    queue_size: 1024
  dataloader:
    _target_: src.data.common.SpeakerDataLoaderConfig
    train_batch_size: 32
    val_batch_size: ${data.dataloader.train_batch_size}
    test_batch_size: 1
    num_workers: 2
    pin_memory: true
evaluator:
  _target_: src.evaluation.speaker.cosine_distance.CosineDistanceEvaluator
  center_before_scoring: false
  length_norm_before_scoring: false
  max_num_training_samples: 0
network:
  _target_: src.lightning_modules.speaker.ecapa_tdnn.EcapaTDNNModuleConfig
  input_mel_coefficients: ${data.pipeline.filterbank.n_mels}
  lin_neurons: 192
  channels:
  - 1024
  - 1024
  - 1024
  - 1024
  - 3072
  kernel_sizes:
  - 5
  - 3
  - 3
  - 3
  - 1
  dilations:
  - 1
  - 2
  - 3
  - 4
  - 1
  attention_channels: 128
  res2net_scale: 8
  se_channels: 128
  global_context: true
  pretrained_weights_path: null
  explicit_stat_pool_embedding_size: null
  explicit_num_speakers: null
tokenizer:
  _target_: src.tokenizer.tokenizer_wav2vec2.Wav2vec2TokenizerConfig
  tokenizer_huggingface_id: facebook/wav2vec2-base-960h
optim:
  algo:
    _target_: torch.optim.Adam
    lr: 0.0001
    weight_decay: 0
    betas:
    - 0.9
    - 0.999
    eps: 1.0e-08
    amsgrad: false
  schedule:
    scheduler:
      _target_: torch.optim.lr_scheduler.OneCycleLR
      max_lr: ${optim.algo.lr}
      total_steps: ${trainer.max_steps}
      div_factor: 25
    monitor: null
    interval: step
    frequency: null
    name: null
  loss:
    _target_: src.optim.loss.aam_softmax.AngularAdditiveMarginSoftMaxLoss
    input_features: 192
    output_features: 5994
    margin: 0.2
    scale: 30
    output_features: 48
trainer:
  _target_: pytorch_lightning.Trainer
  gpus: ${gpus}
  accelerator: null
  num_nodes: 1
  min_epochs: null
  max_epochs: null
  min_steps: null
  max_steps: 100000
  val_check_interval: 5000
  accumulate_grad_batches: 1
  progress_bar_refresh_rate: 500
  deterministic: false
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  fast_dev_run: false
  precision: 32
  num_sanity_val_steps: 2
  auto_lr_find: auto_lr_find
  gradient_clip_val: 0

3.9.0
pytorch_lightning.__version__='1.4.5'
torch.__version__='1.8.2+cu102'
[2023-03-27 20:54:04,608][pytorch_lightning.utilities.seed][INFO] - Global seed set to 42133724
Error executing job with overrides: ['+experiment=speaker_ecapa_tdnn', 'tune_model=True', 'data/module=dogbark', 'trainer.auto_lr_find=auto_lr_find', 'tune_iterations=5000']
Traceback (most recent call last):
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 62, in _call_target
    return _target_(*args, **kwargs)
  File "<string>", line 33, in __init__
  File "/home/arthur/dog_verification/w2v2-speaker-master/src/config_util.py", line 26, in __post_init__
    post_init_type_cast(self)
  File "/home/arthur/dog_verification/w2v2-speaker-master/src/config_util.py", line 41, in post_init_type_cast
    elif isinstance(value, typehint_cls):
TypeError: isinstance() arg 2 must be a type or tuple of types

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/_internal/utils.py", line 211, in run_and_report
    return func()
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/_internal/utils.py", line 368, in <lambda>
    lambda: hydra.run(
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 110, in run
    _ = ret.return_value
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/core/utils.py", line 233, in return_value
    raise self._return_value
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/core/utils.py", line 160, in run_job
    ret.return_value = task_function(task_cfg)
  File "run.py", line 38, in run
    return run_train_eval_script(cfg)
  File "/home/arthur/dog_verification/w2v2-speaker-master/src/main.py", line 429, in run_train_eval_script
    dm = construct_data_module(cfg)
  File "/home/arthur/dog_verification/w2v2-speaker-master/src/main.py", line 127, in construct_data_module
    dm_cfg = hydra.utils.instantiate(cfg.data.module)
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 180, in instantiate
    return instantiate_node(config, *args, recursive=_recursive_, convert=_convert_)
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 249, in instantiate_node
    return _call_target(_target_, *args, **kwargs)
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 64, in _call_target
    raise type(e)(
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 62, in _call_target
    return _target_(*args, **kwargs)
  File "<string>", line 33, in __init__
  File "/home/arthur/dog_verification/w2v2-speaker-master/src/config_util.py", line 26, in __post_init__
    post_init_type_cast(self)
  File "/home/arthur/dog_verification/w2v2-speaker-master/src/config_util.py", line 41, in post_init_type_cast
    elif isinstance(value, typehint_cls):
TypeError: Error instantiating 'src.data.modules.speaker.voxceleb.VoxCelebDataModuleConfig' : isinstance() arg 2 must be a type or tuple of types

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "run.py", line 48, in <module>
    run()
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/main.py", line 49, in decorated_main
    _run_hydra(
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/_internal/utils.py", line 367, in _run_hydra
    run_and_report(
  File "/home/arthur/.conda/envs/w2v2/lib/python3.8/site-packages/hydra/_internal/utils.py", line 251, in run_and_report
    assert mdl is not None
AssertionError


tasks of speech synthesis

Hello, what should I do if I want to use your model's output as the speaker ID for a speech synthesis project? Can you publish your trained model?

Can't install dependencies with Poetry

Hi, I'm having some trouble installing the dependencies with Poetry. I always get this error when Poetry is installing fairseq:

• Installing fairseq (0.10.2): Failed

ChefBuildError

Backend subprocess exited when trying to invoke get_requires_for_build_wheel

Traceback (most recent call last):
  File "<string>", line 214, in <module>
  File "<string>", line 136, in do_setup
  File "/tmp/tmpdc61oe7j/.venv/lib/python3.9/site-packages/setuptools/__init__.py", line 107, in setup
    _install_setup_requires(attrs)
  File "/tmp/tmpdc61oe7j/.venv/lib/python3.9/site-packages/setuptools/__init__.py", line 80, in _install_setup_requires
    _fetch_build_eggs(dist)
  File "/tmp/tmpdc61oe7j/.venv/lib/python3.9/site-packages/setuptools/__init__.py", line 85, in _fetch_build_eggs
    dist.fetch_build_eggs(dist.setup_requires)
  File "/tmp/tmpdc61oe7j/.venv/lib/python3.9/site-packages/setuptools/build_meta.py", line 74, in fetch_build_eggs
    raise SetupRequirementsError(specifier_list)
setuptools.build_meta.SetupRequirementsError: ['cython', 'numpy', 'setuptools>=18.0']

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jovyan/.local/share/pypoetry/venv/lib/python3.9/site-packages/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
    main()
  File "/home/jovyan/.local/share/pypoetry/venv/lib/python3.9/site-packages/pyproject_hooks/_in_process/_in_process.py", line 335, in main
    json_out['return_val'] = hook(**hook_input['kwargs'])
  File "/home/jovyan/.local/share/pypoetry/venv/lib/python3.9/site-packages/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
    return hook(config_settings)
  File "/tmp/tmpdc61oe7j/.venv/lib/python3.9/site-packages/setuptools/build_meta.py", line 338, in get_requires_for_build_wheel
    return self._get_build_requires(config_settings, requirements=['wheel'])
  File "/tmp/tmpdc61oe7j/.venv/lib/python3.9/site-packages/setuptools/build_meta.py", line 320, in _get_build_requires
    self.run_setup()
  File "/tmp/tmpdc61oe7j/.venv/lib/python3.9/site-packages/setuptools/build_meta.py", line 335, in run_setup
    exec(code, locals())
  File "<string>", line 217, in <module>
IsADirectoryError: [Errno 21] Is a directory: 'fairseq/examples'


at ~/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/installation/chef.py:152 in _prepare
    148│ 
    149│                 error = ChefBuildError("\n\n".join(message_parts))
    150│ 
    151│             if error is not None:
  → 152│                 raise error from None
    153│ 
    154│             return path
    155│ 
    156│     def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:

Note: This error originates from the build backend, and is likely not a problem with poetry but with fairseq (0.10.2) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "fairseq (==0.10.2)"'.

I also tried the suggestion at the end, but it gives another error:

Collecting fairseq==0.10.2
  Using cached fairseq-0.10.2.tar.gz (938 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [30 lines of output]
      Traceback (most recent call last):
        File "<string>", line 214, in <module>
        File "<string>", line 136, in do_setup
        File "/tmp/pip-build-env-067bbnv_/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 107, in setup
          _install_setup_requires(attrs)
        File "/tmp/pip-build-env-067bbnv_/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 80, in _install_setup_requires
          _fetch_build_eggs(dist)
        File "/tmp/pip-build-env-067bbnv_/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 85, in _fetch_build_eggs
          dist.fetch_build_eggs(dist.setup_requires)
        File "/tmp/pip-build-env-067bbnv_/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 74, in fetch_build_eggs
          raise SetupRequirementsError(specifier_list)
      setuptools.build_meta.SetupRequirementsError: ['cython', 'numpy', 'setuptools>=18.0']
      
      During handling of the above exception, another exception occurred:
      
      Traceback (most recent call last):
        File "/home/jovyan/.cache/pypoetry/virtualenvs/wav2vec-speaker-identification-bn1jopCZ-py3.9/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/home/jovyan/.cache/pypoetry/virtualenvs/wav2vec-speaker-identification-bn1jopCZ-py3.9/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "/home/jovyan/.cache/pypoetry/virtualenvs/wav2vec-speaker-identification-bn1jopCZ-py3.9/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
        File "/tmp/pip-build-env-067bbnv_/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 338, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
        File "/tmp/pip-build-env-067bbnv_/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 320, in _get_build_requires
          self.run_setup()
        File "/tmp/pip-build-env-067bbnv_/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 335, in run_setup
          exec(code, locals())
        File "<string>", line 217, in <module>
      IsADirectoryError: [Errno 21] Is a directory: 'fairseq/examples'
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

I also tried to update fairseq to v0.12.0 but the other dependencies in the project are not compatible. Do you have any suggestions or can you update the dependencies?

Edit: Forgot to mention: I'm using Ubuntu 20.04.5 LTS (Focal Fossa), python 3.9, pip 23.0.1

details about other python libraries

Thanks for your code. I'm just getting started with Python. There are always some errors when I configure the environment. Can you list the versions of pytorch-lightning, torchmetrics, and the other Python libraries you use?

Custom Dataset

First of all, thanks for sharing the code of your paper.

I would like to train the aam model on my own data. How should my data be organized to train on it?

requirements.txt

Could you please release the versions of the packages you use? Not just the torch version, but the versions of the other packages as well.

Using Wav2Vec2 model

Thanks for your code, it helped me a lot, but I tried to use the Wav2Vec2 model and got the following error:
RuntimeError: Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [32, 1, 301, 40]
The error is raised at output in this function:

def wav2vec2_embed_raw_audio(input_tensor: t.Tensor, model: Wav2Vec2Model) -> t.Tensor:
    output: Wav2Vec2BaseModelOutput = model(input_tensor)

    features = output.last_hidden_state
    features = features.transpose(1, 2)

    return features

Should I pass the path of the voxceleb data as input_tensor, or should it be a t.Tensor?

And I have another problem: download_pretrained_models.sh downloaded some .pt files, but they seem to be unused, and I don't know where to use them.

releasing fine-tuned model

Hello, thanks for your work and code, which have helped my research a lot. I wonder if you could provide the fine-tuned model?
