Comments (8)
from pytorch_xvectors.
If I understood your procedure correctly, you have prepared the training data using Kaldi's DIHARD recipe and trained x-vectors using this repo, right?
The Kaldi repo reports 26.30% DER using supervised calibration (https://github.com/kaldi-asr/kaldi/blob/master/egs/dihard_2018/v2/run.sh), while pytorch_xvectors returned similar numbers using spectral clustering (https://github.com/manojpamk/pytorch_xvectors/blob/master/README.md). Note that the DIHARD recipe uses the VoxCeleb corpora for x-vector training.
Now as to why the non-augmented model returned a similar DER, I am not sure. It is possible that the clean data alone is large enough that augmentation does not yield significant improvements on this task.
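For context on how those numbers are composed: DER sums the missed-speech, false-alarm, and speaker-confusion durations over the total scored speech (this is what NIST's md-eval.pl reports). A minimal sketch, with illustrative function and argument names rather than any real tool's API:

```python
# Sketch of the DER definition; inputs are durations in seconds.
# Function/argument names here are illustrative, not a real API.
def der(missed, false_alarm, confusion, total_scored_speech):
    """Diarization Error Rate = (miss + FA + speaker confusion) / scored speech."""
    return (missed + false_alarm + confusion) / total_scored_speech

# e.g. 26.3 seconds of total error over 100 seconds of scored speech:
print(round(der(10.0, 5.0, 11.3, 100.0), 3))  # -> 0.263, i.e. 26.3% DER
```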
Manoj
Thanks for your reply.
Sorry to bother you. I have seen that you achieve better results, so I want to consult you on a few questions.
I didn't use your repo to train the x-vector model. I have reproduced the x-vector model before; the model structure is the same as yours, but the training steps are a bit different. I didn't execute the 'prepare for egs' procedure. Instead, I used the MFCC features extracted from the VoxCeleb corpora to train the x-vector model directly, whereas you trained on the 'egs'. I think that may cause the difference in performance.
Another question: I have seen some results in 'diarize.sh' (https://github.com/manojpamk/pytorch_xvectors/blob/master/egs/diarize.sh). The results on the DIHARD2 dev set using PLDA are worse than the Kaldi baseline. Is there any issue with computing the PLDA scores?
Yuan
Preparing features in the egs format mainly assists training by ensuring that samples within a batch have the same duration (i.e., number of frames). Further, the samples in egs files are subsets of the utterances themselves, so you can think of them as multiple equal-duration examples generated from the same utterance. Note that both Kaldi and this repo perform CMVN and remove non-speech frames before egs preparation.
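The chunking idea above can be sketched as follows. This is a simplified illustration, not Kaldi's actual egs pipeline (Kaldi samples chunk lengths per archive and shuffles across utterances); the function name and the chunk/shift values are assumptions for the example:

```python
import numpy as np

def make_egs(feats, chunk_len=300, shift=150):
    """Cut one utterance's feature matrix (frames x dims) into fixed-length,
    possibly overlapping chunks, so every example in a batch has the same
    number of frames. chunk_len/shift values are illustrative."""
    chunks = []
    for start in range(0, feats.shape[0] - chunk_len + 1, shift):
        chunks.append(feats[start:start + chunk_len])
    return chunks

# e.g. 1000 frames of 30-dim MFCCs (after CMVN and silence removal):
utt = np.random.randn(1000, 30)
egs = make_egs(utt)
print(len(egs), egs[0].shape)  # several equal-duration examples from one utterance
```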
All things said, I don't think 27% DER is too bad.
I believe the higher PLDA numbers are due to the AHC threshold not being optimized - I currently set it to 0.
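To illustrate why the stopping threshold matters: agglomerative hierarchical clustering (AHC) keeps merging clusters until the linkage score crosses the threshold, so an untuned value can over- or under-cluster speakers. A minimal sketch using SciPy's distance-based AHC on toy embeddings (the repo clusters PLDA similarity scores instead, so this is an analogy, not the repo's code):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two well-separated toy "speakers" in 2-D embedding space.
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0.0, 0.1, (5, 2)),   # speaker A near (0, 0)
                 rng.normal(3.0, 0.1, (5, 2))])  # speaker B near (3, 3)

# Average-linkage AHC; fcluster's `t` plays the stopping-threshold role:
# merges stop once cluster distances exceed t.
Z = linkage(emb, method="average")
labels = fcluster(Z, t=1.0, criterion="distance")
print(len(set(labels)))  # -> 2 clusters for this separable toy data
```

With `t` set far too large everything collapses into one cluster (one "speaker"); far too small and every segment becomes its own speaker - the same failure modes an untuned AHC threshold causes in diarization.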
Thanks a lot. I will add the 'egs' preparation to my experiment.
I hope to ask you more questions.
Thanks again!
Hi Manoj,
Sorry to bother you. I don't know how to run an evaluation on the AMI dataset. Is there a recipe for it?
Thanks.
Hi Yuan,
Do you already have the AMI corpus downloaded?
- For audio, check out the kaldi recipe (https://github.com/kaldi-asr/kaldi/blob/master/egs/ami/s5/run_ihm.sh)
- I don't know if the RTTMs are available, but they can be created from the segments and utt2spk files prepared by the Kaldi recipe.
- To evaluate diarization, use this script (https://github.com/manojpamk/pytorch_xvectors/blob/master/egs/diarize.sh) after setting the wavDir and rttmDir variables appropriately.
- To determine the train-dev-eval session splits, check out this paper: https://arxiv.org/pdf/1902.03190.pdf
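On creating RTTMs from the Kaldi files: each `segments` line gives `<utt-id> <recording-id> <start> <end>` and `utt2spk` maps utterance to speaker, which is enough to emit RTTM `SPEAKER` records. A hypothetical helper (the function name and exact formatting choices are mine, not part of either repo):

```python
# Hypothetical helper: build SPEAKER records of an RTTM file from Kaldi-style
# `segments` (utt reco start end) and `utt2spk` (utt spk) files.
def segments_to_rttm(segments_path, utt2spk_path, rttm_path):
    with open(utt2spk_path) as f:
        utt2spk = dict(line.split() for line in f)
    with open(segments_path) as seg, open(rttm_path, "w") as out:
        for line in seg:
            utt, reco, start, end = line.split()
            dur = float(end) - float(start)
            # RTTM SPEAKER record: type, file, channel, onset, duration,
            # two <NA> fields, speaker name, two more <NA> fields.
            out.write(f"SPEAKER {reco} 1 {float(start):.2f} {dur:.2f} "
                      f"<NA> <NA> {utt2spk[utt]} <NA> <NA>\n")
```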
Manoj