roman-vygon / triplet_loss_kws
Learning Efficient Representations for Keyword Spotting with Triplet Loss
License: MIT License
Hi!
First of all, thank you for open-sourcing the code. I have tried to replicate the results and ran into a few issues during training.
That said, I can load your pretrained model Res15_35 (no manifest files for 12 are provided yet) and reproduce the reported accuracy in the Triplet evaluation. However, there is no learning when training the model from scratch. The command used was:
python TripletEncoder.py --name=test_encoder --manifest=35 --mode=Res15 --per_class=5 --per_batch=10 --hidden_size=45
Several per_batch and per_class values have been tested, with the same behaviour every time: the triplet loss keeps oscillating between roughly 0.7 and 1.1, with no evident decrease during training.
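For context on the numbers above: a loss hovering around 1.0 is consistent with PyTorch's default triplet margin of 1.0, which is what you get when positive and negative distances stay roughly equal. A minimal sketch using the standard `torch.nn.TripletMarginLoss` (illustrative only, not the repo's exact training code):

```python
import torch
import torch.nn as nn

# Default margin is 1.0: loss = max(d(a, p) - d(a, n) + margin, 0).
loss_fn = nn.TripletMarginLoss(margin=1.0)

anchor = torch.randn(8, 45)                    # hidden_size=45, as in the command above
positive = anchor + 0.01 * torch.randn(8, 45)  # embeddings close to the anchor
negative = torch.randn(8, 45)                  # unrelated embeddings, far from the anchor

loss = loss_fn(anchor, positive, negative)
print(loss.item())  # close to 0 when positives are near and negatives are far
```

When the encoder is learning, positives collapse toward their anchors and negatives are pushed beyond the margin, so the loss should trend well below 1.0 rather than oscillate around it.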
Then running the infer train script through:
python infer_train.py --name=res15_encoder --manifest=35 --model=Res15 --enc_step=25440 --hidden_size=45
The resulting Avg Accuracy is around 20 to 35. This does not happen when loading the pretrained model; do you know what could be going on?
Thanks in advance,
Biel.
Hi~
I followed your README.md, but I couldn't find where to get libri100_train.json, libri100_dev.json, and libri100_test.json, which are needed to convert the LibriWords manifests with convert_path_prefix.ipynb.
Could the pretrained weights be provided?
Thanks
Torch autograd expects a Variable with requires_grad set to True but doesn't find any such Variables. Any idea how to get around this?
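For reference, that autograd error typically means no leaf tensor in the graph was marked as trainable. A minimal illustration in plain PyTorch (not tied to this repo's code):

```python
import torch

# Without requires_grad, autograd has nothing to differentiate:
# torch.randn(3).sum().backward() would raise
# "element 0 of tensors does not require grad".

# Marking the leaf tensor fixes it.
x = torch.randn(3, requires_grad=True)
y = (x ** 2).sum()
y.backward()
print(x.grad)  # gradient of sum(x^2) is 2*x, now populated
```

In practice the usual culprits are model parameters frozen with `requires_grad_(False)`, or a forward pass accidentally run under `torch.no_grad()`.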
After installing the required libraries, I have been trying to run your pipeline, but it gives Segmentation fault (core dumped) every time. Can you please share a requirements.txt, and any ideas on how to combat this problem?
Dear author, the Google link seems to be invalid. Could you reshare the file convert_path_prefix.ipynb? Thanks a lot!
Hello Author,
I'm trying to train TripletEncoder.py, but the GPU throws a CUDA out-of-memory error. Can you specify the memory requirements for training this code?
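While waiting for exact figures, a common workaround for CUDA OOM is to lower `--per_class`/`--per_batch`, or to run the forward pass in smaller slices. A hedged sketch of chunked inference (`forward_in_chunks` is an illustrative helper, not part of the repo):

```python
import torch
import torch.nn as nn

def forward_in_chunks(model, batch, chunk_size):
    """Process the batch in slices to bound peak memory usage."""
    outputs = []
    with torch.no_grad():
        for i in range(0, batch.shape[0], chunk_size):
            outputs.append(model(batch[i:i + chunk_size]))
    return torch.cat(outputs)

model = nn.Linear(45, 12)        # stand-in for the actual encoder
batch = torch.randn(64, 45)
out = forward_in_chunks(model, batch, chunk_size=16)
print(out.shape)  # torch.Size([64, 12])
```

For training (where `no_grad` is not an option), the analogous levers are a smaller batch and gradient accumulation over several small batches.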
Dear authors
With what parameters did you train Res8 with manifest 12?
Could you provide the pre-trained model? It's missing from the run folder.
Thank you for your help!
Hi there,
Thank you for the great repo! Could you let me know whether you have also open-sourced the KNN code used on the triplet-loss-trained representations, as well as the models pre-trained with the triplet loss?
Thank You
Hi there,
Thank you for the great repo! While installing the packages listed in the README with CUDA, I came across some version conflicts and circular dependencies that are difficult to resolve. If possible, could anyone share a requirements.txt file? It would make setup much easier and would be highly appreciated.
Kind Regards