# speech_recognition
Summary:
Two models were evaluated: the Simple Audio Recognition model from the TensorFlow tutorial, and a model similar to the one introduced in https://yerevann.github.io/2015/10/11/spoken-language-identification-with-deep-convolutional-networks/. Both were trained on the dataset used in the speech_commands example from TensorFlow.
Steps to reproduce:
For the first model:
- Install Docker on your machine
- Pull the TensorFlow Docker image: `docker pull tensorflow/tensorflow`
- Run a container: `docker run -it -p 8888:8888 tensorflow/tensorflow`
- Go through the steps from the tutorial: https://www.tensorflow.org/versions/master/tutorials/audio_recognition
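One detail worth noting from the tutorial: it partitions the dataset into training, validation, and testing sets by hashing each filename, so a clip's assignment stays stable even as new recordings are added. A minimal sketch of that idea (a simplified reimplementation, not the tutorial's exact code; the constants and percentages here are illustrative):

```python
import hashlib
import re

MAX_FILES_PER_CLASS = 2 ** 27 - 1  # illustrative cap for the hash bucketing

def which_set(filename, validation_percentage=10, testing_percentage=10):
    """Deterministically assign a file to 'training', 'validation' or 'testing'.

    Clips from the same speaker share a prefix before '_nohash_', so hashing
    only that prefix keeps one speaker's recordings in a single partition.
    """
    base = re.sub(r'_nohash_.*$', '', filename)
    hash_hex = hashlib.sha1(base.encode('utf-8')).hexdigest()
    percentage = (int(hash_hex, 16) % (MAX_FILES_PER_CLASS + 1)) \
        * 100.0 / (MAX_FILES_PER_CLASS + 1)
    if percentage < validation_percentage:
        return 'validation'
    elif percentage < validation_percentage + testing_percentage:
        return 'testing'
    return 'training'

# Same speaker prefix -> same partition, run after run:
print(which_set('yes/0a7c2a8d_nohash_0.wav'))
```

Because the split depends only on the filename, retraining after the dataset grows never moves an old clip between partitions.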
For the second model:
- Install the Anaconda2 package from https://www.anaconda.com/download/
- Go to the Scripts directory and run: `conda install -c anaconda theano` and `conda install -c anaconda lasagne`
- Download the files from the repository
- Copy the dataset from the Docker container to the host
- Change the `png_folder` and listfile paths in `theano/main.py` to your own
- Create spectrograms for all of the files from the dataset using `create_spectrograms.py`
- Run `theano/main.py`
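The spectrogram step converts each one-second wav clip into an image the convolutional model can consume. As a rough, self-contained sketch of that kind of preprocessing (the window and step sizes here are illustrative assumptions, not the actual parameters of `create_spectrograms.py`):

```python
import numpy as np

def log_spectrogram(audio, sample_rate=16000, window_ms=20, step_ms=10):
    """Log-magnitude spectrogram of a mono signal (rows: frequencies, cols: time)."""
    nperseg = int(sample_rate * window_ms / 1000)  # 320 samples per frame
    step = int(sample_rate * step_ms / 1000)       # 160-sample hop
    window = np.hanning(nperseg)
    frames = [audio[i:i + nperseg] * window
              for i in range(0, len(audio) - nperseg + 1, step)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log(power.T + 1e-10)  # epsilon avoids log(0) on silent frames

# Illustrative input: one second of a 440 Hz tone instead of a real wav file
tone = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000, endpoint=False))
spec = log_spectrogram(tone)
print(spec.shape)  # frequency bins x time frames
```

The resulting array can be normalized to 0–255 and written out as a grayscale PNG into `png_folder` for `theano/main.py` to read.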
Link to presentation: https://drive.google.com/open?id=0BzosZ0Y6TKpHT0thYlB0NlVESTQ