Comments (42)

imranparuk commented on August 20, 2024

I'm getting this error also

8rV1n commented on August 20, 2024

@imranparuk Generally, I think the issue is that the new TensorFlow APIs differ from the ones the code was written against. I tried using conv3d instead of conv2d (both the paper and the README say a 3D conv is used) in all the cnn_speech.py files, and it partly worked. I think @astorfi should specify a TensorFlow version for us or update the code for the new version of the framework.

imranparuk commented on August 20, 2024

I have tried multiple older versions of TF with this code base; no combination seems to work. I have also tried conv3d, but the shapes of the nets don't correspond to what is written in the paper: it fails because of incorrect dimensions going into tf.squeeze. It would be great if the author could help resolve the issue.

8rV1n commented on August 20, 2024

@imranparuk I just removed the tf.squeeze lines, which is why I said it PARTLY works. I suspect those squeeze calls are only used to normalize away the extra 1s in the shape (e.g. removing (2, 3, 1) -> (2, 3)), so conv3d or something similar should handle it.
By the way, I haven't finished reading the paper yet.
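
For illustration, a minimal sketch of what tf.squeeze does (my own example, not the repo's code): it only drops size-1 dimensions, and it raises if you point it at a dimension of any other size.

    import tensorflow as tf

    x = tf.zeros([2, 3, 1])
    y = tf.squeeze(x)            # shape (2, 3): the trailing size-1 dim is removed
    # tf.squeeze(x, axis=0)      # would raise: dimension 0 has size 2, not 1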

imranparuk commented on August 20, 2024

I think I solved it. Use an older version of TensorFlow, and uninstall your current version of NumPy; let TensorFlow install the NumPy version it requires.
I tried TensorFlow v1.0.0.
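
For anyone following along, this is roughly what that looks like (a hedged sketch; TensorFlow 1.0.0 only ships wheels for Python 2.7 and 3.3-3.5, so a matching virtualenv is assumed):

    pip uninstall numpy
    pip install tensorflow==1.0.0   # pulls in a compatible NumPy automatically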

8rV1n commented on August 20, 2024

@imranparuk Wow, that seems like a version from ancient times. By the way, I'm planning to rewrite the code in Keras, following the paper as closely as possible. It should be out by the 3rd of September.

imranparuk commented on August 20, 2024

@ArvinSiChuan I would contribute to that.

astorfi commented on August 20, 2024

@ArvinSiChuan @imranparuk Thank you all for your contributions.
I am trying to do a PyTorch version of this code as well, using a public dataset.
I will inform you all in this repo when/if I finish it.
Thanks

8rV1n commented on August 20, 2024

@astorfi Which dataset do you want to use? I would think the WVU multimodal dataset reported in the paper isn't publicly available. I'm looking for datasets suited to this task; if you could suggest some, that would be perfect!

imranparuk commented on August 20, 2024

@astorfi @ArvinSiChuan I'm also finding it very difficult to get the multimodal dataset from the paper. It would be great if we could use an open-source dataset.

8rV1n commented on August 20, 2024

@imranparuk I'm considering the datasets on OpenSLR. TED-LIUM and the Free ST Chinese Mandarin Corpus may be useful, but they also need to be preprocessed carefully. I hope @astorfi can give us some suggestions on this.

astorfi commented on August 20, 2024

Perhaps the VoxCeleb dataset is one of the best options.

astorfi commented on August 20, 2024

@ArvinSiChuan @imranparuk I agree that one of the problems is that the dataset is restricted. It would take a lot of effort for me to tune the code for a new dataset, as I am not working on this project anymore.

imranparuk commented on August 20, 2024

@astorfi I have actually been using the VoxCeleb dataset. I wanted to try Mozilla Common Voice as well, but it seems to require conversion from mp3 to wav, and I've been too lazy to do that.

My question is: is it illegal or unethical to include these datasets in your own repository, since they all require you to formally request them?
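
For the mp3-to-wav step, a minimal hedged sketch using pydub (which wraps ffmpeg); this is my own snippet, not part of the repo:

    from pydub import AudioSegment

    # requires ffmpeg on the PATH; file names are illustrative
    AudioSegment.from_mp3('clip.mp3').set_channels(1).export('clip.wav', format='wav')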

8rV1n commented on August 20, 2024

@imranparuk From my point of view, as long as we cite the authors and follow the license (CC BY-SA 4.0 for VoxCeleb), we can use the dataset in our experiments. I would rather document which dataset is used and how, than include the actual dataset files in a repo.

8rV1n commented on August 20, 2024

@imranparuk @astorfi I'm now working on the MFEC features at Here. Keras code, docs, etc. will be coming soon. Would you help me find bugs and the like?

astorfi commented on August 20, 2024

@ArvinSiChuan Thank you for your effort. Sure, I will be more than happy to help.
For SpeechPy, please refer to its newly published technical report:

    @article{torfi2018speechpy,
      title={SpeechPy - A Library for Speech Processing and Recognition},
      author={Torfi, Amirsina},
      journal={arXiv preprint arXiv:1803.01094},
      year={2018}
    }

8rV1n commented on August 20, 2024

@astorfi I have a problem with the model. I've built the model with the following structure:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input-layer (InputLayer)     (None, 20, 80, 40, 1)     0         
_________________________________________________________________
conv1-1 (Conv3D)             (None, 18, 80, 36, 16)    256       
_________________________________________________________________
activation1-1 (PReLU)        (None, 18, 80, 36, 16)    829440    
_________________________________________________________________
conv1-2 (Conv3D)             (None, 16, 36, 36, 16)    6928      
_________________________________________________________________
activation1-2 (PReLU)        (None, 16, 36, 36, 16)    331776    
_________________________________________________________________
pool-1 (MaxPooling3D)        (None, 16, 36, 18, 16)    0         
_________________________________________________________________
conv2-1 (Conv3D)             (None, 14, 36, 15, 16)    3088      
_________________________________________________________________
activation2-1 (PReLU)        (None, 14, 36, 15, 16)    120960    
_________________________________________________________________
conv2-2 (Conv3D)             (None, 12, 15, 15, 16)    6160      
_________________________________________________________________
activation2-2 (PReLU)        (None, 12, 15, 15, 16)    43200     
_________________________________________________________________
pool-2 (MaxPooling3D)        (None, 12, 15, 7, 16)     0         
_________________________________________________________________
conv3-1 (Conv3D)             (None, 10, 15, 5, 16)     2320      
_________________________________________________________________
activation3-1 (PReLU)        (None, 10, 15, 5, 16)     12000     
_________________________________________________________________
conv3-2 (Conv3D)             (None, 8, 9, 5, 16)       5392      
_________________________________________________________________
activation3-2 (PReLU)        (None, 8, 9, 5, 16)       5760      
_________________________________________________________________
conv4-1 (Conv3D)             (None, 6, 9, 3, 16)       2320      
_________________________________________________________________
activation4-1 (PReLU)        (None, 6, 9, 3, 16)       2592      
_________________________________________________________________
conv4-2 (Conv3D)             (None, 4, 3, 3, 16)       5392      
_________________________________________________________________
activation4-2 (PReLU)        (None, 4, 3, 3, 16)       576       
_________________________________________________________________
flatten_1 (Flatten)          (None, 576)               0         
_________________________________________________________________
fc (Dense)                   (None, 128)               73856     
_________________________________________________________________
ac_softmax (Dense)           (None, 695)               89655     
=================================================================
Total params: 1,541,671
Trainable params: 1,541,671
Non-trainable params: 0
_________________________________________________________________

Is this structure the same as yours? Or is my input feature pipeline incorrect? Or is it something with the optimizer? I get zero accuracy in Keras evaluation. Could you help me find out what the problem is?
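
For reference, a minimal Keras sketch of the first block of that summary (my own reconstruction; the kernel size is a guess chosen so the shapes and parameter counts match the table, e.g. a (3, 1, 5) kernel maps (20, 80, 40, 1) to (18, 80, 36, 16) with 256 params):

    from keras.models import Sequential
    from keras.layers import Conv3D, PReLU

    model = Sequential()
    model.add(Conv3D(16, kernel_size=(3, 1, 5), name='conv1-1',
                     input_shape=(20, 80, 40, 1)))   # 3*1*5*1*16 + 16 = 256 params
    model.add(PReLU(name='activation1-1'))           # per-element: 18*80*36*16 = 829440 params
    model.summary()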

sivagururaman commented on August 20, 2024

After moving to TensorFlow v1.0.0, I could get past the training, but the enrollment is failing:

    Traceback (most recent call last):
      File "./code/2-enrollment/enrollment.py", line 330, in <module>
        tf.app.run()
      File "/home/osboxes/anaconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 44, in run
        _sys.exit(main(_sys.argv[:1] + flags_passthrough))
      File "./code/2-enrollment/enrollment.py", line 201, in main
        for i in xrange(FLAGS.num_clones):
    NameError: name 'xrange' is not defined

Any idea why this is happening?

Thanks!

8rV1n commented on August 20, 2024

@sivagururaman The error explains the problem: xrange, which does not exist in Python 3.x.
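
A common quick fix (a hedged sketch, not the repo's code) is to alias it near the top of enrollment.py:

    # Python 2/3 compatibility shim: Python 3 renamed xrange to range
    try:
        xrange
    except NameError:
        xrange = range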

sivagururaman commented on August 20, 2024

@ArvinSiChuan Do I need to move to some other Python version, something like 2.7?

8rV1n commented on August 20, 2024

@sivagururaman Yes, that's one way. You can also try using range() instead.

sivagururaman commented on August 20, 2024

@ArvinSiChuan Okay, thanks. Will try the suggestion and get back if I get stuck somewhere else.

sivagururaman commented on August 20, 2024

@ArvinSiChuan Thanks! It worked.
The demo is fine.

Now how do I go about using our own dataset for training, enrollment, and evaluation? Our problem space is a subset of this, I suppose, as we only need text-dependent speaker ID.
Any thoughts here would be helpful.

sivagururaman commented on August 20, 2024

@ArvinSiChuan I am now able to run the demo as needed.

Do you have the wav-file input preprocessing handy that I could use? I wanted to extract the input features of my own clip and use the rest of the network for evaluation.

8rV1n commented on August 20, 2024

@sivagururaman You could refer to the input part in the code folder.
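
For a rough idea of what that input step computes, here is a hedged sketch using SpeechPy (astorfi's feature-extraction library mentioned above); the parameter values are my assumptions, chosen to match the (20, 80, 40) cubes discussed in this thread:

    import scipy.io.wavfile as wav
    import speechpy

    fs, signal = wav.read('my_clip.wav')
    # MFEC: 40 log-energy filterbank coefficients per 25 ms frame
    features = speechpy.feature.lmfe(signal, sampling_frequency=fs,
                                     frame_length=0.025, frame_stride=0.01,
                                     num_filters=40)
    features = speechpy.processing.cmvn(features, variance_normalization=True)
    print(features.shape)   # (num_frames, 40); stack 80-frame windows to form cubes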

sivagururaman commented on August 20, 2024

@ArvinSiChuan Thanks. I am going through that now....

sivagururaman commented on August 20, 2024

@ArvinSiChuan One quick question:
What do the HDF5 files in the data/ folder contain? Are they the extracted speech features from the input?
If so, how do I then convert the MFEC vectors obtained from the input code to the HDF5 format?

imranparuk commented on August 20, 2024

@sivagururaman Good question. The input portion of the code seems incomplete; we should take some initiative and complete it... eventually. Try something like this (but better), maybe?

    # Assumes AudioDataset, CMVN, Feature_Cube, ToOutput and Compose come from
    # the repo's 0-input code; PyTables and NumPy are needed as well.
    import numpy as np
    import tables

    datasetTest = AudioDataset(files_path='/home/imran/Documents/projects/wd/wd-3d-cnn/code/0-input/file_path_enrollment_eval.txt', audio_dir=args.audio_dir,
                           transform=Compose([CMVN(), Feature_Cube(cube_shape=(20, 80, 40), augmentation=True), ToOutput()]))

    datasetTrain = AudioDataset(files_path='/home/imran/Documents/projects/wd/wd-3d-cnn/code/0-input/file_path_enrollment_enroll.txt', audio_dir=args.audio_dir,
                           transform=Compose([CMVN(), Feature_Cube(cube_shape=(20, 80, 40), augmentation=True), ToOutput()]))
    # Each index of the dataset is one sample, so a batch can be built as e.g.
    #   batch_features = [dataset[idx][0] for idx in range(32)]
    # which gives a list with len(batch_features) == 32.

    lengthTest = len(datasetTest)
    lengthTrain = len(datasetTrain)

    # Extendable HDF5 arrays in the layout the demo expects.
    fileh = tables.open_file('da_dataset.h5', mode='w')
    float_atom = tables.Float32Atom()
    int_atom = tables.Int32Atom()

    array_a = fileh.create_earray(fileh.root, 'label_evaluation', int_atom, (0,))
    array_b = fileh.create_earray(fileh.root, 'label_enrollment', int_atom, (0,))
    array_c = fileh.create_earray(fileh.root, 'utterance_evaluation', float_atom, (0, 80, 40, 1))
    array_d = fileh.create_earray(fileh.root, 'utterance_enrollment', float_atom, (0, 80, 40, 1))

    for x in range(lengthTest):
        feature, label = datasetTest[x]
        # (1, 20, 80, 40) -> (1, 80, 40, 20), then keep only the first of the
        # 20 frames -> (1, 80, 40, 1) -> squeeze the batch axis -> (80, 40, 1)
        feature = feature.swapaxes(1, 2).swapaxes(2, 3)
        feature = feature[:, :, :, 0:1]
        feature = np.squeeze(np.array(feature), axis=0)
        array_a.append(np.array([label]))
        array_c.append(np.array([feature]))

    for x in range(lengthTrain):
        feature, label = datasetTrain[x]
        feature = feature.swapaxes(1, 2).swapaxes(2, 3)
        feature = feature[:, :, :, 0:1]
        feature = np.squeeze(np.array(feature), axis=0)
        array_b.append(np.array([label]))
        array_d.append(np.array([feature]))

    # close the file...
    fileh.close()

sivagururaman commented on August 20, 2024

@imranparuk I could not test this input code yet. I will do so in the coming days and let you know of any updates.

Just from a glance at the code: I understand that we would have da_dataset.h5, which will hold the enrollment and evaluation data. If I need to test with another sample, say a new utterance.wav, how do I do that?

imranparuk commented on August 20, 2024

@sivagururaman Another good question... I wrote my own prediction code to do that, but it's too long to post here. A simpler way would be to create a dataset file the same way, but with a text file that has only one item, then pass it to the model in a similar way.

I leave the task of posting the code to another user...
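
Something along these lines, perhaps (a hypothetical sketch reusing the pipeline from earlier in this thread; file names are illustrative):

    # a one-item list file, fed through the same feature pipeline
    with open('file_path_single.txt', 'w') as f:
        f.write('0 new_utterance.wav\n')

    dataset = AudioDataset(files_path='file_path_single.txt', audio_dir='Audio',
                           transform=Compose([CMVN(), Feature_Cube(cube_shape=(20, 80, 40), augmentation=True), ToOutput()]))
    feature, label = dataset[0]   # then feed `feature` to the trained model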

imranparuk commented on August 20, 2024

@sivagururaman No, there is a text file provided with the git repo which has a particular format to identify speakers:

    0 file1.wav
    1 file2.wav
    1 file3.wav
    2 file4.wav

etc.
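
Reading that list is straightforward; a minimal hedged sketch (my own illustration, not the repo's code):

    # parse "label filename" pairs from the list file
    pairs = []
    with open('file_path.txt') as f:
        for line in f:
            label, path = line.split()
            pairs.append((int(label), path))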

MSAlghamdi commented on August 20, 2024

@ArvinSiChuan & @astorfi
Could you please take a look at issue #47?

MSAlghamdi commented on August 20, 2024

@imranparuk, thank you for sharing your code.

I tried it, and it gave the following (note that I used h5dump -d /utterance_enrollment da_dataset.h5 &> da_dataset.log to look inside the .h5 file, and I copied only the tail of the log file):


   (8,79,26,0): -12.7884,
   (8,79,27,0): -11.2512,
   (8,79,28,0): -12.5186,
   (8,79,29,0): -12.0625,
   (8,79,30,0): -12.3308,
   (8,79,31,0): -12.3839,
   (8,79,32,0): -12.5028,
   (8,79,33,0): -12.5818,
   (8,79,34,0): -12.0501,
   (8,79,35,0): -12.4087,
   (8,79,36,0): -13.032,
   (8,79,37,0): -13.4173,
   (8,79,38,0): -12.3723,
   (8,79,39,0): -11.7837
   }
   ATTRIBUTE "CLASS" {
      DATATYPE  H5T_STRING {
         STRSIZE 6;
         STRPAD H5T_STR_NULLTERM;
         CSET H5T_CSET_ASCII;
         CTYPE H5T_C_S1;
      }
      DATASPACE  SCALAR

I think this won't work. If we consider the first shape dimension (8) to be the index of the speakers in my file_path.txt (there are 9 wav files), then we took just one utterance for each of them (the last dimension = 0, which must be the utterance index). That will give an error about the number of utterances when the demo is run.

imranparuk commented on August 20, 2024

@MSAlghamdi Hey man, I actually stopped doing the static dataset method (where you extract the features beforehand).
I am taking a more dynamic approach to this (the features are extracted batch by batch).

I have started a project based on this one, but written in Keras.
You can check it out here: Keras-Speaker-Recognition

PS: @astorfi I will make sure you are given credit for your work. I just created the project and haven't had time to complete the README.md yet.

MSAlghamdi commented on August 20, 2024

@imranparuk Good work! Thank you for sharing it.

I still hope to do it in a simpler static way. I tried another method, which has some issues.
If you can help, it could combine the advantages of your static method and the flexibility of @Chegde8's approach in issue #41.
The advantage of yours is the ability to arrange the feature arrays and the order of the axes so that the structure of the .h5 file is suitable for the code.
I combined both codes to get the following:


    # Assumes AudioDataset, CMVN, Feature_Cube, ToOutput and Compose come from
    # the repo's 0-input code; PyTables is needed as well.
    import tables

    datasetTest = AudioDataset(files_path='file_path_test.txt', audio_dir='Audio',
                           transform=Compose([CMVN(), Feature_Cube(cube_shape=(20, 80, 40), augmentation=True), ToOutput()]))

    datasetTrain = AudioDataset(files_path='file_path_train.txt', audio_dir='Audio',
                           transform=Compose([CMVN(), Feature_Cube(cube_shape=(20, 80, 40), augmentation=True), ToOutput()]))

    ###############    TEST DATASET       ####################
    # count the test utterances (the original read file_path_train.txt here,
    # which looks like a copy-paste bug)
    with open('file_path_test.txt', 'r') as f1:
        idx_test = sum(1 for line in f1)

    lab_test = []
    feat_test = []
    for i in range(idx_test):
        feature, label = datasetTest[i]
        lab_test.append(label)
        # feature.shape == (1, 20, 80, 40); make it (1, 80, 40, 20)
        feature = feature.swapaxes(1, 2).swapaxes(2, 3)
        feat_test.append(feature[0, :, :, :])

    ###############    TRAIN DATASET       ####################
    with open('file_path_train.txt', 'r') as f:
        idx = sum(1 for line in f)

    lab_train = []
    feat_train = []
    for i in range(idx):
        feature, label = datasetTrain[i]
        lab_train.append(label)
        feature = feature.swapaxes(1, 2).swapaxes(2, 3)
        feat_train.append(feature[0, :, :, :])

    h5file = tables.open_file('/root/3D_CNN/3D-convolutional-speaker-recognition/data/devel_try.hdf5', 'w')

    label_test = h5file.create_carray(where='/', name='label_test', obj=lab_test, byteorder='little')
    label_train = h5file.create_carray(where='/', name='label_train', obj=lab_train, byteorder='little')

    utterance_test = h5file.create_earray(where='/', name='utterance_test', chunkshape=[1, 80, 40, 20], obj=feat_test, byteorder='little')
    utterance_train = h5file.create_earray(where='/', name='utterance_train', chunkshape=[1, 80, 40, 20], obj=feat_train, byteorder='little')
    h5file.close()
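
To sanity-check the result, a quick hedged snippet to inspect the generated arrays (my own, not from the thread):

    import tables

    with tables.open_file('/root/3D_CNN/3D-convolutional-speaker-recognition/data/devel_try.hdf5') as h5:
        print(h5.root.utterance_train.shape, h5.root.label_train.shape)
        print(h5.root.utterance_test.shape, h5.root.label_test.shape)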

The .h5 file was created in good shape, but another issue popped up when I ran the demo. The h5 file posted with the project has the same structure as mine, but mine has negative numbers. I think this is an issue with how the features are generated in the input file.py.

astorfi commented on August 20, 2024

> @MSAlghamdi Hey man, I actually stopped doing the static dataset method (where you extract the features beforehand). [...]
> I have started a project based on this one, but written in Keras. You can check it out here: Keras-Speaker-Recognition

Thank you so much for your effort and great work.

MSAlghamdi commented on August 20, 2024

Thank you @astorfi for your kindness and your great project.
I would appreciate it even more if you could tell us how you created the .hdf5 files in your work.

My master's thesis is an evaluation of your system against other SV systems. It seems yours has the potential to beat them; I'm just stuck on yours because of the h5 file issues.

astorfi commented on August 20, 2024

> Thank you @astorfi for your kindness and your great project.
> I would appreciate it even more if you could tell us how you created the .hdf5 files in your work. [...]

Thanks for your kind words. Please consider the following directions:

  • Update your fork so you have the latest modifications.
  • HDF5 was my old approach to the data pipeline.
  • I now recommend direct feature generation. Please refer to input_feature.py for further details and an accurate input pipeline.

Bests
