About DeepID2 · faceverification · 36 comments · open

happynear commented on July 16, 2024
About DeepID2

Comments (36)

happynear commented on July 16, 2024

Recently I have been working on face alignment. A friend of mine said he successfully trained a DeepID2 model by setting the loss_weight of the contrastive loss to a very low value, say 1e-5. You may try it again.

denghuiru commented on July 16, 2024

@happynear
Thank you very much! I will try it again!!

denghuiru commented on July 16, 2024

@happynear
By the way, the margin in the contrastive loss has a great effect on the loss values. How should I set this value? Also, could your friend share the net structure with me?
Thanks very much!

happynear commented on July 16, 2024

As the DeepID2 paper describes, using same-person pairs only can already reach accuracy very close to the best performance. In that case, we can generate pairs of images with the same label and set the margin to 0.

denghuiru commented on July 16, 2024

@happynear
I have another question: did your friend construct the DeepID2 model exactly as you provided it? If not, could you please tell me the differences or key points? Besides, I also use WebFace to train the model, but I modified data_layer.cpp to generate an LMDB of image pairs (image1 image2 label1 label2), then used a Slice layer to split each pair back into image1 and image2 and run two convnets, almost the same as your model. Is this approach right? If not, how do you generate the image pairs?

happynear commented on July 16, 2024
  • His net is the same as mine, and is described in the CASIA-webface paper.
  • I use a MATLAB script to generate the lists of image pairs. They are in the ./dataset folder of this repo.
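
For readers without MATLAB, a minimal Python sketch of the same pair-list idea, assuming a WebFace-style layout with one folder per identity and two aligned "path label" list files that the two data branches read in lockstep (the file and folder names here are hypothetical):

import os
import random

def build_pair_lists(root, out1='train1.txt', out2='train2.txt',
                     num_pairs=100000, same_ratio=0.5, seed=0):
    # Write two aligned lists; line i of out1 and line i of out2 form one pair.
    random.seed(seed)
    ids = sorted(d for d in os.listdir(root)
                 if os.path.isdir(os.path.join(root, d)))
    images = {p: [os.path.join(root, p, f)
                  for f in sorted(os.listdir(os.path.join(root, p)))]
              for p in ids}
    label = {p: i for i, p in enumerate(ids)}
    with open(out1, 'w') as f1, open(out2, 'w') as f2:
        for _ in range(num_pairs):
            if random.random() < same_ratio:
                p = random.choice(ids)
                pool = images[p]
                a, b = (random.sample(pool, 2) if len(pool) > 1
                        else (pool[0], pool[0]))
                la, lb = label[p], label[p]
            else:
                pa, pb = random.sample(ids, 2)
                a, b = random.choice(images[pa]), random.choice(images[pb])
                la, lb = label[pa], label[pb]
            f1.write('%s %d\n' % (a, la))
            f2.write('%s %d\n' % (b, lb))

The same_ratio argument presumably plays the role of same_r in the MATLAB script; same_ratio=1 would produce only same-person pairs.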

denghuiru commented on July 16, 2024

@happynear
Thank you very much! I will do this process again!

denghuiru commented on July 16, 2024

@happynear
Hi, since the WebFace dataset is very unbalanced (it has 10,575 persons, but the number of images per person is small), have you used any strategy to balance the generated pairs, for example making the probability of a same-person pair and a different-person pair roughly equal?

happynear commented on July 16, 2024

Face++ has done an experiment. The conclusion is to delete all identities with fewer than 10 images.

Paper: http://arxiv.org/pdf/1501.04690.pdf .
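
A quick sketch of that filtering step in Python, assuming one subfolder per identity (the threshold of 10 follows the Face++ paper cited above; moving folders aside rather than deleting them may be safer in practice):

import os
import shutil

def drop_small_identities(root, min_images=10, discard_dir='discarded'):
    # Move identity folders with fewer than min_images files out of the training set.
    os.makedirs(discard_dir, exist_ok=True)
    for ident in sorted(os.listdir(root)):
        path = os.path.join(root, ident)
        if os.path.isdir(path) and len(os.listdir(path)) < min_images:
            shutil.move(path, os.path.join(discard_dir, ident))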

denghuiru commented on July 16, 2024

@happynear
Hi, I found a problem while generating image pairs with your MATLAB scripts. I want to generate only same-person pairs, so I set same_r = 1, but then it cannot generate the corresponding train.txt. If I set same_r = 0.5 it works. How can I solve this? My dataset is WebFace.

happynear commented on July 16, 2024

That is really messy code. Just try it a few more times...

happynear commented on July 16, 2024

In fact, for same-pair generation I suggest you use a full permutation, i.e. generate all possible pairs. Since there are not many images per identity, the final number will not be too large.

Moreover, you can extract features with a pre-trained model first and mine only the hard pairs, then fine-tune the pre-trained network with DeepID2.
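
A minimal sketch of both suggestions, assuming a dict mapping each identity to its image paths and, for the mining step, features already extracted with the pre-trained model (feature extraction itself is not shown; names are illustrative):

import itertools
import numpy as np

def all_same_pairs(images_by_id):
    # Full permutation: every unordered pair of images within each identity.
    pairs = []
    for ident, imgs in images_by_id.items():
        pairs.extend((a, b, ident) for a, b in itertools.combinations(imgs, 2))
    return pairs

def mine_hard_same_pairs(pairs, features, keep_ratio=0.2):
    # Keep the same-person pairs whose pre-trained features are farthest apart.
    dists = np.array([np.linalg.norm(features[a] - features[b]) for a, b, _ in pairs])
    order = np.argsort(dists)[::-1]          # hardest (largest distance) first
    keep = order[:int(len(pairs) * keep_ratio)]
    return [pairs[i] for i in keep]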

denghuiru commented on July 16, 2024

@happynear
I am not very clear about fine-tuning the pre-trained model with DeepID2. Do you mean that I can use any pre-trained model as the initialization for DeepID2?

happynear commented on July 16, 2024

Sure, the DeepID1 model and the DeepID2 model can share the same trained parameters.
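
In practice this just means initializing DeepID2 training from a DeepID1 snapshot: layers with matching names are copied, and any new layers start from their random initialization. A hedged pycaffe sketch (all file paths are placeholders); the command-line equivalent is caffe train with the --weights flag:

import caffe

caffe.set_mode_gpu()
solver = caffe.SGDSolver('deepid2_solver.prototxt')      # placeholder solver
solver.net.copy_from('deepid1_pretrained.caffemodel')    # copies layers whose names match
solver.solve()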

denghuiru commented on July 16, 2024

@happynear
I found a problem: I can generate two train.txt files containing all same-person pairs and then use Caffe to convert them to LMDB datasets. However, if the training process picks two images at random each time instead of reading them in order, the two images will almost certainly be from different classes. How can I deal with this? Thank you!

dfdf commented on July 16, 2024

Regarding DeepID2, in your implementation you use:

"layer {
name: "simloss"
type: "ContrastiveLoss"
loss_weight: 0.00001
contrastive_loss_param {
margin: 0
}
bottom: "pool5"
bottom: "pool5_p"
bottom: "label1"
bottom: "label2"
top: "simloss"
}"

You use 4 blobs as bottom, but according to the Caffe docs the contrastive loss only takes 3 (the third one is the similarity, whose value should be 0 if the pair differs and 1 if it is similar). As I read it, your similarity would then be based only on label1, which I think is wrong. Is this correct, or did you change something else in the code?

Thank you, and by the way good work!

denghuiru commented on July 16, 2024

@dfdf
The contrastive loss can indeed only accept 3 blobs as input. However, DeepID2 uses identification and verification information simultaneously to train the net, so it needs each image's label; we cannot just feed the pair similarity (0 or 1) to the contrastive loss. To use the pair labels, the contrastive loss layer must be modified to accept four blobs as input. For the modification, refer to the author's Windows Caffe.

happynear commented on July 16, 2024

@dfdf
As @denghuiru said, I have modified the contrastive loss layer to get a more elegant implementation. The modified layer is in my caffe-windows repository: https://github.com/happynear/caffe-windows/blob/master/src/caffe/layers/contrastive_loss_layer.cpp .
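
The gist of the modification is that the layer derives the similarity indicator from the two label blobs instead of expecting it as a third input. A NumPy sketch of the forward computation under that reading (an illustration of the idea, not the repository's C++ code):

import numpy as np

def contrastive_loss_forward(feat_a, feat_b, label_a, label_b, margin=0.0):
    # feat_a, feat_b: (N, D) feature batches; label_a, label_b: (N,) integer labels.
    sim = (label_a == label_b).astype(np.float64)   # 1 for same identity, 0 otherwise
    dist = np.linalg.norm(feat_a - feat_b, axis=1)  # Euclidean distance per pair
    same_term = sim * dist ** 2
    diff_term = (1.0 - sim) * np.maximum(margin - dist, 0.0) ** 2
    return 0.5 * np.mean(same_term + diff_term)

With margin = 0 and same-person pairs only, this reduces to half the mean squared distance between the two branches, which is consistent with the very small loss_weight discussed above.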

denghuiru commented on July 16, 2024

@happynear
Hi, I recently followed your suggestions for DeepID2 and have a few questions:
1. Using all same-person pairs with the loss weight set to 1e-5, on a small dataset of only 9000 pairs, the contrastive loss becomes very small during training and contributes almost nothing to the total loss. Is this right?
2. Besides, what accuracy does your friend get with the DeepID2 model, and how does he compute the face verification accuracy?
3. In DeepID2, how can I check whether the two losses (contrastive loss and softmax loss) adjust the parameters together? I keep suspecting that the parameter updates are wrong. Thank you!

dfdf commented on July 16, 2024

@denghuiru What values are you using in the solver? Thank you.

denghuiru commented on July 16, 2024

@dfdf
The author has provided the solver in caffe_proto, and I use his version as well.

dfdf commented on July 16, 2024

@denghuiru Did you get any improvement doing what you suggested? Were you able to make the network converge using a valid contrastive loss?

denghuiru commented on July 16, 2024

@dfdf
I am just following the author's suggestions for this model and have not obtained any impressive results yet. I think fully implementing this model will still take a long time! However, if you make any progress, please contact me anytime!

dfdf commented on July 16, 2024

@denghuiru Yeah, sure. I am still trying to get the network to converge during training, but if I make any progress I will post it here.

denghuiru commented on July 16, 2024

@dfdf
Thank you very much. I am also trying to get the network to converge, but have not made much progress!

dfdf commented on July 16, 2024

@denghuiru

I got some ideas from a thread on the caffe-users mailing list.

I will try the following:

I created 3 different prototxts and will run them, changing only the contrastive loss weight and the learning rate:

1 - Contrastive loss weight: 0, learning rate: 0.01
2 - Contrastive loss weight: 0.00032, learning rate: 0.001
3 - Contrastive loss weight: 0.006, learning rate: 0.0001

I will run each for 30~50k iterations before changing the parameters.
It will take a while to train, but I will post here what I can achieve.

If you make any progress in the meantime, feel free to let me know. Thanks.

denghuiru commented on July 16, 2024

@dfdf
At present I have generated the datasets following the author's scripts and run the model, with a learning rate of 0.01 and a contrastive loss weight of 0.00001. However, I found that the softmax loss stays at about 9.0 and doesn't decrease. I am still trying to find out why this happens. Have you met this problem?

dfdf commented on July 16, 2024

By the way, once we manage to train DeepID2 (my network is still training now), how do we get the output vector for one image? Should we take the combination of dropout5 + dropout5_p, or choose one of them as the DeepID output?

denghuiru commented on July 16, 2024

@dfdf
Once you get the trained model, you can extract the feature from dropout5 only; it is 320-dimensional. Then do face verification with it.
By the way, will you share your model with me? Does your model converge correctly?
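
A hedged pycaffe sketch of that extraction plus a cosine-similarity verification score (the deploy/model paths and preprocessing are placeholders, and the blob name 'dropout5' follows the discussion above; in a given prototxt the 320-D blob may carry a different name):

import numpy as np
import caffe

net = caffe.Net('deepid2_deploy.prototxt', 'deepid2.caffemodel', caffe.TEST)  # placeholder paths

def extract_feature(img, blob='dropout5'):
    # img must already be preprocessed to the network's input shape, e.g. (1, 3, H, W).
    net.blobs['data'].data[...] = img
    net.forward()
    return net.blobs[blob].data[0].flatten().copy()

def verification_score(img1, img2):
    # Cosine similarity between the two 320-D features; threshold it to decide same/different.
    f1, f2 = extract_feature(img1), extract_feature(img2)
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))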

dfdf commented on July 16, 2024

@happynear I found a problem with your Caffe version. I am using your Windows version (caffe-windows repository), but it seems you don't have the latest version of the contrastive loss; for example, this fix is not included in your version:
nickcarlevaris/caffe@7e2fceb

Maybe that is the reason for some of the problems with getting the network to converge.

happynear commented on July 16, 2024

@dfdf That is another variant of the contrastive loss. Would you like to merge it and try it yourself? I am looking forward to your results.

dfdf commented on July 16, 2024

I tried to merge it into the Windows version, but I am not very experienced with compilers. I will try on another machine running Ubuntu.

dfdf commented on July 16, 2024

In this thread, BVLC/caffe#2308, they report that the implemented equation is different from the one in LeCun's paper (the same formulation used in the DeepID2 paper). As they suggest, that could be one of the reasons affecting the training speed (or preventing convergence at all).
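
For reference, the contrastive loss in Hadsell et al. (the form the DeepID2 paper follows) and the older variant discussed in that Caffe issue differ only in the dissimilar-pair term; roughly, with d_n the Euclidean distance between the two features of pair n and y_n = 1 for same-identity pairs:

L = \frac{1}{2N} \sum_{n=1}^{N} \left[ y_n\, d_n^2 + (1 - y_n)\, \max(m - d_n,\, 0)^2 \right]

L_{\text{legacy}} = \frac{1}{2N} \sum_{n=1}^{N} \left[ y_n\, d_n^2 + (1 - y_n)\, \max(m - d_n^2,\, 0) \right]

With margin m = 0 both dissimilar terms vanish, so the difference only matters when different-person pairs are used with a positive margin.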

cheer37 commented on July 16, 2024

Hi all.
This thread is interesting.
I am also struggling with a Siamese net using softmax + contrastive loss.
I am getting the contrastive loss as 1.#QNAN (i.e. NaN).
Have you ever met this situation? I set up the parameters like you did: margin = 0, loss_weight = 1e-5.
My net is the CASIA model.
Please help me.
Also, if you make any progress, please let me know too.
Thanks.

[deleted user] commented on July 16, 2024

@happynear, is your model consistent with the DeepID2 paper (Deep Learning Face Representation by Joint Identification-Verification)? I find that you have twice as many convolution layers as the paper.

happynear commented on July 16, 2024

No, my implementation is based on the Webface paper.
