huangyangyu / seqface

SeqFace : Making full use of sequence information for face recognition

Home Page: https://arxiv.org/pdf/1803.06524.pdf

License: MIT License

Python 6.30% CMake 1.07% Makefile 0.25% Jupyter Notebook 51.92% C++ 35.63% Cuda 3.98% MATLAB 0.33% HTML 0.07% CSS 0.09% Shell 0.33% Dockerfile 0.03%
face-recognition seqface caffe sequence-database

seqface's Introduction

SeqFace : Making full use of sequence information for face recognition

Paper link: https://arxiv.org/abs/1803.06524

by Wei Hu, Yangyu Huang, Fan Zhang, Ruirui Li, Wei Li, Guodong Yuan

Recent Update

2018.06.09: 1. Our new ResNet-64 model achieved 99.87% accuracy on LFW, and 98.16% (cleaned) / 83.58% (uncleaned) accuracy on MegaFace Challenge 1.

2018.05.04: 1. Our new ResNet-64 model achieved 99.85% accuracy on LFW without correcting the 6 erroneous pairs.

2018.03.21: 1. Released the code of the LSR loss and DSA loss layers; 2. Added a SeqFace example.

2018.03.20: 1. Published our paper; 2. Released the test dataset and test code.

2018.03.15: 1. Created the repository; 2. Released our model.

Contents

  1. Requirements
  2. Dataset
  3. Model-and-Result
  4. How-to-test
  5. Example
  6. Demo
  7. Contact
  8. Citation
  9. License

Requirements

  1. Caffe (see: Caffe installation instructions)

  2. MTCNN (see: MTCNN - face detection & alignment)

Dataset

All faces in our dataset are detected by MTCNN and aligned by util.py. The structure of the training and testing datasets is shown further below. Please note that the testing datasets have already been processed by detection and alignment, so you can reproduce our results directly by running the evaluation script.
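
The alignment code itself lives in util.py; as a rough illustration only (not the repository's implementation), the sketch below warps a face to a canonical crop from the five MTCNN landmarks with a similarity transform. The 96x112 reference coordinates are the commonly used SphereFace template, an assumption rather than what util.py necessarily uses.

import cv2
import numpy as np

# Reference landmark positions (x, y) for a 96x112 crop: left eye, right eye,
# nose tip, left mouth corner, right mouth corner. Illustrative values only.
REFERENCE_5PTS = np.array([
    [30.2946, 51.6963],
    [65.5318, 51.5014],
    [48.0252, 71.7366],
    [33.5493, 92.3655],
    [62.7299, 92.2041]], dtype=np.float32)

def align_face(image, landmarks_5pts, size=(96, 112)):
    """Warp `image` so the detected 5x2 MTCNN landmarks land on the template."""
    src = np.asarray(landmarks_5pts, dtype=np.float32)
    # Estimate a similarity transform (rotation + uniform scale + translation).
    M, _ = cv2.estimateAffinePartial2D(src, REFERENCE_5PTS, method=cv2.LMEDS)
    return cv2.warpAffine(image, M, size)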

Training Dataset

MS-Celeb-1M + Celeb-Seq

Testing Dataset

LFW @BaiduDrive, @GoogleDrive

YTF @BaiduDrive, @GoogleDrive

Testing Features

You can also use the precomputed features instead of the testing datasets to evaluate our method.

LFW: @BaiduDrive, @GoogleDrive

YTF: @BaiduDrive, @GoogleDrive
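
The format of the released feature files is not documented here, so the sketch below only illustrates the pair-scoring step: assuming each face is represented by a single float vector, verification reduces to the cosine similarity of L2-normalized features.

import numpy as np

def cosine_similarity(feat_a, feat_b):
    """Cosine similarity of two feature vectors after L2 normalization."""
    a = np.asarray(feat_a, dtype=np.float64)
    b = np.asarray(feat_b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical usage: score every (feature_1, feature_2, same_person) pair.
# scores = [cosine_similarity(f1, f2) for f1, f2, _ in pairs]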

Model-and-Result

We have released our ResNet-27 model, which can be downloaded from the links below. The model was trained with Caffe; please refer to our paper for the detailed training procedure.

Caffe: ResNet-27 @BaiduDrive, @GoogleDrive (non-commercial use only)

Performance:

Method | Model | Training Dataset | LFW (%) | YTF (%)
SeqFace 1 | ResNet-27 | MS-Celeb-1M + Celeb-Seq | 99.80 | 98.00
SeqFace 1 | ResNet-64 | MS-Celeb-1M + Celeb-Seq | 99.83 | 98.12

How-to-test

Refer to run.sh, which takes two parameters: the first ("mode") selects the running mode ("feature" or "model"), and the second ("dataset") selects the dataset ("LFW" or "YTF").

step 1: git clone https://github.com/ydwen/caffe-face.git

step 2: compile caffe

step 3: download model and testing dataset, then unzip them

step 4: run evaluate.py in the LFW or YTF directory

You can try the command below to evaluate SeqFace on the LFW dataset in feature mode.

./run.sh feature LFW
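
For reference, evaluate.py is the authoritative implementation; the sketch below only shows the standard 10-fold LFW verification protocol (pick the best threshold on nine folds, measure accuracy on the held-out fold), assuming arrays of per-pair similarity scores and 0/1 same-person labels.

import numpy as np

def accuracy_at(threshold, scores, labels):
    """Fraction of pairs classified correctly when 'same' means score >= threshold."""
    return float(np.mean((scores >= threshold).astype(int) == labels))

def lfw_10fold_accuracy(scores, labels, n_folds=10):
    scores = np.asarray(scores, dtype=np.float64)
    labels = np.asarray(labels, dtype=np.int64)
    folds = np.array_split(np.arange(len(scores)), n_folds)
    accs = []
    for k in range(n_folds):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Choose the threshold that maximizes accuracy on the other nine folds ...
        candidates = np.unique(scores[train_idx])
        best_t = max(candidates, key=lambda t: accuracy_at(t, scores[train_idx], labels[train_idx]))
        # ... then report accuracy with that threshold on the held-out fold.
        accs.append(accuracy_at(best_t, scores[test_idx], labels[test_idx]))
    return float(np.mean(accs))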

Example


Usage of the LSR loss layer and the DSA loss layer in train_val.prototxt:

layer {
    name: "lsro_loss"
    type: "LSROLoss"
    bottom: "fc6"
    bottom: "label000"
    top: "lsro_loss"
    loss_weight: @loss_weight
}

layer {
    name: "dsa_loss"
    type: "DSALoss"
    bottom: "fc5"
    bottom: "label000"
    top: "dsa_loss"
    param {
        lr_mult: 1
        decay_mult: 0
    }
    dsa_loss_param {
        faceid_num: @faceid_num
        seqid_num: @seqid_num
        center_filler {
            type: "xavier"
        }
        lambda: @lambda
        dropout_ratio: @dropout_ratio
        gamma: @gamma
        alpha: @alpha
        beta: @beta
        use_normalize: @use_normalize
        scale: @scale
    }
    loss_weight: @loss_weight
}
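
For inference only the backbone up to the feature layer is needed; in the snippet above, fc5 is the feature fed to the DSA loss. The pycaffe sketch below shows one way to extract that embedding with the released model. The deploy/weight file names, the input size, and the (x - 127.5) / 128 preprocessing are assumptions for illustration, not the repository's exact settings.

import caffe
import numpy as np

# Hypothetical file names; substitute the released deploy prototxt and weights.
net = caffe.Net("resnet27_deploy.prototxt", "resnet27.caffemodel", caffe.TEST)

def extract_feature(aligned_bgr_image):
    """Return the fc5 embedding of an aligned HxWx3 uint8 BGR face crop."""
    blob = (aligned_bgr_image.astype(np.float32) - 127.5) / 128.0   # assumed preprocessing
    blob = blob.transpose(2, 0, 1)[np.newaxis, ...]                 # HWC -> NCHW
    net.blobs["data"].reshape(*blob.shape)
    net.blobs["data"].data[...] = blob
    net.forward()
    return net.blobs["fc5"].data[0].copy()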

Demo

You can try our algorithm on the demo page.

Contact

Wei Hu

Yangyu Huang

Citation

Waiting

License

SeqFace (not including the model) is released under the MIT License.


seqface's Issues

question about your demo

I am running your web demo of face verification, and it returns two results, "sim old" and "sim new". What do they mean?

Missed Detections on YTF

Hi,

I have tried to run the MTCNN detector on the YTF dataset, but some frames have no detections. How do you deal with this? Did you discard these frames?

confusion about NoiseTolerantFRLayer<Dtype>::Backward_cpu

I am confused about the backward pass of NoiseTolerantFRLayer: when skip_ is true, why is bottom_diff multiplied by zero instead of simply returning, as in the iter_ < start_iter_ case?

template <typename Dtype>
void NoiseTolerantFRLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom)
{
    if (propagate_down[0])
    {
        const Dtype* label_data = bottom[2]->cpu_data();
        const Dtype* top_diff = top[0]->cpu_diff();
        Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();
        const Dtype* weight_data = weights_.cpu_data();

        int count = bottom[0]->count();
        int num = bottom[0]->num();
        int dim = count / num;

        if (top[0] != bottom[0]) caffe_copy(count, top_diff, bottom_diff);

        if (this->phase_ != TRAIN) return;

        if (iter_ < start_iter_) return;

        // backward
        for (int i = 0; i < num; i++)
        {
            int gt = static_cast<int>(label_data[i]);
            if (gt < 0) continue;
            for (int j = 0; j < dim; j++)
            {
                bottom_diff[i * dim + j] *= skip_ ? Dtype(0.0) : weight_data[i];
            }
        }
    }
}

Similarity scores do not look meaningful

Hi

First of all, thank you for your work.

I am trying to run the trained model with some custom images given by me. The results did not make sense so I'd like to ask you if I'm missing something.

1- I prepared a small dataset of a few images
dataset.zip

2- Then I created pairs.txt like this:

adile1.jpg adile2.jpg 1
adile1.jpg adile3.jpg 1 
sener1.jpg sener2.jpg 1 
munir2.jpg adile1.jpg 0 
munir1.jpg sener1.jpg 0
sener2.jpg adile2.jpg 0

3- I ran evaluate.py in the LFW folder and obtained a similarity score for each pair, given as follows:

adile1.jpg adile2.jpg 1 -4.43
adile1.jpg adile3.jpg 1 -2.06
sener1.jpg sener2.jpg 1 -0.07
munir2.jpg adile1.jpg 0 -0.88
munir1.jpg sener1.jpg 0 -3.38
sener2.jpg adile2.jpg 0 -5.03

The results confused me a bit. The similarity between "adile1" and "adile2" (same person) is -4.43, while the similarity between "sener2" and "adile2" (different people) is -5.03. Looking at the list above, it is hard to see any consistent relationship between the scores and whether the two faces belong to the same person.

How can that be?

Secondly, is there a rough threshold for the similarity score between two images of the same person? For instance, is the score between -5 and 0 for the same person and far lower (e.g., -100) for different people?

Thanks in advance.

Best regards

caffe compilation fails

Hello,

make all for Caffe fails with the following error:

src/caffe/layers/dsa_loss_layer.cpp:366:1: note: in expansion of macro ‘STUB_GPU’
STUB_GPU(DSALossLayer);
^~~~~~~~
Makefile:573: recipe for target '.build_release/src/caffe/layers/dsa_loss_layer.o' failed
make: *** [.build_release/src/caffe/layers/dsa_loss_layer.o] Error 1
make: *** Waiting for unfinished jobs....

Combined with ArcFace

Hi, thanks for your code. Great Work.
Can the losses in your paper be combined with ArcFace? If I want to combine them with ArcFace, should it be LSR-ArcFace + DSA loss? Would such a combination improve face recognition accuracy?
It is mentioned in Table 1 of your paper that the accuracy of LSR-L2-SphereFace + DSA is slightly lower than that of L2-SphereFace. @huangyangyu

about training experience

Thank you for your generous sharing; your model shows very good robustness in large-scale data testing. I am trying to reproduce the paper on ResNet-20, but have met some difficulties such as convergence problems and fine-tuning failures. Could you share some training experience with us, or give more detailed suggestions on how to prepare the training data and how to set the parameters? Perhaps there are some easily neglected tricks that make training hard to converge.
Looking forward to your reply.

Model for pytorch

Hi :)
Can you please release the model with Pytorch compatibility?

Thx in advance!

Caffe compile error

In file included from src/caffe/layers/cudnn_bn_layer.cpp:8:0:
./include/caffe/layers/cudnn_bn_layer.hpp:21:36: error: expected template-name before ‘<’ token
 class CuDNNBNLayer : public BNLayer<Dtype> {
                                    ^
./include/caffe/layers/cudnn_bn_layer.hpp:21:36: error: expected ‘{’ before ‘<’ token
./include/caffe/layers/cudnn_bn_layer.hpp:21:36: error: expected unqualified-id before ‘<’ token
src/caffe/layers/cudnn_bn_layer.cpp:16:38: error: invalid use of incomplete type ‘class caffe::CuDNNBNLayer<Dtype>’
       const vector<Blob<Dtype>*>& top) {
                                      ^
In file included from src/caffe/layers/cudnn_bn_layer.cpp:8:0:
./include/caffe/layers/cudnn_bn_layer.hpp:21:7: error: declaration of ‘class caffe::CuDNNBNLayer<Dtype>’
 class CuDNNBNLayer : public BNLayer<Dtype> {
       ^
src/caffe/layers/cudnn_bn_layer.cpp:33:38: error: invalid use of incomplete type ‘class caffe::CuDNNBNLayer<Dtype>’
       const vector<Blob<Dtype>*>& top) {
                                      ^
In file included from src/caffe/layers/cudnn_bn_layer.cpp:8:0:
./include/caffe/layers/cudnn_bn_layer.hpp:21:7: error: declaration of ‘class caffe::CuDNNBNLayer<Dtype>’
 class CuDNNBNLayer : public BNLayer<Dtype> {
       ^
src/caffe/layers/cudnn_bn_layer.cpp:67:36: error: invalid use of incomplete type ‘class caffe::CuDNNBNLayer<Dtype>’
 CuDNNBNLayer<Dtype>::~CuDNNBNLayer() {
                                    ^
In file included from src/caffe/layers/cudnn_bn_layer.cpp:8:0:
./include/caffe/layers/cudnn_bn_layer.hpp:21:7: error: declaration of ‘class caffe::CuDNNBNLayer<Dtype>’
 class CuDNNBNLayer : public BNLayer<Dtype> {
       ^
In file included from ./include/caffe/blob.hpp:8:0,
                 from ./include/caffe/layers/cudnn_bn_layer.hpp:6,
                 from src/caffe/layers/cudnn_bn_layer.cpp:8:
src/caffe/layers/cudnn_bn_layer.cpp:77:19: error: explicit instantiation of ‘class caffe::CuDNNBNLayer<float>’ before definition of template
 INSTANTIATE_CLASS(CuDNNBNLayer);
                   ^
./include/caffe/common.hpp:43:18: note: in definition of macro ‘INSTANTIATE_CLASS’
   template class classname<float>; \
                  ^
src/caffe/layers/cudnn_bn_layer.cpp:77:19: error: explicit instantiation of ‘class caffe::CuDNNBNLayer<double>’ before definition of template
 INSTANTIATE_CLASS(CuDNNBNLayer);
                   ^
./include/caffe/common.hpp:44:18: note: in definition of macro ‘INSTANTIATE_CLASS’
   template class classname<double>
                  ^
make: *** [.build_release/src/caffe/layers/cudnn_bn_layer.o] Error 1

Use sphereface code to test your model

Thank you for sharing this nice work.
I have downloaded your ResNet-27 model and tested it on LFW, but I only get 99.48%; maybe something is wrong with how I align the photos.
I use MTCNN to detect all photos and align them with these reference points:
coord5point = [ 46.29460144, 59.69630051;
81.53179932, 59.50139999;
64.02519989, 79.73660278;
49.54930115, 100.3655014 ;
78.72990417, 100.20410156];
then crop to 128x128.
Am I doing this right, or is the problem here?
In the end, I use the evaluation code from SphereFace and get 99.48% accuracy (with image flipping), or 99.43% without flipping.
Can you give me some advice on what I should do?
BTW, I also tried your Python function norml2_sim, but I get the same result.
Thanks very much!
