
seasonsh / probabilistic-face-embeddings

336 stars · 9 watchers · 56 forks · 36.14 MB

(ICCV 2019) Uncertainty-aware Face Representation and Recognition

License: MIT License

Languages: Python 99.97%, Shell 0.03%
Topics: face-recognition, tensorflow, deep-neural-networks, deep-learning, uncertainty

probabilistic-face-embeddings's People

Contributors

seasonsh

probabilistic-face-embeddings's Issues

ValueError: Dimension 0 in both shapes must be equal, but are 0 and 512

Hi, I am trying to run the PFE model. eval_lfw works with TensorFlow 2.1 and TensorFlow 1.x, but with TensorFlow 2.2 and later I get this error:

ValueError: Node 'gradients/UncertaintyModule/fc_log_sigma_sq/BatchNorm/cond/FusedBatchNorm_1_grad/FusedBatchNormGrad' has an _output_shapes attribute inconsistent with the GraphDef for output #3: Dimension 0 in both shapes must be equal, but are 0 and 512. Shapes are [0] and [512]

It happens while the model is loading, when saver = tf.compat.v1.train.import_meta_graph(meta_file, clear_devices=True, import_scope=scope) imports the meta file of the PFE_sphere64_msarcface_am model.

(Two screenshots attached, dated 2021-06-28.)

To reproduce the error: download https://drive.google.com/drive/folders/10RnChjxtSAUc1lv7jbm3xkkmhFYyZrHP?usp=sharing and run eval_lfw with the parameters --model_dir pretrained/PFE_sphere64_msarcface_am --dataset_path data/Dataset --protocol_path ./proto/pairs_dataset.txt

Thank you for your help

face comparison

Can you please tell me how to compare two face images after cropping and image pre-processing (112x96)? Which comparison function should we use, and what should the threshold be?
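Not the author, but for reference: PFE compares two faces with the mutual likelihood score (MLS) rather than plain cosine distance. A minimal numpy sketch of a pairwise score in the same form as the MLS loss elsewhere on this page (squared mean difference over summed variances, plus a log-variance penalty); the embedding names and the 512-d size here are illustrative, and the decision threshold is not fixed by the paper, so it has to be calibrated on a validation set:

```python
import numpy as np

def mutual_likelihood_score(mu1, sigma_sq1, mu2, sigma_sq2):
    """Mutual-likelihood-style score between two probabilistic embeddings.

    Each face is a mean vector `mu` and a variance vector `sigma_sq`.
    Higher score => more likely the same identity. Constant terms are
    dropped, matching the form of the MLS loss.
    """
    sig_sum = sigma_sq1 + sigma_sq2
    dist = np.sum((mu1 - mu2) ** 2 / sig_sum + np.log(sig_sum))
    return -0.5 * dist

# Hypothetical 512-d embeddings: two crops of one person, one impostor.
rng = np.random.default_rng(0)
mu_a = rng.standard_normal(512)
mu_b = mu_a + 0.01 * rng.standard_normal(512)   # near-duplicate of mu_a
mu_c = rng.standard_normal(512)                 # different person
sig = np.full(512, 0.1)                         # uniform variance for the sketch

same = mutual_likelihood_score(mu_a, sig, mu_b, sig)
diff = mutual_likelihood_score(mu_a, sig, mu_c, sig)
print(same > diff)  # the matching pair scores higher
```

Pairs whose score falls above a calibrated threshold would be declared "same identity".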

scale_and_shift function

Thank you for your amazing work.
Could you explain the meaning of the scale_and_shift function in uncertainty_module.py?
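Not an authoritative answer, but judging from the PyTorch port further down this page (which registers gamma = 1e-4 and beta = -7.0 and comments out `x = self.gamma * x + self.beta`), scale_and_shift appears to be a learnable affine transform applied to the raw log(sigma^2) head output. A hypothetical sketch of that reading, with the initial values taken from that code:

```python
import numpy as np

def scale_and_shift(x, gamma=1e-4, beta=-7.0):
    """Affine transform of the raw log(sigma^2) output.

    In the real module gamma and beta are trainable scalars; the tiny
    initial gamma and negative beta mean every face starts out with
    log(sigma^2) close to -7, i.e. a small, nearly uniform uncertainty,
    which keeps the MLS loss stable early in training.
    """
    return gamma * x + beta

x = np.array([-3.0, 0.0, 3.0])   # raw head outputs
print(scale_and_shift(x))        # all close to -7 at initialization
```

As gamma grows during training, the network can spread the per-face uncertainties apart.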

Too many bugs in PFE

There are too many bugs in PFE!

Why is the loss rising?

I trained on my own dataset; the reg loss keeps rising, and the loss is a negative number. Why does this happen?
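Not an authoritative answer, but note that the MLS loss used in this repo contains a log(sigma^2) term, which is negative whenever the summed variance is below 1, so a negative total loss is not by itself a bug. A tiny numeric check of that term:

```python
import math

# One positive pair with identical means (mu_diff = 0) and small variance.
sigma_sq_sum = 0.01   # sigma_i^2 + sigma_j^2 for the pair
mu_diff_sq = 0.0
loss_term = mu_diff_sq / sigma_sq_sum + math.log(sigma_sq_sum)
print(loss_term)      # negative: log(0.01) is about -4.6
```

Whether the rising reg loss indicates a real problem depends on the training setup and cannot be judged from this check alone.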

Alignment Module

Perhaps it is better to run the alignment script as a Python module, since the command in the README did not work for me. The following works:

python -m align.align_dataset \
data/ldmark_casia_mtcnncaffe.txt CASIA-WebFace_aligned \
--prefix CASIA-WebFace_original \
--image_size 96 112

Pytorch-PFE

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on 2020/04/23
author: lujie
"""
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLSLoss(nn.Module):

    def __init__(self, mean=False):
        super(MLSLoss, self).__init__()
        self.mean = mean

    def negMLS(self, mu_X, sigma_sq_X):
        if self.mean:
            XX = torch.mul(mu_X, mu_X).sum(dim=1, keepdim=True)
            YY = torch.mul(mu_X.T, mu_X.T).sum(dim=0, keepdim=True)
            XY = torch.mm(mu_X, mu_X.T)
            mu_diff = XX + YY - 2 * XY
            sig_sum = sigma_sq_X.mean(dim=1, keepdim=True) + sigma_sq_X.T.sum(dim=0, keepdim=True)
            diff    = mu_diff / (1e-8 + sig_sum) + mu_X.size(1) * torch.log(sig_sum)
            return diff
        else:
            mu_diff = mu_X.unsqueeze(1) - mu_X.unsqueeze(0)
            sig_sum = sigma_sq_X.unsqueeze(1) + sigma_sq_X.unsqueeze(0)
            diff    = torch.mul(mu_diff, mu_diff) / (1e-10 + sig_sum) + torch.log(sig_sum)
            diff    = diff.sum(dim=2, keepdim=False)
            return diff

    def forward(self, mu_X, log_sigma_sq, gty):
        # mu_X = F.normalize(mu_X)  # TODO
        non_diag_mask = (1 - torch.eye(mu_X.size(0))).int()
        if gty.device.type == 'cuda':
            non_diag_mask = non_diag_mask.cuda(0)
        sig_X    = torch.exp(log_sigma_sq)
        loss_mat = self.negMLS(mu_X, sig_X)
        gty_mask = (torch.eq(gty[:, None], gty[None, :])).int()
        pos_mask = (non_diag_mask * gty_mask) > 0
        pos_loss = loss_mat[pos_mask].mean()
        return pos_loss


if __name__ == "__main__":
    mls = MLSLoss(mean=False)
    gty = torch.Tensor([1, 2, 3, 2, 3, 3, 2])
    muX = torch.randn((7, 3))
    siX = torch.rand((7, 3))
    diff = mls(muX, siX, gty)  # fixed argument order: (mu, log_sigma_sq, labels)
    print(diff)

This is my MLSLoss; is there anything wrong with it?

Pytorch-PFE

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on 2020/04/23
author: lujie
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import Parameter


class UncertaintyModule(nn.Module):
    ''' Evaluate the log(sigma^2) '''

    def __init__(self, in_feat=512):
        super(UncertaintyModule, self).__init__()
        self.fc1  = Parameter(torch.FloatTensor(in_feat, in_feat))
        self.bn1  = nn.BatchNorm1d(in_feat)
        self.relu = nn.PReLU(in_feat)
        self.fc2  = Parameter(torch.FloatTensor(in_feat, in_feat))
        self.bn2  = nn.BatchNorm1d(in_feat)
        self.register_buffer('gamma', torch.ones(1) * 1e-4)
        self.register_buffer('beta', torch.zeros(1) - 7.0)
        nn.init.xavier_uniform_(self.fc1)
        nn.init.xavier_uniform_(self.fc2)

    def forward(self, x):
        x = self.relu(self.bn1(F.linear(x, self.fc1)))
        x = self.bn2(F.linear(x, self.fc2))
        # x = self.gamma * x + self.beta
        x = torch.log(1e-6 + torch.exp(x))
        return x


if __name__ == "__main__":
    mls = UncertaintyModule(in_feat=5)  # fixed: class is UncertaintyModule, not UncertaintyHead
    muX = torch.randn((20, 5))
    diff = mls(muX)
    print(diff)

Emm, is there anything wrong with my UncertaintyModule?

issues about IJB-A dataset

Thanks for your great work on PFE.
When I tried to test on the IJB-A dataset, some issues occurred.
I downloaded IJB-A from two different sources:

  1. https://drive.google.com/uc?export=download&confirm=TCkR&id=11p1eVSpyHZQUG0uBGyRoFnOXXTuZ501c
  2. https://drive.google.com/uc?export=download&confirm=APsr&id=1WdQ62XJuvw0_K4MUP5nXOhv2RsEBVB1f

However, neither of them can be used successfully with "crop_ijba.py" or "align_dataset.py". The result of running "crop_ijba.py" looks weird, with lots of black images. (Screenshot attached, dated 2019-10-27.)

Could you please share the dataset with me, or point out what mistake I made?

PFE-pytorch

This is my PFE PyTorch version (https://github.com/Ontheway361/pfe-pytorch), but strangely the MLS loss goes negative during training...

pretrained model

@seasonSH

Since the CASIA dataset is difficult to get, could you provide your pretrained models for performance evaluation?

Thanks
Kaishi

Potential bug/issue in the loss function

Hi,

Thanks for this great work. Your paper is well written as well.

Based on your paper & code my understanding is as follows -

a) You construct a batch such that you have 4 images per class, and there can be n classes. In your paper you mention using n = 64.

b) You want to compare images of the same class and compute the MLS score. In other words, it is incorrect to compare images from different classes.

c) You make use of masking to ensure you achieve step (b).

However, it seems that you would have duplicate comparisons among the images of the same class. For example:

Let's say you have images x1, x2, x3, and x4 for a class C. When comparing, you should only take the values resulting from the following:
x1 - x2
x1 - x3
x1 - x4
x2 - x3
x2 - x4
x3 - x4

whereas your code seems to be doing:

x1 - x2
x1 - x3
x1 - x4
x2 - x1 <-- additional (same as x1 - x2 because you square them)
x2 - x3
x2 - x4
x3 - x1 <--- additional
x3 - x2 <---- additional
x3 - x4
x4 - x1 <-- additional
x4 - x2 <---- additional
x4 - x3 <---- additional
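A small numpy check of the point above, assuming the symmetric squared-difference form of the loss matrix used in this repo's MLS loss: the off-diagonal mask does include both (i, j) and (j, i), but because the matrix is symmetric, the mean over all off-diagonal positive pairs equals the mean over the unique upper-triangular pairs, so the duplicates change the pair count but not the loss value (all tensor shapes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = rng.standard_normal((4, 3))   # 4 images of one class, 3-d means
sig = rng.random((4, 3)) + 0.5     # per-dimension variances

# Symmetric pairwise loss matrix, same form as in the MLS code.
mu_diff = mu[:, None, :] - mu[None, :, :]
sig_sum = sig[:, None, :] + sig[None, :, :]
loss_mat = ((mu_diff ** 2) / sig_sum + np.log(sig_sum)).sum(axis=2)

off_diag = ~np.eye(4, dtype=bool)                   # both (i, j) and (j, i): 12 pairs
upper = np.triu(np.ones((4, 4), dtype=bool), k=1)   # unique pairs only: 6 pairs

print(np.isclose(loss_mat[off_diag].mean(), loss_mat[upper].mean()))  # True
```

So the redundant comparisons cost some compute but do not bias the averaged loss.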

There is a chance that I have misunderstood the code so please feel free to correct me.

Thanks again for your great work

Regards
Kapil
