
rshaojimmy / rfmetafas


[AAAI 2020] Pytorch codes for Regularized Fine-grained Meta Face Anti-spoofing

Home Page: https://arxiv.org/pdf/1911.10771.pdf

Python 100.00%
domain-generalization face-antispoofing meta-learning


rfmetafas's Issues

Data Processing Problem

I have some questions about how to process the original datasets.

  1. How do you convert the video data into images? How often should a frame be captured (e.g., every 5 frames)? Could you tell me the total number of images in each dataset after pre-processing? (A frame-sampling sketch follows this list.)
  2. Should the input image be cropped to the face region, or should the whole scene be kept? (In my experiments, PRNet can only generate depth for face images.)
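For reference, here is a minimal frame-sampling sketch with OpenCV; the every-5-frames interval and the file names are hypothetical choices, not the authors' actual pre-processing.

    import cv2

    def extract_frames(video_path, out_prefix, interval=5):
        # Save every `interval`-th frame of the video as a JPEG image.
        cap = cv2.VideoCapture(video_path)
        idx = saved = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % interval == 0:
                cv2.imwrite('%s_%04d.jpg' % (out_prefix, saved), frame)
                saved += 1
            idx += 1
        cap.release()
        return saved

    # Example (hypothetical file name): extract_frames('real_client001.avi', 'real_client001', interval=5)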

Thanks for your great work, and I hope for a reply!

depth map

Is the depth map label in the range 0-255 or 0-1? If it is the latter, what normalization method is used?
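For reference, if the labels follow the 0-1 convention, the simplest normalization (an assumption, not an answer from the authors) is to divide the 8-bit depth image by 255; the file name below is hypothetical.

    import numpy as np
    from PIL import Image

    depth = np.asarray(Image.open('real_0001_depth.jpg'), dtype=np.float32)  # values in 0-255
    depth_01 = depth / 255.0                                                 # mapped to [0, 1]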

Questions about project details

Hi,
thanks for sharing the code.
I have some questions about your project that I would like to confirm.
1. What does your meta learner mainly learn? (The FeatExtractor weights or some hyperparameters?)
2. When testing, is it unnecessary to generate a face depth map, since the depth map is only used for auxiliary supervision, like an image label? (A sketch of this idea follows the list.)
3. Do we need to generate a depth map with PRNet and add it to the input, or does your code already contain this generation step?
4. Is it possible to achieve real-time detection?
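As an illustration of the idea behind question 2, here is a minimal sketch assuming the depth map acts only as an auxiliary training loss; this is an assumption drawn from the question, not the authors' exact loss.

    import torch
    import torch.nn.functional as F

    def training_loss(cls_logits, labels, pred_depth, gt_depth, depth_weight=0.5):
        cls_loss = F.cross_entropy(cls_logits, labels)  # real/fake supervision
        depth_loss = F.mse_loss(pred_depth, gt_depth)   # auxiliary depth supervision (training only)
        return cls_loss + depth_weight * depth_loss

    def test_score(cls_logits):
        # At test time only the classifier head is needed; no ground-truth depth map is required.
        return torch.softmax(cls_logits, dim=1)[:, 1]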

Thanks.

Accuracy problem

Hello, I fed RGB face images into the network as you instructed, but the scores for both real and fake samples stay around 0.5 with no large variation. Have you encountered this problem? @rshaojimmy

Error encountered running "python main.py --training_type Test"

Hi,

Thanks for sharing your code.
I have installed and setup the prerequisite software.
However, when I run the testing mode with the command "python main.py --training_type Test", I encounter the error below. main.py seems to expect a command-line argument named tstfile.
I would appreciate your advice. Thanks.

File "main.py", line 175, in
main(parser.parse_args())
File "main.py", line 22, in main
savefilename = osp.join(args.tstfile, args.tstdataset+'to'+args.dataset_target+args.snapshotnum)
AttributeError: 'Namespace' object has no attribute 'tstfile'

Best Regards,
Benjamin
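A minimal sketch (not part of the original thread, and not the repository's actual argument list) of how the tstfile argument the traceback asks for could be declared in main.py's parser; the argument names come from the traceback above, while the default values are hypothetical placeholders.

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--training_type', type=str, default='Train')
    parser.add_argument('--tstfile', type=str, default='results',
                        help='folder holding the saved snapshots to test (hypothetical default)')
    parser.add_argument('--tstdataset', type=str, default='CASIA')
    parser.add_argument('--dataset_target', type=str, default='OULU')
    parser.add_argument('--snapshotnum', type=str, default='1')
    args = parser.parse_args()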

How did you get ~_depth images?

Hello, I'm Minha Kim, researching anti-spoofing in Korea.
Firstly, I appreciate you sharing this method with the public.

I have one question: how did you get the ~_depth images?
As far as I know, none of the datasets contain depth images.
So I wonder how you generated or obtained the depth-type images.

Thank you :)
With best regards,
Minha Kim.

fake_depth.jpg

Could you provide fake_depth.jpg? Is the depth of every fake face assumed to be 0? If so, what does the fake_depth.jpg used during training look like? Could you describe it?
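A minimal sketch, assuming the fake-face depth label is simply an all-zero map (as the question suggests); the 32x32 size is a hypothetical choice, not confirmed by the repository.

    import numpy as np
    from PIL import Image

    fake_depth = np.zeros((32, 32), dtype=np.uint8)    # every pixel 0: a spoof face has no facial depth
    Image.fromarray(fake_depth).save('fake_depth.jpg')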

Some questions about dataset partitioning

Nice, it is a good job. But when I run it, some questions occur, as shown below:
[screenshot of the error attached]
I do not know the composition of the files; please show me the hierarchy of the dataset files. Looking forward to your reply.

Data processing problem

    if getreal and name == 'idiap':
        depth_dir = os.path.join('depth', dirlist[0], dirlist[1], dirlist[2], imgname + '_depth.jpg')
    elif getreal and name == 'CASIA':
        depth_dir = os.path.join('depth', dirlist[0], dirlist[1], dirlist[2], imgname + '_depth.jpg')
    elif getreal and name == 'MSU':
        depth_dir = os.path.join('depth', dirlist[0], dirlist[1], imgname + '_depth.jpg')
    elif getreal and name == 'OULU':
        depth_dir = os.path.join('depth', dirlist[0], imgname + '_depth.jpg')
    else:
        depth_dir = os.path.join('depth', 'fake_depth.jpg')  # <== this line

Why do all fake depth images share the single file name 'fake_depth.jpg'?
Did you not use them?

Question about datasets

I have some questions about the code in datasets/DatasetLoader.py:

    if getreal:
        filename = 'image_list_real.txt'
    else:
        filename = 'image_list_fake.txt'

What is the content of image_list_real.txt, and how do you generate these files?
I cannot find these files after downloading the datasets, so I am curious about it.
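A minimal sketch (an assumption, not the authors' script) of how such a list file could be generated by walking a dataset directory; the folder layout and the 'real' keyword filter are hypothetical.

    import os

    def write_image_list(root, out_file, keyword='real'):
        # Write the relative paths of all images under `root` whose directory name contains `keyword`.
        with open(out_file, 'w') as f:
            for dirpath, _, filenames in os.walk(root):
                if keyword not in dirpath.lower():
                    continue
                for fname in sorted(filenames):
                    if fname.lower().endswith(('.jpg', '.png')):
                        f.write(os.path.join(os.path.relpath(dirpath, root), fname) + '\n')

    # Example (hypothetical dataset root): write_image_list('CASIA', 'image_list_real.txt')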

Question about the code

Hi, great job! But when I reuse your code to train the model myself, I encounter the following problems:

  1. I find the model is very large, and the results fluctuate a lot during training. For example, when I test with the model trained for 100 epochs, the ACC on the testing set is 80%, but with the model trained for 101 epochs it drops below 60%. Do you know why?

  2. I cannot reproduce the results of the C&I&M to O scenario in your paper. The best AUC I achieved with your code is 82%, which has a large gap to the 91.16% reported in your paper. How do you test your model on the Oulu dataset? Do you use all the data (training and testing) or just the testing data? Why can't I reproduce the results? (A minimal AUC sketch follows this list.)
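For reference, the metric itself is standard; a minimal scikit-learn sketch is below. The labels and scores are hypothetical, and whether the full Oulu set or only its test split should be used is exactly what the question asks, so this is not an answer on the protocol.

    from sklearn.metrics import roc_auc_score

    labels = [1, 0, 1, 1, 0]            # 1 = real, 0 = fake (hypothetical)
    scores = [0.9, 0.2, 0.7, 0.6, 0.4]  # predicted liveness scores (hypothetical)
    print('AUC = %.4f' % roc_auc_score(labels, scores))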

Thank you very much!!

Ask for the code

I am sorry for any inconvenience I have caused. I would like to ask: did you reproduce the code of MetaReg?

Why is the HSV image also used as input?

I see that the HSV image is also used as input in your code, but this is not mentioned in your paper. Can you tell me what information the HSV image provides?
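A minimal sketch, assuming the HSV image is obtained by converting the RGB crop and concatenating the two color spaces channel-wise; this is an assumption based on the question, not a confirmed detail of the repository.

    import cv2
    import numpy as np

    bgr = cv2.imread('face.jpg')                   # hypothetical cropped face image
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    rgb_hsv = np.concatenate([rgb, hsv], axis=2)   # H x W x 6 network input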
