syaringan357 / android-mobilefacenet-mtcnn-faceantispoofing

Use TensorFlow Lite on the Android platform, integrating face detection (MTCNN), face anti-spoofing (CVPR2019-DeepTreeLearningForZeroShotFaceAntispoofing), and face comparison (MobileFaceNet with the InsightFace loss).

License: MIT License

Java 100.00%

android-mobilefacenet-mtcnn-faceantispoofing's Introduction

MobileFaceNet-Android

This project includes three models.

MTCNN (pnet.tflite, rnet.tflite, onet.tflite). Input: one Bitmap; output: Box. Use this model to detect faces in an image.

FaceAntiSpoofing (FaceAntiSpoofing.tflite). Input: one Bitmap; output: a float score. Use this model to determine whether the image is an attack.

MobileFaceNet (MobileFaceNet.tflite). Input: two Bitmaps; output: a float score. Use this model to judge whether two face images belong to the same person.
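As a sketch of how the three models chain together in a verification flow. Note that the thresholds, method name, and decision shape below are illustrative assumptions of mine, not the repository's actual API:

```java
public class PipelineSketch {
    // Hypothetical thresholds for illustration only; the repository's
    // actual values and method signatures may differ.
    static final float SPOOF_THRESHOLD = 0.2f;       // below this: treated as a live face
    static final float SAME_PERSON_THRESHOLD = 0.8f; // at or above this: same person

    // Decision logic after the two scoring models have run.
    // Step 1 (MTCNN, omitted here) would first detect and crop the face Box.
    static boolean verify(float spoofScore, float compareScore) {
        // Step 2: FaceAntiSpoofing - reject if the image looks like an attack.
        if (spoofScore >= SPOOF_THRESHOLD) {
            return false;
        }
        // Step 3: MobileFaceNet - accept only if the two faces match.
        return compareScore >= SAME_PERSON_THRESHOLD;
    }
}
```

The point of the ordering is that anti-spoofing gates the comparison: a matching face on a printed photo should still be rejected.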

iOS platform implementation: https://github.com/syaringan357/iOS-MobileFaceNet-MTCNN-FaceAntiSpoofing

References

https://github.com/vcvycy/MTCNN4Android
This project is the Android implementation of MTCNN face detection.

https://github.com/davidsandberg/facenet
The MTCNN from this repo was used for the .tflite conversion, so it can adapt to any input shape.

https://github.com/jiangxiluning/facenet_mtcnn_to_mobile
This repo shows how to do the .tflite conversion.

https://github.com/yaojieliu/CVPR2019-DeepTreeLearningForZeroShotFaceAntispoofing
Face anti-spoofing. I trained FaceAntiSpoofing.tflite, which only supports print attacks and replay attacks. If you have other requirements, please use this source code to retrain.

https://github.com/sirius-ai/MobileFaceNet_TF
This model is used for face comparison on mobile phones because it is very small.

BUILD

After putting the .tflite files in your assets directory, remember to add this to your gradle:
aaptOptions {
  noCompress "tflite"
}

SCREEN SHOT

android-mobilefacenet-mtcnn-faceantispoofing's People

Contributors

ebichui, syaringan357

android-mobilefacenet-mtcnn-faceantispoofing's Issues

A big thumbs-up for the author

The recognition accuracy is only so-so: once the same person turns their head slightly, they are no longer recognized as the same person. There also seems to be no face orientation correction or face alignment. Still, over the past few days I have gone through over a dozen MobileFaceNet Android projects, and this is the only one that actually runs! And it is approachable, using TFLite! Something that takes just a TFLite lib plus a pre-trained model and a few lines of Java, yet other projects insist on porting OpenCV or NCNN, and their piles of C++ code either fail to compile here or are missing some environment dependency there.

I am a layman in AI. I would also like to ask: if I want to target a specific person or improve the recognition rate, what should I do?
My current understanding of the on-device AI workflow is: find a pre-trained model on GitHub, use tools to convert it into a tflite or ncnn/mnn model, then load it on the client to implement the feature.

But why do so many people publish training methods online while so few provide the trained .pb model? Also, many people take a pre-trained model and retrain it themselves; what is the point? Is it transfer learning? How was the tflite in this project generated? Doesn't MobileFaceNet provide a .pb file? Could you explain in detail? Thanks.

more robust version

Hello, are you thinking about releasing a more robust version ? More accurate ? Thank you

How to train my own FaceAntiSpoofing.tflite

Hi, I would like to ask how to train my own FaceAntiSpoofing.tflite. I referred to the CVPR2019-DeepTreeLearningForZeroShotFaceAntispoofing repo you provided, but I don't know how to turn .png or .jpg images into training data.

  1. How do I turn .png or .jpg files into .dat? Or should they become tf.data?
  2. What format are the labels in: txt, html or xml? And are they in (class, x1, x2, y1, y2) form or (class, x, y, w, h) form?

Pretrain model and testing

Hi, thank you for your great work! @syaringan357
I tested it with pictures captured from the front camera of a phone. Can you provide more details about:

  • The ROUTE_INDEX = 6 came from observing the training logs, right? I've tested it, and ROUTE_INDEX = 5 seems to give me more useful information, since clss_pred[6] always treats a real face as a fake face with a confidence of 0.99x. Using leaf_score1(clss_pred, leaf_node_mask) fails the same way: a real face, but the confidence is 0.99x.
  • Are there any other outputs I can get? And how do I resolve their names?
outputs.put(interpreter.getOutputIndex("Identity"), clss_pred);
outputs.put(interpreter.getOutputIndex("Identity_1"), leaf_node_mask);
  • Does the original paper use the norm of the face mask too? Is there any way I can get it? Reading the training code, the mask should have shape (batch_size, 8, 256); is 256 = 16*16 the face mask?

Get live frames from the camera

Hi there, thanks for putting this together @syaringan357

Have you implemented getting the camera feed live and running the anti-spoofing detection in real time, instead of on a captured picture?

Could you please share some pointers on how to get it done, if you've ever worked on it?

Thanks

Any plan to rebuild the deep learning model for better use of the GPU delegate?

java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: Unrecognized Read selector
Falling back to OpenGL
TfLiteGpuDelegate Init: FromTensorConverter: Batch size != 1 is not supported.
TfLiteGpuDelegate Prepare: delegate is not initialized
Node number 230 (TfLiteGpuDelegateV2) failed to prepare.
Restored original execution plan after delegate application failure.
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:4035)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:4201)
at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:103)
at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:135)
at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:95)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2438)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loopOnce(Looper.java:226)
at android.os.Looper.loop(Looper.java:313)
at android.app.ActivityThread.main(ActivityThread.java:8663)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:567)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1135)
Caused by: java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: Unrecognized Read selector
Falling back to OpenGL

Training the faceantispoofing model

I went through Dr. Liu's repository. I came to understand that it uses a depth-map input image for training, but the tflite model in your Android app requires only an RGB (Bitmap) input image. I want to use only an RGB input image for my project. I have a few questions:

  • Can you suggest an idea on how to train without using a depth map as an input?
  • Why do you calculate a Laplacian of the bitmap before passing it to the tflite model? I understand it is for detecting edges; I am very interested in the reasoning behind it.

Thank you for your code; it gave me a lot of insight into making a mobile app. I look forward to your reply.
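For readers wondering what the Laplacian step computes: it is a standard 3x3 high-pass filter, and a plausible cue for print/replay attacks because reproduced images tend to have different high-frequency texture than live faces. The sketch below is a generic Laplacian over a grayscale array; the repository's exact preprocessing may differ:

```java
public class LaplacianDemo {
    // 3x3 Laplacian kernel: responds to local intensity changes
    // (edges, moire and print texture).
    static final int[][] K = {{0, 1, 0}, {1, -4, 1}, {0, 1, 0}};

    // Convolve a grayscale image (values 0..255) with the kernel,
    // leaving the 1-pixel border at zero.
    static int[][] laplacian(int[][] gray) {
        int h = gray.length, w = gray[0].length;
        int[][] out = new int[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int sum = 0;
                for (int ky = -1; ky <= 1; ky++) {
                    for (int kx = -1; kx <= 1; kx++) {
                        sum += K[ky + 1][kx + 1] * gray[y + ky][x + kx];
                    }
                }
                out[y][x] = sum;
            }
        }
        return out;
    }
}
```

A flat region produces zero response, while a sharp intensity step produces a large response; aggregating the response over the face crop gives a scalar sharpness measure.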

Frequent crashes on Android 12

java.lang.IllegalArgumentException: Cannot copy from a TensorFlowLite tensor (Identity) with shape [9, 8] to a Java object with shape [1, 8].
at org.tensorflow.lite.Tensor.throwIfDstShapeIsIncompatible(Tensor.java:461)
at org.tensorflow.lite.Tensor.copyTo(Tensor.java:252)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:170)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:343)
at com.zwp.mobilefacenet.faceantispoofing.FaceAntiSpoofing.antiSpoofing(FaceAntiSpoofing.java:51)
at com.zwp.mobilefacenet.MainActivity.antiSpoofing(MainActivity.java:406)
at com.zwp.mobilefacenet.MainActivity.access$100(MainActivity.java:55)
at com.zwp.mobilefacenet.MainActivity$2.onClick(MainActivity.java:106)
at android.view.View.performClick(View.java:7792)
at android.widget.TextView.performClick(TextView.java:16112)
at android.view.View.performClickInternal(View.java:7769)
at android.view.View.access$3800(View.java:910)
at android.view.View$PerformClick.run(View.java:30218)
at android.os.Handler.handleCallback(Handler.java:938)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loopOnce(Looper.java:226)
at android.os.Looper.loop(Looper.java:313)

used frozen graphs

Hi, thanks for providing such a great repo.

I need to do some quantization on the tflite models, so I need the frozen graphs or something similar (a stage before tflite conversion).
Would you please let me know which files you used to generate the provided tflite models?
Thanks

FaceAntiSpoofing training

Could you provide details of the training process? For example, the hardware setup, environment, training logs, etc.

Why not compute the embedding similarity with a plain L2 or cosine distance, instead of this?

private float evaluate(float[][] embeddings) {
    float[] embeddings1 = embeddings[0];
    float[] embeddings2 = embeddings[1];
    float dist = 0;
    for (int i = 0; i < 192; i++) {
        dist += Math.pow(embeddings1[i] - embeddings2[i], 2);
    }
    float same = 0;
    for (int i = 0; i < 400; i++) {
        float threshold = 0.01f * (i + 1);
        if (dist < threshold) {
            same += 1.0 / 400;
        }
    }
    return same;
}
- I don't quite understand why the similarity of the two embeddings is computed this way?
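To unpack what the code above actually does: it first computes a squared L2 distance, then maps that distance onto [0, 1] by counting how many of 400 evenly spaced thresholds it falls under, which amounts to a clamped linear remapping, roughly (4 - dist) / 4. A sketch separating the two steps, with cosine similarity shown for comparison (the names here are mine, not the repository's):

```java
public class SimilarityDemo {
    // Squared L2 distance between two embedding vectors.
    static float squaredL2(float[] a, float[] b) {
        float dist = 0;
        for (int i = 0; i < a.length; i++) {
            float d = a[i] - b[i];
            dist += d * d;
        }
        return dist;
    }

    // What evaluate() does with that distance: count how many of the
    // 400 thresholds 0.01, 0.02, ..., 4.00 the distance falls under,
    // giving a [0, 1] "same person" score (roughly (4 - dist) / 4, clamped).
    static float toScore(float dist) {
        float same = 0;
        for (int i = 0; i < 400; i++) {
            if (dist < 0.01f * (i + 1)) {
                same += 1.0 / 400;
            }
        }
        return same;
    }

    // Plain cosine similarity, the alternative the question asks about.
    static float cosine(float[] a, float[] b) {
        float dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return (float) (dot / (Math.sqrt(na) * Math.sqrt(nb)));
    }
}
```

So the loop is not a different distance metric, just a way to present the L2 distance as a score that is easy to compare against a single threshold in the UI.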

Model Eye Detection Not Accurate

Hey, we've been using MTCNN4Android for eyes detection for some time. Recently we decided to switch to TFLite (mainly for reducing our app size), and then we encountered your great implementation :)

Our problem is that the eyes coordinates returned by your model are less accurate than the ones from MTCNN4Android.

For getting the eye coordinates we use box.transform2Point and take the first 2 points.

Attaching screenshots of eye detection using MTCNN4Android (MTCNN4Android_1, MTCNN4Android_2) and some using your model (three screenshots from 2019-11-18).

Could you please help us use your model?
Thanks :)

What does this mean?

float same = 0;
for (int i = 0; i < 400; i++) {
    float threshold = 0.01f * (i + 1);
    if (dist < threshold) {
        same += 1.0 / 400;
    }
}
return same;

I don't understand it.
Also, how about using cosine similarity? Thanks.
