
brfv5-browser's Introduction

Beyond Reality Face SDK - v5.1.5 (BRFv5) - Platform: Browser

What is BRFv5?

BRFv5 is a real-time face detection and face tracking SDK. It analyses image data (e.g. a camera stream, a video stream or a static image) and returns facial landmarks as well as data to place 3D objects on a face.


Ready to try!

Read the EULA (eula.txt) carefully before using the trial SDK. You can try and test the trial SDK free of charge. Before you buy a license, please test the SDK thoroughly to see if it meets the requirements of your project. Once you decide to use BRFv5 commercially, please contact us by email. You will receive a separate license agreement, which you must agree to.

Visit us online.

BRFv5 - Getting started.

To test BRFv5, simply visit the JavaScript demo site:

Or download this repository from GitHub and run index.html on a local server.

ARTOv5 - Getting started.

If you are looking to implement an Augmented Reality Try-On that places 3D objects on top of the face, we've got you covered. Try the demo here:

It's a Vue.js-based web component that can be plugged into any website and is easily configurable.

See artov5/artov5_static/js/artov5_api.js for what's configurable.

ARTOv5 is available for paying customers.

Also available is TPPTv5 - ThreeJS Post Processing Tool for ARTOv5.

It allows you to load 3D models into the ThreeJS editor, place them correctly on a 3D head and export those models for ARTOv5. You can also conveniently try on your model within the editor.

Which platforms does BRFv5 support?

HTML5/Browser – JavaScript (works in Chrome/Firefox/Edge 16/Opera/Safari 11)

Run the index.html on a local server.

iOS - Objective-C/C++ (To be released later in 2019)

Open the Xcode project. Attach your actual iOS device and run the example app on your device (examples need the device camera).

Android - Java (To be released later in 2019)

Open the Android Studio project. Attach your Android device and run the example app on your device (examples need the device camera).

macOS - C++ utilizing OpenCV for camera access and drawing (To be released later in 2019)

Have OpenCV brewed (opencv3) on your system. Open the Xcode project and just run it on your Mac.

Windows - C++ utilizing OpenCV for camera access and drawing (To be released later in 2019)

Good luck compiling OpenCV for Windows. Update the Visual Studio (2017) project properties that mention OpenCV, then run the Release x64 target. Fingers crossed!

Adobe AIR - ActionScript 3 on Windows, macOS, iOS and Android (Might be released later in 2019)

Use your preferred IDE. Add the src folder and the ANE itself to your class path and run the example class on your desired device (not in a simulator).

Technical overview

BRFv5 comes with the following components:

  • Face Detection - finds faces (rectangles) in image data (camera stream, video or still image)
  • Face Tracking - finds 68 facial landmarks/features


All available platform-specific packages have approximately the same content and come with a number of examples to demonstrate the use of the SDK.

What image data size does BRFv5 need?

You can input any image size.

Internally, BRFv5 uses an XYZx480 (landscape) or 480xXYZ (portrait) image for the analysis. 480px is the base size that every other input size gets scaled to (a small sketch of this rule follows the lists below), e.g.:

landscape:

  • 640 x 480 -> 640 x 480 // fastest, no scaling
  • 1280 x 720 -> 854 x 480
  • 1920 x 1080 -> 854 x 480

portrait:

  • 480 x 640 -> 480 x 640 // fastest, no scaling
  • 720 x 1280 -> 480 x 854
  • 1080 x 1920 -> 480 x 854
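A minimal sketch of that rule (my own illustration, not SDK code): downscale the input so that its smaller side becomes 480px, keeping the aspect ratio. The rounding mode is an assumption chosen to match the 854 values in the lists above.

// Illustration only: reproduces the base-size rule described above.
const internalAnalysisSize = (width, height) => {
  const scale = Math.min(1.0, 480.0 / Math.min(width, height)) // never upscale (assumption)
  return { width: Math.ceil(width * scale), height: Math.ceil(height * scale) }
}

internalAnalysisSize(1280,  720) // -> { width: 854, height: 480 }
internalAnalysisSize(1080, 1920) // -> { width: 480, height: 854 }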

BRFv5 scales the results up again, so you don't have to do that yourself. All parameters named *size or *width are pixel values based on the actual image size, e.g. telling BRFv5 what face sizes to initially detect:

If you work with a 640x480 camera stream, it would be something like this:

brfv5Config.faceDetectionConfig.minFaceSize = 144

Whereas if you work with a 1280x720 camera stream, you would need something like this:

brfv5Config.faceDetectionConfig.minFaceSize = 144 * (720 / 480)

A scale factor relative to the 480px base size might come in handy:

const inputSize  = Math.min(imageWidth, imageHeight)
const sizeFactor = inputSize / 480.0
brfv5Config.faceDetectionConfig.minFaceSize = 144 * sizeFactor

This is implemented in the examples; just take a look at brfv5__configure.js.
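As a quick worked example, a 1920x1080 stream gives a factor of 1080 / 480 = 2.25:

// Worked example for a 1920x1080 camera stream:
const sizeFactor = Math.min(1920, 1080) / 480.0                // 2.25
brfv5Config.faceDetectionConfig.minFaceSize = 144 * sizeFactor // 324px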

FAQ

Can I track other objects, e.g. hands, neck or full body?

  • No, the library tracks faces only.

Can you improve the performance?

  • BRFv5 has various configuration values that you can use to influence performance (see the sketch after this list):
  brfv5Config.faceTrackingConfig.numFacesToTrack
  brfv5Config.faceTrackingConfig.numTrackingPasses
  brfv5Config.faceTrackingConfig.enableStabilizer
  brfv5Config.faceTrackingConfig.enableFreeRotation
  • BRFv5 also comes with different models: 42 landmarks for 3D object placement and 68 landmarks for detailed analysis; both come in min, medium and max versions. Please try them out and find the best fit for your project.
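As a rough sketch (the field names are taken from the list above, the concrete values are assumptions you should tune for your own content and target devices), tuning for speed might look like this:

// Performance-oriented settings; all values below are assumptions, not recommendations.
brfv5Config.faceTrackingConfig.numFacesToTrack    = 1     // track a single face only
brfv5Config.faceTrackingConfig.numTrackingPasses  = 2     // fewer passes: faster, slightly less stable
brfv5Config.faceTrackingConfig.enableStabilizer   = true  // smooths landmark jitter
brfv5Config.faceTrackingConfig.enableFreeRotation = false // restrict rotation handling if you only expect frontal faces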

Can you make the library smaller?

  • We put a lot of effort into making the smallest possible library. The min versions of the models are the smallest that don't sacrifice too much accuracy, though that trade-off might not match your use case. We might be able to provide even smaller, less accurate models, but only for commercial customers.

When will the other platforms be available?

  • We plan to work on the other example projects until the end of 2019.

Release notes

v5.1.5 - 17th March 2020

  • Added an ASM.js export. We now support a manual ASM.js fallback. Since WebAssembly is available almost everywhere, there is no automatic checking/switching (see the sketch after this list).
  • Added a Node.js example. There is an issue with loading more than 6 model chunks for 68l, which leads to wrong results; the cause is unclear yet.
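Since the fallback is manual, here is a minimal sketch of how you might pick a build at load time (the file names are placeholders, not the actual SDK file names):

// Assumption: you host both the WebAssembly build and the ASM.js build and choose one yourself.
const supportsWasm = typeof WebAssembly === 'object' &&
  typeof WebAssembly.instantiate === 'function'

const script = document.createElement('script')
script.src = supportsWasm ? 'js/brfv5_wasm_build.js' : 'js/brfv5_asmjs_build.js' // placeholder names
document.head.appendChild(script)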

v5.1.3 - 29th January 2020

  • Version bump for emscripten to v1.39.6 - upstream

v5.1.0 - 14th November 2019

  • BRFv5 is now 40% faster.
  • v5.1.0 is not compatible with v5.0.x.
  • Model files are now structured in chunks instead of _min, _medium, _max. Load any number of chunks from 4 to 8 (fewer is possible, but accuracy won't be great).

v5.0.2 - 10th November 2019

  • Version bump for emscripten to v1.39.2 - fastcomp

v5.0.1 - 11th September 2019

  • Internal restructuring of code, small fixes.

v5.0.0 - 26th August 2019

It's done! After over a year of development Tastenkunst is proud to announce the release of BRFv5.

Features:

  • Face Tracking
  • Face Detection

JavaScript Examples:

  • Single Face Tracking
  • Multi Face Tracking
  • ThreeJS 3D Object Placement
  • Image Overlay Placement
  • Texture Overlay Placement
  • Face Swap
  • Extended Face Shape Estimation (Forehead)
  • Smile Detection
  • Blink Detection
  • Yawn Detection
  • Coloring Libs

Changes from BRFv4:

  • Smaller models: The largest model is now 5 MB (instead of 9 MB for BRFv4). The smallest for 3D placement is 2.5 MB.
  • Rewrote the whole API to streamline configuration.
  • Removed: Point Tracking. It was rarely requested and might come as a separate library if there is demand for it.
  • Removed ASM package: WebAssembly is now widely supported.

brfv5-browser's People

Contributors

marcelklammer


brfv5-browser's Issues

Usage of eval and CSP

In the commercial library file, eval is used like this:
eval(Module["UTF8ToString"]($0))
This single line forces us to include 'unsafe-eval' in our Content Security Policy. Would it be possible to find an alternative way to load that code so we can avoid using unsafe directives?
Thanks

Changing coloring-libs colors based on input.

Hey, I would like to know if it is possible to propagate a hex color code from the HTML to the coloring-libs.js functions, so that I can change the colors of the lips that are being drawn from handleTrackingResults().
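A minimal sketch of one way to feed a color picker into the drawing code, assuming the lip-drawing code reads a shared color variable; the input id, the variable name and the default value are made up for illustration:

// Hypothetical wiring for an <input type="color" id="lipColor"> element.
// hexToColor() turns '#rrggbb' into a 24-bit integer like 0xff0000, the kind of
// value typically passed to canvas/WebGL drawing helpers.
let currentLipColor = 0xaa0000 // assumed default, read by the lip-drawing code

const hexToColor = (hex) => parseInt(hex.replace('#', ''), 16)

document.getElementById('lipColor').addEventListener('input', (event) => {
  currentLipColor = hexToColor(event.target.value)
})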

Can I test BRFv5 in CodePen?

The question is: is there any way to test BRFv5 in CodePen? I usually make some sketches there and show them to my team before we decide whether to use it or not.

Tracking a single frame (image)

Hello,

We are using this solution for webcam video tracking. Currently, we have an extra need: processing a single image with BRFv5 to get the 68 points.
First, we resize the image to 640x480 pixels and make sure that the face is centered and of a good size. But:

  • The face is often not detected
  • When the face is detected, we call BRFManager.update() on the image 10 times and get different coordinate values for the 68 points each time.

Which configuration should we use?

I'm sharing an image as an example of a face that is not detected.


// 640 x 480, running inside a Promise executor that provides resolve/reject
const BRF = await BRFv5.createInstance({ width, height, uniqueFrame: true });
const canvas = document.createElement('canvas');
canvas.width = width;
canvas.height = height;
const context = canvas.getContext('2d'); // was missing: the 2d context has to come from the canvas
context.drawImage(imageElement, 0, 0, width, height);
const imageData = context.getImageData(0, 0, width, height);

// run several update passes so the tracker can settle on the single frame
for (let index = 0; index < 10; index += 1) {
  BRF.Manager.update(imageData);
}

const faces = BRF.Manager.getFaces();

const isFaceDetected =
        faces.length > 0 &&
        (faces[0].state === BRF.Instance.BRFv5State.FACE_TRACKING_START ||
          faces[0].state === BRF.Instance.BRFv5State.FACE_TRACKING);

if (isFaceDetected) {
     return resolve(faces[0].vertices);
}
const error = new Error('No face detected');
error.code = 999;
return reject(error);

How can I use this with webpack?

I added this project to my project's node_modules using npm, and when I try to import the file I get this error

Uncaught Error: Invalid reserved bit
    at H (brfv5_js_tk121020_v5.2.0_trial.js?6820:formatted:320)
    at T (brfv5_js_tk121020_v5.2.0_trial.js?6820:formatted:652)
    at Object.z [as process] (brfv5_js_tk121020_v5.2.0_trial.js?6820:formatted:659)
    at XMLHttpRequest.xhrBinary.onload (brfv5_js_tk121020_v5.2.0_trial.js?6820:formatted:1341)

magic numbers when mapping to 3d?

I'm looking through the threejs__brfv5_mapping.js file and wondering if you could explain exactly how and why the translation and rotation values are calculated the way they are. Things seem to work, but I'd like a better understanding of the math that is happening there.

Specifically, how is the z value chosen? It's written as

  let modelZ = 2725

  if(t3d.camera.isPerspectiveCamera) {
    modelZ = (2725 / 480) * (canvasHeight / t3d.sceneScale)
  }

I assume the 480 comes from the camera height, but what about the 2725?

Same thing with the face scale?

    let scale = face.scale * si * 0.0133

Where does the 0.0133 come from?

Is there a better or simpler way to map the face xy and rotation into a three.js coordinate space?

Face tracking model?

Hello Sir,

I'm glad to see this excellent repo; you have done incredible work here. The face tracking implementation is really fast and smooth. I would like to know which model you are using for face tracking?

How to run with parcel

Hi,

I can run the project successfully with "live-server" but I don't know how to run it with "parcel". It throws errors while calling "loadBRFv5Model":

Uncaught Error: Invalid reserved bit
    at H (brfv5_js_tk290120_v5.1.3_trial.js:10)
    at T (brfv5_js_tk290120_v5.1.3_trial.js:10)
    at Object.z [as process] (brfv5_js_tk290120_v5.1.3_trial.js:10)
    at XMLHttpRequest.xhrBinary.onload (brfv5_js_tk290120_v5.1.3_trial.js:10)

brfv5-ios

Hi,
Thank you very much for your hard work. When will brfv5-ios be released?

Firefox memory leak in webworker example

When running the minimal__webworker__track_one_face.html example in Firefox, the webworker process gradually consumes more and more memory in the background. Eventually, the tracking becomes incredibly slow as too many resources have been consumed.

Here's a snapshot of my Mac's activity monitor after running the example for a few minutes, showing 2.38 GB memory consumption:
[activity monitor screenshot]

The other single-threaded examples tend to maintain ~300 MB memory usage. I also don't experience this issue in Chrome. I'm thinking this may be a bug in Firefox itself, but I'm posting here in case there is a simple implementation fix.

How to detect bangs?

Hello.

I used your API, but it is difficult to detect bangs.
Is there any way to detect bangs?
Should I pay for an upgraded version of the API?

Thanks.

Latency Issues

Hi,

I am using BRFv4 but having some latency issues. Is there a way to configure BRFv4 to track only 20 or so points, as this is all I need?
Also, it would be great to have user-definable points to track. For animation, I only need a third of the landmark points tracked; this would increase speed and reduce lag to my websocket.
If I purchased BRFv5, could I arrange to include the extra optical flow points? Their omission in BRFv4 is the only reason I won't upgrade.
David Knight
