FaceSwap

FaceSwap is an app that I originally created as an exercise for my students in "Mathematics in Multimedia" at the Warsaw University of Technology. The app is written in Python and uses face alignment, Gauss-Newton optimization and image blending to swap the face of a person seen by the camera with the face of a person in a provided image.

You will find a short presentation of the program's capabilities in the video below (click to go to YouTube).

How to use it

To start the program, run the file named zad2.py (Polish shorthand for "exercise 2"), which requires:

  • Python 2.7 (I recommend Anaconda)
  • OpenCV (I used 2.4.13)
  • Numpy
  • dlib
  • pygame
  • PyOpenGL

You can download all of the libraries above either via pip or from Christoph Gohlke's excellent website: http://www.lfd.uci.edu/~gohlke/pythonlibs/

You will also have to download the face alignment model from here: http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2 and unpack it to the main project directory.

A faster and more stable version

A faster and more stable version of FaceSwap is available on Dropbox here. This new version is based on the Deep Alignment Network method, which is faster than the currently used method if run on a GPU and provides more stable and more precise facial landmarks. Please see the GitHub repository of Deep Alignment Network for setup instructions.

I hope to find time to include this faster version in the repo code soon.

How it works

The general outline of the method is as follows:

First we take the input image (the image of the person whose face we want to see on our own) and find the face region and its landmarks. Once we have that, we fit the 3D model to those landmarks (more on that later); the vertices of that model projected into image space will be our texture coordinates.
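The texture-coordinate idea can be illustrated with a small sketch (this is not the repository's actual code; the function name and array shapes are assumptions): projected vertex positions in pixel space are normalized by the image size to obtain UV coordinates.

```python
import numpy as np

def texture_coords(projected, width, height):
    """Convert projected vertex positions (a (2, V) array, in pixels)
    into normalized [0, 1] texture coordinates of shape (V, 2).
    Illustrative sketch only."""
    u = projected[0] / float(width)
    v = projected[1] / float(height)
    return np.stack([u, v], axis=1)
```

For example, a vertex projected to pixel (100, 50) in a 200x100 image maps to UV (0.5, 0.5).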

Once that is finished and everything is initialized, the camera starts capturing images. For each captured image the following steps are taken:

  1. The face region is detected and the facial landmarks are located.
  2. The 3D model is fitted to the located landmarks.
  3. The 3D model is rendered using pygame with the texture obtained during initialization.
  4. The image of the rendered model is blended with the image obtained from the camera using feathering (alpha blending) and very simple color correction.
  5. The final image is shown to the user.
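The blending in step 4 can be sketched in NumPy. This is an illustrative stand-in, not the app's actual code: the box blur here is only a simple way to feather the mask edges, and the simple color correction step is omitted.

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur, used here to feather the mask edges."""
    kernel = np.ones(k) / k
    img = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, img)
    img = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, img)
    return img

def feather_blend(rendered, camera, mask, feather=5):
    """Alpha-blend the rendered face into the camera frame.
    rendered, camera: (H, W, 3) float images in [0, 1];
    mask: (H, W), 1.0 inside the rendered face region, 0.0 outside."""
    alpha = box_blur(mask.astype(float), feather)[..., None]  # soft edges
    return alpha * rendered + (1.0 - alpha) * camera
```

Softening the binary mask before blending is what hides the seam between the rendered face and the camera image.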

The most crucial element of the entire process is the fitting of the 3D model. The model itself consists of:

  • the 3D shape (set of vertices) of a neutral face,
  • a number of blendshapes that can be added to the neutral face to produce mouth opening, eyebrow raising, etc.,
  • a set of triplets of indices into the face shape that form the triangular mesh of the face,
  • two sets of indices which establish correspondence between the landmarks found by the landmark localizer and the vertices of the 3D face shape.
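A minimal sketch of how such a model might be represented (the names and array shapes are assumptions for illustration, not the repository's actual layout):

```python
import numpy as np

# Toy sizes: V vertices, n blendshapes, L landmark correspondences.
V, n, L = 6, 2, 4

model = {
    # neutral face shape: x, y, z coordinates for each vertex
    "mean_shape": np.zeros((3, V)),
    # blendshapes added to the neutral shape (mouth opening, etc.)
    "blendshapes": np.zeros((n, 3, V)),
    # triangular mesh: triplets of vertex indices
    "mesh": np.array([[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]),
    # correspondence between localized landmarks and 3D vertices:
    # landmark idxs_2d[i] matches vertex idxs_3d[i]
    "idxs_2d": np.arange(L),
    "idxs_3d": np.arange(L),
}
```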

The model is projected into the image space using the following equation:

s = a P (S_0 + \sum_{i=1}^{n} w_i S_i) + t

where s is the projected shape, a is the scaling parameter, P holds the first two rows of a rotation matrix that rotates the 3D face shape, S_0 is the neutral face shape, w_1, ..., w_n are the blendshape weights, S_1, ..., S_n are the blendshapes, t is a 2D translation vector and n is the number of blendshapes.
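The projection can be written directly in NumPy (a sketch under the symbol definitions above, in modern Python 3 syntax rather than the project's Python 2.7):

```python
import numpy as np

def project_shape(a, P, t, S0, blendshapes, w):
    """s = a * P @ (S0 + sum_i w_i * S_i) + t
    a: scalar scale, P: (2, 3) top two rows of a rotation matrix,
    t: (2, 1) translation, S0: (3, V) neutral shape,
    blendshapes: (n, 3, V), w: (n,) weights. Returns (2, V)."""
    S = S0 + np.tensordot(w, blendshapes, axes=1)  # deformed 3D shape
    return a * (P @ S) + t
```

With w = 0 and P taken from the identity rotation, the result is simply the scaled, translated xy-coordinates of the neutral face.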

The model fitting is accomplished by minimizing the difference between the projected shape and the localized landmarks. The minimization is accomplished with respect to the blendshape weights, scaling, rotation and translation, using the Gauss Newton method.
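A generic Gauss-Newton iteration can be sketched as follows; this is a toy version with a numerical Jacobian, and the app's own fitting code surely differs in the details:

```python
import numpy as np

def gauss_newton(residual_fn, params, n_iters=20, eps=1e-6):
    """Minimize ||residual_fn(params)||^2 by repeatedly linearizing
    the residual and solving the resulting least-squares problem.
    residual_fn: maps (p,) params to an (m,) residual vector."""
    params = np.asarray(params, dtype=float).copy()
    for _ in range(n_iters):
        r = residual_fn(params)
        # numerical Jacobian of the residual, one column per parameter
        J = np.empty((r.size, params.size))
        for j in range(params.size):
            bumped = params.copy()
            bumped[j] += eps
            J[:, j] = (residual_fn(bumped) - r) / eps
        # Gauss-Newton step: solve J delta = -r in the least-squares sense
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        params += delta
        if np.linalg.norm(delta) < 1e-10:
            break
    return params
```

In the app, residual_fn would be the difference between the projected shape and the localized landmarks, and params would stack the blendshape weights, scaling, rotation and translation.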

Licensing

The code is licensed under the MIT license; some of the data in the project is downloaded from third-party websites.

Contact

If you need help or found the app useful, do not hesitate to let me know.

Marek Kowalski [email protected], homepage: http://home.elka.pw.edu.pl/~mkowals6/
