3D face swapping implemented in Python

License: MIT License

FaceSwap

FaceSwap is an app that I originally created as an exercise for my students in "Mathematics in Multimedia" at the Warsaw University of Technology. The app is written in Python and uses face alignment, Gauss-Newton optimization and image blending to swap the face of the person seen by the camera with the face of a person in a provided image.

You will find a short presentation of the program's capabilities in the video below (click to go to YouTube).

How to use it

To start the program you will have to run a Python script named zad2.py (Polish for exercise 2). You need to have Python 3 and some additional libraries installed. Once Python is on your machine, you should be able to automatically install the libraries by running pip install -r requirements.txt in the repo's root directory.

You will also have to download the face alignment model from here: http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2 and unpack it to the main project directory.
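If you prefer to stay in Python for the unpacking step, the standard-library bz2 module can decompress the downloaded archive. This is a minimal sketch; unpack_bz2 is an illustrative helper, not part of the repo:

```python
import bz2
import shutil

def unpack_bz2(src, dst):
    """Decompress a .bz2 archive to dst using a streaming copy."""
    with bz2.open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout)

# Usage (after downloading the archive to the project directory):
# unpack_bz2("shape_predictor_68_face_landmarks.dat.bz2",
#            "shape_predictor_68_face_landmarks.dat")
```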

A faster and more stable version

A faster and more stable version of FaceSwap is available on Dropbox here. This new version is based on the Deep Alignment Network method, which is faster than the currently used method when run on a GPU and provides more stable and more precise facial landmarks. Please see the GitHub repository of Deep Alignment Network for setup instructions.

I hope to find time to include this faster version in the repo code soon.

How it works

The general outline of the method is as follows:

First we take the input image (the image of the person we want to see on our own face) and find the face region and its landmarks. Once we have those, we fit the 3D model to the landmarks (more on that later); the vertices of that model, projected into image space, will be our texture coordinates.

Once that is finished and everything is initialized, the camera starts capturing images. For each captured image the following steps are taken:

  1. The face region is detected and the facial landmarks are located.
  2. The 3D model is fitted to the located landmarks.
  3. The 3D model is rendered using pygame with the texture obtained during initialization.
  4. The image of the rendered model is blended with the image obtained from the camera using feathering (alpha blending) and very simple color correction.
  5. The final image is shown to the user.
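Step 4 (feathered alpha blending) can be sketched in NumPy: the binary face mask is softened with a small blur and then used as a per-pixel alpha. This is an illustrative version, not the repo's ImageProcessing code:

```python
import numpy as np

def feathered_blend(rendered, camera, mask, feather=11):
    """Alpha-blend `rendered` over `camera` with a softened mask.

    rendered, camera: HxWx3 float arrays in [0, 1]
    mask: HxW float array, 1 inside the face region, 0 outside
    feather: size of the box filter used to soften the mask edge
    """
    # Soften the binary mask with a separable box blur so the seam fades out.
    kernel = np.ones(feather) / feather
    soft = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 0, mask)
    soft = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, soft)
    alpha = soft[..., None]  # broadcast over the color channels
    return alpha * rendered + (1.0 - alpha) * camera
```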

The most crucial element of the entire process is the fitting of the 3D model. The model itself consists of:

  • the 3D shape (set of vertices) of a neutral face,
  • a number of blendshapes that can be added to the neutral face to produce mouth opening, eyebrow raising, etc.,
  • a set of triplets of indices into the face shape that form the triangular mesh of the face,
  • two sets of indices which establish correspondence between the landmarks found by the landmark localizer and the vertices of the 3D face shape.

The model is projected into the image space using the following equation:

s = a * P * (S_0 + w_1 * S_1 + ... + w_n * S_n) + t

where s is the projected shape, a is the scaling parameter, P consists of the first two rows of a rotation matrix that rotates the 3D face shape, S_0 is the neutral face shape, w_1, ..., w_n are the blendshape weights, S_1, ..., S_n are the blendshapes, t is a 2D translation vector and n is the number of blendshapes.
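The projection above can be written directly in NumPy. This is a sketch under the stated definitions; the function name and argument shapes are my own, not the repo's API:

```python
import numpy as np

def project_shape(a, P, t, S0, blendshapes, w):
    """Project the blended 3D shape into image space.

    a: scalar scale
    P: (2, 3) array, first two rows of a rotation matrix
    t: (2,) translation vector
    S0: (3, V) neutral shape with V vertices
    blendshapes: (n, 3, V) array of n blendshapes
    w: (n,) blendshape weights
    """
    S = S0 + np.tensordot(w, blendshapes, axes=1)  # (3, V) blended shape
    return a * (P @ S) + t[:, None]                # (2, V) projected shape
```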

The model fitting is accomplished by minimizing the difference between the projected shape and the localized landmarks. The minimization is performed with respect to the blendshape weights, scaling, rotation and translation, using the Gauss-Newton method.
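The Gauss-Newton idea can be illustrated on a toy version of this problem: fitting just a scale and a 2D translation so a template point set matches a target. The residual is linearized via its Jacobian and a least-squares step is taken each iteration. This is a hand-built example, not the repo's fitter:

```python
import numpy as np

def gauss_newton_fit(template, target, iters=5):
    """Fit p = (a, tx, ty) so that a*template + (tx, ty) ~= target.

    template, target: (2, V) point sets. Returns the fitted parameters.
    """
    p = np.array([1.0, 0.0, 0.0])          # initial guess: identity transform
    V = template.shape[1]
    for _ in range(iters):
        a, tx, ty = p
        model = a * template + np.array([[tx], [ty]])
        r = (model - target).ravel()        # stacked residuals: x's then y's
        J = np.zeros((2 * V, 3))            # Jacobian of r w.r.t. (a, tx, ty)
        J[:, 0] = template.ravel()          # dr/da
        J[:V, 1] = 1.0                      # tx shifts the x residuals
        J[V:, 2] = 1.0                      # ty shifts the y residuals
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p - step                        # Gauss-Newton update
    return p
```

Because this toy residual is linear in the parameters, the method converges in a single step; in the real fitting problem the rotation makes the residual nonlinear, so several iterations are needed.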

Licensing

The code is licensed under the MIT license; some of the data in the project is downloaded from third-party websites.

Contact

If you need help or found the app useful, do not hesitate to let me know.

Marek Kowalski [email protected], homepage: http://home.elka.pw.edu.pl/~mkowals6/


faceswap's Issues

How to interrupt the script?

The code is working really nicely, thanks for making it available! My problem is that I'm not able to interrupt or cancel the script. I'm using Anaconda to execute the zad2.py file, and Ctrl+C doesn't work.

My solution so far is to exit Anaconda, but it feels a bit heavy. There must be an easier solution?

Face texture.

Hi! I'm very surprised by the quality of your work! I wonder if it is possible to put a more abstract texture on the face. Is that possible?

mouth

Can the mouth not be swapped?

Why use a neutral 3D face model?

Why not just use the 68 landmarks of the real face from the camera, use Delaunay triangulation to form the face mesh to draw, and use the input face image's 68 landmarks as texture coordinates?

Error with predictor

Hello, I was trying to launch the code, but got stuck with this error: "Unexpected version found while deserializing dlib::shape_predictor." I have tried to use the one from the dlib website, but it didn't work out. What version of dlib did you use for this app? Maybe that can help.

How to make eye blinking

Thanks for the great work. It has been a lot of help for me.
Although I saw another issue about eye blinking, I couldn't implement it myself.
What you said was to remove the eye parts of the mesh (the triangles) that correspond to the eye regions.

  1. How do I remove the eye parts of the mesh in code?
    What I know is:
  • len(mesh) = 175, from:
    mean3DShape, blendshapes, mesh, idxs3D, idxs2D = utils.load3DFaceModel("candide.npz")
  • mesh means 'FACE LIST' in candide, as you mentioned.

I don't know where the eye parts are among the 175 entries. Could you give me advice, or an edited candide.npz model with the eye parts removed?

  2. len(mesh) is different from 'FACE LIST' in candide3.wfm. Why is that?
    This candide3.wfm is the latest version of the candide model (v3.1.6):
    len(mesh) = 175
    'FACE LIST' in candide3.wfm = 184
    What do you think? Maybe you deleted some parts from 'FACE LIST'?
    You did say your model is a 'processed version'.

  3. Your conversion code from .txt to .npz doesn't work.

  • conversion code
  • .txt
    Is the conversion code meant for your 'processed version' of the candide model?
  • error:
    Traceback (most recent call last):
      File "candide.py", line 192, in <module>
        tempMode = np.zeros((1, mean3.shape[0], mean3.shape[1]))
    IndexError: tuple index out of range

face moving while blinking

Hi,

  1. I've noticed a behavior where the whole face moves when I'm blinking.
    Can the blinking be ignored? What is causing this behavior?

  2. When the face on the camera is stable and not moving, the output is not stable and moves a little bit.
    I assume this is due to face detection running on every frame.
    How can I improve it and get a more stable output for the same input?

Thanks!

Hair Replace

Hi @MarekKowalski, congratulations on the excellent work (not to mention LiveScan3D, which is something indescribable), even if Angelina Jolie is not so attractive with my beard.

Now I have to do something similar to what you've done here: I have to add Angelina Jolie's hair to every single frame.
What do you think the best approach is?
Do you think updating your candide.npz with hair information makes sense?
I'm a little confused, and I want to avoid hair recognition because of the obvious presence of false positives and negatives.

Thank you in advance
Regards

Face Vibrating

Hi,
First of all great work done by you man!
Hats OFF!

The point I want to make here is that when I run the face swap on videos, the face is kind of vibrating or moving. I want to resolve that issue; could you please tell me how?

Thanks in advance

Slow rendering

Hi Marek,

Great work! We finally got your code running. Stunning results.
Unfortunately it's quite slow. The rendered image always shows up a few seconds late, and it's not smooth at all. We localized the main performance drop to the dlib.get_frontal_face_detector() call. We tried on both a Mac and a Windows machine.

Any ideas what we could do to improve the performance?
Thanks in advance.

google colab

Are there any plans for a Google Colab notebook that works headless? Colab also offers a free GPU in 12-hour cycles.

Multiple faces in source or target video

My target video (or source, for that matter) has 2 people in it. I have found that running the scripts swaps only 1 person's face, and in this case it was the wrong person. May I ask what the best way is to filter out unwanted faces and only swap the chosen person's face?

Tracking breaks with multiple faces

Hey!

Tracking breaks when attempting to track multiple faces on the stable/fast FaceSwap-DAN. To get multiple faces to work properly you have to recalculate every scene. (I'm attempting to swap a face.jpg with all of the faces in a scene.)

Here are the modifications I made to get multiple faces working (and swapping), although it becomes extremely slow and definitely not realtime:

https://gist.github.com/samhains/648ec70aab3d5a47920c95c5e0960ee3

Does anyone have ideas on fixing this in a more performant way?

How to create my own candide.npz

Hello, MarekKowalski. Thank you for this good work.
Now I want to swap a source face onto a target one. Should I create a candide.npz of the target face? Maybe 3D face reconstruction technology can offer useful information, such as the vertices, the mesh and so on.
Thank you in advance.

Overlay Transparency

Hello Mr. Kowalski,

First of all I have to say you made great work.

I have a question: is there a way to make the overlay more transparent? What I would like to achieve is not a hard overlay of another face onto the viewer, but rather to make the person look older. I thought of making the overlay a bit more transparent, so that the wrinkles etc. are still visible, to achieve a version morphed with the original face.

I would be very thankful for any support.

Best regards

Add wiki

May I know how to run this code? Which file should I run?

headless version

Hi! I'm trying to create a headless version of this by replacing PyOpenGL with ModernGL. I'm planning to use shaders to transfer the texture to the facial 3D model. However, it's difficult since I'm new to 3D space and I'm only familiar with dlib. I hope you can answer these questions about the parameters you used so I can understand more.

  1. What is textureCoords? How does it relate to a generic Candide .obj 3D model and the candide mesh? Can I use a candide.obj instead of the candide.npz?
    I understand that shapes2D is the list of facial keypoints for all detected faces.
  2. What is modelParams?

Thank you so much!

Explain about parameters of Candide face model

I saw the function you wrote to read the parameters from candide.npz:

import numpy as np

def load3DFaceModel(filename):
    # Load the arrays stored in the .npz face model file
    faceModelFile = np.load(filename)
    mean3DShape = faceModelFile["mean3DShape"]
    mesh = faceModelFile["mesh"]
    idxs3D = faceModelFile["idxs3D"]
    idxs2D = faceModelFile["idxs2D"]
    blendshapes = faceModelFile["blendshapes"]
    mesh = fixMeshWinding(mesh, mean3DShape)  # ensure consistent triangle winding
    return mean3DShape, blendshapes, mesh, idxs3D, idxs2D

It seems that the content of the file candide.npz is different from the candide file from http://www.icg.isy.liu.se/candide/candide3.wfm.
Can you explain the meaning of the mean3DShape, mesh, idxs3D, idxs2D and blendshapes arrays?
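As a partial answer, one can at least see which arrays candide.npz contains by listing their names and shapes with NumPy. This is generic .npz inspection, not specific to this repo:

```python
import numpy as np

def describe_npz(path):
    """Return {array_name: shape} for every array stored in an .npz file."""
    with np.load(path) as data:
        return {name: data[name].shape for name in data.files}

# e.g. describe_npz("candide.npz") should list mean3DShape, blendshapes,
# mesh, idxs3D and idxs2D along with their shapes.
```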

cv2.error: OpenCV(4.5.2-dev) :-1: error: (-5:Bad argument) in function 'pointPolygonTest'

Hi,

I'm actually testing FaceSwap with the following environment:

➜  FaceSwap lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.2 LTS
Release:	20.04
Codename:	focal
➜  FaceSwap python --version
Python 3.8.5
➜  FaceSwap pip show tensorflow
Name: tensorflow
Version: 2.6.0
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: [email protected]
License: Apache 2.0
Location: ~/.local/lib/python3.8/site-packages
Requires: google-pasta, flatbuffers, wrapt, opt-einsum, termcolor, h5py, keras-preprocessing, grpcio, numpy, wheel, absl-py, tensorflow-estimator, gast, astunparse, typing-extensions, protobuf, six, tensorboard, keras-nightly
Required-by: fawkes
➜  FaceSwap apt show opencv
Package: opencv
Version: 4.5.2-6
Status: install ok installed
Priority: extra
Section: checkinstall
Maintainer: root@myVision
Installed-Size: 252 MB
Provides: build
Download-Size: unknown
APT-Manual-Installed: yes
APT-Sources: /var/lib/dpkg/status
Description: Package created with checkinstall 1.6.3

However, I got the following ERROR message.

➜  FaceSwap python zad2.py 
pygame 2.0.1 (SDL 2.0.14, Python 3.8.5)
Hello from the pygame community. https://www.pygame.org/contribute.html
Press T to draw the keypoints and the 3D model
Press R to start recording to a video file
Loading network...
Input shape: (None, 1, 112, 112)
WARNING (theano.tensor.blas): We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.
[ WARN:0] global ....../opencv/modules/videoio/src/cap_gstreamer.cpp (1081) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
Traceback (most recent call last):
  File "zad2.py", line 66, in <module>
    cameraImg = ImageProcessing.blendImages(renderedImg, cameraImg, mask)
  File "....../face/FaceSwap/FaceSwap-DAN-public/FaceSwap/ImageProcessing.py", line 17, in blendImages
    dists[i] = cv2.pointPolygonTest(hull, (maskPts[i, 0], maskPts[i, 1]), True)
cv2.error: OpenCV(4.5.2-dev) :-1: error: (-5:Bad argument) in function 'pointPolygonTest'
> Overload resolution failed:
>  - Can't parse 'pt'. Sequence item with index 0 has a wrong type
>  - Can't parse 'pt'. Sequence item with index 0 has a wrong type

Any suggestions?

Cheers
Pei

face reading 3D...

glMatrixMode(GL_PROJECTION)
  File "C:\Users\vallem.balu\Anaconda3\envs\python_apis\lib\site-packages\OpenGL\error.py", line 234, in glCheckError
    baseOperation = baseOperation,
OpenGL.error.GLError: GLError(
    err = 1282,
    description = b'invalid operation',
    baseOperation = glMatrixMode,
    cArguments = (GL_PROJECTION,)
)

Has anyone seen this type of error?

How can I run zad2.py without a camera?

Hi there,

Can I swap two faces from two images with zad2.py, without using the camera?
I mean, I have two images named im1 and im2, and I want to swap the faces between them.

If I can do that using zad2, how?

Question mean3DShape

Maybe a silly question, but how did you calculate mean3DShape? The values are not the same as the vertices in the .wfm file.

DAN, 3d Model and eyes question

Hi,
Great job and thank you for sharing!
I have some follow-up questions after working with your project:

  1. I was trying the DAN version, but I was unable to make it work with the GPU. Should it be enough to tell Theano to use the GPU via THEANO_FLAGS='device=cuda,floatX=float32'?
    http://deeplearning.net/software/theano/tutorial/using_gpu.html

  2. In order to get the best results I want to edit the 3D mesh model. Should the model match the source face or the target's?

  3. I want to implement eye blinking. Could you advise what the best approach for that is?

Thanks in advance!

Modifying the face's mesh

Hello @MarekKowalski,

I asked you a month ago about how to blend a portion of the face, and you responded that I need to save and modify the mesh using MeshLab. In order to do that, which mesh do I need to save: blendshapes or mesh? And how am I supposed to save it into an .obj file? I tried to open the mesh file in candide.npz, but MeshLab couldn't open it. Can you give me a quick guide? I'm new to manipulating meshes and working with 3D software.

Thanks.

Blend a portion of the face's image

Hello @MarekKowalski, Thank you for this good work it is really impressive.
Now I want to try making only part of the face appear, for example only the eyes, and not the whole face. Say I want to crop from the nose down to the jaw to show only the moustache and beard. Is there a way to do that? I found myself stuck and might need help.
Thank you in advance.

Can I run it on a server without pygame and a desktop?

I'm new to OpenGL and pygame. The server is a CentOS 6 machine with no desktop installed, and I failed to install pygame after many difficulties.

When I comment out the pygame parts and run python zad2.py, it goes wrong and renderedImg is an all-zero array.

Is there any way that I can run it successfully?

About gesture control and masks

Thanks for your work. I want to use gesture control to trigger the face swap; is that possible?
And can it swap a face with something like an opera mask?
If so, how would I implement it? Can you point me to a paper about this software? Thank you.

Real Time?

@MarekKowalski will this work in real time scenarios for both pictures and short videos? What would be the training time requirements for the model to generate optimum results?
