
rubikplayer / flame-fitting

Example code for the FLAME 3D head model. The code demonstrates how to sample 3D heads from the model, fit the model to 3D keypoints and 3D scans.

Home Page: http://flame.is.tue.mpg.de/

Python 79.33% Makefile 0.14% C++ 9.83% Cython 9.81% Shell 0.89%
face-model morphable-model computer-graphics computer-vision flame-model flame-fitting chumpy smpl-x face-alignment 3d-face-alignment

flame-fitting's Introduction

FLAME: Articulated Expressive 3D Head Model

This is an official FLAME repository.

We also provide Tensorflow FLAME and PyTorch FLAME frameworks, and code to convert from Basel Face Model to FLAME.

FLAME is a lightweight and expressive generic head model learned from over 33,000 accurately aligned 3D scans. FLAME combines a linear identity shape space (trained from head scans of 3800 subjects) with an articulated neck, jaw, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. For details please see the scientific publication

Learning a model of facial shape and expression from 4D scans
Tianye Li*, Timo Bolkart*, Michael J. Black, Hao Li, and Javier Romero
ACM Transactions on Graphics (Proc. SIGGRAPH Asia) 2017

and the supplementary video.

This codebase demonstrates

  • Sampling: Load and evaluate FLAME model for random parameters
  • Landmark fitting: Fit FLAME to 3D landmarks
  • Scan fitting: Fit FLAME to a 3D scan

Set-up

The code has been tested with Python 3.6.9.

Clone the git project:

git clone https://github.com/Rubikplayer/flame-fitting.git

Install pip and virtualenv:

sudo apt-get install python3-pip python3-venv

Set up virtual environment:

mkdir <your_home_dir>/.virtualenvs
python3 -m venv <your_home_dir>/.virtualenvs/flame-fitting

Activate virtual environment:

cd flame-fitting
source <your_home_dir>/.virtualenvs/flame-fitting/bin/activate

Make sure your pip version is up-to-date:

pip install -U pip

Some requirements can be installed using:

pip install -r requirements.txt

Install mesh processing libraries from MPI-IS/mesh within the virtual environment.

The scan-to-mesh distance used for fitting a scan depends on Eigen. Either download Eigen from the Eigen website OR clone the repository:

git clone https://gitlab.com/libeigen/eigen.git

After downloading Eigen, you need to compile the code in the directory 'sbody/alignment/mesh_distance'. To do this, go to the directory:

cd sbody/alignment/mesh_distance

Edit the file setup.py to set EIGEN_DIR to the location of Eigen. Then type:

make
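
For reference, the edit in setup.py amounts to pointing the EIGEN_DIR variable at the Eigen sources; a minimal sketch (the path below is only an example):

# in sbody/alignment/mesh_distance/setup.py
EIGEN_DIR = '/path/to/eigen'  # location of the downloaded or cloned Eigen directory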

Data

To download the FLAME model, sign up and agree to the model license under MPI-IS/FLAME. Then run the following script:

./fetch_FLAME.sh

Demo

  • Load and evaluate FLAME model: hello_world.py
  • Fit FLAME to 3D landmarks: fit_lmk3d.py
  • Fit FLAME to a 3D scan: fit_scan.py

Fitting a scan requires the scan and the FLAME model to be in the same local coordinate system. The fit_scan.py script provides a scale_unit variable to convert the scan from meters [m] (default), centimeters [cm], or millimeters [mm]. Please specify the correct unit when running fit_scan.py. If the measurement unit is unknown, choose scale_unit = 'NA'.
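
For reference, loading and sampling the model (the first demo, hello_world.py) looks roughly like the following; this is a minimal sketch that assumes the chumpy-based loader in smpl_webuser and an example model path:

import numpy as np
from smpl_webuser.serialization import load_model

model = load_model('./models/generic_model.pkl')  # example path to the downloaded FLAME model

# randomize pose, identity shape, and expression parameters
model.pose[:] = np.random.randn(model.pose.size) * 0.02
model.betas[:300] = np.random.randn(300) * 0.6    # identity shape
model.betas[300:350] = np.random.randn(50) * 0.6  # expression

vertices = model.r  # (5023, 3) posed mesh vertices
faces = model.f     # triangle indices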

Landmarks

The provided demos fit FLAME to 3D landmarks or to a scan, using 3D landmarks for initialization and during fitting. Both demos use the same set of 51 landmarks, and providing the landmarks in this exact order is essential. The landmarks can, for instance, be obtained with MeshLab using the PickPoints module. PickPoints outputs a .pp file containing the selected points. The .pp file can be loaded with the provided 'load_picked_points(fname)' function in fitting/landmarks.py.
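
For reference, a minimal usage sketch of loading such a landmark file, assuming a .pp file exported from MeshLab (the file name below is only an example):

from fitting.landmarks import load_picked_points

lmk_3d = load_picked_points('./data/scan_picked_points.pp')  # example file exported by PickPoints
print(lmk_3d.shape)  # expected (51, 3): one 3D point per landmark, in the required order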

Citing

When using this code in a scientific publication, please cite FLAME

@article{FLAME:SiggraphAsia2017,
  title = {Learning a model of facial shape and expression from {4D} scans},
  author = {Li, Tianye and Bolkart, Timo and Black, Michael. J. and Li, Hao and Romero, Javier},
  journal = {ACM Transactions on Graphics, (Proc. SIGGRAPH Asia)},
  volume = {36},
  number = {6},
  year = {2017},
  url = {https://doi.org/10.1145/3130800.3130813}
}

License

The FLAME model is under a Creative Commons Attribution license. By using this code, you acknowledge that you have read the terms and conditions (https://flame.is.tue.mpg.de/modellicense.html), understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not use the code. You further agree to cite the FLAME paper when reporting results with this model.

Supported projects

Visit the FLAME-Universe for an overview of FLAME-based projects.

FLAME supports several follow-up projects. For example, FLAME is part of SMPL-X: a new joint 3D model of the human body, face and hands together.

Acknowledgement

Code in smpl_webuser originates from the SMPL Python code, and code in sbody originates from SMALR. We thank the authors for making these code packages available.

flame-fitting's People

Contributors

rubikplayer, timobolkart


flame-fitting's Issues

How to obtain landmark embedding?

Hi, thank you once again for the great work.
How did you obtain the embedding landmarks for your FLAME model?
What does lmk_face_idx mean in the landmark embedding?
Lastly, can I extend this method to fit the SMPL model to a whole-body scan, e.g. using the body joints as embedded landmarks?
Thank you; looking forward to your reply.
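
For context, this kind of landmark embedding stores, per landmark, a triangle index (lmk_face_idx) and barycentric weights (lmk_b_coords) on the FLAME mesh. A minimal sketch of how landmark positions are recovered from such an embedding (the NumPy reconstruction below is an illustration, not the repository's exact code):

import numpy as np

def landmarks_from_embedding(verts, faces, lmk_face_idx, lmk_b_coords):
    # verts: (V, 3) mesh vertices; faces: (F, 3) triangle vertex indices
    # lmk_face_idx: (L,) triangle index per landmark; lmk_b_coords: (L, 3) barycentric weights
    tri_corners = verts[faces[lmk_face_idx]]  # (L, 3, 3): the three corners of each landmark's triangle
    return np.einsum('lc,lcd->ld', lmk_b_coords, tri_corners)  # weighted corner combination -> (L, 3)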

Bad model performance (fitting)

First of all, thank you for this amazing work.

I am getting bad fitting results. I want to do retargeting, so I am aiming to obtain the FLAME shape parameters, and I was hoping to use this repository to obtain a very close head shape.
Since I want to build a new head that looks very similar to a head in a neutral position, I removed the expression and pose parameters from the chumpy optimization, so that only the shape parameters are optimized and the others are forced to stay at zero:

# instead of: free_variables = [model.trans, model.pose, model.betas[used_idx]]
free_variables = [model.trans, model.betas[:300]]
ch.minimize(fun=objectives,
            x0=free_variables,
            method='dogleg',
            callback=on_step,
            options=opt_options)

But the results I am getting are very bad:
Objective:
https://github.com/Rubikplayer/flame-fitting/assets/136068442/c2410bf1-6fdf-47e7-9d22-df5a08973eb5

Fitting Result:
https://github.com/Rubikplayer/flame-fitting/assets/136068442/fd74f868-996b-4976-ade3-afd653fbd1ee

Would you say they are the same person?

Am I doing something wrong? Is this the expected quality of this approach? Thanks

Mediapipe embeddings

Hey there,

Thank you very much for your work all these years!
I would like to ask about FLAME's mediapipe embeddings. Mediapipe's face mesh detection (and face landmark detection) outputs 474 landmarks. FLAME's mediapipe embeddings provide 105 landmarks. Is there any correspondence between those two?

Thanks for your time

The training dataset

Hi @Rubikplayer,

Thanks for making it open.

I want to ask about the input. I'm a student and very interested in your FLAME project. I'm working with your code now, and I wonder if you could share your training code with me, or tell me how to input one of my own pictures and obtain a model of myself.

I would appreciate it very much if you can give me some suggestions.
Looking forward to your reply!

Regarding 3d landmark points.

Thank you for your useful work.
I want to get 3D landmark points.
Is there any code that creates a 3D landmark file of shape (51, 3)?

Thank you

Fit only shape model and blendshapes

@TimoBolkart
Hi!
I want to use the eos library for my needs using a FLAME model.
For this purpose I need a separate array of the shape model containing all head components. I mean, the fit uses a basis of 300 head components, and that is what I want to get.
It seems that I can get what I need with this code:
all_shapes = model.shapedirs[:,:,0:300]
This array has shape (5023, 3, 300). This should mean that there are 300 models in the array, each with 5023 points.
But when I try to plot only one model, for example:
all_shapes = model.shapedirs[:,:,0:300]
mesh = all_shapes [: , :, 0]

I am getting the result shown in the attached image.

When I draw the model derived from your example
model.pose[:] = np.random.randn(model.pose.size) * 0.02
model.betas[:300] = np.random.randn(300) * 0.6
model.betas[300:350] = np.random.randn(50) * 0.6
mesh = model.r

Everything works fine (see the attached image).

Please advise how I can achieve the desired result and get an array of 3D models. Also, I really want to understand how I can get the eigenvalues, eigenvectors, and orthonormal basis. Will I be able to get similar objects for the blendshapes?

Thank you!!!
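
For context, shapedirs in SMPL-style models stores per-vertex offsets from the mean template, so a single slice is an offset field rather than a complete head. A minimal visualization sketch, continuing the snippets above and assuming the loaded model exposes a v_template attribute (as in the SMPL Python code this repository builds on):

all_shapes = model.shapedirs[:, :, 0:300]  # (5023, 3, 300): per-vertex offsets, one slice per shape component
offsets = all_shapes[:, :, 0]              # offsets only; plotting this alone does not give a head
mesh = model.v_template + 3.0 * offsets    # add the (scaled) offsets to the template to see the component's effect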

how to interpolate between different sets of FLAME params?

Thanks for your excellent work. I want to ask a question about FLAME params.
If I have two sets of FLAME params (with possibly different shapes, expressions, poses), how can I get an "intermediate" set of params, which can represent a mesh that is in between the original two meshes?
I tried using the arithmetic average of the two sets of params as a new set of params to generate a FLAME mesh, but the mesh seems to be broken. Would some regularization help, or is this not the correct way to do it?

[Question] landmarks order in lmk_3d

Hi, thank you very much for sharing your great work.
I have a question about the input format:
In lmk_3d, what is the order of the landmarks in the array? Is lmk_3d[0] landmark 17 in the image, lmk_3d[1] landmark 18, etc.?
Thank you,
Avi

How to do registration?

Hello,
Thanks for this nice code!
I want to ask how to do registration with FLAME. Or, where can I find some code for data registration?
Thank you for taking the time out of your busy schedule to look at this issue!

Issue with fit_scan.py

Great work. The demo fit_scan.py worked well.
But I tried fit_scan.py with a custom 3D face mesh along with its 3D landmarks, and the output FLAME model is not fitted properly (no errors, and the optimization completed successfully within 20 seconds). The output looks like a generic face mesh with no adaptation/similarity to the input mesh. Do you have any idea about this? I used a PRNet face mesh.
Thanks

Gaussian distributions and optimization problem

Hi, thanks for making it open. I wonder if there are Gaussian distributions for the shape parameters and expression parameters. The demo you offer treats them all as having the same weight.
By the way, when I fit FLAME to 68 2D landmarks on a picture, it seems that when I use all 400 parameters to minimize the fitting energy, the result is not good. When I use a smaller set of parameters, the result gets better. I realize that FLAME may work better for point-cloud fitting because of the additional constraints. Can you give me some idea of how to optimize it on 2D landmarks, especially when expression parameters are involved?

Can we use fit_scan.py to register our scan data to FLAME?

Hello. Thank you for your great work first of all.
I want to fit my 3D scan data to the FLAME model. I have marked 51 feature points on the scan data; can we use fit_scan.py to register the scan data to FLAME, instead of first fitting 2D/3D facial markers and then fitting the 3D (depth) surface using the ICP algorithm? I would appreciate any thoughts on this question. Thanks!

How to fit a BFM07 front-face model?

Hi, I have a mesh generated by the deep3dface reconstruction model. It has a texture and BFM07 topology. I want to fit it to FLAME and use the same texture on the FLAME model. Is that possible?

model.weights format

Hello,

Thanks again for this wonderful work!

I understand that model.weights has shape (5023, 5). I wanted to know what each of those 5 weights per vertex represents.

Vertex index of each part

Is it possible to provide lists of vertex indices for the different parts, such as the eyes, jaw, and head? I could do it myself, but it would take a lot of time.

In other words, do you have any suggestions for convenient methods or tools to do this? I would really appreciate it if such a list were provided by the project owners.

Thanks.

how to normalize lmks3d?

Good job! I want to know how to normalize the input lmk_3d. The provided lmk_3d works well (first attached image), but when I use lmk_3d * 100, the result is bad (second attached image).

Problems with fit_scan.py

Hello. Thank you for your great work first of all.
I want to fit my 3D scan data to the FLAME model. I first manually marked 51 feature points with MeshLab (first picture), and after getting the .pp file, I used the load_picked_points function to convert it into an .npy file. Then I used the 'NA' scale_unit setting and got the following results (second picture). I don't know if I did it right, and I am wondering whether I have to make other changes; do you have any good suggestions? I have two questions:

  1. What is the relationship between scan_lmks.npy and flame_static_embedding.pkl? For example, what is the relationship between the 51 point coordinates in the .npy file and the idx / coords entries in the .pkl file?
  2. Do I need to write a flame_static_embedding.pkl corresponding to my own 3D scan data? I don't know how to determine the values of lmk_face_idx and lmk_b_coords; what should I do? Would you clarify in more detail?

landmarks extraction

I'm trying to register the FLAME model to the BU-4DFE dataset, but I couldn't automatically extract accurate landmarks from the scans. Can you please share the method you used for automatic landmark extraction from the scans?
Thank you.

General question: model fitting

Does it make sense to fit the scan data to the model twice?
For example, first keeping the shape weights more strongly regularized in the hope of getting a better initial alignment, and then, once the alignment looks right, relaxing the regularization to get a tighter fit.

How to generate matching texture

I was using fit_scan.py to fit my 3dMD model, and I noticed that the Headspace dataset also used a similar method, but its FLAME data did not come with textures. I can easily generate a fitted model with fit_scan.py, but 3dMD's textures rely on BMP files. I noticed that there is an example of generating a model with materials from a frontal photo in a FLAME git repository. Is there any way to generate a texture for the model produced by fit_scan.py?

How to bring optimized parameters into the FLAME Blender model?

I inserted the beta and pose parameters into the FLAME Blender add-on model.
However, the mesh was not exactly the same as the FLAME LBS result. It seems "pose_params" is not working correctly.

I suppose that "pose_params" has 6 float values: a Rodrigues vector (x, y, z) for "root" and one for "jaw", in that order.
Therefore, I simply converted each into Euler angles (in radians) and set the vector for each bone (as rotation_euler) on the Blender FLAME model.

Do you have any idea to solve it?
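
For reference, a minimal sketch of the Rodrigues-to-Euler conversion described above, using SciPy (whether the Blender FLAME add-on expects this exact axis convention is an assumption):

import numpy as np
from scipy.spatial.transform import Rotation as R

pose_params = np.zeros(6)  # example values: [root_x, root_y, root_z, jaw_x, jaw_y, jaw_z]
root_rotvec, jaw_rotvec = pose_params[:3], pose_params[3:6]

# Rodrigues (axis-angle) vector -> Euler angles in radians, XYZ order
root_euler = R.from_rotvec(root_rotvec).as_euler('xyz')
jaw_euler = R.from_rotvec(jaw_rotvec).as_euler('xyz')
print(root_euler, jaw_euler)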

Licensing of code

Hello, thanks for sharing this code! I have a question regarding the license. From the README I understood that the license of the code is the same as that of the FLAME model (that is, if I accepted the terms of the FLAME model, then I can also use this code). However, inside the code it says that the code is proprietary. Can we use the code in the same way as we use the FLAME model, or are there additional restrictions?

How to crop the FLAME model?

Thanks for your model! I want to crop the face region from FLAME and use it like SFM or BFM. Is this possible, and could you give me some advice?

How to get head geometry as well in flame object

I have a 3D scan that also has hair geometry over the head. But when converting that scan to FLAME, the hair geometry and the expressions are lost. Could you please point out the reason or a solution for this? I am attaching the scan file and the obtained FLAME .obj.

How to do 2D landmark loss minimize?

Thank you for your excellent code! I wonder why you strongly recommend fitting to 2D landmarks but implemented a 3D landmark loss. I tried to implement a 2D loss, using camera calibration as suggested in the paper, and transformed the 3D coordinates to pixel coordinates to build a loss function. But it seems that chumpy cannot compute the gradient. What is the problem? How should I modify x0?

TF flame-fitting code

Hello!!

I tried to fit my 3D scan data (Azure Kinect DK) to FLAME. However, it takes a lot of time. When I check system resources, the flame-fitting process only uses the CPU. I want to use the GPU to reduce the fitting time, so I wonder if you have a TensorFlow version of the flame-fitting code that can use the GPU. I have already checked the other fitting code in TF_FLAME; the mesh fitting code there can be adapted to FLAME's 5023-vertex data (https://github.com/TimoBolkart/TF_FLAME: fit_3D_mesh.py).

Thank you.

Regarding the issue of expression parameters.

Hello! I fit my scan data to FLAME. How can I optimize the expression and jaw parameters so that my data (see the attached picture) generates new expressions? Can this be achieved with fit_scan.py, or does it require other work? Do you have any good suggestions? Looking forward to your reply, thanks!

Close eyes situation

@TimoBolkart
Hi,
Thanks for your great work. The objective weights you provided make the fitting result good most of the time. But when the eyes are closed, the eyes of the fitting result are open. Although I tried to tune the weights and add an eye-distance objective, the result still has open eyes.
Here is the implementation of the eye-distance objective:

import numpy as np
import chumpy as ch
# also requires the repo's mesh_points_by_barycentric_coordinates helper

def eyes_distance(lmk):
    # left eye:  [21,22], [25,24] - 1
    # right eye: [27,28], [31,30] - 1
    eye_up = lmk[[20, 21, 26, 27], :]
    eye_bottom = lmk[[24, 23, 30, 29], :]
    dis = ch.array(eye_up - eye_bottom)
    return dis

# -----------------------------------------------------------------------------

def eyes_landmark_error_3d(mesh_verts, mesh_faces, lmk_3d, lmk_face_idx, lmk_b_coords, weight=1.0):
    """ function: 3d eye-distance error objective """
    # select corresponding vertices
    v_selected = mesh_points_by_barycentric_coordinates(mesh_verts, mesh_faces, lmk_face_idx, lmk_b_coords)
    lmk_num = lmk_face_idx.shape[0]
    # an index to select which landmarks to use
    lmk_selection = np.arange(0, lmk_num).ravel()  # use all
    # residual vectors: difference between fitted and target eye distances
    lmk3d_obj = weight * (eyes_distance(v_selected[lmk_selection]) - eyes_distance(lmk_3d[lmk_selection]))

    return lmk3d_obj
# -----------------------------------------------------------------------------

Any help would be appreciated!

Is it possible to register the FLAME head model to a 3d face scan?

I have a collection of 3D face scans, produced by a device equipped with a Kinect or Structure sensor. Each scan only covers the facial area, including the two ears and part of the neck, but not the back of the head or the neck.

How can I register those scans with the FLAME model in order to get head meshes in the same topology as FLAME? I would appreciate any thoughts on this question. Thanks!

Fitting more than 51 landmarks in fit_lmk3d

Hi, thanks for your great work! Currently I am testing fitting with 3D landmarks. I have noted that the demo uses 51 landmarks, and it works well. How can I extend the number of landmarks beyond 51, especially to include landmarks on the ears? I have made the .pp file using MeshLab, but the result is not good. Can you give some advice? Thanks!

small clarification regarding model.r

Hi, thanks for making it open.
Before asking my question: there is a small mismatch between the names of the downloaded models and those used in the demo code. It is easy to work out; I just wanted to let you know.
I have a small question regarding the generation of "model.r": I don't see it anywhere in the code, nor in the data loaded from the model's pickle file. I can see model.f in the pickle file, but I am unclear about the structure of the model.r object.
Thanks.

Is it possible to fit scan mesh without landmark?

Thanks for your work!
I am using your code fit_scan.py to fit a mesh with arbitrary topology. It's hard to generate landmarks for all inputs. Is there any way to fit it without landmarks please?

fit to 3D scan

Thank you very much for providing high-quality work. I am a beginner at FLAME and would like to ask some questions.
I want to know the difference between "Fit FLAME to a 3D scan" and "Fit the 3D model to registered 3D meshes". Is the 3D model FLAME, and is the 3D scan an unregistered 3D mesh?

ImportError: cannot import name 'sample2meshdist'

Thank you for your great work.
I get an error when I run your code. Could you help me with it? Thank you very much.

File "e:\flame-fitting\sbody\alignment\mesh_distance\mesh_distance.py", line 4, in
from sbody.alignment.mesh_distance import sample2meshdist
ImportError: cannot import name 'sample2meshdist'
