cleardusk / 3DDFA
The PyTorch improved version of the TPAMI 2017 paper: Face Alignment in Full Pose Range: A 3D Total Solution.
License: MIT License
Thanks for your wonderful work! Does it work for animal (e.g., dog or cat) face alignment? I mean, keeping the same training method but changing the training data. What's your advice?
Can you explain how the .mat files generated by visualize.py are structured? I noticed that the meshes are 3x53215. I tried to plot those meshes in MATLAB but didn't see any face models.
First, thanks for your great work on 3D face alignment. I have a problem with the rendered 3D face results. For example, I ran render_demo.m under the 'visualize' folder, where 'test1_0.mat' and 'test1_1.mat' were created by running 'python3 main.py -f samples/test1.jpg'. The rendering result is mirrored about the x-axis compared with the provided 'test1_0.jpg' and 'test1_1.jpg', which means the face is inverted. I get the correct result if I replace vertex = load('test1_0'); with vertex = load('image00427');, which uses the 'image00427.mat' you provided. Why do I get this problem? I look forward to your reply.
More openness, more applications.
Hello cleardusk,
Could you please help me with this error?
I cannot find this file anywhere: "params_aflw2000.npy"
Thank you,
Khanh
Hello. First of all, thank you for sharing this work. How can we take the colors from the image and create a 2D image based on those colors and the set of vertices? Thank you.
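For anyone with the same question, per-vertex colors can usually be picked up from the source image once the vertices are projected into image coordinates. A minimal nearest-pixel sketch (sample_vertex_colors is a hypothetical helper, not part of this repo):

```python
import numpy as np

# Given vertices already projected to image coordinates (3 x N) and the source
# image (H x W x 3), sample one color per vertex from the nearest pixel.
def sample_vertex_colors(image, vertices):
    h, w = image.shape[:2]
    x = np.clip(np.round(vertices[0]).astype(int), 0, w - 1)
    y = np.clip(np.round(vertices[1]).astype(int), 0, h - 1)
    return image[y, x]  # N x 3 colors, one per vertex

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1, 2] = (255, 0, 0)                  # a single red pixel at (row=1, col=2)
verts = np.array([[2.2], [1.1], [0.0]])  # one vertex near that pixel
print(sample_vertex_colors(img, verts))  # [[255   0   0]]
```

With per-vertex colors in hand, rendering them back into a 2D image is then a rasterization step over the mesh triangles.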
I saw that you credited PRNet in this repo. Can I know if you have any idea which is more accurate? Thanks.
Hi, @cleardusk
Thank you for your code.
I am confused about the values of these two variables, offset_norm and magic_number. Would you please tell me how you selected them?
Your code:
def matrix2angle(R):
    ''' compute three Euler angles from a Rotation Matrix. Ref: http://www.gregslabaugh.net/publications/euler.pdf
    Args:
        R: (3,3). rotation matrix
    Returns:
        x: yaw
        y: pitch
        z: roll
    '''
    # assert(isRotationMatrix(R))
    if R[2, 0] != 1 or R[2, 0] != -1:
        x = asin(R[2, 0])
        y = atan2(R[2, 1] / cos(x), R[2, 2] / cos(x))
        z = atan2(R[1, 0] / cos(x), R[0, 0] / cos(x))
    else:  # Gimbal lock
        z = 0  # can be anything
        if R[2, 0] == -1:
            x = np.pi / 2
            y = z + atan2(R[0, 1], R[0, 2])
        else:
            x = -np.pi / 2
            y = -z + atan2(-R[0, 1], -R[0, 2])
    return x, y, z
Reference:
Hi, I checked your estimate_pose.py code against the reference http://www.gregslabaugh.net/publications/euler.pdf
and it seems that x is supposed to be x = -asin(R[2, 0]).
Is this a bug? Or is there a special reason to remove the negative sign? If so, can you explain why the sign is removed? Thanks!
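The sign question can be checked numerically. A standalone sketch (not the repo's code) under the convention used in Slabaugh's note, R = Rz(phi) · Ry(theta) · Rx(psi), where R[2, 0] = -sin(theta); the angle values are arbitrary:

```python
import numpy as np
from math import asin, cos, sin

psi, theta, phi = 0.1, 0.2, 0.3  # rotations about x, y, z

Rx = np.array([[1, 0, 0],
               [0, cos(psi), -sin(psi)],
               [0, sin(psi), cos(psi)]])
Ry = np.array([[cos(theta), 0, sin(theta)],
               [0, 1, 0],
               [-sin(theta), 0, cos(theta)]])
Rz = np.array([[cos(phi), -sin(phi), 0],
               [sin(phi), cos(phi), 0],
               [0, 0, 1]])
R = Rz @ Ry @ Rx

print(-asin(R[2, 0]))  # 0.2 -- the signed variant recovers theta
print(asin(R[2, 0]))   # -0.2
```

Under this particular convention the negative sign does matter; whether the repo's variant is a deliberate change of convention is for the author to confirm.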
Hi,
I tried the following code to get the rotation matrix, check whether it is a valid rotation matrix, and if so, get the three rotation angles from it. Surprisingly, the isRotationMatrix
that you introduced in one of the issues returns False. Here's my code:
annotations = np.load('path/to/param_all_norm.pkl')
pose_params = annotations[0][-12:]
pose_params = pose_params.reshape((3, 4))
print(isRotationMatrix(pose_params[:, 0:3]))
do you have any idea?
Thanks!
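For anyone debugging the same thing, here is a minimal version of what such a check typically tests (is_rotation_matrix and the tolerance are illustrative, not the repo's exact function). Note that a scaled matrix s·R fails it, and the stored pose block presumably folds a scale factor (and dataset normalization) into the 3x3 part, so a raw slice would not pass:

```python
import numpy as np

# A proper rotation matrix is orthogonal (R @ R.T == I) with det(R) == +1.
def is_rotation_matrix(R, tol=1e-6):
    return (np.linalg.norm(np.eye(3) - R @ R.T) < tol
            and abs(np.linalg.det(R) - 1.0) < tol)

theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
print(is_rotation_matrix(Rz))        # True for a pure rotation
print(is_rotation_matrix(2.0 * Rz))  # False for a scaled rotation s * R
```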
Hello, could you tell me the meaning of the parameter 's2d' in the code of your paper Face Alignment Across Large Poses: A 3D Solution? What does it stand for?
Thank you!
Thank you very much for your work!
Actually, I ran your code successfully; the structure of the code is extremely clear.
However, when I use MeshLab to visualize the .obj and .ply files, I find the rendering result confusing:
it seems that the 3D model (.ply files) is only partially normal.
Did I do something wrong? Or should I try another visualization tool?
By the way, don't you think the 3D models (.obj files) all look similar?
Thank you very much!
Hi jianzhu,
I am a student at Zhejiang University. I have successfully run your code, but I can't generate the pictures with the 3D mesh like imgs/vertex_3d.jpg by running visualize.py.
It would be greatly appreciated if you could tell me where the code for offscreen 3D rendering that generates imgs/vertex_3d.jpg is!
Thank you very much!
Best wishes,
Minda.
Thank you for sharing the work. Is there a way to get the pose of the face from the vertices or something similar? I want to frontalize the face. Thank you.
Line 37 in 13173be
Thanks for your nice work! One question: what does the '@' mean?
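For context, in Python 3.5+ the '@' is the matrix-multiplication operator (PEP 465); for NumPy arrays it behaves like np.matmul. A quick check:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

# a @ b is equivalent to np.matmul(a, b)
print(a @ b)  # [[19 22]
              #  [43 50]]
```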
Is there a way to get the 3D face without the neck? Or should I train the model again to do this? If so, can you give us a hand?
Thank you very much
How can I get the estimated pose information from the Python code? I noticed that some .mat files inside the 300W-3D folder have a Pose_Param variable, which contains the pose information, but the .mat files generated by visualize.py do not provide it.
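For what it's worth, pose is commonly recovered from the fitted 3x4 camera matrix rather than from the vertices. A sketch assuming the usual weak-perspective model P = [s·R | t] (decompose_affine is a hypothetical helper, not this repo's API):

```python
import numpy as np

def decompose_affine(P):
    t = P[:, 3]
    R1, R2 = P[0, :3], P[1, :3]
    s = (np.linalg.norm(R1) + np.linalg.norm(R2)) / 2  # average row scale
    r1 = R1 / np.linalg.norm(R1)
    r2 = R2 / np.linalg.norm(R2)
    r3 = np.cross(r1, r2)                # complete a right-handed basis
    return s, np.stack([r1, r2, r3]), t  # scale, rotation, translation

# Round-trip check with a known pose (identity rotation, scale 2):
P = np.hstack([2.0 * np.eye(3), np.array([[1.0], [2.0], [3.0]])])
s, R, t = decompose_affine(P)
print(s, t)  # 2.0 [1. 2. 3.]
```

Euler angles can then be read off the recovered rotation matrix, e.g. with a matrix2angle-style function.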
Hello author,
I tried your code for 3D face landmarks on a live camera, using MTCNN for face detection and your face-crop method, and I find the eye landmarks do not blink (while I am blinking).
Is this correct?
Hi,
In your documentation, under Application -> 2. Face Reconstruction, there is a 3D model to which the texture is then added. Is this also addressed in your code?
What I understand from the .ply file is that it only contains shape, not texture or lighting. Right?
Thanks again for your wonderful work!
Hi, thank you for sharing your work.
I read the TPAMI paper, but I didn't find the two-stream architecture in this repository. The paper proposes Pose Adaptive Convolution (PAC) and Projected Normalized Coordinate Code (PNCC), which are not contained in this repository either.
From main.py, it just crops the face region in the image and feeds it to MobileNet. I wonder why this simple regression strategy can produce the promising results you provide. Does data augmentation matter?
Hey @cleardusk
I have a question regarding the Gif image obama_three_styles.gif
,
Can you please explain how you generated the two style overlays in this GIF? I understood the dlib facial points. I am kind of new to this topic, but I couldn't grasp how you are generating the first two styles. Are they the depth map and PNCC rendered in different colors?
Thanks in advance
Hi cleardusk:
I found that the cropped images of AFLW are not resized to 120x120 directly from the original image, so how do you generate them?
thank you very much!
Forest
Dear author, regarding 3D face reconstruction, I see that your main.py can only generate the vertex list; could it generate the texture list as well?
How do I get a 3D face from a given 2D face image? Thank you.
Dear Sir,
Thank you for sharing your code!
1. As I read your paper, the input data includes not only the raw image but also the PNCC and PAC data. My question is: will you update the script to generate the PNCC and PAC data? If not, can you give me some tips on how to generate them?
2. You also did some face profiling. Can you give me some tips on how to do that?
After building the Cython module, I get the error "ImportError: cannot import name 'mesh_core_cython'".
What can I do?
Thank you for sharing your code, but I get an error like this:
map_location = {f'cuda:{i}': 'cuda:0' for i in range(8)}
                ^
SyntaxError: invalid syntax
Would you please tell me what I should do? Thank you.
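A likely cause (assuming the interpreter predates Python 3.6) is that f-strings are not supported, which produces exactly this SyntaxError. A version-agnostic equivalent using str.format:

```python
# f-strings (f'...') require Python >= 3.6; this builds the same mapping on
# older interpreters as well:
map_location = {'cuda:{}'.format(i): 'cuda:0' for i in range(8)}
print(map_location['cuda:3'])  # cuda:0
```

Alternatively, upgrading to Python 3.6+ lets the original line run unchanged.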
Would you happen to have any code that you can share to show the exact functions for rendering the 3D face meshes?
Originally posted by @solashirai in #12 (comment)
Thanks for your amazing work again! One question: what is the meaning of '0.14' and '1.58', as shown below? Are they determined by the datasets or by experience? Thank you~
Lines 86 to 87 in 25f7467
Hello
I am a raw recruit in face alignment. I see that the output of the MobileNet-1 model in the code is 62-dimensional, and I notice this is the first-stage pre-trained model. I guess I'm missing some crucial points, but what I want to know is: if I want to estimate the angles of a face image using this model, what should I focus on?
Thank you!
python main.py -f samples/test1.jpg
Traceback (most recent call last):
File "/home/ww/3DDFA/main.py", line 205, in <module>
main(args)
File "/home/ww/3DDFA/main.py", line 45, in main
model.load_state_dict(model_dict)
File "/home/ww/anaconda3/envs/Stacked_Hourglass_Network_Keras/lib/python3.6/site-packages/torch/nn/modules/module.py", line 721, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for MobileNet:
Unexpected key(s) in state_dict: "bn1.num_batches_tracked", "dw2_1.bn_dw.num_batches_tracked", "dw2_1.bn_sep.num_batches_tracked", "dw2_2.bn_dw.num_batches_tracked", "dw2_2.bn_sep.num_batches_tracked", "dw3_1.bn_dw.num_batches_tracked", "dw3_1.bn_sep.num_batches_tracked", "dw3_2.bn_dw.num_batches_tracked", "dw3_2.bn_sep.num_batches_tracked", "dw4_1.bn_dw.num_batches_tracked", "dw4_1.bn_sep.num_batches_tracked", "dw4_2.bn_dw.num_batches_tracked", "dw4_2.bn_sep.num_batches_tracked", "dw5_1.bn_dw.num_batches_tracked", "dw5_1.bn_sep.num_batches_tracked", "dw5_2.bn_dw.num_batches_tracked", "dw5_2.bn_sep.num_batches_tracked", "dw5_3.bn_dw.num_batches_tracked", "dw5_3.bn_sep.num_batches_tracked", "dw5_4.bn_dw.num_batches_tracked", "dw5_4.bn_sep.num_batches_tracked", "dw5_5.bn_dw.num_batches_tracked", "dw5_5.bn_sep.num_batches_tracked", "dw5_6.bn_dw.num_batches_tracked", "dw5_6.bn_sep.num_batches_tracked", "dw6.bn_dw.num_batches_tracked", "dw6.bn_sep.num_batches_tracked".
Can you give me some advice? Maybe 'phase1_wpdc_vdc_v2.pth.tar' has mismatched parameters?
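For context, 'num_batches_tracked' buffers were added to BatchNorm in PyTorch 0.4.1, so a checkpoint saved with a newer PyTorch can carry keys an older model definition does not expect. One common workaround is to drop them before loading (sketched here with a toy dict; in practice the state dict would come from torch.load):

```python
# Toy stand-in for a loaded checkpoint's state dict:
state_dict = {'bn1.weight': 1.0, 'bn1.bias': 0.0, 'bn1.num_batches_tracked': 3}

# Drop the buffers the older model definition does not know about:
filtered = {k: v for k, v in state_dict.items()
            if not k.endswith('num_batches_tracked')}
print(sorted(filtered))  # ['bn1.bias', 'bn1.weight']

# model.load_state_dict(filtered)
# ...or equivalently: model.load_state_dict(state_dict, strict=False)
```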
Hi, cleardusk,
Thank you for sharing your code!
The 3D face shape often contains the neck part. How can it be removed? Thanks in advance.
Hi cleardusk, I wrote a script which uses dlib to detect the face and then feeds the cropped face into the trained network. The obtained results are quite poor and I cannot find the reason; maybe I missed some key steps. Could you kindly write a demo that can be tested on real-world videos? Thanks.
Hello, thank you for this work. Is there a way to also publish the neck-removed version of the BFM data on Google Drive? It is quite hard for me to download it from pan.baidu.
Thank you
Dear Sir:
Currently, the training data and benchmark datasets are cropped based on labeled facial landmarks.
During inference, when we don't have landmarks, we need a face bounding box to crop the face. To apply the same preprocessing at training and inference time, the training data would need to be regenerated according to the face-detection results. Is it possible to share the current training-data generation scripts?
Thank you very much.
About WPDC: why do you set weight[0:12] to zero?
Also, shouldn't the 3DMM parameters be 235-dimensional? They are 62-dimensional in your project.
And the WPDC loss uses many '.npy' files. What are these files?
In the requested environment, I ran into the problem 'Segmentation fault (core dumped)'.
Do you know the reason?
Hello @cleardusk, thanks for your fantastic reproduction.
I am going to compare my own method with 3DDFA on another dataset. However, I'm not sure about the data pre-processing, especially the pose and shape parameters. For example, the official AFLW data is unnormalized, and the vector length is longer than yours. Could you briefly introduce the normalization method and how you chose the dimension of the parameters?
Thanks!
Hi, is it possible to get the complete human head, including hair and neck, from a given 2D image with this code?
Thanks.
Hi, jianzhu,
I found that if I do not use the dlib landmarks but use other rectangles, the 3DDFA result is not so good.
Is an roi_img containing the face not sufficient?
What additional requirements should roi_img meet?
Thank you for your reply!
Dear Sir,
Thank you for this amazing work !
My question is :
If we have another 3DMM, how can we redo the work to create another 300W dataset with the new 3DMM parameters? Which steps should we take?
Thank you.
Hi,
thank you for your contribution on this topic.
I am curious to understand how this improved implementation performs w.r.t. the FAN approach from the work "How Far Are We from Solving the 2D & 3D Face Alignment Problem? (and a Dataset of 230,000 3D Facial Landmarks)".
thank you.
Can you describe the parameter format? I noticed that the param_all_norm.pkl file contains 62 extracted parameters for each image in your dataset. Using the Face Profiling code (http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/HPEN/main.htm) I generated a dataset containing my own profiles, but I don't understand how to create the pkl file from the Face Profiling output variables.
Sorry to bother you, but I can't find the file 'param_all_norm.pkl' in your code. Could you tell me whether I should run some code or download some files?