yadiraf / prnet

Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network (ECCV 2018)

Home Page: http://openaccess.thecvf.com/content_ECCV_2018/papers/Yao_Feng_Joint_3D_Face_ECCV_2018_paper.pdf

License: MIT License

Language: Python 100.00%
Topics: 3d, face, reconstruction, alignment, swap

prnet's Introduction

Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network

This is the official Python implementation of PRN.

PRN is a method to jointly regress dense alignment and 3D face shape in an end-to-end manner. More examples on Multi-PIE and 300VW can be seen on YouTube.

The main features are:

  • End-to-end: the method directly regresses the 3D facial structure and dense alignment from a single image, bypassing 3DMM fitting.

  • Multi-task: by regressing a position map, the 3D geometry is obtained together with its semantic meaning, so tasks such as dense alignment, monocular 3D face reconstruction, and pose estimation come essentially for free (see the sketch after this list).

  • Faster than real time: the method runs at over 100 fps (on a GTX 1080) when regressing a position map.

  • Robust: tested on facial images in unconstrained conditions; the method is robust to pose, illumination, and occlusion.
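
A position map is a 2D image (256×256×3 in this repo) whose pixel at UV coordinate (u, v) stores the (x, y, z) position of one fixed facial point, so geometry and semantics come out together. A minimal sketch of reading vertices and landmarks out of a position map, assuming the index files shipped in Data/uv-data:

    import numpy as np

    # Index files assumed to follow the layout of Data/uv-data in this repo.
    uv_kpt_ind = np.loadtxt('Data/uv-data/uv_kpt_ind.txt').astype(np.int32)  # 2 x 68 UV indices
    face_ind = np.loadtxt('Data/uv-data/face_ind.txt').astype(np.int32)      # flat indices of valid face pixels

    def posmap_to_vertices(pos):
        """Gather the dense face vertices from a 256x256x3 position map."""
        return pos.reshape(-1, 3)[face_ind, :]

    def posmap_to_landmarks(pos):
        """Read the 68 keypoints from their fixed UV locations."""
        return pos[uv_kpt_ind[1, :], uv_kpt_ind[0, :], :]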

Applications

Basics (evaluated in the paper)

  • Face Alignment

Dense alignment of both visible and non-visible points (including the 68 key points), along with per-point visibility (1 for visible, 0 for non-visible).

[image: dense alignment demo]

  • 3D Face Reconstruction

Get the 3D vertices and corresponding colors from a single image, and save the result as mesh data (.obj) that can be opened with MeshLab or Microsoft 3D Builder. Note that the texture of non-visible areas is distorted due to self-occlusion.

New:

  1. You can choose to output the mesh with its original pose (default) or with a front view (meaning all output meshes are aligned).
  2. The .obj file can now also be written with a texture map (of a specified size), and non-visible texture can be set to 0.

[image: 3D face reconstruction demo]
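
For reference, a minimal sketch of the colored .obj format used for the saved meshes (the function below is illustrative, not the repo's utils/write.py API):

    def write_colored_obj(path, vertices, colors, triangles):
        """vertices: (N,3) float; colors: (N,3) float in [0,1]; triangles: (M,3) int."""
        with open(path, 'w') as f:
            for v, c in zip(vertices, colors):
                # "v x y z r g b": position plus per-vertex color (a MeshLab-supported extension)
                f.write('v %f %f %f %f %f %f\n' % (v[0], v[1], v[2], c[0], c[1], c[2]))
            for t in triangles:
                # .obj face indices are 1-based
                f.write('f %d %d %d\n' % (t[0] + 1, t[1] + 1, t[2] + 1))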

More (to be added)

  • 3D Pose Estimation

    Rather than using only 68 key points to calculate the camera matrix (which is easily affected by expression and pose), we use all vertices (more than 40K) to calculate a more accurate pose (a sketch of one way to do this follows below).

    [image: pose estimation demo]
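
A sketch of how such a pose can be estimated from dense correspondences via orthogonal Procrustes (Kabsch); the canonical frontal mesh and the fitting details here are assumptions, not necessarily the repo's exact implementation:

    import numpy as np

    def estimate_pose(canonical, regressed):
        """Estimate a similarity transform: regressed ~ s * R @ canonical + t.

        canonical: (N,3) reference frontal mesh; regressed: (N,3) with the
        same vertex order (e.g. from the position map).
        """
        mu_c, mu_r = canonical.mean(0), regressed.mean(0)
        c, r = canonical - mu_c, regressed - mu_r
        U, S, Vt = np.linalg.svd(c.T @ r)            # SVD of the 3x3 cross-covariance
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
        D = np.diag([1.0, 1.0, d])
        R = Vt.T @ D @ U.T
        s = (S * np.diag(D)).sum() / (c ** 2).sum()  # optimal uniform scale
        t = mu_r - s * (R @ mu_c)
        return R, s, t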

  • Depth image

    [image: depth image demo]
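
A crude way to approximate such a depth map is to splat the projected vertices' z values into the image plane (the repo renders it properly from triangles; this simplified sketch also assumes larger z means closer to the camera):

    import numpy as np

    def depth_image(vertices, h, w):
        """vertices: (N,3) with x, y already in image coordinates."""
        depth = np.zeros((h, w), dtype=np.float32)
        x = np.clip(vertices[:, 0].astype(np.int32), 0, w - 1)
        y = np.clip(vertices[:, 1].astype(np.int32), 0, h - 1)
        z = vertices[:, 2]
        z = (z - z.min()) / (z.max() - z.min() + 1e-8)   # normalize depth to [0, 1]
        np.maximum.at(depth, (y, x), z)                  # keep the nearest surface per pixel
        return depth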

  • Texture Editing

    • Data Augmentation/Selfie Editing

      modify specific parts of the input face, e.g. the eyes:

      [image: eye-editing demo]

    • Face Swapping

      replace the texture with another face's, warp it back to the original pose, and use Poisson editing to blend the images (see the sketch below).

      [image: face swapping demo]
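
The Poisson blending step maps to OpenCV's seamlessClone, available in OpenCV 3.x and later. A sketch that assumes the swapped face has already been warped into the target image:

    import cv2
    import numpy as np

    def blend(rendered, original, mask):
        """All inputs float in [0, 1]; mask is a single-channel face region."""
        mask_u8 = (mask * 255).astype(np.uint8)
        ys, xs = np.nonzero(mask_u8)
        center = (int(xs.mean()), int(ys.mean()))   # anchor point of the cloned region
        return cv2.seamlessClone((rendered * 255).astype(np.uint8),
                                 (original * 255).astype(np.uint8),
                                 mask_u8, center, cv2.NORMAL_CLONE)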

Getting Started

Prerequisite

  • Python 2.7 (numpy, skimage, scipy)

  • TensorFlow >= 1.4

    Optional:

  • dlib (for face detection; you do not need to install it if you can provide bounding box information)

  • opencv2 (for showing results)

A GPU is highly recommended. The run time is ~0.01 s with a GPU (GeForce GTX 1080) and ~0.2 s with a CPU (Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz).

Usage

  1. Clone the repository

    git clone https://github.com/YadiraF/PRNet
    cd PRNet

  2. Download the PRN trained model at BaiduDrive or GoogleDrive, and put it into Data/net-data

  3. Run the test code (runs on the AFLW2000 test images):

    python run_basics.py  # can run with only python and tensorflow

  4. Run with your own images:

    python demo.py -i <inputDir> -o <outputDir> --isDlib True

    Run python demo.py --help for more details.

  5. For the texture editing apps:

    python demo_texture.py -i image_path_1 -r image_path_2 -o output_path

    Run python demo_texture.py --help for more details.
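
The demo scripts wrap a small Python API in api.py. A minimal sketch of using it directly (class and method names match the repo at the time of writing; treat the exact signatures as assumptions):

    from skimage.io import imread
    from api import PRN

    prn = PRN(is_dlib=True)                # dlib supplies the face bounding box
    image = imread('TestImages/0.jpg')
    pos = prn.process(image)               # regress the 256x256x3 position map
    if pos is not None:
        vertices = prn.get_vertices(pos)   # dense 3D vertices
        kpt = prn.get_landmarks(pos)       # 68 3D keypoints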

Training

The core idea of the paper is:

Use a position map to represent face geometry and alignment information, then learn it with an encoder-decoder network.

So, the training steps are:

  1. Generate the position-map ground truth.

    An example of generating position maps for the 300W_LP dataset can be seen in generate_posmap_300WLP.

  2. Train an encoder-decoder network to learn the mapping from RGB image to position map.

    The weight mask can be found in the folder Data/uv-data (a loss sketch follows this list).
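
A sketch of the weighted MSE loss from the paper, written for TensorFlow 1.x to match the repo. The paper weights the sub-regions roughly 16:4:3:0 (68 keypoints : eyes/nose/mouth : rest of face : background); the mask filename below is an assumption about the files in Data/uv-data:

    import numpy as np
    import tensorflow as tf
    from skimage.io import imread

    # Precomputed per-pixel weights; broadcasts over batch and xyz channels.
    weight_mask = imread('Data/uv-data/uv_weight_mask.png').astype(np.float32) / 255.
    weight_mask = weight_mask[np.newaxis, :, :, np.newaxis]

    def weighted_posmap_loss(pos_pred, pos_gt):
        """pos_pred, pos_gt: (batch, 256, 256, 3) position maps."""
        return tf.reduce_mean(tf.square(pos_pred - pos_gt) * weight_mask)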

What you can customize:

  1. The UV space of the position map.

    You can change the parameterization method, or change the resolution of the UV space.

  2. The backbone of the encoder-decoder network.

    This demo uses residual blocks; VGG or MobileNet would also work.

  3. The weight mask.

    You can change the weights to focus on whichever parts your project needs most.

  4. The training data.

    If you have scanned 3D faces, it is better to train PRN with your own data. Before that, you may need to use ICP to align your face meshes.

FAQ

  1. How to speed it up?

    a. Network inference

    You can train a smaller network or use a smaller position map as input.

    b. Rendering

    You can refer to the C++ version.

    c. Other parts, such as face detection and writing .obj files

    The best way is to rewrite them in C++.

  2. How to improve the precision?

    a. Geometry precision

    Due to the restrictions of the training data, faces reconstructed by this demo have little fine detail. You can train the network with your own detailed data, or add details in post-processing, e.g. with shape-from-shading.

    b. Texture precision

    An option was added to specify the texture size. When the texture size is larger than the face region in the original image, rendering a new facial image with texture mapping introduces only a small resampling error.
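
A sketch of sampling the UV texture at a configurable resolution, modeled on the cv2.remap call in api.py; upscaling the position map's xy channels to the texture resolution before remapping is an assumption here:

    import cv2
    import numpy as np

    def get_texture(image, pos, texture_size=256):
        """image: HxWx3 float; pos: 256x256x3 position map (xy in image coordinates)."""
        coords = pos[:, :, :2].astype(np.float32)
        if texture_size != pos.shape[0]:
            coords = cv2.resize(coords, (texture_size, texture_size))
        return cv2.remap(image, coords, None, interpolation=cv2.INTER_LINEAR,
                         borderMode=cv2.BORDER_CONSTANT, borderValue=0)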

Changelog

  • 2018/7/19 add training part. can specify the resolution of the texture map.
  • 2018/5/10 add texture editing examples(for data augmentation, face swapping)
  • 2018/4/28 add visibility of vertices, output obj file with texture map, depth image
  • 2018/4/26 can output mesh with front view
  • 2018/3/28 add pose estimation
  • 2018/3/12 first release(3d reconstruction and dense alignment)

License

Code: under MIT license.

Trained model file: please see issue 28; thanks to Kyle McDonald for his answer.

Citation

If you use this code, please consider citing:

@inProceedings{feng2018prn,
  title     = {Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network},
  author    = {Yao Feng and Fan Wu and Xiaohu Shao and Yanfeng Wang and Xi Zhou},
  booktitle = {ECCV},
  year      = {2018}
}

Contacts

Please contact [email protected] or open an issue for any questions or suggestions.

Thanks! (●'◡'●)

Acknowledgements

prnet's People

Contributors

andili99, ksachdeva, scott-vsi, yfeng95


prnet's Issues

printing the dense alignment images in readme

Very cool project. This request may be similar to issue 10. Is there a way to print the dense alignment images (yellow + blue = green) shown in the readme? Is Matlab (which is not free) required?

"Simply use:
imshow(image);
hold on;
pcshow(vertices);
view(2);"

Also (btw), I have seen Haar cascades combined with dlib for facial-detection speedups:

import cv2
import dlib

path = "classifiers/haarcascade_frontalface_default.xml"
# detector = dlib.get_frontal_face_detector()
detector = cv2.CascadeClassifier(path)
predictor = dlib.shape_predictor(args["shape_predictor"])  # args parsed elsewhere via argparse
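
A rough Python/matplotlib equivalent of the quoted Matlab snippet, so Matlab is not required (variable contents are assumptions; vertices would come from prn.get_vertices(pos)):

    import matplotlib.pyplot as plt

    def show_dense_alignment(image, vertices):
        """image: HxWx3; vertices: (N,3) in image coordinates, z used for color."""
        plt.imshow(image)
        plt.scatter(vertices[:, 0], vertices[:, 1], s=0.5, c=vertices[:, 2])
        plt.axis('off')
        plt.show()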

[attached screenshots and GIF of dense alignment results]

ImportError: No module named 'render'

Hi,
Has anyone had this error?
Traceback (most recent call last):
  File "demo.py", line 15, in <module>
    from utils.render_app import get_visibility, get_uv_mask, get_depth_image
  File "C:\Users\PRNet-master\utils\render_app.py", line 2, in <module>
    from render import vis_of_vertices, render_texture
ImportError: No module named 'render'

Using Python 3 (I have done the fixes to "w"; also, python run_basics.py runs fine).

Thanks in advance

FileNotFoundError when running run_basics.py and no output

I haven't installed dlib and opencv2 yet; I don't know if that's the issue.
This is the output:

2018-07-05 23:05:51.604383: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7715
pciBusID: 0000:01:00.0
totalMemory: 6.00GiB freeMemory: 4.97GiB
2018-07-05 23:05:51.612427: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-07-05 23:05:52.415988: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-05 23:05:52.420490: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929]      0
2018-07-05 23:05:52.423549: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0:   N
2018-07-05 23:05:52.427180: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4740 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
Traceback (most recent call last):
  File "run_basics.py", line 51, in <module>
    np.savetxt(os.path.join(save_folder, name + '.txt'), kpt)
  File "C:\Users\fede\env\PRnet\lib\site-packages\numpy\lib\npyio.py", line 1307, in savetxt
    open(fname, 'wt').close()
FileNotFoundError: [Errno 2] No such file or directory: 'TestImages/AFLW2000_results\\AFLW2000\\image00050.txt'

Also there are no images in the \TestImages\AFLW2000_results folder.

CV2 seamlessClone error in demo_texture.py

I am running opencv 2.4.11 in a conda environment. The basic demo works fine. I just get this error when I run the latest demo_texture.py :

Traceback (most recent call last):
  File "demo_texture.py", line 101, in <module>
    texture_editing(prn, parser.parse_args())
  File "demo_texture.py", line 76, in texture_editing
    output = cv2.seamlessClone((new_image*255).astype(np.uint8), (image*255).astype(np.uint8), (face_mask*255).astype(np.uint8), center, cv2.NORMAL_CLONE)
AttributeError: 'module' object has no attribute 'seamlessClone'

Query regarding dimensions

Hi,
This is great stuff. I have been able to run the code. I was curious to know if you have done any work on deriving the actual dimensions of the face, e.g. the actual distance between the eyes, or between an eye and the nose.

Do let me know or kindly give me some pointers. I am assuming that this derivation is possible since the depth maps have been generated. Any help will be greatly appreciated.

Kind regards

Run Basics output?

I've never run or used code like this before...using it for a professor's research. When I run run_basics I get
python run_basics.py
/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
2018-06-04 09:39:11.547895: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-06-04 09:39:11.711976: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.86
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 7.50GiB
2018-06-04 09:39:11.712004: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0
2018-06-04 09:39:11.911343: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7248 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
Is this the correct output, or should I be getting images or something?

How to get the z-value map?

Can you tell me how to get the face depth map? I want to generate that map, but I don't know how to do it. Thank you very much.

A problem about the keypoints output

Hello! Thanks for the code!
I am a little confused about the meaning of the output format. In fact, I am not clear about which parts of the face these keypoints represent.
Thank you!

texture demo issue

Hi,

thanks for the great project!
I have some trouble getting the texture demo working.

I get:

  File "demo_texture.py", line 101, in <module>
    texture_editing(prn, parser.parse_args())
  File "demo_texture.py", line 67, in texture_editing
    new_colors = prn.get_colors_from_texture(new_texture)
AttributeError: PRN instance has no attribute 'get_colors_from_texture'

Reconstruction Precision Problem

Hi,

Thanks for your code, and you did a great job!

When I reproduced your work, I found that the model trained by me is not as precise as yours. For example, see the comparison figure below.

[image: comparison of reconstruction results]

The left is the output of your model, and the right is mine.

I find that the facial contour/texture details of the model I trained are very imprecise. Did you use any other tricks to train the model? Are there any other ways to improve performance besides those mentioned in the paper?

I look forward to your reply very much. Thank you.

Regarding the license of the model file and dataset

hi YadiraF,
I have gone through the repository and found that the license used in this repository is MIT.
1. Is the model file also covered under the MIT license?
2. To your knowledge, what is the licensing of the datasets you used?

A question about applications

Your work is very nice; I have also studied eos and 3DMM.
1. Expression synchronization
I am thinking of using multiple images to correct one person's 3D model: for example, fit a 3D model from a frontal image, then refine it with profile images, and finally obtain a better estimate of that person's 3D model (although eos already supports multi-image processing).
First estimate an accurate, expressionless 3D model of the person; then, with that model as the basis, estimate the later frames and obtain the expression-variation parameters.
Then apply those parameters to other 3D faces.
I hope we can discuss this in the future.

Issue with versions, I guess

Hi, is it running on Python 3?
The regular demo was working, I guess, but then I get the following issue while trying the demo with my own image, and no results are stored:

RuntimeError: module compiled against API version 0xc but this version of numpy is 0xa

How to generate UV position map from 300W-LP dataset.

Hi @YadiraF: Firstly, I want to say I read your paper and your work is really amazing, especially the speed. I have also played with 300W-LP for a while and would like to know if you can provide the code for generating the UV position map. In your paper, you said you aligned the 3D face point cloud with the 2D image; I'm curious how you did it. Thank you very much.

Training Code

Hi YadiraF:
Will the training source code be released? If so, when?

Thank you very much!

run_basics issue

Hi,

When running the 'run_basics.py' file, I get the following error:

tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ././Data/net-data/256_256_resfcn256_weight
[[Node: save/RestoreV2_152 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_152/tensor_names, save/RestoreV2_152/shape_and_slices)]]
[[Node: save/RestoreV2_182/_303 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_796_save/RestoreV2_182", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "run_basics.py", line 13, in
prn = PRN(is_dlib = False)
File "/home/ywang/PRNet/api.py", line 34, in init
self.pos_predictor.restore('./'+prn_path)
File "/home/ywang/PRNet/predictor.py", line 94, in restore
tf.train.Saver(self.network.vars).restore(self.sess, model_path)
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1666, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 889, in run
run_metadata_ptr)
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
options, run_metadata)
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ././Data/net-data/256_256_resfcn256_weight
[[Node: save/RestoreV2_152 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_152/tensor_names, save/RestoreV2_152/shape_and_slices)]]
[[Node: save/RestoreV2_182/_303 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_796_save/RestoreV2_182", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

Caused by op 'save/RestoreV2_152', defined at:
File "run_basics.py", line 13, in
prn = PRN(is_dlib = False)
File "/home/ywang/PRNet/api.py", line 34, in init
self.pos_predictor.restore('./'+prn_path)
File "/home/ywang/PRNet/predictor.py", line 94, in restore
tf.train.Saver(self.network.vars).restore(self.sess, model_path)
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1218, in init
self.build()
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1227, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1263, in _build
build_save=build_save, build_restore=build_restore)
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 751, in _build_internal
restore_sequentially, reshape)
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 427, in _AddRestoreOps
tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 267, in restore_op
[spec.tensor.dtype])[0])
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1021, in restore_v2
shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/home/ywang/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1470, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

NotFoundError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ././Data/net-data/256_256_resfcn256_weight
[[Node: save/RestoreV2_152 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_152/tensor_names, save/RestoreV2_152/shape_and_slices)]]
[[Node: save/RestoreV2_182/_303 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_796_save/RestoreV2_182", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

please help me

Compatibility with the Basel Face Model

Hi,

I have been trying to implement the training of this model by hand on my own. I am running into issues regarding the compatibility of your UV map with the data in 300W_LP that you say you trained on. In particular, the models in 300W_LP have ~10k more vertices than your UV map describes. The particular correspondence doesn't seem clear to me. Can you state which subset of these vertices you are referencing? I would really appreciate it!

Thanks for a great paper.

python 3 compatibility

it would be great to make this compatible with python 3!

i mentioned a problem in #21 but wanted to start a fresh issue here.

$ python run_basics.py 
2018-05-01 12:16:47.603620: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1349] Found device 0 with properties: 
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.6575
pciBusID: 0000:05:00.0
totalMemory: 10.91GiB freeMemory: 8.34GiB
2018-05-01 12:16:47.603651: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1428] Adding visible gpu devices: 0
2018-05-01 12:16:47.806106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-05-01 12:16:47.806140: I tensorflow/core/common_runtime/gpu/gpu_device.cc:922]      0 
2018-05-01 12:16:47.806148: I tensorflow/core/common_runtime/gpu/gpu_device.cc:935] 0:   N 
2018-05-01 12:16:47.806319: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1046] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8060 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1)
2018-05-01 12:16:47.807380: I tensorflow/core/common_runtime/process_util.cc:64] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Traceback (most recent call last):
  File "run_basics.py", line 52, in <module>
    write_obj(os.path.join(save_folder, name + '.obj'), vertices, colors, prn.triangles) #save 3d face(can open with meshlab)
  File "/home/kyle/Documents/PRNet/utils/write.py", line 37, in write_obj
    f.write(s)
TypeError: a bytes-like object is required, not 'str'

3D object alignment with source photo

I'm having some trouble getting the OBJ mesh objects to align with the original photo. It seems like the horizontal axis is swapped (although it depends on what software you view it in). Is there any way to control the axis orientation (or do you know what the default is for the libraries you're using)?

Related issue is how you created the 3D animated object you use in your demo video. I know this isn't strictly a coding issue, but I'm curious how you handle a sequence of OBJ files to turn them into an animation.

Realtime issue

Hi!
thanks for the great project!
I have a problem: how can I do face reconstruction or face alignment in real time?
Thank you very much!

run_basics issue

Hi,

When running the 'run_basics.py' file, I get the following error:

Traceback (most recent call last):
File "run_basics.py", line 13, in
prn = PRN(is_dlib = False, is_opencv=False)
TypeError: init() got an unexpected keyword argument 'is_opencv'

A quick look at the code shows that api.PRN does not take in an argument 'is_opencv'. Am I missing something simple here? Removing this argument gives me a separate error:

Traceback (most recent call last):
File "run_basics.py", line 52, in
write_obj(os.path.join(save_folder, name + '.obj'), vertices, colors, prn.triangles) #save 3d face(can > open with meshlab)
File "/Users/weston/Projects/PRNet/utils/write.py", line 37, in write_obj
f.write(s)
TypeError: a bytes-like object is required, not 'str'

Thanks for your help!

Checkerboard artifacts, jagged mesh output

It looks like the current network produces jagged meshes due to using stride-2 convolutions. The problem is described here: https://distill.pub/2016/deconv-checkerboard/ . If I understand correctly, the suggestion is to replace each stride-2 convolution with x = tf.image.resize_images(x, size=np.array(x.get_shape()[1:3])*2) followed by a stride-1 convolution.
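
A sketch of that suggestion, assuming TensorFlow 1.x with tf.contrib-style layers: replace each stride-2 (transposed) convolution in the decoder with a nearest-neighbor 2x upsample followed by a stride-1 convolution:

    import numpy as np
    import tensorflow as tf

    def upsample_conv(x, num_outputs, kernel_size=4):
        """Checkerboard-free alternative to a stride-2 conv2d_transpose."""
        new_size = np.array(x.get_shape().as_list()[1:3]) * 2
        x = tf.image.resize_images(x, size=new_size,
                                   method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
        return tf.contrib.layers.conv2d(x, num_outputs, kernel_size, stride=1)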

Here are some screenshots to illustrate the problem:

net_forward output (normalized to the 0-255 range):
[image: position map with checkerboard pattern]

Zoomed:
[image: moiré pattern close-up]

Jagged mesh output:
[image: jagged reconstructed mesh]

MemoryError

A memory error occurred when I ran the demo_texture.py

Traceback (most recent call last):
File "demo_texture.py", line 101, in
texture_editing(prn, parser.parse_args())
File "demo_texture.py", line 24, in texture_editing
pos = prn.process(image)
File "/home/nvee/opt/git/python/PRNet/api.py", line 99, in process
detected_faces = self.dlib_detect(image)
File "/home/nvee/opt/git/python/PRNet/api.py", line 53, in dlib_detect
return self.face_detector(image, 1)
MemoryError: std::bad_alloc

Face Swapping Video Possible?

Hey,

Please forgive me if this tracker is only for issues and not questions; I read that we can post questions here, so here are a few about this.

First of all, amazing work. This is going to change the world as we know it.

I would greatly appreciate it if you could help me with the following questions:

  1. Is the face-swap module included in this repo? If so, could you please point me to a link that shows how face swap can be used? I tried to find this info in the readme but was unable to.

  2. Can face swap be used for videos? That is, if I extract several images from a video of actor A (source) and actor B (target), can I swap the face from A to B? Is this possible with a GPU? I am currently running a Tesla K80 GPU on Ubuntu 16.04 LTS.

  3. Can I generate images of the face swap instead of OBJ files, so that I can input two videos (source and target) and see the source's face on the target's video by converting the images back into a video?

My whole idea is to use this for face swapping in videos: convert both actors' videos to images, run this code to swap the faces, then convert the modified (face-swapped) images back to video.

Is this possible with this code? I am really excited to use this and look forward to your response.

Thank you in advance!

Where is run.py

Hello, I only found run_basics.py and ran it, but I didn't find run.py.
Where can I get it?

About the speed

Hi, nice work. I've tried the demo; however, I didn't get the speed reported in the paper. The average time to process the test images is 100 ms per image. My GPU is a Tesla K80, and I think the GPU is powerful enough.

The landmark detection of the eye area is not good, sometimes it does not work at all

Dear Sir,

Thank you for sharing your amazing code. I checked out the video you uploaded on YouTube: https://youtu.be/tXTgLSyIha8. At 0:54, the landmark detection of the eye area seems not to work at all. I wonder how to improve the landmark detection in this area? Thank you for your reply.

best regards,
ze

empty output folder

python demo.py -i C:\Users\Damian\Documents\PRNet-master\TestImages\0.jpg -o C:\Users\Damian\Documents\PRNet-master\TestImages\out --isDlib False

then I executed it and the only output was:

  from ._conv import register_converters as _register_converters

C:\Users\Damian\Documents\PRNet-master>pause
Presione una tecla para continuar . . .

but nothing happens and the "out" folder is empty.

Applying pose estimate from source to target face

I'm digging into the code, trying to see if there is a way to use the pose estimate from one face to transform another. Basically, I want to detect the pose of source face A, generate a textured model of target face B, transform face B so it has the same pose as face A, and then output an image.

I tried simply applying the Euler angles you get from "--isPose True" in MeshLab, but the resulting adjustments weren't right.

It seems like there should be a simple way to do this within the demo_texture.py script, but I'm having trouble wrapping my head around the matrix transformations. Any tips?
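
If the similarity transform (R, s, t) of face A is estimated as in the pose-estimation sketch earlier on this page, transferring it reduces to applying the same transform to face B's frontal-view vertices; a sketch, with all names illustrative:

    import numpy as np

    def transfer_pose(vertices_b_frontal, R, s, t):
        """Apply the similarity transform estimated from face A to face B."""
        return s * (vertices_b_frontal @ R.T) + t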

How to get the *.mat files for jpg? And how to process a video file?

Hi, I've tried run_basics.py and it works for the images in the TestImages folder. But how can I get the *.mat files?
Also, I ran the script on a video file (with dlib) on a Mac and found the processing speed very slow (2-3 seconds per frame). Does it have to run with GPU acceleration to get a high processing speed?
Thanks!

'self' is an undefined name in predictor.py

https://github.com/YadiraF/PRNet/blob/master/predictor.py#L7

flake8 testing of https://github.com/YadiraF/PRNet on Python 3.6.3

$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

./api.py:158:19: F821 undefined name 'cv2'
        texture = cv2.remap(image, pos[:,:,:2].astype(np.float32), None, interpolation=cv2.INTER_NEAREST, borderMode=cv2.BORDER_CONSTANT,borderValue=(0))
                  ^
./api.py:158:88: F821 undefined name 'cv2'
        texture = cv2.remap(image, pos[:,:,:2].astype(np.float32), None, interpolation=cv2.INTER_NEAREST, borderMode=cv2.BORDER_CONSTANT,borderValue=(0))
                                                                                       ^
./api.py:158:118: F821 undefined name 'cv2'
        texture = cv2.remap(image, pos[:,:,:2].astype(np.float32), None, interpolation=cv2.INTER_NEAREST, borderMode=cv2.BORDER_CONSTANT,borderValue=(0))
                                                                                                                     ^
./demo.py:14:14: E999 SyntaxError: invalid syntax
    print args.isDlib
             ^
./predictor.py:7:84: F821 undefined name 'self'
    assert num_outputs%2==0, "num_outputs must be divided by channel_factor %d." % self.channel_factor
                                                                                   ^
./utils/write.py:26:20: F821 undefined name 'obj'
        obj_name = obj + '.obj'
                   ^
1     E999 SyntaxError: invalid syntax
5     F821 undefined name 'cv2'
6

Texture Problem

Why am I not getting good results in terms of texture?

[image: textured output with artifacts]

I changed the code to set opencv to True (for texture extraction), but nothing changed.

Is this due to MeshLab?
I have attached an output example:

0.zip

Thank you

Small bug in 3D vertices saver

Hi,
First of all, I would like to thank you for sharing your impressive work.
The released code has a small bug in the 3D vertices saver:
The height used in the line

    save_vertices[:,1] = h - 1 - save_vertices[:,1]

is that of the pre-cropped image (i.e. if max_size > 1000, then image = rescale(image, 1000./max_size) may have produced a different height).
To fix the bug, just replace the original line with:

    [h1, w1, _] = image.shape
    save_vertices[:,1] = h1 - 1 - save_vertices[:,1]
Thank you,
Avi

Expression driver problem

Hello, I am a student from USTC and I am interested in your research. I would like to ask whether this method can be used to drive facial expressions later on, and whether there is a good solution for that. I thought it could be driven by the data, but my peers resolutely denied it, and I completely disagree with them, so I am here to ask you.

'obj' is an undefined name in utils/write.py

https://github.com/YadiraF/PRNet/blob/master/utils/write.py#L26

flake8 testing of https://github.com/YadiraF/PRNet on Python 3.6.3

$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

./api.py:158:19: F821 undefined name 'cv2'
        texture = cv2.remap(image, pos[:,:,:2].astype(np.float32), None, interpolation=cv2.INTER_NEAREST, borderMode=cv2.BORDER_CONSTANT,borderValue=(0))
                  ^
./api.py:158:88: F821 undefined name 'cv2'
        texture = cv2.remap(image, pos[:,:,:2].astype(np.float32), None, interpolation=cv2.INTER_NEAREST, borderMode=cv2.BORDER_CONSTANT,borderValue=(0))
                                                                                       ^
./api.py:158:118: F821 undefined name 'cv2'
        texture = cv2.remap(image, pos[:,:,:2].astype(np.float32), None, interpolation=cv2.INTER_NEAREST, borderMode=cv2.BORDER_CONSTANT,borderValue=(0))
                                                                                                                     ^
./demo.py:14:14: E999 SyntaxError: invalid syntax
    print args.isDlib
             ^
./predictor.py:7:84: F821 undefined name 'self'
    assert num_outputs%2==0, "num_outputs must be divided by channel_factor %d." % self.channel_factor
                                                                                   ^
./utils/write.py:26:20: F821 undefined name 'obj'
        obj_name = obj + '.obj'
                   ^
1     E999 SyntaxError: invalid syntax
5     F821 undefined name 'cv2'
6
