
coma's People

Contributors

anuragranj, qianlim, timobolkart

coma's Issues

A question about computeError.py

Thanks for the great work! When I ran the generateErrors.sh script, the results showed that PCA actually gets a lower error than the CNN, as shown below:

[screenshot: error values reported by generateErrors.sh]

The plot also shows that PCA has better Euclidean-norm performance:

[plot: per-frame Euclidean error on the cheeks_in sequence]

I'm wondering whether you get the same results as mine, or have I misunderstood something?

pip install -r requirements.txt, issue with tensorflow

Hi, when I try to run

pip install -r requirements.txt

I get the error:

No matching distribution found for tensorflow-gpu==1.3.0 (from -r requirements.txt (line 7))

However, I've already installed tensorflow and tensorflow-gpu, and I can't spot what the issue is.
I've tried to hack your requirements file (essentially just removing the version pin), but it still doesn't work.

Can you help?

Did you train 12 models for 12 kinds of expressions?

I found there are 12 folders in the ./checkpoints folder, and each folder corresponds to an expression. Why? I thought there should be only one trained model, trained on the meshes of all expressions. Am I wrong?

CoMA dataset multi-view camera parameter

Dear authors,

Thanks for your great work and for providing valuable datasets.

While trying to use the provided datasets for research purposes, I couldn't find the camera intrinsic/extrinsic parameters (R, t, K) for the multi-view images.

Could you provide the camera parameters for the CoMA multi-view images?

Thank you.

YJHong.

There are more 3D scans than registered meshes

Thanks for the great project and the dataset.

In the downloadable registered-meshes zip file (from the CoMA website), some folders contain only a few '.ply' registered meshes, whereas the corresponding 3D-scan folder contains a few hundred scans.

For example, in FaceTalk_170811_03274_TA/mouth_middle there are only 36 registered meshes.

Is this caused by some failure in the FLAME registration process?

Show samples from latent space.

Something is wrong with OpenGL on my macOS. How can I save the intermediate results from the latent space as mesh files? I am trying to visualize the results with MeshLab.
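
One way around the viewer, as a minimal sketch: build meshes from the decoded/predicted vertex arrays with facedata.vec2mesh (as used elsewhere in this repo) and write them to disk for MeshLab. Here `predictions` stands for whatever (N, 5023, 3) array your decoding step produces, and write_ply/write_obj are assumed to be available on psbody.mesh's Mesh class.

    import os

    out_dir = 'decoded_meshes'
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)

    # `predictions` is assumed to be an (N, 5023, 3) vertex array, e.g. from
    # model.predict(...); `facedata` is the FaceData object used by the repo.
    for i, vec in enumerate(predictions):
        mesh = facedata.vec2mesh(vec)                                   # psbody.mesh.Mesh
        mesh.write_ply(os.path.join(out_dir, 'sample_%03d.ply' % i))    # or mesh.write_obj(...)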

Implementation of KL divergence

Hi, thank you for the elegant work that inspires me a lot.
I'm curious about the implementation of the KL-divergence loss, since it seems quite different from a plain VAE. In a plain VAE the latent space is represented by Gaussian parameters (μ, σ), whereas the proposed model represents it as an 8-dimensional vector. This difference makes me confused about how the KL loss is computed, and I am not able to find where this loss function is located.
Could anyone tell me where the KL-divergence loss is implemented?
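
For reference only (this is not taken from the CoMA code): the KL term of a plain Gaussian VAE is KL(N(μ, σ²) || N(0, I)) = -0.5 · Σ(1 + log σ² - μ² - σ²). A sketch in TensorFlow 1.x style, matching the repo's dependencies, is below; whether and where CoMA uses such a term is exactly the open question in this issue.

    import tensorflow as tf

    # latent mean and log-variance, 8-dimensional as in the paper
    mu = tf.placeholder(tf.float32, [None, 8])
    logvar = tf.placeholder(tf.float32, [None, 8])

    # textbook VAE KL divergence against a standard normal prior
    kl_per_sample = -0.5 * tf.reduce_sum(1.0 + logvar - tf.square(mu) - tf.exp(logvar), axis=1)
    kl_loss = tf.reduce_mean(kl_per_sample)   # averaged over the batch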

How to generate random shapes?

My OS is Windows, so it's hard for me to run your demos. What I want to do is use your generative model to generate random shapes (OBJs). Since you have only 10 latent features, how should I randomize these 10 values to generate random shapes? What's the rule?
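
One common recipe, sketched below with hypothetical encode/decode calls (the repo's exact API may differ): fit a Gaussian to the latent codes of the training meshes and sample new codes from it, rather than sampling the 10 values uniformly.

    import numpy as np

    # hypothetical calls -- substitute the repo's actual encode/decode entry points
    z_train = model.encode(x_train)                   # (N, 10) latent codes of the training meshes
    mu, sigma = z_train.mean(axis=0), z_train.std(axis=0)

    z_random = mu + sigma * np.random.randn(16, z_train.shape[1])   # 16 random codes
    random_shapes = model.decode(z_random)            # (16, num_vertices, 3) vertex arrays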

How to save a mesh to RGB image?

Hi author,
Thanks for your work. I ran your code successfully. But how do I save the predicted mesh, a two-dimensional array of shape (5023, 3), to an ordinary RGB image, so that I can visualize the mesh like Figure 2 of your paper?
I tried the following code:

    vec = predictions[0]   # vec.shape = (5023, 3), an ndarray
    predict_mesh = facedata.vec2mesh(vec)
    predict_mesh.show()

However, nothing happens.
Therefore, could you provide some advice? Thank you again.
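
A minimal sketch of one way to get an RGB image without the interactive viewer: render the vertices and the shared triangle list with matplotlib's plot_trisurf and save a PNG. It will not look as nice as the renders in the paper, but it is enough to inspect results; `predict_mesh` is the psbody.mesh.Mesh built in the snippet above (vertices in .v, triangles in .f).

    import matplotlib
    matplotlib.use('Agg')                    # headless backend, no display needed
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection

    v, f = predict_mesh.v, predict_mesh.f
    fig = plt.figure(figsize=(4, 4))
    ax = fig.add_subplot(111, projection='3d')
    ax.plot_trisurf(v[:, 0], v[:, 1], v[:, 2], triangles=f, color='lightgrey')
    ax.set_axis_off()
    fig.savefig('predicted_mesh.png', dpi=200)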

Data preprocessing to unify the meshes' vertex number and faces

Hello,

First of all, thanks a lot for your great paper and code. I am trying to apply a different dataset to your network. For that, I tried to preprocess the dataset for training, but it was not that easy. There are two main points to the preprocessing:

  1. make all meshes have the same number of vertices (to define the size of the network)
  2. make all meshes have the same connectivity (to use just one reference mesh file)

I am looking for programs, tools, or methods to do these two preprocessing steps, but I still don't know how. How did you do it?

If you don't mind sharing your knowledge and method for doing this, please let me know.

Best Regards,
Jongwon Lee

ValueError: Can't load save_path when it is None.

I trained the model using the command stated in the readme file, and when I try to test, I get this error:
File "main.py", line 96, in <module>
  predictions, loss = model.predict(X_test, X_test)
File "/media/eman/HDD/SecondYear/NewComa/coma-trained/lib/models.py", line 29, in predict
  sess = self._get_session(sess)
File "/media/eman/HDD/SecondYear/NewComa/coma-trained/lib/models.py", line 328, in _get_session
  self.op_saver.restore(sess, filename)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1557, in restore
  raise ValueError("Can't load save_path when it is None.")
ValueError: Can't load save_path when it is None.
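
A common cause of this particular ValueError is that tf.train.latest_checkpoint returns None because no checkpoint is found in the directory the model looks in (for example, the name used for testing does not match the one used for training). A quick check, where the checkpoint directory below is an assumption to replace with your own:

    import tensorflow as tf

    ckpt_dir = 'checkpoints/my_experiment'     # assumed path; use your training run's directory
    ckpt = tf.train.latest_checkpoint(ckpt_dir)
    print('latest checkpoint:', ckpt)          # None means the directory/name does not match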

The Comparison of FLAME and DeepFLAME

Hi, thanks for your code.
There is an experiment in the paper, the comparison of FLAME and DeepFLAME, and Table 5 gives the results. Do the numbers in Table 5 show the interpolation experiment results, the extrapolation experiment, or some other experiment?
If the numbers in Table 5 are the interpolation results, does that mean FLAME and DeepFLAME do better than the Mesh Autoencoder in the paper?

Training on a different template

I have been trying to retrain your model on a different template (a different 3D mesh with a different topology), but unfortunately I'm failing due to the different number of vertices and the different connectivity pattern of the raw data. Is this type of retraining possible, and if yes, what needs to be changed?

Thanks for your help.

Regarding network learning faces also

Hi Anurag,

Thanks for producing this work online. Great work indeed.

I was working on this project. Is it possible for the network to also learn faces, apart from vertices? Here the mesh is reconstructed using the reference mesh's faces. If we need to denoise some data from the mesh in the output, I would need the network to also learn faces.

Is this possible from your point of view, and if yes, what type of mathematical operations (like Chebyshev filters, etc.) should be applied over the faces?

Waiting for your response.

Regards
Veronica

Possible bug in train validation split

coma/facemesh.py

Lines 32 to 34 in 643b234

self.vertices_train = np.load(self.train_file)
self.vertices_train = self.vertices_train[:-self.nVal]
self.vertices_val = self.vertices_train[-self.nVal:]

Is it possible there is a bug here? It looks like you wanted to take the first n - nVal meshes for training and the last nVal for validation, but the code actually throws away the last nVal meshes and then takes the last nVal meshes of the already-truncated training array (i.e., meshes n - 2*nVal to n - nVal) as validation, so the validation set is a subset of the training set.

I believe the fix is:

		vertices_train = np.load(self.train_file)
		self.vertices_train = vertices_train[:-self.nVal]
		self.vertices_val = vertices_train[-self.nVal:]

Correct me if I am wrong.

The Euclidean error

Excuse me, the Euclidean error of the meshes is reported in the paper, and the errors are in millimeters, but the CoMA dataset downloaded from http://coma.is.tue.mpg.de/ is apparently not in millimeters. So I would like to know the transformation between the two scales.
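
A minimal sketch of the conversion, assuming the released meshes are in meters (an assumption worth sanity-checking, e.g. the inter-ocular distance should come out around 0.06–0.07 if the unit really is meters); the file names below are placeholders:

    import numpy as np

    pred = np.load('predicted_vertices.npy')     # (5023, 3), placeholder file name
    gt = np.load('groundtruth_vertices.npy')     # (5023, 3), placeholder file name

    err_m = np.linalg.norm(pred - gt, axis=-1)   # per-vertex Euclidean error in meters (assumed)
    err_mm = err_m * 1000.0                      # converted to millimeters
    print('mean error: %.3f mm' % err_mm.mean())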

Computing Graph Laplacians: L is [], then IndexError: list index out of range in models.py

Computing Graph Laplacians ..
L is : []
Traceback (most recent call last):
  File "main.py", line 96, in <module>
    model = models.coma(L=L, D=D, U=U, **params)
  File "/home/wuxiaoliang/wuxlprojects/coma/lib/models.py", line 1124, in __init__
    M_0 = L[0].shape[0]
IndexError: list index out of range

Excuse me, how can I solve this problem? @anuragranj @TimoBolkart

Qualitative results

Hi, can I ask how you produced the qualitative results, like the error heat maps shown in your paper? I didn't find the code for this part. Could you share the code?

[figure: per-vertex error heat map example from the paper]
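
This is not the authors' plotting code, but one plain recipe for such a heat map, sketched below: compute the per-vertex Euclidean error, map it through a colormap, and write a color-per-vertex PLY that MeshLab can display. The np.load file names are placeholders for however you obtain the predicted vertices, ground-truth vertices, and the shared triangle list.

    import numpy as np
    from matplotlib import cm

    pred = np.load('predicted_vertices.npy')      # (5023, 3), placeholder file name
    gt = np.load('groundtruth_vertices.npy')      # (5023, 3), placeholder file name
    faces = np.load('template_faces.npy')         # (F, 3) shared triangle list, placeholder

    err = np.linalg.norm(pred - gt, axis=1)                          # per-vertex Euclidean error
    colors = (cm.jet(err / err.max())[:, :3] * 255).astype(np.uint8)  # error mapped to RGB

    # write an ASCII PLY with per-vertex colors; MeshLab displays the colors directly
    with open('error_heatmap.ply', 'w') as ply:
        ply.write('ply\nformat ascii 1.0\n')
        ply.write('element vertex %d\n' % len(pred))
        ply.write('property float x\nproperty float y\nproperty float z\n')
        ply.write('property uchar red\nproperty uchar green\nproperty uchar blue\n')
        ply.write('element face %d\n' % len(faces))
        ply.write('property list uchar int vertex_indices\nend_header\n')
        for v, c in zip(pred, colors):
            ply.write('%f %f %f %d %d %d\n' % (v[0], v[1], v[2], c[0], c[1], c[2]))
        for t in faces:
            ply.write('3 %d %d %d\n' % (t[0], t[1], t[2]))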

mesh viewer

I get the output as an npy file containing an array of shape (num_meshes, num_verts, 3). I wrote the (verts, 3) vertices into an OBJ file, but how do I get the faces for them?
The mesh visualizer does not show anything if I set viz to True; it says "predictions_unperm" is not there.
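
The network only predicts vertex positions; every registered mesh shares the same triangulation, so the faces come from the reference/template mesh used for training. A minimal sketch of reusing that triangle list (the template path and the predictions file name are assumptions):

    import numpy as np
    from psbody.mesh import Mesh

    template = Mesh(filename='data/template.obj')   # assumed path to the shared template mesh
    preds = np.load('predictions.npy')              # (num_meshes, num_verts, 3)

    for i, verts in enumerate(preds):
        # reuse the template's triangle list for every predicted vertex set
        Mesh(v=verts, f=template.f).write_obj('pred_%03d.obj' % i)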

opendr issue

python main.py --data data/processed/sliced --name sliced

Traceback (most recent call last):
  File "main.py", line 2, in <module>
    from lib import models, graph, coarsening, utils, mesh_sampling
  File "/home/dl/coma/lib/mesh_sampling.py", line 8, in <module>
    from opendr.topology import get_vert_connectivity, get_vertices_per_edge
ImportError: No module named opendr.topology

What do I have to consider for using larger and more complex data?

Hello, I made a simple visualization for testing the face autoencoder: https://www.youtube.com/watch?v=PGeEPXblCI4&ab_channel=EJShim

And I am currently trying to use more complex data: a randomly transformed model of 16 teeth.

It has 73,316 points in total (the face model has 5,023 points).

CoMA somehow optimizes it, but the results seem very noisy compared to the face model:

[screenshot: noisy reconstruction of the teeth model]

Is the data too large or too heavily transformed?

Or, do you think this kind of data can be used? If so, what do I have to consider for improving results?

About mesh data real size

Hi, Anurag,

First of all, thanks for this excellent work on 3D faces.

I have a question about your released dataset. In your ECCV paper you compare the Euclidean distance with other methods and the errors are in millimeters, but the released dataset does not seem to mention the measurement units. Would you mind sharing them?

pip install -r requirements.txt: "I'm getting the error below when I try to run pip install -r requirements.txt, please help me"

C:\Users\Thanush\github\cp-barebone-movie-app-final\cp_django_barebones (step_11_final_movie_app -> origin)
λ ls
__init__.py  settings.py  urls.py  wsgi.py

C:\Users\Thanush\github\cp-barebone-movie-app-final\cp_django_barebones (step_11_final_movie_app -> origin)
λ conda create --name movie_app python=3.6
Collecting package metadata (repodata.json): done
Solving environment: done

Package Plan

environment location: C:\Users\Thanush\Anaconda3\envs\movie_app

added / updated specs:
- python=3.6

The following NEW packages will be INSTALLED:

certifi pkgs/main/win-64::certifi-2020.4.5.1-py36_0
pip pkgs/main/win-64::pip-20.0.2-py36_3
python pkgs/main/win-64::python-3.6.10-h9f7ef89_2
setuptools pkgs/main/win-64::setuptools-47.1.1-py36_0
sqlite pkgs/main/win-64::sqlite-3.31.1-h2a8f88b_1
vc pkgs/main/win-64::vc-14.1-h0510ff6_4
vs2015_runtime pkgs/main/win-64::vs2015_runtime-14.16.27012-hf0eaf9b_2
wheel pkgs/main/win-64::wheel-0.34.2-py36_0
wincertstore pkgs/main/win-64::wincertstore-0.2-py36h7fe50ca_0
zlib pkgs/main/win-64::zlib-1.2.11-h62dcd97_4

Proceed ([y]/n)? y

Preparing transaction: done
Verifying transaction: done
Executing transaction: done

To activate this environment, use

$ conda activate movie_app

To deactivate an active environment, use

$ conda deactivate

C:\Users\Thanush\github\cp-barebone-movie-app-final\cp_django_barebones (step_11_final_movie_app -> origin)
λ conda activate movie_app

CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
If using 'conda activate' from a batch script, change your
invocation to 'CALL conda.bat activate'.

To initialize your shell, run

$ conda init <SHELL_NAME>

Currently supported shells are:

  • bash
  • cmd.exe
  • fish
  • tcsh
  • xonsh
  • zsh
  • powershell

See 'conda init --help' for more information and options.

IMPORTANT: You may need to close and restart your shell after running 'conda init'.

I'm getting the error below when I try to run "pip install -r requirements.txt". Please help me:

C:\Users\Thanush\github\cp-barebone-movie-app-final (step_11_final_movie_app -> origin)
λ pip install -r requirements.txt
Collecting airtable-python-wrapper==0.8.0 (from -r requirements.txt (line 1))
Using cached https://files.pythonhosted.org/packages/a2/5f/13d016710c494ff07da8e5d62f845d0abd3fe3bb35444624258d51182f9f/airtable_python_wrapper-0.8.0-py2.py3-none-any.whl
Collecting certifi==2018.1.18 (from -r requirements.txt (line 2))
Using cached https://files.pythonhosted.org/packages/fa/53/0a5562e2b96749e99a3d55d8c7df91c9e4d8c39a9da1f1a49ac9e4f4b39f/certifi-2018.1.18-py2.py3-none-any.whl
Requirement already satisfied: chardet==3.0.4 in c:\users\thanush\anaconda3\lib\site-packages (from -r requirements.txt (line 3)) (3.0.4)
Collecting Django==2.0 (from -r requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/44/98/35b935a98a17e9a188efc2d53fc51ae0c8bf498a77bc224f9321ae5d111c/Django-2.0-py3-none-any.whl (7.1MB)
2% |█ | 174kB 2.1kB/s eta 0:53:48
Exception:
Traceback (most recent call last):
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_vendor\urllib3\response.py", line 360, in _error_catcher
    yield
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_vendor\urllib3\response.py", line 442, in read
    data = self._fp.read(amt)
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py", line 62, in read
    data = self.__fp.read(amt)
  File "C:\Users\Thanush\Anaconda3\lib\http\client.py", line 447, in read
    n = self.readinto(b)
  File "C:\Users\Thanush\Anaconda3\lib\http\client.py", line 491, in readinto
    n = self.fp.readinto(b)
  File "C:\Users\Thanush\Anaconda3\lib\socket.py", line 589, in readinto
    return self._sock.recv_into(b)
  File "C:\Users\Thanush\Anaconda3\lib\ssl.py", line 1052, in recv_into
    return self.read(nbytes, buffer)
  File "C:\Users\Thanush\Anaconda3\lib\ssl.py", line 911, in read
    return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\cli\base_command.py", line 179, in main
    status = self.run(options, args)
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\commands\install.py", line 315, in run
    resolver.resolve(requirement_set)
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\resolve.py", line 131, in resolve
    self._resolve_one(requirement_set, req)
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\resolve.py", line 294, in _resolve_one
    abstract_dist = self._get_abstract_dist_for(req_to_install)
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\resolve.py", line 242, in _get_abstract_dist_for
    self.require_hashes
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\operations\prepare.py", line 334, in prepare_linked_requirement
    progress_bar=self.progress_bar
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\download.py", line 878, in unpack_url
    progress_bar=progress_bar
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\download.py", line 702, in unpack_http_url
    progress_bar)
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\download.py", line 946, in _download_http_url
    _download_url(resp, link, content_file, hashes, progress_bar)
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\download.py", line 639, in _download_url
    hashes.check_against_chunks(downloaded_chunks)
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\utils\hashes.py", line 62, in check_against_chunks
    for chunk in chunks:
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\download.py", line 607, in written_chunks
    for chunk in chunks:
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\utils\ui.py", line 159, in iter
    for x in it:
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_internal\download.py", line 596, in resp_read
    decode_content=False):
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_vendor\urllib3\response.py", line 494, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_vendor\urllib3\response.py", line 459, in read
    raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
  File "C:\Users\Thanush\Anaconda3\lib\contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Users\Thanush\Anaconda3\lib\site-packages\pip\_vendor\urllib3\response.py", line 365, in _error_catcher
    raise ReadTimeoutError(self._pool, None, 'Read timed out.')
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.

Downsampling the 3D vertices

Hi Anurag,

Thanks for answering the previous issue. Yes, it helped me get a clearer view of the problem.

In your code, the downsampling and upsampling transforms are computed before training. I have been working on this for the past few days. I am working on noise removal from 3D meshes and want my network to learn which vertices need to be removed. Are there other techniques to downsample the data? Currently the vertex quadric cost is evaluated, and the valid vertices along with the faces are generated before training.
Or should I downsample while running the network, so that through the loss the network detects the valid vertices and computes the valid faces?
I am new to 3D convolutional networks; I would appreciate your guidance.

Thanks and Regards
Veronica

psbody.mesh module not found issue

While running the processData.py script, in order to read all the objects from a particular folder and save them as numpy arrays, I got the following error.

python processData.py --data /clothed_output/ --save_path /clothed_out_nparray/

Traceback (most recent call last):
  File "processData.py", line 3, in <module>
    from facemesh import *
  File "/home/bharath/Desktop/mesh/coma-master/facemesh.py", line 4, in <module>
    from psbody.mesh import Mesh, MeshViewer, MeshViewers
ImportError: No module named psbody.mesh

Image and mesh do not match

Hello. Thank you for your great work!

I wish to do some experiments with your dataset and want to align the images and meshes together. However, the numbers of images and meshes do not match, and they are not aligned.

For example, the subject "FaceTalk_170725_00137_TA" has 134 images in mouth_extreme but it has only 64 meshes in the registered data.

Could you tell me the specific procedure for extracting those data? It would be a big help.

Thanks in advance.
