simpleig / geo-pifu

This repository is the official implementation of Geo-PIFu: Geometry and Pixel Aligned Implicit Functions for Single-view Human Reconstruction.

Python 43.69% Shell 0.05% MATLAB 0.07% Dockerfile 0.06% Jupyter Notebook 54.29% CMake 0.07% C++ 0.66% Cuda 0.71% GLSL 0.39%


geo-pifu's Issues

Creating data for TrainDataset.py from render_mesh.py

I am training Geo-PIFu with the DeepHuman dataset (the same one the author used).

The training data required by TrainDataset.py are:
OBJ, RENDER, MASK, PARAM, UV_RENDER, UV_MASK, UV_NORMAL, and UV_POS.

render_mesh.py provides only a few of them.
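
As a quick sanity check (a small sketch, not from the repo; the dataset root path below is an assumption), one can list which of those expected folders actually exist after running render_mesh.py:

import os

# Sketch: report which of the folders TrainDataset.py expects are present.
# `root` is an assumed dataset root path, not a path from the repo.
required = ["OBJ", "RENDER", "MASK", "PARAM",
            "UV_RENDER", "UV_MASK", "UV_NORMAL", "UV_POS"]
root = "./training_data"
for name in required:
    status = "found" if os.path.isdir(os.path.join(root, name)) else "MISSING"
    print("%-9s %s" % (name, status))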

How are we supposed to get the other items?

Are we supposed to use the PIFu GitHub code to get them? Even if we train with the PIFu code, the DeepHuman dataset does not provide a texture (UV) map. So, do we need to create textures with UVTextureConverter.UVConverter.create_texture() and then use the PIFu GitHub code to generate the items above?

Sorry for the doubts; I am new to all this.

Train on another dataset

Hi,

Thanks for the code. Can we use render_mesh.py with another dataset? I have a dataset that consists only of SMPL .obj files and the corresponding meshes in .obj format.

May I know how to proceed with data generation?

Thanks,
Sai

Can't replicate conda environment

The instructions describe how to configure the conda environment from a file that does not exist in the current master HEAD: the geopifu_requirements.yml file is missing.

conda env create -f geopifu_requirements.yml
conda activate geopifu

Are these instructions obsolete?

CD and PSD

Excuse me, I would like to ask: what is the difference between the CD and PSD metrics for evaluating the quality of a mesh reconstruction?
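
For context: in this literature, CD usually denotes the bidirectional Chamfer Distance between point sets sampled from the two surfaces, while PSD denotes a one-directional Point-to-Surface Distance measured against the actual mesh surface. A minimal sketch of a common formulation of both (not necessarily the paper's exact evaluation code):

import numpy as np
import trimesh
from scipy.spatial import cKDTree

def chamfer_distance(mesh_a, mesh_b, n=10000):
    # CD: bidirectional average nearest-neighbor distance between point
    # clouds sampled from the two surfaces.
    pa, _ = trimesh.sample.sample_surface(mesh_a, n)
    pb, _ = trimesh.sample.sample_surface(mesh_b, n)
    d_ab, _ = cKDTree(pb).query(pa)
    d_ba, _ = cKDTree(pa).query(pb)
    return 0.5 * (d_ab.mean() + d_ba.mean())

def point_to_surface_distance(mesh_gt, mesh_pred, n=10000):
    # PSD: one-directional, from points sampled on the predicted mesh to
    # the closest point on the ground-truth surface (not just its samples).
    pts, _ = trimesh.sample.sample_surface(mesh_pred, n)
    _, dist, _ = trimesh.proximity.closest_point(mesh_gt, pts)
    return dist.mean()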

Will the code be published?

Your paper looks super interesting and I'd love to try running it on my computer.

Any plans to publish the code?

Out of memory during Inference

I'm using a GTX 1080 Ti (11 GB memory) to test the given model with the pre-trained checkpoints. When I run test_shape_iccv, it reports CUDA out of memory. Does the model require more than 11 GB of memory for inference?

Thanks in Advance.
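
One generic PyTorch mitigation worth trying (a sketch with assumed placeholder names such as model.query, not repo-specific code) is to run the query step under torch.no_grad() and evaluate the sample points in chunks:

import torch

# Sketch: disable autograd and chunk the query points to cut peak GPU memory.
# `model.query` and the (B, 3, N) points layout are assumptions, not repo API.
def query_in_chunks(model, points, chunk=10000):
    preds = []
    with torch.no_grad():
        for s in range(0, points.shape[-1], chunk):
            preds.append(model.query(points[..., s:s + chunk]))
    return torch.cat(preds, dim=-1)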

When will the code for training Geo-PIFu be available?

Firstly, congratulations on your paper's acceptance!
I have been following this repo for updates to the paper's code ever since I saw the preprint. I wonder when the training code for Geo-PIFu will be available?

How to create deepVoxels.npy?

While training with train_query.py, we need the deepVoxels tensor.

That tensor can be created using get_deepVoxels() from TrainDataset.py, with the help of a numpy file named "__index__deepVoxels.npy":

deepVoxels_path = "%s/%06d_deepVoxels.npy" % (self.opt.deepVoxelsDir, idx)

I have gone through render_mesh.py but couldn't find a way to create deepVoxels.npy.

Can anyone tell me how to create it?
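
For what it's worth, a purely hypothetical sketch: the %06d_deepVoxels.npy files look like the coarse network's latent voxel features, so one plausible way to produce them is to run the trained coarse model over each sample and save its feature volume. The names coarse_net and filter, and the tensor shape below, are assumptions, not confirmed repo APIs.

import numpy as np
import torch

# Hypothetical sketch only: save the coarse model's latent voxel features
# in the layout that get_deepVoxels() expects to load.
with torch.no_grad():
    feat = coarse_net.filter(image)   # assumed shape, e.g. (1, C=8, D=32, H=48, W=32)
np.save("%s/%06d_deepVoxels.npy" % (opt.deepVoxelsDir, idx),
        feat[0].cpu().numpy())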

What's the difference between TrainDataset and TrainDatasetICCV?

I am interested in your work. Your codebase is rich: it includes rendering, DeepHuman, and PIFu, which is beneficial for my research. As far as I know, PIFu needs additional rendering results. However, there is no dataset in the same format as the original PIFu, and only the code is available. Both PIFu and Geo-PIFu use the TrainDatasetICCV class here. What's the difference between TrainDataset and TrainDatasetICCV?

Normalized z Values for 3D Feature Query

First of all, thank you very much for your valuable work; I have found it inspiring!

I am trying to understand the code, and I am reading the part where the 3D features are queried at the points sampled for a forward pass, after the image features are pre-computed and before the 2D & 3D features are passed to the surface classifier. Here I would like to ask a question about the use of normalized z values, e.g., in lines 145-165 of Geo-PIFu/geopifu/lib/geometry.py.

if self.opt.deepVoxels_fusion == "multiLoss":
    for im_feat in self.im_feat_list:
        # 2d features: [(B * num_views, opt.hourglass_dim, n_in+n_out), (B * num_view, 1, n_in+n_out)]
        point_local_feat_list = [self.index(im_feat, xy), z_feat]  # torch.nn.functional.grid_sample, in geometry.py
        point_local_feat = torch.cat(point_local_feat_list, 1)
        # 3d features
        if self.opt.multiRanges_deepVoxels:
            # tri-linear sampling: (BV,3,N) + (B,C=8,D=32,H=48,W=32) ---> (BV,C=56,N)
            features_3D = self.multiRanges_deepVoxels_sampling(feat=deepVoxels, XYZ=torch.cat([xy, -1. * z], dim=1), displacments=self.displacments)
        else:
            # tri-linear sampling: (BV,3,N) + (B,C=8,D=32,H=48,W=32) ---> (BV,C=8,N)
            features_3D = self.index_3d(feat=deepVoxels.transpose(0,1), XYZ=torch.cat([xy, -1. * z], dim=1))
        # predict sdf
        pred_sdf_list = self.surface_classifier(feature_2d=point_local_feat, feature_3d=features_3D)
        for pred in pred_sdf_list:
            pred_visible = in_img[:,None].float() * pred
            self.intermediate_preds_list.append(pred_visible)

As in line 150, the 2D pixel-aligned features for the sampled query points are concatenated with normalized depth values (z_feat). However, in lines 156 and 159, the 3D features are queried using depth values that are not yet normalized (z). May I ask what the considerations are behind using un-normalized z values when querying the 3D features?

I further looked into how the 2D & 3D features are combined in the surface classifier (line 162), but I do not see any special manipulation involving the depth values.

If I am missing or misunderstanding anything, I would appreciate it a lot if you could let me know. Thank you.
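
For reference (a generic PyTorch note, not an answer from the authors): F.grid_sample expects its sampling grid to be normalized to [-1, 1] along every axis, with the last grid dimension ordered (x, y, z), so an extra z normalization would only matter if z were outside that range.

import torch
import torch.nn.functional as F

# Generic convention check for trilinear sampling (not repo code):
# coordinates must already lie in [-1, 1] on all three axes.
feat = torch.randn(1, 8, 32, 48, 32)                 # (B, C, D, H, W) voxel features
grid = torch.zeros(1, 1, 1, 5, 3)                    # 5 query points in [-1, 1]^3, (x, y, z) order
out = F.grid_sample(feat, grid, align_corners=True)  # -> (1, 8, 1, 1, 5)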

Geo-PIFu training is too slow

Hi, I have finished the data preprocessing following your instructions and started to train the model. However, I found that training Geo-PIFu is too time-consuming. The first stage, i.e., apps.train_shape_coarse, has been running for 3 days but is still at the 5th epoch (of 30 epochs in total). The README claims that this stage takes only 2 days, so what is the problem? I am training with 4 TITAN RTX GPUs and a batch size of 16; it should not be this slow, I would think.
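
One quick way to tell whether the GPUs or the data pipeline is the bottleneck (a generic diagnostic sketch; train_loader and train_step are placeholders, not repo names):

import time

# Sketch: if `t_data` dominates, the DataLoader (disk I/O, num_workers)
# is the bottleneck rather than the GPU compute.
t_end = time.time()
for i, batch in enumerate(train_loader):
    t_data = time.time() - t_end        # time spent waiting for data
    t_start = time.time()
    train_step(batch)                   # forward/backward/optimizer step
    t_step = time.time() - t_start
    print("iter %d: data %.3fs, step %.3fs" % (i, t_data, t_step))
    t_end = time.time()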

opendr_requirements.yml file is outdated.

I have tried every method to set up the environment from the provided opendr_requirements.yml file.
All attempts were made from the Anaconda prompt (base):

  1. Tried a normal install with: conda env create --file opendr_requirements.yml => failed with a lot of conflict errors.
  2. To solve the above, I tried this. Eventually, by editing the original yml file, I moved the conflicting dependencies to pip. => fewer conflict errors.
  3. To solve the remaining issue, I found this issue, which attributed the problems to build numbers. Meanwhile, I browsed PIFu's GitHub repo and found their yml file, which has no version numbers on its dependencies. So I tried it and removed the build numbers from the dependencies in opendr_requirements.yml.
    => No luck, as there were still conflicts (which told me the conflicts aren't due to build numbers but to some internal Python package conflicts).
  4. So I opened the given create_opendrEnv.sh file and manually entered all the commands line by line.
    Some of the packages aren't available for Python 2.7.
    Here is the create_opendrEnv.sh file; I have edited it to show which parts of the code ran and what errors the others produced.
    Code lines marked with * aren't working.
    Check this: create_opendrEnv - Copy.txt
    Packages that are not supported for Python 2.7 give this error:
    ERROR: Could not find a version that satisfies the requirement _required_python_package (from versions: none)
    ERROR: No matching distribution found for _required_python_package

I know you are busy, but my dream depends on this project.
I need to finish it somehow.
So please respond.

Is there a demo code?

Hi, great work!

Is there demo code so that a single image can be reconstructed? I see the test code for all the test data, but not for a single example.

Thanks

RAM runs out while creating query points (on Colab Pro High-RAM)

I have been trying for the past 3 days to create query points on Colab Pro, but it keeps crashing.
I also tried on my PC (a high-end gaming PC), but it still crashed.

The problem is that the code gets stuck on:

inside = mesh.contains(sample_points)

My solution
I noticed you did 3 things:

  1. Random rotation
  2. Normalization (X~[+-0.333], Y~[+-0.5], Z~[+-0.333])
  3. Changed the space by changing BMIN/BMAX to [+-0.33333333, +-0.5, +-0.33333333]

So I tested only one of them at a time (replacing the other two with the original PIFu defaults).
Then I tested two of them at a time.
The problem still continued.

Then I took the mesh (DeepHuman dataset #10198) and loaded the inside points using the PIFu script:

It worked after I changed

sample_points = surface_points + np.random.normal(scale=1.*args.sigma/consts.dim_h, size=surface_points.shape) # (N1, 3)

to

sample_points = surface_points + np.random.normal(scale=5.0, size=surface_points.shape) #(as given in PIFu for Gaussian noise)

  1. Can anyone tell me why this happened? I know this is wrong according to Geo-PIFu, but still.
  2. Can I train Geo-PIFu on these query points?

Please tell me how I can proceed further.
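
One workaround worth trying for the crash itself (a sketch, not from the repo): query mesh.contains() in chunks, which keeps trimesh's inside/outside test from holding all samples in memory at once.

import numpy as np

# Workaround sketch (not repo code): run trimesh's containment test in
# chunks to bound peak memory; results are identical to one big call.
def contains_chunked(mesh, points, chunk=100000):
    inside = np.zeros(len(points), dtype=bool)
    for s in range(0, len(points), chunk):
        inside[s:s + chunk] = mesh.contains(points[s:s + chunk])
    return inside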

When will the code be released?

First, congratulations on the acceptance! We are really interested in your paper and hope to build on it in our future work. Could you please share your codebase as soon as possible?
