

Open-PIFuhd

This is an unofficial implementation of PIFuHD:

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020)

Implementation

  • Training coarse PIFuHD
  • Training fine PIFuHD
  • Inference
  • Metrics (P2S, Normal, Chamfer)
  • GAN-generated front and back normal maps (Link)
  • Unsigned and signed distance fields

Note that the pipeline I designed does not include the normal maps generated by pix2pixHD, because that part is not the main difficulty in reimplementing PIFuHD.

Prerequisites

  • PyTorch>=1.6
  • json
  • PIL
  • skimage
  • tqdm
  • cv2
  • trimesh with pyembree
  • pyexr
  • PyOpenGL
  • freeglut (use sudo apt-get install freeglut3-dev for Ubuntu users)
  • (optional) EGL-related packages for rendering on headless machines (use apt install libgl1-mesa-dri libegl1-mesa libgbm1 for Ubuntu users)
  • face3d
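As a quick way to verify the Python-side prerequisites, here is a small sketch; the import names below are assumptions based on the list above, and pip package names may differ (e.g. opencv-python provides cv2):

```python
import importlib.util

def missing_modules(names):
    """Return the module names that cannot be found in the current environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Import names for the dependencies listed above (not their pip package names).
deps = ["torch", "PIL", "skimage", "tqdm", "cv2", "trimesh", "pyexr", "OpenGL"]
print(missing_modules(deps))  # anything printed here still needs installing
```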

Data processing

We use Render People as our dataset, but our data size is 296 scans (270 for training, 26 for testing), which is fewer than the 500 reported in the paper.

Note that we are unable to release the full training data due to the restrictions of the commercial scans.

Initial data

I modified part of the code in PIFu (branch: PIFu-modify; download it into your project) so that it can process the directories where your models are saved.

bash ./scripts/process_obj.sh [--dir_models_path]
#e.g.  bash ./scripts/process_obj.sh ../Garment/render_people_train/

Rendering data

I modified part of the code in PIFu so that it can process the directories where your models are saved.

python -m apps.render_data -i [--dir_models_path] -o [--save_processed_models_path] -s 1024 [Optional: -e]
#-e enables GPU rendering
#e.g. python -m apps.render_data -i ../Garment/render_people_train/ -o ../Garment/render_gen_1024_train/ -s 1024 -e

Render Normal Map

Render the front and back normal maps in the current project.

All config params are set in ./configs/PIFuhd_Render_People_HG_normal_map.py; then run:

bash ./scripts/generate.sh

# the params you can modify are in ./configs/PIFuhd_Render_People_HG_normal_map.py
# the important params here are, e.g.:
#   input_dir = '../Garment/render_gen_1024_train/'
#   cache = '../Garment/cache/render_gen_1024/rp_train/'
# input_dir is the output directory of the rendering step (render_gen_1024_train)
# cache is where intermediate results (e.g. points sampled from the mesh) are saved

After processing all the datasets, the directory tree looks like the following:

render_gen_1024_train/
├── rp_aaron_posed_004_BLD
│   ├── GEO
│   ├── MASK
│   ├── PARAM
│   ├── RENDER
│   ├── RENDER_NORMAL
│   ├── UV_MASK
│   ├── UV_NORMAL
│   ├── UV_POS
│   ├── UV_RENDER
│   └── val.txt
├── rp_aaron_posed_005_BLD
	....
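A minimal sketch to sanity-check one processed subject folder against the tree above (the sub-directory names are taken from this README; the scripts may emit additional files):

```python
import os

# Sub-directories expected under each rp_*_BLD folder, per the tree above.
EXPECTED_SUBDIRS = ["GEO", "MASK", "PARAM", "RENDER", "RENDER_NORMAL",
                    "UV_MASK", "UV_NORMAL", "UV_POS", "UV_RENDER"]

def missing_subdirs(subject_dir):
    """Return the expected sub-directories that are absent from subject_dir."""
    return [d for d in EXPECTED_SUBDIRS
            if not os.path.isdir(os.path.join(subject_dir, d))]
```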

Training

Training coarse-pifuhd

All config params are set in ./configs/PIFuhd_Render_People_HG_coarse.py, where you can modify whatever you want.

Note that this project is designed to be friendly, which means you can easily replace the original backbone or head with your own :)

bash ./scripts/train_pfhd_coarse.sh

Training Fine-PIFuhd

The same as coarse PIFuHD: all config params are set in ./configs/PIFuhd_Render_People_HG_fine.py.

bash ./scripts/train_pfhd_fine.sh

**If you run into GPU memory problems, please reduce batch_size in ./configs/*.py**

Inference

bash ./scripts/test_pfhd_coarse.sh
#or 
bash ./scripts/test_pfhd_fine.sh

The results will be saved to checkpoints/PIFuhd_Render_People_HG_[coarse/fine]/gallery/test/model_name/*.obj; you can then use MeshLab to view the generated models.
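If you just want a quick sanity check on a generated .obj before opening MeshLab, a plain-text parse is enough. This is only a sketch; real OBJ files can contain more record types than v/f:

```python
def obj_stats(path):
    """Count vertex and face records in a Wavefront .obj file."""
    vertices = faces = 0
    with open(path) as fh:
        for line in fh:
            if line.startswith("v "):
                vertices += 1
            elif line.startswith("f "):
                faces += 1
    return vertices, faces
```

A reconstruction that produced zero faces usually means marching cubes found no surface, which is worth catching before visual inspection.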

Metrics

export MESA_GL_VERSION_OVERRIDE=3.3 
# eval coarse-pifuhd
python ./tools/eval_pifu.py  --config ./configs/PIFuhd_Render_People_HG_coarse.py
# eval fine-pifuhd
python ./tools/eval_pifu.py  --config ./configs/PIFuhd_Render_People_HG_fine.py
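For reference, here is a brute-force sketch of what the P2S and Chamfer numbers measure on two point sets; the repo's tools/eval_pifu.py may sample, scale, and average differently:

```python
import numpy as np

def mean_nn_dist(src, dst):
    """Mean distance from each point in src to its nearest neighbour in dst."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def p2s(pred_pts, gt_pts):
    """Point-to-surface, approximated here with points sampled on the GT surface."""
    return mean_nn_dist(pred_pts, gt_pts)

def chamfer(pred_pts, gt_pts):
    """Symmetric Chamfer distance between two point sets."""
    return 0.5 * (mean_nn_dist(pred_pts, gt_pts) + mean_nn_dist(gt_pts, pred_pts))
```

The O(N*M) distance matrix is fine for small samples; real evaluation code typically uses a KD-tree instead.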

Pretrained weights

We provide the pretrained models of PIFuHD (fine-pifuhd, coarse-pifuhd).

Note that the models are trained with front and back normal maps rendered from the mesh, rather than normal maps produced by GANs. Therefore, you need to render the normal maps of your test OBJ files.

Demo

We provide rendering code using free models from RenderPeople. This tutorial uses the rp_dennis_posed_004 model. Please download the model from this link and unzip the content. Use the following command to reconstruct the model:

Debug

I provide a boolean param (debug, in all of the config files) so you can check whether the points sampled from your mesh are correct. Here are some examples:
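In the same spirit as the debug flag, here is a small sketch (save_points_obj is a hypothetical helper, not part of the repo) that dumps sampled points to a vertex-only .obj point cloud, which you can load into MeshLab next to the mesh:

```python
def save_points_obj(path, points):
    """Write an iterable of (x, y, z) points as a vertex-only Wavefront .obj."""
    with open(path, "w") as f:
        for x, y, z in points:
            f.write(f"v {x} {y} {z}\n")
```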

Visualization

As shown below, the left is the input image, the middle is the result of coarse-pifuhd, and the right is fine-pifuhd.

Reconstruction on Render People Datasets

Note that our training dataset is smaller than the official one (270 scans for ours vs. 450 for the paper), which degrades performance to some degree.

                                        IoU         ACC         Recall      P2S     Normal   Chamfer
PIFu                                    0.748       0.880       0.856       1.801   0.1446   2.00
Coarse-PIFuhd (+front and back normal)  0.865 (5cm) 0.931 (5cm) 0.923 (5cm) 1.242   0.1205   1.4015
Fine-PIFuhd (+front and back normal)    0.813 (3cm) 0.896 (3cm) 0.904 (3cm) -       0.1138   -

There is a known issue: the P2S of fine-pifuhd is a bit larger than that of coarse-pifuhd. This is because I do not apply any post-processing to clean up artifacts in the reconstruction. However, the details of the human mesh produced by fine-pifuhd are clearly better than those of coarse-pifuhd.
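One common form of the post-processing mentioned above is to keep only the largest connected component of the reconstructed mesh, discarding floating debris. A self-contained sketch using union-find on vertex indices (users of trimesh can get a similar effect from mesh.split):

```python
from collections import Counter

def largest_component_faces(faces):
    """Keep the faces whose vertices lie in the largest connected component.

    faces: list of (i, j, k) vertex-index triples.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i, j, k in faces:
        union(i, j)
        union(j, k)

    # Measure component size in face-vertex references, then keep the biggest.
    sizes = Counter(find(v) for f in faces for v in f)
    best = max(sizes, key=sizes.get)
    return [f for f in faces if find(f[0]) == best]
```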

About Me

I hope this project can contribute something to our community, especially for implicit-field research.

By the way, if you find this project helpful, please don't forget to star it : )

Related Research

Monocular Real-Time Volumetric Performance Capture (ECCV 2020) Ruilong Li*, Yuliang Xiu*, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020) Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo

ARCH: Animatable Reconstruction of Clothed Humans (CVPR 2020) Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, Tony Tung

Robust 3D Self-portraits in Seconds (CVPR 2020) Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu

Learning to Infer Implicit Surfaces without 3D Supervision (NeurIPS 2019) Shichen Liu, Shunsuke Saito, Weikai Chen, Hao Li

open-pifuhd's People

Contributors

lingtengqiu


open-pifuhd's Issues

Structure of the Garment directory

Hello, I downloaded some RenderPeople data, but I don't know the exact structure of the Garment directory, and I ran into some problems with data preprocessing. Would you be open to discussing this? My email is [email protected]. Many thanks!

Some questions about the calib matrix

Hello, I recently started out in this field and found your repository while studying PIFu. I have some related questions:
(1) In PIFu's TrainDataset.py, the ortho_ratio, scale, and center parameters are extracted from the generated PARAM and used to compute scale_intrinsic and uv_intrinsic, which finally yield the calib transformation matrix. I don't really understand the meaning of calib or this transformation process. Could you give me some pointers, or some reference links to study?
(2) The second question is related to (1): in HGPIFuNet.py, why is the z coordinate obtained by projecting a 3D point with the calib matrix exactly the depth of that point?
I hope to get your reply.

runtime error

Traceback (most recent call last):
  File "./tools/train_pifu.py", line 56, in <module>
    train_dataloader = build_dataloader(train_data_set, cfg, args)
  File "/home/gaojunxiao/train_data/Open-PIFuhd/./utils/dataloader.py", line 33, in build_dataloader
    return get_loader(dataset, batch_size=batch_size, num_workers=num_workers, collect_fn=collect_fn, shuffle=shuffle, worker_init_function=worker_init_function)
  File "/home/gaojunxiao/train_data/Open-PIFuhd/./utils/dataloader.py", line 50, in get_loader
    return torch.utils.data.DataLoader(
  File "/home/gaojunxiao/anaconda3/envs/pifu/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 266, in __init__
    sampler = RandomSampler(dataset, generator=generator)  # type: ignore
  File "/home/gaojunxiao/anaconda3/envs/pifu/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 103, in __init__
    raise ValueError("num_samples should be a positive integer "
ValueError: num_samples should be a positive integer value, but got num_samples=0

What exactly is the cause of this error?
The dataset looks like this; your help would be appreciated. Thank you.

(screenshot omitted)

pretrained model

Thanks for you works~ Can you release the pretrained model(checkpoint)?

Order of running the code

Sorry to bother you. I am an undergraduate who wants to study this topic. Could you provide the detailed order in which to run the code? Thanks!

RenderPeople dataset still available?

Hi there

Thanks for this great repo!

Just wondering - how did you get the dataset with 296 people. On the RenderPeople website link you include in the README, it seems the datasets there only have 3 rendered people. Did they change what is available on the site, or am I looking in the wrong place?

Thanks!

About training the back normals

Hello,
when training the back normals I ran into the following situation: (screenshot omitted)
Have you encountered this, or do you have any ideas?
Thanks a lot!

How can I generate the depth map?

Hi, many thanks for this open repo of the PIFuHD.

I see that I'm able to get the normal map from this model. As an extension, I would like to retrieve the depthmap as output as well. Could you give a few pointers on how can I achieve it? If possible, a small script that converts the normal->depth map would be perfect.

Thanks.

The obj model generates exceptions

The entire GitHub codebase has been run, and the resulting OBJ models are as follows:
The OBJ model generated by bash ./scripts/test_pfhd_coarse.sh: (screenshots omitted)

The OBJ model generated by bash ./scripts/test_pfhd_fine.sh: (screenshots omitted)

Is this result correct? If it is wrong, how should I fix it? I hope to get your help.

demo commands

Hi,
in the Demo section a command line to run the demo is missing:
after "Use following command to reconstruct the model:" there is no command.

Which pipeline (train_pipeline or test_pipeline) should be used when rendering normals?

The problem is this:
when I run bash ./scripts/generate.sh to render the normal maps, I noticed that its config file, PIFuhd_Render_People_HG_normal_map.py under the config folder, contains two normal-generation configurations: train_pipeline and test_pipeline. When I use test_pipeline, the debug output shows normal maps that match the mask, but when I use train_pipeline they no longer correspond. I would like to know: is the train_pipeline output the data actually needed during training, or should both the training and test sets use test_pipeline to render the normal maps? (The situations are shown below.)

Normal case, using test_pipeline:
Figure 1: front image (test_pipeline)
Figure 2: back image (test_pipeline)
Figure 3: inside image (test_pipeline)
Figure 4: outside image (test_pipeline)

Apparently abnormal case, using train_pipeline:
Figure 1: front image (train_pipeline)
Figure 2: back image (train_pipeline)
Figure 3: inside image (train_pipeline)
Figure 4: outside image (train_pipeline)

After training on a small batch, neither setting gave very good results, so I cannot tell which one is correct.
I would be very grateful if you could help me answer this question.
My email: [email protected]

Pretrain model of pix2pixhd for RenderPeople is not found

Hi, when reproducing this I found that the main code contains no code for generating the normal maps. I believe you implemented that part with the pix2pixHD network, but the model link you provided returns a 404. Could you update the pretrained GAN model? Thanks!

Contract work?

Are you available for some remote contracting work? Contact me and let me know.

Error with ./scripts/train_pfhd_coarse.sh command

Hello.

I would like to report an error with open-PIFuHD.

When running the command "./scripts/train_pfhd_coarse.sh", I get the following error:
/home/sylvan/conda/lib/python3.8/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: /home/sylvan/conda/lib/python3.8/site-packages/torchvision/image.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationESs
  warn(f"Failed to load image Python extension: {e}")
sh: 1: Syntax error: "(" unexpected
Traceback (most recent call last):
  File "./tools/train_pifu.py", line 38, in <module>
    logger = setup_logger(cfg.name, rank=args.local_rank)
  File "/home/sylvan/Downloads/open-pifuhd (convert character image to 3D model)/Open-PIFuhd-master/./utils/logger.py", line 29, in setup_logger
    fh = logging.FileHandler(logger_name, mode='w')
  File "/home/sylvan/conda/lib/python3.8/logging/__init__.py", line 1147, in __init__
    StreamHandler.__init__(self, self._open())
  File "/home/sylvan/conda/lib/python3.8/logging/__init__.py", line 1176, in _open
    return open(self.baseFilename, self.mode, encoding=self.encoding)
FileNotFoundError: [Errno 2] No such file or directory: '/home/sylvan/Downloads/open-pifuhd (convert character image to 3D model)/Open-PIFuhd-master/checkpoints/PIFuhd_Render_People_HG_coarse/logger/2022-05-26-18/logger.trainer'

Following the installation instructions, I had to modify the requirements.txt file; here is its content:
cycler==0.10.0
decorator==4.4.1
imageio==2.8.0
kiwisolver==1.1.0
matplotlib==3.1.3
networkx==2.4
numpy
opencv-python==4.2.0.32
pathlib==1.0.1
pillow==7.0.0
pyexr
PyOpenGL==3.1.5
pyparsing==2.4.6
python-dateutil==2.8.1
PyWavelets==1.1.1
scikit-image==0.16.2
scipy==1.4.1
Shapely==1.7.0
six==1.14.0
tqdm==4.43.0
trimesh
xxhash==1.4.3

I don't know how to fix it, so I wanted to point it out to you, sorry for the inconvenience.

Translated with www.DeepL.com/Translator (free version)

Some questions I'd like to ask

Hello! I recently designed a model to predict normal maps, but I ran into some problems. Would it be convenient to ask you about them? My email is [email protected]. If so, many thanks!
