
kai-46 / iron

Stars: 289 · Forks: 24 · Size: 1.9 MB

Inverse rendering by optimizing neural SDF and materials from photometric images

License: BSD 2-Clause "Simplified" License

Shell 1.11% Python 98.89%

iron's People

Contributors

kai-46


iron's Issues

About preprocessing pipeline

Hello! I tried to run IRON on my own dataset, but it does not produce any viable results.
I assume you used NeRF++ to produce cam_dict_norm.json, but there is very little detail on exactly how you processed the images captured by the camera.
Could you provide more details on the steps you took to process the images and create the train/test folders?
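
For reference, a minimal sketch of what a cam_dict_norm.json in the NeRF++/IRON convention might look like — the key names (K, W2C, img_size) and the flattened-matrix layout are an assumption based on related repos, not confirmed here:

import json
import numpy as np

# Hypothetical per-image entry: a 4x4 intrinsics matrix K (padded to 4x4),
# a 4x4 world-to-camera matrix W2C, and the image size [width, height].
cam_dict = {}
K = np.eye(4)
K[0, 0] = K[1, 1] = 600.0        # focal length in pixels (example value)
K[0, 2], K[1, 2] = 512.0, 384.0  # principal point (example value)
W2C = np.eye(4)                  # world-to-camera extrinsics for this view
cam_dict["000000.png"] = {
    "K": K.flatten().tolist(),
    "W2C": W2C.flatten().tolist(),
    "img_size": [1024, 768],
}
with open("cam_dict_norm.json", "w") as f:
    json.dump(cam_dict, f, indent=2)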

Exporting materials

An error "index 0 is out of bounds for axis 1 with size 0" occurred while exporting the material.
vertices, face_vertices, texturecoords, face_texturecoords = loadmesh_and_checkuv(mesh_fpath, out_dir)
This code gets the shape of the variables texturecoords and face_texturecoords are both (0,0). Why?

Why NeuS can be used for dark-scene flash image reconstruction

Hi author,
According to your code, NeuS is used for the stage-1 reconstruction. However, for each input dark-scene image, the lighting changes along with the camera. As far as I know, NeuS should only work in static-lighting scenarios, so I wonder why NeuS works in your setting. Does it mean NeuS explains the moving flashlight as view-dependent effects?

Why should we use "roughness range loss"

Hi Kai,
Thanks for another great work! The results of IRON look amazing!
However, I am confused about the "roughness range loss" mentioned in the paper. Why should we encourage the estimated roughness to stay below 0.5? What if we encounter objects with large roughness in custom data?
Looking forward to your advice.
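
For context, a regularizer of this kind can be written as a one-sided penalty. A minimal sketch, assuming the loss simply penalizes roughness above the 0.5 threshold — the repo's actual implementation may differ:

import torch

def roughrange_loss(roughness: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    # One-sided penalty: zero where roughness <= threshold, linear above it,
    # nudging the estimate toward the low-roughness range without a hard clamp.
    return torch.relu(roughness - threshold).mean()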

result mesh is not good

Hi Kai!
I just ran train_scene.sh with the "xmen" data, but the resulting mesh in ./exp_iron_stage2/xmen/mesh_and_materials_50000 is too smooth:
[screenshot]
This is the config file:
general {
    base_exp_dir = ./exp_iron_stage1/CASE_NAME/
    recording = [
        ./,
        ./models
    ]
}

dataset {
    data_dir = ./datasets/Luanetal2021/CASE_NAME/train/
    render_cameras_name = cameras_sphere.npz
    object_cameras_name = cameras_sphere.npz
}

train {
    learning_rate = 5e-4
    learning_rate_alpha = 0.05
    end_iter = 100001

    batch_size = 1800
    validate_resolution_level = 4
    warm_up_end = 5000
    anneal_end = 50000
    use_white_bkgd = False

    save_freq = 10000
    val_freq = 2500
    val_mesh_freq = 5000
    report_freq = 100

    igr_weight = 0.1
    mask_weight = 0.0
}

model {
    nerf {
        D = 8,
        d_in = 4,
        d_in_view = 3,
        W = 256,
        multires = 10,
        multires_view = 4,
        output_ch = 4,
        skips = [4],
        use_viewdirs = True
    }

    sdf_network {
        d_out = 257
        d_in = 3
        d_hidden = 256
        n_layers = 8
        skip_in = [4]
        multires = 6
        bias = 0.5
        scale = 1.0
        geometric_init = True
        weight_norm = True
    }

    variance_network {
        init_val = 0.3
    }

    rendering_network {
        d_feature = 256
        mode = idr
        d_in = 9
        d_out = 3
        d_hidden = 256
        n_layers = 8
        skip_in = [4]
        weight_norm = True
        multires = 10
        multires_view = 4
        squeeze_out = True
    }

    neus_renderer {
        n_samples = 64
        n_importance = 64
        n_outside = 32
        up_sample_steps = 4     # 1 for simple coarse-to-fine sampling
        perturb = 1.0
    }
}
Do you know why?

Also, I can't find the "dragon" data for training and testing in the dataset:
[screenshot]
Could you provide it so I can compare with your results?

importing model into blender

Hi,
Thank you for a great contribution!
I am trying to import the created mesh.obj into Blender, together with diffuse_albedo.png, specular_albedo.png, and roughness.png. I do this by creating a Principled BSDF material in Blender and assigning the images to the Base Color, Specular, and Roughness slots, respectively. The results are not what I expected: the images look dim, which I assume means that the roughness is too high? On the other hand, the test images rendered by python render_surface.py ... --render_all look good.
Any help is appreciated!
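
One thing worth checking before blaming roughness: Blender sRGB-decodes image textures by default, and data maps such as roughness should be marked Non-Color. A minimal bpy sketch, assuming default node names and these file paths (the Principled Specular input name varies across Blender versions, so it is omitted here):

import bpy

mat = bpy.data.materials.new(name="iron_material")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
links = mat.node_tree.links

def add_tex(path, non_color=False):
    node = mat.node_tree.nodes.new("ShaderNodeTexImage")
    node.image = bpy.data.images.load(path)
    if non_color:
        # Data maps must not be sRGB-decoded.
        node.image.colorspace_settings.name = "Non-Color"
    return node

diffuse = add_tex("diffuse_albedo.png")
rough = add_tex("roughness.png", non_color=True)
links.new(diffuse.outputs["Color"], bsdf.inputs["Base Color"])
links.new(rough.outputs["Color"], bsdf.inputs["Roughness"])

Note also that IRON's specular_albedo is a color map, while the Principled Specular slot expects a scalar (around 0.5), so a direct assignment there may not be physically meaningful.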

CUDA out of memory

train_scene.sh drv/rabbit
Hello Wooden
Load data: Begin
Not using masks
image shape, mask shape: torch.Size([324, 768, 1024, 3]) torch.Size([324, 768, 1024, 3])
image pixel range: 0.0 1.0
Load data: End
0%| | 0/100001 [00:00<?, ?it/s]
Traceback (most recent call last):
File "render_volume.py", line 449, in
runner.train()
File "render_volume.py", line 127, in train
render_out = self.renderer.render(
File "/home/michael/iron/models/renderer.py", line 374, in render
ret_fine = self.render_core(
File "/home/michael/iron/models/renderer.py", line 233, in render_core
gradients = sdf_network.gradient(pts)
File "/home/michael/iron/models/fields.py", line 110, in gradient
gradients = torch.autograd.grad(
File "/home/michael/anaconda3/envs/iron/lib/python3.8/site-packages/torch/autograd/init.py", line 275, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 5.80 GiB total capacity; 4.03 GiB already allocated; 118.56 MiB free; 4.08 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Wrote config file to ./exp_iron_stage2/drv/rabbit/args.txt
render_surface.py:256: DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning dissapear) use import imageio.v2 as imageio or call imageio.v2.imread directly.
im = imageio.imread(fpath).astype(np.float32) / 255.0
ic| fill_holes: False
handle_edges: True
is_training: True
args.inv_gamma_gt: False
0%| | 0/50001 [00:00<?, ?it/s]ic| args.out_dir: './exp_iron_stage2/drv/rabbit'
global_step: 0
loss.item(): 0.00573146715760231
img_loss.item(): 0.0
img_l2_loss.item(): 0.0
img_ssim_loss.item(): 0.0
eik_loss.item(): 0.00573146715760231
roughrange_loss.item(): 0.0
color_network_dict["point_light_network"].get_light().item(): 5.6220927238464355
1%|▎ | 499/50001 [01:35<3:20:37, 4.11it/s]ic| args.out_dir: './exp_iron_stage2/drv/rabbit'
global_step: 500
loss.item(): 0.014144735410809517
img_loss.item(): 0.0
img_l2_loss.item(): 0.0
img_ssim_loss.item(): 0.0
eik_loss.item(): 0.014144735410809517
roughrange_loss.item(): 0.0
color_network_dict["point_light_network"].get_light().item(): 5.224419593811035

About BRDF

Hi, Kai!
I read the code and am confused about the BRDF implemented. According to the code, specular_rgb = light_intensity * specular_albedo * F * D * G / (4.0 * dot + 1e-10) with F = 0.03867, so F is fixed and a specular_albedo is multiplied into the BRDF, which differs from the simplified Disney BSDF used in PhySG. Could you explain which BRDF you used?
Thank you!
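
(For reference: a constant F of 0.03867 matches the normal-incidence Fresnel reflectance F0 = ((eta - 1) / (eta + 1)) ** 2 of a dielectric with index of refraction eta ≈ 1.49, i.e. a common plastic-like material, so the fixed value most likely bakes in that assumption. This is an inference from the number itself, not something stated by the author.)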

About photometric image

Awesome work! Is this method only valid for photometric images (captured by a hand-held smartphone camera with the LED flashlight turned on in a dark environment)? Can we use images captured by a phone in a common indoor scene?

How to train/test on my own dataset

Hi,
I want to try IRON on my own datasets, but I don't know how to get cam_dict_norm.json. Could you share this part of the code with me? Thanks!

The question about the edge gradient problem existed in IDR

Hi, I have read this paper and have some questions. I agree with the paragraph stating that "For this reason, these methods compute biased gradients w.r.t. the weights of the neural SDF that move surface points along camera rays"; this is easy to understand given the loss function used in IDR. However, Figure 4, which you use to demonstrate your method's effectiveness, confuses me: IDR adopts a mask loss during training, so the SDF values are optimized to fit the silhouette supervision and should not suffer a similar degradation. Can you give me some inspiration or advice?

Is this a bug in the GGX renderer?

In the Cook-Torrance BRDF,
f_specular = D * F * G / (4 * (n dot v) * (n dot l)).
In the co-located setting you assume v == l, so the equation should be
f_specular = D * F * G / (4 * (n dot v) ** 2), right?
But in your code implementation,

specular_rgb = light_intensity * specular_albedo * F * D * G / (4.0 * dot + 1e-10)

it is
f_specular = D * F * G / (4 * (n dot v)),
not f_specular = D * F * G / (4 * (n dot v) ** 2).
Could you have a look at this? @Kai-46

Poor results for metal materials

Hi,
Thank you for this great work!
I tried IRON with my own datasets; when it comes to metal materials, the results seem poor. Do you know why?
[screenshot]
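
(Possibly relevant: the args dump quoted in a later issue on this page shows an is_metal=False option for render_surface.py, which suggests there is a metal-specific shading mode that might be worth enabling for metallic objects.)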

About export materials

Hi!

Thank you for this great work!

I'm a little confused about the material export. Why should the UVs be repeated to a neighborhood? And is the basic idea to use a Gaussian as the weight to sum up the repeated values?

### repeat to a neighborhood
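
On the Gaussian question, a minimal sketch of the general idea (a hypothetical helper, not the repo's export_materials.py): each surface sample's material value is splatted into a small texel neighborhood around its UV location, weighted by a Gaussian of the texel-space distance, and the texture is finally normalized by the accumulated weights.

import numpy as np

def splat_to_texture(uv, values, res=512, radius=2, sigma=1.0):
    # uv: (N, 2) coordinates in [0, 1]; values: (N, C) material samples.
    tex = np.zeros((res, res, values.shape[1]), dtype=np.float32)
    wsum = np.zeros((res, res, 1), dtype=np.float32)
    px = uv * (res - 1)
    for (x, y), v in zip(px, values):
        xi, yi = int(round(x)), int(round(y))
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                u, w_ = xi + dx, yi + dy
                if 0 <= u < res and 0 <= w_ < res:
                    # Gaussian weight of the texel-space distance to the sample.
                    w = np.exp(-((u - x) ** 2 + (w_ - y) ** 2) / (2 * sigma ** 2))
                    tex[w_, u] += w * v
                    wsum[w_, u] += w
    # Normalize by accumulated weight; empty texels stay zero.
    return tex / np.maximum(wsum, 1e-10)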

Blender Error: File format is not supported in file 'mesh.obj'

Finished importing: './exp_iron_stage2/superman/mesh_and_materials_10000/mesh.obj'
Progress: 100.00%

(  0.0002 sec |   0.0000 sec) OBJ Export path: './exp_iron_stage2/superman/mesh_and_materials_10000/mesh.obj'
      ( 93.8459 sec |  93.8253 sec) Finished writing geometry of 'mesh'.
  ( 93.8462 sec |  93.8459 sec) Finished exporting geometry, now exporting materials
  ( 93.8462 sec |  93.8460 sec) OBJ Export Finished

Progress: 100.00%

Error: File format is not supported in file '/IRON/exp_iron_stage2/superman/mesh_and_materials_10000/mesh.obj'

Blender quit
ic| 'Exporting materials...'
Warning: readOBJ() ignored non-comment line 3:
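
(A hedged observation: since the log shows both the import and the OBJ export finishing successfully, the final "File format is not supported" error looks like Blender attempting to open mesh.obj as a .blend file, e.g. because the path is also passed as a positional command-line argument. If so, the error may be harmless and the exported mesh.obj still valid.)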

FileNotFoundError: [Errno 2] No such file or directory: './inverse_renderer.py'

Thank you for your sharing!
I cloned your repo and downloaded the dataset. When I run

bash train_scene.sh dataset/luan/buddha

there is an error because the repo does not contain inverse_renderer.py. Besides, there is an AssertionError.

Hello Wooden
Load data: Begin
Not using masks
image shape, mask shape: torch.Size([80, 768, 1024, 3]) torch.Size([80, 768, 1024, 3])
image pixel range: 0.0 1.0
Load data: End
0%| | 0/100001 [00:00<?, ?it/s]
Traceback (most recent call last):
File "render_volume.py", line 449, in
runner.train()
File "render_volume.py", line 156, in train
self.optimizer.step()
File "/home/hyx/anaconda3/envs/iron/lib/python3.8/site-packages/torch/optim/optimizer.py", line 109, in wrapper
return func(*args, **kwargs)
File "/home/hyx/anaconda3/envs/iron/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/hyx/anaconda3/envs/iron/lib/python3.8/site-packages/torch/optim/adam.py", line 157, in step
adam(params_with_grad,
File "/home/hyx/anaconda3/envs/iron/lib/python3.8/site-packages/torch/optim/adam.py", line 213, in adam
func(params,
File "/home/hyx/anaconda3/envs/iron/lib/python3.8/site-packages/torch/optim/adam.py", line 255, in _single_tensor_adam
assert not step_t.is_cuda, "If capturable=False, state_steps should not be CUDA tensors."
AssertionError: If capturable=False, state_steps should not be CUDA tensors.
ic| args: Namespace(data_dir='./dataset/dataset/luan/buddha/train', eik_weight=0.1, export_all=False, gamma_pred=True, init_light_scale=8.0, inv_gamma_gt=False, is_metal=False, neus_ckpt_fpath='./exp_iron_stage1/dataset/luan/buddha/checkpoints/ckpt_100000.pth', no_edgesample=False, num_iters=50001, out_dir='./exp_iron_stage2/dataset/luan/buddha', patch_size=128, plot_image_name=None, render_all=False, roughrange_weight=0.1, ssim_weight=1.0)
Wrote config file to ./exp_iron_stage2/dataset/luan/buddha/args.txt
Traceback (most recent call last):
File "render_surface.py", line 73, in
shutil.copy2("./inverse_renderer.py", os.path.join(args.out_dir, "code/inverse_renderer.py"))
File "/home/hyx/anaconda3/envs/iron/lib/python3.8/shutil.py", line 435, in copy2
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "/home/hyx/anaconda3/envs/iron/lib/python3.8/shutil.py", line 264, in copyfile
with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
FileNotFoundError: [Errno 2] No such file or directory: './inverse_renderer.py'
ic| args: Namespace(data_dir='./dataset/dataset/luan/buddha/test', eik_weight=0.1, export_all=False, gamma_pred=True, init_light_scale=8.0, inv_gamma_gt=False, is_metal=False, neus_ckpt_fpath='./exp_iron_stage1/dataset/luan/buddha/checkpoints/ckpt_100000.pth', no_edgesample=False, num_iters=50001, out_dir='./exp_iron_stage2/dataset/luan/buddha', patch_size=128, plot_image_name=None, render_all=True, roughrange_weight=0.1, ssim_weight=1.0)
Wrote config file to ./exp_iron_stage2/dataset/luan/buddha/args.txt
Traceback (most recent call last):
File "render_surface.py", line 73, in
shutil.copy2("./inverse_renderer.py", os.path.join(args.out_dir, "code/inverse_renderer.py"))
File "/home/hyx/anaconda3/envs/iron/lib/python3.8/shutil.py", line 435, in copy2
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "/home/hyx/anaconda3/envs/iron/lib/python3.8/shutil.py", line 264, in copyfile
with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
FileNotFoundError: [Errno 2] No such file or directory: './inverse_renderer.py'

Could you please fix these bugs? Thank you for your time!
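
(The AssertionError is a known problem with PyTorch 1.12.0's Adam when optimizer state is restored from a checkpoint; upgrading to torch >= 1.12.1 is the cleaner fix. A hedged workaround sketch, with a stand-in model in place of the real networks:)

import torch

model = torch.nn.Linear(4, 4)                      # stand-in for the real networks
optimizer = torch.optim.Adam(model.parameters())   # assume its state_dict was restored here
# Workaround for the PyTorch 1.12.0 Adam assertion when resuming from a checkpoint:
for group in optimizer.param_groups:
    group["capturable"] = True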

Low level of details for synthetic dragon

Hello!
Thank you for the great work!
Right now I'm trying to reproduce, with your code, the reconstruction of the dragon from 200 frames rendered by Mitsuba at 512x512 resolution. Unfortunately, after 50000 steps of the second stage, I realized that many details on the mesh are lost (compared to the pictures in the paper).
[three screenshots of the reconstructed dragon]

Could you please tell me whether the womask_iron.conf config file is the right one for training while preserving small details? Which parameters might be tuned for better quality? Should gamma correction be applied to the EXR files generated by the Mitsuba renderer before feeding them into the networks? How does input resolution affect shape precision?
Thanks!
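
(On the gamma question: EXR files store linear radiance, so if they are converted to 8-bit LDR inputs, a gamma encode is the usual step. A minimal sketch with hypothetical file names, not a statement of what IRON's loader expects:)

import numpy as np
import imageio.v2 as imageio

img = imageio.imread("render.exr")             # linear HDR values (needs an EXR-capable plugin)
ldr = np.clip(img, 0.0, 1.0) ** (1.0 / 2.2)    # simple gamma encode to display space
imageio.imwrite("render.png", (ldr * 255).astype(np.uint8))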

About GGX Renderer

Hi,
I read the code and am confused about the BRDF implemented. Why is diffuse_rgb computed like this:

diffuse_rgb = light_intensity * (diffuse_albedo / (1.0 - Fdr + 1e-10) / np.pi) * dot * T12 * T21 * m_invEta2
rather than light_intensity * (diffuse_albedo / np.pi) * dot? Could you explain the meaning of the other variables you used (Fdr, T12, T21, m_invEta2)? If the light and camera are not co-located, how do we get T12 and T21?
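
(For readers with the same question: this diffuse term resembles the smooth plastic BSDF in Mitsuba. Under that reading, T12 and T21 are the Fresnel transmittances into and out of a dielectric coating, Fdr is the average internal Fresnel reflectance that renormalizes light bouncing inside the layer, and m_invEta2 = 1 / eta ** 2 accounts for the solid-angle compression on refraction, giving roughly f_diffuse = (diffuse_albedo / pi) * T12 * T21 * m_invEta2 / (1 - Fdr). This is an inference from the variable names, not confirmed by the author.)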

Exporting materials

I get the same issue as Issue #10:
an error, "index 0 is out of bounds for axis 1 with size 0", occurred while exporting the material:
vertices, face_vertices, texturecoords, face_texturecoords = loadmesh_and_checkuv(mesh_fpath, out_dir)
In this call, texturecoords and face_texturecoords both come back with shape (0, 0). Why?

I load mesh_fpath using your code:
import igl
vertices, texturecoords, _, face_vertices, face_texturecoords, _ = igl.read_obj('./exp_iron_stage2/xmen/mesh_and_materials_50000/mesh.obj', dtype="float32")
print(vertices.shape, texturecoords.shape, face_vertices.shape, face_texturecoords.shape)

here is the result:
(1432877, 3) (0, 0) (2865856, 3) (0, 0)
Do you know why?

Traceback (most recent call last):
File "render_surface.py", line 549, in
export_mesh_and_materials(export_out_dir, sdf_network, color_network_dict)
File "render_surface.py", line 347, in export_mesh_and_materials
export_materials(os.path.join(export_out_dir, "mesh.obj"), material_predictor, export_out_dir)
File "/home/ecoplants/hdd/changfa/PBR/iron/models/export_materials.py", line 168, in export_materials
vertices, face_vertices, texturecoords, face_texturecoords = loadmesh_and_checkuv(mesh_fpath, out_dir)
File "/home/ecoplants/hdd/changfa/PBR/iron/models/export_materials.py", line 153, in loadmesh_and_checkuv
pcd, pcd_uv = sample_surface(vertices, face_vertices, texturecoords, face_texturecoords, n_samples=10**6)
File "/home/ecoplants/hdd/changfa/PBR/iron/models/export_materials.py", line 51, in sample_surface
A = texturecoords[face_texturecoords[sample_face_idx, 0], :]
IndexError: index 0 is out of bounds for axis 1 with size 0
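
A quick way to confirm the diagnosis is to check whether the exported OBJ contains any vt (texture coordinate) records at all; a minimal sketch:

# Count 'vt' records in the OBJ; zero means UVs were never written,
# which is exactly what texturecoords.shape == (0, 0) indicates.
with open("./exp_iron_stage2/xmen/mesh_and_materials_50000/mesh.obj") as f:
    n_vt = sum(1 for line in f if line.startswith("vt "))
print("vt count:", n_vt)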

Test data

Hi, thanks for your great work! Could you share your data via Google Drive or Baidu Yun?
