
cuiziteng / aleth-nerf


[AAAI 2024] Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption (Low-light enhance / Exposure correction + NeRF)

License: Apache License 2.0

C++ 4.15% Cuda 25.74% CMake 0.42% Python 69.27% Shell 0.42%
aaai2024 computational-photography exposure-correction low-light-image-enhancement low-light-vision nerf

aleth-nerf's People

Contributors

cuiziteng, jeongyw12382, mishig25, seungjooshin


aleth-nerf's Issues

Training Time

How many GPUs do you use for training, and how many hours does it take for the model to converge?

Depth Image

Thank you for the outstanding work. I've observed that the depth images in the log directory are in grayscale, whereas those in the paper are colorful. Could you please provide the code or offer some suggestions on how to generate these colorful depth images?
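
In case it helps (this is not the authors' code, just one possible approach): a grayscale depth render can be mapped to a color image with a matplotlib colormap. The path "depth.png" below is a hypothetical stand-in for one of the depth images in the log directory.

import imageio.v2 as imageio
import numpy as np
from matplotlib import cm

depth = imageio.imread("depth.png").astype(np.float32)  # hypothetical path to a grayscale depth render
if depth.ndim == 3:
    depth = depth[..., 0]                                # drop extra channels if saved as RGB grayscale
depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)  # normalize to [0, 1]
colored = (cm.get_cmap("viridis")(depth)[..., :3] * 255).astype(np.uint8)
imageio.imwrite("depth_color.png", colored)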

Spiral Path for the video

Hi, nice work!

I wanted to find the code that computes the camera poses for rendering the video. I went through the code but was unable to locate it. Could you please point me to the section where the poses for the video-rendering path are calculated?
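
For reference only (this is not the repository's implementation, and all names below are made up): a standard NeRF-style spiral path is usually generated by circling the camera around a reference pose while looking at a fixed focus point, roughly like this:

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def look_at(eye, target, up):
    # build a 3x4 camera-to-world matrix that looks from eye toward target
    z = normalize(eye - target)
    x = normalize(np.cross(up, z))
    y = np.cross(z, x)
    return np.stack([x, y, z, eye], axis=1)

def spiral_poses(center, up, radii, focus_depth, n_frames=120):
    # hypothetical helper: one full turn of a spiral around `center`
    poses = []
    for t in np.linspace(0.0, 2.0 * np.pi, n_frames, endpoint=False):
        eye = center + radii * np.array([np.cos(t), -np.sin(t), -np.sin(0.5 * t)])
        target = center + np.array([0.0, 0.0, -focus_depth])
        poses.append(look_at(eye, target, up))
    return np.stack(poses)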

About color deviation between test results and GT

Thank you for your great work.

I am trying to replicate your project, and I found that there is a color shift between the rendered images and the results presented in your paper. The results in the paper look great. I am wondering whether you obtained those results using the hyperparameters specified in the configs directory.

Thanks.

Question about geometry.

Hi there,
I was wondering: if the model can recover normal radiance (light) from a low-light condition, is the geometry also reconstructed well?

RuntimeError: number of dims don't match in permute

Thank you very much for your work.
When I run
CUDA_VISIBLE_DEVICES=0 python3 run.py --ginc configs/LOM/aleth_nerf/aleth_nerf_buu.gin --eta 0.1
I get the error: RuntimeError: number of dims don't match in permute
Printed output:
the scene name is: buu
the log dir is: ./logs/aleth_nerf_blender_buu_220901eta0.1
Global seed set to 220901
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:613: UserWarning: Checkpoint directory /home/zenglongjian/Aelth-NeRF/logs/aleth_nerf_blender_buu_220901eta0.1 exists and is not empty.
rank_zero_warn(f"Checkpoint directory {dirpath} exists and is not empty.")
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

| Name | Type | Params

0 | model | Aleth_NeRF | 1.3 M

1.3 M Trainable params
0 Non-trainable params
1.3 M Total params
5.296 Total estimated model params size (MB)
Epoch 0: 100%|█| 12523/12523 [59:02<00:00, 3.54it/s, loss=0.000127, v_num=0, train/psnr1=43.70, train/psnr
Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to /home/zenglongjian/.cache/torch/hub/checkpoints/vgg16-397923af.pth
100%|███████████████████████████████████████████████████████████████████| 528M/528M [01:02<00:00, 8.81MB/s]
Downloading: "https://github.com/richzhang/PerceptualSimilarity/raw/master/lpips/weights/v0.1/vgg.pth" to /home/zenglongjian/.cache/torch/hub/checkpoints/vgg.pth█████████████████▉| 527M/528M [01:02<00:00, 7.53MB/s]
100%|█████████████████████████████████████████████████████████████████| 7.12k/7.12k [00:00<00:00, 3.37MB/s]
Epoch 9: 100%|█| 12523/12523 [58:35<00:00, 3.56it/s, loss=9.05e-05, v_num=0, train/psnr1=44.00, train/psnr
Trainer.fit stopped: max_steps=125000 reached.
Epoch 9: 100%|█| 12523/12523 [58:35<00:00, 3.56it/s, loss=9.05e-05, v_num=0, train/psnr1=44.00, train/psnr
the checkpoint path is: ./logs/aleth_nerf_blender_buu_220901eta0.1/last.ckpt
Restoring states from the checkpoint path at ./logs/aleth_nerf_blender_buu_220901eta0.1/last.ckpt
Lightning automatically upgraded your loaded checkpoint from v1.7.6 to v1.9.5. To apply the upgrade to yourles permanently, run python -m pytorch_lightning.utilities.upgrade_checkpoint --file logs/aleth_nerf_blendbuu_220901eta0.1/last.ckpt
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Loaded model weights from checkpoint at ./logs/aleth_nerf_blender_buu_220901eta0.1/last.ckpt
Testing DataLoader 0: 100%|███████████████████████████████████████████████████| 69/69 [00:15<00:00, 4.44it
Traceback (most recent call last):
File "run.py", line 244, in
run(
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/gin/config.py", line 1605, inn_wrapper
utils.augment_exception_message_and_reraise(e, err_str)
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/gin/utils.py", line 41, in aunt_exception_message_and_reraise
raise proxy.with_traceback(exception.traceback) from None
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/gin/config.py", line 1582, inn_wrapper
return fn(*new_args, **new_kwargs)
File "run.py", line 191, in run
trainer.test(model, data_module, ckpt_path=ckpt_path)
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trar.py", line 794, in test
return call._call_and_handle_interrupt(
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/caly", line 38, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trar.py", line 842, in _test_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trar.py", line 1112, in _run
results = self._run_stage()
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trar.py", line 1188, in _run_stage
return self._run_evaluate()
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trar.py", line 1228, in _run_evaluate
eval_loop_results = self._evaluation_loop.run()
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/loops/loop., line 206, in run
output = self.on_run_end()
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/loops/dataler/evaluation_loop.py", line 180, in on_run_end
self._evaluation_epoch_end(self._outputs)
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/loops/dataler/evaluation_loop.py", line 288, in _evaluation_epoch_end
self.trainer._call_lightning_module_hook(hook_name, output_or_outputs)
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trar.py", line 1356, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/home/zenglongjian/Aelth-NeRF/src/model/aleth_nerf/model.py", line 424, in test_epoch_end
darknesss = self.alter_gather_cat_conceil(outputs, "darkness", all_image_sizes)
File "/home/zenglongjian/Aelth-NeRF/src/model/interface.py", line 52, in alter_gather_cat_conceil
all = all.permute((1, 0, 2)).flatten(0, 1)
RuntimeError: number of dims don't match in permute
In call to configurable 'run' (<function run at 0x7efb2cf63550>)
Testing DataLoader 0: 100%|██████████| 69/69 [00:16<00:00, 4.31it/s]
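
A guess rather than an official fix: the crash suggests that the gathered darkness tensor does not have the three dimensions that alter_gather_cat_conceil expects before the permute. A defensive workaround (hypothetical, around the failing line in src/model/interface.py) could be:

if all.dim() == 3:
    all = all.permute((1, 0, 2)).flatten(0, 1)   # original path for 3-D gathered outputs
else:
    # tensor is already flat, e.g. (num_rays, channels); skip the multi-process reshape
    all = all.reshape(-1, all.shape[-1])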

How to make the GIF on your website

I just want to know how you made the GIF on your website. Is it made with images rendered from all training viewpoints, or from custom viewpoints? Is there any detailed code for this?
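
Not the authors' tooling, but one simple way to assemble rendered frames into a GIF (the input path below is hypothetical):

import glob
import imageio.v2 as imageio

frames = [imageio.imread(p) for p in sorted(glob.glob("path/to/rendered_frames/*.png"))]
imageio.mimsave("scene.gif", frames)  # uses imageio's default frame duration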

render 3D

Hello, very nice work! I have a question. After testing the model we get three image outputs. Can you tell me how to render the 3D motion video? Thanks!

About single image low-light enhancement

First of all, thank you very much for your contribution. You stated in the paper that this method also works well for low-light enhancement of a single image, thanks to your prior settings. I would like to ask whether single-image enhancement code is provided.

Using the NeRF in this paper to do single-image low-light enhancement is a bit tedious, because the camera pose and other parameters have to be fixed. Have you ever tried using your prior for low-light enhancement of a single image?

about environmental question

Hi, thank you for your great work. However, the following problem occurred when I configured the environment according to the steps. Could you please tell me how to solve it?
[attached screenshot of the error]

About Eq 9 in the paper

Great work, but I have a question. Is the implementation of volume rendering in the code consistent with Eq. 9 in the paper? From my understanding, the code directly multiplies the local concealing field (darkness) output by the network with the transmittance (trans). I interpret this to mean that, in the code, the trans of a point is only influenced by the concealing field at the current position, without considering the concealing field before this point. As I'm relatively new to NeRF, my understanding of the paper may not be thorough enough. Could you please address this issue?
In your code:
weights_dark = alpha * accum_prod_dark
comp_rgb_dark = (weights_dark[..., None] * rgb * darkness).sum(dim=-2)
acc_dark = weights_dark.sum(dim=-1)
In the paper:
[screenshot of Eq. 9 from the paper]
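
For context, here is a toy sketch of the two readings being contrasted (shapes and names are assumed, not the repository's exact code): the first applies darkness only at the current sample on top of the plain NeRF transmittance, while the second also accumulates the concealing field of all previous samples.

import torch

alpha = torch.rand(4, 8)      # (num_rays, num_samples), assumed shapes
darkness = torch.rand(4, 8)   # local concealing field at each sample

# reading (a): plain NeRF transmittance, darkness applied only at the current sample
trans_plain = torch.cumprod(
    torch.cat([torch.ones_like(alpha[..., :1]), (1.0 - alpha)[..., :-1]], dim=-1), dim=-1)
weights_local = alpha * trans_plain * darkness

# reading (b): transmittance also accumulates the concealing field of previous samples
terms = (1.0 - alpha) * darkness
trans_accum = torch.cumprod(
    torch.cat([torch.ones_like(terms[..., :1]), terms[..., :-1]], dim=-1), dim=-1)
weights_accum = alpha * trans_accum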

RestoreDet

Hello, could you please share the source code of the RestoreDet paper?

model confusion

Hi, thank you for your great work. But I'm confused about these model files; could you explain them, please?
Thanks.
[attached screenshot]

data type

How can we use nerfstudio's data in this project?

How to create a custom dataset

For a dataset I captured myself, how do I obtain the separate JSON files such as transform_test.json? After running COLMAP to get the sparse reconstruction, how do I produce the other JSON files used in your dataset?
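
Not the authors' pipeline, but one common route (all paths and split ratios below are assumptions): first convert the COLMAP output into a single Blender-style transforms.json (for example with instant-ngp's colmap2nerf.py script), then split its frames into the per-split files:

import json

with open("transforms.json") as f:   # hypothetical output of a COLMAP-to-NeRF conversion
    meta = json.load(f)

frames = meta["frames"]
splits = {"train": frames[::2], "val": frames[1::4], "test": frames[3::4]}
for name, subset in splits.items():
    out = dict(meta, frames=subset)
    with open(f"transforms_{name}.json", "w") as f:
        json.dump(out, f, indent=2)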

Gradient computation for omega

I noticed that the transmittance of a point is related to the alpha and omega of all the points before it. But when computing the gradient of the loss function with respect to omega, the gradient with respect to the omega of the last point on a ray should not be available, right? Because the final color of the ray is calculated without the omega of the last point.

I noticed that the initial dimension of omega in your code is the number of sample points plus 1. I can't figure out how to handle the fact that the loss function cannot provide a gradient for the omega of the last point on the ray.
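
A toy autograd check of this point (the transmittance formula and shapes here are my own assumptions, not the repository's exact code):

import torch

alpha = torch.rand(8)                      # opacities of 8 samples along one ray
omega = torch.rand(9, requires_grad=True)  # concealing field with N + 1 entries
rgb = torch.rand(8)

# transmittance of sample i only uses samples j < i
terms = (1.0 - alpha) * omega[:8]
trans = torch.cumprod(torch.cat([torch.ones(1), terms[:-1]]), dim=0)
color = (alpha * trans * rgb).sum()
color.backward()
print(omega.grad)  # the trailing entries receive zero gradient from the color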
